Contents

Chapter 8: Advances in Signal and Image Processing in Biomedical Applications
by Mathiyalagan Palaniappan and Manikandan Annamalai

Chapter 9: Phase-Stretch Adaptive Gradient-Field Extractor (PAGE)
by Madhuri Suthar and Bahram Jalali

Section 3: Image Compression

Chapter 10: Many-Core Algorithm of the Embedded Zerotree Wavelet Encoder
by Jesús Antonio Alvarez-Cedillo, Teodoro Alvarez-Sanchez, Mario Aguilar-Fernandez and Jacobo Sandoval-Gutierrez

Chapter 11: On the Application of Dictionary Learning to Image Compression
by Ali Akbari and Maria Trocan

Chapter 12: The DICOM Image Compression and Patient Data Integration using Run Length and Huffman Encoder
by Trupti N. Baraskar and Vijay R. Mankar

Preface
The area of coding theory comprises techniques that enable the reliable delivery of digital data over unreliable communication channels. Encoding protects the data with structured redundancy, while decoding uses that redundancy to reconstruct the original data in many cases. These techniques have applications in a variety of fields, including computer science and telecommunications, and they also enrich the areas of information theory and error detection, with many other real-life applications.

The chapters in this comprehensive reference cover the latest developments, methods, approaches, and applications of coding theory in a wide variety of fields and endeavors. The book was compiled to provide researchers, academicians, and readers with an in-depth discussion of the latest advances. It consists of twelve chapters by academicians, practitioners, and researchers from different disciplines.

The target audience of this book is professionals and researchers working in the field of coding theory in various disciplines, e.g., computer science, information technology, information and communication sciences, education, health, and library science. The book is also targeted at information engineers, scientists, practitioners, academicians, and related industry professionals.

Elsanadily begins the book with a discussion of generalized low-density parity-check codes, their construction, and their decoding algorithms. Scientists have competed to find capacity-approaching codes that can be decoded with optimal yet feasible decoding algorithms, and generalized LDPC (low-density parity-check) codes were found to compare well with such codes. LDPC codes are well treated with both types of decoding: hard-decision decoding (HDD) and soft-decision decoding (SDD). However, the author feels that the iterative decoding of generalized LDPC (GLDPC) codes on both additive white Gaussian noise (AWGN) and binary symmetric channel (BSC) channels needs further investigation. This chapter first describes the construction of GLDPC codes and then surveys their iterative decoding algorithms on the BSC and AWGN channels to date. The soft-input soft-output (SISO) decoders used for the component codes of GLDPC codes show very good error performance at moderate and high code rates; however, the complexities of such decoding algorithms are very high. When Gallager's bit-flipping (BF) algorithms, as HDD, were applied to LDPC codes for their simplicity and speed, their performance was found to be far from the capacity of the BSC, so using LDPC codes in optical systems with such algorithms is inefficient. GLDPC codes can be introduced as a good alternative to LDPC codes, as their performance under the BF algorithm can be improved and the observed error floor can be lowered or even removed; GLDPC codes would then be a competitive choice for optical communications. This chapter discusses the iterative HDD algorithms that improve the decoding performance and error-floor behavior of GLDPC codes. It also describes SDD algorithms that maintain this performance while reducing decoding complexity.
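For intuition about the bit-flipping family of hard-decision decoders mentioned above, the basic loop can be sketched in a few lines. This is a toy illustration, not the chapter's GLDPC decoder: the (7,4) Hamming parity-check matrix below merely stands in for a sparse parity-check matrix.

```python
import numpy as np

def bit_flip_decode(H: np.ndarray, y: np.ndarray, max_iter: int = 50) -> np.ndarray:
    """Hard-decision bit flipping in the spirit of Gallager: while some parity
    checks fail, flip the bit involved in the most unsatisfied checks."""
    x = y.copy()
    for _ in range(max_iter):
        syndrome = H @ x % 2              # 1 marks an unsatisfied parity check
        if not syndrome.any():
            break                          # all checks satisfied: x is a codeword
        votes = H.T @ syndrome             # unsatisfied-check count per bit
        x[int(np.argmax(votes))] ^= 1      # flip the worst offender
    return x

# Toy stand-in for a sparse parity-check matrix: the (7,4) Hamming code
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
received = np.zeros(7, dtype=int)
received[2] ^= 1                           # all-zero codeword with one bit error
decoded = bit_flip_decode(H, received)
assert (H @ decoded % 2 == 0).all()        # the single error has been corrected
```

Plain bit flipping like this can stall or oscillate on harder error patterns, which is one motivation for the improved iterative HDD schedules the chapter surveys.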

This is followed by "Polynomials in Error Detection and Correction in Data Communication System," contributed by Panem et al. This chapter describes the different types of errors encountered over the channels of a data communication system and focuses on the role of polynomials in implementing algorithms for error detection and correction codes. It discusses error-detection codes such as the simple parity check, the two-dimensional parity check, the checksum, and the cyclic redundancy check (CRC), as well as error-correction codes such as Hamming codes, Bose-Chaudhuri-Hocquenghem (BCH) codes, Golay codes, Reed-Solomon (RS) codes, LDPC codes, and trellis and turbo codes. It also gives an overview of the architecture and implementation of these codes and discusses their applications in various systems.
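To make the polynomial view concrete: a cyclic redundancy check is simply the remainder of polynomial division over GF(2). The sketch below assumes the common CRC-8 generator x^8 + x^2 + x + 1 (written 0x07 with the leading term implicit) purely for illustration; it is not taken from the chapter.

```python
def crc8(data: bytes, poly: int = 0x07) -> int:
    """Compute a CRC-8 by bitwise polynomial division over GF(2).
    poly encodes x^8 + x^2 + x + 1 (the leading x^8 term is implicit)."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:                      # leading bit set: subtract (XOR) divisor
                crc = ((crc << 1) ^ poly) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc

msg = b"coding theory"
check = crc8(msg)
# Appending the check byte makes the whole message divide evenly by the
# generator polynomial, which is exactly the test the receiver performs:
assert crc8(msg + bytes([check])) == 0
```

Any single-bit corruption changes the remainder, so the receiver's division no longer comes out to zero and the error is detected.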


Sarkar and Majhi, in Chapter 3 of the book, follow with "A Direct Construction of Intergroup Complementary Code Set for Code-Division Multiple Access (CDMA)." They present a direct construction of an intergroup complementary (IGC) code set using second-order generalized Boolean functions (GBFs). Their IGC code set can support interference-free code-division multiplexing. They also illustrate the construction with a graph, where the width of the zero-correlation zone (ZCZ) depends on the number of isolated vertices remaining after the deletion of some vertices. The proposed construction can generate IGC code sets with more flexible parameters.

Additive codes were first introduced by Delsarte in 1973 as subgroups of the underlying abelian group in a translation association scheme. When the association scheme is the Hamming scheme, that is, when the underlying abelian group is of order 2<sup>n</sup>, the additive codes are of the form Z<sub>2</sub><sup>α</sup> × Z<sub>4</sub><sup>β</sup> with α + 2β = n. In 2010, Borges et al. introduced Z<sub>2</sub>Z<sub>4</sub>-additive codes, which they defined as subgroups of Z<sub>2</sub><sup>α</sup> × Z<sub>4</sub><sup>β</sup>. Chapter 4, "Z<sub>2</sub>Z<sub>2</sub>[u]-Linear and Z<sub>2</sub>Z<sub>2</sub>[u]-Cyclic Codes" by Aydogdu, aims to introduce Z<sub>2</sub>Z<sub>2</sub>[u]-linear and Z<sub>2</sub>Z<sub>2</sub>[u]-cyclic codes, where Z<sub>2</sub> = {0, 1} is the binary field and Z<sub>2</sub>[u] = {0, 1, u, 1 + u} is the ring with four elements in which u<sup>2</sup> = 0. The chapter provides the standard forms of the generator and parity-check matrices of Z<sub>2</sub>Z<sub>2</sub>[u]-linear codes and determines the generator polynomials of Z<sub>2</sub>Z<sub>2</sub>[u]-cyclic codes. Examples of both families of codes are also presented.
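To make the four-element ring concrete, its arithmetic can be coded directly. The encoding of elements as coefficient pairs below is an illustrative choice, not the chapter's notation:

```python
# Elements of the four-element ring R = {0, 1, u, 1 + u}, encoded as pairs
# (a, b) meaning a + b*u with a, b in {0, 1} and the defining relation u^2 = 0.
def ring_add(x, y):
    """Coefficient-wise addition modulo 2."""
    return (x[0] ^ y[0], x[1] ^ y[1])

def ring_mul(x, y):
    """(a + bu)(c + du) = ac + (ad + bc)u + bd*u^2; the u^2 term vanishes."""
    (a, b), (c, d) = x, y
    return ((a * c) % 2, (a * d + b * c) % 2)

u, one_plus_u = (0, 1), (1, 1)
assert ring_mul(u, u) == (0, 0)                    # u^2 = 0: the defining relation
assert ring_mul(one_plus_u, one_plus_u) == (1, 0)  # (1 + u)^2 = 1, so 1 + u is a unit
```

Note that u is a zero divisor (u · u = 0), so this ring is not a field, which is what distinguishes Z<sub>2</sub>[u]-linear codes from ordinary linear codes over a field.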

Today's cellular networks play an important role in daily communications. They integrate a wide variety of wireless multimedia services at high data transmission rates and are capable of providing much more than basic voice calls. Chapter 5, "The Adaptive Coding Techniques for Dependable Medical Network Channel" by Talha and Kohno, is motivated by the demand for an economical, reliable medical network infrastructure and for dependable medical transmission over cellular networks. The chapter describes a dependable wireless medical network that uses an existing mobile cellular network together with sophisticated channel coding technologies. It describes the novel way in which the network is adapted into a "Medical Network Channel (MNC)" system: adding adaptive outer coding on top of an existing cellular standard, taken as the inner code, forms a concatenated channel that realizes the MNC design. The adaptive design of the extra outer channel codes depends on the quality-of-service (QoS) requirements of wireless body area networks (WBANs) and on the residual errors left by the inner cellular decoders. The extra adaptive code has been optimized for the MNC at different QoS priority levels of medical data, and the chapter investigates, through theoretical derivations, how the QoS constraints for different WBAN medical data are met. Acceptable results have been achieved.

Error correction codes are very important for detecting and correcting errors from various noise sources. As technology scales down, the effect of these noise sources grows, and coupling capacitance becomes one of the main constraints on the performance of on-chip interconnects. Motivated by this, Chapter 6, "Combined Crosstalk Avoidance Code with Error Control Code for Detection and Correction of Random and Burst Errors" by Kummary et al., addresses crosstalk on on-chip interconnecting wires. To control single or multiple errors, an efficient error correction code is required. By combining crosstalk avoidance with an error control code, reliable intercommunication is obtained in a network-on-chip (NoC)-based system-on-chip (SoC). Moreover, to reduce the power consumption of the error control codes, the authors integrate a bus-invert-based low-power code into the network interface of the NoC. The proposed design is implemented with Xilinx 14.7, and the performance of the improved NoC is evaluated and compared with existing work. An 8×8 mesh-based NoC is simulated under various traffic patterns to analyze energy dissipation and average data-packet latency.

Computer vision is an important part of computer science, derived from digital image processing. The last three decades have witnessed significant growth of applications in remote sensing, super-resolution imaging, image analytics, scalable image and video coding, biomedical imaging, and automatic surveillance, driven by the explosion of internet use. Image processing is a set of techniques for enhancing or modifying raw images received from different sources, such as satellites, biomedical instruments, and pictures taken in day-to-day life, for various applications. Image processing systems are becoming popular due to their many applications, including a) remote sensing, b) medical imaging, c) forensic studies, d) textiles, e) mineral science, f) the film industry, and g) document processing. Information processing plays a vital role in the modern era, and the demand for multimedia applications has increased enormously. Like many other recent developments, the tremendous growth of image and video processing is due to contributions from several areas: good network access, the easy availability of powerful personal computers, devices with large memory capacities, the availability of graphics software, faster processors on the market, and the evolution of good signal processing algorithms.

Chapter 7, by Satyarth Praveen, describes the stereo vision system. Normally a single scene is recorded from two different viewing angles, and depth is estimated from the measured parallax. This field has led many researchers and mathematicians to devise novel algorithms for accurate output from stereo systems. The chapter gives a complete overview of the stereo system and discusses efficient estimation of object depth. It emphasizes that, if properly linked with other image perception techniques, stereo depth estimation can be made more efficient than current techniques. The idea revolves around the fact that stereo depth estimation is not necessary for all pixels of the image; this opens room for more complex and accurate depth estimation techniques applied to the few regions of interest in the image scene.
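The geometry behind depth from stereo can be summarized in one formula: for a rectified stereo pair with focal length f (in pixels) and baseline B, a pixel disparity d corresponds to depth Z = f·B/d. A minimal sketch follows; the numbers are illustrative assumptions, not values from the chapter.

```python
def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point seen by a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (zero disparity means infinite depth)")
    return f_px * baseline_m / disparity_px

# Illustrative numbers: 700-pixel focal length, 12 cm baseline, 20-pixel disparity
z = depth_from_disparity(700.0, 0.12, 20.0)
print(f"estimated depth: {z:.2f} m")  # 700 * 0.12 / 20 = 4.20 m
```

The inverse relation between disparity and depth is why accurate matching matters most for distant objects, where a one-pixel disparity error translates into a large depth error.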

The human body is a biological machine, and the information from every biological activity is important for monitoring human activity. This information can be collected using physiological instruments that measure heartbeat, blood pressure, oxygen saturation, blood glucose, nerve conduction, brain activity, and so on. These signals are called biological signals, and their study is called biosignal processing. It gives doctors continuous, clinically important information to help them make better clinical evaluations, and it leads to a field called bioinformatics, where biomedical engineers and doctors can work together for human wellbeing. Chapter 8 by Mathiyalagan Palaniappan and Manikandan Annamalai gives a comprehensive presentation on bioinformatics and will be useful to researchers working in this area.


Edge detection and the extraction of semantic information from medical images, electron microscopy images of semiconductor circuits, optical characters, and fingerprint images play an important role in image understanding and image analysis. Chapter 9 by Madhuri Suthar and Bahram Jalali deals with an edge detection algorithm called the Phase-Stretch Adaptive Gradient-Field Extractor (PAGE). This is a new engineering method inspired by the physical phenomenon of birefringence in an optical system. The method controls the diffractive properties of the simulated medium as a function of spatial location and channelized frequency.
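For readers new to the topic, a plain finite-difference gradient detector shows what an edge map is. This is deliberately not PAGE, only the elementary baseline that physics-inspired methods like PAGE improve upon; the image and threshold are made up for illustration.

```python
import numpy as np

def gradient_edges(img: np.ndarray, thresh: float) -> np.ndarray:
    """Mark edges where the finite-difference gradient magnitude exceeds thresh.
    A baseline gradient detector for intuition, NOT the PAGE algorithm."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) > thresh

# Synthetic image: dark left half, bright right half -> one vertical edge
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = gradient_edges(img, 0.25)
assert edges[:, 3:5].all() and not edges[:, :3].any()
```

Simple gradients respond only to local intensity steps; PAGE's appeal is that it additionally resolves edge orientation and frequency channels, which a scalar gradient magnitude cannot.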

Image and video data compression refers to a process in which the amount of data used to represent an image or video is reduced to meet a bit-rate (coding-rate) requirement (below, or at most equal to, the maximum available bit rate), while the quality of the reconstructed image or video satisfies the requirements of the application and the computational complexity involved remains affordable. The required quality of the reconstructed image or video is application dependent. In still image compression, a certain amount of information loss is allowed; this is called lossy compression. Normally a transform-based compression scheme removes interpixel correlation. The recent growth of data-intensive multimedia web applications has not only sustained the need for more efficient ways to encode signals and images but has made compression of such signals central to storage and communication technology. The first recommended international coding standard was that of the Joint Photographic Experts Group (JPEG), which relies on the 8×8 block-based discrete cosine transform (DCT). The limitation of the JPEG standard comes from the introduction of blocking artifacts. A new standard, JPEG 2000 (using wavelets), was introduced thanks to advances in embedded quantization schemes and new transforms. The current problem is the selection of an image compression algorithm based on compression-ratio criteria, while the quality of the reconstructed images depends on the technology used. Chapter 10, written by Jesús Antonio Alvarez-Cedillo, Teodoro Alvarez-Sanchez, Mario Aguilar-Fernandez, and Jacobo Sandoval-Gutierrez, presents the development of a novel parallel algorithm based on the embedded zerotree wavelet coding scheme, in which the programs apply parallelism techniques to be implemented and executed on the many-core Epiphany III, a low-cost embedded system.
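The 8×8 DCT at the heart of baseline JPEG can be sketched directly. The code below (assuming NumPy) builds the orthonormal DCT-II matrix and shows that the transform itself is lossless; information is discarded only at the quantization step, whose step size is an arbitrary illustrative value here.

```python
import numpy as np

def dct_matrix(n: int = 8) -> np.ndarray:
    """Orthonormal DCT-II matrix: row k is the k-th cosine basis vector."""
    k = np.arange(n)
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    basis[0] *= 1 / np.sqrt(2)                # normalize the DC row
    return basis * np.sqrt(2 / n)

C = dct_matrix(8)
block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)
coeffs = C @ block @ C.T                      # forward 2D DCT of one 8x8 block
assert np.allclose(C.T @ coeffs @ C, block)   # the transform alone is invertible
# Loss enters only when the coefficients are quantized (coarse rounding):
step = 16.0
block_lossy = C.T @ (np.round(coeffs / step) * step) @ C
```

Because the matrix is orthonormal, energy is compacted into a few low-frequency coefficients, which is what makes coarse quantization of the rest visually tolerable, and quantizing each 8×8 block independently is also the origin of JPEG's blocking artifacts.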

Signal modeling is a difficult task in contemporary signal- and image-processing methodology. Chapter 11, written by Ali Akbari and Maria Trocan, introduces a particular signal modeling method, called synthesis sparse representation, which has proven effective for many signals, such as natural images, and has been used successfully in a wide range of applications. In this kind of signal modeling, the signal is represented with respect to a dictionary. The authors' focus is mainly on dictionary design, which provides a simple and expressive structure for building adaptable and efficient dictionaries. The chapter emphasizes the application of dictionary learning to image compression.
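A minimal illustration of synthesis sparse representation: given a dictionary D, find a sparse coefficient vector x so that D·x approximates the signal. The sketch below uses generic orthogonal matching pursuit over a random dictionary; it is a standard textbook routine, not the authors' method, and all sizes and values are illustrative.

```python
import numpy as np

def omp(D: np.ndarray, y: np.ndarray, k: int) -> np.ndarray:
    """Orthogonal matching pursuit: greedily select k dictionary atoms,
    refitting the coefficients by least squares after each selection."""
    residual, support = y.astype(float).copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))   # best-matching atom
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
D = rng.normal(size=(16, 32))
D /= np.linalg.norm(D, axis=0)        # unit-norm atoms
x_true = np.zeros(32)
x_true[[3, 17]] = [2.0, -1.5]         # a 2-sparse coefficient vector
y = D @ x_true                        # the observed signal
x_hat = omp(D, y, k=2)
assert np.linalg.norm(y - D @ x_hat) < np.linalg.norm(y)   # fit improves over zero
```

Dictionary learning, the chapter's focus, goes one step further: instead of fixing D, it adapts the atoms themselves to a training set so that such sparse fits become as accurate as possible.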

There is a huge need in the medical community for applications that are low cost and achieve high compression, as large amounts of patient data and images must be transmitted over the network to be reviewed by physicians for diagnostic purposes. This has led to an area called biomedical image compression. Here, compression without image loss is required, since even a small loss in the data can lead to a wrong diagnosis. Chapter 12 by Trupti N. Baraskar and Vijay R. Mankar deals with medical image compression through a discrete-wavelet-based threshold approach. By applying N-level decomposition with 2D wavelets, various levels of wavelet coefficients are obtained. A lossless hybrid encoding algorithm, which combines a run-length encoder and a Huffman encoder, is used for compression and decompression.
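Both encoders combined in the chapter's hybrid scheme are classical and lossless. The compact sketch below is an illustrative textbook version, not the authors' implementation: runs of repeated symbols are collapsed first, then a Huffman code is built over the remaining symbols.

```python
import heapq
from collections import Counter
from itertools import groupby

def run_length_encode(data):
    """Collapse runs of repeated symbols into (symbol, run-length) pairs."""
    return [(sym, len(list(group))) for sym, group in groupby(data)]

def huffman_code(symbols):
    """Build a prefix-free Huffman code {symbol: bitstring} from frequencies."""
    freq = Counter(symbols)
    if len(freq) == 1:                        # degenerate case: one distinct symbol
        return {next(iter(freq)): "0"}
    # Heap entries carry a unique index so ties never compare the dicts.
    heap = [(n, i, {sym: ""}) for i, (sym, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)       # two least-frequent subtrees
        n2, i2, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (n1 + n2, i2, merged))
    return heap[0][2]

pairs = run_length_encode("aaaabbbcca")
# pairs == [('a', 4), ('b', 3), ('c', 2), ('a', 1)]
code = huffman_code([sym for sym, _ in pairs])
```

Because Huffman codewords are prefix-free and run-length encoding is trivially invertible, the composition is fully lossless, which is the property the chapter needs for diagnostic-grade medical images.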

Structure of the book: the book contains twelve chapters grouped into three sections. Section 1 deals with Error Detection and Correction and contains six chapters. Section 2 concentrates on Signal and Image Processing and contains three chapters. The last section is devoted to Image Compression and contains three chapters.

Muhammad Sarfraz
Department of Information Science, College of Life Sciences, Kuwait University, Sabah AlSalem University City, Shadadiya, Safat, Kuwait

Dr. R. Sudhakar
Professor, Department of Electronics and Communication Engineering, Dr. Mahalingam College of Engineering and Technology, NPT – MCET Campus, Udumalai Road, Pollachi, Coimbatore District, Tamilnadu, India


Section 1

## Error Detection and Correction


#### Chapter 1

## Generalized Low-Density Parity-Check Codes: Construction and Decoding Algorithms

Sherif Elsanadily

DOI: http://dx.doi.org/10.5772/intechopen.88199

#### Abstract

Scientists have competed to find codes that can be decoded with optimal decoding algorithms, and generalized LDPC (GLDPC) codes were found to compare well with such codes. LDPC codes are well treated with both types of decoding, HDD and SDD. On the other hand, iterative decoding of GLDPC codes, on both AWGN and BSC channels, has not been sufficiently investigated in the literature. This chapter first describes the construction of GLDPC codes and then discusses the iterative decoding algorithms proposed for them so far on both channels. The SISO decoders for GLDPC component codes show excellent error performance at moderate and high code rates; however, the complexity of such decoding algorithms is very high. When the HDD bit-flipping (BF) algorithm was applied to LDPC codes for its simplicity and speed, its performance fell far from the BSC capacity; involving LDPC codes in optical systems using such algorithms is therefore a poor choice. GLDPC codes can be introduced as a good alternative to LDPC codes, as their performance under the BF algorithm can be improved, which would make them a competitive choice for optical communications. This chapter discusses the iterative HDD algorithms that improve the decoding error performance of GLDPC codes. SDD algorithms that maintain this performance at reduced decoding complexity are also described.

Keywords: channel coding, generalized LDPC codes, iterative decoding, bit-flipping, Chase algorithm

#### 1. Introduction

Generalized LDPC (GLDPC) block codes were first proposed by Tanner [1]; they internally contain block codes (called component codes) rather than just single parity checks (SPCs), as is the case in LDPC codes. From this definition, we know that LDPC codes can be regarded as a special class of GLDPC codes. GLDPC block codes possess many desirable features, such as large minimum distance [2], good iterative decoding performance, and low error floor [3]. At the same time, the complexity of the processed operations increases due to the inserted, more complicated constraints. Therefore, scientists are motivated to find a good GLDPC code with suitable subcodes achieving the desired error performance at reasonable complexity. The methods in [4] by Fossorier and the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm

[5], the APP decoding algorithm, are widely investigated for GLDPC decoding. Both algorithms perform the task with various subcodes, such as Hamming codes [2, 6], BCH codes [6, 7], RS codes [7], and GLDPC codes with hybrid subcodes [8, 9]. However, these decoders can be considered highly complicated. In [10], a kind of Hadamard-based GLDPC code was suggested. The complicated decoding processes were avoided due to the simplicity and speed of the subcode transform used (the FHT). On the other hand, this code is not convenient in many communication systems, as it has a low code rate (R ≤ 0.1). The soft-input/soft-output (SISO) decoders for decoding the component codes in GLDPC codes were considered in [2, 6] and [11], showing very good error performance at moderate and high code rates. Two interesting approaches to GLDPC codes have been presented in [4, 12]. In [12], the proposed construction concentrates on maximizing the girth and the minimum distance of the global code. The generated codes achieved coding gains of up to 11 dB over a 40 Gb/s optical channel. In [4], the authors propose doubly generalized LDPC codes; these codes employ local codes at both variable and check nodes. Returning to LDPC codes: since their resurrection, most research efforts were directed toward the implementation of these codes over the additive white Gaussian noise (AWGN) channel. LDPC codes were proven to perform very close to the Shannon limit of the AWGN channel, and much work has been carried out to design optimal codes and to improve and simplify iterative decoding of these codes over the AWGN channel. The major drawback is that such decoding exhibits considerable computational complexity. Gallager, on the other hand, accurately analyzed the performance of LDPC codes over the binary symmetric channel (BSC) in his original paper and proposed two HDD bit-flipping (BF) algorithms, for which he provided theoretical limits under iterative decoding. However, BF algorithms did not gain much attention.
Most of the work that considered BF decoding used it in conjunction with soft information obtained from the AWGN channel to improve decoding performance. However, in some applications, such as optical communications, soft values are not available at the receiver. Optical channels are therefore an excellent example of the BSC, and only hard decision decoding is possible. Currently, BCH and RS codes are exclusively used in optical communication for error control, since there are simple and efficient algorithms for decoding them. With the recent introduction of wavelength division multiple access (WDMA), transmission rates in optical communications reach 40 Gb/s per channel/fiber, a standard in modern optical networks (SONET/SDH). Moreover, the concept of signal regeneration was abandoned in optical communication with the advancement in lasers, so that the optical signal is transmitted over larger distances than before, reaching the receiver very attenuated. These developments in optical communication call for an error control code that is very powerful yet has simple and fast decoding. Furthermore, very low error rates are needed, say a BER of 10⁻¹⁵.

The BF algorithm proposed by Gallager is a HDD algorithm and is implemented using modulo-2 logic. Therefore it satisfies the requirement for simplicity and speed. However, the performance of the decoding algorithm is far from the capacity of the BSC, and, more importantly, an error floor is generally observed, which seriously constrains implementation of LDPC codes in optical systems. GLDPC codes can be introduced as a good alternative to LDPC codes, as their performance under the BF algorithm can be improved and the observed error floor can be lowered or even removed. GLDPC codes would then be a competitive choice for optical communications.

In this chapter, iterative decoding of GLDPC codes over AWGN and BSC is studied. HDD algorithms that improve the decoding performance and error-floor behavior of GLDPC codes over BSC channels are discussed. They make GLDPC codes very competitive for high-rate optical communications. Soft decision decoding (SDD) algorithms that maintain the performance at reduced decoding complexity are also presented.

#### 2. GLDPC code construction

The check node in the bipartite graph of the LDPC code, as said before, is connected to a number of variable nodes which satisfy a single parity check. The GLDPC code is a more generalized form of LDPC, as the bits of the VNs connected to the same CN constitute a valid codeword of an (n, k) linear block code (other than the simple SPC code). This (n, k) code is therefore called a constituent code, component code, or simply subcode. The CN associated with this generalized code is called a generalized CN (GCN). The GLDPC code is referred to as an (N, J, n) regular and "strict-sense" code, as depicted in Figure 1, if:

• The VN degree (denoted as q_w) is constant for all VNs (q_w = J).

• The GCN degree (denoted as q_c) is constant for all GCNs (q_c = n).

• The same constituent code (other than the simple SPC code) stands for all GCNs.

Figure 1. The bipartite graph of the strict-sense (32,2,16) regular GLDPC code.
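As a quick sanity check on these degree conditions, the regularity of a candidate global matrix can be verified programmatically. The following is an illustrative sketch (the function name and the toy matrix are ours, not from the chapter):

```python
import numpy as np

def is_strict_sense_regular(H_global, J, n):
    """Check the degree conditions of an (N, J, n) regular GLDPC code:
    every VN (column) has weight J and every GCN (row) has weight n."""
    col_ok = (H_global.sum(axis=0) == J).all()
    row_ok = (H_global.sum(axis=1) == n).all()
    return bool(col_ok and row_ok)

# Toy global matrix: 2 GCNs over 8 VNs, every VN in both GCNs (J = 2, n = 8)
H_toy = np.ones((2, 8), dtype=int)
print(is_strict_sense_regular(H_toy, J=2, n=8))  # True
```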


where J is the column weight in the parity-check matrix of the global LDPC code and N is the overall code block length. The GLDPC code is otherwise called a "hybrid code" (if not all the GCNs have the same constituent code) [8, 13].

The GLDPC code rate is given by R = K/N ≥ 1 − J(1 − k/n), where K denotes its code dimension and N denotes its block length. According to the chosen values of its parameters, the GLDPC code has several desirable properties, such as a better minimum distance (compared to an LDPC code with the same code rate) [2]. The GLDPC code also converges faster and is distinguished by a lower error floor [3]. We are interested here in GLDPC codes based on Hamming codes, for simplicity and fast decoding purposes.

Figure 1 elaborates the bipartite graph of an (N, J, n) regular Hamming-based GLDPC code with a (4 × 32) global LDPC matrix. The extended Hamming (8,4) constituent code is represented in every GCN. Figure 2 depicts the procedure to get the overall parity-check matrix of this code from the global LDPC matrix (also referred to as the graph adjacency matrix). Every "1" in every row of the global matrix is replaced with a column from the columns of the constituent code parity-check matrix, and every "0" is replaced with a zero column. The assignment of the constituent code H columns should be done randomly to generate a code with good characteristics. Other constructions of GLDPC codes are discussed further in [14–16].

Figure 2. The procedure of generating a (32,2,16) Hamming-based GLDPC code from its base matrix.
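The column-substitution procedure just described can be sketched in a few lines. This is an illustrative sketch only (the function name and the toy matrices are ours): each GCN row of the global matrix is expanded by assigning the subcode's parity-check columns, in random order, to the row's "1" positions.

```python
import numpy as np

def expand_global_matrix(H_global, H_sub, rng):
    """Build a GLDPC parity-check matrix: each '1' in a GCN row of the
    global matrix receives one column of the subcode matrix H_sub
    (assigned in random order); each '0' receives a zero column."""
    m_sub, n = H_sub.shape                  # subcode: n bits, n - k checks
    M, N = H_global.shape
    H = np.zeros((M * m_sub, N), dtype=int)
    for j, row in enumerate(H_global):
        ones = np.flatnonzero(row)
        assert len(ones) == n, "GCN degree must equal subcode length n"
        for pos, col in zip(ones, rng.permutation(n)):
            H[j * m_sub:(j + 1) * m_sub, pos] = H_sub[:, col]
    return H

# Extended Hamming (8,4) parity-check matrix as the constituent code
H_EXT_HAMMING = np.array([[1, 1, 1, 1, 1, 1, 1, 1],
                          [0, 0, 0, 0, 1, 1, 1, 1],
                          [0, 0, 1, 1, 0, 0, 1, 1],
                          [0, 1, 0, 1, 0, 1, 0, 1]])

# Toy global matrix: 2 GCNs over 8 VNs (J = 2, n = 8)
H_global = np.ones((2, 8), dtype=int)
H = expand_global_matrix(H_global, H_EXT_HAMMING, np.random.default_rng(0))
print(H.shape)  # (8, 8)
```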

#### 3. SDD of GLDPC codes

SDD of Hamming-based GLDPC codes has been presented in the literature, showing that GLDPC codes are asymptotically good and can achieve capacity under iterative decoding using soft-input/soft-output subcode decoders [2, 6, 17]. The SISO decoder is typically a sub-optimal erasure decoder extended to deliver soft outputs, e.g., the Chase-II decoder [18], which is used in this section.

An (N, J, n) GLDPC code is constructed from an (N·J/n) × N random sparse matrix H, with row weights of n and column weights of J, together with the parity-check matrix H_c of an (n, k, d) subcode. The resultant GLDPC parity-check matrix is denoted H_GLDPC, as discussed in Section 2.


At any GCN c, the input to the Chase decoder is R_c = {r_c,1, ⋯, r_c,i, ⋯, r_c,n}, corresponding to the transmitted word X_c = {x_c,1, ⋯, x_c,i, ⋯, x_c,n} and its hard demodulated values Y_c = {y_c,1, ⋯, y_c,i, ⋯, y_c,n}.

A group of codewords is selected as the most likely ones to hold the transmitted word with the minimum number of errors. The algorithm operates on the available data (reliability) and flips all possible combinations of p = d/2 demodulated symbols at the least-reliable positions (LRPs), setting the reliability information, as in [19], to the log-likelihood ratio (LLR) of the decision y_c,i as

$$\Lambda\left(y_{c,i}\right) = \ln\left(\frac{\Pr\left(x_{c,i} = +1 \mid y_{c,i}\right)}{\Pr\left(x_{c,i} = -1 \mid y_{c,i}\right)}\right) = \left(\frac{2}{\sigma^2}\right) r_{c,i} \tag{1}$$

A set Z_c = {Z_c^q} of error patterns, with all possible errors confined to the p LRPs of Y_c, is used to modify Y_c, yielding a list of test patterns T_c^q (q ∈ {1, ⋯, 2^p}; T_c^q = Z_c^q + Y_c). Every T_c^q in the list is then decoded using an algebraic decoder, and each valid decoded codeword C^q is stored in a list Ω as a candidate codeword. The decision codeword D_c = {d_c,1, ⋯, d_c,i, ⋯, d_c,n} is chosen from this list using the rule [20]:

$$D_c = \mathbf{C}^q \;\;\text{if}\;\; \left| R_c - \mathbf{C}^q \right|^2 \le \left| R_c - \mathbf{C}^l \right|^2 \text{ for every } \mathbf{C}^l \in \Omega \tag{2}$$

Now, a soft value is estimated for every decoded symbol of the subcode and passed back on the edges linked to the symbol nodes; the MP algorithm iterates in this way to output a final estimate after performing a predefined number of iterations or satisfying the syndrome check.

In order to calculate the reliability of each bit d_c,i in the decision D_c (i.e., the ith soft output of the soft-input decoder), two codewords C^(+1)_i and C^(−1)_i are to be selected from two subsets of Ω with minimum Euclidean distance from R_c. The decision D_c is one of them, and the second one, B_c = {b_c,1, ⋯, b_c,i, ⋯, b_c,n}, called the competing codeword of D_c, with b_c,i ≠ d_c,i, should be found.

The soft outputs are generated in the LLR domain outgoing from GCN c using the following approximation formula:

$$r'_{c,i} = \left(\frac{\left| R_c - B_c \right|^2 - \left| R_c - D_c \right|^2}{4}\right) d_{c,i} \tag{3}$$

If the competing codeword Bc is not found, the following alternative and efficient formula is used:

$$r'_{c,i} = \beta \cdot d_{c,i} \quad \text{with} \quad \beta \ge 0 \tag{4}$$

where β is a reliability factor. Due to the variation of the sample standard deviation between the input and the output of the soft decoders, we introduce a scaling factor, α, to increase the convergence rate:

$$W_c(t+1) = R_c + \alpha(t) W_c(t) \tag{5}$$

By subtracting the soft input r_c,i from the soft output r′_c,i for each i ∈ {1, 2, ⋯, n} at GCN c, the extrinsic information during the tth iteration, W_c(t), is obtained. It is then multiplied by the scaling factor α(t) and added to the channel observed values R_c, and the result is considered as a priori information for the decoder at the next, (t + 1)th, iteration.
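The per-GCN procedure above (test patterns over the p LRPs, decision rule (2), and soft outputs via (3) with the fallback (4)) can be sketched as follows. This is an illustrative sketch only: the function names are ours, and an exhaustive nearest-codeword search stands in for the algebraic decoder of a real implementation.

```python
import numpy as np
from itertools import product

def ext_hamming_codewords():
    """All 16 codewords of the extended Hamming (8,4) code."""
    H = np.array([[1, 1, 1, 1, 1, 1, 1, 1],
                  [0, 0, 0, 0, 1, 1, 1, 1],
                  [0, 0, 1, 1, 0, 0, 1, 1],
                  [0, 1, 0, 1, 0, 1, 0, 1]])
    return [np.array(v) for v in product([0, 1], repeat=8)
            if not (H @ np.array(v) % 2).any()]

def nearest_codeword(y, codewords):
    # Stand-in for the algebraic decoder: closest codeword in Hamming distance.
    return min(codewords, key=lambda c: int(np.sum(c != y)))

def chase2_siso(r, codewords, p=2, beta=0.5):
    """Chase-II SISO decoding of one GCN, following Eqs. (2)-(4).
    BPSK mapping: bit 0 -> +1, bit 1 -> -1."""
    y = (r < 0).astype(int)                  # hard decisions Y_c
    lrp = np.argsort(np.abs(r))[:p]          # p least-reliable positions
    omega = []                               # candidate list Omega
    for flips in product([0, 1], repeat=p):  # 2^p test patterns
        t = y.copy()
        t[lrp] ^= np.array(flips)
        c = nearest_codeword(t, codewords)
        if not any((c == w).all() for w in omega):
            omega.append(c)
    xs = [1 - 2 * c for c in omega]          # BPSK images of candidates
    d2 = [float(np.sum((r - x) ** 2)) for x in xs]
    best = int(np.argmin(d2))                # decision rule, Eq. (2)
    D, dD, xD = omega[best], d2[best], xs[best]
    r_out = np.empty_like(r)
    for i in range(len(r)):
        competing = [d for c, d in zip(omega, d2) if c[i] != D[i]]
        if competing:                        # Eq. (3)
            r_out[i] = (min(competing) - dD) / 4 * xD[i]
        else:                                # fallback, Eq. (4)
            r_out[i] = beta * xD[i]
    return D, r_out

# All-zero codeword sent; one unreliable, wrongly demodulated sample
r = np.array([1.0, 1.0, 1.0, -0.1, 1.0, 1.0, 1.0, 1.0])
D, soft = chase2_siso(r, ext_hamming_codewords())
print(D)  # [0 0 0 0 0 0 0 0]
```

Note how the low-reliability sample at position 3 is corrected, and every soft output favors the decided bit.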



Figure 3. Performance variation with erasures p for (65,536,2,64) GLD code.


Figure 3 shows that a Chase decoder, rather than an optimal SISO decoder, can be successfully employed in the decoding of high-rate extended Hamming-based GLDPC codes, and the BERs are close to capacity with the efficient fast Chase decoding of [21].

#### 4. HDD of GLDPC codes

HDD, such as BF decoding or any other algebraic decoding scheme, can be generalized to GLDPC codes, especially over the BEC or BSC, and can be applied in very high-speed applications such as 40 Gb/s optical communications. The error-correcting capability of the subcodes at the GCNs is used to determine the positions of the least-reliable symbols more accurately. The iterative HDD algorithms for decoding GLDPC codes are described in the next subsections.

#### 4.1 WBFV algorithm

As mentioned before, in the BF algorithm for LDPC codes, the symbols belonging to the maximum number of unsatisfied CNs have their binary values inverted in each iteration before the following one. In effect, the failed CNs convey votes of unit weight to the corresponding connected VNs, and the algorithm inverts the least-reliable (LR) bits, i.e., those with the highest number of votes.
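This plain BF step can be illustrated concretely (our own sketch; the (7,4) Hamming parity-check matrix below is used simply as a small example matrix):

```python
import numpy as np

def bf_iteration(H, word):
    """One Gallager bit-flipping iteration: every unsatisfied check casts a
    unit-weight vote for each of its bits; the bits that collected the most
    votes are inverted."""
    syndrome = H @ word % 2          # 1 marks an unsatisfied check
    votes = syndrome @ H             # per-bit vote totals
    if syndrome.any():
        word = word.copy()
        word[votes == votes.max()] ^= 1
    return word

# Small example matrix (the (7,4) Hamming code, used here as a toy LDPC code)
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
received = np.array([0, 0, 0, 0, 0, 0, 1])   # single error in the last bit
print(bf_iteration(H, received))  # [0 0 0 0 0 0 0]
```

The erroneous bit appears in all three unsatisfied checks, collects the most votes, and is flipped.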

The iterative weighted bit-flip voting (WBFV) algorithm presented in [22] employs subcode hard-decision decoders (HDDs) at the GCNs that have a greater range of vote weights to pass to the connected VNs. A high-weight vote is passed to a specific symbol if the HDD at a given GCN considers it to be in error.

Since only GLDPC codes of the Gallager construction with J = 2 and Hamming subcodes are considered, all nonzero syndromes of these subcodes are error-correctable. The algebraic decoders therefore allow for only two instances:

• All-zero syndromes imply a valid codeword; in this case a vote V will be returned to all connected VNs (symbols) as in GCN 1 in Figure 4.


• Nonzero syndromes imply nonvalid codewords; in this case the indicated error position will be sent a vote E, and all other bits a vote e as in GCN 2 in Figure 4.

After votes have been cast, each symbol has received a vote pair: either VV, eV, ee, EV, Ee, or EE. In the example shown, the symbol at the two subcodes' intersection has a vote pair eV.

The strategy proceeds by passing current bit values from the VNs to the Hamming decoders at GCNs. The HDDs at these GCNs pass back n individual votes to the symbols they connect.

The magnitude of a vote marks the power of the decision for a GCN about the current symbol (reliable or not). The higher magnitudes mark the unreliable bits, and the lower ones mark the bits of more reliability.

The J votes arriving at every VN are collected; all N variable symbols are then sorted by this reliability information, and the group of LR bits is inverted before the upcoming iteration.

The iterative algorithm proceeds until all symbols become of weight pair VV or the maximum number of iterations is reached. For Hamming subcodes there are only three votes V, e, and E generated by the subcode HDDs described before. The vote rules, as sets of vote weights, are defined and listed in Table 1.

Figure 4. Example votes from subcode decoders.


#### Table 1.

Vote pair orderings for three vote rules.

| Vote pair (Rule A: V = 0, e = 2, E = 3) | Total | Vote pair (Rule B: V = 0, e = 1, E = 2) | Total | Vote pair (Rule C: V = 0, e = 1, E = 3) | Total |
|---|---|---|---|---|---|
| EE | 6 | EE | 4 | EE | 6 |
| Ee | 5 | Ee | 3 | Ee | 4 |
| ee | 4 | EV, ee | 2 | EV | 3 |
| eV | 2 | VV | 0 | eV | 1 |
| VV | 0 | | | VV | 0 |
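One WBFV iteration under these vote rules can be sketched as follows. This is our own illustrative sketch: Rule A weights are assumed, and the toy GCN membership, with both GCNs attached to the same seven bits, merely demonstrates the J = 2 vote-pair mechanics.

```python
import numpy as np

# Vote weights of Rule A from Table 1: V = 0, e = 2, E = 3 (assumed here)
WEIGHTS = {"V": 0, "e": 2, "E": 3}

# Parity-check matrix of the (7,4) Hamming subcode
H_SUB = np.array([[1, 0, 1, 0, 1, 0, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])

def hamming_votes(H_sub, bits):
    """Subcode HDD vote casting: an all-zero syndrome sends vote 'V' to every
    bit; a nonzero syndrome sends 'E' to the indicated error position and 'e'
    to all other bits."""
    syn = H_sub @ bits % 2
    if not syn.any():
        return ["V"] * len(bits)
    votes = ["e"] * len(bits)
    for i in range(len(bits)):
        # a single error's syndrome equals the corresponding column of H_sub
        if (H_sub[:, i] == syn).all():
            votes[i] = "E"
            break
    return votes

def wbfv_iteration(gcn_membership, H_sub, word):
    """One WBFV iteration: collect the J votes arriving at every VN and invert
    the group of bits with the highest total vote weight."""
    totals = np.zeros(len(word))
    for positions in gcn_membership:          # VNs attached to each GCN
        votes = hamming_votes(H_sub, word[positions])
        for pos, v in zip(positions, votes):
            totals[pos] += WEIGHTS[v]
    if totals.max() > 0:
        word = word.copy()
        word[totals == totals.max()] ^= 1     # flip the least-reliable group
    return word

# Toy J = 2 setting: two GCNs, both attached to the same seven bits
membership = [np.arange(7), np.arange(7)]
word = np.array([0, 0, 1, 0, 0, 0, 0])        # single error at position 2
print(wbfv_iteration(membership, H_SUB, word))  # [0 0 0 0 0 0 0]
```

The erroneous bit receives the vote pair EE (total 6), every other bit ee (total 4), so only the erroneous bit is flipped.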


Figure 5. Variation in performance with block length for (N,M,15,2) GLDPC codes.

Figure 5 shows that there is an obvious coding gain with increasing N. In fact, the curve of the crossover probability p required to give P_b = 10⁻⁵ versus N shows a linear relationship between log p and log N.

#### 4.2 BCH-based Fossorier decoding algorithm

In [6], GLDPC codes with BCH subcodes (instead of Hamming subcodes) were considered, but for the AWGN channel and ML soft decoding, owing to their higher error-correcting capability. The algorithm presented here also uses high-rate BCH and RS codes, but as an HDD algorithm, and can be efficiently applied in very high-speed (40 Gb/s) optical systems, where soft information is not available due to optical-electrical conversions [7].

The actions of this algorithm are as follows. VN i is considered to be connected to GCNs j and k. Two messages are sent from GCN j to VN i. First, it outputs $u_{ji}$ (the value of VN i) taken from the sub-decoder. Second, it outputs $U_{ji}$ (representing the success or failure of the GCN decoding). $U_{ji}$ is thus a binary signal, equal to 1 if there is a valid word and 0 if there is not. The same applies to the messages arriving from node k.

The VN message $v_{ij}$ depends on the value $y_i$ received from the channel and on the messages received from the GCNs other than node j. Since only GLDPC codes of high code rate (J = 2) are concerned here, there is only one other GCN k. Hence, the updating rule at the symbol node can be expressed as

$$
v_{ij} = y_i \cdot \overline{U_{ki}} + u_{ki} \cdot U_{ki} \tag{6}
$$
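The update rule in Eq. (6) can be sketched as follows (a minimal illustration with all quantities binary; `vn_message` is a hypothetical helper name): if the other GCN k reports a decoding success ($U_{ki} = 1$), the VN forwards that sub-decoder's proposed value $u_{ki}$; otherwise it falls back to the channel value $y_i$.

```python
# Sketch of the VN update rule of Eq. (6); all arguments are in {0, 1}.
def vn_message(y_i, u_ki, U_ki):
    """v_ij = y_i * (1 - U_ki) + u_ki * U_ki."""
    return y_i * (1 - U_ki) + u_ki * U_ki

assert vn_message(y_i=1, u_ki=0, U_ki=1) == 0   # GCN k succeeded: trust u_ki
assert vn_message(y_i=1, u_ki=0, U_ki=0) == 1   # GCN k failed: keep channel bit y_i
```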

where $\overline{U_{ki}}$ is the complement of $U_{ki}$. The decoder processes are carried out until all GCNs are satisfied or until a given maximum number of iterations is reached. Finally, any symbol (VN) connected to a satisfied GCN takes its proposed value. If a VN is connected to two unsatisfied GCNs, it takes the originally received value.

RS-based GLDPC codes of rate r = 0.467 and different lengths are examined in Figure 6. The figure shows that the RS(15,11,5) product code performs better than the corresponding GLDPC code of length 225 because it possesses a better minimum distance. As N increases, the BER improves, particularly in the error-floor region.

Figure 6. Performance of GLDPC codes with RS(15,11,5) codes.

Generalized Low-Density Parity-Check Codes: Construction and Decoding Algorithms

DOI: http://dx.doi.org/10.5772/intechopen.88199

#### 5. Modified HDD algorithms for improving the error performance

#### 5.1 Two-side state-aided bit-flipping (TSSA-BF) algorithm

An HDD decoder founded on the BF algorithm is presented in [23]. The main idea is to take advantage of the different sub-decoder states: data representing these states are added as a reliability factor to be utilized in the rest of the decoding process. This additional data (in the form of an additional bit) is inserted on both sides (VNs and CNs).

The goal is generally to remove from the code construction all trapping sets likely to be produced (those that generate uncorrectable errors). This is currently not attainable due to constraints on processing speed and implementation. An alternative approach that fulfills most of this goal with reasonable processing speed is suggested; it adds resources that operate only in the unusual circumstances when the previously mentioned trapping sets occur.

Owing to its commonness and simplicity, the extended Hamming code is studied as the subcode inside the GLDPC codes.

#### 5.1.1 Failure analysis

When the ext-Hamming code ($d_{min} = 4$) is used, the sub-decoders of the GLDPC code may output errors in both cases, decoding success and failure. In the GCN failure case, the decoder cannot clearly locate the error positions. In the GCN success case, the errors may be generated by undiscovered errors (a received word decoded to a valid codeword other than the sent one, i.e., e ≥ 4) or by faulty repair (e > 2). Therefore, the errors at a given GCN can be distinguished by the following names:

1. Plain single error (element P): one error occurs and a true correction takes place.

2. Unknown set (U-set): multiple errors (e > 1) with decoder failure (errors detected but cannot be marked).

3. Ambiguous set (A-set): multiple errors (e > 2) making the decoder flip an assumed-correct bit (false correction).

4. Dark set (D-set): multiple errors (e ≥ 4) not detected by the decoder, which produce a zero-syndrome vector.
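These event classes can be tallied in a simulation with a genie-aided helper along these lines (a sketch under the stated definitions; `gcn_error_class` and its arguments are hypothetical, since a real decoder does not know e):

```python
# Genie-aided bookkeeping sketch that labels the error events for a single
# GCN sub-decoder call; e is the number of errors at the sub-decoder input.
def gcn_error_class(e, decoder_failed, zero_syndrome):
    if e == 0:
        return "clean"                     # no channel errors at this GCN
    if zero_syndrome:
        return "D-set"                     # e >= 4 errors aliased to a valid word
    if decoder_failed:
        return "U-set"                     # detected but unlocatable (e > 1)
    return "P" if e == 1 else "A-set"      # true correction vs. false correction

assert gcn_error_class(1, False, False) == "P"
assert gcn_error_class(2, True, False) == "U-set"
assert gcn_error_class(3, False, False) == "A-set"
assert gcn_error_class(4, False, True) == "D-set"
```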




#### 5.1.2 Algorithm description

As in the GLDPC code with 1B-construction studied in [22] (Figure 7), a further bit is inserted beside the main bit moving between the VN and GCN. For VNs, it acts as the reliability of the bit value (bit 1 if suspect, bit 0 if assumed correct), and this additional data is forwarded for use by the GCNs. For GCNs, this additional bit acts as the power of the decision, raising the number of reliability levels to 4. The GCN decodes the received word, forwards a signal (bit 1 or 0, namely flip or keep), and appends this additional bit as the power of this signal (in ascending reliability-level arrangement: 1+(11), 1−(10), 0−(00), and 0+(01), corresponding to strong flip, weak flip, weak keep, and strong keep, respectively).

#### 5.1.2.1 Horizontal processing

For the ext-Hamming decoder, there are three states at a given GCN: state "0" in the case of a zero syndrome, state "1" in the case of a one-error repair, and state "2" in the case of decoder failure.


For any GCN c, c = 1, 2, ⋯, M, let $d_c^{(l)}$ be the cth GCN decoder state at the lth iteration. If $d_c^{(l)} = 0$, the decoder forwards the message (0−) to all elements of its set of connected VNs, W(c) = {w1, w2, ..., wi, ..., wn}. The message (0−), with reliability level 3, is forwarded because W(c) may contain a dark set (D-set). If $d_c^{(l)} = 1$, the decoder forwards (1+), with level 1, on the assumed-error place w∗ and (0−) to the remaining set elements W′(c). Elements of W′(c) are given 0− (not 0+) because they may contain an A-set. Finally, if $d_c^{(l)} = 2$, it sends (1−), with level 2, to all elements of W(c), which contains a U-set.

As illustrated in Figure 8, let $U_{c,w_i}^{(l)} = [U_{c,w_i}^{(l)}(1);\ U_{c,w_i}^{(l)}(2)]$ be the two bits representing the outgoing flip message and its reliability, respectively, from GCN c to VN $w_i$. Table 2 illustrates the four possible outgoing messages from GCN c to every connected VN $w_i$.

Figure 8. Incoming and outgoing messages between GCNs and VNs.

Figure 7. GLDPC bipartite graph example.



#### Table 2.

Possible outgoing messages from GCN c to any connected VN.

| $U_{c,w_i}^{(l)}(1)$ | $U_{c,w_i}^{(l)}(2)$ | Alternative denotation | Reliability grade | Message meaning |
|---|---|---|---|---|
| 1 | 1 | 1+ | 1 | Strong flip |
| 1 | 0 | 1− | 2 | Weak flip |
| 0 | 0 | 0− | 3 | Weak keep |
| 0 | 1 | 0+ | 4 | Strong keep |
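The horizontal messaging rule, together with the Table 2 encoding, can be sketched as follows (`gcn_messages` and `TABLE2` are illustrative names, not from the chapter):

```python
# Minimal sketch of the horizontal messaging rule: the GCN decoder state
# d in {0, 1, 2} is mapped, per Table 2, to a two-bit (flip, reliability)
# message for each connected VN; w_star is the assumed-error position
# when d = 1.
TABLE2 = {"1+": (1, 1), "1-": (1, 0), "0-": (0, 0), "0+": (0, 1)}

def gcn_messages(d, vns, w_star=None):
    if d == 0:
        return {w: TABLE2["0-"] for w in vns}     # weak keep: a D-set may hide here
    if d == 1:
        return {w: TABLE2["1+"] if w == w_star else TABLE2["0-"] for w in vns}
    return {w: TABLE2["1-"] for w in vns}         # decoder failure: weak flip (U-set)

msgs = gcn_messages(1, vns=[0, 1, 2, 3], w_star=2)
assert msgs[2] == (1, 1) and msgs[0] == (0, 0)    # strong flip at w*, weak keep elsewhere
```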

Let $V_{c,w_i}^{(l)} = [V_{c,w_i}^{(l)}(1);\ V_{c,w_i}^{(l)}(2)]$ be the two bits representing the arriving bit value and its reliability, respectively, at GCN c from VN $w_i$. For any GCN c, let the initial reliability bit be $V_{c,w_i}^{(0)}(2) = 0$ for all $w_i \in W(c)$. As previously mentioned, $V_{c,w_i}^{(l)}(2) = 1$ for a suspect bit and 0 for an assumed-correct one.

Now a local GCN counter is introduced to compare the present GCN state with that of the previous iteration. Let $\alpha_c^{(l)}$ be the number of consecutive previous repetitions of the state $d_c^{(l)}$, and let $\beta_c^{(l)}$ be the number of incoming suspects among the n bits connected to GCN c. According to the values of $\alpha_c^{(l)}$ and $\beta_c^{(l)}$ and the additional information $V_{c,w_i}^{(l)}(2)$, the algorithm can improve its decision, and the horizontal process continues iteratively as illustrated in [23].

For $d_c^{(l)} = 0$, the counter's role is to enhance the reliability from 0− to 0+ if the state persists for two consecutive iterations. For any GCN c with $d_c^{(l)} = 2$, if the message $V_{c,w_i}^{(l)}(2)$ from bit $w_i$ indicates a suspect and the decoder state ($d_c^{(l)} = 2$) persists for three consecutive iterations, the decoder recalculates the syndrome after flipping this suspect. If the syndrome check is satisfied (i.e., a valid codeword), the decoder estimates that bit to be in error and changes its reliability from 1− to 1+.
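The counter rule for $d_c^{(l)} = 0$ can be sketched as follows (a hypothetical helper; only the 0− to 0+ upgrade is shown):

```python
# Hypothetical helper sketching the GCN state counter alpha: the outgoing
# reliability is upgraded from "0-" to "0+" once state 0 has persisted
# for two consecutive iterations.
def keep_message(state_history):
    """state_history holds the GCN decoder states, most recent last."""
    if state_history[-1] != 0:
        raise ValueError("rule shown only for state 0")
    alpha = 0
    for s in reversed(state_history[:-1]):      # count consecutive repeats
        if s != 0:
            break
        alpha += 1
    return "0+" if alpha >= 1 else "0-"         # held two iterations: strong keep

assert keep_message([2, 0]) == "0-"             # first iteration in state 0
assert keep_message([2, 0, 0]) == "0+"          # state 0 held for two iterations
```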

#### 5.1.2.2 Vertical processing

With four reliability levels and J = 2, there is a set of 10 possible combinations of incoming messages at VN w, w = 1, 2, ⋯, N. According to the failure analysis, this set can be divided into three subsets {$s_i$}, i = 1, 2, 3. For any VN w, its reliability state $r_w^{(l)}$ is determined by the subset to which its incoming messages (from GCNs $c_j$, j = 1, 2) belong:

$$\begin{split} s\_1 &= \{ (\mathbf{1}^+, \mathbf{1}^+), (\mathbf{1}^+, \mathbf{1}^-), (\mathbf{1}^+, \mathbf{0}^-) \}, \\ s\_2 &= \{ (\mathbf{1}^-, \mathbf{1}^-), (\mathbf{1}^-, \mathbf{0}^-), (\mathbf{0}^+, \mathbf{1}^+) \}, \\ s\_3 &= \{ (\mathbf{0}^-, \mathbf{0}^-), (\mathbf{0}^+, \mathbf{1}^-), (\mathbf{0}^+, \mathbf{0}^-), (\mathbf{0}^+, \mathbf{0}^+) \} \end{split}$$

$$r\_w^{(l)} \quad = \left\{ \begin{array}{l} \mathbf{0} \quad \text{if} \ \left\{ U\_{c\_{jw}}^{(l)}, j = \mathbf{1}, 2 \right\} \in s\_1 \\ \mathbf{1} \quad \text{if} \ \left\{ U\_{c\_{jw}}^{(l)}, j = \mathbf{1}, 2 \right\} \in s\_2 \\ \mathbf{2} \quad \text{if} \ \left\{ U\_{c\_{jw}}^{(l)}, j = \mathbf{1}, 2 \right\} \in s\_3 \end{array} \right.$$

The VN w with the least-reliable level, $r_w^{(l)} = 0$, needs to be flipped. For $r_w^{(l)} = 1$, a VN state counter is appointed to compare the VN's current reliability state with that of the previous iteration. Let $\gamma_w^{(l)}$ be the number of consecutive previous repetitions of the state $r_w^{(l)}$. With $r_w^{(l)} = 1$ and according to the $\gamma_w^{(l)}$ values, the VN is not flipped but is counted as a suspect bit. It sends this reliability information ($V_{c_j w}^{(l)}(2) = 1$, j = 1, 2) to the GCNs to be taken into account. For $r_w^{(l)} = 2$, the VN is considered a reliable bit and is kept, with $V_{c_j w}^{(l)}(2) = 0$, j = 1, 2 (i.e., an assumed-correct bit).
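The subset classification above can be sketched as follows (an illustration; `vn_state` is a hypothetical helper, and message pairs are treated as unordered so that the three subsets cover all 10 combinations):

```python
# Sketch of the vertical-processing classification: the (unordered) pair
# of incoming Table 2 labels at a VN selects its reliability state r_w.
S1 = {("1+", "1+"), ("1+", "1-"), ("1+", "0-")}                  # r_w = 0: flip
S2 = {("1-", "1-"), ("1-", "0-"), ("0+", "1+")}                  # r_w = 1: keep as suspect
S3 = {("0-", "0-"), ("0+", "1-"), ("0+", "0-"), ("0+", "0+")}    # r_w = 2: keep as reliable

def vn_state(m1, m2):
    for r, subset in enumerate((S1, S2, S3)):
        if (m1, m2) in subset or (m2, m1) in subset:
            return r
    raise ValueError("unknown message pair")

assert vn_state("1+", "0-") == 0      # least reliable: the bit is flipped
assert vn_state("0-", "1-") == 1      # suspect: kept but reported to the GCNs
assert vn_state("0+", "0-") == 2      # reliable: kept as assumed correct
```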

Figures 9–11 show the block diagrams of the overall decoder, the horizontal process, and the vertical process, respectively. Table 3 illustrates an example of the ext-Hamming (8,4) constituent decoder (n = 8) employed at GCN c at the lth iteration, where the shaded parts represent conditions that are satisfied.


#### Figure 9.

Block diagram of the overall TSSA-BF decoder.


#### Figure 10.

Block diagram of the horizontal process of the TSSA-BF decoder.

#### Figure 11.

Block diagram of the vertical process of the TSSA-BF decoder.


#### Table 3.

Example of ext-Hamming (8,4) constituent decoder employed at GCN c at lth iteration.

#### 5.1.3 Important notes on the algorithm


• The output messages from VN w to its two connected GCNs $c_j$, j = 1, 2, are the same (i.e., $V_{c_1 w}^{(l)} = V_{c_2 w}^{(l)}$).

• The initial incoming messages at GCN c are $V_{c_j w}^{(0)}(1) = y_w$ and $V_{c_j w}^{(0)}(2) = 0$, j = 1, 2, where the overall demodulated binary sequence is $Y = \{y_w;\ w = 1, 2, \cdots, N;\ y_w \in \{0, 1\}\}$.

• $U_{c,w_i}^{(l)} = [U_{c,w_i}^{(l)}(1);\ U_{c,w_i}^{(l)}(2)]$ is the outgoing message from GCN c to VN $w_i$, consisting of two bits representing the reliability level (one of the four possible values 0+, 0−, 1−, or 1+).

• The GCN decoder does not output actual decoded words to its connected VNs. Instead, it sends a reliability signal (taking one of four possible values) represented by the two bits $U_{c,w_i}^{(l)}(1)$ and $U_{c,w_i}^{(l)}(2)$. The VN, on the other hand, has to take a decision based on its incoming messages: (Flip), (Keep but as suspect), or (Keep as assumed correct).

• The function of the GCN state counter $\alpha_c^{(l)}$ (or the VN state counter $\gamma_w^{(l)}$) is to count the number of consecutive iterations with the same state at this GCN (or VN, respectively) up to the present iteration (l).


Figure 12. Performance of GLDPC with (32,26) ext-Hamming subcodes.

Figure 12 shows the BER performance of the ext-Hamming-based GLDPC code (with overall rate R = 0.625) under the TSSA-BF algorithm, compared to the HD BF decoder algorithms in [7, 22] and an SD chase sub-decoder (number of LRPs p = 2). A finite-length 1B-construction GLDPC code of block length N = 4096 is used, and the maximum number of iterations (Imax) is set to 20. This algorithm outperforms the other HDD ones over various values of Eb/No, with a gain of no less than 0.5 dB, at the expense of a small increase in the resources used by the algorithm.

#### 5.2 Classification-based algorithm for BF decoding with initial soft information

This algorithm is a modern bit-flipping decoding approach [24]. It is based on exploiting the fast BF HDD method with the help of data extracted from the AWGN channel.

However, it exploits this data only in the start phase of the decoding, intentionally, to perform a certain classification operation. The algorithm also improves its performance by adding an additional bit to the messages arriving at VNs from CNs, as a technique to enhance the decision reliability at both VNs and GCNs. The main role of this additional bit is to benefit from the subcode states as in [23], but in a distinct fashion. This approach allows a considerable enhancement in BER at the expense of additional resources on the VN side only. SDD is characterized by complexity and the need for a large amount of real-valued calculations (on the channel soft information) throughout the whole decoding procedure. This algorithm (counted as HDD), however, needs them only in the start phase, and all data processed thereafter are hard values. Therefore, this technique can remove a significant part of the computational complexity that was noticed in [23].

#### 5.2.1 Algorithm description

Trapping sets, which produce unrepaired errors, are the major cause of the error-performance letdown of bipartite-graph-based codes. The goal here is to diminish the damaging effect of most of the generated sets, even by inserting the supplemental resources discussed below in this section.


The algorithm uses the soft channel values at the beginning of the decoding to classify the received symbols (VNs) against a predetermined threshold. Owing to its commonness and simplicity, the extended Hamming code is studied as the subcode inside the GLDPC codes. The extended Hamming code, with its increased $d_{min}$ ($d_{min} = 4$), is powerful and suitable for constructing standard-relevant GLDPC codes. The ext-Hamming sub-decoders may produce errors in both cases, decoding success and failure. At GCN sub-decoder failure, the decoder cannot locate the positions of the errors. At GCN sub-decoder success, decoder errors may emerge for two reasons: undiscovered errors (the received word is decoded to a non-transmitted valid one, e ≥ 4, where e is the number of errors at the input of the local decoder) and false repair (e > 2).

An additional bit is inserted next to the main bit in the message from the GCN to a connected VN. It acts as the decision power of the GCN, raising the number of reliability levels from two to four. The GCN sub-decoder decodes the received word and forwards two bits (the main bit and the additional one) to every connected VN. The main bit is read as a decision (flip (1) or keep (0)), and the additional one as the decision power (strong (1) or weak (0)). The four reliability levels in descending arrangement are 0+(01), 0−(00), 1−(10), and 1+(11), corresponding to strong keep, weak keep, weak flip, and strong flip, respectively. The decoding comprises two processes, GCN processing and VN processing, as explained below.

#### 5.2.1.1 GCN processing


The ext-Hamming sub-decoder at a given GCN can be in one of three possible states, defined as follows:

1. State 0: at the decoder success with no errors detected.

2. State 1: at errors detected and corrected.

3. State 2: at the decoder failure (errors detected and cannot be corrected).

Using the same notations as in Figure 8, for any GCN c, c = 1, 2, ⋯, M, let d_c^(l) be the decoder state of the cth GCN at the lth iteration. The procedures of the GCN sub-decoder of this algorithm are illustrated below:


Let U_{c,wi}^(l) = [U_{c,wi}^(l)(1) U_{c,wi}^(l)(2)] be the two bits which represent the outgoing decision message and its power, respectively, from GCN c to VN wi. Let V_{c,wi}^(l) be the incoming binary bit value of the constituent codeword at c from VN wi.

The overall set of N hard demodulated sequence bits is Y = {y_w; w = 1, 2, ⋯, N; y_w ∈ {0, 1}}, where y_w = (1/2)(sgn(r_w) + 1) and r_w is the soft value of the wth bit in the received sequence from the AWGN channel. For any VN w, the initial values are V_{c_j,w}^(0) = y_w for j = 1, 2. Therefore, at any GCN c, the initial values are V_{c,wi}^(0) = y_{wi} for i = 1, 2, ⋯, n. Table 4 illustrates an example for the ext-Hamming (8,4) constituent decoder employed at GCN c (with n = 8) at the lth iteration.
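As a concrete illustration of this initialization, the sketch below (illustrative Python, not from the chapter; the function names are hypothetical) performs the BPSK mapping and the hard demodulation y_w = (sgn(r_w) + 1)/2:

```python
def bpsk_modulate(v):
    # Code bits v_w in {0, 1} map to channel symbols x_w in {-1, +1}.
    return [2 * bit - 1 for bit in v]

def hard_demodulate(r):
    # y_w = (sgn(r_w) + 1) / 2: a positive soft value demodulates to 1.
    return [1 if rw > 0 else 0 for rw in r]

# Initialization: every VN w passes y_w unchanged to both connected GCNs,
# so V^(0)_{c_j, w} = y_w for j = 1, 2.
received = [0.8, -1.3, 0.2, -0.1]   # example soft values r_w
y = hard_demodulate(received)       # [1, 0, 1, 0]
```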

#### 5.2.1.2 VN processing

Each VN is represented by two bits. The main bit is the symbol binary value; the additional bit represents the initial reliability of the symbol value, (1) for a suspect bit or (0) for an assumed-correct bit, and the VN will use this extra information as discussed later. The codeword symbols represented by the VNs are classified into two categories: most-reliable (MR) bits and least-reliable (LR) bits. The classification is initiated only according to the soft information received through the channel, based on a predetermined threshold.

For any transmitted codeword of length N, the wth code bit v_w ∈ {0, 1} is mapped to x_w ∈ {−1, +1}, respectively, and transmitted over the AWGN channel, which is characterized by the probability density function (pdf) p(r_w | x_w) given by

$$p(r\_w \mid x\_w) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[-(r\_w - x\_w)^2/2\sigma^2\right] \tag{7}$$


where σ<sup>2</sup> is the variance of the zero-mean Gaussian noise n_w that the channel adds to the transmitted value x_w (so that r_w = x_w + n_w) [25].

As illustrated in Figure 13, let ηo be the standard threshold on which the hard demodulator decision is based. For BPSK over the AWGN channel, ηo = 0, and at this value

$$p(r \mid x = +1) = p(r \mid x = -1) \tag{8}$$

Let δ(r) be the absolute difference between the two probabilities:

$$\delta(r) = \left| p(r \mid x = +1) - p(r \mid x = -1) \right| \tag{9}$$

Generalized Low-Density Parity-Check Codes: Construction and Decoding Algorithms. DOI: http://dx.doi.org/10.5772/intechopen.88199

Table 4. Output messages at GCN c with ext-Hamming (8,4) decoder.

Figure 13. The threshold ηc over the AWGN channel.

The received symbol is assumed to be accounted an LR bit if its value r_w gets near the zero point ηo (where δ = 0). The value of r with maximum difference δmax can be taken as a second point ηm, such that the received symbol is accounted an MR bit if its value r_w approaches it. The absolute value |ηm| is the same for both ±r_w, as the two probability density functions are symmetric around the zero point. The equation δ′(r)|_{r=ηm} = 0 should be solved to get ηm:

$$\delta(r) = \frac{1}{\sqrt{2\pi\sigma^2}} \left[ e^{-(r-1)^2/2\sigma^2} - e^{-(r+1)^2/2\sigma^2} \right] \tag{10}$$

then


$$\left. \delta'(r) \right|\_{r=\eta\_m} = \frac{1}{\sqrt{2\pi\sigma^2}} \left[ \frac{1-r}{\sigma^2} e^{-(r-1)^2/2\sigma^2} \right. \tag{11}$$

$$\left. {} + \frac{1+r}{\sigma^2} e^{-(r+1)^2/2\sigma^2} \right] = 0 \tag{12}$$

therefore

$$(1 - \eta\_m)e^{\eta\_m/\sigma^2} = -(1 + \eta\_m)e^{-\eta\_m/\sigma^2} \tag{13}$$

This equation can be solved numerically by the Newton-Raphson method; for various values of σ<sup>2</sup> (0.1–0.9), ηm ≈ 1.04.
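The numerical solution can be reproduced in a few lines of code. The sketch below (illustrative, not from the chapter; the names are hypothetical) applies Newton-Raphson to f(η) = (1 − η)e^{η/σ²} + (1 + η)e^{−η/σ²}, which is Eq. (13) rearranged into root-finding form, and then builds the classification threshold used next:

```python
import math

def eta_m(sigma2, eta0=1.0, tol=1e-12, max_iter=50):
    """Root of f(eta) = (1-eta)e^(eta/s2) + (1+eta)e^(-eta/s2) via Newton-Raphson."""
    f = lambda e: (1 - e) * math.exp(e / sigma2) + (1 + e) * math.exp(-e / sigma2)
    # Analytic derivative of f
    df = lambda e: math.exp(e / sigma2) * ((1 - e) / sigma2 - 1) \
        + math.exp(-e / sigma2) * (1 - (1 + e) / sigma2)
    e = eta0
    for _ in range(max_iter):
        step = f(e) / df(e)
        e -= step
        if abs(step) < tol:
            break
    return e

def eta_c(sigma2):
    # Classification threshold: midway between eta_o = 0 and eta_m
    return (0.0 + eta_m(sigma2)) / 2.0

def classify(r_w, threshold):
    # MR bit (lambda_w = 0) if |r_w| >= threshold, else LR bit (lambda_w = 1)
    return 0 if abs(r_w) >= threshold else 1
```

For σ² = 0.5 this gives ηm ≈ 1.03 and ηc ≈ 0.52, in line with the values quoted in the text.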

The classification threshold ηc for this algorithm is set in the middle between the two points: ηc = (ηo + ηm)/2 = (0 + 1.04)/2 = 0.52. Denote the initial reliability of the wth received bit as λw. According to the algorithm, this bit is classified as MR (λw = 0) if |r_w| ≥ ηc; else it is classified as LR (λw = 1).

Using a column weight of two and four levels of reliability, there are 10 possible combinations of incoming messages at VN w, w = 1, 2, ⋯, N. These combinations are categorized into three subsets {s_i}, i = 1, 2, 3. For any VN w, its reliability state g_w^(l) is set according to which subset the arriving messages belong to. Motivated by TSSA-BF in [8], and inserting the new parameter λw:

$$\begin{aligned} s\_1 &= \{ (\mathbf{1}^+, \mathbf{1}^+), (\mathbf{1}^+, \mathbf{1}^-), (\mathbf{1}^+, \mathbf{0}^-) \}, \\ s\_2 &= \{ (\mathbf{1}^-, \mathbf{1}^-), (\mathbf{1}^-, \mathbf{0}^-), (\mathbf{0}^+, \mathbf{1}^+) \}, \\ s\_3 &= \{ (\mathbf{0}^-, \mathbf{0}^-), (\mathbf{0}^+, \mathbf{1}^-), (\mathbf{0}^+, \mathbf{0}^-), (\mathbf{0}^+, \mathbf{0}^+) \} \end{aligned}$$

$$g\_{w}^{(l)} = \begin{cases} 0 & \text{if } \left\{ U\_{c\_{j}w}^{(l)}, j = 1, 2 \right\} \in s\_{1} \\ 1 & \text{if } \left\{ U\_{c\_{j}w}^{(l)}, j = 1, 2 \right\} \in s\_{2} \\ 2 & \text{if } \left\{ U\_{c\_{j}w}^{(l)}, j = 1, 2 \right\} \in s\_{3} \end{cases}$$
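The subset test above can be coded directly. In the sketch below (illustrative; the names are hypothetical), a GCN-to-VN message is the pair (main bit, power bit) from Section 5.2.1.1, and the pair of messages arriving at a VN is treated as unordered:

```python
# Two-bit GCN-to-VN messages: (main bit, power bit) -> reliability level,
# per the encoding 0+(01), 0-(00), 1-(10), 1+(11).
LEVELS = {(0, 1): "0+", (0, 0): "0-", (1, 0): "1-", (1, 1): "1+"}

def _norm(pair):
    # Incoming message pairs are unordered (column weight two).
    return tuple(sorted(pair))

S1 = {_norm(p) for p in [("1+", "1+"), ("1+", "1-"), ("1+", "0-")]}
S2 = {_norm(p) for p in [("1-", "1-"), ("1-", "0-"), ("0+", "1+")]}
S3 = {_norm(p) for p in [("0-", "0-"), ("0+", "1-"), ("0+", "0-"), ("0+", "0+")]}

def vn_state(msg_a, msg_b):
    """Map the two incoming messages of VN w to its reliability state g_w."""
    pair = _norm((LEVELS[msg_a], LEVELS[msg_b]))
    if pair in S1:
        return 0
    if pair in S2:
        return 1
    return 2  # all 10 unordered pairs are covered by s1, s2, s3
```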


By using the previously mentioned rules, the messages are renewed, and the algorithm proceeds until the overall syndrome output is zero or a predefined number of iterations is reached.

It is worth emphasizing that the GCN decoder does not output an actual decoded word to its connected VNs. Instead, it sends a reliability signal (taking one of four possible values) represented by the two bits U_{c,wi}^(l)(1) and U_{c,wi}^(l)(2). On the other hand, the VN has to take a decision (flip or keep) based on its incoming messages and its initial reliability λw (MR or LR). The function of the VN state counter γ_w^(l) is to count the number of consecutive iterations with the same state g_w^(l) (at this VN) up to the present iteration (l).

• Uð Þ<sup>l</sup>

Figure 16.

Figure 15.

cwi <sup>¼</sup> <sup>U</sup>ð Þ<sup>l</sup> cwi ð Þ<sup>1</sup> <sup>U</sup>ð Þ<sup>l</sup> cwi ð Þ2

, 0�, 1�, or 1<sup>+</sup>

Block diagram of the horizontal process of the classification-based decoder.

DOI: http://dx.doi.org/10.5772/intechopen.88199

Generalized Low-Density Parity-Check Codes: Construction and Decoding Algorithms

Block diagram of the vertical process of the classification-based decoder.

which is represented by two bits Uð Þ<sup>l</sup>

• The function of VN state counter γ

avoided for fast decoding purposes.

bits (variable nodes) into two classes.

21

values 0<sup>+</sup>

or keep.

h i is the outgoing message from GCN <sup>c</sup> to VN wi consisting of two bits representing the reliability level (one of four possible

• The GCN decoder does not output actual decoded words to its connected VNs. Instead, it sends a reliability signal (taking a value of four possible values)

VN has to take a decision based on its incoming messages. The decision is flip

ð Þ<sup>1</sup> and <sup>U</sup>ð Þ<sup>l</sup>

cwi

cwi

ð Þl w

iterations with the same state (at this VN) up to this present iteration ð Þl .

• The algorithm manages without the greater portion of the overhead of the algorithm in [23] which was specially located in the horizontal (GCN) process.

Figure 17 shows the GLDPC BER performance using the (32,26,4) extended Hamming subcode by this decoding with respect to the bit-flipping algorithms in [7, 22] and [23]. It is noticed that this algorithm surpasses the other ones at the cost of a slight increase in computational complexity resulting from the comparison operations made at the initial classification step. It is also noticed that as N

The predefined number of iterations (20) is found to be very sufficient for good

performance as the additional iterations beyond this limit have no considerable difference in the performance and latency in the decoding process which should be

Not similar to the conventional GLDPC HDD BF decoding, the received sequence soft values are utilized to make appropriate classification of the received

increases, a slow improvement in the performance is achieved.

cjw is the outgoing message from

ð Þ2 . On the other hand, the

� � is to count the number of consecutive

). On the other hand, Vð Þ<sup>l</sup>

VN w to GCN cj containing just the decoded symbol bit value.

Figures 14–16 show the block diagrams of the overall decoder, horizontal process, and vertical process, respectively.

#### 5.2.2 Important notes on the algorithm


Figure 14.

Block diagram of the overall classification-based decoder.

Generalized Low-Density Parity-Check Codes: Construction and Decoding Algorithms DOI: http://dx.doi.org/10.5772/intechopen.88199

#### Figure 15.

gð Þl w ¼

• The VN w with least reliability, gð Þ<sup>l</sup>

• For gð Þ<sup>l</sup>

Coding Theory

γ ð Þl

γ ð Þl reliability state gð Þ<sup>l</sup>

will be flipped.

number of iterations.

VN) up to this present iteration (l).

cess, and vertical process, respectively.

<sup>c</sup>1<sup>w</sup> <sup>¼</sup> <sup>V</sup>ð Þ<sup>l</sup>

Block diagram of the overall classification-based decoder.

<sup>c</sup>2<sup>w</sup>).

• The initial incoming message at GCN c Vð Þ <sup>0</sup>

5.2.2 Important notes on the algorithm

same (i.e., Vð Þ<sup>l</sup>

bit Vð Þ <sup>0</sup>

Figure 14.

20

yw ∈f gg 0; 1 .

0 if Uð Þ<sup>l</sup>

8 >>>><

>>>>:

number of previous successive repetitions of this state gð Þ<sup>l</sup>

• Else, the VN is assumed a correct bit and kept without flipping.

four possible values) which is represented by two bits Uð Þ<sup>l</sup>

1 if Uð Þ<sup>l</sup>

2 if Uð Þ<sup>l</sup>

<sup>w</sup> ¼ 1, VN state counter is employed to compare the VN present in

<sup>w</sup> ¼ 2 and, in the same time, λ<sup>w</sup> ¼ 1 (i.e., it is considered as LR bit), the VN

By using the previously mentioned rules, the messages are renewed, and the algorithm proceeds until a zero overall syndrome output or it reaches a predefined

It is worth emphasizing that the GCN decoder does not output actual decoded word to its connected VNs. Instead, it sends a reliability signal (taking a value of

Figures 14–16 show the block diagrams of the overall decoder, horizontal pro-

• The output messages from VN w to its two connected GCNs cj, j ¼ 1, 2 are the

<sup>w</sup> as the overall demodulated binary sequence <sup>Y</sup> <sup>¼</sup> yw; <sup>w</sup> <sup>¼</sup> <sup>1</sup>; <sup>2</sup>; <sup>⋯</sup>; <sup>N</sup>; �

other hand, the VN has to take a decision (flip or keep) based on its incoming messages and its initial reliability λ<sup>w</sup> (MR or LR). The function of VN state counter

<sup>w</sup> is to count a number of consecutive iterations with the same state gð Þ<sup>l</sup>

<sup>w</sup> with one of the previous iterations gð Þ <sup>l</sup>�<sup>1</sup> <sup>w</sup> . Let γ

cjw; j ¼ 1; 2 n o

cjw; j ¼ 1; 2 n o

cjw; j ¼ 1; 2 n o ∈s<sup>1</sup>

∈s<sup>2</sup>

∈s<sup>3</sup>

c, wi

ð Þl <sup>w</sup> be the

<sup>w</sup> ¼ 1 with

<sup>w</sup> . If gð Þ<sup>l</sup>

ð Þ<sup>1</sup> and <sup>U</sup>ð Þ<sup>l</sup>

cjw ¼ yw, j ¼ 1, 2 is the demodulated

c, wi

ð Þ2 . On the

<sup>w</sup> (at this

<sup>w</sup> ¼ 0, needs flipping immediately.

Block diagram of the horizontal process of the classification-based decoder.

#### Figure 16.

Block diagram of the vertical process of the classification-based decoder.


Figure 17 shows the GLDPC BER performance using the (32,26,4) extended Hamming subcode by this decoding with respect to the bit-flipping algorithms in [7, 22] and [23]. It is noticed that this algorithm surpasses the other ones at the cost of a slight increase in computational complexity resulting from the comparison operations made at the initial classification step. It is also noticed that as N increases, a slow improvement in the performance is achieved.

The predefined number of iterations (20) is found to be very sufficient for good performance as the additional iterations beyond this limit have no considerable difference in the performance and latency in the decoding process which should be avoided for fast decoding purposes.

Not similar to the conventional GLDPC HDD BF decoding, the received sequence soft values are utilized to make appropriate classification of the received bits (variable nodes) into two classes.

VNs as followed by the MP algorithm in an iterative method to obtain a final decision after a certain number of iterations or a syndrome condition should be

Table 5. The actions of the proposed algorithm.

| Syndromes of the demodulated vector Sa = Ya · Hbch | Number of contained errors (e) | Action taken (method of estimating the decision Da) | Calculation model of extrinsic information r′a,i |
|---|---|---|---|
| S¹a = S³a = 0 | 0 | Nothing (the demodulated vector is the decision codeword) | r′a,i = β′ × di |
| (S¹a)³ ⊕ S³a = 0 | 1 | Apply HDD (Berlekamp-Massey algorithm) | r′a,i = β′ × di |
| Else | 2 | Apply HDD (Berlekamp-Massey algorithm) | r′a,i = β′ × di |
| S¹a = 0, S³a ≠ 0 | >2 | Apply TP-reduced chase algorithm [12] | Pyndiah model [15] |

Figure 17. Simulated BER curves of (N, 2, 32) GLDPC codes.

The algorithm not only achieves a better error performance but also requires fewer iterations than the other competing algorithms. In terms of the impact of the soft information (from the AWGN channel) on the coding gain of the GLDPC code, the algorithm exhibits a considerable performance-to-complexity trade-off. The algorithm can be adapted to handle generalized and more robust subcodes, with the capability to correct more errors, to improve the performance.

As discussed in [24], the computational complexity is given in terms of the average number of executed operations for this algorithm against the comparable TSSA-BF algorithm. It is noticed that the complexity of this decoder is reduced by more than 60%.

#### 6. Simplified SDD algorithm over AWGN channels

The algorithm in [26] aims to invoke the chase SD decoders as little as possible, to lower their complexity and expedite the decoding procedure. This algorithm is a variant of a previous approach [27] for lowering the complexity of turbo product codes (TPC) with multiple-error-correction BCH subcodes, where the chase decoder at every row or column input sequence of the product code attempts to decrease the number of HDD operations executed on the test patterns (TPs) produced in the chase decoder. The algorithm, explained below, benefits from the algorithm in [27] for a further reduction in complexity.

The introduced algorithm is an MP method for decoding GLDPC codes, with the chase-II algorithm operated as a posteriori probability decoding at the GCNs. It uses extended double-error-correcting BCH codes (with higher error-correcting capability) as subcodes to obtain a better performance. For simplicity, all GCNs are represented with the same eBCH code of parameters (n, k, d). The overall block diagram of this algorithm is depicted in Figure 18.

As discussed before in Section 3, and using the same notations and considerations, the soft-output value of every decoded symbol of the subcode should be calculated, by Eqs. (3) or (4), to be sent back on the connected edges to the GLDPC VNs, as followed by the MP algorithm in an iterative manner, to obtain a final decision after a certain number of iterations or once a syndrome condition is satisfied.

With respect to any given GCN c, the chase-II-based SISO decoder produces 2<sup>p</sup> TPs by perturbing the p LRPs in the demodulated word of length n (the subcode word). Therefore, 2<sup>p</sup> HDDs would normally be performed to obtain a decided codeword. To reduce this cost, the algorithm first gets the syndromes of Yc.
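For a sense of scale, the standard chase decoder's HDD budget can be tallied directly. The sketch below (illustrative, not from the chapter) counts M = NJ/n GCNs, each running 2^p HDDs per iteration; it reproduces the standard-algorithm counts reported later in Table 6:

```python
def standard_chase_hdds(N, J, n, p, iterations):
    # M = N*J/n generalized check nodes, each decoding 2**p test patterns
    # per iteration with one HDD each.
    m = N * J // n
    return m * (2 ** p) * iterations
```

For N = 4096, J = 2, n = 64, and five iterations this gives 5120, 10,240, and 20,480 HDDs for p = 3, 4, 5, respectively.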

For extended BCH2 (double-error correction), the algorithm computes two syndromes S_c^1 and S_c^3 as follows:

$$\begin{aligned} S\_c^1 &= y\_{c2} \oplus y\_{c3}x \oplus \cdots \oplus y\_{c,n-1}x^{n-3} \oplus y\_{cn}x^{n-2} \, \big|\_{x=\alpha} \\ S\_c^3 &= y\_{c2} \oplus y\_{c3}x \oplus \cdots \oplus y\_{c,n-1}x^{n-3} \oplus y\_{cn}x^{n-2} \, \big|\_{x=\alpha^3} \end{aligned} \tag{14}$$

where α is the primitive element of GF(2<sup>m</sup>) that generates the BCH code polynomial.
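The syndrome evaluations of Eq. (14), and the error-count tests they enable, can be sketched in code. The block below is an illustrative implementation over GF(2^6) with primitive polynomial x^6 + x + 1 (an assumed polynomial; the chapter does not state which one is used), operating on the 63-bit BCH part of the word with the extended parity bit left out. The classification follows the standard double-error BCH syndrome tests:

```python
# GF(2^6) log/antilog tables built from the (assumed) primitive
# polynomial x^6 + x + 1, i.e., 0x43.
EXP, LOG = [0] * 126, [0] * 64
v = 1
for i in range(63):
    EXP[i], LOG[v] = v, i
    v <<= 1
    if v & 0x40:
        v ^= 0x43
for i in range(63, 126):
    EXP[i] = EXP[i - 63]

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[(LOG[a] + LOG[b]) % 63]

def syndromes(y):
    """S1 = y(alpha), S3 = y(alpha^3) for a received 63-bit BCH word y."""
    s1 = s3 = 0
    for i, bit in enumerate(y):
        if bit:
            s1 ^= EXP[i % 63]
            s3 ^= EXP[(3 * i) % 63]
    return s1, s3

def estimate_errors(s1, s3):
    """Classify the error count e from (S1, S3)."""
    if s1 == 0 and s3 == 0:
        return 0        # demodulated word is already a codeword
    if s1 != 0 and s3 == gf_mul(s1, gf_mul(s1, s1)):
        return 1        # single error: S3 = S1^3
    if s1 == 0:
        return 3        # ">2" errors detected: hand off to TP-reduced chase
    return 2            # double error: run Berlekamp-Massey HDD
```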

According to the values of the syndromes as illustrated in Table 5, the algorithm estimates the number of errors contained in the sequence.

If there are no errors (e = 0), it is highly likely that the demodulated word is the valid transmitted one, and the decoder will not do its task. If 0 < e ≤ 2, the algorithm may execute the HDD (Berlekamp-Massey algorithm) and output the decoding decision. In these two preceding cases, the soft-output values can be estimated, as the decision is highly probable to be correct, as follows:

Figure 18. The block diagram of the lowered-complexity chase-based decoding algorithm.

$$r'\_{c,i} = \beta \times d\_{c,i} \text{ with } \beta \ge 0$$



where β is chosen to evolve with the decoding iterations, β(l) = [0.4, 0.6, 0.8, 1, 1, 1, ⋯].
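The evolving factor can be sketched as follows (illustrative; the function names are hypothetical):

```python
def beta(l):
    # beta(l) = [0.4, 0.6, 0.8, 1, 1, 1, ...]: grows over the first
    # iterations, then saturates at 1.
    schedule = (0.4, 0.6, 0.8)
    return schedule[l] if l < len(schedule) else 1.0

def extrinsic(d, l):
    # r'_{c,i} = beta(l) * d_{c,i} for the high-confidence cases above
    return [beta(l) * d_i for d_i in d]
```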

If e > 2, the chase algorithm is needed to extract a decision codeword, but it need not decode the complete list of 2<sup>p</sup> test patterns (TPs). The proposed algorithm in this case benefits from the lowered-complexity TP-reduced algorithm in [27]. The amount of reduction in HDDs of the algorithm compared to the standard one is listed in Table 6.

The computational complexity of this algorithm is estimated by the number of hard-decision decoding processes (Berlekamp-Massey algorithm) employed at the GCNs. The (64,51) eBCH subcode is chosen, with double-error-correction capability, to exploit the multiple calculated syndromes while keeping a moderate code rate (R ≈ 0.6). Therefore, to keep this rate, only GLDPC codes with column weight (j = 2) are considered. As shown in Figure 19, the number of HDDs in the decoder is calculated for two numbers of LRPs (p = 3, p = 4) and up to five iterations


(Im = 5). For clarification, the number of HDDs is normalized to that of the conventional chase decoder. It is shown that a considerable reduction occurs, especially beyond Eb/No = 2 dB.

The results show a significant lowering in the soft decoding operations executed at the GCNs compared to conventional chase decoders, with little loss in BER performance. This scheme is highly desirable in low-error-rate applications such as optical communication systems.

Table 6. The reduction of HDDs in the lowered-complexity chase-based decoding algorithm (SNR = 2 dB at Imax = 5).

| BCH code (n,k) | N | J | Code rate | No. of LRPs (p) | Number of HDDs in standard alg. [7] | Avg. number of HDDs in proposed alg. | Percentage of complexity reduction (%) |
|---|---|---|---|---|---|---|---|
| eBCH2 (64,51) | 4096 | 2 | 0.6 | 3 | 5120 | 3072 | 60 |
| eBCH2 (64,51) | 4096 | 2 | 0.6 | 4 | 10,240 | 5734 | 56 |
| eBCH3 (64,45) | 4096 | 2 | 0.41 | 4 | 10,240 | 5232 | 51.1 |
| eBCH3 (64,45) | 4096 | 2 | 0.41 | 5 | 20,480 | 9011 | 44 |

Figure 19. Comparison of computational complexity of less-complex and conventional SISO decoding algorithms for decoding (64,51) eBCH-based GLDPC codes with length N = 4096 for various values of p and Im.

#### Author details

Sherif Elsanadily EAEAT, Ministry of Military Production, Cairo, Egypt

\*Address all correspondence to: sherif.elsanadily@eaeat.edu.eg

© 2019 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/ by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### References

[1] Tanner R. A recursive approach to low complexity codes. IEEE Transactions on Information Theory. 1981;27:533-547

[2] Lentmaier M, Zigangirov K. On generalized low-density parity check codes based on Hamming component codes. IEEE Communications Letters. 1999;3:248-250

[3] Liva G, Ryan W, Chiani M. Quasicyclic generalized LDPC codes with low error floors. IEEE Transactions on Communications. 2008;56:49-57

[4] Wang Y, Fossorier M. Doubly generalized LDPC codes over the AWGN channel. IEEE Transactions on Communications. 2009; 57:1312-1319

[5] Bahl L, Cocke J, Jelinek F, Raviv J. Optimal decoding of linear codes for minimizing symbol error rate. IEEE Transactions on Information Theory. 1974;20:284-287

[6] Boutros J, Pothier O, Zemor G. Generalized low density (tanner) codes. In: Proc. IEEE. Int. Conf. Commun.; 1999; vol. 1, pp. 441-445

[7] Miladinovic N, Fossorier M. Generalized LDPC codes with Reed-Solomon and BCH codes as component codes for binary channels. In: Proc. IEEE Global Telecommun. Conf.; St. Louis, MO; 2005; vol. 3; p. 6

[8] Chen J, Tanner RM. A hybrid coding scheme for the Gilbert-Elliott channel. IEEE Transactions on Communications. 2006;54:1787-1796

[9] Abu-Surra S, Liva G, Ryan WE. Lowfloor Tanner codes via Hamming-node or RSCC-node doping. In: Proc. the 16th international conf. on Applied Algebra, Algebraic Algorithms and Error-Correcting Codes; 2006; pp. 245-254

[10] Yue G, Ping L, Wang X. Generalized low-density parity-check codes based on Hadamard constraints. IEEE Transactions on Information Theory. 2007;53:1058-1079


[11] Hirst S, Honary B. Application of efficient chase algorithm in decoding of generalized low-density parity-check codes. IEEE Communications Letters. 2002;6:385-387

[12] Djordjevic I, Milenkovic O, Vasic B. Generalized low-density parity-check codes for optical communication systems. IEEE: Journal of Lightwave Technology. 2005;23:1939-1946

[13] Guan R, Zhang L. Hybrid Hamming GLDPC codes over the binary erasure channel. In: The 2017 11th IEEE International Conference on Anticounterfeiting, Security, and Identification (ASID); 2017; pp. 130-133

[14] Olmos PM, Mitchell DGM, Costello DJ. Analyzing the finite-length performance of generalized ldpc codes. In: 2015 IEEE International Symposium on Information Theory (ISIT); 2015; pp. 2683-2687

[15] Yu Y, Han Y, Zhang L. Hamming-GLDPC codes in BEC and AWGN channel. In: The 6th International Conference on Wireless, Mobile and Multi-Media (ICWMMN 2015); 2015; pp. 103-106

[16] Beemer A, Habib S, Kelley CA, Kliewer J. A generalized algebraic approach to optimizing SC-LDPC codes. In: The 2017 55th Annual Allerton Conference on Communication, Control, and Computing (Allerton); 2017; pp. 672-679

[17] Pothier O, Brunel L, Boutros J. A low complexity FEC scheme based on the intersection of interleaved block codes. In: IEEE 49th Vehicular Technology Conf. (VTC 99); Houston; 1999; vol. 1; pp. 274-278


[18] Chase D. A class of algorithms for decoding block codes with channel measurement information. IEEE Transactions on Information Theory. 1972;18:170-182

[19] Pyndiah RM. Near-optimum decoding of product codes: Block turbo codes. IEEE Transactions on Communications. 1998;46:1003-1010

[20] Pyndiah R, Glavieux A, Picart A, Jacq S. Near optimum decoding of products codes. In: Proc. IEEE GLOBECOM94 Conf., vol. 1/3; San Francisco, CA; 1994; pp. 339-343

[21] Hirst S, Honary B, Markarian G. Fast chase algorithm with application in turbo decoding. IEEE Transactions on Communications. 2001;49:1693-1699

[22] Hirst S, Honary B. Decoding of generalized low-density paritycheck codes using weighted bit-flip voting. IEEE Proceedings Communications. 2002;149:1-5

[23] Elsanadily S, Mahran A, Elghandour O. Two-side state-aided bit-flipping decoding of generalized low density parity check codes. IEEE Communications Letters. 2017;21: 2122-2125

[24] Elsanadily S, Mahran A, Elghandour O. Classification-based algorithm for bit-flipping decoding of GLDPC codes over AWGN channels. IEEE Communications Letters. 2018;22: 1520-1523

[25] Ryan WE, Lin S. Channel Codes: Classical and Modern. New York: Cambridge University Press; 2009

[26] Elsanadily S, Mahran A, Elghandour O. Lowered-complexity soft decoding of generalized LDPC codes over AWGN channels. In: The IEEE 2017 12th International Conference on Computer Engineering and Systems (ICCES); 2017; pp. 320-324

[27] Chen GT, Cao L, Yu L, Chen CW. Test-pattern-reduced decoding for turbo product codes with multi-error-correcting EBCH codes. IEEE Transactions on Communications. 2009;57:307-310


#### Chapter 2

## Polynomials in Error Detection and Correction in Data Communication System

Charanarur Panem, Vinaya Gad and Rajendra S. Gad

#### Abstract

The chapter gives an overview of the various types of errors encountered in a communication system. It discusses the various error detection and error correction codes. The role of polynomials in error detection and error correction is discussed in detail with the architecture for practical implementation of the codes in a communication channel.

Keywords: error detection, error correction, burst error, channel coding, channel decoding, CRC, LDPC

#### 1. Introduction

Different types of errors are encountered during data transmission because of physical defects in the communication medium as well as environmental interference. Environmental interference and physical defects in the communication medium can cause random bit errors during data transmission. Error coding is a method of detecting and correcting these errors to ensure that the information arrives error-free when it is sent from source to destination. Error coding is used for error-free communication in primary and secondary memory devices such as RAM, ROM, hard disks, CDs, and DVDs, as well as in digital data communication systems such as network, satellite, cellular, and deep-space communication.

#### 1.1 Need for error coding

Data transmission errors occur in terrestrial mobile communication due to multipath fading, diffractions or scattering in cellular wireless communications, low signal-to-noise ratio, and limited transmitted power and energy resources in satellite communication [1].

Error coding uses mathematical formulae to encode data bits at the source into longer bit words for transmission. The "code word" is then decoded at the destination to retrieve the information. The extra bits in the code word provide redundancy; at the destination, the decoder uses them to determine whether the communication channel introduced any errors, and some schemes can even correct the errors so that there is no need to resend the data.

There are two ways to deal with errors. One way is to include enough redundant information along with the data to be transmitted to enable the receiver to deduce what the transmitted information must have been. The other way is to include only enough redundancy to allow the receiver to detect that an error has occurred, but not which one; the receiver then requests retransmission. The first method uses Error-Correcting Codes and the second uses Error-Detecting Codes.

Consider a frame having m data bits (the message to be sent) and r redundant bits (used for checking). The total number of bits in the frame will be n = m + r, which is referred to as an n-bit code word. Consider two code words, 11001100 and 11001111; perform an exclusive OR and then count the number of 1's in the result. The number of bit positions in which two code words differ is called the Hamming distance. If the code words are Hamming distance d apart, it will require d single-bit errors to convert one code word into the other. The error detection and error correction properties of a code depend on its Hamming distance.

• A distance (d + 1) code is required to detect d errors because d single-bit errors cannot change a valid codeword into another valid codeword. Thus the error is detected at the receiver.


• A distance (2d + 1) code is required to correct d errors because the codewords are so far apart that a word received with d errors is still closer to the transmitted codeword than to any other valid codeword, and thus the errors can be corrected.
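The XOR-and-count computation and the two distance bounds above can be sketched in Python. This is a minimal illustration; the 3-bit repetition code (minimum distance 3) serves as the example code, and the helper names are illustrative rather than from the chapter:

```python
def hamming_distance(a: int, b: int) -> int:
    """Number of bit positions in which two code words differ: XOR, then count 1's."""
    return bin(a ^ b).count("1")

# The two example code words 11001100 and 11001111 differ in two positions.
print(hamming_distance(0b11001100, 0b11001111))  # 2

# 3-bit repetition code: valid code words 000 and 111, minimum distance 3.
CODEWORDS = {0b000, 0b111}

# Distance 3 = d + 1 with d = 2: a 1- or 2-bit error leaves an invalid word,
# so the error is detected.
print(0b011 in CODEWORDS)  # False -> error detected

# Distance 3 = 2d + 1 with d = 1: a single-bit error stays closest to the
# original code word, so decoding to the nearest code word corrects it.
received = 0b101  # 111 with one bit flipped
nearest = min(CODEWORDS, key=lambda c: hamming_distance(received, c))
print(nearest == 0b111)  # True -> error corrected
```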

#### 1.2 Types of errors in a communication channel

When the data travels from the sender to receiver, different types of errors are encountered in the communication channel [2].

#### 1.2.1 Noise or electrical distortion

When data travel through a conductor, they are subject to various influences, such as sound waves, electrical signals, and noise (for example, electricity from motors, power switches, or impulse noise), any of which can corrupt or destroy the data. Old conductors are unable to handle these types of interference and heavy data traffic; hence, data transmission suffers.

#### 1.2.2 Burst errors

Burst errors are large clumps of bit errors that occur when a number of interconnected bit errors appear at many places in the data stream. These errors may occur because of some misplacement in the data chain, and a burst may contain several hundred or even a thousand bit errors.

#### 1.2.3 Random bit errors

Data sent on a communication channel consists of thousands of data bits, sent in a particular order or sequence. However, there is a probability that the bits may be rearranged by accident in the transmission process. These types of errors are known as random bit errors.

#### 1.2.4 Cross talk and echo

Cross talk occurs when the transmission cable through which the data is transmitted is surrounded by other transmission lines. The data and code words traveling in a neighboring line cross over and get superimposed on the transmission cable. Echo is similar to cross talk; however, it occurs in a single transmission line through which multiple computer ports are sending data at the same time. The data from one port echoes into another, thus resulting in data corruption (Figure 1).

Figure 1. Wireless communication system with channel coding.

#### 2. Error detecting codes

Error detection uses additional bits in the message to be transmitted. This adds redundancy and facilitates detection and correction of errors. Popular techniques of error detection are:

• Simple parity check.

• Two-dimensional parity check.

• Checksum.

• Cyclic redundancy check.
#### 2.1 Simple parity checking or one-dimension parity check

This technique is the most common and cheapest mechanism for error detection. The data unit is appended with a redundant bit known as the parity bit. A parity bit generator is used, which adds a 1 to the block of data if it contains an odd number of 1's, and a 0 if it contains an even number of 1's. At the receiver end, the parity of the received block of data is computed and compared with the received parity bit. This scheme makes the total number of 1's even; hence it is known as even parity checking. Similarly, you can use an odd number of 1's, known as odd parity checking.
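The generator/checker pair just described can be sketched as follows (a minimal even-parity illustration; the function names are not from the chapter):

```python
def even_parity_bit(bits: str) -> str:
    """Generator: 1 if the data block holds an odd number of 1's, else 0."""
    return "1" if bits.count("1") % 2 else "0"

def parity_check(frame: str) -> bool:
    """Receiver: the whole frame (data + parity bit) must hold an even number of 1's."""
    return frame.count("1") % 2 == 0

data = "1100110"                      # four 1's -> parity bit 0
frame = data + even_parity_bit(data)
print(parity_check(frame))            # True: frame arrives intact
print(parity_check("0" + frame[1:]))  # False: a single flipped bit is detected
```

Note that flipping any two bits of the frame would restore even parity, which is exactly the one-dimensional scheme's blind spot discussed next.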

#### 2.2 Two-dimension parity check

Two-dimensional parity check improves the performance. Here, the data bits are organized in the form of a table; parity bits are computed for each row as well as each column and are sent along with the data. At the receiver, the parity is recomputed for the received data and compared with the received parity bits.

#### 2.2.1 Performance

Two-dimensional parity checking is mainly used to detect burst errors. It detects a burst error of more than n bits with a high probability. However, this mechanism will not be able to detect the errors if two bits in one data unit are damaged together with two bits in the same positions of another data unit, since both the row and the column parities remain unchanged. For example, if 11000110 is changed to 01000100 and 10101010 is changed to 00101000, the errors will not be detected.
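The undetected four-bit case above can be verified directly (a sketch; `parities` is an illustrative helper, not from the chapter):

```python
def parities(rows):
    """Even parity of each row and each column of a table of equal-length bit strings."""
    row_par = [r.count("1") % 2 for r in rows]
    col_par = [sum(int(r[c]) for r in rows) % 2 for c in range(len(rows[0]))]
    return row_par, col_par

sent     = ["11000110", "10101010"]
received = ["01000100", "00101000"]  # bits 0 and 6 flipped in both rows

# Row and column parities are identical -> the errors go undetected.
print(parities(sent) == parities(received))  # True
```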

#### 2.3 Checksum

This scheme divides the data bits to be sent into k segments, each consisting of m bits. All the segments are added using 1's complement arithmetic. The checksum is obtained by complementing the sum, and it is transmitted together with the data segments. At the receiver end, 1's complement arithmetic is again used to add all received segments, and the resulting sum is complemented. The receiver accepts the data if the result of complementing is zero.
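The sender/receiver procedure above can be sketched with 8-bit segments (a minimal illustration; the function names and sample segments are made up for the example):

```python
def ones_complement_sum(segments, m=8):
    """Add m-bit segments with end-around carry (1's complement arithmetic)."""
    total = 0
    mask = (1 << m) - 1
    for s in segments:
        total += s
        total = (total & mask) + (total >> m)  # fold the carry back in
    return total

def checksum(segments, m=8):
    """Sender: complement of the 1's complement sum of all k segments."""
    return ~ones_complement_sum(segments, m) & ((1 << m) - 1)

data = [0b10110011, 0b01101100, 0b11100001]
cs = checksum(data)
# Receiver: complementing the sum of all segments plus the checksum must give zero.
print(~ones_complement_sum(data + [cs]) & 0xFF)  # 0 -> frame accepted
```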


#### 2.3.1 Performance

The checksum mechanism detects all errors involving an odd number of bits. It also detects most errors involving an even number of bits.

#### 2.4 Cyclic redundancy check (CRC)

Cyclic redundancy check is the most powerful and easiest to implement error detection mechanism. Checksum uses addition, whereas CRC is based on binary division. In CRC, the data unit is appended with a sequence of redundant bits, called cyclic redundancy check bits, so that the resulting data unit is exactly divisible by a second, predetermined binary number. At the receiver end, the incoming data unit is divided by the same predetermined binary number. If the remainder is zero, the data unit is assumed to be error-free and is accepted. A nonzero remainder indicates that the data unit encountered an error in transit, and it is therefore rejected at the receiver. The generalized technique to generate the CRC bits is explained below:

Consider a k-bit message to be transmitted. The transmitter generates an r-bit sequence called the FCS (frame check sequence). These r bits are appended to the k-bit message, so that (k + r) bits are transmitted. The r-bit FCS is generated by dividing the k-bit message, appended with r zeros, by a predetermined number. This number is (r + 1) bits in length and can be considered as the coefficients of a polynomial, called the generator polynomial. The r-bit FCS is the remainder of this binary division. Once the (k + r)-bit frame is received, it is divided by the same predetermined number. If the remainder is zero, it means there was no error, and the frame is accepted by the receiver.
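The FCS generation just described can be sketched with modulo-2 long division. This is a minimal illustration; the 4-bit generator 1011 (i.e., x³ + x + 1, so r = 3), the sample message, and the function name are assumptions for the example, not values from the chapter:

```python
def crc_fcs(message: str, generator: str) -> str:
    """r-bit FCS: remainder of the modulo-2 division of (message + r zeros)
    by the (r + 1)-bit generator polynomial."""
    r = len(generator) - 1
    dividend = [int(b) for b in message + "0" * r]
    gen = [int(b) for b in generator]
    for i in range(len(message)):
        if dividend[i]:                       # leading bit is 1: XOR in the generator
            for j in range(len(gen)):
                dividend[i + j] ^= gen[j]
    return "".join(map(str, dividend[-r:]))   # the remainder is the FCS

msg = "11010011101100"              # k = 14 message bits
fcs = crc_fcs(msg, "1011")          # generator x^3 + x + 1 -> r = 3
frame = msg + fcs                   # k + r bits are transmitted
# Receiver: dividing the received frame by the same generator leaves remainder 0.
print(fcs, crc_fcs(frame, "1011"))  # 100 000
```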

Operations at both the sender and receiver end are shown in Figure 2.

CRC is widely used in data communications, data storage, and data compression as a powerful method for detecting errors in the data. It is also used in testing of integrated circuits and the detection of logical faults. A cyclic redundancy code is a non-secure hash function designed to detect accidental changes to raw computer data. CRCs are popular because they are simple to implement in binary hardware, are easy to analyze mathematically, and are particularly good at detecting common

Polynomials in Error Detection and Correction in Data Communication System DOI: http://dx.doi.org/10.5772/intechopen.86160

Figure 2. Basic scheme for cyclic redundancy checking.

sent along with the data. The parity is computed for the received data and compared

Two-dimension parity checking is mainly used to detect burst errors. It detects a burst error of more than n bits with a high probability. However, this mechanism will not be able to detect the errors if two bits in one data unit are damaged. Example if 11000110 is changed to 01000100 and 10101010 is changed to

This scheme divides the data bits to be sent into k segments. Each segment consists of m bits. All the segments are added using 1's complement arithmetic. Checksum is obtained by complementing the sum, and the data segments are transmitted together. At the receiver end, again 1's complement arithmetic is used to add all received segments. The sum generated is complemented. The receiver

Coding Theory

### 2.3 Checksum

Operations at both the sender and receiver ends are shown in Figure 2. The receiver adds the received data bits and the checksum, complements the sum, and accepts the data if the result of complementing is zero.

#### 2.3.1 Performance

The checksum mechanism detects all errors consisting of an odd number of bits. It also detects most errors having an even number of bits. However, if compensating bit changes leave the sum unchanged (for example, a segment received as 00101000), the error will not be detected.

### 2.4 Cyclic redundancy check (CRC)

The cyclic redundancy check is the most powerful, yet easy to implement, error detection mechanism. Whereas the checksum uses addition, CRC is based on binary division. In CRC, the data unit is appended at the end by a sequence of redundant bits, called cyclic redundancy check bits. The resulting data unit is exactly divisible by a second, predetermined binary number. At the receiver end, the incoming data unit is divided by the same predetermined binary number. If the remainder is zero, the data unit is assumed to be error-free and is accepted. A nonzero remainder indicates that the data unit encountered an error in transit, and it is therefore rejected at the receiver.

The generalized technique to generate the CRC bits is as follows. Consider a k-bit message to be transmitted. The transmitter generates an r-bit sequence called the frame check sequence (FCS). These r bits are appended to the k-bit message, so that (k + r) bits are transmitted. The r-bit FCS is generated by dividing the k-bit message, appended by r zeros, by a predetermined number. This number is (r + 1) bits long, and its bits can be considered the coefficients of a polynomial, called the generator polynomial. The r-bit FCS is the remainder of this binary division. Once the (k + r)-bit frame is received, it is divided by the same predetermined number; if the remainder is zero, there was no error, and the frame is accepted by the receiver.

CRC is widely used in data communications, data storage, and data compression as a powerful method for detecting errors in the data. It is also used in the testing of integrated circuits and the detection of logical faults. A cyclic redundancy code is a non-secure hash function designed to detect accidental changes to raw computer data. CRCs are popular because they are simple to implement in binary hardware, are easy to analyze mathematically, and are particularly good at detecting common errors caused by noise in transmission channels. CRC-32 guarantees a 99.999% probability of error detection at the receiver end; hence, this CRC is often used for Gigabit Ethernet packets [3].
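The FCS generation and checking procedure above can be sketched with plain modulo-2 long division on bit lists. The 6-bit message and the divisor for x<sup>3</sup> + x + 1 below are illustrative choices, not values taken from the chapter.

```python
def mod2_rem(bits, generator):
    """Remainder of modulo-2 (XOR) long division of bits by generator."""
    bits = list(bits)
    r = len(generator) - 1          # number of FCS bits
    for i in range(len(bits) - r):
        if bits[i]:                 # XOR the divisor in whenever the leading bit is 1
            for j, g in enumerate(generator):
                bits[i + j] ^= g
    return bits[-r:]

message   = [1, 0, 1, 1, 0, 1]      # k = 6 message bits (illustrative)
generator = [1, 0, 1, 1]            # (r+1)-bit divisor for x^3 + x + 1, r = 3 (illustrative)

fcs = mod2_rem(message + [0, 0, 0], generator)   # append r zeros, keep the remainder
frame = message + fcs                            # (k + r) bits are transmitted

print(mod2_rem(frame, generator))   # → [0, 0, 0]: zero remainder, frame accepted
frame[2] ^= 1                       # a single-bit error in transit
print(mod2_rem(frame, generator))   # → [1, 0, 1]: nonzero remainder, frame rejected
```

The receiver simply repeats the same division over the whole frame; no extra zeros are appended at the checking stage.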

Cyclic redundancy codes are a subset of cyclic codes [4, 5], which are in turn a subset of linear block codes. They use a binary alphabet, 0 and 1. Arithmetic is based on the Galois field GF(2), for example, modulo-2 addition (logical XOR) and modulo-2 multiplication (logical AND). The CRC method treats the data frame as a large binary number. This number is divided (at the generator end) by a fixed binary number (the generator polynomial), and the resulting CRC value, known as the FCS (frame check sequence), is appended to the end of the data frame and transmitted. The receiver divides the message (including the calculated CRC) by the same polynomial used during transmission and compares the result with the received CRC value. If they do not match, the system requests retransmission of the data frame.

CRC codes are often used for error detection over frames or vectors of a certain length. A frame can be expressed as a polynomial in x, where the exponent of x is the place marker of the coefficient. A vector of length L is represented by a polynomial of degree L − 1:

$$a(\mathbf{x}) = \sum\_{i=0}^{L-1} a\_{i} \mathbf{x}^{i} = a\_{L-1} \mathbf{x}^{L-1} + a\_{L-2} \mathbf{x}^{L-2} + \dots + a\_1 \mathbf{x} + a\_0 \tag{1}$$

CRC coding is a generalization of the parity check bit. A parity bit is used for short vectors to detect a one-bit error; however, if errors occur in two bit positions, the parity check will not detect them.

#### 2.4.1 Error detection procedure

Let the data to be transmitted consist of a length k binary vector, and represent it by the degree k-1 polynomial.

$$d(\mathbf{x}) = d\_{k-1}\mathbf{x}^{k-1} + d\_{k-2}\mathbf{x}^{k-2} + \dots + d\_1\mathbf{x} + d\_0 \tag{2}$$

To make the total length of the codeword n, we must add n − k redundant bits. These redundant bits, which are the CRC bits, can be represented by a polynomial of degree n − k − 1:

$$r(\mathbf{x}) = r\_{n-k-1}\mathbf{x}^{n-k-1} + \dots + r\_1\mathbf{x} + r\_0 \tag{3}$$

The polynomial for the codeword is written as follows:

$$\begin{aligned} c(\mathbf{x}) &= d(\mathbf{x})\mathbf{x}^{n-k} + r(\mathbf{x}) \\ &= d\_{k-1}\mathbf{x}^{n-1} + \dots + d\_1\mathbf{x}^{n-k+1} + d\_0\mathbf{x}^{n-k} + r\_{n-k-1}\mathbf{x}^{n-k-1} + \dots + r\_1\mathbf{x} + r\_0 \end{aligned} \tag{4}$$

The CRC polynomial is derived using a degree n − k generator polynomial:

$$\mathbf{g}(\mathbf{x}) = \mathbf{x}^{n-k} + \mathbf{g}\_{n-k-1}\mathbf{x}^{n-k-1} + \dots + \mathbf{g}\_1\mathbf{x} + 1\tag{5}$$

which is a binary polynomial wherein the highest and lowest coefficients are non-zero (g<sub>n−k</sub> = 1 and g<sub>0</sub> = 1).

The CRC polynomial is derived as:

$$r(\mathbf{x}) = R\_{\mathbf{g}(\mathbf{x})}\left(d(\mathbf{x})\mathbf{x}^{n-k}\right) \tag{6}$$

Polynomials in Error Detection and Correction in Data Communication System
DOI: http://dx.doi.org/10.5772/intechopen.86160

All coefficients of the polynomial are binary, and modulo-2 arithmetic is used [4]. The error detection capabilities of CRC are as follows:

• CRC detects all single-bit errors.
• CRC detects all double-bit errors, provided the generator polynomial has at least three 1's.
• CRC detects any odd number of errors, provided the generator polynomial contains the factor (x + 1).
• CRC detects all burst errors of length less than the degree of the generator polynomial.
• CRC detects most burst errors of greater length, with a high probability.

To see how the receiver can use this codeword to detect errors, we first need to derive some of its properties. Let z(x) denote the quotient in the division of d(x)x<sup>n−k</sup> by g(x); hence the data polynomial satisfies

$$d(\mathbf{x})\mathbf{x}^{n-k} = \mathbf{g}(\mathbf{x})z(\mathbf{x}) + r(\mathbf{x})\tag{7}$$

In modulo-2 arithmetic, addition and subtraction are identical, and the codeword polynomial can be written as:

$$c(\mathbf{x}) = d(\mathbf{x})\mathbf{x}^{n-k} + r(\mathbf{x}) = \mathbf{g}(\mathbf{x})z(\mathbf{x}) \tag{8}$$

This gives rise to the following theorem [5].

A polynomial c(x) with deg(c(x)) < n is a codeword if and only if g(x) | c(x). If c(x) is transmitted over a channel and errors occur, they can be represented by the addition of a polynomial e(x), and the received polynomial is

$$y(\mathbf{x}) = c(\mathbf{x}) + e(\mathbf{x})\tag{9}$$

Thus g(x) is a factor of each transmitted codeword, which can be used by the receiver to detect errors: an error is detected if g(x) is not a factor of the received polynomial. To check this, the remainder of the division of y(x) by g(x) is derived as

$$\begin{aligned} s(\boldsymbol{x}) &= R\_{\boldsymbol{g}(\boldsymbol{x})} \left( \boldsymbol{y}(\boldsymbol{x}) \right) = R\_{\boldsymbol{g}(\boldsymbol{x})} \left( \boldsymbol{c}(\boldsymbol{x}) + \boldsymbol{e}(\boldsymbol{x}) \right) \\ &= R\_{\boldsymbol{g}(\boldsymbol{x})} \left( R\_{\boldsymbol{g}(\boldsymbol{x})} \left( \boldsymbol{c}(\boldsymbol{x}) \right) + R\_{\boldsymbol{g}(\boldsymbol{x})} \left( \boldsymbol{e}(\boldsymbol{x}) \right) \right) = R\_{\boldsymbol{g}(\boldsymbol{x})} \left( \boldsymbol{e}(\boldsymbol{x}) \right) \end{aligned} \tag{10}$$

This quantity is known as the syndrome. It is directly a function of the error, since R<sub>g(x)</sub>(c(x)) = 0. The syndrome plays an important role in coding theory.

#### 2.4.2 Performance

CRC is a very effective and popular error detection technique. The error detection capabilities of CRC depend on the chosen generator polynomial.



#### 2.4.3 Implementation


An n-bit CRC can be calculated as CRC = Rem[M(x) · x<sup>n</sup> / G(x)], where M(x) denotes the message polynomial, G(x) the generator polynomial, and n the degree of G(x). CRC can be calculated using a serial or a parallel method. Figure 3 shows the serial hardware implementation: the data message input is denoted D<sub>in</sub>, and clk denotes the clock used for the circuit. XOR gates are placed before the input of each flip-flop, and the output can be obtained from any input or output wire of any flip-flop.

Figure 3. Serial CRC.

The parallel implementation of CRC is shown in Figure 4. The data message input is XOR-ed with a calculated input, which can be obtained using the matrix method [6]. The state equation for an LFSR can be written as X(i + 1) = F<sup>m</sup> · X(i) + H · D(i), where X(i) is the ith state of the register, X(i + 1) is the (i + 1)th state, D(i) is the ith serial input bit, F<sup>m</sup> is an m × m matrix, and H is an m × 1 matrix. Consider the generator polynomial G = {g<sub>m</sub>, g<sub>m−1</sub>, …, g<sub>0</sub>}.

Figure 4. Parallel CRC.

$$F^{m} = \begin{bmatrix} \mathbf{g}\_{m-1} & 1 & 0 & \cdots & 0 \\ \mathbf{g}\_{m-2} & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \mathbf{g}\_{1} & 0 & 0 & \cdots & 1 \\ \mathbf{g}\_{0} & 0 & 0 & \cdots & 0 \end{bmatrix} \tag{11}$$

$$H = \begin{bmatrix} 0 & 0 & \cdots & 0 & 1 \end{bmatrix}^{T} \tag{12}$$

$$X\_{m-1}^{'} = \left(\mathbf{g}\_{m-1} \cdot X\_{m-1}\right) \oplus X\_{m-2} \tag{13}$$

$$X\_{m-2}^{'} = \left(\mathbf{g}\_{m-2} \cdot X\_{m-1}\right) \oplus X\_{m-3} \tag{14}$$

$$\vdots \tag{15}$$

$$X\_{0}^{'} = \left(\mathbf{g}\_{0} \cdot X\_{m-1}\right) \oplus d \tag{16}$$
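Equations (13)–(16) can be exercised directly in software. The register model below, using the illustrative generator x<sup>3</sup> + x + 1 again, shifts the message through the LFSR followed by m zeros, leaving the FCS in the register.

```python
def lfsr_step(state, g, d):
    """One clock of the serial CRC register in Figure 3, per Eqs. (13)-(16).

    state = [X_{m-1}, ..., X_0] and g = [g_{m-1}, ..., g_0]."""
    m = len(state)
    fb = state[0]                            # feedback bit X_{m-1}
    new = [0] * m
    for p in range(m - 1):                   # X'_j = (g_j . X_{m-1}) XOR X_{j-1}
        new[p] = (g[p] & fb) ^ state[p + 1]
    new[m - 1] = (g[m - 1] & fb) ^ d         # X'_0 = (g_0 . X_{m-1}) XOR d
    return new

g = [0, 1, 1]                 # g(x) = x^3 + x + 1 (illustrative): [g_2, g_1, g_0]
message = [1, 0, 1, 1, 0, 1]

state = [0] * len(g)
for d in message + [0] * len(g):   # feed the k message bits, then m zeros
    state = lfsr_step(state, g, d)

print(state)                  # → [0, 1, 1]: the FCS, most significant bit first
```

Feeding the m trailing zeros multiplies the message polynomial by x<sup>m</sup>, so the final register contents equal the division remainder defined in Section 2.4.1.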

The above equations are used for the serial computation of CRC. The following equations are used for the parallel computation of CRC:

$$X\_{m-1}^{'} = \left(F^{n}\_{(m-1)(m-1)} \cdot X\_{m-1}\right) \oplus \left(F^{n}\_{(m-1)(m-2)} \cdot X\_{m-2}\right) \oplus \dots \oplus \left(F^{n}\_{(m-1)(0)} \cdot X\_{0}\right) \oplus d\_{m-1} \tag{17}$$

$$X\_{m-2}^{'} = \left(F^{n}\_{(m-2)(m-1)} \cdot X\_{m-1}\right) \oplus \left(F^{n}\_{(m-2)(m-2)} \cdot X\_{m-2}\right) \oplus \dots \oplus \left(F^{n}\_{(m-2)(0)} \cdot X\_{0}\right) \oplus d\_{m-2} \tag{18}$$

$$X\_{0}^{'} = \left(F^{n}\_{(0)(m-1)} \cdot X\_{m-1}\right) \oplus \left(F^{n}\_{(0)(m-2)} \cdot X\_{m-2}\right) \oplus \dots \oplus \left(F^{n}\_{(0)(0)} \cdot X\_{0}\right) \oplus d\_{0} \tag{19}$$

Table 1 summarizes the commonly used polynomials in different applications, and Table 2 gives a list of primitive polynomials.

| Polynomial | Use |
|---|---|
| CRC-1 | Parity |
| CRC-4-ITU | ITU G.704 |
| CRC-5-ITU | ITU G.704 |
| CRC-5-USB | USB |
| CRC-6-ITU | ITU G.704 |
| CRC-7 | Telecom systems, MMC |
| CRC-8 | General |
| CRC-8-ATM | ATM HEC |
| CRC-8-CCITT | 1-Wire bus |
| CRC-8-Maxim | 1-Wire bus |
| CRC-8-SAE | SAE J1850 |
| CRC-10 | General |
| CRC-12 | Telecom systems |
| CRC-15-CAN | CAN |
| CRC-16 | USB |
| CRC-16-CCITT | XMODEM, X.25, V.41, Bluetooth, PPP, IrDA, CRC-CCITT |
| CRC-24-Radix64 | General |
| CRC-32-IEEE802.3 | Ethernet, MPEG2 |
| CRC-32C | General |
| CRC-32K | General |
| CRC-64-ISO | ISO 3309 |
| CRC-64-ECMA | ECMA-182 |

Table 1. Commonly used divisor polynomials [4, 5].

Table 2. A list of some primitive polynomials [4, 5].

#### 3. Error correcting codes

There are two ways to handle error correction. The first method is known as backward error correction, wherein the receiver asks for retransmission of the data when an error is discovered. The second method is known as forward error correction, where the receiver uses an error correcting code to correct certain errors.

The codes required for error correction are more sophisticated than error detection codes and require more redundant bits. Most error correction is limited to one-, two- or at most three-bit errors, since correcting multiple-bit or burst errors requires a large number of redundant bits.


Different types of error detection and correction techniques are required for specific noisy channels/media, like random error or burst error or multi-path distortion or channel effects. There are two approaches for error control coding, forward error correction (FEC) and automatic repeat request (ARQ) [7].

FEC error control is used for one-way systems, whereas ECC (error correcting codes) with error detection and retransmission, called ARQ, is used for two-way communication such as telephone and satellite communications. The classification of FEC is shown in Figure 5.

#### 3.1 Single-bit error correction

A single-bit error can easily be detected using a parity bit; however, to correct an error, the exact position of the errored bit must be determined. Hamming code is a technique developed by R.W. Hamming to find the location of the bit that is in error. Hamming codes can be used for data bits of any length and use the relationship between the number of data bits d and redundant bits r, where 2<sup>r</sup> ≥ d + r + 1.

Procedure for error detection using Hamming code is as follows:

• To each group of m information bits, k parity bits are added to form an (m + k)-bit code.
• The location of each of the (m + k) digits is assigned a decimal value.
• The k parity bits are placed in positions 1, 2, …, 2<sup>k−1</sup>. k parity checks are performed on selected digits of each codeword.
• At the receiving end, the parity bits are recalculated; the decimal value of the k parity bits gives the bit position in error, if any.
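As a concrete instance of the procedure above, the classic (7, 4) Hamming code (m = 4 data bits, k = 3 parity bits in positions 1, 2 and 4; even parity is assumed here) can be sketched as:

```python
def hamming74_encode(d):
    """(7,4) Hamming code: data in positions 3, 5, 6, 7; parity bits in 1, 2, 4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4            # even parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4            # even parity over positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4            # even parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_correct(c):
    """Recalculate the parity checks; their decimal value is the errored position."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s4   # 0 means no single-bit error detected
    if pos:
        c[pos - 1] ^= 1          # complement the errored bit
    return c

code = hamming74_encode([1, 0, 1, 1])
garbled = list(code)
garbled[4] ^= 1                  # single-bit error at position 5
assert hamming74_correct(garbled) == code
```

The recomputed checks s1, s2, s4, read as a binary number, point directly at the errored position, which is exactly the last bullet of the procedure.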


Claude Elwood Shannon (1916–2001) and Richard Hamming (1915–1998) were colleagues at Bell Laboratories and pioneers of coding theory. Shannon's channel coding theorem proves that if the information transmission rate is less than the channel capacity, it is possible to design an error correcting code (ECC) with almost error-free information transmission. Hamming invented the first error correcting code (ECC) in 1950; it is known as the (7, 4) Hamming code.

Figure 5. Classification of FEC.

### 3.2 BCH codes


The BCH code design allows precise control over the number of symbol errors correctable by the code. Binary BCH codes can correct multiple bit errors. BCH codes are advantageous because a simple algebraic method known as syndrome decoding can be used, which simplifies the design of the decoder for these codes and allows small, low-power electronic hardware.

BCH codes are used in applications such as satellite communications, compact disc players, DVDs, disk drives, solid-state drives, and two-dimensional bar codes.

BCH codes are a class of linear, cyclic codes. For a cyclic code, any codeword polynomial has the generator polynomial as a factor, so the roots of the code's generator polynomial g(x) are also roots of the codewords. BCH codes are constructed using the roots of g(x) in an extended Galois field; binary primitive BCH codes, which correct multiple random errors, form an important subclass. The t-error-correcting binary BCH code has the following parameters:

• Block length: n = 2<sup>m</sup> − 1.
• Number of parity check bits: n − K ≤ mt.
• Minimum distance: d<sub>min</sub> ≥ 2t + 1.


g(x) generates a binary primitive BCH code if it is the least-degree polynomial over GF(2) with α, α<sup>2</sup>, …, α<sup>2t</sup> as roots, α being a primitive element of GF(2<sup>m</sup>). Thus g(x) must have (x + α)(x + α<sup>2</sup>) ⋯ (x + α<sup>2t</sup>) as a factor, which leads to g(x) of the form

g(x) = LCM[Ω<sub>1</sub>(x), Ω<sub>2</sub>(x), Ω<sub>3</sub>(x), …, Ω<sub>i</sub>(x)]

where {Ω<sub>1</sub>(x), Ω<sub>2</sub>(x), Ω<sub>3</sub>(x), …, Ω<sub>i</sub>(x)} is the smallest set of minimal polynomials having (x + α)(x + α<sup>2</sup>) ⋯ (x + α<sup>2t</sup>) as a factor.

BCH codes can be encoded using a method similar to that used for other cyclic codes.

#### 3.2.1 Decoding of BCH codes

The decoding of BCH codes involves the following steps:

i. Form the syndrome polynomial s(x) = s<sub>0</sub> + s<sub>1</sub>x + s<sub>2</sub>x<sup>2</sup> + … + s<sub>n−K−1</sub>x<sup>n−K−1</sup>, where the set {s<sub>0</sub>, s<sub>1</sub>, …, s<sub>n−K−1</sub>} consists of the values of r(x) at α, α<sup>2</sup>, …, α<sup>2t</sup>. If s(x) is zero, r(x) itself is a codeword; else proceed as follows.

ii. With the syndromes obtained in step i, form the error-locator polynomial σ(x) using an algorithm such as the Berlekamp or Peterson-Gorenstein-Zierler algorithm.

iii. Obtain the roots of σ(x) and their respective inverses, which indicate the error locations.

iv. Complement the bits in the positions indicated by the error locations to obtain the decoded codeword.

The syndrome polynomial can alternately be obtained by dividing r(x) by g(x) and evaluating the remainder at α, α<sup>2</sup>, …, α<sup>2t</sup>. This is the same as the syndrome of nonbinary BCH codes; nonbinary BCH codes form another class of BCH codes where the coefficients of the code polynomial are also elements of the extended field. Encoding of nonbinary BCH codes follows the same procedure as that of binary BCH codes.
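Step i can be illustrated for the double-error-correcting (15, 7) binary BCH code, whose textbook generator is g(x) = x<sup>8</sup> + x<sup>7</sup> + x<sup>6</sup> + x<sup>4</sup> + 1. GF(16) is built here from the primitive polynomial x<sup>4</sup> + x + 1 with α = x; these are standard choices rather than values fixed by this chapter.

```python
def gf16_mul(a, b):
    """Multiply two GF(16) elements, reducing by x^4 + x + 1 (0b10011)."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x10:
            a ^= 0b10011
        b >>= 1
    return p

def poly_eval(coeffs, x):
    """Evaluate a binary polynomial (lowest degree first) at a GF(16) element."""
    acc, power = 0, 1
    for c in coeffs:
        if c:
            acc ^= power
        power = gf16_mul(power, x)
    return acc

alpha = 0b0010                             # the primitive element alpha
powers = [1]
for _ in range(4):
    powers.append(gf16_mul(powers[-1], alpha))   # 1, a, a^2, a^3, a^4

# g(x) = x^8 + x^7 + x^6 + x^4 + 1, lowest degree first; r(x) = g(x) is itself a codeword.
r = [1, 0, 0, 0, 1, 0, 1, 1, 1]
syndromes = [poly_eval(r, p) for p in powers[1:]]    # r(a), r(a^2), r(a^3), r(a^4)
print(syndromes)                           # → [0, 0, 0, 0]: no error
r[3] ^= 1                                  # flip one received bit
assert any(poly_eval(r, p) for p in powers[1:])      # nonzero syndrome: error detected
```

With 2t = 4 syndromes all zero, the received word is accepted in step i; the flipped bit makes them nonzero, after which steps ii–iv would locate and complement it.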


#### 3.3 The binary Golay code

The binary form of the Golay code is one of the most important types of linear binary block codes. A t-error correcting code can correct a maximum of t errors. A perfect t-error correcting code has the property that every word lies within a distance of t of exactly one codeword. Equivalently, the code has d<sub>min</sub> = 2t + 1 and covering radius t, where the covering radius r is the smallest number such that every word lies within a distance of r of a codeword.

The time complexity for Hamming codes is O(n<sup>2</sup>), since decoding involves the multiplication of two matrices. The time complexity for the binary Golay code is O(n) for calculating the syndrome, that is, for locating the error.

#### 3.4 Reed-Solomon codes

Reed-Solomon codes are block-based error correcting codes with a wide range of applications in digital communications and storage. Reed-Solomon codes are used to correct errors in many systems such as storage devices, wireless or mobile communications, satellite, DVB and high-speed modems such as ADSL, xDSL. A typical communication channel using Reed-Solomon code is shown in Figure 6.

The Reed-Solomon encoder takes a block of digital data and adds extra redundant bits. Errors occur during transmission or storage due to noise, interference, scratch on CD, etc. The Reed-Solomon decoder processes each block and attempts to correct errors and recover the original data. The number and type of errors that can be corrected depends on the characteristics of the Reed-Solomon code.

#### 3.4.1 Properties of Reed-Solomon codes

Reed Solomon codes are a subset of BCH codes and are linear block codes. A Reed-Solomon code is denoted as RS (n,k) with s-bit symbols.

This means that the encoder takes k data symbols of s bits each and adds parity symbols to make an n-symbol codeword. There are n − k parity symbols of s bits each. A Reed-Solomon decoder can correct up to t symbols that contain errors in a codeword, where 2t = n − k.
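The relation 2t = n − k can be checked directly; RS(255, 249) is the chapter's later example, and RS(255, 223) is another widely deployed parameter choice, added here only for comparison.

```python
def rs_params(n, k):
    """Parity symbols and correctable symbol errors t of an RS(n, k) code (2t = n - k)."""
    parity = n - k
    return parity, parity // 2

print(rs_params(255, 249))   # → (6, 3): 6 parity symbols, up to 3 symbol errors corrected
print(rs_params(255, 223))   # → (32, 16)
```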

The following diagram shows a typical Reed-Solomon code word.

Figure 6. The Reed-Solomon code with communication channel.

#### 3.4.2 Architectures for encoding and decoding Reed-Solomon codes

Reed-Solomon encoding and decoding can be carried out in software or in special-purpose hardware.

#### 3.4.2.1 Finite (Galois) field arithmetic


Reed-Solomon codes are based on a specialist area of mathematics known as Galois fields or finite fields. A finite field has the property that arithmetic operations (+, −, ×, ÷, etc.) on field elements always have a result in the field. A Reed-Solomon encoder or decoder needs to carry out these arithmetic operations, which require special hardware or software functions to implement.
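A minimal software sketch of GF(2<sup>8</sup>) arithmetic: addition is bitwise XOR, and multiplication reduces the product back into the field. The primitive polynomial x<sup>8</sup> + x<sup>4</sup> + x<sup>3</sup> + x<sup>2</sup> + 1 (0x11d) is a common choice assumed here, since the chapter does not fix a field polynomial.

```python
def gf256_mul(a, b):
    """Carry-less multiply of two GF(2^8) elements, reduced by 0x11d."""
    p = 0
    while b:
        if b & 1:
            p ^= a          # add a (XOR) when the low bit of b is set
        a <<= 1
        if a & 0x100:
            a ^= 0x11d      # reduce modulo x^8 + x^4 + x^3 + x^2 + 1
        b >>= 1
    return p

a, b = 0x53, 0xCA
print(a ^ b)                # addition (and subtraction) is XOR → 153 (0x99)
product = gf256_mul(a, b)
assert 0 <= product < 256   # closure: the result is again a field element
```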

#### 3.4.2.2 Generator polynomial

A Reed-Solomon codeword is generated using a special polynomial. All valid codewords are exactly divisible by the generator polynomial. The generator polynomial is denoted as below:

$$\mathbf{g}(\mathbf{x}) = \left(\mathbf{x} - \alpha^{i}\right)\left(\mathbf{x} - \alpha^{i+1}\right)\cdots\left(\mathbf{x} - \alpha^{i+2t-1}\right) \tag{20}$$

and the codeword is constructed using:

$$c(\mathbf{x}) = \mathbf{g}(\mathbf{x}) \cdot i(\mathbf{x}) \tag{21}$$

where g(x) is the generator polynomial, i(x) is the information block, c(x) is a valid codeword, and α is referred to as a primitive element of the field.

Example: Generator for RS(255,249).

$$\mathbf{g(x) = (x - \alpha^0)(x - \alpha^1)(x - \alpha^2)(x - \alpha^3)(x - \alpha^4)(x - \alpha^5)}\tag{22}$$

$$\mathbf{g(x) = x^6 + g\_5 x^5 + g\_4 x^4 + g\_3 x^3 + g\_2 x^2 + g\_1 x + g\_0} \tag{23}$$
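Equations (20), (22) and (23) can be reproduced by multiplying out the six factors in GF(2<sup>8</sup>). As before, the field polynomial 0x11d and α = 2 are conventional assumptions, not values fixed by the chapter.

```python
def gf_mul(a, b):
    """GF(2^8) multiply, reduced by the primitive polynomial 0x11d."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
        b >>= 1
    return p

def poly_mul(p, q):
    """Product of two polynomials over GF(2^8), coefficients lowest degree first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] ^= gf_mul(pi, qj)
    return out

# Multiply out (x - a^0)(x - a^1)...(x - a^5); in GF(2^m), minus is the same as plus.
g, root = [1], 1                 # start from the constant polynomial 1; a^0 = 1
for _ in range(6):
    g = poly_mul(g, [root, 1])   # append the factor (x + a^i)
    root = gf_mul(root, 2)       # next power of a (a = 2)

print(len(g) - 1)                # → 6: a monic degree-6 generator, as in Eq. (23)

def poly_eval(p, x):
    acc = 0
    for c in reversed(p):        # Horner's rule over GF(2^8)
        acc = gf_mul(acc, x) ^ c
    return acc

assert all(poly_eval(g, e) == 0 for e in [1, 2, 4, 8, 16, 32])   # every a^i is a root
```

Because every codeword is a multiple of g(x), every codeword also vanishes at α<sup>0</sup>, …, α<sup>5</sup>; the decoder exploits exactly this when computing syndromes.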

#### 3.4.3 Encoder architecture

The 2t parity symbols in a systematic Reed-Solomon codeword are given by:

$$p(x) = i(x) \cdot x^{n-k} \bmod g(x) \tag{24}$$

Figure 7 shows the architecture for a systematic RS(255,249) encoder. Each of the 6 registers holds a symbol (8 bits). The arithmetic operators carry out finite field addition or multiplication on a complete symbol.

#### 3.4.4 Decoder architecture

A general architecture for decoding Reed-Solomon codes is shown in Figure 8.

Key:
• r(x): codeword at the receiver
• S<sub>i</sub>: syndromes
• L(x): error locator polynomial




Figure 7. Block diagram of RS encoder.

Figure 8. Block diagram of RS decoder (Xi: locations of errors; Yi: magnitudes of errors; v: total number of errors; c(x): recovered code word).

The received codeword r(x) is the original (transmitted) codeword c(x) plus errors:

r(x) = c(x) + e(x).

A Reed-Solomon decoder attempts to identify the position and magnitude of up to t errors (or 2t erasures) and to correct the errors or erasures.

Syndrome calculation: This is similar to a parity calculation. A Reed-Solomon codeword has 2t syndromes that depend only on the errors (not on the transmitted codeword). The syndromes can be calculated by substituting the 2t roots of the generator polynomial g(x) into r(x).

Finding the symbol error locations: Error locations are found by solving simultaneous equations with t unknowns. Several fast algorithms exist which take advantage of the special matrix structure of these codes and reduce the computational effort. In general, two steps are involved.

Find an error locator polynomial: This can be done using the Berlekamp-Massey algorithm or Euclid's algorithm. Euclid's algorithm is more popular because it is easier to implement; however, the Berlekamp-Massey algorithm has efficient hardware and software implementations.


Find the roots of this polynomial: This is done using the Chien search algorithm.

Finding the symbol error values: Again, this involves solving simultaneous equations with t unknowns. A widely used fast algorithm is the Forney algorithm.
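Since every valid codeword is a multiple of g(x), a quick way to see the syndrome property is to take c(x) = g(x) itself: all of its syndromes are zero, while an injected symbol error makes at least one syndrome nonzero. The sketch below assumes the common GF(2^8) representation with primitive polynomial 0x11D and α = 0x02.

```python
# Syndromes of an error-free codeword vs. a corrupted one, in GF(2^8).

PRIM = 0x11D

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= PRIM
        b >>= 1
    return r

def poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] ^= gf_mul(pi, qj)
    return r

def poly_eval(p, x):
    """Horner evaluation, coefficients listed highest degree first."""
    y = 0
    for c in p:
        y = gf_mul(y, x) ^ c
    return y

# Build g(x) for 2t = 6 parity symbols and record its roots a^0..a^5.
g, root, roots = [1], 1, []
for _ in range(6):
    g = poly_mul(g, [1, root])
    roots.append(root)
    root = gf_mul(root, 0x02)

c = list(g)                                        # a trivially valid codeword
syndromes = [poly_eval(c, r) for r in roots]       # all zero

r_poly = list(c)
r_poly[2] ^= 0x37                                  # inject one symbol error
err_syndromes = [poly_eval(r_poly, r) for r in roots]  # no longer all zero
```

The syndromes of the corrupted word depend only on the injected error term, which is what the decoder exploits.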

#### 3.5 Low-density parity check codes

Low-density parity check (LDPC) codes are a class of linear block codes. The term "low density" refers to the parity check matrix, which contains only a few '1's in comparison to '0's. LDPC codes are arguably among the best error correction codes in existence at present. LDPC codes were first introduced by R. Gallager in his Ph.D. thesis in 1960. However, they were forgotten due to the introduction of Reed-Solomon codes and because there were problems with implementing LDPC codes given the limited technological know-how of the time. LDPC codes were rediscovered in the mid-90s by R. Neal and D. MacKay at Cambridge University.

An N-bit long LDPC code is defined in terms of M parity check equations, and these equations can be described by an M × N parity check matrix H, where M is the number of parity check equations and N is the number of bits in the code word.

Consider a 6-bit long codeword which satisfies the 3 parity check equations shown below.

$$c\_1 \oplus c\_2 \oplus c\_5 = 0 \tag{25}$$

$$c\_1 \oplus c\_4 \oplus c\_6 = 0 \tag{26}$$

$$c\_1 \oplus c\_2 \oplus c\_3 \oplus c\_6 = 0 \tag{27}$$

Writing the codeword as

$$c = \begin{bmatrix} c_1 & c_2 & c_3 & c_4 & c_5 & c_6 \end{bmatrix}, \tag{28}$$

we can now define the 3 × 6 parity check matrix as

$$H = \begin{bmatrix} 1 & 1 & 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 & 0 & 1 \end{bmatrix} \tag{29}$$

The density of '1's in an LDPC parity check matrix is very low. The row weight is the number of '1's in a row, i.e., the number of symbols taking part in a parity check; the column weight is the number of '1's in a column, i.e., the number of times a symbol takes part in parity checks.

$$H = \begin{bmatrix} 1 & 1 & 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 & 0 & 1 \end{bmatrix} \tag{30}$$

In the matrix above, the row weight and column weight change from row to row and from column to column; therefore this is an irregular parity check matrix.
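For the matrix of Eq. (30), the weights can be checked directly; the sketch below simply counts '1's per row and per column.

```python
# Row and column weights of the parity check matrix of Eq. (30).
H = [[1, 1, 0, 0, 1, 0],
     [1, 0, 0, 1, 0, 1],
     [1, 1, 1, 0, 0, 1]]

row_weights = [sum(row) for row in H]          # '1's per parity check
col_weights = [sum(col) for col in zip(*H)]    # '1's per code symbol

# The weights vary (rows: [3, 3, 4]; columns: [3, 2, 1, 1, 1, 2]),
# so this parity check matrix is irregular.
```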

The parity check matrix defines a rate $K/N$ code, where $K = N - M$. A codeword is said to be valid if it satisfies the syndrome calculation $H c^{T} = 0$.

We can generate the codeword c by multiplying message m with generator matrix G.

$$
c = mG \tag{31}
$$

We can obtain the generator matrix G from the parity check matrix H as follows:


1. arranging the parity check matrix in systematic form using row and column operations and

$$H_{\mathrm{sys}} = \begin{bmatrix} I_M \mid P_{M \times K} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 1 & 0 & 1 \\ 0 & 1 & 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 0 & 1 & 1 \end{bmatrix} \tag{32}$$

2. rearranging the systematic parity check matrix.

$$G = \left[ P_{K \times M}^{T} \mid I_K \right] \tag{33}$$


For our example this gives

$$G = \begin{bmatrix} 1 & 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 & 1 & 0 \\ 1 & 1 & 1 & 0 & 0 & 1 \end{bmatrix} \tag{34}$$

3. we can verify our results as

$$G \cdot H^{T} = 0 \tag{35}$$

$$H = \begin{bmatrix} 1 & 1 & 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 & 0 & 1 \end{bmatrix} \tag{36}$$
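Equations (31), (34), and (35) can be checked numerically over GF(2); the sketch below verifies G·H^T = 0 and encodes an example message with the matrices above.

```python
# Numerical check of Eqs. (31) and (34)-(36) over GF(2), using plain lists.
H = [[1, 1, 0, 0, 1, 0],
     [1, 0, 0, 1, 0, 1],
     [1, 1, 1, 0, 0, 1]]
G = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 1, 1, 0, 0, 1]]

def mat_mul_gf2(A, B):
    """Matrix product with all arithmetic taken modulo 2."""
    return [[sum(a & b for a, b in zip(row, col)) % 2
             for col in zip(*B)] for row in A]

def transpose(M):
    return [list(col) for col in zip(*M)]

# G . H^T = 0 (Eq. 35): every row of G satisfies all parity checks.
GHt = mat_mul_gf2(G, transpose(H))

# Encode a 3-bit message m as c = mG (Eq. 31) and check its syndrome.
m = [1, 0, 1]
c = mat_mul_gf2([m], G)[0]
syndrome = mat_mul_gf2(H, transpose([c]))   # must be the zero vector
```

Here the codeword is the XOR of the first and third rows of G, and its syndrome against H is zero, confirming validity.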

A Tanner graph is a graphical representation of the parity check matrix, specifying the parity check equations. The Tanner graph for an LDPC code, as shown in Figure 9, consists of N variable nodes and M check nodes. In the Tanner graph, the mth check node is connected to the nth variable node if and only if the nth element in the mth row of the parity check matrix H, hmn, is a '1'.

The marked path z2 → c1 → z3 → c6 → z2 is an example of a short cycle of length 4. The number of steps needed to return to the original position is known as the girth of the code.

Figure 9. LDPC codes Tanner graph representation.

#### 3.6 Convolution codes

Convolutional codes differ from block codes in that the encoder contains memory. The n encoder outputs at any time unit depend not only on the k inputs but also on m previous input blocks. An (n, k, m) convolutional code can be implemented with a k-input, n-output linear sequential circuit with input memory m. Typically, n and k are small integers. Wozencraft proposed sequential decoding as an efficient decoding scheme for convolution codes, and many experimental studies of it were performed. In 1963, Massey proposed a method which was simpler to implement, called threshold decoding. Then in 1967, Viterbi proposed a maximum likelihood decoding scheme that was relatively easy to implement for codes with small memory orders. Viterbi decoding was combined with improved versions of sequential decoding, and convolutional codes were used in deep-space and satellite communication in the early 1970s. A convolutional code is generated by passing the information sequence to be transmitted through a linear finite-state shift register. In general, the shift register consists of K (k-bit) stages and n linear algebraic function generators.

Convolution codes have simple encoding and decoding methods; they are a natural generalization of linear codes and have encodings similar to cyclic codes.

An (n,k) convolution code (CC) is defined by a k × n generator matrix whose entries are polynomials over F2.

$$G_1 = \left[\, x^2 + 1, \; x^2 + x + 1 \,\right] \tag{37}$$

is the generator matrix for a (2,1) convolution code CC1 and

$$G_2 = \begin{pmatrix} 1+x & 0 & x+1 \\ 0 & 1 & x \end{pmatrix} \tag{38}$$

is the generator matrix for a (3,2) convolution code CC2.

#### 3.6.1 Encoding of finite polynomials

An (n,k) convolution code with a k × n generator matrix G can be used to encode a k-tuple of plain-polynomials

$$I = (I_0(x), I_1(x), \dots, I_{k-1}(x)) \tag{39}$$

to get an n-tuple of crypto-polynomials.

$$C = (C_0(x), C_1(x), \dots, C_{n-1}(x)) \tag{40}$$

The encoding is performed as follows:


$$C = I \cdot G \tag{41}$$
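As a sketch of Eq. (41) for the code CC1 of Eq. (37): polynomials over F2 can be bit-packed into integers (bit i holding the coefficient of x^i), so the product I·G1 reduces to carry-less multiplications. The packing convention is an illustration choice, not part of the source.

```python
# Encoding with the (2,1) convolutional code G1 = [x^2 + 1, x^2 + x + 1]
# of Eq. (37). Bit i of an int is the coefficient of x^i.

def f2_poly_mul(a, b):
    """Carry-less (GF(2)) polynomial multiplication."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
    return r

G1 = (0b101, 0b111)   # x^2 + 1  and  x^2 + x + 1

def encode(i_poly):
    """C = I.G: one input polynomial maps to a 2-tuple of output polynomials."""
    return tuple(f2_poly_mul(i_poly, g) for g in G1)

# Example: I(x) = x^2 + 1 (bits 101)
c0, c1 = encode(0b101)   # (x^2+1)^2 = x^4+1;  (x^2+1)(x^2+x+1) = x^4+x^3+x+1
```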

#### 3.6.2 Turbo codes

Turbo codes were proposed by Berrou and Glavieux at the 1993 International Conference on Communications. Turbo codes demonstrated a performance within 0.5 dB of the channel capacity limit for BPSK. Turbo codes use parallel concatenated coding, recursive convolutional encoders, and pseudo-random interleaving.

Turbo codes have a remarkable power efficiency in additive white Gaussian noise (AWGN) and flat-fading channels for moderately low BER, and are mostly used in the delivery of multimedia services. However, turbo codes have long latency and poor performance at very low BER; since turbo codes operate at very low SNR, channel estimation and tracking are critical issues. The principle of iterative or "turbo" processing can be applied to other problems; turbo multiuser detection can improve the performance of coded multiple-access systems. Performance close to the Shannon limit can be achieved (Eb/N0 = −1.6 dB as Rb → 0) at modest complexity. Turbo codes have been proposed for low-power applications such as deep-space and satellite communications, as well as for limited-interference applications such as third generation cellular, personal communication services, ad hoc, and sensor networks.

The information capacity (or channel capacity) C of a continuous channel with bandwidth B hertz, perturbed by additive white Gaussian noise of power spectral density N0/2, is given by

$$C = B \log_2 \left( 1 + \frac{P}{N_0 B} \right) \quad \text{bits/sec} \tag{42}$$


where P is the average transmitted power, P = Eb·Rb (for an ideal system, Rb = C), Eb is the transmitted energy per bit, and Rb is the transmission rate.
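A quick numerical illustration of Eq. (42), with hypothetical channel values:

```python
# Shannon capacity of an AWGN channel, Eq. (42), for assumed example values.
from math import log2

def capacity(B, P, N0):
    """Channel capacity in bits per second."""
    return B * log2(1 + P / (N0 * B))

B = 1e6            # 1 MHz bandwidth (assumed)
N0 = 1e-9          # noise power spectral density in W/Hz (assumed)
P = 15 * N0 * B    # power chosen so that SNR = P/(N0*B) = 15
C = capacity(B, P, N0)   # 1e6 * log2(16) = 4 Mbit/s
```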

#### 3.6.2.1 Turbo code encoder

A turbo encoder fundamentally consists of two identical recursive systematic convolutional (RSC) codes arranged in parallel and separated by an interleaver. The interleaver in a turbo code is pseudo-random in order to minimize the correlation between the outputs of the two encoders, which gives the best results; it takes a matrix form with rows and columns depending on the block size of the code [8]. The structure of the turbo encoder is shown in Figure 10.

The interleaver/deinterleaver pair plays an important role in the performance of turbo codes. The interleaver helps to increase the minimum distance and break up low-weight input sequences by spreading out burst errors; this is done by mapping the sequence of bits to another sequence of bits. When the interleaver length is very large, turbo codes achieve excellent performance [9]. Given the structure of the turbo encoder, a puncturing technique can be used to obtain higher rates. Puncturing operates on the parity bits only; the systematic bits are not punctured [10].
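The row/column matrix idea behind the interleaver can be sketched as follows; note that practical turbo interleavers are pseudo-random, so this minimal block interleaver only illustrates how the bit mapping spreads out a burst.

```python
# A minimal row-column block interleaver sketch (illustrative only).

def interleave(bits, rows, cols):
    """Write row-wise into a rows x cols matrix, read out column-wise."""
    assert len(bits) == rows * cols
    matrix = [bits[r * cols:(r + 1) * cols] for r in range(rows)]
    return [matrix[r][c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    """Inverse mapping: write column-wise, read row-wise."""
    return interleave(bits, cols, rows)

seq = [1, 1, 1, 0, 0, 0]          # a burst of three '1's
spread = interleave(seq, 2, 3)    # burst is spread out across the sequence
```

Applying `deinterleave` to the interleaved output recovers the original sequence.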

Figure 10. Turbo code encoder.

#### 3.6.2.2 Turbo decoder

Turbo decoders consist of a pair of convolutional decoders that cooperatively and iteratively exchange soft-decision information. Information is passed from one decoder to the other: each decoder takes the systematic and parity information from the encoder together with a priori information from the other decoder, and the output it generates consists of soft decisions or estimates. The passing of information between the first and second decoder continues until a given number of iterations is reached. With each iteration, the estimates of the information bits improve. A correct estimate of the message is achieved by increasing the number of iterations; however, the improvement does not grow linearly, and in practice a small number of iterations is enough to achieve acceptable performance [11, 12]. Figure 11 illustrates the structure of the turbo decoder.

Figure 11. Turbo code decoder.

The decoder produces a soft decision for each message bit in logarithmic form, known as a log likelihood ratio (LLR) [11, 12]. At the end of this process, a hard decision is carried out at the second decoder to convert the final signal to 1's and 0's and compare it with the original message [13, 14].

#### 3.6.3 Trellis coded modulation (TCM)

Error probability can be decreased by adding more code bits, but the code rate decreases and more bandwidth is required. Trellis coded modulation instead combines encoding and modulation (using Euclidean distance only) and allows parallel transitions in the trellis; it achieves significant coding gain (3-4 dB) without bandwidth compromise and at the same complexity (same amount of computation, same decoding time, and same amount of memory needed). Trellis codes have great potential for fading channels and are widely used in modems. Figure 12 shows the encoder for a four-state Trellis TCM.

Figure 12. Encoder for four state Trellis TCM.

There is an increase in constellation size compared to uncoded communication, an increase in throughput (b/s/Hz), and a decline in BER performance due to the decrease of dmin. Trellis coded modulation (TCM) is used to offset the loss resulting from the constellation size increase. TCM achieves this higher gain by jointly using the distance properties of the code and the distance properties of the constellation, by carefully mapping coded and uncoded bits to the constellation points. TCM uses "set partitioning" to map the bits to the constellation points. Figure 13 shows the Trellis representation for QPSK.

Figure 13. Trellis representation QPSK.

Input: 101 → Output: 001011.

#### 3.7 Application areas for error correcting codes (ECCs)

Deep space communication. A concatenation of a Reed-Solomon code and a convolutional code is used.

Storage media. BCH codes and Reed-Solomon codes are used in applications like compact disk players, DVDs, disk drives, NAND flash drives, and 2D bar codes. LDPC codes are used for SSDs, and fountain codes are erasure codes used in data-storage applications.

Mobile communication. ARQ is sometimes used with Global System for Mobile (GSM) communication to guarantee data integrity. Traffic channels in the 2G standard use convolution codes. Convolution and turbo codes are used in 3G (UMTS) networks; convolution coding can be used for low data rates and turbo coding for higher rates.

WiMAX (IEEE 802.16e standard for microwave communications) and high-speed wireless LAN (IEEE 802.11n) use LDPC as a coding scheme.

Satellite communication. For reliable communication in WiMax, optical communication, and power line communication, or in multi-layer flash memories, turbo and LDPC codes are desirable.

Hybrid ARQ is another technique for spectrum efficiency and reliable link. Network coding is one of the most important breakthroughs in information theory in recent years.

#### 4. Conclusion

The chapter describes the different types of errors encountered in a data communication system over channels and focuses on the role of polynomials in implementing various algorithms for error detection and correction codes. It discusses error detection codes such as simple parity check, two-dimensional parity check, checksum, and cyclic redundancy check; and error correction codes such as Hamming code, BCH, Golay codes, RS code, LDPC, Trellis, and Turbo codes. It also gives an overview of the architecture and implementation of the codes and discusses the applications of these codes in various systems.

#### Author details


Charanarur Panem<sup>1</sup>, Vinaya Gad<sup>2</sup> and Rajendra S. Gad<sup>1</sup>\*

1 Altera SoC Laboratory, Department of Electronics, Goa University, Goa, India

2 Department of Computer Science, G.V.M.'s College, Ponda, Goa, India

\*Address all correspondence to: rsgad@unigoa.ac.in

© 2019 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### References

[1] Available at: https://nptel.ac.in/courses/106105080/pdf/M3L2.pdf

[2] Available at: https://www.techwalla.com/articles/types-of-errors-in-datacommunication

[3] Bertsekas D, Gallager R. Data Networks. 2nd ed. Prentice Hall; 1992. Available at: web.mit.edu/dimitrib/www/datanets.html

[4] Forouzan B. Data Communications and Networking. 5th ed. McGraw Hill; 2013

[5] Lin S, Costello DJ. Error Control Coding. 2nd ed. Prentice Hall; 2004

[6] McEliece R. Finite Fields for Computer Scientists and Engineers. Springer; 1986

[7] Available at: https://electronicsforu.com/technology-trends/errorcorrecting-codes-comm-storage

[8] Benkeser C, Burg A, Cupaiuolo T, Huang Q. Design and optimization of an HSDPA Turbo Decoder ASIC. IEEE Journal of Solid-State Circuits. 2009

[9] Sadjadpour HR, Sloane NJA, Salehi M, Nebe G. Interleaver design for turbo codes. IEEE Journal on Selected Areas in Communications. 2001;19(5):831-837

[10] Raad IS, Yakan M. Implementation of a turbo codes test bed in the Simulink environment. In: International Symposium on Signal Processing and Its Applications. Piscataway: IEEE; 2005. pp. 847-850

[11] Kaza J, Chakrabarti C. Design and implementation of low energy turbo decoders. IEEE Transactions on Very Large Scale Integration (VLSI) Systems. 2004;12(9):968-977

[12] Moreira JC, Farrell PG. Essentials of Error-Control Coding. Wiley; 2006



#### Chapter 3


## A Direct Construction of Intergroup Complementary Code Set for CDMA

Palash Sarkar and Sudhan Majhi

#### Abstract

A collection of mutually orthogonal complementary codes (CCs) is said to be a complete complementary code (CCC) when the number of CCs is equal to the number of constituent sequences in each CC. An intergroup complementary (IGC) code set is a collection of multiple disjoint code groups with the following correlation properties: (1) inside the zero-correlation zone (ZCZ), the aperiodic autocorrelation function (AACF) of any IGC code is zero for all nonzero time shifts; (2) the aperiodic cross-correlation function (ACCF) of two distinct IGC codes is zero for all time shifts inside the ZCZ when they are taken from the same code group; and (3) the ACCF of two IGC codes from two different code groups is zero everywhere. An IGC code set has a larger set size than a CCC, and both are applicable to multicarrier code-division multiple access (CDMA). In this chapter, we present a direct construction of IGC code sets by using second-order generalized Boolean functions (GBFs); the resulting IGC code sets can support interference-free code-division multiplexing. We also relate our construction to a graph, where the ZCZ width depends on the number of isolated vertices that remain after the deletion of some vertices. The proposed construction can generate IGC code sets with more flexible parameters.

Keywords: complementary code (CC), code-division multiple access (CDMA), generalized Boolean function (GBF), intergroup complementary (IGC) code set, zero-correlation zone (ZCZ) sequences

#### 1. Introduction

Code-division multiple access (CDMA) [1] is an important communication technology in which signature sequences with good correlation properties are used to separate multiple users. In CDMA systems, multipath interference (MPI) and multiple access interference (MAI) degrade the performance; MPI and MAI occur due to multipath propagation, non-ideal synchronization, and non-ideal correlation properties of the spreading codes. The spreading code plays a significant role in the overall performance of a CDMA system: the interference-resisting capability and the system capacity are determined by the correlation properties and the available number of spreading codes. Due to their ideal auto- and cross-correlation properties, complete complementary codes (CCCs) have been applied to asynchronous multicarrier CDMA (MC-CDMA) [2] communications in order to provide zero-interference performance.

Golay [3] proposed a pair of sequences known as a Golay complementary pair (GCP): a set of two equal-length sequences with the property that the sum of their aperiodic autocorrelation functions (AACFs) is zero everywhere except at the zero shift. Tseng and Liu [4] extended the idea of the GCP to the complementary set, or complementary code (CC), which contains two or more sequences. Davis and Jedwab [5] proposed a direct construction of GCPs, called Golay-Davis-Jedwab (GDJ) pairs, by using second-order generalized Boolean functions (GBFs) to reduce the peak-to-mean envelope power ratio (PMEPR) of OFDM systems. As a generalization of the GDJ pair, Paterson [6] introduced a construction of CCs by associating each CC with a graph. Recently, a construction of CCs that generalizes Paterson's construction has been reported in Sarkar et al. [7]. Later, Rathinakumar and Chaturvedi [8] extended Paterson's construction to CCCs, which are collections of mutually orthogonal CCs. Although CCs have ideal AACF and aperiodic cross-correlation function (ACCF) properties, they are unable to support a maximum number of users, as the set size cannot be larger than the flock size [9–11], where the flock size denotes the number of constituent sequences in each CC. The application of CCCs has been extended to enable interference-free MC-CDMA communication by designing a fractional-delay-resilient receiver in Liu et al. [12].

DOI: http://dx.doi.org/10.5772/intechopen.86751


The binary Z-complementary sequences were first introduced by Fan et al. [13] and later extended to quadriphase Z-complementary sequences by Li et al. [14].

Recently, a construction of binary Z-complementary pairs has been reported in Adhikary et al. [15]. A direct construction of polyphase Z-complementary codes, which extends Rathinakumar's CCC construction, has been reported in Sarkar et al. [16]. Due to their favorable correlation properties, Z-complementary codes can easily be utilized in MC-CDMA systems as spreading sequences to mitigate MPI and MAI efficiently [17]. The theoretical bound given in Liu et al. [18] shows that Z-complementary codes have a much larger set size than CCCs.

IGC code sets were first proposed by Li et al. [19] based on CCCs. Their code assignment algorithm shows that CDMA systems employing the IGC codes (IGC-CDMA) outperform traditional CDMA with respect to bit error rate (BER). However, the ZCZ width of the IGC codes in [19] is fixed to the length of the elementary codes of the original CCCs, which limits the number of IGC codes. An improved construction method of IGC codes is proposed in Feng et al. [20] based on CCCs, an interleaving operation, and an orthogonal matrix, which provides a flexible choice of the ZCZ width. However, no existing construction produces IGC code sets directly, without operating on CCCs; this motivates us to give a direct construction of IGC code sets.

This chapter presents a direct method to construct IGC code sets by applying second-order GBFs. The construction is capable of generating IGC code sets with more flexible parameters, such as the ZCZ width and set size. We also relate our construction to a graph, and it is shown that the ZCZ width and set size of the IGC code set obtained by our method depend on the number of isolated vertices present in the graph obtained by deleting some vertices from the original graph.

#### 2. Preliminary

#### 2.1 Correlations of sequences

The ACCF between two sequences $\mathbf{a} = (a_0, a_1, \dots, a_{L-1})$ and $\mathbf{b} = (b_0, b_1, \dots, b_{L-1})$ is defined as follows:


$$\mathbf{C}(\mathbf{a}, \mathbf{b})(\tau) = \begin{cases} \sum_{i=0}^{L-1-\tau} a_{i+\tau} b_i^*, & 0 \le \tau < L, \\ \sum_{i=0}^{L+\tau-1} a_i b_{i-\tau}^*, & -L < \tau < 0, \\ 0, & \text{otherwise}, \end{cases} \tag{1}$$

where $\tau$ is an integer. The function defined in Eq. (1) is said to be the AACF of $\mathbf{a}$ (or $\mathbf{b}$) if $\mathbf{a} = \mathbf{b}$. The AACF of $\mathbf{a}$ at $\tau$ is denoted by $A(\mathbf{a})(\tau)$.
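Eq. (1) can be transcribed directly into code as a quick sanity check. The helper names `accf` and `aacf` below are ours, not the chapter's; the short $\pm 1$ sequences in the usage note are chosen only for illustration:

```python
# A direct transcription of the ACCF of Eq. (1); setting b = a gives the
# AACF A(a)(tau). Sequences are lists of (possibly complex) numbers.
def accf(a, b, tau):
    L = len(a)
    if 0 <= tau < L:
        return sum(a[i + tau] * b[i].conjugate() for i in range(L - tau))
    if -L < tau < 0:
        return sum(a[i] * b[i - tau].conjugate() for i in range(L + tau))
    return 0

def aacf(a, tau):
    return accf(a, a, tau)
```

For example, `aacf([1, 1, 1, -1], 1)` and `aacf([1, 1, -1, 1], 1)` evaluate to $1$ and $-1$, and one can check numerically that $\mathbf{C}(\mathbf{a}, \mathbf{b})(-\tau) = \mathbf{C}(\mathbf{b}, \mathbf{a})(\tau)^*$.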

Definition 1 An ordered set $(\mathbf{a}^0, \mathbf{a}^1, \dots, \mathbf{a}^{P-1})$ containing $P$ sequences of equal length $L$ is called a CC if

$$\sum_{j=0}^{P-1} A\left(\mathbf{a}^{j}\right)(\tau) = \begin{cases} LP, & \tau = 0, \\ 0, & \text{otherwise.} \end{cases} \tag{2}$$
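Definition 1 is easy to verify numerically. A minimal sketch using the classical length-4 binary GCP $(+,+,+,-)$, $(+,+,-,+)$, so that $P = 2$, $L = 4$, and the AACF sums should equal $LP = 8$ at $\tau = 0$ and vanish elsewhere (the helper `aacf` is ours):

```python
# Sum-of-AACFs test of Eq. (2) for a candidate CC; here the classical
# length-4 binary Golay pair, so P = 2 and L = 4.
def aacf(a, tau):
    return sum(a[i + tau] * a[i] for i in range(len(a) - tau))

cc = [[1, 1, 1, -1], [1, 1, -1, 1]]
sums = [sum(aacf(a, tau) for a in cc) for tau in range(4)]
print(sums)   # [8, 0, 0, 0] -> Eq. (2) holds
```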

Definition 2 Let $(\mathbf{C}^0, \mathbf{C}^1, \dots, \mathbf{C}^{K-1})$ be a set of $K$ ($K \le P$) CCs, where each CC contains $P$ constituent sequences of length $L$. The $\alpha$th constituent sequence of $\mathbf{C}^i$ is $\mathbf{C}_{i,\alpha} = (C_{i,\alpha,0}, C_{i,\alpha,1}, \dots, C_{i,\alpha,L-1})$, where $\alpha = 0, 1, \dots, P-1$ and $i = 0, 1, \dots, K-1$. The ACCF of the CCs is given by

$$\mathbf{C}\left(\mathbf{C}^{i},\mathbf{C}^{j}\right)(\tau) = \sum_{\alpha=0}^{P-1} \mathbf{C}\left(\mathbf{C}_{i,\alpha},\mathbf{C}_{j,\alpha}\right)(\tau) = 0, \ \forall \tau, \ i \neq j. \tag{3}$$

The code set is said to be a CCC when $K = P$.

Definition 3 Given an IGC code set $I(K, P, L, Z)$ (Li et al. [19]), $K$ denotes the number of codes, $P$ denotes the number of constituent sequences in each code, $L$ denotes the length of each constituent sequence, and $Z$ denotes the ZCZ width, where $K = PL/Z$. The $K$ codes can be divided into $P$ code groups, denoted by $I^g$ ($g = 0, 1, \dots, P-1$); each group contains $K/P = L/Z$ codes. The code set $I(K, P, L, Z)$ has the following properties:

$$\mathbf{C}(\mathbf{C}^{i},\mathbf{C}^{j})(\tau) = \begin{cases} PL, & i = j, \ \tau = 0, \\ 0, & i = j, \ 0 < |\tau| < Z, \\ 0, & i \neq j, \ \mathbf{C}^{i}, \mathbf{C}^{j} \in I^{g}, \ |\tau| < Z, \\ 0, & \mathbf{C}^{i} \in I^{g_1}, \ \mathbf{C}^{j} \in I^{g_2}, \ g_1 \neq g_2, \ |\tau| < L, \\ \text{others}, & \text{otherwise}. \end{cases} \tag{4}$$

#### 2.2 Generalized Boolean functions

Let $f : \{0,1\}^m \to \mathbb{Z}_q$ ($q$ an even number, not less than 2) be a function of $m$ variables $x_0, x_1, \dots, x_{m-1}$. The product of $k$ distinct variables $x_{i_0} x_{i_1} \cdots x_{i_{k-1}}$ ($0 \le i_0 < i_1 < \cdots < i_{k-1} \le m-1$) is called a monomial of degree $k$. The monomials $1, x_0, \dots, x_{m-1}, x_0x_1, \dots, x_{m-2}x_{m-1}, \dots, x_0x_1\cdots x_{m-1}$ form the list of the $2^m$ monomials over the variables $x_0, x_1, \dots, x_{m-1}$. A GBF $f$ can be uniquely presented as a linear combination of these $2^m$ monomials, where the coefficient of each monomial belongs to $\mathbb{Z}_q$. We denote the complex-valued sequence corresponding to the GBF $f$ by $\psi(f)$ and define it as

$$\psi(f) = \left(\omega^{f_0}, \omega^{f_1}, \dots, \omega^{f_{2^m-1}}\right),\tag{5}$$

where $f_i = f(i_0, i_1, \dots, i_{m-1})$, $\omega = \exp\left(2\pi\sqrt{-1}/q\right)$, and $(i_0, i_1, \dots, i_{m-1})$ is the binary representation of the integer $i$ ($i = \sum_{j=0}^{m-1} i_j 2^j$). Let $\mathbf{C}$ be an ordered set of $P$ Boolean functions given by $\mathbf{C} = (f^0, f^1, \dots, f^{P-1})$. Then the complex-valued code


corresponding to the set of Boolean functions $\mathbf{C}$ is denoted by $\psi(\mathbf{C})$ and given by $\psi(\mathbf{C}) = \left(\psi(f^0), \psi(f^1), \dots, \psi(f^{P-1})\right)$. The code can also be viewed as a matrix in which $\psi(f^{i-1})$ is the $i$th row.

For any given GBF $f$ of $m$ variables, the function $f(1-x_0, 1-x_1, \dots, 1-x_{m-1})$ is denoted by $\tilde{f}$. For a $\mathbb{Z}_q$-valued vector $\mathbf{e} = (e_0, e_1, \dots, e_{L-1})$, we denote by $\bar{\mathbf{e}}$ the vector $(\bar{e}_0, \bar{e}_1, \dots, \bar{e}_{L-1})$, where $\bar{e}_i = \frac{q}{2} - e_i$ ($i = 0, 1, \dots, L-1$). We also define the notations $\overleftarrow{\mathbf{a}}$ and $\mathbf{a}^*$, where $\overleftarrow{\mathbf{a}}$ is derived from $\mathbf{a}$ by reversing it and $\mathbf{a}^*$ is the complex conjugate of $\mathbf{a}$.
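The map $f \mapsto \psi(f)$ of Eq. (5) is mechanical to implement. A sketch (the helper name `psi` is ours; bit $j$ of the index $i$ is taken as $i_j$, matching $i = \sum_j i_j 2^j$):

```python
import cmath

# psi(f) of Eq. (5): evaluate f on the binary representation
# (i_0, ..., i_{m-1}) of each index i and raise omega = exp(2*pi*sqrt(-1)/q)
# to that value.
def psi(f, m, q):
    omega = cmath.exp(2j * cmath.pi / q)
    return [omega ** (f([(i >> j) & 1 for j in range(m)]) % q)
            for i in range(2 ** m)]

# Example with q = 2, m = 2: the GBF f = x0*x1 yields the length-4 Golay
# sequence (+, +, +, -).
s = psi(lambda x: x[0] * x[1], m=2, q=2)
print([round(z.real) for z in s])   # [1, 1, 1, -1]
```

For $q = 2$ this reduces to $(-1)^{f_i}$, which is why second-order GBFs over $\mathbb{Z}_2$ give binary $\pm 1$ sequences.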

#### 2.3 Quadratic forms and graphs

In this context, we introduce some lemmas and new notations which will be used for our proposed construction.

Definition 4 Let $f$ be a GBF of the variables $x_0, x_1, \dots, x_{m-1}$ over $\mathbb{Z}_q$. Consider a list of $k$ ($0 \le k < m$) indices $0 \le j_0 < j_1 < \cdots < j_{k-1} < m$, and write $\mathbf{x} = (x_{j_0}, x_{j_1}, \dots, x_{j_{k-1}})$. Let $\mathbf{c} = (c_0, c_1, \dots, c_{k-1})$ be a fixed binary vector. Then we define $\psi(f|_{\mathbf{x}=\mathbf{c}})$ as the complex-valued vector whose $i$th component is $\omega^{f(i_0, i_1, \dots, i_{m-1})}$ if $i_{j_\alpha} = c_\alpha$ for each $0 \le \alpha < k$, and zero otherwise. For $k = 0$, the complex-valued vector $\psi(f|_{\mathbf{x}=\mathbf{c}})$ is nothing but the vector $\psi(f)$ defined before.

Let $Q : \{0,1\}^m \to \mathbb{Z}_q$ be a quadratic form of the $m$ variables $x_0, x_1, \dots, x_{m-1}$. A quadratic GBF is of the form [8]

$$f = Q + \sum_{i=0}^{m-1} g_i x_i + g',\tag{6}$$


where $g'$ and $g_i \in \mathbb{Z}_q$ are arbitrary.

For a quadratic GBF $f$, $G(f)$ denotes the graph of $f$. $G(f)$ is obtained by joining the vertices $x_i$ and $x_j$ by an edge if there is a term $q_{i,j} x_i x_j$ ($0 \le i < j \le m-1$) in the GBF $f$ with $q_{i,j} \neq 0$ ($q_{i,j} \in \mathbb{Z}_q$). Consider a function $f|_{x_j=c}$, derived by fixing $x_j$ at $c$ in $f$. The graph of $f|_{x_j=c}$ is denoted by $G(f|_{x_j=c})$ and is obtained by deleting the vertex $x_j$ and all the edges connected to $x_j$ from $G(f)$. Then $G(f|_{\mathbf{x}=\mathbf{c}})$ is obtained from $G(f)$ by deleting $x_{j_0}, x_{j_1}, \dots, x_{j_{k-1}}$. $G(f|_{\mathbf{x}=\mathbf{c}})$ represents the same graph for all $\mathbf{c} \in \{0,1\}^k$. Therefore, for all $\mathbf{c}$ in $\{0,1\}^k$, $f|_{\mathbf{x}=\mathbf{c}}$ has the same quadratic form. Note that the quadratic forms of $f$ and $\tilde{f}$ are the same; thus, they are associated with the same graph.
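The graph operations used here (vertex deletion from $G(f)$ and testing whether the residual graph is a path) are straightforward to sketch. The helper names below are ours, and edges are stored as index pairs:

```python
# G(f) for a quadratic GBF: one vertex per variable, one edge (i, j) per
# nonzero quadratic term q_ij * x_i * x_j. Deleting a set of vertices
# drops them together with all their incident edges.
def delete_vertices(vertices, edges, deleted):
    return vertices - deleted, {e for e in edges if deleted.isdisjoint(e)}

# A path has |V| - 1 edges and exactly two degree-1 end vertices, and a
# walk started at one end visits every vertex.
def is_path(vertices, edges):
    if len(vertices) == 1:
        return not edges
    if len(edges) != len(vertices) - 1:
        return False
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    ends = [v for v in vertices if len(adj[v]) == 1]
    if len(ends) != 2:
        return False
    prev, cur, seen = None, ends[0], 1
    while True:
        step = [w for w in adj[cur] if w != prev]
        if not step:
            return seen == len(vertices)
        prev, cur = cur, step[0]
        seen += 1

# Example: for f = (q/2)(x0x1 + x1x2 + x2x3 + x0x2), G(f) is not a path,
# but deleting the vertex x0 leaves the path x1 - x2 - x3.
V, E = {0, 1, 2, 3}, {(0, 1), (1, 2), (2, 3), (0, 2)}
print(is_path(V, E), is_path(*delete_vertices(V, E, {0})))   # False True
```

This is exactly the situation exploited below: fixing the variables $\mathbf{x}$ deletes their vertices, and the construction asks the residual graph to be a path.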

Lemma 1 Construction of CCC [8].

Let $f: \{0,1\}^m \to \mathbb{Z}_q$ be a GBF and $\bar{f}$ its reversal. Assume that $G(f|_{\mathbf{x}=\mathbf{c}})$ is a path for each $\mathbf{c} \in \{0,1\}^k$ and that the edges in the path all have the same weight $q/2$. Let $(b_0, b_1, \dots, b_{k-1})$ be the binary representation of the integer $t$. Define the ordered sets of GBFs $C^t$ to be

$$\left\{ f + \frac{q}{2} \left( \sum\_{a=0}^{k-1} d\_a \mathbf{x}\_{\mathbf{j}\_a} + \sum\_{a=0}^{k-1} b\_a \mathbf{x}\_{\mathbf{j}\_a} + d \mathbf{x}\_{\mathbf{\gamma}} \right) : d, d\_a \in \{0, 1\} \right\}, \tag{7}$$

and the ordered set of GBFs $\bar{C}^t$ to be

$$\left\{ \bar{f} + \frac{q}{2} \left( \sum\_{a=0}^{k-1} d\_a \overline{\boldsymbol{x}}\_{\boldsymbol{j}\_a} + \sum\_{a=0}^{k-1} b\_a \overline{\boldsymbol{x}}\_{\boldsymbol{j}\_a} + \overline{d} \boldsymbol{x}\_{\boldsymbol{\gamma}} \right) : \overline{d}, d\_a \in \{0, 1\} \right\}, \tag{8}$$


where $x_\gamma$ is one of the end vertices in the path. Then


$$\left\{\psi(\mathbf{C}^{t}) : 0 \le t < 2^{k}\right\} \cup \left\{\psi^{*}\left(\bar{\mathbf{C}}^{t}\right) : 0 \le t < 2^{k}\right\} \tag{9}$$

generates a CCC, where $\psi^*(\cdot)$ denotes the complex conjugate of $\psi(\cdot)$.
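Lemma 1 can be checked numerically for a small case. The sketch below takes $q = 2$, $m = 3$, $f = x_0x_1 + x_1x_2$ (whose graph is the path $x_0 - x_1 - x_2$), $k = 0$ and $x_\gamma = x_0$; we read the bar on $d$ in Eq. (8) as binary complement ($\bar{d} = 1 - d$), and $\psi^*$ acts trivially here since all entries are $\pm 1$. The helper names are ours:

```python
# psi over Z_2: map a GBF through omega = -1 (Eq. (5) with q = 2).
def seq(f, m=3):
    return [(-1) ** (f([(i >> j) & 1 for j in range(m)]) % 2)
            for i in range(2 ** m)]

def accf(a, b, tau):          # Eq. (1), restricted to tau >= 0
    return sum(a[i + tau] * b[i] for i in range(len(a) - tau))

f  = lambda x: x[0] * x[1] + x[1] * x[2]
fr = lambda x: f([1 - v for v in x])          # the reversal of f

C0 = [seq(f), seq(lambda x: f(x) + x[0])]     # Eq. (7): d = 0, 1
C1 = [seq(lambda x: fr(x) + x[0]), seq(fr)]   # Eq. (8): d-bar = 1 - d

for cc in (C0, C1):           # Eq. (2): each ordered set is a CC
    for tau in range(1, 8):
        assert sum(accf(a, a, tau) for a in cc) == 0

for tau in range(8):          # Eq. (3): the two CCs are mutually orthogonal
    assert sum(accf(a, b, tau) for a, b in zip(C0, C1)) == 0

print("CCC with 2 codes, flock size 2, length 8 verified")
```

Here $2^{k+1} = 2$ codes of flock size $P = 2$ and length $L = 8$ are produced, i.e., a CCC since the set size equals the flock size.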

#### 3. Construction of IGC code set from GBFs

In this section, we propose a direct construction of IGC code sets by using Boolean algebra and graph theory. Before presenting the main theorem of the construction, we define some sets and vectors and present some lemmas. First, we define the notations that will be used throughout the construction:

• $\mathbf{x} = \left(x_{j_0}, x_{j_1}, \dots, x_{j_{k-1}}\right) \in \mathbb{Z}_2^k$, $\mathbf{x}' = \left(x_{m-p}, x_{m-p+1}, \dots, x_{m-1}\right) \in \mathbb{Z}_2^p$.

• $\mathbf{b} = (b_0, b_1, \ldots, b_{k-1})$, $\mathbf{b}_i = (b_{i,0}, b_{i,1}, \ldots, b_{i,k-1}) \in \mathbb{Z}_2^k$ $\left(i = 1, 2, \ldots, 2^k\right)$.

• $\mathbf{d} = (d_0, d_1, \ldots, d_{k-1}) \in \mathbb{Z}_2^k$, $\mathbf{d}' = \left(d'_1, d'_2, \ldots, d'_p\right)$ and $\mathbf{d}'_j = \left(d'_{j,1}, d'_{j,2}, \ldots, d'_{j,p}\right) \in \mathbb{Z}_2^p$ $\left(j = 1, 2, \ldots, 2^p\right)$.

• $\Gamma = \left(g_{m-p}, g_{m-p+1}, \ldots, g_{m-1}\right) \in \mathbb{Z}_q^p$.


Let $f$ be a GBF of $m$ variables $x_0, x_1, \ldots, x_{m-1}$ over $\mathbb{Z}_q$. For $\mathbf{b} \in \mathbb{Z}_2^k$ and $\mathbf{d}' \in \mathbb{Z}_2^p$, we define the order sets $S_{\mathbf{b}\mathbf{d}'}$ and $\tilde{S}_{\mathbf{b}\mathbf{d}'}$ corresponding to the GBF $f$ as follows:

$$S_{\mathbf{b}\mathbf{d}'} = \left\{ f + \frac{q}{2} \left( \sum_{\alpha=0}^{k-1} d_\alpha x_{j_\alpha} + \sum_{\alpha=0}^{k-1} b_\alpha x_{j_\alpha} + \sum_{a=1}^{p} d'_a x_{m-p+a-1} + d x_\gamma \right) : d, d_\alpha \in \{0, 1\} \right\}, \tag{10}$$

or

$$S_{\mathbf{b}\mathbf{d}'} = \left\{ f + \frac{q}{2} \left( (\mathbf{d} + \mathbf{b}) \cdot \mathbf{x} + \mathbf{d}' \cdot \mathbf{x}' + d x_\gamma \right) : d \in \mathbb{Z}_2, \mathbf{d} \in \mathbb{Z}_2^k \right\}, \tag{11}$$

and

$$\tilde{S}_{\mathbf{b}\mathbf{d}'} = \left\{ \tilde{f} + \frac{q}{2} \left( (\mathbf{d} + \mathbf{b}) \cdot \bar{\mathbf{x}} + \mathbf{d}' \cdot \bar{\mathbf{x}}' + \bar{d} x_\gamma \right) : d \in \mathbb{Z}_2, \mathbf{d} \in \mathbb{Z}_2^k \right\}. \tag{12}$$

From the above expressions, it is clear that each of the order sets $S_{\mathbf{b}\mathbf{d}'}$ and $\tilde{S}_{\mathbf{b}\mathbf{d}'}$ contains $2^{k+1}$ GBFs.
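A small sketch can make the counting concrete. The following Python code (our own illustration; the toy GBF, the parameter names, and the truth-table representation are ours, not the chapter's) enumerates the order set of Eq. (11) for a small example and confirms it contains $2^{k+1}$ distinct GBFs:

```python
from itertools import product

def order_set(f, m, q, j_idx, b, dprime, xprime_idx, gamma):
    """Enumerate the order set of Eq. (11): all GBFs
    f + (q/2)*((d+b).x + d'.x' + d*x_gamma), d in Z_2, d-vector in Z_2^k,
    each represented by its truth table over {0,1}^m."""
    k = len(j_idx)
    members = set()
    for dvec in product([0, 1], repeat=k):
        for d in (0, 1):
            table = []
            for n in range(2 ** m):
                x = [(n >> i) & 1 for i in range(m)]  # x[i] = x_i, x_0 as LSB
                extra = sum(((dvec[a] + b[a]) % 2) * x[j_idx[a]] for a in range(k))
                extra += sum(dprime[a] * x[xprime_idx[a]] for a in range(len(xprime_idx)))
                extra += d * x[gamma]
                table.append((f(x) + (q // 2) * extra) % q)
            members.add(tuple(table))
    return members

# Toy GBF over Z_4 with m = 3, one restricted variable x_0 (k = 1),
# no x' part (p = 0), and x_gamma = x_2:
f = lambda x: (2 * x[1] * x[2] + x[0]) % 4
S = order_set(f, 3, 4, j_idx=[0], b=[0], dprime=[], xprime_idx=[], gamma=2)
print(len(S))  # 2^(k+1) = 4 distinct GBFs
```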

**Lemma 2** Let $f$ be a GBF of $m$ variables with the property that for each $\mathbf{c} \in \{0,1\}^k$, $G(f|_{\mathbf{x}=\mathbf{c}})$ contains a path over $(m - k - p)$ $(0 \le k < m,\ p \ge 0)$ vertices and $p$ isolated vertices labeled $m-p, m-p+1, \ldots, m-1$, such that $0 \le k + p \le m - 2$ $(m \ge 2)$. Further, assume that there were no edges between the deleted vertices (as defined before, the restricted variables in a GBF are treated as the vertices to be deleted in the graph of the Boolean function) and the isolated vertices before the deletion. Let $x_\gamma$ be one of the end vertices of the path in $G(f|_{\mathbf{x}=\mathbf{c}})$, and let the weight of each edge in the path be $q/2$. Let $a'_1, a'_2, \ldots, a'_{2^{m-p}}$ be the binary vector representations of $0, 1, \ldots, 2^{m-p}-1$ of length $m-p$, and $r_1, r_2, \ldots, r_{2^p}$ be the binary vector representations of $0, 1, \ldots, 2^p-1$ of length $p$. Also let $l$ be the non-negative integer $l = \sum_{i=0}^{k-1} d_i 2^i + d 2^k$. Then for any choice of $g', g_j \in \mathbb{Z}_q$, the codes $\psi(S_{\mathbf{b}\mathbf{d}'})$ and $\psi(\tilde{S}_{\mathbf{b}\mathbf{d}'})$ can be expressed as

#### Table 1.
Truth table over m variables (rows $a_1, a_2, \ldots, a_{2^m}$).

#### Table 2.
Truth table over m variables, each row written as the concatenation $a'_j r_{j'}$ ($j = 1, 2, \ldots, 2^{m-p}$; $j' = 1, 2, \ldots, 2^p$).

#### Table 3.
Truth table over $m - p$ variables (rows $a'_1, a'_2, \ldots, a'_{2^{m-p}}$).

$$\begin{split} \psi(S_{\mathbf{b}\mathbf{d}'}) &= \left[ \psi(F_{\mathbf{b}l})\, \omega^{\left(\Gamma + \frac{q}{2}\mathbf{d}'\right) \cdot r_{j'}} \right], \quad l = 0, 1, \ldots, 2^{k+1} - 1,\ j' = 1, 2, \ldots, 2^p, \\ \psi\left(\tilde{S}_{\mathbf{b}\mathbf{d}'}\right) &= \left[ \psi\left(F'_{\mathbf{b}l}\right) \omega^{\left(\Gamma + \frac{q}{2}\mathbf{d}'\right) \cdot \bar{r}_{j'}} \right], \quad l = 0, 1, \ldots, 2^{k+1} - 1,\ j' = 1, 2, \ldots, 2^p, \end{split} \tag{13}$$

where

$$\begin{split} \psi(F_{\mathbf{b}l}) &= \left( \omega^{F_{\mathbf{b}l}(a'_1)}, \omega^{F_{\mathbf{b}l}(a'_2)}, \ldots, \omega^{F_{\mathbf{b}l}(a'_{2^{m-p}})} \right), \\ \psi\left(F'_{\mathbf{b}l}\right) &= \left( \omega^{F'_{\mathbf{b}l}(a'_1)}, \omega^{F'_{\mathbf{b}l}(a'_2)}, \ldots, \omega^{F'_{\mathbf{b}l}(a'_{2^{m-p}})} \right), \end{split}$$

$$\begin{split} F_{\mathbf{b}l} &= f' + \frac{q}{2}\left( (\mathbf{d} + \mathbf{b}) \cdot \mathbf{x} + d x_\gamma \right), \\ F'_{\mathbf{b}l} &= \tilde{f}' + \frac{q}{2}\left( (\mathbf{d} + \mathbf{b}) \cdot \bar{\mathbf{x}} + \bar{d} x_\gamma \right), \end{split} \tag{14}$$

$$\boldsymbol{f}^{\prime} = \mathbf{Q} + \sum\_{i=0}^{m-p-1} g\_{i}\mathbf{x}\_{i} + \mathbf{g}^{\prime}.$$

**Proof 1** Since there are no edges between the deleted and isolated vertices before the deletion of the $k$ vertices $x_{j_0}, x_{j_1}, \ldots, x_{j_{k-1}}$, the quadratic form $Q$ present in $G(f)$ can be expressed as

$$Q = \frac{q}{2} \sum_{\alpha=0}^{m-k-p-2} x_{\pi(\alpha)} x_{\pi(\alpha+1)} + \sum_{0 \le \mu < \nu \le k-1} b'_{j_\mu j_\nu} x_{j_\mu} x_{j_\nu} + \sum_{\alpha=0}^{m-k-p-1} \sum_{\sigma=0}^{k-1} c'_{\pi(\alpha) j_\sigma} x_{\pi(\alpha)} x_{j_\sigma}, \tag{15}$$

where $\pi$ is a permutation over the set $\{0, 1, \ldots, m-1\} \setminus \left(\{j_0, j_1, \ldots, j_{k-1}\} \cup \{m-p, m-p+1, \ldots, m-1\}\right)$, $b'_{j_\mu, j_\nu}\ (\in \mathbb{Z}_q)$ denotes the weight of the edge between the vertices $x_{j_\mu}$ and $x_{j_\nu}$, and $c'_{\pi(\alpha), j_\sigma}\ (\in \mathbb{Z}_q)$ denotes the weight between the vertices $x_{\pi(\alpha)}$ and $x_{j_\sigma}$. Therefore, $f'$ is a GBF of the $m-p$ variables $x_0, x_1, \ldots, x_{m-p-1}$, and the GBF $f$ of $m$ variables can be expressed as

$$f = f' + \sum\_{i=m-p}^{m-1} \mathbf{g}\_i \mathbf{x}\_i \tag{16}$$

or

$$f = f' + \Gamma \cdot \mathbf{x}'.\tag{17}$$
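The graph bookkeeping behind this decomposition can be sketched concretely. The Python snippet below (our own illustration; the dictionary representation of $G(f)$ is ours) builds the graph of the quadratic part of the GBF $2x_1x_2 + 3x_0(x_1+x_2) + x_0 + x_2 + x_3 + 1$ over $\mathbb{Z}_4$, used later in Example 1, and checks that deleting $x_0$ leaves the path $x_1$-$x_2$ and the isolated vertex $x_3$:

```python
# Quadratic terms 2*x1*x2, 3*x0*x1, 3*x0*x2 give the edges of G(f);
# linear and constant terms contribute no edges.
edges = {frozenset({1, 2}): 2, frozenset({0, 1}): 3, frozenset({0, 2}): 3}
vertices = {0, 1, 2, 3}

def delete_vertex(vertices, edges, v):
    """Restricting x_v deletes the vertex v and every edge incident to it."""
    return vertices - {v}, {e: w for e, w in edges.items() if v not in e}

V, E = delete_vertex(vertices, edges, 0)
isolated = sorted(v for v in V if not any(v in e for e in E))
print(sorted(V), dict(E), isolated)  # path x1-x2 remains; x3 is isolated
```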

A Direct Construction of Intergroup Complementary Code Set for CDMA DOI: http://dx.doi.org/10.5772/intechopen.86751

Now we define a GBF $F_l^{\mathbf{b}}$ over $m$ variables by

$$\begin{split} F\_l^\mathbf{b} &= f + \frac{q}{2} \left( (\mathbf{d} + \mathbf{b}) \cdot \mathbf{x} + \mathbf{d}' \cdot \mathbf{x}' + d x\_\gamma \right) \\ &= F\_{\mathbf{b}l} + \left( \Gamma + \frac{q}{2} \mathbf{d}' \right) \cdot \mathbf{x}'. \end{split} \tag{18}$$

Let $a_1, a_2, \ldots, a_{2^m}$ be the binary vector representations of $0, 1, \ldots, 2^m - 1$ of length $m$, given in Table 1. The truth table given in Table 1 can also be expressed as the truth table given in Table 2. Table 3 contains a truth table over $m - p$ variables.

From Tables 1–3, it is observed that the code $\psi(S_{\mathbf{b}\mathbf{d}'})$ can be expressed as

$$\psi(S_{\mathbf{b}\mathbf{d}'}) = \left[ \omega^{F_l^{\mathbf{b}}(a_j)} \right], \quad l = 0, 1, \ldots, 2^{k+1} - 1,\ j = 1, 2, \ldots, 2^m \tag{19}$$



or

$$\psi(S_{\mathbf{b}\mathbf{d}'}) = \left[ \psi(F_{\mathbf{b}l})\, \omega^{\left(\Gamma + \frac{q}{2}\mathbf{d}'\right) \cdot r_{j'}} \right], \quad l = 0, 1, \ldots, 2^{k+1} - 1,\ j' = 1, 2, \ldots, 2^p, \tag{20}$$

where

$$\psi(F_{\mathbf{b}l}) = \left( \omega^{F_{\mathbf{b}l}(a'_1)}, \omega^{F_{\mathbf{b}l}(a'_2)}, \ldots, \omega^{F_{\mathbf{b}l}(a'_{2^{m-p}})} \right), \quad l = 0, 1, \ldots, 2^{k+1} - 1. \tag{21}$$

Similarly, we can show that

$$\psi\left(\tilde{S}_{\mathbf{b}\mathbf{d}'}\right) = \left[ \psi\left(F'_{\mathbf{b}l}\right) \omega^{\left(\Gamma + \frac{q}{2}\mathbf{d}'\right) \cdot \bar{r}_{j'}} \right], \quad l = 0, 1, \ldots, 2^{k+1} - 1,\ j' = 1, 2, \ldots, 2^p. \tag{22}$$

Example 1 Let f be a GBF of four variables over Z4, given by

$$f(\mathbf{x}\_0, \mathbf{x}\_1, \mathbf{x}\_2, \mathbf{x}\_3) = 2\mathbf{x}\_1\mathbf{x}\_2 + 3\mathbf{x}\_0(\mathbf{x}\_1 + \mathbf{x}\_2) + \mathbf{x}\_0 + \mathbf{x}\_2 + \mathbf{x}\_3 + \mathbf{1}.\tag{23}$$

From the graph $G(f)$, given in Figure 1, it is clear that after the deletion of the vertex $x_0$, the resultant graph contains a path over the vertices $x_1, x_2$ and an isolated vertex $x_3$. For this example, $k = 1$ and $p = 1$. Therefore, the vectors $\mathbf{b}$, $\mathbf{d}$, $\mathbf{d}'$, $\mathbf{x}$, $\Gamma$ and $\mathbf{x}'$ are of length one and belong to $\mathbb{Z}_2$.

Hence, $\mathbf{b} = (b_{j_0}) = (b_0) = b_0$, $\mathbf{d} = (d_0) = d_0$, $\mathbf{d}' = (d'_1) = d'_1$, $\mathbf{x} = (x_{j_0}) = (x_0) = x_0$, $\Gamma = (g_{m-p}) = (g_3) = (1)$, and $\mathbf{x}' = (x_{m-p}) = (x_3) = x_3$. The sets of Boolean functions $S_{b_0 d'_1}$ and $\tilde{S}_{b_0 d'_1}$ are given below:

$$S_{b_0 d'_1} = \left\{ f + \frac{q}{2} \left( d_0 x_0 + b_0 x_0 + d'_1 x_3 + d x_2 \right) : d, d_0 \in \{0, 1\} \right\}, \tag{24}$$

and

$$\tilde{S}_{b_0 d'_1} = \left\{ \tilde{f} + \frac{q}{2} \left( d_0 \bar{x}_0 + b_0 \bar{x}_0 + d'_1 \bar{x}_3 + \bar{d} x_2 \right) : d, d_0 \in \{0, 1\} \right\}. \tag{25}$$

The GBFs $F_{b_0 l}$ and $F'_{b_0 l}$ are given by

$$F\_{bol} = 2\mathbf{x}\_1\mathbf{x}\_2 + 3\mathbf{x}\_0(\mathbf{x}\_1 + \mathbf{x}\_2) + \mathbf{x}\_0 + \mathbf{x}\_2 + \mathbf{1} + \frac{q}{2}(d\_0\mathbf{x}\_0 + b\_0\mathbf{x}\_0 + d\mathbf{x}\_2) \tag{26}$$

Figure 1. The graph of the GBF $2x_1x_2 + 3x_0(x_1 + x_2) + x_0 + x_2 + x_3 + 1$.

and

$$F'_{b_0 l} = 2\bar{x}_1 \bar{x}_2 + 3\bar{x}_0 (\bar{x}_1 + \bar{x}_2) + \bar{x}_0 + \bar{x}_2 + 1 + \frac{q}{2} \left( d_0 \bar{x}_0 + b_0 \bar{x}_0 + \bar{d} x_2 \right), \tag{27}$$

where $l = 0, 1, 2, 3$.
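The exponent sequences of the listed codes can be reproduced numerically. The Python sketch below (our own; the column ordering of $a'_j$, with $x_0$ taken as the least significant bit, is an assumption) evaluates the restricted GBF $f' = 2x_1x_2 + 3x_0(x_1+x_2) + x_0 + x_2 + 1$ of this example over $\mathbb{Z}_4$ and yields the exponents of the first row of $\psi(C^0)$:

```python
def gbf_exponents(f, m, q):
    """Exponent sequence (f(a_1), ..., f(a_{2^m})) mod q; a_j is the binary
    expansion of j-1 with x_0 taken as the least significant bit (assumption)."""
    return [f([(n >> i) & 1 for i in range(m)]) % q for n in range(2 ** m)]

# Restricted GBF of Example 1 (the x_3 term removed; b = d = d' = 0 case):
fp = lambda x: 2 * x[1] * x[2] + 3 * x[0] * (x[1] + x[2]) + x[0] + x[2] + 1
row = gbf_exponents(fp, 3, 4)
print(row)  # exponents of the first row of psi(C^0): [1, 2, 1, 1, 2, 2, 0, 3]
```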

The codes corresponding to the sets of Boolean functions are listed below:

$$1)\quad \psi(S_{00}) = \begin{bmatrix} \omega^1\omega^2\omega^1\omega^1\omega^2\omega^2\omega^0\omega^3\omega^2\omega^3\omega^2\omega^2\omega^3\omega^3\omega^1\omega^0 \\ \omega^1\omega^0\omega^1\omega^3\omega^2\omega^0\omega^0\omega^1\omega^2\omega^1\omega^2\omega^0\omega^3\omega^1\omega^1\omega^2 \\ \omega^1\omega^2\omega^1\omega^1\omega^0\omega^0\omega^2\omega^1\omega^2\omega^3\omega^2\omega^2\omega^1\omega^1\omega^3\omega^2 \\ \omega^1\omega^0\omega^1\omega^3\omega^0\omega^2\omega^2\omega^3\omega^2\omega^1\omega^2\omega^0\omega^1\omega^3\omega^3\omega^0 \end{bmatrix} = \left[ \omega^0 \psi\left(C^0\right) \quad \omega^1 \psi\left(C^0\right) \right] \tag{28}$$

where

$$\psi\left(C^0\right) = \begin{bmatrix} \omega^1\omega^2\omega^1\omega^1\omega^2\omega^2\omega^0\omega^3 \\ \omega^1\omega^0\omega^1\omega^3\omega^2\omega^0\omega^0\omega^1 \\ \omega^1\omega^2\omega^1\omega^1\omega^0\omega^0\omega^2\omega^1 \\ \omega^1\omega^0\omega^1\omega^3\omega^0\omega^2\omega^2\omega^3 \end{bmatrix}.$$

$$2)\quad \psi\left(\tilde{S}_{00}\right) = \begin{bmatrix} \omega^0\omega^1\omega^3\omega^3\omega^0\omega^0\omega^1\omega^0\omega^3\omega^0\omega^2\omega^2\omega^3\omega^3\omega^0\omega^3 \\ \omega^2\omega^1\omega^1\omega^3\omega^2\omega^0\omega^3\omega^0\omega^1\omega^0\omega^0\omega^2\omega^1\omega^3\omega^2\omega^3 \\ \omega^0\omega^1\omega^3\omega^3\omega^2\omega^2\omega^3\omega^2\omega^3\omega^0\omega^2\omega^2\omega^1\omega^1\omega^2\omega^1 \\ \omega^2\omega^1\omega^1\omega^3\omega^0\omega^2\omega^1\omega^2\omega^1\omega^0\omega^0\omega^2\omega^3\omega^1\omega^0\omega^1 \end{bmatrix} = \left[ \omega^1 \psi\left(\tilde{C}^0\right) \quad \omega^0 \psi\left(\tilde{C}^0\right) \right] \tag{29}$$

where

$$\psi\left(\tilde{C}^0\right) = \begin{bmatrix} \omega^3\omega^0\omega^2\omega^2\omega^3\omega^3\omega^0\omega^3 \\ \omega^1\omega^0\omega^0\omega^2\omega^1\omega^3\omega^2\omega^3 \\ \omega^3\omega^0\omega^2\omega^2\omega^1\omega^1\omega^2\omega^1 \\ \omega^1\omega^0\omega^0\omega^2\omega^3\omega^1\omega^0\omega^1 \end{bmatrix}.$$

$$3)\quad \psi(S_{01}) = \begin{bmatrix} \omega^1\omega^2\omega^1\omega^1\omega^2\omega^2\omega^0\omega^3\omega^0\omega^1\omega^0\omega^0\omega^1\omega^1\omega^3\omega^2 \\ \omega^1\omega^0\omega^1\omega^3\omega^2\omega^0\omega^0\omega^1\omega^0\omega^3\omega^0\omega^2\omega^1\omega^3\omega^3\omega^0 \\ \omega^1\omega^2\omega^1\omega^1\omega^0\omega^0\omega^2\omega^1\omega^0\omega^1\omega^0\omega^0\omega^3\omega^3\omega^1\omega^0 \\ \omega^1\omega^0\omega^1\omega^3\omega^0\omega^2\omega^2\omega^3\omega^0\omega^3\omega^0\omega^2\omega^3\omega^1\omega^1\omega^2 \end{bmatrix} = \left[ \omega^0 \psi\left(C^0\right) \quad \omega^3 \psi\left(C^0\right) \right] \tag{30}$$

$$4)\quad \psi\left(\tilde{S}_{01}\right) = \begin{bmatrix} \omega^2\omega^3\omega^1\omega^1\omega^2\omega^2\omega^3\omega^2\omega^3\omega^0\omega^2\omega^2\omega^3\omega^3\omega^0\omega^3 \\ \omega^0\omega^3\omega^3\omega^1\omega^0\omega^2\omega^1\omega^2\omega^1\omega^0\omega^0\omega^2\omega^1\omega^3\omega^2\omega^3 \\ \omega^2\omega^3\omega^1\omega^1\omega^0\omega^0\omega^1\omega^0\omega^3\omega^0\omega^2\omega^2\omega^1\omega^1\omega^2\omega^1 \\ \omega^0\omega^3\omega^3\omega^1\omega^2\omega^0\omega^3\omega^0\omega^1\omega^0\omega^0\omega^2\omega^3\omega^1\omega^0\omega^1 \end{bmatrix} = \left[ \omega^3 \psi\left(\tilde{C}^0\right) \quad \omega^0 \psi\left(\tilde{C}^0\right) \right] \tag{31}$$

$$5)\quad \psi(S_{10}) = \begin{bmatrix} \omega^1\omega^0\omega^1\omega^3\omega^2\omega^0\omega^0\omega^1\omega^2\omega^1\omega^2\omega^0\omega^3\omega^1\omega^1\omega^2 \\ \omega^1\omega^2\omega^1\omega^1\omega^2\omega^2\omega^0\omega^3\omega^2\omega^3\omega^2\omega^2\omega^3\omega^3\omega^1\omega^0 \\ \omega^1\omega^0\omega^1\omega^3\omega^0\omega^2\omega^2\omega^3\omega^2\omega^1\omega^2\omega^0\omega^1\omega^3\omega^3\omega^0 \\ \omega^1\omega^2\omega^1\omega^1\omega^0\omega^0\omega^2\omega^1\omega^2\omega^3\omega^2\omega^2\omega^1\omega^1\omega^3\omega^2 \end{bmatrix} = \left[ \omega^0 \psi\left(C^1\right) \quad \omega^1 \psi\left(C^1\right) \right] \tag{32}$$


where

$$\psi\left(\mathbf{C}^{1}\right)=\begin{bmatrix}
\omega^{1}\omega^{0}\omega^{1}\omega^{3}\omega^{2}\omega^{0}\omega^{0}\omega^{1}\\
\omega^{1}\omega^{2}\omega^{1}\omega^{1}\omega^{2}\omega^{2}\omega^{0}\omega^{3}\\
\omega^{1}\omega^{0}\omega^{1}\omega^{3}\omega^{0}\omega^{2}\omega^{2}\omega^{3}\\
\omega^{1}\omega^{2}\omega^{1}\omega^{1}\omega^{0}\omega^{0}\omega^{2}\omega^{1}
\end{bmatrix}.$$

6)
$$\psi\left(\tilde{\mathbf{S}}_{10}\right)=\begin{bmatrix}
\omega^{2}\omega^{1}\omega^{1}\omega^{3}\omega^{2}\omega^{0}\omega^{3}\omega^{0}\,\omega^{1}\omega^{0}\omega^{0}\omega^{2}\omega^{1}\omega^{3}\omega^{2}\omega^{3}\\
\omega^{0}\omega^{1}\omega^{3}\omega^{3}\omega^{0}\omega^{0}\omega^{1}\omega^{0}\,\omega^{3}\omega^{0}\omega^{2}\omega^{2}\omega^{3}\omega^{3}\omega^{0}\omega^{3}\\
\omega^{2}\omega^{1}\omega^{1}\omega^{3}\omega^{0}\omega^{2}\omega^{1}\omega^{2}\,\omega^{1}\omega^{0}\omega^{0}\omega^{2}\omega^{3}\omega^{1}\omega^{0}\omega^{1}\\
\omega^{0}\omega^{1}\omega^{3}\omega^{3}\omega^{2}\omega^{2}\omega^{3}\omega^{2}\,\omega^{3}\omega^{0}\omega^{2}\omega^{2}\omega^{1}\omega^{1}\omega^{2}\omega^{1}
\end{bmatrix}=\left[\,\omega^{1}\psi\left(\tilde{\mathbf{C}}^{1}\right)\;\;\omega^{0}\psi\left(\tilde{\mathbf{C}}^{1}\right)\right]\tag{33}$$

where

$$\psi\left(\tilde{\mathbf{C}}^{1}\right)=\begin{bmatrix}
\omega^{1}\omega^{0}\omega^{0}\omega^{2}\omega^{1}\omega^{3}\omega^{2}\omega^{3}\\
\omega^{3}\omega^{0}\omega^{2}\omega^{2}\omega^{3}\omega^{3}\omega^{0}\omega^{3}\\
\omega^{1}\omega^{0}\omega^{0}\omega^{2}\omega^{3}\omega^{1}\omega^{0}\omega^{1}\\
\omega^{3}\omega^{0}\omega^{2}\omega^{2}\omega^{1}\omega^{1}\omega^{2}\omega^{1}
\end{bmatrix}.$$

7)
$$\psi\left(\mathbf{S}_{11}\right)=\begin{bmatrix}
\omega^{1}\omega^{0}\omega^{1}\omega^{3}\omega^{2}\omega^{0}\omega^{0}\omega^{1}\,\omega^{0}\omega^{3}\omega^{0}\omega^{2}\omega^{1}\omega^{3}\omega^{3}\omega^{0}\\
\omega^{1}\omega^{2}\omega^{1}\omega^{1}\omega^{2}\omega^{2}\omega^{0}\omega^{3}\,\omega^{0}\omega^{1}\omega^{0}\omega^{0}\omega^{1}\omega^{1}\omega^{3}\omega^{2}\\
\omega^{1}\omega^{0}\omega^{1}\omega^{3}\omega^{0}\omega^{2}\omega^{2}\omega^{3}\,\omega^{0}\omega^{3}\omega^{0}\omega^{2}\omega^{3}\omega^{1}\omega^{1}\omega^{2}\\
\omega^{1}\omega^{2}\omega^{1}\omega^{1}\omega^{0}\omega^{0}\omega^{2}\omega^{1}\,\omega^{0}\omega^{1}\omega^{0}\omega^{0}\omega^{3}\omega^{3}\omega^{1}\omega^{0}
\end{bmatrix}=\left[\,\omega^{0}\psi\left(\mathbf{C}^{1}\right)\;\;\omega^{3}\psi\left(\mathbf{C}^{1}\right)\right]\tag{34}$$

8)
$$\psi\left(\tilde{\mathbf{S}}_{11}\right)=\begin{bmatrix}
\omega^{0}\omega^{3}\omega^{3}\omega^{1}\omega^{0}\omega^{2}\omega^{1}\omega^{2}\,\omega^{1}\omega^{0}\omega^{0}\omega^{2}\omega^{1}\omega^{3}\omega^{2}\omega^{3}\\
\omega^{2}\omega^{3}\omega^{1}\omega^{1}\omega^{2}\omega^{2}\omega^{3}\omega^{2}\,\omega^{3}\omega^{0}\omega^{2}\omega^{2}\omega^{3}\omega^{3}\omega^{0}\omega^{3}\\
\omega^{0}\omega^{3}\omega^{3}\omega^{1}\omega^{2}\omega^{0}\omega^{3}\omega^{0}\,\omega^{1}\omega^{0}\omega^{0}\omega^{2}\omega^{3}\omega^{1}\omega^{0}\omega^{1}\\
\omega^{2}\omega^{3}\omega^{1}\omega^{1}\omega^{0}\omega^{0}\omega^{1}\omega^{0}\,\omega^{3}\omega^{0}\omega^{2}\omega^{2}\omega^{1}\omega^{1}\omega^{2}\omega^{1}
\end{bmatrix}=\left[\,\omega^{3}\psi\left(\tilde{\mathbf{C}}^{1}\right)\;\;\omega^{0}\psi\left(\tilde{\mathbf{C}}^{1}\right)\right]\tag{35}$$

Theorem 1 Let f be a GBF over m variables as defined in Lemma 1 and Lemma 2. Suppose $I^{0}, I^{1}, \ldots, I^{2^{k+1}-1}$ are a list of $2^{k+1}$ code groups defined by

A Direct Construction of Intergroup Complementary Code Set for CDMA DOI: http://dx.doi.org/10.5772/intechopen.86751

$$I^{t}=\left\{I_{s}^{t}:0\le s<2^{p}\right\}=\left\{\psi\left(\mathbf{S}_{\mathbf{b}\mathbf{d}'}\right):\mathbf{d}'\in\{0,1\}^{p}\right\}\tag{36}$$

and


$$I^{2^{k}+t}=\left\{I_{s}^{2^{k}+t}:0\le s<2^{p}\right\}=\left\{\psi^{*}\left(\tilde{\mathbf{S}}_{\mathbf{b}\mathbf{d}'}\right):\mathbf{d}'\in\{0,1\}^{p}\right\},\tag{37}$$

which forms an IGC code set $I\left(2^{k+p+1}, 2^{k+1}, 2^{m}, 2^{m-p}\right)$.

Proof 2 Let $\psi\left(\mathbf{S}_{\mathbf{b}\mathbf{d}'_1}\right)$ and $\psi\left(\mathbf{S}_{\mathbf{b}\mathbf{d}'_2}\right)$ be any two codes from a code group $I^{t}$. Then the ACCF of $\psi\left(\mathbf{S}_{\mathbf{b}\mathbf{d}'_1}\right)$ and $\psi\left(\mathbf{S}_{\mathbf{b}\mathbf{d}'_2}\right)$ at the time shift $\eta 2^{m-p}+\tau$ (where $0\le\eta<2^{p}$, $\eta\in\mathbb{Z}$, $0\le\tau<2^{m-p}$, $\tau\in\mathbb{Z}$) is

$$\begin{aligned}
&C\left(\psi\left(\mathbf{S}_{\mathbf{b}\mathbf{d}'_1}\right),\psi\left(\mathbf{S}_{\mathbf{b}\mathbf{d}'_2}\right)\right)\left(\eta 2^{m-p}+\tau\right)\\
&\quad=\sum_{l=0}^{2^{k+1}-1}C\left(\psi\left(\mathbf{F}_{\mathbf{b}l}\right),\psi\left(\mathbf{F}_{\mathbf{b}l}\right)\right)(\tau)\sum_{i=1}^{2^{p}-\eta}\omega^{\left(\Gamma+\frac{q}{2}\mathbf{d}'_1\right)\cdot\mathbf{r}_{i+\eta}-\left(\Gamma+\frac{q}{2}\mathbf{d}'_2\right)\cdot\mathbf{r}_{i}}\\
&\qquad+\sum_{l=0}^{2^{k+1}-1}C\left(\psi\left(\mathbf{F}_{\mathbf{b}l}\right),\psi\left(\mathbf{F}_{\mathbf{b}l}\right)\right)\left(\tau-2^{m-p}\right)\sum_{i=1}^{2^{p}-\eta-1}\omega^{\left(\Gamma+\frac{q}{2}\mathbf{d}'_1\right)\cdot\mathbf{r}_{i+\eta+1}-\left(\Gamma+\frac{q}{2}\mathbf{d}'_2\right)\cdot\mathbf{r}_{i}}\\
&\quad=C\left(\psi\left(\mathbf{C}^{t}\right),\psi\left(\mathbf{C}^{t}\right)\right)(\tau)\sum_{i=1}^{2^{p}-\eta}\omega^{\left(\Gamma+\frac{q}{2}\mathbf{d}'_1\right)\cdot\mathbf{r}_{i+\eta}-\left(\Gamma+\frac{q}{2}\mathbf{d}'_2\right)\cdot\mathbf{r}_{i}}+C\left(\psi\left(\mathbf{C}^{t}\right),\psi\left(\mathbf{C}^{t}\right)\right)\left(\tau-2^{m-p}\right)\sum_{i=1}^{2^{p}-\eta-1}\omega^{\left(\Gamma+\frac{q}{2}\mathbf{d}'_1\right)\cdot\mathbf{r}_{i+\eta+1}-\left(\Gamma+\frac{q}{2}\mathbf{d}'_2\right)\cdot\mathbf{r}_{i}}\\
&\quad=A\left(\psi\left(\mathbf{C}^{t}\right)\right)(\tau)\sum_{i=1}^{2^{p}-\eta}\omega^{\left(\Gamma+\frac{q}{2}\mathbf{d}'_1\right)\cdot\mathbf{r}_{i+\eta}-\left(\Gamma+\frac{q}{2}\mathbf{d}'_2\right)\cdot\mathbf{r}_{i}}+A\left(\psi\left(\mathbf{C}^{t}\right)\right)\left(\tau-2^{m-p}\right)\sum_{i=1}^{2^{p}-\eta-1}\omega^{\left(\Gamma+\frac{q}{2}\mathbf{d}'_1\right)\cdot\mathbf{r}_{i+\eta+1}-\left(\Gamma+\frac{q}{2}\mathbf{d}'_2\right)\cdot\mathbf{r}_{i}}.
\end{aligned}\tag{38}$$

For $\mathbf{d}'_1=\mathbf{d}'_2=\mathbf{d}'$, the ACCF given in Eq. (38) reduces to the AACF as follows:

$$A\left(\psi\left(\mathbf{S}_{\mathbf{b}\mathbf{d}'}\right)\right)\left(\eta 2^{m-p}+\tau\right)=\begin{cases}
2^{m+k-p+1}\times\sum_{i=1}^{2^{p}-\eta}\omega^{\left(\Gamma+\frac{q}{2}\mathbf{d}'\right)\cdot\mathbf{r}_{i+\eta}-\left(\Gamma+\frac{q}{2}\mathbf{d}'\right)\cdot\mathbf{r}_{i}}, & \tau=0,\ 0\le\eta<2^{p},\\[4pt]
0, & 0<|\tau|<2^{m-p},\ 0\le\eta<2^{p}.
\end{cases}\tag{39}$$

For $\mathbf{d}'_1\neq\mathbf{d}'_2$, the ACCF given in Eq. (38) can be expressed as

$$C\left(\psi\left(\mathbf{S}_{\mathbf{b}\mathbf{d}'_1}\right),\psi\left(\mathbf{S}_{\mathbf{b}\mathbf{d}'_2}\right)\right)\left(\eta 2^{m-p}+\tau\right)=\begin{cases}
2^{m+k-p+1}\times\sum_{i=1}^{2^{p}-\eta}\omega^{\left(\Gamma+\frac{q}{2}\mathbf{d}'_1\right)\cdot\mathbf{r}_{i+\eta}-\left(\Gamma+\frac{q}{2}\mathbf{d}'_2\right)\cdot\mathbf{r}_{i}}, & \tau=0,\ 0<\eta<2^{p},\\[4pt]
0, & \tau=0,\ \eta=0,\\[4pt]
0, & 0<|\tau|<2^{m-p},\ 0\le\eta<2^{p}.
\end{cases}\tag{40}$$

The terms in Eqs. (39) and (40) are derived from the autocorrelation properties of the CC $\psi\left(\mathbf{C}^{t}\right)$. It is also observed that the codes from the same code group $I^{t}$ have ideal auto- and cross-correlation properties inside the ZCZ width $2^{m-p}$. Similarly, we can show that

$$A\left(\psi^{*}\left(\tilde{\mathbf{S}}_{\mathbf{b}\mathbf{d}'}\right)\right)\left(\eta 2^{m-p}+\tau\right)=\begin{cases}
2^{m+k-p+1}\times\sum_{i=1}^{2^{p}-\eta}\omega^{\left(\Gamma+\frac{q}{2}\mathbf{d}'\right)\cdot\tilde{\mathbf{r}}_{i}-\left(\Gamma+\frac{q}{2}\mathbf{d}'\right)\cdot\tilde{\mathbf{r}}_{i+\eta}}, & \tau=0,\ 0\le\eta<2^{p},\\[4pt]
0, & 0<|\tau|<2^{m-p},\ 0\le\eta<2^{p},
\end{cases}\tag{41}$$

and for $\mathbf{d}'_1\neq\mathbf{d}'_2$,

$$C\left(\psi^{*}\left(\tilde{\mathbf{S}}_{\mathbf{b}\mathbf{d}'_1}\right),\psi^{*}\left(\tilde{\mathbf{S}}_{\mathbf{b}\mathbf{d}'_2}\right)\right)\left(\eta 2^{m-p}+\tau\right)=\begin{cases}
2^{m+k-p+1}\times\sum_{i=1}^{2^{p}-\eta}\omega^{\left(\Gamma+\frac{q}{2}\mathbf{d}'_2\right)\cdot\tilde{\mathbf{r}}_{i}-\left(\Gamma+\frac{q}{2}\mathbf{d}'_1\right)\cdot\tilde{\mathbf{r}}_{i+\eta}}, & \tau=0,\ 0<\eta<2^{p},\\[4pt]
0, & \tau=0,\ \eta=0,\\[4pt]
0, & 0<|\tau|<2^{m-p},\ 0\le\eta<2^{p}.
\end{cases}\tag{42}$$

From Eqs. (41) and (42), we get that the codes from the same code group $I^{2^{k}+t}$ have ideal auto- and cross-correlation properties inside the ZCZ width $2^{m-p}$.
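The complementary (AACF-cancelling) property that the proof relies on can be checked numerically. The sketch below uses a standard length-4 binary Golay complementary pair rather than the IGC codes constructed above; the helper function and sequences are illustrative assumptions, not the chapter's notation:

```python
import numpy as np

def aacf(code, tau):
    """AACF of a complementary code at shift tau: the sum of the aperiodic
    autocorrelations of its constituent sequences."""
    tau = abs(tau)
    return sum(np.sum(s[:len(s) - tau] * np.conj(s[tau:])) for s in code)

# Standard binary Golay complementary pair of length 4 (illustrative only).
golay = [np.array([1, 1, 1, -1]), np.array([1, 1, -1, 1])]

print([aacf(golay, tau) for tau in range(4)])  # peak at tau = 0, zero elsewhere
```

The individual sequences have nonzero sidelobes; only their sum cancels, which is exactly the mechanism behind the $A(\psi(\mathbf{C}^t))(\tau) = 0$ terms used above.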

Now we show that the ACCFs between any two codes of any two different code groups $I^{t_1}$ and $I^{t_2}$ $\left(0\le t_1, t_2<2^{k}\right)$ are zeros everywhere. Let $\psi\left(\mathbf{S}_{\mathbf{b}_1\mathbf{d}'_1}\right)\in I^{t_1}$ and $\psi\left(\mathbf{S}_{\mathbf{b}_2\mathbf{d}'_2}\right)\in I^{t_2}$, where $\mathbf{b}_1$, $\mathbf{b}_2$ are binary vector representations of $t_1$, $t_2$, and $\mathbf{d}'_1$, $\mathbf{d}'_2$ are any two binary vectors in $\mathbb{Z}_2^{p}$. Then

$$\begin{aligned}
&C\left(\psi\left(\mathbf{S}_{\mathbf{b}_1\mathbf{d}'_1}\right),\psi\left(\mathbf{S}_{\mathbf{b}_2\mathbf{d}'_2}\right)\right)\left(\eta 2^{m-p}+\tau\right)\\
&\quad=\sum_{l=0}^{2^{k+1}-1}C\left(\psi\left(\mathbf{F}_{\mathbf{b}_1 l}\right),\psi\left(\mathbf{F}_{\mathbf{b}_2 l}\right)\right)(\tau)\sum_{i=1}^{2^{p}-\eta}\omega^{\left(\Gamma+\frac{q}{2}\mathbf{d}'_1\right)\cdot\mathbf{r}_{i+\eta}-\left(\Gamma+\frac{q}{2}\mathbf{d}'_2\right)\cdot\mathbf{r}_{i}}\\
&\qquad+\sum_{l=0}^{2^{k+1}-1}C\left(\psi\left(\mathbf{F}_{\mathbf{b}_1 l}\right),\psi\left(\mathbf{F}_{\mathbf{b}_2 l}\right)\right)\left(\tau-2^{m-p}\right)\sum_{i=1}^{2^{p}-\eta-1}\omega^{\left(\Gamma+\frac{q}{2}\mathbf{d}'_1\right)\cdot\mathbf{r}_{i+\eta+1}-\left(\Gamma+\frac{q}{2}\mathbf{d}'_2\right)\cdot\mathbf{r}_{i}}\\
&\quad=C\left(\psi\left(\mathbf{C}^{t_1}\right),\psi\left(\mathbf{C}^{t_2}\right)\right)(\tau)\sum_{i=1}^{2^{p}-\eta}\omega^{\left(\Gamma+\frac{q}{2}\mathbf{d}'_1\right)\cdot\mathbf{r}_{i+\eta}-\left(\Gamma+\frac{q}{2}\mathbf{d}'_2\right)\cdot\mathbf{r}_{i}}+C\left(\psi\left(\mathbf{C}^{t_1}\right),\psi\left(\mathbf{C}^{t_2}\right)\right)\left(\tau-2^{m-p}\right)\sum_{i=1}^{2^{p}-\eta-1}\omega^{\left(\Gamma+\frac{q}{2}\mathbf{d}'_1\right)\cdot\mathbf{r}_{i+\eta+1}-\left(\Gamma+\frac{q}{2}\mathbf{d}'_2\right)\cdot\mathbf{r}_{i}}\\
&\quad=0\quad\forall\,\tau,\eta.
\end{aligned}\tag{43}$$

Similarly, we can also show that the ACCFs between any two codes of any two different code groups $I^{2^{k}+t_1}$ and $I^{2^{k}+t_2}$ $\left(0\le t_1, t_2<2^{k}\right)$ are zeros everywhere, i.e.,

$$\begin{aligned}
&C\left(\psi^{*}\left(\tilde{\mathbf{S}}_{\mathbf{b}_1\mathbf{d}'_1}\right),\psi^{*}\left(\tilde{\mathbf{S}}_{\mathbf{b}_2\mathbf{d}'_2}\right)\right)\left(\eta 2^{m-p}+\tau\right)\\
&\quad=\sum_{l=0}^{2^{k+1}-1}C\left(\psi^{*}\left(\mathbf{F}'_{\mathbf{b}_1 l}\right),\psi^{*}\left(\mathbf{F}'_{\mathbf{b}_2 l}\right)\right)(\tau)\sum_{i=1}^{2^{p}-\eta}\omega^{\left(\Gamma+\frac{q}{2}\mathbf{d}'_2\right)\cdot\tilde{\mathbf{r}}_{i}-\left(\Gamma+\frac{q}{2}\mathbf{d}'_1\right)\cdot\tilde{\mathbf{r}}_{i+\eta}}\\
&\qquad+\sum_{l=0}^{2^{k+1}-1}C\left(\psi^{*}\left(\mathbf{F}'_{\mathbf{b}_1 l}\right),\psi^{*}\left(\mathbf{F}'_{\mathbf{b}_2 l}\right)\right)\left(\tau-2^{m-p}\right)\sum_{i=1}^{2^{p}-\eta-1}\omega^{\left(\Gamma+\frac{q}{2}\mathbf{d}'_2\right)\cdot\tilde{\mathbf{r}}_{i}-\left(\Gamma+\frac{q}{2}\mathbf{d}'_1\right)\cdot\tilde{\mathbf{r}}_{i+\eta+1}}\\
&\quad=C\left(\psi^{*}\left(\tilde{\mathbf{C}}^{t_1}\right),\psi^{*}\left(\tilde{\mathbf{C}}^{t_2}\right)\right)(\tau)\sum_{i=1}^{2^{p}-\eta}\omega^{\left(\Gamma+\frac{q}{2}\mathbf{d}'_2\right)\cdot\tilde{\mathbf{r}}_{i}-\left(\Gamma+\frac{q}{2}\mathbf{d}'_1\right)\cdot\tilde{\mathbf{r}}_{i+\eta}}+C\left(\psi^{*}\left(\tilde{\mathbf{C}}^{t_1}\right),\psi^{*}\left(\tilde{\mathbf{C}}^{t_2}\right)\right)\left(\tau-2^{m-p}\right)\sum_{i=1}^{2^{p}-\eta-1}\omega^{\left(\Gamma+\frac{q}{2}\mathbf{d}'_2\right)\cdot\tilde{\mathbf{r}}_{i}-\left(\Gamma+\frac{q}{2}\mathbf{d}'_1\right)\cdot\tilde{\mathbf{r}}_{i+\eta+1}}\\
&\quad=0\quad\forall\,\tau,\eta.
\end{aligned}\tag{44}$$


The results in Eqs. (43) and (44) are obtained by using the ideal cross-correlation properties of CCCs. To complete the proof, we now only need to show that the ACCFs of any code from $I^{t_u}$ and $I^{2^{k}+t_v}$ $\left(u, v\in\mathbb{Z},\ 1\le u, v\le 2^{k}\right)$ are zeros everywhere. In this case, $t_u$ and $t_v$ are any two integers in $\left[0, 2^{k}\right)$ and may or may not be equal. Let $\psi\left(\mathbf{S}_{\mathbf{b}_u\mathbf{d}'_1}\right)\in I^{t_u}$ and $\psi^{*}\left(\tilde{\mathbf{S}}_{\mathbf{b}_v\mathbf{d}'_2}\right)\in I^{2^{k}+t_v}$, where $\mathbf{b}_u$, $\mathbf{b}_v$ are binary vector representations of $t_u$, $t_v$, respectively. Then

$$\begin{aligned}
&C\left(\psi\left(\mathbf{S}_{\mathbf{b}_u\mathbf{d}'_1}\right),\psi^{*}\left(\tilde{\mathbf{S}}_{\mathbf{b}_v\mathbf{d}'_2}\right)\right)\left(\eta 2^{m-p}+\tau\right)\\
&\quad=\sum_{l=0}^{2^{k+1}-1}C\left(\psi\left(\mathbf{F}_{\mathbf{b}_u l}\right),\psi^{*}\left(\mathbf{F}'_{\mathbf{b}_v l}\right)\right)(\tau)\sum_{i=1}^{2^{p}-\eta}\omega^{\left(\Gamma+\frac{q}{2}\mathbf{d}'_1\right)\cdot\mathbf{r}_{i+\eta}-\left(\Gamma+\frac{q}{2}\mathbf{d}'_2\right)\cdot\tilde{\mathbf{r}}_{i}}\\
&\qquad+\sum_{l=0}^{2^{k+1}-1}C\left(\psi\left(\mathbf{F}_{\mathbf{b}_u l}\right),\psi^{*}\left(\mathbf{F}'_{\mathbf{b}_v l}\right)\right)\left(\tau-2^{m-p}\right)\sum_{i=1}^{2^{p}-\eta-1}\omega^{\left(\Gamma+\frac{q}{2}\mathbf{d}'_1\right)\cdot\mathbf{r}_{i+\eta+1}-\left(\Gamma+\frac{q}{2}\mathbf{d}'_2\right)\cdot\tilde{\mathbf{r}}_{i}}\\
&\quad=C\left(\psi\left(\mathbf{C}^{t_u}\right),\psi^{*}\left(\tilde{\mathbf{C}}^{t_v}\right)\right)(\tau)\sum_{i=1}^{2^{p}-\eta}\omega^{\left(\Gamma+\frac{q}{2}\mathbf{d}'_1\right)\cdot\mathbf{r}_{i+\eta}-\left(\Gamma+\frac{q}{2}\mathbf{d}'_2\right)\cdot\tilde{\mathbf{r}}_{i}}+C\left(\psi\left(\mathbf{C}^{t_u}\right),\psi^{*}\left(\tilde{\mathbf{C}}^{t_v}\right)\right)\left(\tau-2^{m-p}\right)\sum_{i=1}^{2^{p}-\eta-1}\omega^{\left(\Gamma+\frac{q}{2}\mathbf{d}'_1\right)\cdot\mathbf{r}_{i+\eta+1}-\left(\Gamma+\frac{q}{2}\mathbf{d}'_2\right)\cdot\tilde{\mathbf{r}}_{i}}\\
&\quad=0\quad\forall\,\tau,\eta.
\end{aligned}\tag{45}$$

The above result is also obtained by using the ideal cross-correlation properties of CCCs. From Eqs. (39)–(45), we observe that the AACFs and ACCFs of the codes of the same group are zeros inside the ZCZ width $2^{m-p}$ and that the ACCFs of the codes from different code groups are zeros everywhere. Hence, we can conclude that $I^{0}, I^{1}, \ldots, I^{2^{k+1}-1}$ form an IGC code set $I\left(2^{k+p+1}, 2^{k+1}, 2^{m}, 2^{m-p}\right)$.
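The "zeros everywhere" cross-correlation between codes of different groups can be illustrated in the same numerical style. This is a minimal sketch using a textbook Golay pair and its classical mate, not the chapter's construction; the `accf` helper is an assumed definition of the aperiodic cross-correlation sum:

```python
import numpy as np

def accf(code1, code2, tau):
    """ACCF of two complementary codes at (possibly negative) shift tau."""
    total = 0
    for s, t in zip(code1, code2):
        if tau >= 0:
            total += np.sum(s[:len(s) - tau] * np.conj(t[tau:]))
        else:
            total += np.sum(s[-tau:] * np.conj(t[:len(t) + tau]))
    return total

a = np.array([1, 1, 1, -1])
b = np.array([1, 1, -1, 1])
golay = [a, b]
mate = [b[::-1], -a[::-1]]  # classical Golay mate of the pair (a, b)

print(all(accf(golay, mate, tau) == 0 for tau in range(-3, 4)))  # True
```

The pair and its mate play the role of two codes drawn from different groups: their summed cross-correlation vanishes at every shift, while each code's own AACF still peaks at zero shift.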

Example 2 Let f be a GBF of four variables as given in Example 1. Then the obtained IGC code set $I(8, 4, 16, 8)$ corresponding to the GBF f is given below:

Code group 1:

$$\begin{aligned} I\_0^0 &= \psi(S\_{00})\\ &= \begin{bmatrix} \omega^1 \omega^2 \omega^1 \omega^1 \omega^2 \omega^2 \omega^2 \omega^0 \omega^3 \omega^2 \omega^3 \omega^2 \omega^2 \omega^3 \omega^3 \omega^1 \omega^0\\ \omega^1 \omega^0 \omega^1 \omega^0 \omega^3 \omega^2 \omega^0 \omega^0 \omega^1 \omega^2 \omega^0 \omega^1 \omega^0 \omega^1 \omega^1 \omega^2\\ \omega^1 \omega^0 \omega^1 \omega^1 \omega^0 \omega^0 \omega^0 \omega^2 \omega^1 \omega^2 \omega^0 \omega^2 \omega^0 \omega^1 \omega^1 \omega^3 \omega^2\\ \omega^1 \omega^0 \omega^1 \omega^1 \omega^0 \omega^0 \omega^2 \omega^2 \omega^2 \omega^3 \omega^2 \omega^1 \omega^2 \omega^0 \omega^1 \omega^3 \omega^3 \omega^0 \end{bmatrix} \\\\ I\_1^0 &= \psi(S\_{01})\\ &= \begin{bmatrix} \omega^1 \omega^2 \omega^1 \omega^1 \omega^1 \omega^0 \omega^2 \omega^0 \omega^0 \omega^0 \omega^1 \omega^0 \omega^0 \omega^1 \omega^1 \omega^1 \omega^3\\ \omega^1 \omega^0 \omega^1 \omega^0 \omega^1 \omega^0 \omega^1 \omega^0 \omega^0 \omega^0 \omega^0 \omega^1 \omega^1 \omega^3 \omega^0\\ \omega^1 \omega^0 \omega^1 \omega^1 \omega^0 \omega^0 \omega^0 \omega^1 \omega^0 \omega^0 \omega^0 \omega^0 \omega^1 \omega^0 \omega^3 \omega^0\\ \omega^1 \omega^0 \omega^1 \omega^1 \omega^0 \omega^0 \omega^0 \omega^2 \omega^1 \omega^0 \omega^0 \omega^1 \omega^0 \omega^1 \omega^0 \end{bmatrix}.\end{aligned} \tag{46}$$

Code group 2:

$$\begin{aligned} I\_0^1 &= \psi(S\_{10})\\ &= \begin{bmatrix} \omega^1 \omega^0 \omega^1 \omega^3 \omega^2 \omega^0 \omega^0 \omega^1 \omega^2 \omega^1 \omega^2 \omega^0 \omega^3 \omega^1 \omega^1 \omega^2\\ \omega^1 \omega^2 \omega^1 \omega^1 \omega^2 \omega^2 \omega^0 \omega^3 \omega^2 \omega^3 \omega^2 \omega^2 \omega^3 \omega^3 \omega^1 \omega^0\\ \omega^1 \omega^0 \omega^1 \omega^3 \omega^0 \omega^2 \omega^2 \omega^3 \omega^2 \omega^1 \omega^2 \omega^0 \omega^1 \omega^3 \omega^3 \omega^0\\ \omega^1 \omega^2 \omega^1 \omega^1 \omega^0 \omega^0 \omega^2 \omega^1 \omega^2 \omega^3 \omega^2 \omega^2 \omega^1 \omega^1 \omega^3 \omega^2 \end{bmatrix} \\\\ I\_1^1 &= \psi(S\_{11})\\ &= \begin{bmatrix} \omega^1 \omega^0 \omega^1 \omega^3 \omega^2 \omega^0 \omega^0 \omega^1 \omega^0 \omega^3 \omega^0 \omega^2 \omega^1 \omega^3 \omega^3 \omega^0\\ \omega^1 \omega^2 \omega^1 \omega^1 \omega^2 \omega^2 \omega^0 \omega^3 \omega^0 \omega^1 \omega^0 \omega^0 \omega^1 \omega^1 \omega^3 \omega^2\\ \omega^1 \omega^0 \omega^1 \omega^3 \omega^0 \omega^2 \omega^2 \omega^3 \omega^0 \omega^3 \omega^0 \omega^2 \omega^3 \omega^1 \omega^1 \omega^2\\ \omega^1 \omega^2 \omega^1 \omega^1 \omega^0 \omega^0 \omega^2 \omega^1 \omega^0 \omega^1 \omega^0 \omega^0 \omega^3 \omega^3 \omega^1 \omega^0 \end{bmatrix}.\end{aligned} \tag{47}$$

Code group 3:

$$\begin{aligned} I\_0^2 &= \psi^{*}(\tilde{S}\_{00})\\ &= \begin{bmatrix} \omega^0 \omega^3 \omega^1 \omega^1 \omega^0 \omega^0 \omega^3 \omega^0 \omega^1 \omega^0 \omega^2 \omega^2 \omega^1 \omega^1 \omega^0 \omega^1\\ \omega^2 \omega^3 \omega^3 \omega^1 \omega^2 \omega^0 \omega^1 \omega^0 \omega^3 \omega^0 \omega^0 \omega^2 \omega^3 \omega^1 \omega^2 \omega^1\\ \omega^0 \omega^3 \omega^1 \omega^1 \omega^2 \omega^2 \omega^1 \omega^2 \omega^1 \omega^0 \omega^2 \omega^2 \omega^3 \omega^3 \omega^2 \omega^3\\ \omega^2 \omega^3 \omega^3 \omega^1 \omega^0 \omega^2 \omega^3 \omega^2 \omega^3 \omega^0 \omega^0 \omega^2 \omega^1 \omega^3 \omega^0 \omega^3 \end{bmatrix} \\\\ I\_1^2 &= \psi^{*}(\tilde{S}\_{01})\\ &= \begin{bmatrix} \omega^2 \omega^1 \omega^3 \omega^3 \omega^2 \omega^2 \omega^1 \omega^2 \omega^1 \omega^0 \omega^2 \omega^2 \omega^1 \omega^1 \omega^0 \omega^1\\ \omega^0 \omega^1 \omega^1 \omega^3 \omega^0 \omega^2 \omega^3 \omega^2 \omega^3 \omega^0 \omega^0 \omega^2 \omega^3 \omega^1 \omega^2 \omega^1\\ \omega^2 \omega^1 \omega^3 \omega^3 \omega^0 \omega^0 \omega^3 \omega^0 \omega^1 \omega^0 \omega^2 \omega^2 \omega^3 \omega^3 \omega^2 \omega^3\\ \omega^0 \omega^1 \omega^1 \omega^3 \omega^2 \omega^0 \omega^1 \omega^0 \omega^3 \omega^0 \omega^0 \omega^2 \omega^1 \omega^3 \omega^0 \omega^3 \end{bmatrix}.\end{aligned} \tag{48}$$


A Direct Construction of Intergroup Complementary Code Set for CDMA

DOI: http://dx.doi.org/10.5772/intechopen.86751


Figure 2. Correlation plots of I(8, 4, 16, 8).


Code group 4:

$$\begin{aligned} I\_0^3 &= \psi^{*}(\tilde{S}\_{10})\\ &= \begin{bmatrix} \omega^2 \omega^3 \omega^3 \omega^1 \omega^2 \omega^0 \omega^1 \omega^0 \omega^3 \omega^0 \omega^0 \omega^2 \omega^3 \omega^1 \omega^2 \omega^1\\ \omega^0 \omega^3 \omega^1 \omega^1 \omega^0 \omega^0 \omega^3 \omega^0 \omega^1 \omega^0 \omega^2 \omega^2 \omega^1 \omega^1 \omega^0 \omega^1\\ \omega^2 \omega^3 \omega^3 \omega^1 \omega^0 \omega^2 \omega^3 \omega^2 \omega^3 \omega^0 \omega^0 \omega^2 \omega^1 \omega^3 \omega^0 \omega^3\\ \omega^0 \omega^3 \omega^1 \omega^1 \omega^2 \omega^2 \omega^1 \omega^2 \omega^1 \omega^0 \omega^2 \omega^2 \omega^3 \omega^3 \omega^2 \omega^3 \end{bmatrix} \\\\ I\_1^3 &= \psi^{*}(\tilde{S}\_{11})\\ &= \begin{bmatrix} \omega^0 \omega^1 \omega^1 \omega^3 \omega^0 \omega^2 \omega^3 \omega^2 \omega^3 \omega^0 \omega^0 \omega^2 \omega^3 \omega^1 \omega^2 \omega^1\\ \omega^2 \omega^1 \omega^3 \omega^3 \omega^2 \omega^2 \omega^1 \omega^2 \omega^1 \omega^0 \omega^2 \omega^2 \omega^1 \omega^1 \omega^0 \omega^1\\ \omega^0 \omega^1 \omega^1 \omega^3 \omega^2 \omega^0 \omega^1 \omega^0 \omega^3 \omega^0 \omega^0 \omega^2 \omega^1 \omega^3 \omega^0 \omega^3\\ \omega^2 \omega^1 \omega^3 \omega^3 \omega^0 \omega^0 \omega^3 \omega^0 \omega^1 \omega^0 \omega^2 \omega^2 \omega^3 \omega^3 \omega^2 \omega^3 \end{bmatrix}.\end{aligned} \tag{49}$$


The correlation properties of I(8, 4, 16, 8) are described in Figure 2, where Figure 2a presents the absolute value of the AACF sum of each code in I(8, 4, 16, 8), Figure 2b shows the absolute value of the ACCF sum between any two distinct codes from the same code group, and Figure 2c presents the absolute value of the ACCF sum between any two distinct codes from different code groups.

#### 4. Summary

In this chapter, we have presented a direct construction of an IGC code set by using second-order GBFs. The AACF sidelobes of the codes of the constructed IGC code set are zero within the ZCZ width, and the ACCFs of any two different codes of the same code group are zero inside the ZCZ width, whereas the ACCFs of any two different codes from two different code groups are zero everywhere. We have shown that there is a relation between our proposed construction and graphs: the ZCZ width of the proposed IGC code set depends on the number of isolated vertices present in a graph after the deletion of some vertices. We have also shown that the ZCZ width of the IGC code set given by our construction is flexible, which can extend its applications. It is observed that most of the constructions given in the literature are based on CCCs, whereas our construction produces an IGC code set directly.

Coding Theory

#### Author details

Palash Sarkar and Sudhan Majhi\* IIT Patna, India

\*Address all correspondence to: smajhi@iitp.ac.in

© 2019 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


#### References

[1] Hanzo LL, Yang L-L, Kuan E-L, Yen K. CDMA Overview. United States: IEEE; 2004. Available from: https://ieeexplore.ieee.org/document/5732958

[2] Fazel K, Kaiser S. MC-CDMA and MC-DS-CDMA. United States: Wiley; 2008. Available from: https://ieeexplore.ieee.org/document/8043168

[3] Golay M. Complementary series. IRE Transactions on Information Theory. 1961;7(2):82-87

[4] Tseng C-C, Liu C. Complementary sets of sequences. IEEE Transactions on Information Theory. 1972;18(5):644-652

[5] Davis JA, Jedwab J. Peak-to-mean power control in OFDM, Golay complementary sequences, and Reed-Muller codes. IEEE Transactions on Information Theory. 1999;45(7): 2397-2417

[6] Paterson KG. Generalized Reed-Muller codes and power control in OFDM modulation. IEEE Transactions on Information Theory. 2000;46(1): 104-120

[7] Sarkar P, Majhi S, Liu Z. A direct and generalized construction of polyphase complementary set with low PMEPR and high code-rate for OFDM system. n.d. Available from: http://arxiv.org/abs/1901.05545

[8] Rathinakumar A, Chaturvedi AK. Complete mutually orthogonal Golay complementary sets from Reed-Muller codes. IEEE Transactions on Information Theory. 2008;54(3): 1339-1346

[9] Ke P, Zhou Z. A generic construction of Z-periodic complementary sequence sets with flexible flock size and zero correlation zone length. IEEE Signal Processing Letters. 2015;22(9): 1462-1466

[10] Liu Z, Guan YL, Parampalli U. New complete complementary codes for peak-to-mean power control in multicarrier CDMA. IEEE Transactions on Communications. 2014;62(3):1105-1113

[11] Das S, Budišin S, Majhi S, Liu Z, Guan YL. A multiplier-free generator for polyphase complete complementary codes. IEEE Transactions on Signal Processing. 2018;66(5):1184-1196

[12] Liu Z, Guan YL, Chen HH. Fractional-delay-resilient receiver design for interference-free MC-CDMA communications based on complete complementary codes. IEEE Transactions on Wireless Communications. 2015;14(3):1226-1236

[13] Fan P, Yuan W, Tu Y. Z-complementary binary sequences. IEEE Signal Processing Letters. 2007; 14(8):509-512

[14] Li X, Fan P, Tang X, Hao L. Constructions of quadriphase Z-complementary sequences. In: Fourth International Workshop on Signal Design and its Applications in Communications; 2009. pp. 36-39

[15] Adhikary AR, Majhi S, Liu Z, Guan YL. New sets of even-length binary Z-complementary pairs with asymptotic ZCZ ratio of 3/4. IEEE Signal Processing Letters. 2018;25(7):970-973

[16] Sarkar P, Majhi S, Liu Z. Optimal Z-complementary code set from generalized Reed-Muller codes. IEEE Transactions on Communications. 2019;67(3):1783-1796

[17] Zhang C, Tao X, Yamada S, Hatori M. Sequence set with three zero correlation zones and its application in MC-CDMA system. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences. 2006;E89-A(9):2275-2282


[18] Liu Z, Guan YL, Ng BC, Chen HH. Correlation and set size bounds of complementary sequences with low correlation zone. IEEE Transactions on Communications. 2011;59(12): 3285-3289


[19] Li J, Huang A, Guizani M, Chen HH. Inter-group complementary codes for interference-resistant CDMA wireless communications. IEEE Transactions on Wireless Communications. 2008;7(1):166-174

[20] Feng L, Zhou X, Fan P. A construction of inter-group complementary codes with flexible ZCZ length. Journal of Zhejiang University SCIENCE-C. 2011;12(10):846-854


## Z2Z2[u]-Linear and Z2Z2[u]-Cyclic Codes

Ismail Aydogdu

#### Abstract

Additive codes were first introduced by Delsarte in 1973 as subgroups of the underlying abelian group in a translation association scheme. In the case where the association scheme is the Hamming scheme, that is, when the underlying abelian group is of order 2<sup>n</sup>, the additive codes are of the form Z2<sup>α</sup> × Z4<sup>β</sup> with α + 2β = n. In 2010, Borges et al. introduced Z2Z4-additive codes, which they defined as the subgroups of Z2<sup>α</sup> × Z4<sup>β</sup>. In this chapter we introduce Z2Z2[u]-linear and Z2Z2[u]-cyclic codes, where Z2 = {0, 1} is the binary field and Z2[u] = {0, 1, u, 1 + u} is the ring with four elements and u<sup>2</sup> = 0. We give the standard forms of the generator and parity-check matrices of Z2Z2[u]-linear codes. Further, we determine the generator polynomials for Z2Z2[u]-cyclic codes. We also present some examples of Z2Z2[u]-linear and Z2Z2[u]-cyclic codes.

Keywords: Z2Z2[u]-linear codes, cyclic codes, generator matrix, duality, parity-check matrix, minimal spanning set

#### 1. Introduction

In coding theory, the most important class of error-correcting codes is the family of linear codes, because the encoding and decoding procedures for a linear code are faster and simpler than those for arbitrary nonlinear codes. Many practically important linear codes also admit efficient decoding. Specifically, a linear code C of length n is a vector subspace of Fq<sup>n</sup>, where Fq is a finite field with q elements. Among all the codes over finite fields, binary linear codes (linear codes over F2) hold a very special and important place because of their easy implementation and applications. In the beginning, researchers mainly studied linear codes over fields, especially binary fields. However, in 1994, a remarkable paper by Hammons et al. [1] brought a new direction to coding theory. In this paper, they showed that some well-known nonlinear codes, such as the Nordstrom-Robinson code, the Kerdock codes, and the Delsarte-Goethals codes, are actually binary images of linear codes over the ring of integers modulo 4, i.e., Z4. Such connections motivated researchers to study codes over different rings and even over other algebraic structures such as groups or modules. Even though the structures of binary linear codes and quaternary linear codes (codes over F4 or Z4) have been studied in detail for the last 50 years, recently, in 2010, a new class of error-correcting codes over the ring Z2<sup>α</sup> × Z4<sup>β</sup>, called additive codes, which generalizes both the class of binary linear codes and the class of quaternary linear codes, was introduced by Borges et al. in [2]. A Z2Z4-additive code C is defined as a subgroup of Z2<sup>α</sup> × Z4<sup>β</sup>, where α and β are positive integers and α + 2β = n.
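The Hammons et al. connection rests on the Z4 Gray map 0 → 00, 1 → 01, 2 → 11, 3 → 10, which preserves distance but is not additive; this is how a Z4-linear code can have a nonlinear binary image. A minimal check (plain Python; not from the chapter, helper names are ours):

```python
# Standard Z4 Gray map: 0 -> 00, 1 -> 01, 2 -> 11, 3 -> 10
GRAY4 = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def xor(p, q):
    """Componentwise addition in Z2^2."""
    return tuple((p[i] + q[i]) % 2 for i in range(2))

# The map is not additive: phi(1 + 1) = phi(2) = (1, 1),
# but phi(1) + phi(1) = (0, 0). Hence Gray images of
# Z4-linear codes can be nonlinear binary codes.
assert GRAY4[(1 + 1) % 4] != xor(GRAY4[1], GRAY4[1])
```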
Despite the fact that Z2Z4-additive codes are a relatively new type of code, they have already found applications in fields such as steganography. Another important ring with four elements, not isomorphic to $\mathbb{Z}\_4$, is the ring $\mathcal{R} = \mathbb{Z}\_2[u] = \mathbb{Z}\_2 + u\mathbb{Z}\_2 = \{0, 1, u, 1+u\}$, where $u^2 = 0$. Working with the ring $\mathcal{R}$ has some advantages over the ring $\mathbb{Z}\_4$. For example, the Gray images of linear codes over $\mathcal{R}$ are always binary linear codes, which is not always the case for $\mathbb{Z}\_4$. Further, since the finite field $\mathbb{F}\_2$ is a subring of $\mathcal{R}$, the factorization of polynomials over $\mathcal{R}$ is the same as the factorization over $\mathbb{F}\_2$, so Hensel's lift is not needed. Moreover, the decoding algorithm for cyclic codes over $\mathcal{R}$ is easier than that over $\mathbb{Z}\_4$. In this chapter of the book, we introduce Z2Z2[u]-linear and Z2Z2[u]-cyclic codes. The original study of linear and cyclic codes over Z2Z2[u] was done by Aydogdu et al. in [3, 4], so this chapter is a survey of the Z2Z2[u]-linear and Z2Z2[u]-cyclic codes introduced in [3, 4].

Z2Z2[u]-Linear and Z2Z2[u]-Cyclic Codes DOI: http://dx.doi.org/10.5772/intechopen.86281


### 2. Z2Z2[u]-linear codes

Let $\mathbb{Z}\_2 = \{0, 1\}$ be the binary field and let $\mathcal{R} = \mathbb{Z}\_2 + u\mathbb{Z}\_2 = \{0, 1, u, 1+u\}$, with $u^2 = 0$, be the finite ring with four elements. Since $\mathbb{Z}\_2$ is a subring of $\mathcal{R}$, we define the following set:

$$\mathbb{Z}\_2\mathbb{R} = \{ (c\_1, c\_2) | c\_1 \in \mathbb{Z}\_2 \text{ and } c\_2 \in \mathcal{R} \}$$

This set is not closed under the usual scalar multiplication by $u \in \mathcal{R}$, so it is not an $\mathcal{R}$-module. Hence the set $\mathbb{Z}\_2\mathcal{R}$ cannot be endowed with an algebraic structure directly. Therefore we introduce a new multiplication that is well defined and equips the set with an algebraic structure.

Let $d \in \mathcal{R}$; then $d$ can be expressed in the form $d = r + uq$ with $r, q \in \mathbb{Z}\_2$. We define the following map:

$$\begin{aligned} \eta: \mathcal{R} &\to \mathbb{Z}\_2 \\ \eta(d) &= r = \overline{d} \end{aligned}$$

so that $\eta(0) = 0$, $\eta(1) = 1$, $\eta(u) = 0$, and $\eta(1+u) = 1$. It is easy to see that the mapping $\eta$ is a ring homomorphism. Now, using this map, we define the following $\mathcal{R}$-scalar multiplication on $\mathbb{Z}\_2\mathcal{R}$. For any element $d \in \mathcal{R}$:

$$d(c\_1, c\_2) = (\eta(d)c\_1, dc\_2) = \left(\overline{d}c\_1, dc\_2\right).$$

This new multiplication is well defined and can also be extended to $\mathbb{Z}\_2^{\alpha} \times \mathcal{R}^{\beta}$ as follows. Let $d \in \mathcal{R}$ and $v = (a\_0, a\_1, \dots, a\_{\alpha-1}, b\_0, b\_1, \dots, b\_{\beta-1}) \in \mathbb{Z}\_2^{\alpha} \times \mathcal{R}^{\beta}$; then define

$$dv = \left(\eta(d)a\_0, \eta(d)a\_1, \dots, \eta(d)a\_{\alpha-1}, db\_0, db\_1, \dots, db\_{\beta-1}\right)$$

$$= \left(\overline{d}a\_0, \overline{d}a\_1, \dots, \overline{d}a\_{\alpha-1}, db\_0, db\_1, \dots, db\_{\beta-1}\right).$$

Lemma 2.1. $\mathbb{Z}\_2^{\alpha} \times \mathcal{R}^{\beta}$ is an $\mathcal{R}$-module with respect to the multiplication defined above.
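As a quick sanity check, this ring arithmetic and scalar multiplication can be modeled directly. The following is a minimal sketch, assuming a pair encoding (r, q) for r + uq; the helper names (`r_mul`, `eta`, `scalar_mul`) are illustrative and not from the text.

```python
# Minimal model of R = Z2[u] with u^2 = 0 and of the R-scalar multiplication
# on Z2^alpha x R^beta. Elements of R are encoded as pairs (r, q) = r + u*q.
def r_mul(a, b):
    # (r1 + u q1)(r2 + u q2) = r1 r2 + u(r1 q2 + q1 r2), since u^2 = 0
    return ((a[0] * b[0]) % 2, (a[0] * b[1] + a[1] * b[0]) % 2)

def eta(d):
    # eta(r + u q) = r, i.e., reduction modulo u
    return d[0]

def scalar_mul(d, a_part, b_part):
    # d(a | b) = (eta(d) a | d b), the multiplication defined above
    return ([eta(d) * x % 2 for x in a_part], [r_mul(d, y) for y in b_part])

U = (0, 1)                              # the element u
print(scalar_mul(U, [1, 0], [(1, 0)]))  # ([0, 0], [(0, 1)]) = (0, 0 | u)
```

Note how multiplying by u kills the binary part, which is exactly why the twisted action through η is needed to make $\mathbb{Z}\_2^{\alpha} \times \mathcal{R}^{\beta}$ an $\mathcal{R}$-module.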

Definition 2.2. Let $\mathcal{C}$ be a non-empty subset of $\mathbb{Z}\_2^{\alpha} \times \mathcal{R}^{\beta}$. $\mathcal{C}$ is called a Z2Z2[u]-linear code if it is an $\mathcal{R}$-submodule of $\mathbb{Z}\_2^{\alpha} \times \mathcal{R}^{\beta}$.


Note that the ring $\mathcal{R}$ is isomorphic to $\mathbb{Z}\_2^2$ as an additive group. Therefore, any Z2Z2[u]-linear code $\mathcal{C}$ is isomorphic to a group of the form $\mathbb{Z}\_2^{k\_0+k\_2} \times \mathbb{Z}\_2^{2k\_1}$, for some $k\_0, k\_1, k\_2 \in \mathbb{Z}^{+}$. Now let us consider the following sets:

$$\mathcal{C}\_{\beta}^{\mathrm{F}} = \left\langle \left\{ (a, b) \in \mathbb{Z}\_2^{\alpha} \times \mathcal{R}^\beta \mid b \text{ is free over } \mathcal{R}^\beta \right\} \right\rangle$$

where, if $\langle b \rangle = \mathcal{R}^{\beta}$, then $b$ is called free over $\mathcal{R}^{\beta}$:

$$\mathcal{C}\_{0} = \left\langle \left\{ (a, ub) \in \mathbb{Z}\_{2}^{\alpha} \times \mathcal{R}^{\beta} \mid a \neq \mathbf{0} \right\} \right\rangle \subseteq \mathcal{C} \setminus \mathcal{C}\_{\beta}^{\mathrm{F}}$$

$$\mathcal{C}\_{1} = \left\langle \left\{ (a, ub) \in \mathbb{Z}\_{2}^{\alpha} \times \mathcal{R}^{\beta} \mid a = \mathbf{0} \right\} \right\rangle \subseteq \mathcal{C} \setminus \mathcal{C}\_{\beta}^{\mathrm{F}}$$

Now, denote the dimensions of $\mathcal{C}\_0$, $\mathcal{C}\_1$, and $\mathcal{C}\_{\beta}^{\mathrm{F}}$ by $k\_0$, $k\_2$, and $k\_1$, respectively. Hence, if $\mathcal{C} \subseteq \mathbb{Z}\_2^{\alpha} \times \mathcal{R}^{\beta}$ is a Z2Z2[u]-linear code group isomorphic to $\mathbb{Z}\_2^{k\_0+k\_2} \times \mathbb{Z}\_2^{2k\_1}$, then we say that $\mathcal{C}$ is of type $(\alpha, \beta; k\_0, k\_1, k\_2)$. We can consider any Z2Z2[u]-linear code $\mathcal{C}$ as a binary code under a special Gray map.

Definition 2.3. Let $(a\_0, a\_1, \dots, a\_{\alpha-1}, b\_0, b\_1, \dots, b\_{\beta-1}) \in \mathbb{Z}\_2^{\alpha} \times \mathcal{R}^{\beta}$ with $b\_i = p\_i + uq\_i$. We define the Gray map as follows:

$$\begin{aligned} \Psi: \mathbb{Z}\_2^{\alpha} \times \mathcal{R}^{\beta} &\to \mathbb{Z}\_2^{n} \\ \Psi\big(a\_0, \dots, a\_{\alpha-1}, p\_0 + uq\_0, \dots, p\_{\beta-1} + uq\_{\beta-1}\big) &= \big(a\_0, \dots, a\_{\alpha-1}, q\_0, \dots, q\_{\beta-1}, p\_0 + q\_0, \dots, p\_{\beta-1} + q\_{\beta-1}\big) \end{aligned}$$

where $n = \alpha + 2\beta$. The Gray map $\Psi$ is an isometry which transforms the Lee distance in $\mathbb{Z}\_2^{\alpha} \times \mathcal{R}^{\beta}$ to the Hamming distance in $\mathbb{Z}\_2^{n}$. The Hamming distance and the Lee distance between two codewords are the Hamming weight and the Lee weight of their difference, respectively. The Hamming weight of a codeword is defined as the number of its nonzero entries, and the Lee weights of the elements of $\mathcal{R}$ are defined as $wt\_L(0) = 0$, $wt\_L(1) = 1$, $wt\_L(u) = 2$, $wt\_L(1+u) = 1$. It is worth mentioning that the Gray map $\Psi$ is linear, i.e., for a Z2Z2[u]-linear code $\mathcal{C}$, the image $\Psi(\mathcal{C})$ is a binary linear code, which is not the case for Z2Z4-additive codes in general. We can extend the definition of the Lee weight from $\mathcal{R}$ to a codeword $v = (v\_1, v\_2) \in \mathbb{Z}\_2^{\alpha} \times \mathcal{R}^{\beta}$ as follows:

$$wt(v) = wt\_H(v\_1) + wt\_L(v\_2)$$

where $wt\_H(v\_1)$ is the Hamming weight of $v\_1$ and $wt\_L(v\_2)$ is the Lee weight of $v\_2$. Further, the minimum distance of the Z2Z2[u]-linear code $\mathcal{C}$, denoted by $d(\mathcal{C})$, is naturally defined as

$$d(\mathcal{C}) = \min\{d(c\_1, c\_2) | c\_1, c\_2 \in \mathcal{C} \text{ such that } c\_1 \neq c\_2\}$$

where $d(c\_1, c\_2) = wt(c\_1 - c\_2)$. If $\mathcal{C}$ is a Z2Z2[u]-linear code of type $(\alpha, \beta; k\_0, k\_1, k\_2)$, then the Gray image $\Psi(\mathcal{C})$ is a binary linear code of length $n = \alpha + 2\beta$ and size $2^{k\_0+2k\_1+k\_2}$. It is also called a Z2Z2[u]-linear code.
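The isometry property of Ψ is easy to verify exhaustively for small parameters. The sketch below checks, for $\alpha = \beta = 2$, that $wt(v)$ always equals the Hamming weight of $\Psi(v)$; the pair encoding (p, q) for p + uq is an assumed representation, not from the text.

```python
import itertools

# Exhaustive check that the Gray map Psi is weight-preserving for
# alpha = beta = 2. R-elements are encoded as pairs (p, q) = p + u*q.
def gray(a_part, b_part):
    q = [b[1] for b in b_part]
    p_plus_q = [(b[0] + b[1]) % 2 for b in b_part]
    return list(a_part) + q + p_plus_q          # (a | q | p + q)

LEE = {(0, 0): 0, (1, 0): 1, (0, 1): 2, (1, 1): 1}   # wt_L of 0, 1, u, 1+u

ok = all(
    sum(a) + sum(LEE[b] for b in bs) == sum(gray(a, bs))
    for a in itertools.product((0, 1), repeat=2)
    for bs in itertools.product(LEE, repeat=2)
)
print(ok)   # True: wt(v) = wt_H(Psi(v)) for all 64 vectors
```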

#### 2.1 Generator matrices of Z2Z2[u]-linear codes

A generator matrix for a linear code $\mathcal{C}$ is a matrix $G$ whose rows form a minimal spanning set of $\mathcal{C}$. All linear combinations of the rows of the generator matrix $G$ constitute the linear code $\mathcal{C}$. We can produce a code equivalent to $\mathcal{C}$ by applying elementary row and column operations to the generator matrix $G$. Two linear codes are said to be permutation equivalent (or simply equivalent) if one can be obtained from the other by permuting coordinates or, if necessary, multiplying coordinates by units. Furthermore, the standard form of the matrix $G$ is a special form obtained by applying elementary row operations to $G$. Having the standard form of the generator matrix is very useful: we can easily determine the type of the code and then calculate its size directly. Note that the generator matrices in standard form of linear codes over a ring contain the minimum number of rows. The theorem below gives the standard form of the generator matrix of a Z2Z2[u]-linear code $\mathcal{C}$.

Theorem 2.1.1. [3] Let $\mathcal{C}$ be a Z2Z2[u]-linear code of type $(\alpha, \beta; k\_0, k\_1, k\_2)$. Then $\mathcal{C}$ is permutation equivalent to a Z2Z2[u]-linear code with the following generator matrix in standard form:

$$\mathbf{G}\_s = \begin{bmatrix} I\_{k\_0} & A\_1 & \mathbf{0} & \mathbf{0} & uT \\\\ \mathbf{0} & S & I\_{k\_1} & A & B\_1 + uB\_2 \\\\ \mathbf{0} & \mathbf{0} & \mathbf{0} & uI\_{k\_2} & uD \end{bmatrix} \tag{1}$$


where $A$, $A\_1$, $B\_1$, $B\_2$, $T$, and $D$ are matrices with all entries from $\mathbb{Z}\_2$ and $I\_{k\_0}$, $I\_{k\_1}$, and $I\_{k\_2}$ are identity matrices of the given sizes. Further, $\mathcal{C}$ has $2^{k\_0+2k\_1+k\_2}$ codewords.

Proof. It is well known that any linear code of length $\beta$ over the ring $\mathcal{R} = \mathbb{Z}\_2 + u\mathbb{Z}\_2$ has a generator matrix of the form $\begin{bmatrix} I\_{k\_1} & A' & B\_1' + uB\_2' \\\\ \mathbf{0} & uI\_{k\_2}' & uD' \end{bmatrix}$. Moreover, any binary linear code of length $\alpha$ can be generated by the matrix $\begin{bmatrix} I\_{k\_0}' & A\_1' \end{bmatrix}$. Since $\mathcal{C}$ is a Z2Z2[u]-linear code of length $\alpha + \beta$, $\mathcal{C}$ can be generated by the following matrix:

$$\begin{bmatrix} I\_{k\_0}' & A\_1' & \mid & T\_{01} & T\_{02} & T\_{03} \\\\ S\_{01} & S\_{02} & \mid & I\_{k\_1} & A' & B\_1' + uB\_2' \\\\ S\_{11} & S\_{12} & \mid & \mathbf{0} & uI\_{k\_2}' & uD' \end{bmatrix}$$

with all of the $T\_{ij}$ and $S\_{ij}$ blocks having binary entries. By applying the necessary row operations to the above matrix, we obtain the desired form.

Example 2.1.2. Let $\mathcal{C}$ be a Z2Z2[u]-linear code with the generator matrix

$$\begin{bmatrix} 1 & 1 & 0 & \mid & 1+u & 1+u \\\\ 0 & 1 & 1 & \mid & 1 & 1+u \end{bmatrix}.$$

First, adding the second row to the first row, we have

$$\begin{bmatrix} 1 & 0 & 1 & \mid & u & 0 \\\\ 0 & 1 & 1 & \mid & 1 & 1+u \end{bmatrix}.$$

Then, multiplying the second row by $u$ and adding it to the first row, we have the following standard form of the generator matrix:


$$\begin{bmatrix} 1 & 0 & 1 & \mid & 0 & u \\\\ 0 & 1 & 1 & \mid & 1 & 1+u \end{bmatrix}.$$

Therefore:

• $\mathcal{C}$ is of type $(3, 2; 1, 1, 0)$.

• $\mathcal{C}$ has $2^{1+2 \cdot 1} = 8$ codewords:


$$\mathcal{C} = \{(0, 0, 0 \mid 0, 0), (1, 0, 1 \mid 0, u), (0, 1, 1 \mid 1, 1+u), (1, 1, 0 \mid 1, 1), (0, 1, 1 \mid 1+u, 1),$$

$$(1, 1, 0 \mid 1+u, 1+u), (0, 0, 0 \mid u, u), (1, 0, 1 \mid u, 0)\}.$$

Moreover, the Gray image $\Psi(\mathcal{C})$ of $\mathcal{C}$ is a simplex code of length 7 with parameters [7, 3, 4], which is the dual of the well-known [7, 4, 3] Hamming code.
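Example 2.1.2 can also be reproduced by brute force, enumerating the R-span of the two generator rows and applying the Gray map. The encoding of R-elements as pairs (p, q) for p + uq is an illustrative assumption, not notation from the text.

```python
# Enumerate the code of Example 2.1.2 from its standard-form generator rows.
# R-elements are encoded as pairs (p, q) = p + u*q.
def r_mul(a, b):
    return ((a[0] * b[0]) % 2, (a[0] * b[1] + a[1] * b[0]) % 2)

def scale(d, row):                      # d * (a-part | b-part)
    a, b = row
    return ([d[0] * x % 2 for x in a], [r_mul(d, y) for y in b])

def add(v, w):
    return ([(x + y) % 2 for x, y in zip(v[0], w[0])],
            [((p1 + p2) % 2, (q1 + q2) % 2)
             for (p1, q1), (p2, q2) in zip(v[1], w[1])])

g1 = ([1, 0, 1], [(0, 0), (0, 1)])      # (1, 0, 1 | 0, u)
g2 = ([0, 1, 1], [(1, 0), (1, 1)])      # (0, 1, 1 | 1, 1+u)
R = [(0, 0), (1, 0), (0, 1), (1, 1)]

code = {(tuple(v[0]), tuple(v[1]))
        for v in (add(scale(d1, g1), scale(d2, g2)) for d1 in R for d2 in R)}
print(len(code))                        # 8 codewords, as listed above

def gray(v):                            # the Gray map Psi
    a, b = v
    return list(a) + [q for _, q in b] + [(p + q) % 2 for p, q in b]

print(sorted({sum(gray(c)) for c in code}))   # [0, 4]: every nonzero word has weight 4
```

Every nonzero Gray image having weight exactly 4 is precisely the defining property of the [7, 3, 4] simplex code.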

#### 2.2 Duality on Z2Z2[u]-linear codes and parity-check matrices

In the literature, there is a very well-known concept of duality for codes over finite fields and rings. If $\mathcal{C}$ is a linear code of length $n$ over $\mathbb{F}\_q$, the dual code $\mathcal{C}^{\perp}$ of $\mathcal{C}$ in $\mathbb{F}\_q^n$ is the set of all vectors that are orthogonal to every codeword of $\mathcal{C}$. A generator matrix for $\mathcal{C}^{\perp}$ is called a parity-check matrix of $\mathcal{C}$. In this part, we determine the standard form of the parity-check matrix of a Z2Z2[u]-linear code $\mathcal{C}$. Let us begin with the definition of an inner product on $\mathbb{Z}\_2^{\alpha} \times \mathcal{R}^{\beta}$.

Definition 2.2.1. Let $v$ and $w$ be two elements in $\mathbb{Z}\_2^{\alpha} \times \mathcal{R}^{\beta}$. The inner product of $v$ and $w$ is defined by

$$\langle v, w \rangle = u \left( \sum\_{i=1}^{\alpha} v\_i w\_i \right) + \sum\_{j=\alpha+1}^{\alpha+\beta} v\_j w\_j \in \mathcal{R}.$$

Further, the dual code $\mathcal{C}^{\perp}$ of a Z2Z2[u]-linear code $\mathcal{C}$ is defined in the usual way with respect to this inner product as

$$\mathcal{C}^{\perp} = \left\{ w \in \mathbb{Z}\_2^a \times \mathcal{R}^\beta \, | \, \langle v, w \rangle = \mathbf{0} \text{ for all } v \in \mathcal{C} \right\}.$$

Hence, if $\mathcal{C}$ is a Z2Z2[u]-linear code, then $\mathcal{C}^{\perp}$ is also a Z2Z2[u]-linear code. It is worth mentioning that two codewords of a Z2Z2[u]-linear code may be orthogonal to each other even though their binary parts are not. For example, $(1, 1 \mid 1+u, u)$ and $(0, 1 \mid u, u)$ in $\mathbb{Z}\_2^2 \times \mathcal{R}^2$ are orthogonal to each other, whereas neither their binary components nor their $\mathcal{R}$-components are. Moreover, the Gray map $\Psi$ preserves orthogonality.
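Indeed, writing out the inner product of Definition 2.2.1 for this pair and using $u^2 = 0$ together with characteristic 2:

$$\langle v, w \rangle = u(1 \cdot 0 + 1 \cdot 1) + \big((1+u)u + u \cdot u\big) = u + (u + 0) = 2u = 0,$$

while the binary-part sum is $1 \cdot 0 + 1 \cdot 1 = 1 \neq 0$ and the $\mathcal{R}$-part sum is $(1+u)u + u \cdot u = u \neq 0$.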

We give the standard form of the parity-check matrices of Z2Z2[u]-linear codes in the following theorem.

Theorem 2.2.2. [3] Let $\mathcal{C}$ be a Z2Z2[u]-linear code of type $(\alpha, \beta; k\_0, k\_1, k\_2)$ with the standard-form generator matrix (1). Then the parity-check matrix of $\mathcal{C}$ (the generator matrix of the dual code $\mathcal{C}^{\perp}$) is given by

$$H\_s = \begin{bmatrix} -A\_1^t & I\_{\alpha-k\_0} & -uS^t & \mathbf{0} & \mathbf{0}\\\\ -T^t & \mathbf{0} & -\left(B\_1 + uB\_2\right)^t + D^t A^t & -D^t & I\_{\beta-k\_1-k\_2} \\\\ \mathbf{0} & \mathbf{0} & -uA^t & uI\_{k\_2} & \mathbf{0} \end{bmatrix} \tag{2}$$

Furthermore, $|\mathcal{C}^{\perp}| = 2^{\alpha-k\_0}\, 2^{2(\beta-k\_1-k\_2)}\, 2^{k\_2}$.

Proof. It can be easily checked that $G\_s \cdot H\_s^t = \mathbf{0}$. Therefore every row of $H\_s$ is orthogonal to the rows of $G\_s$. Further, since the generator matrices in the standard form of linear codes contain the minimum number of rows, $\mathcal{C}^{\perp}$ has $2^{\alpha-k\_0} 2^{2(\beta-k\_1-k\_2)} 2^{k\_2}$ codewords. Hence, $|\mathcal{C}|\,|\mathcal{C}^{\perp}| = 2^{k\_0} 2^{2k\_1} 2^{k\_2} \cdot 2^{\alpha-k\_0} 2^{2(\beta-k\_1-k\_2)} 2^{k\_2} = 2^{\alpha+2\beta}$. So the rows of the matrix $H\_s$ are not only orthogonal to $\mathcal{C}$, but they also generate the whole dual space.
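For the code of Example 2.1.2 ($\alpha = 3$, $\beta = 2$), the identity $|\mathcal{C}|\,|\mathcal{C}^{\perp}| = 2^{\alpha+2\beta}$ can be confirmed by brute force over all $2^3 \cdot 4^2 = 128$ vectors. The sketch below assumes the pair encoding (p, q) for p + uq; the helper names are illustrative.

```python
# Brute-force count of the dual of the Example 2.1.2 code under the inner
# product <v, w> = u*sum(v_i w_i) + sum(v_j w_j). R-elements are encoded
# as pairs (p, q) = p + u*q.
def r_mul(a, b):
    return ((a[0] * b[0]) % 2, (a[0] * b[1] + a[1] * b[0]) % 2)

def r_add(a, b):
    return ((a[0] + b[0]) % 2, (a[1] + b[1]) % 2)

def inner(v, w):
    s = sum(x * y for x, y in zip(v[0], w[0])) % 2   # binary-part sum
    total = (0, s)                                   # u times that sum
    for x, y in zip(v[1], w[1]):
        total = r_add(total, r_mul(x, y))
    return total

R = [(0, 0), (1, 0), (0, 1), (1, 1)]
g1 = ([1, 0, 1], [(0, 0), (0, 1)])                   # (1, 0, 1 | 0, u)
g2 = ([0, 1, 1], [(1, 0), (1, 1)])                   # (0, 1, 1 | 1, 1+u)

def scale(d, row):
    return ([d[0] * x % 2 for x in row[0]], [r_mul(d, y) for y in row[1]])

def add(v, w):
    return ([(x + y) % 2 for x, y in zip(v[0], w[0])],
            [r_add(p, q) for p, q in zip(v[1], w[1])])

code = [add(scale(d1, g1), scale(d2, g2)) for d1 in R for d2 in R]

dual_size = sum(
    all(inner(c, ([a0, a1, a2], [b0, b1])) == (0, 0) for c in code)
    for a0 in (0, 1) for a1 in (0, 1) for a2 in (0, 1)
    for b0 in R for b1 in R)
print(dual_size)          # 16, so |C| * |C_dual| = 8 * 16 = 128 = 2^(3 + 2*2)
```

This matches the theorem: $|\mathcal{C}^{\perp}| = 2^{3-1}\,2^{2(2-1-0)}\,2^{0} = 16$.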

Z2Z2[u]-Linear and Z2Z2[u]-Cyclic Codes DOI: http://dx.doi.org/10.5772/intechopen.86281


Example 2.2.3. Let $C$ be a Z2Z2[u]-linear code of type $(3, 2; 1, 1, 0)$ with the standard form of the generator matrix in (2). Then the parity-check matrix of $C$ is

$$
H_s = \begin{bmatrix} -A_1^t & I_{3-1} & \vert & -uS^t & \mathbf{0} \\ -T^t & \mathbf{0} & \vert & -(B_1 + uB_2)^t + D^tA^t & I_{2-1-0} \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 & \vert & u & 0 \\ 1 & 0 & 1 & \vert & u & 0 \\ 1 & 0 & 0 & \vert & 1+u & 1 \end{bmatrix}.
$$

Therefore, $C^{\perp}$ is of type $(3, 2; 2, 1, 0)$ and has $2^2\,4^1\,2^0 = 16$ codewords. The Gray image $\Psi(C^{\perp})$ is the well-known Hamming code with parameters $[7, 4, 3]$.
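To make the example concrete, the dual code can be enumerated directly from the rows of this parity-check matrix. The sketch below is ours, not the chapter's: elements of $R = \mathbb{Z}_2 + u\mathbb{Z}_2$ are stored as pairs `(a, b)` meaning $a + ub$, binary coordinates absorb an $R$-scalar through reduction modulo $u$, and the Gray map $\Psi(a + ub) = (b, a + b)$ is one common linear choice for this ring.

```python
# Sanity check of Example 2.2.3: enumerate C-perp and Gray-map it.
# R = Z2 + uZ2 with u^2 = 0; elements encoded as (a, b) meaning a + u*b.
from itertools import product

def r_mul(x, y):                          # multiplication in R (u^2 = 0)
    (a, b), (c, d) = x, y
    return ((a * c) % 2, (a * d + b * c) % 2)

def gray(x):                              # Gray map a + u*b -> (b, a + b)
    a, b = x
    return (b, (a + b) % 2)

# Rows of the parity-check matrix H_s: 3 binary coords | 2 coords over R.
rows = [((0, 1, 0), ((0, 1), (0, 0))),    # (0 1 0 | u    0)
        ((1, 0, 1), ((0, 1), (0, 0))),    # (1 0 1 | u    0)
        ((1, 0, 0), ((1, 1), (1, 0)))]    # (1 0 0 | 1+u  1)

R = [(0, 0), (1, 0), (0, 1), (1, 1)]      # 0, 1, u, 1+u
dual = set()
for c1, c2, c3 in product(R, repeat=3):   # all R-linear combinations of rows
    bin_part = [0, 0, 0]
    r_part = [(0, 0), (0, 0)]
    for c, (bv, rv) in zip((c1, c2, c3), rows):
        for i in range(3):                # binary coords: scalar acts mod u
            bin_part[i] = (bin_part[i] + c[0] * bv[i]) % 2
        for i in range(2):
            s = r_mul(c, rv[i])
            r_part[i] = ((r_part[i][0] + s[0]) % 2, (r_part[i][1] + s[1]) % 2)
    dual.add(tuple(bin_part) + tuple(r_part))

images = {tuple(w[:3]) + sum((gray(x) for x in w[3:]), ()) for w in dual}
weights = sorted(sum(v) for v in images if any(v))
print(len(dual), min(weights), max(weights))   # prints: 16 3 7
```

Since this Gray map is linear, the image of the 16-element dual is a binary code of length $3 + 2\cdot 2 = 7$, size 16, and minimum distance 3, matching the $[7, 4, 3]$ Hamming parameters claimed above.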

Corollary 2.2.4. If $C$ is a Z2Z2[u]-linear code of type $(\alpha, \beta; k_0, k_1, k_2)$, then $C^{\perp}$ is of type $(\alpha, \beta; \alpha - k_0, \beta - k_1 - k_2, k_2)$.
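The type arithmetic in Corollary 2.2.4, together with the size count from the proof above, can be checked mechanically; the helper names below are ours, for illustration only.

```python
# Type bookkeeping for Z2Z2[u]-linear codes of type (alpha, beta; k0, k1, k2).
def size(alpha, beta, k0, k1, k2):
    return 2**k0 * 4**k1 * 2**k2               # |C| = 2^k0 4^k1 2^k2

def dual_type(alpha, beta, k0, k1, k2):        # Corollary 2.2.4
    return (alpha, beta, alpha - k0, beta - k1 - k2, k2)

t = (3, 2, 1, 1, 0)                            # the code of Example 2.2.3
td = dual_type(*t)
print(td, size(*td))                           # (3, 2, 2, 1, 0) 16

# |C| * |C-perp| = 2^(alpha + 2*beta), as in the proof above.
assert size(*t) * size(*td) == 2**(t[0] + 2 * t[1])
```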

### 3. Z2Z2[u]-linear cyclic codes

Cyclic codes form a small but highly structured and important subclass of linear codes. They have a very rich algebraic structure that allows them to be encoded and decoded in a relatively easy way, and hence they are much easier to implement. Since cyclic codes can be identified with ideals in a certain ring, they are also of considerable interest from an algebraic point of view. Cyclic codes over finite fields were first introduced by E. Prange in 1957 and 1959 in two Air Force Cambridge Research Laboratory reports. In this section we study the structure of Z2Z2[u]-linear cyclic codes for a positive odd integer β. We give the generator polynomials and the spanning sets for a Z2Z2[u]-linear cyclic code C.

Definition 3.1. An $R$-submodule $C$ of $\mathbb{Z}_2^{\alpha} \times R^{\beta}$ is called a Z2Z2[u]-linear cyclic code if for any codeword $v = (a_0, a_1, \dots, a_{\alpha-1}, b_0, b_1, \dots, b_{\beta-1}) \in C$, its cyclic shift $T(v) = (a_{\alpha-1}, a_0, \dots, a_{\alpha-2}, b_{\beta-1}, b_0, \dots, b_{\beta-2})$ is also in $C$.
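In code, the shift $T$ simply rotates the two blocks independently, one step to the right each. A minimal sketch (the function name is ours):

```python
# Cyclic shift T on Z2^alpha x R^beta: rotate each block one step right.
def T(v, alpha):
    a, b = v[:alpha], v[alpha:]
    return a[-1:] + a[:-1] + b[-1:] + b[:-1]

# alpha = 3, beta = 3; the R-part entries are just labels here.
v = (0, 1, 1, "b0", "b1", "b2")
print(T(v, 3))            # (1, 0, 1, 'b2', 'b0', 'b1')
```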

Lemma 3.2. If $C$ is a Z2Z2[u]-linear cyclic code, then the dual code $C^{\perp}$ is also a Z2Z2[u]-linear cyclic code.

Proof. Let $C$ be a Z2Z2[u]-linear cyclic code and $w = (d_0, d_1, \dots, d_{\alpha-1}, e_0, e_1, \dots, e_{\beta-1}) \in C^{\perp}$. We will show that $T(w) \in C^{\perp}$. Since $w \in C^{\perp}$, for $v = (a_0, a_1, \dots, a_{\alpha-1}, b_0, b_1, \dots, b_{\beta-1}) \in C$ we have

$$\langle v, w \rangle = u(a_0 d_0 + a_1 d_1 + \dots + a_{\alpha-1} d_{\alpha-1}) + \left( b_0 e_0 + b_1 e_1 + \dots + b_{\beta-1} e_{\beta-1} \right) = 0.$$

Now, let $\theta = \mathrm{lcm}(\alpha, \beta)$. Since $C$ is cyclic, $T^{\theta}(v) = v$, and $T^{\theta-1}(v) = (a_1, a_2, \dots, a_0, b_1, b_2, \dots, b_0) = z \in C$. Therefore,



$$
\begin{aligned}
0 = \langle z, w \rangle &= u(a_1 d_0 + a_2 d_1 + \dots + a_0 d_{\alpha-1}) + \left( b_1 e_0 + b_2 e_1 + \dots + b_0 e_{\beta-1} \right) \\
&= u(a_0 d_{\alpha-1} + a_1 d_0 + \dots + a_{\alpha-1} d_{\alpha-2}) + \left( b_0 e_{\beta-1} + b_1 e_0 + \dots + b_{\beta-1} e_{\beta-2} \right) \\
&= \langle v, T(w) \rangle.
\end{aligned}
$$

Hence, $T(w) \in C^{\perp}$ and so $C^{\perp}$ is also cyclic.

Let $C \subseteq \mathbb{Z}_2^{\alpha} \times R^{\beta}$ and $v = (a_0, a_1, \dots, a_{\alpha-1}, b_0, b_1, \dots, b_{\beta-1}) \in C$. The codeword $v$ can be identified with a module element consisting of two polynomials, each from a different ring, in $R_{\alpha,\beta} = \mathbb{Z}_2[x]/\langle x^{\alpha} - 1\rangle \times R[x]/\langle x^{\beta} - 1\rangle$ such that

$$
v(x) = \left( a_0 + a_1 x + \dots + a_{\alpha-1}x^{\alpha-1},\; b_0 + b_1 x + \dots + b_{\beta-1}x^{\beta-1} \right) = (a(x), b(x)).
$$

This identification gives a one-to-one correspondence between the elements of $\mathbb{Z}_2^{\alpha} \times R^{\beta}$ and the elements of $R_{\alpha,\beta}$.

Definition 3.3. Let $d(x) \in R[x]$ and $(v(x), w(x)) \in R_{\alpha,\beta}$. We define the following scalar multiplication:

$$d(x) \ast (v(x), w(x)) = (d(x)v(x) \bmod u,\; d(x)w(x)).$$

This multiplication is well defined; moreover, $R_{\alpha,\beta}$ is an $R[x]$-module with respect to this multiplication.

The codewords of $C$ may be represented as polynomials in $R_{\alpha,\beta}$ by using the above identification. Thus, if $C \subseteq \mathbb{Z}_2^{\alpha} \times R^{\beta}$ is a cyclic code, then the element $v = (a_0, a_1, \dots, a_{\alpha-1}, b_0, b_1, \dots, b_{\beta-1}) \in C$ can be viewed as

$$
v(x) = \left( a_0 + a_1 x + \dots + a_{\alpha-1}x^{\alpha-1},\; b_0 + b_1 x + \dots + b_{\beta-1}x^{\beta-1} \right) \in R_{\alpha,\beta}.
$$

Further, the property $T(v) = (a_{\alpha-1}, a_0, \dots, a_{\alpha-2}, b_{\beta-1}, b_0, \dots, b_{\beta-2}) \in C$ translates to

$$
x \ast v(x) = \left( a_{\alpha-1} + a_0 x + \dots + a_{\alpha-2}x^{\alpha-1},\; b_{\beta-1} + b_0 x + \dots + b_{\beta-2}x^{\beta-1} \right) \in R_{\alpha,\beta}.
$$
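This correspondence is easy to see computationally: multiplying by $x$ modulo $x^n - 1$ rotates the coefficient vector one step, which is exactly the shift $T$ acting on each block. A small illustrative helper (ours):

```python
# Multiplication by x in F[x]/<x^n - 1> rotates the coefficient list right,
# so x * v(x) realizes the cyclic shift on each component of R_{alpha,beta}.
def x_shift(coeffs):                        # coeffs[i] = coefficient of x^i
    n = len(coeffs)
    return [coeffs[(i - 1) % n] for i in range(n)]

a = [1, 0, 1, 1, 0]                         # 1 + x^2 + x^3 in Z2[x]/<x^5 - 1>
print(x_shift(a))                           # [0, 1, 0, 1, 1] = x + x^3 + x^4
```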

Hence we give the following theorem.

Theorem 3.4. A code $C$ is a Z2Z2[u]-linear cyclic code if and only if $C$ is an $R[x]$-submodule of $R_{\alpha,\beta}$.

#### 3.1 The generators and the spanning sets of Z2Z2[u]-linear cyclic codes

Let $C$ be a Z2Z2[u]-linear code. We know that both $C$ and $R[x]/\langle x^{\beta} - 1\rangle$ are $R[x]$-modules. Then we define the following map:

$$
\Phi: \mathcal{C} \to \mathcal{R}[\mathfrak{x}]/\langle \mathfrak{x}^{\beta} - \mathbf{1} \rangle
$$

$$
\Phi(f\_1(\mathfrak{x}), f\_2(\mathfrak{x})) = f\_2(\mathfrak{x}).
$$

It is clear that $\Phi$ is a module homomorphism, where $\mathrm{Im}(\Phi)$ is an $R[x]$-submodule of $R[x]/\langle x^{\beta} - 1\rangle$ and $\ker(\Phi)$ is a submodule of $C$. Since $\Phi(C)$ is an ideal of the ring $R[x]/\langle x^{\beta} - 1\rangle$, we have

$\Phi(C) = \langle g(x) + ua(x)\rangle$ with $a(x) \mid g(x) \mid (x^{\beta} - 1) \bmod 2$. Further, the kernel of $\Phi$ is

$$\ker(\Phi) = \left\{ (f(x), \mathbf{0}) \in C \;\middle|\; f(x) \in \mathbb{Z}_2[x]/\langle x^{\alpha} - 1\rangle \right\}.$$


Now, define the set

$$I = \left\{ f(x) \in \mathbb{Z}_2[x]/\langle x^{\alpha} - 1\rangle \;\middle|\; (f(x), \mathbf{0}) \in \ker(\Phi) \right\}.$$

It is clear that $I$ is an ideal and hence a cyclic code in the ring $\mathbb{Z}_2[x]/\langle x^{\alpha} - 1\rangle$. So, by the well-known results about the generators of binary cyclic codes, $I$ is generated by $f(x)$, i.e., $I = \langle f(x)\rangle$.

Now, let $(m(x), \mathbf{0}) \in \ker(\Phi)$. So we have $m(x) \in I = \langle f(x)\rangle$, and hence $m(x) = k(x)f(x)$ for some polynomial $k(x) \in \mathbb{Z}_2[x]/\langle x^{\alpha} - 1\rangle$. Therefore $(m(x), \mathbf{0}) = k(x) \ast (f(x), \mathbf{0})$, and this implies that $\ker(\Phi)$ is a submodule of $C$ generated by one element of the form $(f(x), \mathbf{0})$ with $f(x) \mid (x^{\alpha} - 1) \bmod 2$. Then, by the First Isomorphism Theorem, we have

$$\mathcal{C}/\ker \Phi \cong \langle \mathbf{g}(\mathbf{x}) + \mathfrak{u}a(\mathbf{x}) \rangle.$$

Let $(l(x), g(x) + ua(x)) \in C$ be such that $\Phi(l(x), g(x) + ua(x)) = g(x) + ua(x)$. This discussion shows that any Z2Z2[u]-linear cyclic code $C$ can be generated as an $R[x]$-submodule of $R_{\alpha,\beta}$ by two elements of the form $(f(x), \mathbf{0})$ and $(l(x), g(x) + ua(x))$; that is, every codeword of $C$ can be written as

$$d\_1(\mathfrak{x}) \* (f(\mathfrak{x}), \mathfrak{O}) + d\_2(\mathfrak{x})(l(\mathfrak{x}), \mathfrak{g}(\mathfrak{x}) + \mathfrak{u}a(\mathfrak{x})) $$

where $d_1(x), d_2(x) \in R[x]$. Since the polynomial $d_1(x)$ can be restricted to a polynomial in $\mathbb{Z}_2[x]$, we can write

$$\mathcal{C} = \langle (f(\mathfrak{x}), \mathfrak{O}), (l(\mathfrak{x}), \mathfrak{g}(\mathfrak{x}) + \mathfrak{u}a(\mathfrak{x})) \rangle$$

with binary polynomials $f(x)$ and $l(x)$, where $f(x) \mid (x^{\alpha} - 1) \bmod 2$ and $a(x) \mid g(x) \mid (x^{\beta} - 1) \bmod 2$.

Theorem 3.1.1. [4] Let $C$ be a Z2Z2[u]-linear cyclic code in $R_{\alpha,\beta}$. Then $C$ can be identified uniquely as $C = \langle (f(x), \mathbf{0}), (l(x), g(x) + ua(x))\rangle$, where $f(x) \mid (x^{\alpha} - 1) \bmod 2$, $a(x) \mid g(x) \mid (x^{\beta} - 1) \bmod 2$, and $l(x)$ is a binary polynomial satisfying $\deg(l(x)) < \deg(f(x))$ and $f(x) \mid \frac{x^{\beta} - 1}{a(x)}\, l(x) \bmod u$.

Proof. We can easily see from the above discussion and Theorem 11 in [5] that $C = \langle (f(x), \mathbf{0}), (l(x), g(x) + ua(x))\rangle$ with the polynomials $f(x)$, $l(x)$, $g(x)$, and $a(x)$ as stated in the theorem. So we only need to show the uniqueness of the generator polynomials. Since $\langle f(x)\rangle$ and $\langle g(x) + ua(x)\rangle$ are cyclic codes over $\mathbb{Z}_2$ and over $R$, respectively, the polynomials $f(x)$, $g(x)$, and $a(x)$ are unique. Now suppose that $\deg(l(x)) > \deg(f(x))$ with $\deg(l(x)) - \deg(f(x)) = t$. Let

$$\mathcal{D} = \langle (f(x), \mathbf{0}),\; (l(x) + x^t f(x),\; g(x) + ua(x))\rangle = \langle (f(x), \mathbf{0}),\; (l(x), g(x) + ua(x))\rangle + x^t \ast \langle (f(x), \mathbf{0})\rangle.$$

Therefore $\mathcal{D} \subseteq C$. On the other hand,

$$
(l(x), g(x) + ua(x)) = (l(x) + x^t f(x),\; g(x) + ua(x)) - x^t \ast (f(x), \mathbf{0}).
$$

So, $C \subseteq \mathcal{D}$ and hence $\mathcal{D} = C$.


Definition 3.1.2. Let $N$ be an $R$-module. A linearly independent subset $P$ of $N$ that spans $N$ is called a basis of $N$. If an $R$-module has a basis, then it is called a free $R$-module.

Note that for a Z2Z2[u]-linear cyclic code $C = \langle (f(x), \mathbf{0}), (l(x), g(x) + ua(x))\rangle$, if $g(x) \neq 0$, then $C$ is a free $R$-module. However, if $g(x) = 0$ and $a(x) \neq 0$, then $C$ is not a free $R$-module. But we can still present $C$ with minimal spanning sets. The following theorem determines the minimal spanning sets for a Z2Z2[u]-linear cyclic code $C$.

Theorem 3.1.3. [4] Let $C = \langle (f(x), \mathbf{0}), (l(x), g(x) + ua(x))\rangle$ be a Z2Z2[u]-linear cyclic code in $R_{\alpha,\beta}$ with $f(x)$, $l(x)$, $g(x)$, and $a(x)$ as in Theorem 3.1.1. Let

$$
\begin{aligned}
S_1 &= \bigcup_{i=0}^{\deg(h_f(x)) - 1} \{x^i \ast (f(x), \mathbf{0})\}, \\
S_2 &= \bigcup_{i=0}^{\deg(h_g(x)) - 1} \{x^i \ast (l(x), g(x) + ua(x))\}, \\
S_3 &= \bigcup_{i=0}^{\deg(b(x)) - 1} \{x^i \ast (h_g(x)l(x),\; u\,h_g(x)a(x))\},
\end{aligned}
$$

where $f(x)h_f(x) = x^{\alpha} - 1$, $g(x)h_g(x) = x^{\beta} - 1$, and $g(x) = a(x)b(x)$. Then $S = S_1 \cup S_2 \cup S_3$ forms a minimal spanning set for $C$ as an $R$-module. Furthermore, $C$ has $2^{\deg(h_f(x))}\, 4^{\deg(h_g(x))}\, 2^{\deg(b(x))}$ codewords.

Proof. Please see the proof of Theorem 4 in [4].

Example 3.1.4. Let $C = \langle (f(x), \mathbf{0}), (l(x), g(x) + ua(x))\rangle$ be a Z2Z2[u]-linear cyclic code in $\mathbb{Z}_2[x]/\langle x^7 - 1\rangle \times R[x]/\langle x^7 - 1\rangle$ with the following generator polynomials:

$$
\begin{aligned}
f(x) &= x^7 - 1, & l(x) &= 1 + x^2 + x^3, \\
g(x) &= 1 + x + x^2 + x^3 + x^4 + x^5 + x^6, & a(x) &= 1 + x^2 + x^3.
\end{aligned}
$$

Therefore, we have $g(x) = a(x)b(x) \Rightarrow b(x) = 1 + x + x^3$ and $g(x)h_g(x) = x^7 - 1 \Rightarrow h_g(x) = 1 + x$. Hence, by using the minimal spanning sets in Theorem 3.1.3, we can write the generator matrix for the Z2Z2[u]-linear cyclic code $C$ as follows:

$$
G = \left[\begin{array}{ccccccc|ccccccc}
1 & 0 & 1 & 1 & 0 & 0 & 0 & 1+u & 1 & 1+u & 1+u & 1 & 1 & 1 \\
1 & 1 & 1 & 0 & 1 & 0 & 0 & u & u & u & 0 & u & 0 & 0 \\
0 & 1 & 1 & 1 & 0 & 1 & 0 & 0 & u & u & u & 0 & u & 0 \\
0 & 0 & 1 & 1 & 1 & 0 & 1 & 0 & 0 & u & u & u & 0 & u
\end{array}\right].
$$
It is worth mentioning that the Gray image $\Phi(C)$ of $C$ is a linear binary code with parameters [21, 5, 10], which are optimal. If a code $C$ has the best minimum distance compared to the existing bounds for fixed length and size, then $C$ is called an optimal or good-parameter code.
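The codeword count $2^{\deg(h_f)}\,4^{\deg(h_g)}\,2^{\deg(b)}$ from Theorem 3.1.3 can be verified for this example with carry-free polynomial division over $\mathbb{Z}_2$; the bitmask encoding and helper names below are ours, not the chapter's.

```python
# Counting codewords for Example 3.1.4 via Theorem 3.1.3. Polynomials over Z2
# are stored as integer bitmasks, bit i = coefficient of x^i.
def z2_divmod(num, den):
    q = 0
    while num and num.bit_length() >= den.bit_length():
        shift = num.bit_length() - den.bit_length()
        q ^= 1 << shift
        num ^= den << shift                 # subtraction over Z2 is XOR
    return q, num                           # quotient, remainder

x7_minus_1 = (1 << 7) | 1                   # x^7 - 1 = x^7 + 1 over Z2
f = x7_minus_1                              # f(x) = x^7 - 1
g = 0b1111111                               # g(x) = 1 + x + ... + x^6
a = 0b1101                                  # a(x) = 1 + x^2 + x^3
h_f, rf = z2_divmod(x7_minus_1, f)          # f * h_f = x^7 - 1
h_g, rg = z2_divmod(x7_minus_1, g)          # g * h_g = x^7 - 1
b, ra = z2_divmod(g, a)                     # g = a * b
assert rf == rg == ra == 0                  # all divisions are exact

deg = lambda p: p.bit_length() - 1
size = 2**deg(h_f) * 4**deg(h_g) * 2**deg(b)
print(deg(h_f), deg(h_g), deg(b), size)     # 0 1 3 32
```

Here $|C| = 32 = 2^5$, so the Gray image has dimension 5 and length $7 + 2\cdot 7 = 21$, consistent with the [21, 5, 10] parameters above.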

Example 3.1.5. Let us consider the cyclic code $C = \langle (f(x), \mathbf{0}), (l(x), g(x) + ua(x))\rangle$ in $\mathbb{Z}_2[x]/\langle x^7 - 1\rangle \times R[x]/\langle x^9 - 1\rangle$ with generators:

$$\begin{aligned} f(\boldsymbol{\mathfrak{x}}) &= \mathbf{1} + \boldsymbol{\mathfrak{x}}^2 + \boldsymbol{\mathfrak{x}}^3 + \boldsymbol{\mathfrak{x}}^4, \boldsymbol{l}(\boldsymbol{\mathfrak{x}}) = \mathbf{1} + \boldsymbol{\mathfrak{x}} + \boldsymbol{\mathfrak{x}}^3, \\ g(\boldsymbol{\mathfrak{x}}) &= \mathbf{1} + \boldsymbol{\mathfrak{x}} + \boldsymbol{\mathfrak{x}}^2 + \boldsymbol{\mathfrak{x}}^3 + \boldsymbol{\mathfrak{x}}^4 + \boldsymbol{\mathfrak{x}}^5 + \boldsymbol{\mathfrak{x}}^6 + \boldsymbol{\mathfrak{x}}^7 + \boldsymbol{\mathfrak{x}}^8, \boldsymbol{a}(\boldsymbol{\mathfrak{x}}) = \mathbf{1} + \boldsymbol{\mathfrak{x}} + \boldsymbol{\mathfrak{x}}^2. \end{aligned}$$

Again, by using the minimal spanning sets in the above theorem, we have the following generator matrix for $C$:

$$
G = \left[\begin{array}{ccccccc|ccccccccc}
1 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 1 & 0 & 1 & 0 & 0 & 0 & 1+u & 1+u & 1+u & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & 0 & 1 & 1 & 1 & 0 & 0 & u & 0 & 0 & u & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & u & 0 & 0 & u & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & u & 0 & 0 & u & 0 & 0 & 0 \\
1 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & u & 0 & 0 & u & 0 & 0 \\
1 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & u & 0 & 0 & u & 0 \\
1 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & u & 0 & 0 & u
\end{array}\right].
$$

Hence $C$ is of type $(7, 9; 3, 1, 6)$ and has $2^{11} = 2048$ codewords. The Gray image $\Phi(C)$ of $C$ is a [25, 11, 4] linear binary code; moreover, this generator matrix can also be written in standard form.

#### 4. Conclusion

In this chapter we introduced Z2Z2[u]-linear and Z2Z2[u]-cyclic codes. We determined the standard forms of the generator and parity-check matrices of Z2Z2[u]-linear codes. We further gave the generator polynomials and minimal spanning sets for Z2Z2[u]-linear cyclic codes. We also presented some illustrative examples of both Z2Z2[u]-linear codes and Z2Z2[u]-cyclic codes.

### Acknowledgements

G ¼

4. Conclusion

78

Coding Theory

101 1 100 0 0 0 000000 0101 1 10 0 0 0 000000 00101 1 1 0 0 0 000000 1 1010001 þ u 1 þ u 1 þ u 111111 101 1 100 u 0 0 u 00000 0101110 0 u 0 0 u 0000 00101 1 1 0 0 u 0 0 u 000 100101 1 0 0 0 u 0 0 u 0 0 1 100101 0 0 0 0 u 0 0 u 0 1 1 10010 0 0 0 00 u 0 0 u

#### 4. Conclusion

In this chapter we introduced Z2Z2[u]-linear and Z2Z2[u]-cyclic codes. We determined the standard forms of the generator and parity-check matrices of Z2Z2[u]-linear codes. We further gave the generator polynomials and minimal spanning sets for Z2Z2[u]-linear cyclic codes. We also presented some illustrative examples of both Z2Z2[u]-linear codes and Z2Z2[u]-cyclic codes.

### Acknowledgements

The author would like to thank Professors Irfan Siap and Taher Abualrub for their valuable comments and suggestions to improve the quality of the chapter.

### Author details

Ismail Aydogdu
Yildiz Technical University, Istanbul, Turkey

\*Address all correspondence to: iaydogdu@yildiz.edu.tr

© 2019 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Z2Z2[u]-Linear and Z2Z2[u]-Cyclic Codes. DOI: http://dx.doi.org/10.5772/intechopen.86281

#### References

[1] Hammons AR, Kumar V, Calderbank AR, Sloane NJA, Solé P. The Z4-linearity of Kerdock, Preparata, Goethals, and related codes. IEEE Transactions on Information Theory. 1994;40:301-319


[2] Borges J, Fernández-Córdoba C, Pujol J, Rifà J, Villanueva M. Z2Z4 linear codes: Generator matrices and duality. Designs, Codes and Cryptography. 2010;54(2):167-179

[3] Aydogdu I, Abualrub T, Siap I. On Z2Z2[u]additive codes. International Journal of Computer Mathematics. 2015; 92(9):1806-1814

[4] Aydogdu I, Abualrub T, Siap I. Z2Z2[u]-cyclic and constacyclic codes. IEEE Transactions on Information Theory. 2017;63(8):4883-4893

[5] Abualrub T, Siap I, Aydin N. Z2Z4 additive cyclic codes. IEEE Transactions on Information Theory. 2014;60(3): 1508-1514

#### Chapter 5


## The Adaptive Coding Techniques for Dependable Medical Network Channel

DOI: http://dx.doi.org/10.5772/intechopen.83615

Emtithal Ahmed Talha and Ryuji Kohno

#### Abstract

The readily existing cellular networks play an important role in daily-life communications by integrating a wide variety of wireless multimedia services at high data transmission rates, providing much more than basic voice calls. In order to meet the demand for a reliable medical network infrastructure economically and to establish reliable medical transmission via cellular networks, this chapter designs a dependable wireless medical network on top of an existing mobile cellular network with sophisticated channel coding technologies; the resulting network is adopted as the "Medical Network Channel (MNC)" system. Adding an adaptive outer code to an existing cellular standard code, used as the inner code, forms the concatenated channel that realizes the MNC design. The adaptive design of the extra outer channel codes depends on the Quality of Service (QoS) of Wireless Body Area Networks (WBANs) and on the errors remaining after the inner cellular decoders. The adaptive extra code has been optimized for the MNC across the different medical data QoS priority levels. The fulfillment of the QoS constraints for the different WBAN medical data is investigated in this chapter for the MNC using theoretical derivations, where acceptable results were achieved.

Keywords: UMTS, LTE, WBANs, QoS, concatenated codes

#### 1. Introduction

A medical telemonitoring system is a telecommunication technique that provides access to healthcare services and is one of the main applications of Medical Information and Communication Technology (MICT). Recently, Information and Communication Technology (ICT) for medical and healthcare applications has drawn substantial attention, since it plays an important role in supporting dependable and effective medical technologies that address significant problems in any society. WBAN technology has recently been standardized as IEEE 802.15.6 [1]. The WBAN standard aims to provide an international standard for short-range, low-power, and extremely reliable wireless communication in the immediate vicinity of the human body, supporting a wide range of data rates, from 75.9 Kbps in the narrow band (NB) up to 15.6 Mbps in the ultra-wide band (UWB), for various sets of applications [2]. WBAN is growing into a key technology for MICT that may transfigure the future of healthcare; therefore, WBANs have attracted a great deal of attention from researchers in both academia and industry over the last few years [3]. QoS is a major concern for WBAN medical applications, and researchers addressing QoS issues in WBANs should handle them seriously and effectively [4]. The cellular standards have been adopted by the European Union (EU) as mandatory for member states and are spreading throughout much of the world. They have been developed with enhancements in all aspects, such as transmission speed, transmission mode, data rate, error correction capability, channel capacity, and QoS in general. UMTS is the main standard of the third generation (3G), with the Wideband Code Division Multiple Access (WCDMA) air interface, and LTE is the main standard of the fourth generation (4G). The bandwidth of WCDMA is 5 MHz, enough to provide data rates of 144 and 384 Kbps, and even 2 Mbps in good conditions. On the other hand, LTE provides uplink peak rates of 75 Mb/s, QoS facilities permitting a transfer latency of less than 5 ms in the radio access network, and carrier bandwidths from 1.4 to 20 MHz. UMTS and LTE cover both Frequency Division Duplex (FDD) and Time Division Duplex (TDD) operation and integrate a wide variety of wireless multimedia services with high data transmission rates, providing much more than basic voice calls [5–7].

The way to connect the WBAN technology with other networks, such as the cellular networks UMTS and LTE, is a key point of this chapter, which serves WBAN medical data transmission through the readily existing cellular networks. The concept, therefore, is to use error control coding and decoding, based on channel codes concatenated with the readily existing cellular codes, to design the "Medical Network Channel (MNC)" system. Reliable transmission of medical data is critical and essential, since it relates to the diagnosis and treatment of human diseases. In the ICT field, reliable transmission procedures must guarantee the detection and correction of erroneous transmissions. However, the transmission channel is often subject to various disturbances and interferences from the external environment (noise).

The chapter focuses on the dependability of a medical telemonitoring system running from WBANs through UMTS and LTE via the "Medical Network Channel (MNC)" system. Dependability of medical data transmission via the MNC is defined as the probability that the MNC system operates successfully, meaning that transmitted medical data reach their destination completely uncorrupted and that minimum performance is guaranteed, with an error rate as low as possible under different environmental conditions. Different methods can be employed to overcome channel impairments, such as increasing transmission power or using the error control coding schemes of information theory. A high level of reliability can be obtained by introducing redundancy bits into the transmitted signal (encoding). The MNC system has been introduced to solve the reliability issues of medical data transmission while considering different QoS levels. WBAN medical data are sensitive, and any type of noise can corrupt them during transmission. Although the cellular standards include significant error detection and correction techniques, these are designed mainly for daily-life communication, and some errors may still be present in the received data. Such transmission errors are not serious for daily communication, but for medical use they can have fatal outcomes. The UMTS and LTE codes are designed for certain channel conditions; if the errors exceed the estimated condition, they become more serious and the cellular network standards perform worse with their preexisting error detection and correction capability. The medical QoS levels have different reliability requirements, based on the BER, for different medical data, along with other constraints [8].


Error control coding plays an important role in addressing these reliability issues. Concatenated codes are one of the error control coding techniques that have been widely adopted for their simplicity and effectiveness [10]. Therefore, the chapter proposes a novel way of conducting error control encoding and decoding under QoS constraints, using concatenated code techniques to build the MNC system. The MNC adds an extra channel code in order to combine the WBANs and the cellular networks, and it optimizes the technical parameters of this extra channel depending on the reliability required for the medical data QoS levels as well as the channel conditions. The adaptive external channel code choice therefore has six pairs of encoders and decoders: three for the QoS levels (high, medium, and low) times two for the channel condition (normal and worse). The UMTS and LTE channel codes are fixed by the European Telecommunication Standard Institute (ETSI) standards [5–7]; their technical parameters cannot be changed to improve system performance. The only option is to design and optimize good adaptive extra outer channel codes with strong decoding capability, yielding better MNC performance for transmitting the WBAN medical data robustly.

The objective of the chapter is to design a reliable and dependable MNC system over the cellular networks that provides reliable transmission for medical data of all QoS levels coming from WBANs. The structural design of the MNC is based on serially concatenated channel codes, which add extra channel codes to the cellular UMTS or LTE codes. The inner channel codes in the MNC are the cellular-standard UMTS or LTE error correction codes, which cannot be changed to enhance error performance because of the international standards. The extra outer channel code in the MNC, on the other hand, is a changeable parameter for achieving the different QoS constraints of medical data, and it uses a convolution code as its main error correction technique. End-to-end connection of the WBANs, with the WBAN standard's own error correction techniques, is then added to the MNC system. According to the QoS of the WBAN output, the MNC can operate with or without the extra code.
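The serial encode/decode ordering described above can be sketched in a few lines. This is only an illustration of the concatenation structure: a toy rate-1/3 repetition code stands in for the adaptive extra outer code, and the inner cellular (UMTS/LTE) code is left as an identity placeholder, since its real implementation is fixed by the standard.

```python
def outer_encode(bits):
    # Stand-in for the adaptive extra outer code: rate-1/3 repetition.
    return [b for b in bits for _ in range(3)]

def outer_decode(bits):
    # Majority vote over each group of three received bits.
    return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

def inner_encode(bits):
    # Placeholder for the fixed cellular (UMTS/LTE) channel code.
    return list(bits)

def inner_decode(bits):
    return list(bits)

def mnc_transmit(data, channel):
    # Serial concatenation: outer code applied first at the sender,
    # inner decoder applied first at the receiver.
    received = channel(inner_encode(outer_encode(data)))
    return outer_decode(inner_decode(received))

# A single residual bit error left by the inner decoder
# is corrected by the outer code.
flip_first = lambda bits: [bits[0] ^ 1] + bits[1:]
assert mnc_transmit([1, 0, 1, 1], flip_first) == [1, 0, 1, 1]
```

The key property shown here is the ordering: the extra outer code wraps the data before the inner cellular code, so residual errors that survive the inner decoder can still be cleaned up by the outer decoder.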

This chapter categorizes the eight WBAN QoS levels into three QoS sets (lower, medium, and higher). To achieve the chosen QoS, an adaptive external code is needed, with limited or strong error-correcting capability and with high, medium, or low coding rate and redundancy. Through these techniques, the MNC system adapts to varying propagation conditions as well as to the various QoS constraints. The work here therefore focuses on overcoming the different PHY errors that may occur unpredictably during transmission, changing the channel situation from time to time, such as additive white Gaussian noise (AWGN), Rayleigh fading, or burst noise.
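A minimal sketch of the level-to-priority mapping follows. The grouping boundaries below are illustrative assumptions: the chapter fixes only the three classes and their acceptable BERs (read here as 10^-3, 10^-5, and 10^-7), not which of the eight WBAN levels fall into each class.

```python
# Acceptable decoded-BER targets for the three MNC priority classes.
BER_TARGET = {"low": 1e-3, "medium": 1e-5, "high": 1e-7}

def mnc_priority(qos_level):
    # Hypothetical partition of the eight WBAN QoS levels (0..7)
    # into the three MNC priority classes.
    if not 0 <= qos_level <= 7:
        raise ValueError("WBAN defines QoS levels 0..7")
    if qos_level <= 2:
        return "low"
    if qos_level <= 4:
        return "medium"
    return "high"
```

The adaptive encoder selection then keys off `mnc_priority` together with the observed channel condition (normal or worse), giving the six encoder/decoder pairs described above.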

#### 2. The dependable medical network channel configurations

The "Medical Network Channel (MNC)" system is a new system adopted in this chapter, which serves to transmit medical data robustly from WBANs through the cellular standard networks. It is based mainly on error control coding techniques that ensure the dependability required for such medical data. The idea of concatenated codes is used to connect the WBANs with the cellular networks. The purpose is to achieve reliable and dependable medical data transmission through the readily existing cellular network.



Figure 1 shows the whole "Medical Network Channel (MNC)" system, which is the core of this chapter. The different medical data QoS levels coming from the WBANs were considered in the design phase, as were the different assumed channel conditions. The structural design of the proposal, using concatenated code techniques for the different QoS levels of WBANs, is described in Figure 2. The inner code of the MNC structure is introduced in Table 1. UMTS and LTE provide both error detection and error correction in their channel coding schemes. Here, the inner channel of the MNC system is assumed to use the uplink UMTS Common Packet Channel (CPCH), which employs a rate-1/2 convolution code. Similarly, the downlink UMTS channel is assumed to use the Forward Access Channel (FACH) with a rate-1/3 convolution code, and LTE is assumed to use the Broadcast Channel (BCH), also with a rate-1/3 convolution code.

The technical parameters of the extra channel are detailed here: an extra outer convolution encoder is concatenated with the inner cellular-standard channel codes. Among all FEC codes, convolution codes have great advantages for continuous data streams and can control the performance through only two parameters, the code rate R and the constraint length K. Convolution codes also offer stronger error correction than block codes and lower complexity than turbo codes, and switching between soft- and hard-decision decoding algorithms easily changes the performance. Since the extra channel is the key to high performance in the proposed MNC system, the extra code chosen here is a convolution code. The outer channel is the existing WBAN channel, which uses a BCH code as its main error-correcting code. The MNC system is evaluated with and without the end-to-end connection of the WBAN codes. The assumption is that the medical data coming from the WBANs at a transmission rate of 75.9 Kb/s enter the extra outer channel, which is the only optimized channel in the MNC system, and then enter the inner cellular channels at a data rate below the channel capacities.
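As an illustration of how R and K alone determine a convolution code's behavior, here is a minimal rate-1/2, K = 3 encoder with the textbook generators (7, 5) in octal; it is not one of the UMTS/LTE generators used by the MNC.

```python
def conv_encode(bits, gens=(0b111, 0b101), K=3):
    # Rate-1/len(gens) convolutional encoder. Each generator polynomial
    # taps the K most recent input bits (newest bit in the LSB of state).
    state, out = 0, []
    for b in bits + [0] * (K - 1):          # K-1 zero tail bits flush the encoder
        state = ((state << 1) | b) & ((1 << K) - 1)
        for g in gens:
            out.append(bin(state & g).count("1") % 2)
    return out

# Impulse response of the (7,5) code: 11 10 11.
assert conv_encode([1]) == [1, 1, 1, 0, 1, 1]
```

With the tail bits included, the output length is len(gens) * (len(bits) + K - 1), which is exactly the form of the encoded-bit counts in Table 1 (e.g. Di = 2\*Ki + 16 for R = 1/2, K = 9, and Di = 3\*Ki + 24 for R = 1/3, K = 9).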

#### 3. Adaptive dependable system for WBAN medical data

The proposed "Medical Network Channel (MNC)" system is dependable: it ensures that the different QoS levels of medical data are transmitted within acceptable performance, such as BERs of 10<sup>-3</sup>, 10<sup>-5</sup>, and 10<sup>-7</sup> for the low, medium, and high QoS levels, at the highest feasible required bit energy to interference (Eb/No) values under the different assumed noise conditions. The WBAN has eight QoS levels. The QoS levels of the medical data are divided into three parts: lower-priority, medium-priority, and higher-priority QoS levels. Based on these priority levels, the proposed MNC system has been designed as shown in Figure 2.

Table 2 shows the error-correcting capabilities of the UL and DL inner channels for UMTS and LTE, with regard to the international error correction code standards.

The extra code selection criteria in the MNC system involve two main structural parts: the fixed parts, which belong to the cellular standard networks or the WBAN technology, and the changeable parts, which are external and are added to receive the medical data only.


#### Table 2.

The inner cellular network code capabilities.
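The Error (t) values tabulated in Table 2 are consistent with t = ⌊d_free/2⌋; note that the stricter guaranteed-correction bound t = ⌊(d_free − 1)/2⌋ is also common in the literature.

```python
def t_from_dfree(d_free):
    # Error-correcting capability as tabulated in Table 2: t = floor(d_free / 2).
    return d_free // 2

# Free distances of the 3G-UL, 3G-DL, and 4G inner codes from Table 2.
assert [t_from_dfree(d) for d in (12, 18, 15)] == [6, 9, 7]
```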

Figure 1.
Medical network channel codes via cellular networks.

Figure 2.
The medical network channel system for QoS of WBAN medical data.

#### Table 1.
The inner cellular network code techniques.

| TRCH type | Coding type | Coding rate R and constraint length K | Number of encoded bits |
|---|---|---|---|
| UMTS UL CPCH | Convolution | R = 1/2 & K = 9 | Di = 2\*Ki + 16 |
| UMTS DL FACH | Convolution | R = 1/3 & K = 9 | Di = 3\*Ki + 24 |
| LTE BCH | Convolution | R = 1/3 & K = 7 | Di = 3\*Ki + 18 |


The Adaptive Coding Techniques for Dependable Medical Network Channel
DOI: http://dx.doi.org/10.5772/intechopen.83615

The assumption in this chapter is that a WBAN chip installed in the mobile device carries the medical data via the cellular systems through the "Medical Network Channel (MNC)" system, to ensure the reliability required for the different sets of medical data. The extra code is adaptive: its parameters are selectable with regard to two main requirements, first, the kind of QoS of the medical data entering the extra code from the WBAN code, and second, the kind of channel conditions affecting the transmission in the PHY channels.

The goal of the "Medical Network Channel (MNC)" is achieved by designing the extra code with regard to the QoS, by analyzing the QoS needed by the WBAN medical data. Table 3 categorizes the QoS of the WBAN medical data into three sets, by priority level, in order to design the MNC system. The first set is the highest priority level, such as biological signals (ECG, EMG, and EEG); the second set is a medium priority level, such as medical measurements (temperature, blood pressure, and blood sugar); and the third set is the lowest priority level, such as data management, audio, and video.


#### Table 3.

| QoS data sets | Code | R & K | G | dfree | t | Sum Wd | Size |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Lowest QoS level | Outer | 1/2 & 8 | [247 371] | 10 | 5 | 10,970 | 126 bits/block |
| Medium QoS level | Outer | 1/3 & 8 | [225 331 367] | 16 | 8 | 425 | 189 bits/block |
| Highest QoS level | Outer | 1/4 & 8 | [235 275 313 357] | 22 | 11 | 169 | 252 bits/block |

Designing parameters of MNC adaptive codes related to QoS priority levels.

The "Medical Network Channel (MNC)" uses these three sets to design and optimize the MNC system. The first set (highest priority) is carried through a strong MNC design achieving 10<sup>-7</sup> BER, the second set (medium priority) through a design achieving 10<sup>-5</sup> BER, and the third set (lowest priority) through a design achieving 10<sup>-3</sup> BER, within the required Eb/No values.

In the "Medical Network Channel (MNC)" super PHY channel, the remaining errors from the inner cellular decoder determine the optimized technical parameters of the extra outer code, as shown in Table 3.


#### Table 4.

| Encoder stage | Lowest QoS level | Medium QoS level | Highest QoS level |
| --- | --- | --- | --- |
| I/P | 100 Kb/s [51-bit blocks] | 100 Kb/s [51-bit blocks] | 100 Kb/s [51-bit blocks] |
| Outer WBAN encoder | 63 bits/s | 63 bits/s | 63 bits/s |
| Adaptive extra channel encoder | (2,1,8), Dfree 10, T = 5: I/P 63 b/s, O/P 126 b/s | (3,1,8), Dfree 16, T = 8: I/P 63 b/s, O/P 189 b/s | (4,1,8), Dfree 22, T = 11: I/P 63 b/s, O/P 252 b/s |
| UMTS UL inner encoder (2,1,9), Dfree 12, T = 6 | I/P 126 b/s, O/P 252 b/s | I/P 189 b/s, O/P 378 b/s | I/P 252 b/s, O/P 504 b/s |
| UMTS DL inner encoder (3,1,9), Dfree 18, T = 9 | I/P 126 b/s, O/P 378 b/s | I/P 189 b/s, O/P 567 b/s | I/P 252 b/s, O/P 756 b/s |
| LTE UL inner encoder (3,1,7), Dfree 15, T = 7 | I/P 126 b/s, O/P 378 b/s | I/P 189 b/s, O/P 567 b/s | I/P 252 b/s, O/P 756 b/s |

All error-correcting capabilities for MNC-proposed system codes.

The system design that is detailed above has been adjusted for the different QoS levels of medical data. The technical parameters of the extra channel codes have been fixed for the "Medical Network Channel (MNC)." The capabilities have been determined for the AWGN channel and for Rayleigh fading with a distribution parameter equal to 0.55. In reality, however, the channel conditions may be better or worse than those assumed. Table 4 details all the "Medical Network Channel (MNC)" adaptive design parameters with regard to the capability of correcting the channel errors.

#### 4. Theoretical error-bound performance calculation key points

The error-bound probabilities are calculated for the inner, outer, and extra outer decoders separately. The code performance is then analyzed in terms of decoded BER. BER is normally calculated as a function of Eb/No, where Eb represents the average energy transmitted per information bit and No represents the single-sided power spectral density of the assumed AWGN channel.

The performance bounds are derived theoretically under AWGN, with and without the WBANs connected end to end to the proposed "Medical Network Channel (MNC)" system. The performance bounds are then derived theoretically under the Rayleigh fading channel without the end-to-end WBANs; this step serves to demonstrate the feasibility of the "Medical Network Channel (MNC)" system, to find the number of errors at the output of the inner cellular decoders, and to test the optimized extra channel code theoretically for the different QoS medical data levels under AWGN and Rayleigh fading channels.

Table 5 lists all the technical parameters used in the theoretical evaluations. The theoretical bound follows a number of steps to calculate the error probabilities for the adaptive "Medical Network Channel (MNC)" concatenated channel codes: first at the O/P of the inner cellular decoders, second at the O/P of the extra channel decoders (the three sets for the different QoS levels), and last at the O/P of the WBAN outer decoders. These numerical evaluations have been done for the two assumed inner cellular channel codes: UMTS and LTE.


#### Table 5.

| QoS data | Code | R & K | G | df | Cdfree | Sum of Wd |
| --- | --- | --- | --- | --- | --- | --- |
| UMTS-UL | Inner | 1/2 & 9 | [561 753] | 12 | 23!/(12! 11!) = 1,352,078 | ∑[33 281 2179 15035 105166] = 122,694 |
| UMTS-DL | Inner | 1/3 & 9 | [557 663 711] | 18 | 35!/(18! 17!) = 4.5376e+009 | ∑[11 32 195 564 1473] = 2275 |
| LTE | Inner | 1/3 & 7 | [133 171 165] | 15 | 29!/(15! 14!) = 77,558,760 | ∑[7 8 22 44 22 94 219] = 416 |
| Lowest QoS level | Outer | 1/2 & 8 | [247 371] | 10 | 19!/(10! 9!) = 92,378 | ∑[2 22 60 148 340 1008 2642 6748] = 10,970 |
| Medium QoS level | Outer | 1/3 & 8 | [225 331 367] | 16 | 31!/(16! 15!) = 300,540,195 | ∑[1 24 113 287] = 425 |
| Highest QoS level | Outer | 1/4 & 8 | [235 275 313 357] | 22 | 43!/(22! 21!) = 1.0520e+012 | 169 |

Error correcting code capabilities for MNC system.

The theoretical calculations for the error bound of the "Medical Network Channel (MNC)"-proposed system via AWGN proceed through several steps on the decoding side, as in Eqs. (1)–(11). The inner and extra channels use convolutional decoders that work using the Viterbi algorithm, and the outer WBAN channel uses a block code decoder. First of all, the UMTS inner decoder calculates the first inner probability of bit error Pbi bound as in Eqs. (1)–(4).

$$P\_{bi} \le \frac{1}{\text{bi}} \sum\_{di=0}^{\infty} \mathcal{W}\_{di} P\_{ei}(di) \tag{1}$$


where Pei(di) is the probability of confusing two sequences differing in distance di and positions of the inner cellular code, and can be calculated as in Eq. (2). Wdi is the weight spectrum, that is, the average number of bit errors associated with sequences of weight di; it is calculated for all codes that work in this "Medical Network Channel (MNC)" system as in Table 5; w(d), d ≥ df. The Wdi term can be evaluated using the transfer function of the convolutional code. Generally, for codes whose constraint length is greater than a few units (typically, ν ≥ 5), the calculation of the transfer function can prove to be complex; it is then preferable to determine the spectrum of the code, or at least the first terms of this spectrum, using an algorithm that explores the various paths of the lattice diagram [9, 10].

$$P\_{ei}(di) = Q\left(\sqrt{2\,di\,Ri\,^{E\_b}/\_{N\_0}}\right) \tag{2}$$

where di is the inner cellular code free distance and Ri is the inner cellular code rate, both shown in Table 5. The Q function is well known in information theory and can be calculated using the infinite integral in Eq. (3).

$$Q(\varkappa) \cong \int\_{\varkappa}^{\infty} \frac{1}{\sqrt{2\pi}} e^{-t^2/2} dt \tag{3}$$
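Eq. (3) is the standard Gaussian Q function. It can be evaluated without numerical integration through the complementary error function, Q(x) = ½ erfc(x/√2); a minimal sketch (the function name is illustrative):

```python
import math

def q_function(x: float) -> float:
    # Q(x) = integral from x to infinity of the standard normal pdf,
    # computed via the identity Q(x) = 0.5 * erfc(x / sqrt(2)).
    return 0.5 * math.erfc(x / math.sqrt(2.0))
```

For instance, q_function(0.0) equals 0.5, and the tail shrinks rapidly with x, which is what makes the union bounds below fall steeply with Eb/No.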

$$P\_{bi} \le \frac{1}{bi} \sum\_{di=0}^{\infty} W\_{di} \cdot Q\left(\sqrt{2\,di\,Ri\,^{E\_b}/\_{N\_0}}\right) \tag{4}$$

Generally speaking, the data stream coming from the cellular inner codes feeds the extra outer codes. The code performance of the extra outer code is a function of the cellular inner code. Second, the extra outer decoder calculates the second outer probability of bit error Pbo bound separately, as in Eq. (5)–Eq. (6), with the outer code parameters introduced in Table 5 for the three different QoS levels of WBAN medical data.

$$P\_{bo} \leq \frac{1}{bo} \sum\_{do=0}^{\infty} W\_{do} \cdot \mathbb{Q}\left(\sqrt{2 \, doRo \, ^{E\_b}/\_{N\_0}}\right) \tag{5}$$

$$P\_b \text{ with no WBAN} \le P\_{bi} \cdot P\_{bo} \tag{6}$$

The outer code performances of the MNC system can be calculated by Eq. (6) for lower, medium, and higher QoS classes of medical data depending on the parameters applied to the extra channel. The last step is introduced by calculating the final outer code performance of the system. The WBAN decoder (63, 51, 2) calculates the last probability bit error P bound using Eq. (7)–Eq. (10), which is a function of the extra outer code.

$$P\_b \le \sum\_{i=t+1}^{n} \binom{n}{i} P\_x^i (1 - P\_x)^{n-i}, \quad \text{where } \binom{n}{i} = \frac{n!}{(n-i)! \cdot i!} \tag{7}$$

$$P\_b \le \sum\_{i=2+1}^{63} \binom{63}{i} P\_\mathbf{x}^i (\mathbf{1} - P\_\mathbf{x})^{63-i} \tag{8}$$


$$P\_{\mathbf{x}} = \mathbb{Q}\left(\sqrt{2\mathbf{R}\,\mathbf{E}\_{\mathbf{b}}/\_{N\_0}}\right) = \mathbb{Q}\left(\sqrt{2 \times \mathbf{51}/\mathbf{63} \cdot \mathbf{E}\_{\mathbf{b}}/\_{N\_0}}\right) \tag{9}$$

By using Eq. (9) as a function of Eq. (6) to calculate the final bound, we can have Eq. (10).

$$P\_x \text{ with WBAN} = P\_b \text{ with no WBAN} \cdot P\_x \tag{10}$$

Then, by applying Eq. (10) in Eq. (7), we will have the final MNC system by adding WBAN code end to end for all QoS assumed and via AWGN channel in Eq. (11).

$$P\_b \text{ with } \text{WBAN} \le \sum\_{i=2+1}^{63} \binom{63}{i} P\_x^i \text{ with no } \text{WBAN} \left(1 - P\_x \text{ with no } \text{WBAN} \right)^{63-i} \tag{11}$$
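As a numerical illustration of Eqs. (7)–(11), the tail sum for the (63, 51, 2) WBAN block decoder can be evaluated directly; this is a sketch under the chapter's assumptions, with illustrative helper names and arbitrary Eb/No operating points:

```python
import math

def q(x: float) -> float:
    # Gaussian Q function, Q(x) = 0.5 * erfc(x / sqrt(2)), cf. Eq. (3).
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def wban_block_bound(ebno_linear: float, n: int = 63, k: int = 51, t: int = 2) -> float:
    # Channel bit-error probability from Eq. (9): Px = Q(sqrt(2 * (k/n) * Eb/No)).
    px = q(math.sqrt(2.0 * (k / n) * ebno_linear))
    # Tail bound from Eqs. (7)-(8): sum over i = t+1 .. n of C(n,i) Px^i (1-Px)^(n-i),
    # i.e. the probability that more than t = 2 channel errors hit one block.
    return sum(math.comb(n, i) * px**i * (1.0 - px) ** (n - i)
               for i in range(t + 1, n + 1))
```

The bound tightens quickly as Eb/No grows, which is the waterfall behavior the chapter's BER targets rely on.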

The theoretical calculations for the error bound of the MNC system via the Rayleigh fading channel follow the same steps as for the AWGN channel, without the end-to-end connection of the WBANs. For this part, the ρ<sub>j,i</sub> attenuations are independent random Rayleigh variables with the probability density in Eq. (12).

$$P(\rho\_{j,i}) \cong \frac{1}{\sigma\_{\rho}^2}\, \rho\_{j,i}\, e^{\left(-\frac{\rho\_{j,i}^2}{2\sigma\_{\rho}^2}\right)}\, \mathbb{1}\_{\rho\_{j,i} \ge 0} \tag{12}$$

where $\mathbb{1}\_{\rho\_{j,i} \ge 0}$ is the indicator of the set {ρ<sub>j,i</sub> ≥ 0}, which equals 1 if ρ<sub>j,i</sub> ≥ 0 and 0 otherwise. In all theoretical work on the MNC system, this Rayleigh variable has been estimated as 0.55, which is greater than 0, to evaluate the super PHY channel of the MNC system. First of all, the cellular decoder calculates the first inner probability of bit error Pbi bound as a function of the performance of BFSK modulation, as in Eq. (13).

$$P\_{bfsk} \cong \frac{1}{2} \left[ 1 - \sqrt{\frac{\overline{E\_b}/N\_o}{1 + \overline{E\_b}/N\_o}} \right] \tag{13}$$

$$P\_b \le \frac{w(df)}{b} C\_{2df-1}^{df} \left(\frac{1}{4R\,\,\overline{E\_b}/N\_o}\right)^{df} \tag{14}$$

where C can be calculated using the free distance dfreei of the inner cellular code by Eq. (15) and appeared in Table 5. Eb represents the average energy received per symbol of transmitted information, and it is calculated as in Eq. (16). Then, the inner cellular code performance can be calculated by Eq. (17).

$$\mathbf{C}\_{2df-1}^{df} = \frac{(2d-1)!}{d! \cdot (d-1)!} \tag{15}$$

$$\overline{E\_b} = E\left(\rho\_{j,i}^2\right) E\_b = 2\sigma\_\rho^2\, E\_b = 0.55\, E\_b \text{ assumed} \tag{16}$$

$$P\_{bi} \le P\_{bfsk} \cdot \frac{w(dfreei)}{bi}\, C\_{2dfreei-1}^{dfreei} \left(\frac{1}{4Ri\,\overline{E\_b}/N\_o}\right)^{dfreei} \tag{17}$$
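The combinatorial factor in Eq. (15) reproduces the Cdfree column of Table 5 exactly, and Eq. (16) is simply the scaling Ēb = 2σρ²·Eb = 0.55·Eb; both are easy to check (a sketch; the helper names are illustrative):

```python
import math

def c_term(df: int) -> int:
    # Eq. (15): C(2df-1, df) = (2df-1)! / (df! * (df-1)!)
    return math.comb(2 * df - 1, df)

# Cross-check against the Cdfree column of Table 5.
assert c_term(12) == 1_352_078      # UMTS-UL inner code, df = 12
assert c_term(15) == 77_558_760     # LTE inner code, df = 15
assert c_term(10) == 92_378         # lowest-QoS outer code, df = 10
assert c_term(16) == 300_540_195    # medium-QoS outer code, df = 16

def ebar(eb: float, sigma_rho_sq: float = 0.275) -> float:
    # Eq. (16): average received energy under Rayleigh fading,
    # E(rho^2) = 2 * sigma_rho^2 = 0.55 with the chapter's assumption.
    return 2.0 * sigma_rho_sq * eb
```

These values plug directly into the Rayleigh bounds of Eqs. (17)–(18).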

Generally speaking, the data stream coming from the cellular inner code feeds the extra channel code. The code performance of the extra outer code is a function of the inner cellular code. Second, the outer decoder calculates the second outer


probability bit errors Pbo bound separately as in Eq. (18) by the outer code parameters introduced in Table 5.

$$P\_{bo} \le P\_{bfsk} \cdot \frac{w(dfreeo)}{bo}\, C\_{2dfreeo-1}^{dfreeo} \left(\frac{1}{4Ro\,\overline{E\_b}/N\_o}\right)^{dfreeo} \tag{18}$$

Then, the extra outer code performances of super PHY channel MNC system under Rayleigh fading can be calculated by Eq. (19).

$$P\_b \le P\_{bi} \cdot P\_{bo} \tag{19}$$
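Under the chapter's Rayleigh assumption (Ēb = 0.55 Eb, Eq. (16)), the BFSK term of Eq. (13) and the product bound of Eq. (19) can be sketched as follows (function names are illustrative):

```python
import math

def p_bfsk(ebno_linear: float) -> float:
    # Eq. (13), with Eb-bar = 0.55 * Eb substituted per Eq. (16).
    x = 0.55 * ebno_linear
    return 0.5 * (1.0 - math.sqrt(x / (1.0 + x)))

def concatenated_bound(p_bi: float, p_bo: float) -> float:
    # Eq. (19): the concatenated (inner x extra-outer) bound is the
    # product of the stage bounds, so it is never worse than either alone.
    return p_bi * p_bo
```

For example, two stage bounds of 10^-2 and 10^-3 combine into an end-to-end bound of 10^-5, which is how the extra outer code buys the higher QoS levels their tighter BER targets.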


The theoretical performances have been calculated for the MNC system with different QoS levels by using the cellular standards as an inner code via AWGN and Rayleigh fading noisy channels.

The first case is via AWGN with the WBANs. In this case, where the inner codes work as a UMTS channel, there are two kinds of codes. Using the cellular parameters in Table 2, the error probability is given in Eq. (20) for the UL and in Eq. (21) for the DL.

$$P\_{bi}\,UL \le 122694\; Q\left(\sqrt{12\,^{E\_b}/\_{N\_0}}\right) \tag{20}$$

$$P\_{bi}DL \le 2275\,\mathrm{Q}\left(\sqrt{12\,\mathrm{^{E\_b}/\_{N\_0}}}\right) \tag{21}$$

In the second step in the O/P of the extra channel code, there are three targeting QoS levels. Therefore, the probability of the error can be calculated from Eq. (5) as in Eq. (22)–Eq. (24) for the different code sets.

$$P\_{bo} \, LQoS \le 10970 \, Q\left(\sqrt{10 \, \, ^{E\_b}/\_{N\_0}}\right) \tag{22}$$

$$P\_{bo} \, M \text{QoS} \le 425 \, \text{Q} \left( \sqrt{32 / 3 \, ^{E\_b}/\_{N\_0}} \right) \tag{23}$$

$$P\_{bo} \ H \mathsf{QoS} \leq \mathsf{169} \ \mathsf{Q}\left(\sqrt{\mathsf{11} \ {}^{E\_{b}}/\_{N\_{0}}}\right) \tag{24}$$
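Evaluating Eqs. (22)–(24) at a single operating point illustrates how the three adaptive outer codes grade the protection; the 4 dB point below is an arbitrary example, not a value from the chapter:

```python
import math

def q(x: float) -> float:
    # Gaussian Q function, cf. Eq. (3).
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def outer_bounds(ebno_db: float) -> dict:
    ebno = 10.0 ** (ebno_db / 10.0)
    return {
        # Q-function arguments are sqrt(2 * df * R * Eb/No) with the
        # Table 5 parameters: (10, 1/2), (16, 1/3), (22, 1/4).
        "LQoS": 10970 * q(math.sqrt(10.0 * ebno)),      # Eq. (22)
        "MQoS": 425 * q(math.sqrt(32.0 / 3.0 * ebno)),  # Eq. (23)
        "HQoS": 169 * q(math.sqrt(11.0 * ebno)),        # Eq. (24)
    }
```

At any reasonable Eb/No the ordering HQoS < MQoS < LQoS holds, matching the intent that the highest-priority medical data gets the strongest code.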

From here, the error probability for the "Medical Network Channel (MNC)" proposed system without end-to-end connection of WBANs can be calculated from Eq. (6) as six levels of error probability as in Eq. (25).

$$P\_b \text{ with no WBAN} \le P\_{bi}\,UL,DL \cdot P\_{bo}\,LQoS, MQoS, HQoS \tag{25}$$

The final steps here can be done when the WBANs are connected end to end through the system. Therefore, using Eq. (25) in Eq. (11), we can have the final error probability of the system.

$$P\_b \text{ with } \text{WBAN} \le \sum\_{i=2+1}^{63} \binom{63}{i} P\_x^i \text{ with no } \text{WBAN} \left(1 - P\_x \text{ with no } \text{WBAN} \right)^{63-i} \text{ (26)}$$

The second case is via AWGN with the WBANs. In this case, where the inner codes work as an LTE channel, using the LTE cellular parameters in Table 2, we obtain the probability of errors as in Eq. (27).

$$P\_{bi}\,LTE \le 416\; Q\left(\sqrt{10\,^{E\_b}/\_{N\_0}}\right) \tag{27}$$


In the second step in the O/P of the extra channel code, there are three targeting QoS levels. Therefore, the probability of the error can be calculated from Eq. (5) as in Eq. (22)–(24) for the different code sets. From here, the error probability for the MNC system without end-to-end connection of WBANs through the LTE can be calculated from Eq. (6) as six levels of error probability as in Eq. (28).

$$P\_b \text{ with no WBAN} \le P\_{bi}\,LTE \cdot P\_{bo}\,LQoS, MQoS, HQoS \tag{28}$$

The final step here can be done when the WBANs are connected end to end through the proposed system. Therefore, using Eq. (28) in Eq. (11), we can have the final error probability of the "Medical Network Channel (MNC)" system.

$$P\_b \text{ with WBAN-LTE} \le \sum\_{i=2+1}^{63} \binom{63}{i} P\_b^i \text{ with no WBAN} \left(1 - P\_b \text{ with no WBAN}\right)^{63-i} \tag{29}$$

The third case is via Rayleigh fading. In this case where the inner codes work as a UMTS channel, there are two kinds of codes: UL and DL. By using the cellular parameters, we will have the probability of errors as in Eq. (30) in the case of UL and Eq. (31) in the case of DL.

$$P\_{bi}\,UL \le P\_{bfsk} \cdot \frac{w(dfreei)}{bi}\, C\_{2dfreei-1}^{dfreei} \left(\frac{1}{4Ri\,\overline{E\_b}/N\_o}\right)^{dfreei} \tag{30}$$

$$P\_{bi}\,DL \le P\_{bfsk} \cdot \frac{w(dfreei)}{bi}\, C\_{2dfreei-1}^{dfreei} \left(\frac{1}{4Ri\,\overline{E\_b}/N\_o}\right)^{dfreei} \tag{31}$$

In the second step in the O/P of the extra channel code, there are three targeting QoS levels. Therefore, the probability of the error can be calculated from Eq. (18) as in Eq. (32)–(34).

$$P\_{bo}\,LQoS \le P\_{bfsk} \cdot \frac{w(dfreeo)}{bo}\, C\_{2dfreeo-1}^{dfreeo} \left(\frac{1}{4Ro\,\overline{E\_b}/N\_o}\right)^{dfreeo} \tag{32}$$

$$P\_{bo}\,MQoS \le P\_{bfsk} \cdot \frac{w(dfreeo)}{bo}\, C\_{2dfreeo-1}^{dfreeo} \left(\frac{1}{4Ro\,\overline{E\_b}/N\_o}\right)^{dfreeo} \tag{33}$$

$$P\_{bo}\,HQoS \le P\_{bfsk} \cdot \frac{w(dfreeo)}{bo}\, C\_{2dfreeo-1}^{dfreeo} \left(\frac{1}{4Ro\,\overline{E\_b}/N\_o}\right)^{dfreeo} \tag{34}$$

From here, the error probability for the "Medical Network Channel (MNC)" proposed system without end-to-end connection of WBANs could be calculated from Eq. (19) as three levels of error probability as in Eq. (35).

$$P\_b \text{ with no WBAN} \le P\_{bi}\,UL,DL \cdot P\_{bo}\,LQoS, MQoS, HQoS \tag{35}$$

The fourth case is via Rayleigh fading. In the case where the inner codes work as an LTE channel, when using the cellular parameters in Eq. (17), we will have the probability of errors as Eq. (36).

$$P\_{bi}\,LTE \le P\_{bfsk} \cdot \frac{w(dfreei)}{bi}\, C\_{2dfreei-1}^{dfreei} \left(\frac{1}{4Ri\,\overline{E\_b}/N\_o}\right)^{dfreei} \tag{36}$$

The probability of bit errors Pbo is bounded separately, as in Eq. (18), by the outer code parameters introduced in Table 5.

$$P\_{bo} \le P\_{bfsk} \cdot \frac{w(dfreeo)}{bo} C\_{2dfreeo-1}^{dfreeo} \left(\frac{1}{4Ro\,\overline{E\_b}/N\_o}\right)^{dfreeo} \tag{18}$$

Then, the extra outer code performance of the super PHY channel MNC system under Rayleigh fading can be calculated by Eq. (19).

$$P\_b \le P\_{bi} \cdot P\_{bo} \tag{19}$$

The theoretical performances have been calculated for the MNC system with different QoS levels by using the cellular standards as an inner code via AWGN and Rayleigh fading noisy channels.

The first case is via WBANs. In this case, where the inner codes work as a UMTS channel, there are two kinds of codes when using the cellular parameters in Table 2: one is the error probability as in Eq. (20) for the UL, and the other is the error probability as in Eq. (21) for the DL.

$$P\_{bi}\,\text{UL} \le 122694\, Q\left(\sqrt{12\,E\_b/N\_0}\right) \tag{20}$$

$$P\_{bi}\,\text{DL} \le 2275\, Q\left(\sqrt{12\,E\_b/N\_0}\right) \tag{21}$$

In the second step, at the O/P of the extra channel code, there are three targeted QoS levels. Therefore, the probability of error can be calculated from Eq. (5) as in Eq. (22)–Eq. (24) for the different code sets.

$$P\_{bo}\,\text{LQoS} \le 10970\, Q\left(\sqrt{10\,E\_b/N\_0}\right) \tag{22}$$

$$P\_{bo}\,\text{MQoS} \le 425\, Q\left(\sqrt{11\,E\_b/N\_0}\right) \tag{23}$$

$$P\_{bo}\,\text{HQoS} \le 169\, Q\left(\sqrt{10\,E\_b/N\_0}\right) \tag{24}$$

From here, the error probability for the "Medical Network Channel (MNC)" proposed system without end-to-end connection of WBANs can be calculated from Eq. (6) as six levels of error probability as in Eq. (25).

$$P\_b \,\text{with no WBAN} \le P\_{bi}\,\text{UL, DL} \cdot P\_{bo}\,\text{LQoS, MQoS, HQoS} \tag{25}$$

The final steps here can be done when the WBANs are connected end to end through the system. Therefore, using Eq. (25) in Eq. (11), we can have the final error probability of the system.

$$P\_b \,\text{with WBAN} \le \sum\_{i=2+1}^{63} \binom{63}{i} P\_{x \,\text{with no WBAN}}^{\,i} \left(1 - P\_{x \,\text{with no WBAN}}\right)^{63-i} \tag{26}$$

The second case is via WBANs. In this case, where the inner codes work as an LTE channel, when using the LTE cellular parameters in Table 2, we will have the probability of errors as in Eq. (27).

$$P\_{bi}\,\text{LTE} \le 416\, Q\left(\sqrt{(32/3)\,E\_b/N\_0}\right) \tag{27}$$
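The AWGN-case bounds and their end-to-end combination can likewise be sketched in a few lines of Python. The multiplier/Q-argument pairs below are read from Eqs. (20), (22), (25), and (26); given the source formatting, treat the exact pairings (and the t = 2, n = 63 block parameters in Eq. (26)) as assumptions.

```python
import math

def Q(x):
    """Gaussian tail function, Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def union_bound(coeff, arg, ebno_db):
    """Bounds of the form Pb <= coeff * Q(sqrt(arg * Eb/N0)),
    as in Eqs. (20)-(24) and (27)."""
    ebno = 10 ** (ebno_db / 10)
    return coeff * Q(math.sqrt(arg * ebno))

def with_wban(p, t=2, n=63):
    """Binomial tail of Eq. (26): probability of more than t bit errors
    in an n-bit block when each bit fails independently with probability p."""
    return sum(math.comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(t + 1, n + 1))

ebno_db = 3
p_ul = union_bound(122694, 12, ebno_db)     # Eq. (20), UMTS UL inner code
p_lqos = union_bound(10970, 10, ebno_db)    # Eq. (22), low-QoS outer code
p_no_wban = min(p_ul * p_lqos, 1.0)         # Eq. (25), inner x outer product
print(p_no_wban, with_wban(p_no_wban))      # Eq. (26), end-to-end WBAN case
```

Clamping the union-bound product to 1.0 before feeding it into the binomial tail keeps the sketch well defined at very low Eb/No, where union bounds can exceed 1.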

Then, from here, the error probability for the "Medical Network Channel (MNC)" system without end-to-end connection of WBANs can be calculated from Eq. (19) as three levels of error probability as in Eq. (37).

$$P\_b \,\text{with no WBAN} \le P\_{bi}\,\text{LTE} \cdot P\_{bo}\,\text{LQoS, MQoS, HQoS} \tag{37}$$

The Adaptive Coding Techniques for Dependable Medical Network Channel

DOI: http://dx.doi.org/10.5772/intechopen.83615

Finally, Table 6 shows the numerical evaluation of the "Medical Network Channel (MNC)" system for the different categories, with UMTS UL, UMTS DL, and LTE as the inner channel. Regarding the figure results, Figure 3 shows the theoretical performance of MNC via UMTS networks when the channel is affected by AWGN. Figure 4 shows the theoretical performance of MNC via UMTS networks when the channel is affected by Rayleigh fading. Figure 5 shows the theoretical performance of MNC via LTE networks when the channel is affected by AWGN. Finally, Figure 6 shows the theoretical performance of MNC via LTE networks when the channel is affected by Rayleigh fading.

**The results via AWGN**

| Eb/No | 0 dB | 1 dB | 2 dB | 3 dB | 4 dB | 5 dB |
|---|---|---|---|---|---|---|
| WBANs | 0.1763 | 0.1379 | 0.0933 | 0.0515 | 0.0215 | 0.0062 |
| LTE | 0.3256 | 0.0807 | 0.0143 | 0.0017 | 0.0001 | 0.0000 |
| Low-QoS | 2.7957 | 0.1717 | 0.0054 | 0.0001 | 0.0000 | 0.0000 |
| Medium-QoS | 0.0755 | 0.0042 | 0.0001 | 0.0000 | 0.0000 | 0.0000 |
| High-QoS | 0.0251 | 0.0014 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| Low-QoS-WBANs | 1.0000 | 0.0506 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| Medium-QoS-WBANs | 0.0127 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| High-QoS-WBANs | 0.5854 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |

**The results via Rayleigh fading of PDF 0.55**

| Eb/No | 0 dB | 1 dB | 2 dB | 3 dB | 4 dB | 5 dB |
|---|---|---|---|---|---|---|
| UMTS-UL | 4.9532 | 0.0001 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| Low-QoS | 1.9352 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| Medium-QoS | 9.0438 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| High-QoS | 4.5376 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |

#### Table 6.

Theoretical error bit performances for MNC system.

Figure 3. All priority results via UMTS under AWGN theoretically.

Figure 4. All priority results via UMTS under Rayleigh fading theoretically.

Figure 5. All priority results via LTE under AWGN theoretically.

Figure 6. All priority results via LTE under Rayleigh fading theoretically.

#### 5. Conclusions

The main purpose of the "Medical Network Channel (MNC)" system is to provide a reliable medical network channel over the cellular infrastructure networks through an end-to-end WBAN connection. This chapter has therefore introduced the establishment of the MNC, with error-control coding and decoding over existing infrastructure networks such as UMTS and LTE, both with and without an end-to-end connection of WBANs, considering medical data coming from different sources. The eight levels of medical QoS data have been analyzed and then grouped into three classes (low, medium, and high) covering all medical QoS data. The MNC system thus addresses the dependability issues, for the first time in this way, with regard to the QoS constraints of the different medical applications of WBANs. Although the adaptive extra outer code of the MNC is based on convolutional codes, the choice of the technical parameters differs from one case to another, depending on the targeted QoS and on the capability of the cellular standard itself, that is, on the error remaining at the O/P of the inner cellular code.

Although the current cellular standards have strong error detection and correction capability, they are designed for daily-life communication without considering medical data transmission, and in hard noisy channel situations that exceed their design capabilities the cellular network cannot perform well. Therefore, the MNC system has been introduced as a new approach that connects WBANs end to end via the cellular networks, providing a very low BER for the different assumed QoS levels of medical data so that they can be transmitted robustly, and achieving an improved Eb/No gap under all the assumed environment conditions in comparison with the conventional cellular system alone. The adaptive MNC system thus overcomes the weakness of cellular networks with regard to dependability and even provides better performance than the cellular network alone for the purpose of medical data transmission. These performances allow the MNC to transmit medical data with the highest possible level of the required dependability. With regard to achieving the different QoS requirements of WBANs, the results in Table 6 and the BER curves in Figures 3–6 cover all the study cases of the adaptive MNC system. Although the adaptive medical network channel introduced in this chapter operates through the cellular networks, any communication network standard can be made adaptive for medical data transmission by applying error-correcting techniques.

#### Acknowledgements

The first author would like to express thanks to the academic supervisor Prof. Ryuji Kohno for his help and guidance, and to all Kohno-Lab members at Yokohama National University. Correspondingly, the first author would like to express thanks to the dean and engineering members at Sudan International University.

#### Conflict of interest

There is no conflict of interest for this work from anyone.

#### Author details

Emtithal Ahmed Talha<sup>1</sup>\* and Ryuji Kohno<sup>2</sup>

1 Sudan International University, Khartoum, Sudan

2 Yokohama National University, Yokohama, Japan

\*Address all correspondence to: Emtithal-talha-xb@ynu.jp

© 2019 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### References

[1] IEEE. IEEE standard for local and metropolitan area networks–Part 15.6: Wireless Body Area Networks. IEEE Std 802.15.6-2012; 2012

[2] Yuce MR, Khan J. Wireless Body Area Networks: Technology, Implementation, and Applications. Pan Stanford: CRC Press; 2011

[3] Movassaghi S, Abolhasan M, Lipman J, Smith D, Jamalipour A. Wireless body area networks: A survey. IEEE Communications Surveys & Tutorials. 2014;16(3):1658-1686. Third

[4] Murtaza S. QoS Taxonomy towards wireless body area network solutions. International Journal of Application or Innovation in Engineering & Management (IJAIEM). 2013;2(4)

[5] ETSI. Universal Mobile Telecommunications System (UMTS); Multiplexing and Channel Coding (FDD). European Telecommunications Standards Institute, Technical Specification TS 125 212; 2011

[6] ETSI. Universal Mobile Telecommunications System (UMTS); Multiplexing and Channel Coding (TDD). European Telecommunications Standards Institute, Technical Specification TS 125 222; 2010

[7] ETSI. LTE; Evolved Universal Terrestrial Radio Access (E-UTRA); Multiplexing and Channel Coding. European Telecommunications Standards Institute, Technical Specification TS 136 212; 2013

[8] Xing J, Zhu Y. A Survey on body area network. In: 5th International Conference on Wireless Communications, Networking and Mobile Computing, 2009. WiCom '09. pp. 1-4; 2009

[9] Glavieux A. Channel Coding in Communication Networks: From Theory to Turbocodes. ISTE Ltd/Wiley; 2007

[10] Semenov S, Krouk E. Modulation and Coding Techniques in Wireless Communications. John Wiley & Sons; 2011


#### **Chapter 6**


## Combined Crosstalk Avoidance Code with Error Control Code for Detection and Correction of Random and Burst Errors

*Ashok Kumar Kummary, Perumal Dananjayan, Kalannagari Viswanath and Vanga Karunakar Reddy*

#### **Abstract**

Error correction codes are vital for detecting and correcting errors caused by various noise sources. As technology scales down, the effect of these noise sources grows. Coupling capacitance is one of the main constraints affecting the performance of on-chip interconnects: it introduces crosstalk on the on-chip interconnecting wires. An efficient error correction code is required to control single and multiple errors. By combining crosstalk avoidance with an error control code, reliable intercommunication is obtained in network-on-chip (NoC)-based system on chip (SoC). To reduce the power consumption of the error control codes, a bus-invert-based low-power code is integrated into the network interface of the NoC. The proposed work is designed and implemented with Xilinx 14.7, and the performance of the improved NoC is evaluated and compared with existing work. An 8×8 mesh-based NoC is simulated under various traffic patterns to analyze the energy dissipation and average data packet latency.

**Keywords:** NoC, CAC, ECC LPC, SoC, FPGA

#### **1. Introduction**

As technology scales, ever more circuits are integrated on-chip. Intercommunication among the on-chip devices is critically important because millions of devices are integrated in a system on chip (SoC). The traditional communication architectures of SoCs cannot provide high performance; network-on-chip (NoC) is therefore the new paradigm introduced [1]. Because of its parallelism, the NoC provides high performance in terms of scalability and flexibility even with millions of on-chip devices. Still, the NoC suffers from design parameters that affect its performance. As technology scales down, the performance of the NoC is mainly affected by coupling capacitance. The effect of crosstalk capacitance is larger horizontally than vertically, so crosstalk errors frequently occur on the on-chip interconnecting wires [2]. An efficient error correction code is required to control crosstalk errors, which may occur once or multiple times. Crosstalk avoidance codes (CAC) are popular for controlling errors on on-chip interconnects, thereby obtaining reliable communication.

#### **Figure 1.** *Trend of relative delay with scaling of technology [3].*

The CAC reduces the worst-case switching capacitance in on-chip interconnects by avoiding the 010 ↔ 101 switching transitions of the data. This condition reduces the worst-case capacitance from (1 + 4*λ*)*CL* to (1 + 2*λ*)*CL*; hence, the energy dissipation is reduced from (1 + 4*λ*)*CL*α*Vdd*<sup>2</sup> to (1 + 2*λ*)*CL*α*Vdd*<sup>2</sup>, where *λ* is the ratio of coupling capacitance to total capacitance, *CL* is the self-capacitance of the interconnection wire, α is the transition factor, and *Vdd* is the supply voltage of the system. The energy dissipation is reduced by reducing the transition activity of the interconnection wires for the data packet. The behavior of relative delay with scaling of technology is shown in **Figure 1**.
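The forbidden-pattern idea behind a CAC can be sketched in a few lines. The check below is an illustrative simplification of the FPC/FTC rules discussed later, not the exact codec of this chapter: it flags codewords containing 010 or 101, since only a 010 ↔ 101 transition drives a middle wire against both of its neighbours and incurs the (1 + 4*λ*)*CL* worst case.

```python
def fpc_valid(word: str) -> bool:
    """Forbidden Pattern Condition (illustrative): accept only codewords
    that contain neither '010' nor '101' in adjacent bit positions."""
    return "010" not in word and "101" not in word

def worst_case_transition(prev: str, curr: str) -> bool:
    """True if some middle wire toggles while both neighbours toggle the
    opposite way (a 010 <-> 101 flip), the (1 + 4*lambda)*CL worst case."""
    return any(prev[i:i + 3] in ("010", "101")
               and curr[i:i + 3] in ("010", "101")
               and prev[i:i + 3] != curr[i:i + 3]
               for i in range(len(prev) - 2))

print(fpc_valid("001100"), worst_case_transition("0101", "1010"))
```

Because no FPC-valid codeword ever contains 010 or 101, two consecutive valid codewords can never realize this worst-case flip on any wire triple.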

The International Technology Roadmap for Semiconductors (ITRS-2011) predicted that, as the delay from gates is reduced, the delay from wires increases with technology scaling, because the interconnecting wires are affected more once the technology node falls below 45 nm [3]. Hence, the interconnecting wires affect the performance of NoC-based SoC in terms of delay as well as energy consumption, and even more so in the presence of errors.

Errors occur mainly in the interconnecting wires because of coupling capacitance; thereby a strong error correction code (ECC) is required to detect and correct them [4]. An error may occur once or multiple times, and multiple errors may occur together on the interconnection wires; hence, simple ECCs are not enough to detect and correct all errors. In the literature, different techniques have been proposed for the detection and correction of multiple errors. The parity check, dual rail (DR), modified dual rail (MDR), boundary shift code (BSC), and CAC are popular among the various techniques for the control of multiple errors on interconnection wires.

The remainder of the chapter is organized as follows: Section 2 reviews related work on error control methods and discusses their merits and demerits. Section 3 presents the proposed encoder and decoder of the combined CAC-ECC method. Section 4 gives the advanced encoding and decoding of the NoC router with the combined LPC-CAC-ECC scheme. Section 5 discusses the implementation of the proposed work in the NoC architecture, and finally, Section 6 concludes the chapter.

#### **2. Related work**

*Combined Crosstalk Avoidance Code with Error Control Code for Detection and Correction…*
*DOI: http://dx.doi.org/10.5772/intechopen.83561*

The detection and correction of errors present in the on-chip interconnects is critically important because errors lead to dropped or blocked data packets; hence the performance

of the NoC architecture is reduced. Extensive research addresses error detection and correction for on-chip interconnects. A parity code can detect a 1-bit error in a data packet; it is simple, but errors are not corrected. The Hamming code was proposed for the detection of 2-bit errors and the correction of 1-bit errors [5]. Various error control codes have been built on the Hamming code because it is easy to implement.
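For reference, the detect-2/correct-1 behaviour attributed to the Hamming code is obtained in practice from the extended Hamming (8,4) code, i.e. Hamming(7,4) plus an overall parity bit; the sketch below assumes that variant.

```python
# Extended Hamming (8,4) SEC-DED sketch: corrects any single-bit error and
# detects (without miscorrecting) any double-bit error. Positions 1..7 hold
# the Hamming codeword (parity at 1, 2, 4); position 0 is overall parity.

def encode(d):                     # d: 4 data bits [d3, d5, d6, d7]
    d3, d5, d6, d7 = d
    p1 = d3 ^ d5 ^ d7
    p2 = d3 ^ d6 ^ d7
    p4 = d5 ^ d6 ^ d7
    code = [p1, p2, d3, p4, d5, d6, d7]
    return [sum(code) % 2] + code  # prepend overall parity bit

def decode(c):
    overall = sum(c) % 2           # 1 if total parity is violated
    w = list(c[1:])
    syndrome = ((w[0] ^ w[2] ^ w[4] ^ w[6])
                + 2 * (w[1] ^ w[2] ^ w[5] ^ w[6])
                + 4 * (w[3] ^ w[4] ^ w[5] ^ w[6]))
    if syndrome and overall:       # single error at position `syndrome`
        w[syndrome - 1] ^= 1
        return [w[2], w[4], w[5], w[6]], "corrected"
    if syndrome and not overall:   # two errors: detectable, not correctable
        return None, "double error detected"
    return [w[2], w[4], w[5], w[6]], "ok"
```

A single flipped bit is located by the syndrome and corrected; two flips leave the overall parity even while the syndrome is nonzero, which is reported instead of being miscorrected.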

To reduce delay in on-chip interconnects, the forbidden overlap condition (FOC), forbidden transition condition (FTC), and forbidden pattern condition (FPC) crosstalk avoidance codes are used [6]. These crosstalk avoidance codes reduce the delay factor from (1 + 4λ)*CL* to (1 + 2λ)*CL* and also the energy dissipation. Still, area utilization increases because extra bits are added to the original data packet. To control multiple errors, duplicate add parity (DAP) was proposed, duplicating the data packet and then using a Hamming code for transmission of the duplicated data [7]. The DAP-based method reduces the average data packet latency, although power consumption increases because the number of interconnecting wires increases. To reduce the power consumption of crosstalk avoidance codes, Sridhara and Shanbhag [8] proposed combining a low-power code (LPC) with crosstalk avoidance codes.

The FOC, FTC, and FPC are used for detection and correction of crosstalk-induced errors; thereby the data packet latency is reduced. To reduce power consumption, a bus-invert-based technique is combined with the error control codes. The joint LPC-CAC code improves the performance of the NoC, but area utilization still increases. To obtain reliable on-chip communication, Single Error Correcting-Burst Error Detecting (SEC-BED) coding with Hybrid Automatic Repeat reQuest (HARQ) was proposed. A single random error is detected and corrected, whereas retransmission is requested when double random or burst errors are detected. The SEC-BED scheme detects errors efficiently, although delay, area, and power consumption increase because of the Ex-Or-based tree structure used in calculating the parity check bits and the go-back-N retransmission used in HARQ.
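The bus-invert technique mentioned above is simple enough to sketch: if a new word would toggle more than half of the wires relative to the current bus state, its complement is sent together with an asserted invert line, so at most half of the data wires (plus the invert line) ever switch. A minimal sketch:

```python
def bus_invert(prev_bus, data):
    """Bus-invert low-power coding: if sending `data` would toggle more
    than half of the wires relative to `prev_bus`, send its complement
    and raise the extra invert line, halving worst-case transitions."""
    toggles = sum(p != d for p, d in zip(prev_bus, data))
    if toggles > len(data) // 2:
        return [1 - d for d in data], 1   # inverted word + invert flag
    return list(data), 0

word, inv = bus_invert([0] * 8, [1, 1, 1, 1, 1, 1, 0, 0])  # 6 toggles > 4
```

The receiver recovers the original data by XORing each wire with the invert line, so the extra wire is the only overhead.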

To reduce the worst-case bus delay, the joint crosstalk avoidance with triple error correction (JTEC) code was proposed. In the encoding operation, the code word is formed with a Hamming code and then duplicated. Because the Hamming code is duplicated, the decoder detects 4-bit errors and corrects 3-bit errors. Because JTEC requires a large area, it was advanced to JTEC-simultaneous quadruple error detection (JTEC-SQED), which replaces the Hamming codes with Hsiao codes, thereby improving performance compared with JTEC. To increase error control capability, triplicate add parity (TAP) is used for encoding the data; the decoder compares the received copies with the sent parity bit to detect and correct errors on the interconnecting wires [9]. The TAP-based error control scheme efficiently detects and corrects 1-bit, 2-bit, and some 3-bit errors. Still, power consumption increases because the required number of interconnecting wires increases.
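The triplication idea behind TAP can be sketched as follows; the exact bit layout and parity placement here are illustrative assumptions, not the scheme of [9].

```python
def tap_encode(bits):
    """Triplicate-Add-Parity (TAP) sketch: send each data bit three times,
    followed by one overall parity bit (illustrative layout)."""
    parity = sum(bits) % 2
    return [b for b in bits for _ in range(3)] + [parity]

def tap_decode(code, n):
    """Majority-vote each triple, then check the overall parity to flag
    residual (e.g. uncorrectable 3-bit) error patterns."""
    triples, parity = code[:3 * n], code[-1]
    data = [1 if sum(triples[3 * i:3 * i + 3]) >= 2 else 0 for i in range(n)]
    ok = (sum(data) % 2) == parity
    return data, ok
```

The majority vote corrects any single flipped copy within a triple, while the trailing parity bit flags some of the multi-bit patterns the vote cannot fix.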

To reduce the power consumption of the TAP-based scheme, this chapter proposes a joint LPC-CAC-ECC scheme that detects and corrects 1-bit, 2-bit, and some 3-bit burst errors efficiently, while the power consumption of the codec module is reduced with the help of the bus-invert (BI) technique. The proposed work concentrates on controlling multiple errors and on improving the NoC communication architecture.

#### **3. Joint CAC-ECC**

The ability of an error control method is determined by the reliable communication it provides in the presence of errors. Embedding error control schemes reduces the performance of the system compared with a system without error control. The data packet latency and power consumption


The detection and correction of errors present in the on-chip interconnects are majorly important because it leads to drop or block data packet; hence the performance

2

techniques for control of multiple errors in interconnection wires.

**98**

**2. Related work**

affect more in the presence of error control schemes. To detect and correct multiple errors efficiently, the CAC-ECC methods are combined. In this chapter, the 1-bit and 2-bit errors due to crosstalk are detected and corrected and also some of the 3-bit errors.

A triplicate add parity (TAP)-based encoder is used to transfer the data from source to destination through the interconnection wires. **Figure 2** depicts the TAP-based encoder of the joint CAC-ECC scheme. The 32-bit data word is triplicated and encoded to the destination through the interconnection wires. In the advanced encoder, each data bit is triplicated and the overall parity of the 32-bit data is also calculated; hence, a total of 97 bits are encoded to the decoder section. By triplication, the errors in the interconnecting wires are efficiently controlled.
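As a concrete illustration, the TAP encoding can be sketched in a few lines of Python. This is an illustrative model, not the authors' hardware design; in particular, grouping the three copies word-by-word and placing the parity bit at the end of the 97-bit code word are assumptions.

```python
from functools import reduce

def tap_encode(data):
    """Triplicate add parity (TAP): send three copies (groups) of the data
    word plus one overall parity bit, so 32 bits become 3*32 + 1 = 97 bits."""
    assert all(bit in (0, 1) for bit in data)
    parity = reduce(lambda a, b: a ^ b, data)  # Ex-Or of all data bits (p0)
    return data * 3 + [parity]                 # group 1, group 2, group 3, p0

word = [1, 0, 1, 1] + [0] * 28                 # an example 32-bit data word
assert len(tap_encode(word)) == 97
```

The physical interleaving of the triplicated wires on the bus (which is what actually limits crosstalk) is abstracted away here; only the logical code word is modeled.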

The parity bit is measured with an Ex-Or operation and encoded to the decoder section to check the parity of the received data. By comparing the parities, the errors are detected and corrected efficiently. The decoder structure of the joint CAC-ECC scheme is shown in **Figure 3**.

The decoder divides the encoded data into three groups, and the parity of each group is calculated and compared with the sent parity bit. The encoded data are divided into three groups of 32-bit data with the help of a group separator, and the sent parity bit (p0) is compared with the parity of each group (p1, p2, p3). The 1-bit and 2-bit errors are detected when the parity of a group differs from the sent parity. **Table 1** depicts the different possibilities of errors at the data bits in the interconnection wires. The 1-bit errors are detected with the parity alone. The 2- and 3-bit errors are identified by considering the following instances:

Instance I: p1 = p2 and p2 ≠ p3.

To find the error-less group, the parity of group 1 is compared with the sent parity (p0). If p0 equals p1, then group 1 is considered error-less; otherwise, group 3 is considered error-less.

Instance II: p1 ≠ p2 and p2 = p3.

To find the error-less group, the parity of group 1 is compared with the sent parity. If p0 equals p1, then group 1 is considered error-free; otherwise, group 2 is error-free.

Instance III: p1 ≠ p2 and p2 ≠ p3.

To find the error-less group, the parity of group 1 is compared with the sent parity. If p0 = p1, then group 1 is considered error-free; otherwise, group 2 is error-free.

Instance IV: p1 = p2 and p2 = p3.

The error bits highlighted in **Table 1** are considered for this instance. Because even parities are not detected, the error patterns with even parities are divided into two categories. (i) A 1-bit error in each of the three groups is called a burst error; when p0 = p1 and p1 = p2, the burst error is detectable, otherwise the second category is considered. (ii) The original data of group 1, group 2, and group 3 are compared to select the error-free group: when group 1 equals group 2, group 2 is selected as error-free; otherwise group 2 and group 3 are compared, and when group 2 equals group 3, that group is considered error-free; otherwise group 1 is considered error-free.

The number of bits an ECC can detect and correct depends on the Hamming distance of the technique. The Hamming distance of the TAP-based scheme is four: the triplication of the data provides a Hamming distance of three, and one more comes from the added parity bit. If the Hamming distance of the original data packets is k, then the number of detectable error bits is k − 1 and the number of correctable error bits is ⌊(k − 1)/2⌋; hence, the CAC-ECC scheme detected three error bits and corrected two error bits.

**Table 1.** *Different possible error bits in interconnecting wires.* (Each column gives the number of error bits in groups 1–3; the bolded columns are the even-parity patterns discussed under Instance IV.)

| | 1-bit error | 2-bit errors | 3-bit errors |
|---|---|---|---|
| Group 1 (p1) | 1 0 0 | 1 0 1 **2 0 0** | **1** 0 1 0 3 0 0 1 2 2 |
| Group 2 (p2) | 0 1 0 | 1 1 0 **0 2 0** | **1** 1 0 2 0 3 0 2 1 0 |
| Group 3 (p3) | 0 0 1 | 0 1 1 **0 0 2** | **1** 2 2 1 0 0 3 0 0 1 |
| Decoding of bold patterns | — | Correct / Correct / Correct | Incorrect |

**Figure 2.** *TAP-based encoder of proposed scheme.*

*Combined Crosstalk Avoidance Code with Error Control Code for Detection and Correction… DOI: http://dx.doi.org/10.5772/intechopen.83561*

**Figure 3.** *Decoder structure of joint CAC-ECC scheme.*
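The decoding rules above (the four parity instances plus the group-comparison fallback) can be sketched as follows. This is an illustrative model under the same assumed bit layout as the encoder sketch; the hardware group separator and the exact tie-breaking details are abstracted.

```python
from functools import reduce

def parity(bits):
    return reduce(lambda a, b: a ^ b, bits)

def tap_decode(code):
    """Joint CAC-ECC decoder sketch: split the 97 received bits into three
    32-bit groups plus the sent parity p0, then pick an error-free group
    using the parity instances and, failing that, majority comparison."""
    g1, g2, g3, p0 = code[0:32], code[32:64], code[64:96], code[96]
    p1, p2, p3 = parity(g1), parity(g2), parity(g3)
    if p1 == p2 and p2 != p3:            # Instance I
        return g1 if p0 == p1 else g3
    if p1 != p2 and p2 == p3:            # Instance II
        return g1 if p0 == p1 else g2
    if p1 != p2 and p2 != p3:            # Instance III
        return g1 if p0 == p1 else g2
    # Instance IV: all group parities agree (even-parity or burst errors);
    # fall back to comparing the groups against each other.
    if g1 == g2:
        return g2
    if g2 == g3:
        return g2
    return g1

word = [1, 0, 1, 1] + [0] * 28
code = word * 3 + [parity(word)]
code[5] ^= 1                              # inject a 1-bit error in group 1
assert tap_decode(code) == word           # corrected via Instance II
```

A 2-bit error confined to one group leaves every parity unchanged (Instance IV) and is still corrected by the group comparison, matching the bold columns of Table 1; the 1/1/1 burst defeats the comparison, matching the "Incorrect" entry.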


Though the CAC-ECC scheme detects and corrects crosstalk errors efficiently, the power consumption and data packet latency increase greatly because more interconnecting wires are used in the advanced error control scheme. Because the original data are triplicated, the combined CAC-ECC scheme uses more wires; thereby the power consumption of the advanced method increases.

#### **4. Advanced NoC router**

The errors affect the performance of NoC-based SoC strongly because a large number of interconnection links is involved in parallel processing. The combined CAC-ECC scheme is embedded in the network interface (NI) of the router; thereby the errors are controlled and prevented from propagating to the rest of the network. The encoder of the error control scheme is embedded in the transmit NI (TX-NI), and the decoder is added to the receive NI (RX-NI); thereby the original data are transferred efficiently. Because the combined CAC-ECC is embedded in the NI, the NoC router presents high power consumption; hence, there is a need to reduce the power consumption in the NoC. By analyzing various error control schemes in the NI, the flexible unequal error control (FUEC) methodology was introduced and generalized to any kind of error control code [10].

#### **4.1 Combined LPC-CAC-ECC scheme**

To reduce the power consumption of the NoC architecture when error control schemes are used, a low-power code is added to the error control codes; thereby the power consumption is reduced while the errors are still corrected efficiently. The bus invert (BI) method is used to reduce the transition activity of the interconnecting wires; thereby the power consumption is reduced. The power consumption is given in Eq. (1).

$$P_d = \alpha C_L f_c V_{dd}^2 \tag{1}$$


where α is the transition activity, *CL* is the load capacitance, *fc* is the maximum clock frequency, and *Vdd* is the supply voltage. From Eq. (1), it is known that the dynamic power consumption is directly proportional to the transition activity.
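To make Eq. (1) concrete, a quick numeric check shows the linear dependence of dynamic power on transition activity; the load, clock, and supply values below are assumed, illustrative numbers, not figures from the chapter.

```python
def dynamic_power(alpha, c_load, f_clk, v_dd):
    """Eq. (1): P_d = alpha * C_L * f_c * V_dd^2, in watts."""
    return alpha * c_load * f_clk * v_dd ** 2

# Assumed values: 100 fF wire load, 1 GHz clock, 1.0 V supply.
p_half = dynamic_power(0.50, 100e-15, 1e9, 1.0)      # about 50 uW
p_quarter = dynamic_power(0.25, 100e-15, 1e9, 1.0)
assert p_quarter == p_half / 2   # halving the activity halves the power
```

This linearity in α is exactly why the low-power code below targets the transition activity rather than the supply voltage or clock frequency.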

The bus invert-based low-power code (LPC) is shown in **Figure 4**. The BI technique reduces the number of transitions by using the Hamming distance of the original data packet; the original data are inverted before encoding when the Hamming distance is more than half, otherwise they are sent to the encoder without inverting. A majority voter circuit combined with Ex-Or gates inverts the data when required. The majority voter circuit is composed of a number of full-adders, which increases the size of the circuit.
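A minimal sketch of the bus-invert decision follows. It assumes the classic formulation in which the Hamming distance is taken against the previously transmitted bus value and an extra invert line signals the inversion to the receiver; the chapter's full-adder majority voter is modeled here simply as a popcount comparison.

```python
def bus_invert(prev_bus, data):
    """Bus-invert low-power coding sketch: if more than half the wires would
    toggle relative to the previous bus value, send the inverted word and
    assert the invert line, capping transitions at n/2 (plus the flag wire)."""
    n = len(data)
    toggles = sum(p != d for p, d in zip(prev_bus, data))  # Hamming distance
    if toggles > n // 2:
        return [1 - b for b in data], 1   # inverted data, invert line = 1
    return list(data), 0

prev = [0] * 8
sent, inv = bus_invert(prev, [1, 1, 1, 1, 1, 0, 1, 1])   # 7 of 8 would toggle
assert inv == 1
assert sum(p != s for p, s in zip(prev, sent)) == 1      # only 1 data wire toggles
original = [1 - b for b in sent] if inv else sent        # receiver undoes it
```

The invert flag itself costs one transition, which is why the net saving only appears when the Hamming distance exceeds half the bus width.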

#### **4.2 HARQ**

The combined LPC-CAC-ECC scheme detects and corrects multiple crosstalk errors and also reduces the power consumption of the on-chip interconnects. The error control scheme does not correct some of the 3-bit errors; hence, hybrid automatic repeat request (HARQ) is enabled to retransfer the data from source to destination. The HARQ resends the data packets when the receiver asserts three continuous negative acknowledgments (NACK).
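The retransmission policy can be sketched as follows. `channel` and `decode` are hypothetical callables standing in for the link and the RX-NI decoder, and the NACK handling (retransmit on each NACK, give up after three in a row) is an interpretation of the description above rather than the authors' exact protocol.

```python
def harq_send(packet, channel, decode, max_nacks=3):
    """HARQ sketch: retransmit on each NACK; after three consecutive NACKs
    the link is reported as failed."""
    nacks = 0
    while True:
        ok, data = decode(channel(packet))
        if ok:
            return data            # ACK: decoding succeeded
        nacks += 1                 # NACK: uncorrectable error, retransmit
        if nacks == max_nacks:
            raise RuntimeError("three consecutive NACKs on the link")

# Demo with a hypothetical channel that corrupts the first two attempts.
attempts = {"n": 0}
def channel(p):
    attempts["n"] += 1
    return p if attempts["n"] >= 3 else None
decode = lambda r: (r is not None, r)
assert harq_send([1, 0, 1], channel, decode) == [1, 0, 1]
```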


**Figure 4.** *Bus invert method of low-power code.*


The advanced error control scheme is embedded in the NI of the NoC, and the codec module is responsible for encoding and decoding the data without errors. To improve the data transfer speed of the NoC, each router is provided with an extra PE; thereby the number of routers required to complete a data transfer is reduced. To control arbitration among the ports of the router, advanced scheduling algorithms are used in the arbiter. The selected port transfers the data to the output through the crossbar switch. To avoid deadlock errors, each port of the router is composed of a buffer memory and its controller. Store-and-forward (SF) packet switching and a minimal routing algorithm are used to improve the performance of the mesh-based NoC architecture [11].

#### **5. Implementation**

The advanced error control scheme is designed in Xilinx ISE 14.7 and implemented on a Virtex-6 Field Programmable Gate Array (FPGA) target device. The simulation and synthesis are demonstrated for each module of the NoC. The performance of the NoC is evaluated in terms of area utilization (occupied slices, LUT-FF pairs, and bonded IOBs), latency (delay), and power consumption. **Table 2** shows the performance of the encoder and decoder of the joint CAC-ECC scheme. The area utilization and delay of the codec module increase linearly with the data width because of the number of interconnection wires in the encoder and the number of cycles required to detect and correct errors in the decoder.

The data transfer speed of the encoder is higher than that of the decoder because of the number of rounds the decoder requires to detect and correct errors. The required cycles increase further with the number of error bits and with higher data widths. **Tables 3** and **4** show the performance of the NoC router with the CAC-ECC scheme and the joint LPC-CAC-ECC scheme. From **Table 3**, it is inferred that the data transfer speed of the NoC in the presence of soft errors decreases as the data width increases, because more interconnecting wires are required for the encoder of the CAC-ECC scheme and more cycles are required in the decoder to detect and correct the soft errors in the on-chip interconnecting wires. Hence, the power consumption is large when the data width is large.

From **Table 4**, it is clear that the low-power code reduces the total power consumption even in the presence of error control schemes. It is observed that the data transfer speed of the NoC stays the same, while the power consumption reduction grows from small to large as the data width increases. Still, the area utilization increases in the joint LPC-CAC-ECC scheme because of the combinational circuits required by the BI method to reduce the power consumption.

As the BI code works on the Hamming distance of the original data, the area utilization increases. Still, the power consumption is reduced when the Hamming distance of the original data is less than half; hence, the performance of the NoC is improved.

**Table 5** shows the comparison of the proposed error control scheme with recent schemes for a 32-bit data width. From **Table 5**, it is observed that the proposed method shows better results than the existing error control methods. The comparison considers various parameters such as the number of wires used, the number of detected and corrected error bits, the swing voltage of the interconnect, the delay for detection and correction, and the power consumption. Among all methods, CADEC provides better power results than the proposed work; still, the detection and correction of CADEC are limited to 2-bit errors. The power consumption of the proposed work is improved by 11% over JTEC.


**Table 5.** *Comparison of advanced error control scheme with recent work.*

| S. no. | Coding scheme | Data width | Number of wires used | Error detection | Error correction | Link swing voltage (V) | Delay | Power consumption (μW) |
|---|---|---|---|---|---|---|---|---|
| 1 | Hamming | 32 | 38 | Double | Single | 1.02 | 1 + 4λ | 49.30 |
| 2 | Hsiao SEC-DED [12] | 32 | 39 | Double | Single | 1.02 | 1 + 4λ | 51.60 |
| 3 | DAP [13] | 32 | 65 | Double | Single | 1.02 | 1 + 2λ | 16.22 |
| 4 | CADEC [7] | 32 | 77 | Random and burst error of two | 1-bit and 2-bit errors | 0.89 | 1 + 2λ | 26.77 |
| 5 | JTEC [14] | 32 | 77 | Random and burst error of three | 1-bit and 2-bit errors | 0.81 | 1 + 2λ | 39.49 |
| 6 | Joint LPC-CAC-ECC | 32 | 97 | Random and burst error of three | 1-bit, 2-bit, and some 3-bit errors | 0.61 | 1 + 2λ | 34.86 |

**Table 2.** *Area utilization and delay of codec module of CAC-ECC scheme.*

| Family | Occupied slices (Enc/Dec) | Slice LUTs (Enc/Dec) | Bonded IOBs (Enc/Dec) | Delay, ns (Enc/Dec) |
|---|---|---|---|---|
| 8-bit | 1 / 15 | 2 / 39 | 35 / 36 | 1.01 / 2.82 |
| 16-bit | 3 / 24 | 2 / 42 | 67 / 68 | 1.37 / 3.15 |
| 32-bit | 7 / 49 | 7 / 79 | 131 / 132 | 1.53 / 3.20 |

**Table 3.** *Performance of NoC router with CAC-ECC scheme.*

| Family | Slice registers | Slice LUTs | Fully used LUT-FF pairs | Latency (ns) | Power consumption (mW) |
|---|---|---|---|---|---|
| 8-bit | 814 | 566 | 375 | 3.70 | 9.56 |
| 16-bit | 952 | 636 | 453 | 5.52 | 47.42 |
| 32-bit | 1858 | 1485 | 676 | 6.06 | 89.55 |

**Table 4.** *Performance of NoC router with joint LPC-CAC-ECC method.*

| Family | Slice registers | Slice LUTs | Fully used LUT-FF pairs | Latency (ns) | Power consumption (mW) |
|---|---|---|---|---|---|
| 8-bit | 823 | 635 | 440 | 3.70 | 9.40 |
| 16-bit | 996 | 764 | 518 | 5.52 | 46.90 |
| 32-bit | 1926 | 1599 | 1014 | 6.06 | 81.64 |

**Figure 5.** *Simulation results of data packet latency (a) and energy dissipation (b) of advanced NoC with others.*

To analyze the data packet latency and energy dissipation, the advanced NoC architecture is simulated 32 times under uniform-random traffic in Riviera-PRO (Windows version). Each simulation experiment showed lower latency and lower energy dissipation. The 8×8 mesh-based NoC is simulated and compared with a recent NoC, that is, JTEC [14], and with an uncoded NoC. From **Figure 5**, it is clear that the advanced NoC has lower data packet latency than JTEC and higher than the uncoded NoC, because the advanced router transfers the data with the joint CAC-ECC scheme only when errors occur; otherwise it transfers without the error control scheme. The energy dissipation of the advanced NoC is lower than both existing works because the BI-based LPC is utilized in the router when the error control code is embedded in the NI.

#### **6. Conclusion**

The scaling of technology introduces a number of errors in on-chip interconnects. Crosstalk errors majorly affect the performance of the NoC communication architecture due to the coupling capacitance between the interconnecting wires. This chapter discussed a number of errors and their control schemes. To control multiple errors, the joint CAC-ECC scheme is embedded in the NI; hence the errors are controlled and prevented from propagating to the rest of the network. As the error control scheme presents more power consumption, the BI-based low-power code is used to reduce the power consumption in the presence of the error control scheme. The performance of the advanced NoC is simulated and compared with recent work; an 11% improvement is shown when compared with JTEC. To analyze the data packet latency and energy dissipation, the 8×8 mesh-based NoC architecture is simulated and compared with recent work; the advanced NoC architecture shows better results than the recent NoC.

#### **Conflict of interest**

The authors declare that this article has no conflict of interest.

#### **Author details**

Ashok Kumar Kummary<sup>1</sup>\*, Perumal Dananjayan<sup>2</sup>, Kalannagari Viswanath<sup>3</sup> and Vanga Karunakar Reddy<sup>1</sup>

1 Matrusri Engineering College, Hyderabad, Telangana, India

2 Pondicherry Engineering College, Puducherry, India

3 R.L. Jalappa Institute of Technology, Bangalore, India

\*Address all correspondence to: kashok483@gmail.com

© 2019 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/ by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

**107**

*Combined Crosstalk Avoidance Code with Error Control Code for Detection and Correction…*

[9] Maheswari M, Seetharaman G. Design of a novel error correction coding with crosstalk avoidance for reliable on-chip interconnection link. International Journal of Computer Applications in Technology. 2014;**49**(1):80-88. DOI: 10.1504/

[10] Gracia-Morán J, Saiz-Adalid LJ, Gil-Tomás D, Gil-Vicente PJ. Improving error correction codes for multiplecell upsets in space applications. IEEE Transactions on Very Large Scale Integration (VLSI) Systems. 2018;**26**:2132-2142. DOI: 10.1109/

IJCAT.2014.059097

TVLSI.2018.2837220

v7i3.12.15864

DFT.2006.22

[11] Kumar K A, Dananjayan P.


*Coding Theory*

*Combined Crosstalk Avoidance Code with Error Control Code for Detection and Correction… DOI: http://dx.doi.org/10.5772/intechopen.83561*

To analyze the data packet latency and energy dissipation, the advanced NoC architecture is simulated 32 times under uniform-random traffic in the Windows version of Riviera-PRO. Every simulation run showed lower latency and lower energy dissipation. The 8×8 mesh-based NoC is simulated and compared with a recent NoC, that is, JTC [14], and also with an uncoded NoC. From **Figure 5**, it is clear that the advanced NoC has lower data packet latency than JTC, though higher than the uncoded NoC, because the advanced router transfers the data with the joint CAC-ECC scheme when errors occur; otherwise it transfers without the error control scheme. The energy dissipation of the advanced NoC is lower than that of both existing works because the BI-based LPC is utilized in the router while the error control code is embedded in the NI.

**6. Conclusion**

The scaling of technology introduces a number of errors in on-chip interconnects. Crosstalk errors in particular degrade the performance of the NoC communication architecture because of the coupling capacitance between the interconnecting wires. This chapter discussed these errors and their control schemes. To control multiple errors, the joint CAC-ECC scheme is embedded in the NI; the errors are thereby controlled and prevented from propagating into the remaining network. Since the error control scheme increases power consumption, the BI-based low-power coding method is used to reduce it. The performance of the advanced NoC is simulated and compared with recent work, showing an 11% improvement over JTC. To analyze data packet latency and energy dissipation, the 8×8 mesh-based NoC architecture is simulated and compared with recent work; the advanced NoC architecture shows better results than the recent NoC.

**Conflict of interest**

It is declared that this article has no "conflict of interest."

**Author details**

Ashok Kumar Kummary1\*, Perumal Dananjayan2, Kalannagari Viswanath3 and Vanga Karunakar Reddy1

1 Matrusri Engineering College, Hyderabad, Telangana, India

2 Pondicherry Engineering College, Puducherry, India

3 R.L. Jalappa Institute of Technology, Bangalore, India

\*Address all correspondence to: kashok483@gmail.com

© 2019 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### **References**

[1] Benini L, De Micheli G. Networks on chips: A new SoC paradigm. Computer-IEEE Computer Society. 2002;**35**:70-78. DOI: 10.1109/2.976921

[2] Patel KN, Markov IL. Error correction and crosstalk avoidance in DSM busses. In: Proceedings of the 2003 International Workshop on System-Level Interconnect Prediction; 2003 Apr 5; ACM. pp. 9-14. DOI: 10.1145/639929.639933

[3] The International Technology Roadmap for Semiconductors [Internet]. 2011. Available from: http://www.itrs2.net/2013-itrs.html

[4] Fu B, Ampadu P. Error Control for Network-on-Chip Links. New York: Springer; 2011. DOI: 10.1007/978-1-4419-9313-7

[5] Hamming RW. Error detecting and error correcting codes. Bell System Technical Journal. 1950;**29**(2):147-160. DOI: 10.1002/j.1538-7305.1950.tb00463.x

[6] Pande PP, Zhu H, Ganguly A, Grecu C. Energy reduction through crosstalk avoidance coding in NoC paradigm. In: 9th EUROMICRO Conference on Digital System Design: Architectures, Methods and Tools (DSD 2006); 2006 Sep; IEEE. pp. 689-695. DOI: 10.1109/DSD.2006.49

[7] Ganguly A, Pande PP, Belzer B, Grecu C. Design of low power & reliable networks on chip through joint crosstalk avoidance and multiple error correction coding. Journal of Electronic Testing. 2008;**24**(1-3):67-81. DOI: 10.1007/s10836-007-5035-1

[8] Sridhara SR, Shanbhag NR. Coding for system-on-chip networks: A unified framework. IEEE Transactions on Very Large Scale Integration (VLSI) Systems. 2005;**13**(6):655-667. DOI: 10.1109/TVLSI.2005.848816

[9] Maheswari M, Seetharaman G. Design of a novel error correction coding with crosstalk avoidance for reliable on-chip interconnection link. International Journal of Computer Applications in Technology. 2014;**49**(1):80-88. DOI: 10.1504/IJCAT.2014.059097

[10] Gracia-Morán J, Saiz-Adalid LJ, Gil-Tomás D, Gil-Vicente PJ. Improving error correction codes for multiple-cell upsets in space applications. IEEE Transactions on Very Large Scale Integration (VLSI) Systems. 2018;**26**:2132-2142. DOI: 10.1109/TVLSI.2018.2837220

[11] Kumar KA, Dananjayan P. Reduction of power consumption using joint low power code with crosstalk avoidance code in case of crosstalk and random burst errors. International Journal of Engineering & Technology. 2018;**7**(3.12):62-68. DOI: 10.14419/ijet.v7i3.12.15864

[12] Fu B, Ampadu P. Burst error detection hybrid ARQ with crosstalk-delay reduction for reliable on-chip interconnects. In: 24th IEEE International Symposium on Defect and Fault Tolerance in VLSI Systems (DFT'09); 2009; IEEE. pp. 440-448. DOI: 10.1109/DFT.2009.45

[13] Pande PP, Ganguly A, Feero B, Belzer B, Grecu C. Design of low power & reliable networks on chip through joint crosstalk avoidance and forward error correction coding. In: 21st IEEE International Symposium on Defect and Fault Tolerance in VLSI Systems (DFT'06); 2006 Oct; IEEE. pp. 466-476. DOI: 10.1109/DFT.2006.22

[14] Ganguly A, Pande PP, Belzer B. Crosstalk-aware channel coding schemes for energy efficient and reliable NOC interconnects. IEEE Transactions on Very Large Scale Integration (VLSI) Systems. 2009;**17**(11):1626-1639. DOI: 10.1109/TVLSI.2008.2005722


Section 2

## Signal and Imaging Processing


#### Chapter 7

## Efficient Depth Estimation Using Sparse Stereo-Vision with Other Perception Techniques

Satyarth Praveen

#### Abstract

The stereo vision system is one of the popular computer vision techniques. The idea here is to use the parallax error to our advantage. A single scene is recorded from two different viewing angles, and depth is estimated from the measure of parallax error. This technique is more than a century old and has proven useful in many applications. The field has led many researchers and mathematicians to devise novel algorithms for accurate output from stereo systems. The system is particularly useful in robotics, as it provides robots with a 3D understanding of the scene by giving them estimated object depths. This chapter, along with a complete overview of the stereo system, discusses the efficient estimation of object depth. It stresses that, if coupled with other perception techniques, stereo depth estimation can be made far more efficient than current techniques allow. The idea revolves around the fact that stereo depth estimation is not necessary for all the pixels of the image. This fact opens up room for more complex and accurate depth estimation techniques applied to the fewer regions of interest in the image scene. Further details about this idea are discussed in the subtopics that follow.

Keywords: stereo vision, computer vision, disparity, depth estimation, camera, feature extraction

#### 1. Introduction

As researchers and innovators, we have often tried to take hints and ideas from nature and convert them into beautiful versions of technology that can be used for the betterment and advancement of the human race. The human eyes inspire yet another artificial visual system, stereo vision. The idea is to use the parallax error between two different views of the same object to estimate the distance of the object from the camera. The parallax error is inversely proportional to the depth, which reduces depth recovery to a single trivial equation; the estimation of the parallax error itself, known as the disparity between the pixels in the image frames, is a much more engaging, nontrivial task to handle. Depth estimation is possible only for the overlapping fields of view between the two views, as shown in Figure 1. A multi-view system is a much better, more reliable, and more robust setup for depth estimation of objects in the image compared to a monocular view. Details regarding this are discussed in the following subsections of the chapter.
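The "single trivial equation" mentioned above is the relation Z = f·B/d between depth Z, focal length f, baseline B, and disparity d. A minimal sketch follows; the focal length, baseline, and the function name are assumed example values, not taken from the chapter.

```python
# Sketch of the stereo depth relation: depth is inversely proportional
# to the disparity (the pixel-level parallax error).
# focal_px and baseline_m are assumed example values.

def depth_from_disparity(disparity_px: float,
                         focal_px: float = 700.0,
                         baseline_m: float = 0.12) -> float:
    """Z = f * B / d, with f and d in pixels and B in meters."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A nearby object produces a large disparity, a far object a small one.
near = depth_from_disparity(84.0)   # 700 * 0.12 / 84  -> 1.0 m
far = depth_from_disparity(4.2)     # 700 * 0.12 / 4.2 -> 20.0 m
print(near, far)
```

Because disparity appears in the denominator, depth resolution degrades quickly for distant objects, which is why far-range stereo needs long baselines.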


This instrument was first described in 1838 by Charles Wheatstone as a way to view relief pictures. He called it the stereoscope. Many other inventors and visionaries later used this concept to develop their own versions of stereoscopes; it even led to the establishment of the London Stereoscopic Company in 1854. The concept of depth estimation using multiple views was used even for estimating the distance of far-away astronomical objects in early times. The measurable depth is also directly proportional to the distance between the two cameras of the stereo vision system, also called the baseline. Hence the estimation of such vast distances demanded the longest possible baseline length. So the data was recorded from Earth on either side of the sun, making the baseline length equal to the diameter of the Earth's orbit around the sun, and then the depth of the astronomical objects was measured. This method is called stellar parallax or trigonometric parallax [1].

Considering other applications, robotic applications demand plenty of stereo vision systems for close object depth estimations. Be it humanoids, robots for folding clothes or picking objects, or even autonomous vehicles, stereo vision systems solve many complexities. On top of that, if the use case is for unidirectional short-range applications, good stereo systems can even eradicate the need for lidars or radars and hence aid toward much cost-cutting.

This chapter presents a new idea while using the existing techniques for depth estimation. The motivation is to make the depth estimation procedure much lighter and faster. In simple words, the intention is to avoid computing depth for pixels where it is not required. This is most useful when coupled with other perception techniques like object detection and semantic segmentation, which help rule out the unrequired pixels for which depth estimation can be avoided. The implications and findings of this are discussed later.
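The idea of restricting disparity computation to regions of interest can be sketched with a simple mask. Everything here is hypothetical: the image size, the boxes (standing in for an object detector's output), and the helper name are our own illustration, not the chapter's implementation.

```python
# Sketch: compute disparity only inside detected regions of interest.
# Boxes are hypothetical detector outputs in (x0, y0, x1, y1) form.

def roi_mask(height, width, boxes):
    """Boolean mask that is True only inside the given boxes."""
    mask = [[False] * width for _ in range(height)]
    for x0, y0, x1, y1 in boxes:
        for y in range(y0, y1):
            for x in range(x0, x1):
                mask[y][x] = True
    return mask

height, width = 48, 64
boxes = [(10, 10, 20, 20), (40, 5, 50, 15)]  # e.g., from object detection
mask = roi_mask(height, width, boxes)

selected = sum(row.count(True) for row in mask)
total = height * width
print(f"disparity computed for {selected}/{total} pixels "
      f"({100 * selected / total:.1f}%)")
```

Only the pixels where the mask is True would be fed to the (potentially expensive) disparity estimator, which is the source of the claimed speedup.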

The remaining sections of the chapter are primarily segregated into the Background and the Proposed Approach. The Background is arranged as follows: the overview of the architecture; camera calibration; the stereo matching problem, i.e., disparity; and depth estimation. The proposed approach then contains the algorithm, the results, and possible future works.


#### 2. Background

#### 2.1 The overview of the stereo architecture

This architecture presents a simple overview of how the stereo system works. As shown in Figure 2, cameras with similar properties are calibrated individually for their intrinsic calibration parameters (Subtopic 2.2.1). The two cameras are then mounted on a rigid stereo rig and calibrated together as a single system to get the extrinsic calibration parameters (Subtopic 2.2.2). The images collected from the two cameras are then undistorted to remove the camera distortion effects. From the extrinsic calibration parameters, we know the rotation and translation of one camera w.r.t. the other (right camera w.r.t. the left camera); we use this information to align the two images from the stereo system along the epipolar line (Subtopic 2.2.2). The image pair is then used for disparity estimation (Topic 2.3), the most nontrivial part of the process; the concept proposed in this chapter targets this substep. Perfect pixel matching is a hard problem in itself, and requiring real-time performance only makes it more complex. Once we have a pixel-to-pixel correspondence between the two images, i.e., the disparity for each pixel, we can directly compute the depth for each of them using a single formula. The following topics discuss the steps mentioned above in greater detail.

#### 2.2 Camera calibration


Figure 1. The stereo setup.


Camera calibration is a fundamental step in computer vision applications. There are two aspects of camera calibration, namely, intrinsic calibration and extrinsic calibration. Some of the experts whose algorithms are used for camera calibration are Zhang [2], Scaramuzza [3], Jean-Yves Bouguet [4], and Tsai [5].

#### 2.2.1 Intrinsic camera calibration

Intrinsic calibration, Step 2 in Figure 2, provides us with the internal properties of the camera, such as focal length in pixels, optical center in pixels, shear constant, aspect ratio, and distortion coefficients.

• The optical center is the position in the image that coincides with the principal axis of the camera setup.

Figure 2. The architecture overview.

• Shear is the slant orientation of the recorded image. This disorientation may occur during the digitization process of grabbing the image frame from the sensors. Based on today's technical advancements and complex systems, it is safe to assume that the recorded image has zero or very close to zero shear.

• Aspect ratio defines the shape of the pixels of the image sensor. For example, the National Television System Committee (NTSC) TV system defines nonsquare image pixels with an aspect ratio of 10:11. However, in most general cases, it is safe to assume that pixels are square and hence the aspect ratio is 1.

• Distortion coefficients are used to undistort the recorded image from the camera. The camera image is prone to pick up some distortions based on the build of the lenses and the camera system or based on the position of the object and the camera. The former is called optical distortion, and the latter is called perspective distortion. Distortion coefficients are used to undistort the optical distortions only. Undistorting the images ensures that the output image is not affected by any of the manufacturing defects in the camera, at least in the ideal case. There are three kinds of optical distortions:

  - Barrel distortion: the lines seem to be curving inward as they move away from the camera center.

  - Pincushion distortion: the lines seem to be curving outward as they move away from the camera center.

  - Mustache distortion: this is a mix of the two distortions and the toughest one to handle.

#### 2.2.2 Extrinsic camera calibration

While intrinsic calibration provides us with internal camera properties, extrinsic calibration provides us with external details such as the effective movement w.r.t. a reference point in the three-dimensional world coordinate system. These constants incorporate the movement of the camera frame in six degrees of freedom. Considering the axes shown in Figure 3, if the image plane lies in the X-Y plane and the camera is oriented along the Z-axis, the six degrees of freedom are translation along the X-axis, translation along the Y-axis, translation along the Z-axis, rotation about the X-axis (pitch), rotation about the Y-axis (yaw), and rotation about the Z-axis (roll).

Extrinsic calibration, Step 4 in Figure 2, is particularly crucial in the stereo camera setup because it gives the exact baseline distance between the two camera centers. The approximate baseline is decided before setting up the camera units, and this decision differs depending on the application of the stereo system. As the baseline length is directly proportional to the detectable object depth, a longer baseline increases the range of the system to measure larger distances, while a shorter baseline allows only short-range depth estimation. The downside to a larger baseline is the smaller overlap between the views of the two cameras. So although the system would have a greater range, it will only be for a smaller section of the view, whereas a stereo system with a smaller baseline would have a much larger overlapping view and hence would provide short-range distance estimation for a more extensive section of the view. Neither of the two systems can replace the other. Hence, keeping this significant difference in mind while choosing the correct baseline is essential.

Figure 3. Camera axes.
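The baseline trade-off can be made concrete with the depth relation Z = f·B/d: at a fixed smallest-resolvable disparity, the farthest measurable depth grows linearly with the baseline. The focal length, minimum disparity, and baselines below are assumed example values, not figures from the chapter.

```python
# Sketch of the baseline trade-off: with the same camera (focal length
# f, in pixels) and a fixed smallest usable disparity d_min, the maximum
# measurable depth Z = f * B / d_min grows with the baseline B.
# All numbers are assumed example values.

focal_px = 800.0
d_min_px = 1.0  # smallest disparity the matcher can reliably resolve

for baseline_m in (0.06, 0.12, 0.50):
    z_max = focal_px * baseline_m / d_min_px
    print(f"baseline {baseline_m:4.2f} m -> max usable depth ~ {z_max:6.1f} m")
```

The longer baselines buy range at the cost of a smaller overlapping field of view, exactly the trade-off described above.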

In the stereo camera system, one camera is the reference frame, and the other camera is calibrated w.r.t. the first camera. Hence, after the extrinsic calibration, if the two cameras are arranged along the X-axis, the baseline length information is returned as the translation along the same axis. Along with the rotation and translation, stereo calibration also updates the focal length of the overall stereo camera system. This focal length is common to both the cameras in the stereo system and is different from that of the individual focal lengths of the two cameras. The reason is that the two cameras now need to look at the joint portion of the scene; hence, choosing similar cameras, Step 1 in Figure 2, if not identical, can be an essential factor for a good stereo system. Dissimilar cameras significantly affect the image quality when using a common focal length. The camera with a more considerable difference between the old focal length and the new focal length gives a highly pixelated image. This difference in the image quality of the two cameras reflects in the later stage of disparity estimation. It makes the process of finding the corresponding pixels in the two images much harder; hence, it might lead to wrong disparity estimation or unnecessary noise.

Another use of the extrinsic parameters is image rectification, Step 6 in Figure 2. Computing disparity is not impossible without this step, but the problem statement becomes a lot easier if we rectify the output images of the stereo pair. Also, unrectified images are more prone to incorrect disparity estimation. In this step, we warp the output image of the second camera using the extrinsic parameters w.r.t. the reference camera. This warping ensures that the pixels belonging to the same objects in the two cameras lie along the same scan line in both images. So instead of the larger search space, i.e., the complete image, the search for disparity estimation can be restricted to a single row of the image. This scan line is called the epipolar line, and the plane that intersects with this epipolar line and the object point in 3D world coordinate is called the epipolar plane (see Figure 4). This process dramatically reduces the computations required by the disparity algorithm.
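The saving from restricting the correspondence search to the epipolar scan line can be illustrated with simple candidate counts; the image size and disparity bound here are assumed example values.

```python
# Sketch of why rectification matters: without it, a pixel's match could
# in principle lie anywhere in the other image; with rectified images the
# search collapses to one scan line, and in practice to a bounded
# disparity range on that line. Sizes are assumed example values.

width, height = 640, 480
max_disparity = 64

candidates_unrectified = width * height  # whole image per pixel
candidates_scan_line = width             # one epipolar scan line
candidates_bounded = max_disparity       # bounded range on that line

print(candidates_unrectified, candidates_scan_line, candidates_bounded)
print(f"reduction: {candidates_unrectified // candidates_bounded}x fewer "
      f"candidates per pixel")
```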

Figure 4. The epipolar plane. X is the object point in the world coordinates, x and x′ are the corresponding pixels in the two image planes, e and e′ are the epipoles of the two image planes, and O and O′ are the corresponding camera centers.

#### 2.3 Disparity/stereo matching

This section talks about the most nontrivial aspect of the entire process of depth estimation using stereo, i.e., computing the disparity map from the stereo image pair. If considering the raw image pair from the stereo, the entire image is the search space to find the corresponding matching pixel. Although we might be able to streamline the search space a little bit based on common sense, that will still not be comparable to searching a single row of the image. In an ideal case, the most robust system would be the one that can overlook all the image distortions, artifacts, and occlusion cases and give us a pixel-to-pixel disparity estimation by finding its perfect match in the corresponding image. [6–8] are some of the datasets that provide us with the ground truth disparity images along with the stereo image pair (see Figure 5). Researchers came up with different novel ideas and techniques involving custom calibration methods, high-end camera units, sensors, and better disparity estimation techniques to estimate sub-pixel disparities for highly accurate ground truth [9, 10]. While these methods are suitable to generate ground truths, real-time systems demand inexpensive solutions. Hence, in most of the cases, the applications do not require extremely accurate calibration but rely on fairly good camera calibration, inexpensive image rectification, and simple matching algorithms to get good enough disparity maps.

One of the significant elements of the stereo matching algorithms is the cost function that is used to evaluate the similarity. Some of the significant cost functions are:

The sum of squared difference (SSD)

$$C_{\text{SSD}}(d) = \sum_{(u,v) \in W_m(x,y)} \left[ I_L(u,v) - I_R(u-d,v) \right]^2 \tag{1}$$

Figure 5. Middlebury stereo dataset. Scene (left), ground truth disparity (right).

Efficient Depth Estimation Using Sparse Stereo-Vision with Other Perception Techniques DOI: http://dx.doi.org/10.5772/intechopen.86303

The sum of absolute difference (SAD)

$$C_{\text{SAD}}(d) = \sum_{(u,v) \in W_m(x,y)} \left| I_L(u,v) - I_R(u-d,v) \right| \tag{2}$$

Normalized cross-correlation (NCC)

$$\text{Normalized pixel}: \hat{I}(x,y) = \frac{I(x,y) - \overline{I}}{\left\lVert I - \overline{I} \right\rVert_{W_m(x,y)}} \tag{3}$$

$$C_{\text{NC}}(d) = \sum_{(u,v) \in W_m(x,y)} \hat{I}_L(u,v)\, \hat{I}_R(u-d,v) \tag{4}$$

The symbols used in Eqs. (1)–(4) are as follows:

$I_L$ — left image (first camera image)
$I_R$ — right image (second camera image)
$W_m$ — matching window
$d$ — pixel disparity
$I(u,v)$ — image pixel intensity at location $(u,v)$

Although these cost functions are decent choices for similarity measure, they are considerably affected by factors such as illumination differences and viewing angles. To minimize the effect these factors have on the output, the pixel patches used for similarity check can be normalized before using SSD or SAD similarity values. Some other approaches that help make the algorithm independent of such factors are rank transform and census transform. These transformations eliminate the sensitivity toward absolute intensity and outliers.
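To make the cost functions concrete, the following minimal Python sketch (our own illustration, not code from the chapter) evaluates the SAD cost of Eq. (2) over a square matching window and searches a single scan line for the disparity with the lowest cost. The function names and toy image layout (lists of lists of grayscale intensities) are our own choices.

```python
# Illustrative sketch: brute-force SAD block matching on grayscale images
# stored as lists of lists. For each pixel we slide a window along the same
# scan line of the right image and keep the disparity with the lowest SAD
# cost, as in Eq. (2).

def sad_cost(left, right, x, y, d, half):
    """Sum of absolute differences between the window centered at (x, y)
    in the left image and the window shifted left by d in the right image."""
    cost = 0
    for v in range(y - half, y + half + 1):
        for u in range(x - half, x + half + 1):
            cost += abs(left[v][u] - right[v][u - d])
    return cost

def disparity_at(left, right, x, y, max_d, half=1):
    """Return the disparity in [0, max_d] minimizing the SAD cost at (x, y)."""
    best_d, best_cost = 0, float("inf")
    for d in range(min(max_d, x - half) + 1):
        c = sad_cost(left, right, x, y, d, half)
        if c < best_cost:
            best_d, best_cost = d, c
    return best_d
```

A real implementation would run this for every pixel, use optimized array operations, and add the normalization or rank/census transforms mentioned above.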

Despite handling these sensitive cases, estimating a dense disparity output still takes considerable effort. Obtaining a "dense" disparity map with restricted computations is the major challenge when designing algorithms. The dominant factors affecting the similarity measure of the corresponding pixels are as follows:

• Photometric constraints (Lambertian/non-Lambertian surfaces)

Lambertian surfaces follow the property of Lambertian reflectance, i.e., they look the same to the observer irrespective of the viewing angle. An ideal "matte" surface is an excellent example of a Lambertian surface. If the surface in the scene does not follow this property, it might appear to be different regarding illuminance and brightness in the two camera views. This characteristic can lead to incorrect stereo matching and hence wrong disparity values.

• Noise in the two images

Noise can be present in the images as a result of low-quality electronic devices or shooting the images at higher ISO settings. Higher ISO settings make the sensor more sensitive to the light entering the camera. This setting can magnify the effect of unwanted light entering the camera sensor and is nothing but noise. This noise is most certainly different for the two cameras and hence again making disparity estimation harder.

• Pixels containing multiple surfaces

This issue occurs mainly for objects lying far away in the scene. Since the disparity of an object at a given distance is directly proportional to the baseline, stereo systems with smaller baselines face this issue even at moderate distances, whereas systems with larger baselines face it only at greater distances. Much like Johnson's criteria [11], we are somewhat helpless with this kind of problem; hence it is crucial to choose a stereo baseline suitable to one's use case.
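A quick back-of-the-envelope check, using the stereo relation d = f · B / z with a made-up focal length of 700 pixels (our own illustrative numbers, not from the chapter), shows how the baseline bounds the usable range:

```python
# Illustrative sketch: expected disparity d = f * B / z. A distant object can
# fall below one pixel of disparity with a short baseline, which is when
# multiple surfaces collapse into a single pixel.

def expected_disparity(f_px, baseline_m, depth_m):
    return f_px * baseline_m / depth_m

# 10 cm baseline, object 80 m away -> d = 0.875 px (no longer resolvable)
d_short = expected_disparity(700.0, 0.10, 80.0)
# 50 cm baseline, same object -> d = 4.375 px (comfortably measurable)
d_long = expected_disparity(700.0, 0.50, 80.0)
```

With the short baseline, the distant object's disparity drops below a single pixel, so its depth can no longer be resolved; the longer baseline keeps it above the matching resolution.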


• Occluded pixels

These are those pixels of the 3D scene that are visible in one frame and not visible in the other (see Figure 6). It is practically impossible to find the disparity of these pixels as no match exists for that pixel in the corresponding image. The disparities for these pixels are only estimated with the help of smart interpolation techniques or reasonable approximations.

• The surface texture of the 3D object

This property of the object is another factor leading to confused or false disparity estimation. Surfaces such as a blank wall, road, or sky have no useful texture, and hence it is impossible to compute their disparity based on simple block matching techniques. These kinds of use cases require the intelligence of global methods that consider the information presented in the entire image instead of just a single scan line (discussed later in the chapter).

• The uniqueness of the object in the scene

If the object in the scene is not unique, there is a good chance that the disparity computed is incorrect because the algorithm is vulnerable to matching with the wrong corresponding pixel. A broader view of the matching patch can help here up to a certain extent, but that comes with the additional cost of required computations.

• Synchronized image capture from the two cameras

The images captured from the two cameras must be taken at the same time, especially in moving-environment scenarios. In the case of continuous scene recording, the output from the two cameras can be synchronized at the software level, or the two cameras can be hardware-triggered for synchronized output images. While a hardware trigger gives perfectly synchronized output, software-level synchronization is much easier to achieve and is decently accurate.

Figure 6. Occlusion.


A few of these unfavorable aftereffects can be handled with post-processing of the disparity maps, but they can aid us only to a certain extent. Computing a dense disparity map in real time is still a nontrivial task. Some of the cases to keep in mind when working on a post-processing algorithm are as follows:

• Removal of spurious stereo matches


The median filter is an easy way to tackle this problem. However, it might fail for slightly larger spurious disparity speckles. Speckle filtering can also be done using other approaches, such as the removal of tiny blobs that are inconsistent with the background, which gives decent results. Though this removes most of the incorrect disparity values, it leaves the disparity maps with a lot of holes or blank values.
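As a small illustration of the median-filter idea (the function below is our own sketch, not from the chapter), a 3 × 3 median filter over the disparity map suppresses isolated spurious matches while leaving consistent regions intact:

```python
# Illustrative sketch: 3x3 median filtering of a disparity map, a cheap way
# to suppress isolated spurious matches. Border pixels are left unchanged.
from statistics import median

def median_filter3(disp):
    h, w = len(disp), len(disp[0])
    out = [row[:] for row in disp]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [disp[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = median(window)  # replace with neighborhood median
    return out
```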

• Filling of holes in the disparity map

Many factors lead to blank values in the disparity map. These holes are caused mainly due to occlusion or the removal of false disparity values. Occlusion can be detected using the left–right disparity consistency check, i.e., two disparity maps, each w.r.t. the first and second camera image can be obtained, and the disparity values of the corresponding pixels must be the same; the pixels that are left out are ideally the occluded pixels. These holes can be filled by surface fitting or distributing neighboring disparity estimates.
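The left–right consistency check described above can be sketched as follows; the function name and the use of None to mark holes are our own illustrative choices, not from the chapter:

```python
# Illustrative sketch: left-right disparity consistency check. disp_l is the
# disparity map with the left image as reference, disp_r the one with the
# right image as reference. A pixel is kept only if the two maps agree
# (within a tolerance); failures become holes (None), which typically
# correspond to occluded pixels.

def lr_consistency(disp_l, disp_r, tol=1):
    h, w = len(disp_l), len(disp_l[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d = disp_l[y][x]
            xr = x - d          # corresponding pixel in the right image
            if 0 <= xr < w and abs(disp_r[y][xr] - d) <= tol:
                out[y][x] = d   # consistent: keep the disparity
    return out
```

The holes left behind would then be filled by surface fitting or by distributing neighboring disparity estimates, as described above.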

• Sub-pixel estimation

Most of the algorithms give integer disparity values. However, such discrete values give discontinuous disparity maps and lead to a lot of information loss, particularly at greater distances. Common ways to handle this are gradient descent and curve fitting.
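One common curve-fitting variant fits a parabola through the matching costs at the winning integer disparity d and its two neighbors; the vertex of the parabola gives the sub-pixel offset. The sketch below is our own illustration (including the guard for the degenerate flat case), not code from the chapter:

```python
# Illustrative sketch: sub-pixel disparity refinement by fitting a parabola
# through the matching costs at d-1, d and d+1, where d is the integer
# disparity with the minimum cost. The vertex gives the sub-pixel offset.

def subpixel_refine(c_prev, c_min, c_next):
    """Offset in (-0.5, 0.5) to add to the integer disparity d, given the
    costs at d-1, d and d+1. Returns 0.0 for a degenerate (flat) triple."""
    denom = c_prev - 2.0 * c_min + c_next
    if denom <= 0:          # no strict minimum; keep the integer estimate
        return 0.0
    return 0.5 * (c_prev - c_next) / denom
```

For example, costs of 4, 1 and 2 at d−1, d and d+1 give an offset of +0.25, i.e., a refined disparity of d + 0.25.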

Having seen the cost functions and the challenges in computing the disparity, we can now move on to the algorithms used for its computation. Starting from a broad classification of the approaches, the following subsections discuss the most common techniques of disparity estimation.

#### 2.3.1 Local stereo matching methods

Local methods tend to look at only a small patch of the image, i.e., only a small group of pixels around the selected pixel is considered. This local approach lacks an overall understanding of the scene but is very efficient and less computationally expensive compared to global methods. Not having a complete understanding of the whole image leads to more erroneous disparity maps, as the method is susceptible to the local ambiguities of the region, such as occluded pixels or uniformly textured surfaces. This noise is taken care of, up to a certain extent, by post-processing methods, which have also received significant attention from experts as they help keep the process inexpensive. Area-based methods, feature-based methods, as well as methods based on gradient optimization lie in this category.

#### 2.3.2 Global stereo matching methods

Global methods have almost always beaten the local methods in output quality but incur large computational costs. These algorithms are immune to local peculiarities and can sometimes handle difficult regions that are hard for local methods. Dynamic programming and nearest-neighbor methods lie in this category. Global methods are rarely used in practice because of their high computational demands; researchers mostly incline toward local stereo matching methods because of their vast range of possible applications with real-time stereo output.


Block matching is among the simplest and most popular disparity estimation algorithms. It involves the comparison of a block of pixels surrounding the pixel under study. This comparison between the two patches is made using one or a group of cost functions that are not restricted to the ones mentioned above. SSD and SAD perform pretty well and hence are the first choices in many algorithms (see Figure 7 for the disparity output of the stereo block matching algorithm).

Some modifications to this basic approach that exist in the current literature are variations in the shape, size, and count of the pixel blocks used for each pixel of interest. Other areas of modification include the cost function and the preprocessing and post-processing of the disparity map. [12–14] are some examples of the approaches mentioned above. Although most of these modifications improve the accuracy and quality of the obtained disparity map, they all come with an added computational expense. Hence, like most algorithmic choices, stereo matching algorithms boil down to a direct trade-off between computation and accuracy, so it is particularly important to choose the algorithm based on the specific application and the use case that governs its usability. Alongside these limitations, recent technical advances mean that many researchers now devise solutions with the power of GPUs in mind. Parallelizing the above algorithms makes them compatible to run on GPUs and overcomes most of the speed limitations. Though the number of computations is almost the same, their parallel execution takes far less time than serial execution. This advancement opens doors for executing more complex algorithms much faster and hence allows better-quality outputs in real time.

#### 2.4 Depth estimation

#### 2.4.1 Conventional method

Once we already have the disparity map for a pair of stereo images, getting the pixel-wise distance from it is the easy part. This information can be obtained using a linear formula (see Eq. (5)):

Figure 7. Disparity output using stereo block matching algorithm.


$$z = \frac{f \times B}{d} \tag{5}$$

As discussed earlier, the formula for depth incorporates its inversely proportional relation to the disparity as well as the directly proportional relation to the baseline. Focal length and baseline are stereo camera constants that are obtained from the stereo calibration.

The symbols used in Eq. (5) and Figure 8 are as follows:

$z$ — depth of the object point from the stereo unit, in meters
$f$ — effective focal length of the stereo unit, in pixels
$B$ — baseline distance between the two cameras, in meters
$d$ — pixel disparity
$O$ — object point in the world frame
$C_1$, $C_2$ — Camera 1 and Camera 2
$I_1$, $I_2$ — corresponding images from Camera 1 and Camera 2

Eq. (5) can be better understood using the following simple proof. As we can see from the diagram in Figure 8, the camera plane is parallel to the image plane:

$$\therefore \Delta OPC_1 \sim \Delta C_1MI_1 \tag{6}$$

$$\text{and } \Delta \text{OPC}\_2 \sim \Delta \text{C}\_2 \text{NI}\_2 \tag{7}$$

from Eq. (6), we know that

Figure 8. The stereo vision geometry.

peculiarities and can sometimes handle difficult regions that would be hard to handle using local methods. Dynamic programming and nearest neighbor methods lie in this category. Global methods are rarely used because of their high computational demands. Researchers mostly incline toward the local stereo matching methods because of its vast range of possible applications with a real-time stereo

Block matching is among the simplest and most popular disparity estimation algorithms. It involves the comparison of a block of pixels surrounding the pixel under study. This comparison between the two patches is made using one or a group of cost functions that are not restricted to the ones mentioned above. SSD and SAD perform pretty well and hence are the first choices in many algorithms (see Figure 7 for the disparity output of the stereo block matching algorithm).

Some modifications to this basic approach that exists in the current literature are variations in the shape, size, and count of the pixel blocks used for each pixel of interest. Other areas of modification include the cost function and preprocessing and post-processing of the disparity map. [12–14] are some examples of the approaches mentioned above. Although most of these modifications show improvement in the accuracy and quality of the obtained disparity map, they all come with an added computational expense. Hence, like most of the algorithmic choices, even the stereo matching algorithms boil down to the direct trade-off between computation and accuracy. So it is particularly important to choose the algorithms based on the specific applications and the use case that governs their usability. With these limitations in place, the time has presented us with excellent technical advances, and hence many researchers are now devising solutions with the power of GPUs in mind. Parallelizing the above algorithms makes them compatible to run on GPUs and overcome most of the speed limitations. Though the number of computations being done is almost the same, their parallel execution takes a lot less time compared to their serial execution. This advancement opens doors for the execution of more complex algorithms much faster and hence allows

Once we already have the disparity map for a pair of stereo images, getting the pixel-wise distance from it is the easy part. This information can be obtained using a

output.

Coding Theory

better quality outputs in real time.

2.4 Depth estimation

2.4.1 Conventional method

linear formula (see Eq. (5)):

Disparity output using stereo block matching algorithm.

Figure 7.

120

$$\frac{z}{f} = \frac{C\_1 P}{I\_1 M} \tag{8}$$

and from Eq. (7),

$$\frac{z}{f} = \frac{C\_2 P}{I\_2 N} \tag{9}$$

Since baseline is the distance between the two cameras in a stereo unit,

$$\therefore B = C_1P - C_2P \tag{10}$$

from Eqs. (8) and (9), we can rewrite Eq. (10) as

$$B = \frac{\mathcal{Z}}{f} \times (I\_1 \mathcal{M} - I\_2 \mathcal{N}) \tag{11}$$

From the definition, it is evident that $(I_1M - I_2N)$ is nothing but the disparity $d$. Therefore, from Eq. (11) we arrive at the original equation of depth, i.e.,

$$z = \frac{f \times B}{d} \tag{12}$$
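As a small illustrative sketch (our own, with made-up calibration values, not from the chapter), Eq. (12) turns a disparity map into per-pixel depth; pixels with no valid disparity are marked None:

```python
# Illustrative sketch: converting a disparity map to per-pixel depth with
# Eq. (12), z = f * B / d. f is the effective focal length in pixels and B
# the baseline in meters, both obtained from stereo calibration; the values
# below are made-up examples. Zero disparity (no match) maps to None.

def depth_map(disparity, f_px, baseline_m):
    return [[(f_px * baseline_m / d) if d > 0 else None for d in row]
            for row in disparity]

# Example: f = 700 px, B = 0.12 m, disparity of 42 px
# -> depth = 700 * 0.12 / 42 = 2.0 m
depths = depth_map([[42, 0]], 700.0, 0.12)
```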

The proof of the above equation implies that the depth from the stereo unit depends only on the stereo focal length, the baseline length, and the disparity between the corresponding pixels in the image pair. For this exact reason, depth estimation using stereo is more robust and better suited: it is independent of the orientation or pose of the stereo unit w.r.t. the scene in the 3D world coordinates. The depth of an object reported by the stereo unit is not affected by any movement of the unit at the same distance from the object. This characteristic does not hold when depth is estimated using a monocular camera with methods other than deep learning. Calculating depth with a monocular camera is highly dependent on the exact pose of the unit w.r.t. the scene in the 3D world coordinates. The pose constants that work for depth estimation in one camera pose are almost certainly not going to work when the camera is repositioned to some other pose at the same depth from the object.

#### 2.4.2 Deep learning method

All the methods discussed above are ultimately static methods built on traditional computer vision. Deep learning, which has gained popularity in recent years, has shown promising results in almost every field it has been applied to. Sticking to the trend, researchers used it to estimate depth and disparity as well, and as expected, the results are encouraging enough to motivate further research.

Pushing the limits of deep learning, it has also shown motivating results for depth estimation on monocular images. This idea is particularly interesting because the learning model can be trained without the need for disparity maps or depth information [15–17]. The output from one camera is treated as the ground truth for the other camera's input image. The logic is to output a disparity map which, when used to shift the pixels of the first camera image, gives an image equivalent to the second camera image. This disparity output is then used to compute depth using the simple depth formula (Figures 9–11).

Figure 9. Disparity output using deep learning methods [15].

#### 3. Proposed approach

#### 3.1 Algorithm

Many researchers have been working and brainstorming on the issue of sparse disparity maps. A lot of the real-time non-deep learning disparity methods fail to generate dense disparity maps. And deep learning methods have been lagging behind in this case because they are slower and lack accuracy; the performance comparison mainly assumes embedded hardware and not high-end compute machines. However, this approach aims to eradicate their need for certain use cases. The focus of this approach is to question whether a sparse disparity map is really an issue. Sticking to the motivation of this chapter, sparse disparity maps are more than enough to give meaningful information if combined with other smart perception techniques. For example, if methods like object detection [18, 19] or semantic segmentation [20–23] give an output of identified object pixels in the image, sparse stereo output can be used to estimate the depth of the entire identified pixel group with the help of only a few major feature pixels. As a researcher, it is essential to acknowledge that "one solution fits all" is not always the best approach for performance-centric problems. Moreover, because dense disparity maps take up much computational power, we drop the aim of computing them.

If the obtained output is a sparse disparity, high credibility is a nonnegotiable requirement. While many hacks are used to filter out nonsensical disparity values, they are ultimately heuristics, not smart techniques with any understanding of the scene; there is always the possibility that good disparity values are filtered out. Since the current approach works mostly on the most critical feature points, the credibility of their disparity is the maximum in the selected region of pixels. Moreover, multiple distance functions reaffirm the calculated disparity. Higher confidence in the output disparity can be obtained by making use of higher-level structural information of the objects in the scene. The structural buildup of the scene is information that is mostly neglected in the non-deep learning approaches. The approach proposed in this chapter intends to use this information to our advantage.

The conventional techniques start with a window scan of the entire image and look for the best disparity values. Mostly a post-processing step follows which

z <sup>f</sup> <sup>¼</sup> <sup>C</sup>1<sup>P</sup>

z <sup>f</sup> <sup>¼</sup> <sup>C</sup>2<sup>P</sup>

Since baseline is the distance between the two cameras in a stereo unit,

From the definition, it is evident that ð Þ I1M � I2N is nothing but disparity. Therefore, from Eq. (11) we arrive at the original equation of depth, i.e.,

<sup>z</sup> <sup>¼</sup> <sup>f</sup> � <sup>B</sup>

The proof for the above equation implies that the depth from the stereo unit is only dependent on the stereo focal length, the baseline length, and the disparity between the corresponding pixels in the image pair. For this exact reason, depth estimation using stereo is more robust and better suited. It is independent of any orientation or poses of the stereo unit w.r.t. the scene in the 3D world coordinates. The depth of an object shown by the stereo unit is not affected by any movement of the unit at the same distance from the object. This characteristic does not hold when the depth is being estimated using the monocular camera using methods other than deep learning. Calculating depth using a monocular camera is highly dependent on the exact pose of the unit w.r.t. the scene in the 3D world coordinates. The pose constants that work for depth estimation in one pose of the camera are most certainly guaranteed not to work when the camera is repositioned to some other

All the methods discussed above are ultimately static methods that work on the base ground of traditional computer vision. Deep learning, gaining popularity in the recent years, has shown promising results in almost all fields that it has been applied to. Sticking to the trend, the researchers and experts used it to estimate depths and disparity as well, and as expected, the results are encouraging enough for all enthu-

Exploiting the limits of deep learning, it has also shown motivating results for depth on monocular images as well. This idea is particularly interesting because in this approach the learning model can be trained without the need for disparity map or depth information [15–17]. The output from one camera is treated as the ground truth for the other camera's input image. The logic is, to give as output, a disparity map which when used to shift the pixels of the first camera image gives us an image that is equivalent to the second camera image. This disparity output is then used to

compute depth using the simple depth formula (Figures 9–11).

from Eqs. (8) and (9), we can rewrite Eq. (10) as

pose at the same depth from the object.

siasts for further motivated research.

2.4.2 Deep learning method

122

<sup>B</sup> <sup>¼</sup> <sup>z</sup> f

and from Eq. (7),

Coding Theory

<sup>I</sup>1<sup>M</sup> (8)

<sup>I</sup>2<sup>N</sup> (9)

∴ B ¼ C1P � C2P (10)

� ð Þ I1M � I2N (11)

<sup>d</sup> (12)

Many researchers have been working on the issue of sparse disparity maps. Many real-time non-deep learning disparity methods fail to generate dense disparity maps, and deep learning methods have lagged behind in this setting because they are slower and less accurate. The performance comparison here mainly assumes embedded hardware and not high-end compute machines. This approach, however, aims to eliminate the need for dense maps in certain use cases. Its focus is to question whether a sparse disparity map is really an issue. In keeping with the motivation of this chapter, sparse disparity maps are more than enough to give meaningful information if combined with other smart perception techniques. For example, if methods like object detection [18, 19] or semantic segmentation [20–23] output the identified object pixels in the image, sparse stereo output can be used to estimate the depth of the entire identified pixel group with the help of only a few major feature pixels. As a researcher, it is essential to acknowledge that "one solution fits all" is not always the best approach for performance-centric problems. Moreover, because dense disparity maps take up much computational power, we drop the aim of producing them.
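As a concrete sketch of this idea, the following minimal pure-Python example applies the depth formula of Eq. (12), z = f·B/d, to the few feature disparities that fall inside a detected object region. The function names and the median-based fusion are illustrative assumptions, not the chapter's implementation:

```python
from statistics import median

def depth_from_disparity(d_pixels, f_pixels, baseline_m):
    """Depth via Eq. (12): z = f * B / d (metres, given f in pixels)."""
    return f_pixels * baseline_m / d_pixels

def object_depth(feature_disparities, f_pixels, baseline_m):
    """Depth of a whole detected pixel group (e.g., a bounding box from an
    object detector) from the sparse disparities of the few feature pixels
    that fall inside it; the median rejects stray outliers."""
    return depth_from_disparity(median(feature_disparities),
                                f_pixels, baseline_m)
```

With, say, f = 700 px and B = 0.12 m, feature disparities around 21 px place the whole object at roughly 4 m.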

If the obtained output is a sparse disparity, high credibility is a nonnegotiable requirement. While many hacks are used to filter out nonsensical disparity values, they are ultimately heuristics rather than smart techniques with any understanding of the scene, and there is always the possibility that good disparity values are filtered out. Since the current approach works mostly on the most critical feature points, the credibility of their disparity is the maximum in the selected region of pixels. Moreover, multiple distance functions reaffirm the calculated disparity. Higher confidence in the output disparity can be obtained by making use of higher-level structural information about the objects in the scene. The structural buildup of the scene is information that is mostly neglected in the non-deep learning approaches; the approach proposed in this chapter intends to use it to our advantage.

#### 3.1 Algorithm

The conventional techniques start with a window scan of the entire image and look for the best disparity values. Mostly a post-processing step follows which deletes the spurious disparity values from the final output. In this proposed approach, a post-processing step is not required, and the correspondence algorithm runs for a much smaller number of pixels compared to the entire image. Following the above statement, this approach starts with finding the most prominent features in the two input images. It is critical because it ensures three things—the selection of discrete pixels that ensure high disparity confidence, the removal of any post-processing step, and a drastic reduction in the input size for disparity estimation. The first point takes care of high credibility, whereas the other two points ensure a significant performance boost.

Once we have the features of the two input images, we use a combination of multiple matching techniques. Since each of them is a conventional image processing technique constrained to not be computationally heavy, there is only so much information that any single method can carry. A combination of them has the potential to overcome this flaw.

The first technique, the feature matching technique, is the most dynamic part of the algorithm: it requires modification for every different type of feature selection. As the features of interest in this chapter are line segments, the discussion is restricted to those. Here the features are matched not only by pixel values but by their feature properties as well. For example, the slope is an essential property of a line; it helps to identify similarity in the structure of the compared scene. However, this has the naïve loophole that a line can match any similar-looking line, so it is not possible to rely entirely on this distance estimation technique.
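A minimal sketch of such a property-based matcher follows, assuming line segments represented as ((x1, y1), (x2, y2)) pixel pairs; the weights, the rejection threshold, and the function names are illustrative assumptions:

```python
import math

def line_props(seg):
    """Slope angle (radians) and length of a segment ((x1, y1), (x2, y2))."""
    (x1, y1), (x2, y2) = seg
    return math.atan2(y2 - y1, x2 - x1), math.hypot(x2 - x1, y2 - y1)

def structural_distance(seg_a, seg_b, w_angle=1.0, w_len=0.01):
    """Feature-property distance between two line segments: a weighted sum
    of slope-angle difference and length difference.  A low value only says
    the two lines *look* alike, which is why this metric is never relied
    upon on its own."""
    ang_a, len_a = line_props(seg_a)
    ang_b, len_b = line_props(seg_b)
    return w_angle * abs(ang_a - ang_b) + w_len * abs(len_a - len_b)

def best_structural_match(seg, candidates, max_dist=0.5):
    """Pick the candidate with the smallest structural distance; return
    None when no candidate is fundamentally similar enough."""
    dist, match = min((structural_distance(seg, c), c) for c in candidates)
    return match if dist <= max_dist else None
```

Rejecting features with no structurally similar candidate is also the first of the two screening steps described later.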

The second technique is the typical window matching technique (see Figure 12). The difference here is the size and shape of the window, decided for each individual feature: the line segment detected in that area governs the shape and size of each window, and the window must entirely cover the smaller of the two lines (the detected lines in the two input images). For a little context, a few pixels pad the feature line within the window (see Figure 12). This one difference from typical window matching matters a great deal, because each window captures a significant image feature in its entirety and not just arbitrary parts of it. Despite this added advantage, the method still has all the flaws of the box matching technique, the most significant being the difference in illumination between the two camera views, which can lead to erroneous disparity values. The next distance estimation technique handles this flaw.
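The feature-sized window matching can be sketched as follows, assuming a rectified grayscale pair stored as lists of rows; the SAD cost, the (top, left, h, w) window tuple, and the function names are illustrative assumptions:

```python
def sad(img_a, img_b, top, left_a, left_b, h, w):
    """Sum of absolute differences between two equally sized windows."""
    return sum(abs(img_a[top + r][left_a + c] - img_b[top + r][left_b + c])
               for r in range(h) for c in range(w))

def window_match_disparity(left_img, right_img, window, max_disp):
    """Slide a feature-sized window (top, left, h, w) from the left image
    across the right image along the same rows (rectified pair) and return
    the disparity with the lowest SAD.  The window is sized to cover the
    whole detected line segment plus padding, so it captures a significant
    feature in its entirety."""
    top, left, h, w = window
    return min(range(max_disp + 1),
               key=lambda d: sad(left_img, right_img, top, left, left - d, h, w)
               if left - d >= 0 else float("inf"))
```

Because the window is tied to one detected segment, each feature gets exactly one such search rather than a scan of the whole image.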

The third technique is the census feature matching technique (see Figure 13), which makes the pixel matching intensity-independent. It captures the relationship between the intensity values in a selected neighborhood and does not rely on exact intensity values. Although this step may make the previous distance estimation seem redundant, the window matching still helps in cases where the relation between pixel intensities is the same for multiple positions of the search space. Moreover, unlike the window matching technique, the census features require a single point of interest for each window and hence cannot have a non-square window size for the image features.
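A minimal sketch of the census matching step, under the same list-of-rows grayscale assumption; the kernel size and function names are illustrative:

```python
def census(img, r, c, k=1):
    """Census descriptor of pixel (r, c): one bit per neighbour in a
    (2k+1)x(2k+1) kernel, set when the neighbour is darker than the
    centre.  Only intensity *relations* are kept, so the descriptor is
    unaffected by an illumination offset between the two views."""
    centre = img[r][c]
    return tuple(int(img[r + dr][c + dc] < centre)
                 for dr in range(-k, k + 1) for dc in range(-k, k + 1)
                 if (dr, dc) != (0, 0))

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def census_match_disparity(left_img, right_img, r, c, max_disp, k=1):
    """Best disparity for the single pixel of interest (r, c) by minimum
    Hamming distance between census descriptors."""
    ref = census(left_img, r, c, k)
    return min((d for d in range(max_disp + 1) if c - d - k >= 0),
               key=lambda d: hamming(ref, census(right_img, r, c - d, k)))
```

Note how a uniform brightness offset between the two images leaves the descriptors, and hence the match, unchanged.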

Figure 13 shows the use of census features for this approach.

Figure 12. Window matching technique: the window covers the entire feature segment along with some pixel padding.

Figure 10. Left image overlapped with the detected feature lines [7].

Figure 11. Right image overlapped with the detected feature lines [7].

While the above metrics help find an accurate match of the corresponding pixels, it is necessary to identify the pixels that do not have a corresponding matching pixel. It is mainly the case with occluded pixels and is a significant factor to take care of to ensure high accuracy. The steps that ensure this necessity are the feature matching and disparity aggregation (discussed later) steps. In the feature matching step, a corresponding match is searched only for features that are fundamentally and structurally the same. Failure to find such candidates leads to dropping the particular feature. After this initial screening, disparity aggregation does the final screening. Here if the disparity values obtained from the different metrics go out of a range, they are rejected. This thresholding can be relied upon because the estimated ranges are in the depth space.
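The final screening described above can be sketched as follows; the acceptance band [d_low, d_high] is assumed to come from Eqs. (12) and (13), and the averaging fusion is an illustrative choice:

```python
def aggregate_disparity(metric_disparities, d_low, d_high):
    """Final screening: fuse the disparity estimates from the individual
    metrics (feature, window, and census matching) and reject the feature
    when any estimate falls outside the acceptable band [d_low, d_high]
    derived from the depth-space ranges (Eqs. (12) and (13))."""
    if any(not (d_low <= d <= d_high) for d in metric_disparities):
        return None  # out-of-range estimate: drop this feature entirely
    return sum(metric_disparities) / len(metric_disparities)
```

Returning None rather than a filtered value is what keeps the output sparse but highly credible.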

Figure 13. Census feature matching technique—red line is the feature segment, the red dots are the pixels of interest on the feature segment, the dashed squares are the census feature kernels for the pixels of interest.

Next is the disparity aggregation step, which combines the disparity values obtained from all of the above metrics. The key characteristic here is that this aggregation step can also reject outlier disparity values. The upper and lower bounds of the disparity values can be obtained from Eq. (12); extending the same, we can get the disparity error range in the pixel space (Eq. (13)).

$$\text{Disparity Error Range} = f \cdot B \cdot \frac{(x + y)}{(z - x)(z + y)} \tag{13}$$
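A direct transcription of Eq. (13), with z, x, y in meters and f, B as the focal length in pixels and baseline in meters; by construction it equals the difference between the Eq. (12) disparities at depths z − x and z + y (the function names are illustrative):

```python
def disparity_px(z_m, f_px, baseline_m):
    """Disparity in pixels of a point at depth z, from Eq. (12)."""
    return f_px * baseline_m / z_m

def disparity_error_range(z_m, x_m, y_m, f_px, baseline_m):
    """Eq. (13): width in pixels of the acceptable disparity band for
    margins of x metres in front of and y metres behind an object at
    depth z."""
    return f_px * baseline_m * (x_m + y_m) / ((z_m - x_m) * (z_m + y_m))
```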

In Eq. (13), below is the legend for the symbols used:

z – Depth of the object point from the stereo unit in meters
B – Baseline distance between the two camera units in meters
f – Effective focal length of the stereo unit in pixels
x – Arbitrary margin distance in meters taken in front of the object
y – Arbitrary margin distance in meters taken behind the object

#### 3.2 Results

Figure 14 shows the final result of the above approach. The image features are color-coded based on their disparity values. Visually the output looks much inferior to the standard disparity estimation techniques, but the motivation of this chapter has been different from the beginning: in a combined pipeline, i.e., in combination with other smart perception techniques, this approach is capable of performing much better than the typical approaches because it mostly avoids false disparity values.

Figure 14. Final feature disparity map—red denotes closer pixels, blue denotes farther pixels.

#### 3.3 Future work

Although the proposed method is promising in some instances, it will not always perform better, for obvious reasons. If there is a requirement to estimate the disparity of some pixel that does not lie in the feature pool, this algorithm is bound to fail. In such cases, custom image descriptors, where the closest feature points define the pixel of interest, can be used to overcome this flaw. Getting this right is a challenge because the disparity between the two stereo images makes it a nontrivial problem to select the correct features to describe the pixel of interest: since the introduced disparity can lead to some difference in the background of the two input images, not all features can be used to describe the pixel of interest.

Another critical factor that can help improve the quality of the output is better identification of the edges of the detected objects. Something that I would like to call "dislocated kernels" might help improve the accuracies. The idea is not to be restricted by the requirement that the pixel of interest lie at the center of the kernel.

All of the above ideas and approaches work behind a single motivation of attaining the maximum credibility of the computed output. Along the same lines, if the current approach can be optimized enough, we might have enough room for the conventional yet effective stereo consistency check. Since even this check is to be performed on the feature elements of the image, the number of input pixels is meager and can lead to very high confidence overall.
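The consistency check mentioned above, restricted to the sparse feature pixels, can be sketched as follows (the pair representation and tolerance are illustrative assumptions):

```python
def consistency_filter(lr_pairs, tol_px=1.0):
    """Conventional left-right consistency check, run only on the sparse
    feature pixels: keep a match when the disparity found left-to-right
    agrees (within tol_px) with the one found right-to-left for the
    corresponding pixel."""
    return [(d_lr, d_rl) for d_lr, d_rl in lr_pairs
            if abs(d_lr - d_rl) <= tol_px]
```

Because only feature pixels are checked, the cost of running the matcher in both directions stays small.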

#### 4. Conclusion

Throughout this chapter, we covered the fundamentals of the stereo vision system and discussed how sparse disparity maps can be highly credible. We discussed the primary use cases and applications along with the conceptual working of this system. As discussed earlier, there are many complex challenges to take care of when using stereo for any application. The solutions to these challenges are no magic bullet and require some digging to figure out what works best for the chosen application. Many vendors have launched ready-made stereo vision systems with a reasonable amount of accuracy, saving researchers the effort of setting up a good stereo system themselves. These are fit for almost all personal projects, and some are even suitable for extensive projects; examples of such products are ZED Stereo, Microsoft Kinect, Bumblebee, and many more. Multiple solutions have come into existence to speed up the process of depth estimation: while some of these devices use custom-built hardware for faster computation, others use a variety of cameras, e.g., infrared cameras, to make the process of stereo matching much easier. The approach proposed in this chapter guides toward making use of even the sparse disparity maps with greater confidence.

This chapter was an attempt to cover most of the fundamental concepts that govern the working of stereo vision systems and to give an alternative for fast depth estimation techniques. The intention was to give enthusiastic readers enough information about the topic to make them capable of digging deeper into advanced aspects of the subject.


#### Acknowledgements

I want to use this space to thank first and foremost my ex-employer, the Hi-Tech Robotic Systemz, and my mentor at the company, Gaurav Singh, for introducing me and giving me enough opportunities in this domain that helped me grow my knowledge in this field.

Next, I would like to thank my friends Karan Sanwal, Nalin Goel, Smriti Singh, Megha Mishra, Subhash Gupta, Shilpa Panwar, and Priyanka Tete for their constant support in reviewing the chapter and providing me with valuable insights for the modification of this content.

Last but not least, I would like to thank my parents and my sister for their motivating support toward writing this chapter.

Had it not been for the constant backing of all these people, I could not have imagined writing this chapter. So a hearty thank you to all of them.

#### Author details

Satyarth Praveen
Master of Engineering in Robotics, University of Maryland, College Park, Maryland, United States of America

\*Address all correspondence to: satyarth@terpmail.umd.edu

© 2019 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/ by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Efficient Depth Estimation Using Sparse Stereo-Vision with Other Perception Techniques DOI: http://dx.doi.org/10.5772/intechopen.86303

#### References


[1] The ABC's of Distances [Internet]. 2018. Available from: http://www.astro.ucla.edu/wright/distance.html [Accessed: 07-09-2018]

[2] Zhang Z. A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2000;22: 1330-1334

[3] Scaramuzza D, Martinelli A, Siegwart R. A toolbox for easily calibrating omnidirectional cameras. In: Intelligent Robots and Systems, 2006 IEEE/RSJ International Conference on 9 October 2006; IEEE; pp. 5695-5701

[4] Camera Calibration Toolbox for Matlab [Internet]. 2015. Available from: http://www.vision.caltech.edu/bouguetj/calib\_doc/ [Accessed: 14-10-2015]

[5] Tsai Camera Calibration [Internet]. 2003. Available from: http://homepages. inf.ed.ac.uk/rbf/CVonline/LOCAL\_ COPIES/DIAS1/ [Accessed: 05-11-2003]

[6] Scharstein D, Szeliski R. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. International Journal of Computer Vision. 2002;47(1–3):7-42

[7] Geiger A, Lenz P, Urtasun R. Are we ready for autonomous driving? The kitti vision benchmark suite. Computer Vision and Pattern Recognition (CVPR). In: 2012 IEEE Conference on 16 June 2012; IEEE; pp. 3354-3361

[8] Silberman N, Hoiem D, Kohli P, Fergus R. Indoor segmentation and support inference from rgbd images. In: European Conference on Computer Vision. Berlin, Heidelberg: Springer; 2012. pp. 746-760

[9] Scharstein D, Szeliski R. Highaccuracy stereo depth maps using structured light. In: Computer Vision and Pattern Recognition, 2003; Proceedings 2003 IEEE Computer Society Conference on 18 June 2003; IEEE; Vol. 1. pp. I-I

[10] Scharstein D, Hirschmüller H, Kitajima Y, Krathwohl G, Nešić N, Wang X, et al. High-resolution stereo datasets with subpixel-accurate ground truth. In: German Conference on Pattern Recognition. Cham: Springer; 2014. pp. 31-42

[11] Sjaardema TA, Smith CS, Birch GC. History and evolution of the Johnson criteria. SANDIA Report, SAND2015-6368. 2015

[12] Hirschmuller H. Accurate and efficient stereo processing by semiglobal matching and mutual information. In: Computer Vision and Pattern Recognition CVPR 2005; IEEE Computer Society Conference on 20 June 2005; IEEE; 2005. Vol. 2. pp. 807-814

[13] Hirschmuller H. Stereo processing by semiglobal matching and mutual information. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2008;30(2):328-341

[14] Spangenberg R, Langner T, Adfeldt S, Rojas R. Large scale semi-global matching on the CPU. In: Intelligent Vehicles Symposium Proceedings, 2014 IEEE. 2014. pp. 195-201

[15] Godard C, Mac Aodha O, Brostow GJ. Unsupervised monocular depth estimation with left-right consistency. CVPR. 2017;2(6):7

[16] Zbontar J, LeCun Y. Stereo matching by training a convolutional neural network to compare image patches. Journal of Machine Learning Research. 2016;17(1–32):2

[17] Mayer N, Ilg E, Hausser P, Fischer P, Cremers D, Dosovitskiy A, et al. A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016. pp. 4040-4048

[18] Redmon J, Divvala S, Girshick R, Farhadi A. You only look once: Unified, real-time object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016. pp. 779-788

[19] Liu W, Anguelov D, Erhan D, Szegedy C, Reed S, Fu CY, et al. Ssd: Single Shot Multibox Detector. In: European Conference on Computer Vision. Cham: Springer; 2016. pp. 21-37

[20] Paszke A, Chaurasia A, Kim S, Culurciello E. Enet: A deep neural network architecture for real-time semantic segmentation. arXiv preprint arXiv:1606.02147. 2016

[21] Dissecting the Camera Matrix, Part 3: The Intrinsic Matrix [Internet]. 2013. Available from: http://ksimek.github.io/ 2013/08/13/intrinsic/ [Accessed: 13-08- 2013]

[22] Bhatti A. Current Advancements in Stereo Vision. Rijeka: InTech; 2012. https://scholar.google.com/scholar?oi= gsb95&q=current%20advances%20in% 20stereo%20vision%20rijeka&lookup= 0&hl=en

[23] Hartley R, Zisserman A. Multiple View Geometry in Computer Vision. Cambridge University Press; 2003. https://scholar.google.com/scholar?hl= en&as\_sdt=0%2C5&q=multiple+view +geometry+in+computer+vision& btnG=&oq=multiple+view+

#### **Chapter 8**

## Advances in Signal and Image Processing in Biomedical Applications

*Mathiyalagan Palaniappan and Manikandan Annamalai*

#### **Abstract**

Our bodies continually convey information about our health. This information can be collected using physiological instruments that measure heart rate, blood pressure, oxygen saturation, blood glucose, nerve conduction, brain activity, and so on. Typically, such measurements are taken at specific points in time and noted on a patient's chart. Working with conventional bio-measurement tools, the signals can be processed by software to give physicians continuous information and greater insight to aid clinical assessments. By using increasingly sophisticated means to analyze what our bodies are telling us, we can potentially determine the state of a patient's health through increasingly noninvasive measures.

**Keywords:** patient, biosignals, medical image, processing, decision

#### **1. Introduction**

Signals from the organs of the human body are measured and analyzed using various instruments. This type of signal processing is called biosignal processing. The major challenge is to remove the noise from the signals so that the resulting information is more useful to clinicians [1–10].

Modalities such as MRI, PET, and CT generate many images, and these images are processed using artificial intelligence and machine learning algorithms. Biosignal processing and machine-learning-based medical image analysis [12] help doctors diagnose diseases accurately.
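The noise-removal challenge mentioned above can be illustrated with a minimal smoothing filter. The "biosignal", noise level, and window width below are synthetic stand-ins for illustration, not a clinically validated pipeline.

```python
# A minimal sketch of biosignal denoising: a moving-average filter smooths
# zero-mean noise out of a slowly varying signal. The signal, noise level,
# and window width are invented for illustration only.
import math
import random

def moving_average(x, width=9):
    """Smooth x by averaging each sample with its neighbors (edges shrink)."""
    half = width // 2
    out = []
    for i in range(len(x)):
        window = x[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

def mse(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b)) / len(a)

random.seed(0)
clean = [math.sin(2 * math.pi * t / 100) for t in range(200)]   # slow "biosignal"
noisy = [c + random.gauss(0, 0.3) for c in clean]               # measurement noise
smooth = moving_average(noisy)

err_noisy, err_smooth = mse(noisy, clean), mse(smooth, clean)
# err_smooth comes out far below err_noisy: most of the noise is removed.
```

Averaging over a window of 9 samples cuts the noise variance by roughly a factor of 9 while barely attenuating a signal that varies over 100 samples; real pipelines use purpose-designed filters, but the trade-off is the same.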

#### **2. Advancement of biomedical applications**

This section aims to collect a diverse and complementary set of emerging techniques that demonstrate new developments and applications of advanced signal and image processing in medical imaging. It will help both physicians and radiologists in image interpretation, and help technicians share the latest technical advances.

#### **2.1 Signal processing in networked cyber-physical systems**

A notable challenge for the implementation of signal processing solutions in cyber-physical systems (CPS) is the difficulty of acquiring data from geographically distributed observation nodes and storing/processing the aggregated data at the fusion center (FC) [1, 2]. Accordingly, there has been a recent surge of interest in the development of distributed and cooperative signal processing technologies, where adaptation, estimation, and/or control are performed locally and communication is constrained to local neighborhoods.

Cyber-physical systems provide major capabilities, but they are also exposed to attack: digital and physical attacks by adversaries on signal processing modules could lead to a variety of severe consequences, including leakage of user data, destruction of infrastructure, and endangerment of human lives [3]. At the same time, the need for cooperation between neighboring nodes makes it essential to prevent the disclosure of sensitive local information during the distributed data fusion step.
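A concrete sketch of processing that stays within local neighborhoods is distributed average consensus, a standard example of such cooperative estimation. The ring topology, step size, and measurements below are illustrative assumptions, not the specific schemes of [1, 2].

```python
# Distributed average consensus: each node refines its local estimate using
# only its neighbors' values, so no fusion center is needed. Topology,
# step size, and measurements are illustrative assumptions.

def consensus_step(x, neighbors, eps=0.3):
    """One synchronous update: x_i <- x_i + eps * sum_j (x_j - x_i)."""
    return [xi + eps * sum(x[j] - xi for j in neighbors[i])
            for i, xi in enumerate(x)]

def run_consensus(measurements, neighbors, steps=200):
    x = list(measurements)
    for _ in range(steps):
        x = consensus_step(x, neighbors)
    return x

# Five nodes on a ring; each holds one noisy local measurement.
meas = [4.0, 5.5, 6.0, 5.0, 4.5]
ring = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
est = run_consensus(meas, ring)
# Every node converges to the global average (5.0) while only ever
# communicating with its two immediate neighbors.
```

Because each node's update touches only its own neighborhood, no raw measurement ever has to leave the local area, which is one reason these schemes are attractive when disclosure of local data is a concern.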

#### **2.2 Cyber-physical systems in signal processing**

Cyber-physical systems provide methods and support to tackle prognostic problems in a variety of medical areas [4]. Machine learning (ML) algorithms are used to examine the significance of clinical parameters, e.g., for prediction of disease progression, extraction of medical knowledge, and patient management [5]. ML is being used for data analysis in the medical field [6]. It is argued that the successful deployment of ML methods can aid the integration of computer-based systems in the healthcare environment, providing opportunities to facilitate and enhance the work of medical experts.
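As a minimal sketch of the kind of ML step described, the snippet below predicts disease progression from clinical parameters with a nearest-centroid classifier. The features, cohort values, and labels are invented for illustration, not drawn from any cited study.

```python
# Toy nearest-centroid classifier over clinical parameters. All numbers and
# labels here are fabricated for illustration only.

def centroid(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def fit(samples, labels):
    """Compute one centroid per class from labeled clinical feature vectors."""
    classes = sorted(set(labels))
    return {c: centroid([s for s, l in zip(samples, labels) if l == c])
            for c in classes}

def predict(model, x):
    """Assign x to the class with the nearest (squared-distance) centroid."""
    return min(model, key=lambda c: sum((a - b) ** 2
                                        for a, b in zip(model[c], x)))

# Toy cohort: [systolic blood pressure, fasting glucose] -> progressed or not.
train_x = [[120, 90], [118, 85], [150, 160], [155, 170]]
train_y = ["stable", "stable", "progressed", "progressed"]
model = fit(train_x, train_y)
# predict(model, [122, 95]) classifies a new patient record.
```

Real clinical prediction uses far richer features and validated models; the point is only that "examining the significance of clinical parameters" reduces to fitting a model on labeled patient vectors and querying it.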

#### **2.3 Multimodal multimedia signal processing**

Analysts in various fields use multimodal information. One of its most common uses is in the field of human-computer interaction (HCI). Here, a modality is a natural means of interaction: speech, vision, facial expressions, handwriting, gestures, and head and body movements [7, 8]. Multimodal interfaces facilitate human-computer interaction [9] and replace the conventional keyboard and mouse. Multimodal speaker recognition identifies the active speaker in an audio-video sequence that contains several speakers, based on the correlation between the audio and the motion in the video [10].
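The audio-visual correlation idea can be sketched very simply: pick the speaker whose motion track is most correlated with the audio energy envelope. The feature tracks below are synthetic stand-ins for real audio and mouth-motion features, not the method of [10].

```python
# Toy multimodal speaker recognition: correlate frame-wise audio energy with
# each speaker's motion signal and pick the best match. Signals are synthetic.

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def active_speaker(audio_energy, motion_tracks):
    """Return the index of the motion track best correlated with the audio."""
    return max(range(len(motion_tracks)),
               key=lambda i: corr(audio_energy, motion_tracks[i]))

audio = [0.1, 0.9, 0.8, 0.2, 0.7, 0.9, 0.1, 0.6]      # frame-wise loudness
tracks = [
    [0.0, 0.1, 0.1, 0.0, 0.1, 0.0, 0.1, 0.0],         # speaker 0: mostly still
    [0.2, 1.0, 0.9, 0.1, 0.8, 1.0, 0.2, 0.5],         # speaker 1: moves with audio
]
# active_speaker(audio, tracks) selects speaker 1.
```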

#### **2.4 Statistical signal processing**

Statistical signal processing is an approach to signal processing which treats signals as stochastic processes, utilizing their statistical properties to perform signal processing tasks. Statistical techniques are widely used in signal processing applications [13, 14].
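One classical statistical technique is the matched filter, which exploits the second-order statistics of white noise: sliding cross-correlation with a known template is the optimal detector for that template at an unknown delay. The Barker-code template, delay, and noise level below are illustrative assumptions.

```python
# Matched-filter detection of a known template buried in white Gaussian
# noise. Template, delay, and noise level are chosen for illustration.
import random

random.seed(0)
template = [1, 1, 1, -1, -1, 1, -1]     # Barker-7 code: sharp autocorrelation
true_delay = 40                          # unknown to the detector
n = 100
x = [random.gauss(0.0, 0.2) for _ in range(n)]
for k, p in enumerate(template):         # bury the template in the noise
    x[true_delay + k] += p

# Score every candidate delay by correlating the template with the signal.
scores = [sum(x[t + k] * p for k, p in enumerate(template))
          for t in range(n - len(template) + 1)]
detected = max(range(len(scores)), key=scores.__getitem__)
# The correlation peak at 'detected' recovers the embedded delay (40).
```

The Barker code is chosen because its autocorrelation sidelobes are small, so the statistical gap between the true delay's score and every competitor is many noise standard deviations wide.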

#### **2.5 Signal processing techniques for data hiding and audio watermarking**

Signal processing techniques for data hiding provide a novel system for embedding and recovering "hidden" data in audio files. In this procedure, the phase of selected components of the host audio signal is manipulated in a way that can be detected by a receiver with the proper "key" [15]. Without the key, the hidden data is imperceptible, both aurally and to blind digital signal processing attacks. The method described is both aurally transparent and robust, and can be applied to both analog and digital audio signals, the latter including uncompressed as well as compressed audio file formats. Data hiding is performed by a relative phase encoding and quantization-index-modulation phase encoding technique [16].
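A toy sketch of the phase-coding idea (not the exact scheme of [15, 16]): the phase of one key-selected DFT bin of an audio frame is set to +π/2 or −π/2 to carry a single bit, and only a receiver that knows which bin was used can read it back.

```python
# Toy phase-coding data hiding in one frame of "audio". The key is simply
# which DFT bin carries the bit; frame length and key bin are assumptions.
import cmath
import math
import random

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def embed(frame, bit, key_bin):
    """Set the key bin's phase to +pi/2 (bit=True) or -pi/2 (bit=False)."""
    X = dft(frame)
    mag = abs(X[key_bin])
    X[key_bin] = cmath.rect(mag, math.pi / 2 if bit else -math.pi / 2)
    X[-key_bin] = X[key_bin].conjugate()   # keep the frame real-valued
    return idft(X)

def extract(frame, key_bin):
    """Receiver with the key reads the bit from the bin's phase sign."""
    return cmath.phase(dft(frame)[key_bin]) > 0

random.seed(1)
host = [random.uniform(-1, 1) for _ in range(64)]   # one "audio" frame
key_bin = 5                                          # shared secret key
marked = embed(host, True, key_bin)
# extract(marked, key_bin) recovers the embedded bit.
```

Keeping the bin's magnitude unchanged is what makes phase coding relatively unobtrusive to the ear; an eavesdropper without the key bin sees only an ordinary-looking spectrum.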


*DOI: http://dx.doi.org/10.5772/intechopen.88759*


#### **2.6 Optical signal processing**


Optical signal processing unites different fields of optics and signal processing, namely nonlinear devices and processes, analog and digital signals, and advanced data modulation formats, to achieve high-speed signal processing functions that can potentially operate at the line rate of fiber-optic communications [17, 18]. Data can be encoded in the amplitude, phase, wavelength, polarization, and spatial features of an optical wave to achieve high-capacity transmission. Various optical nonlinearities and chromatic dispersion have been shown to enable key subsystem applications, for example wavelength conversion, multicasting, multiplexing, demultiplexing, and tunable optical delays. Optical signal processing using coherent optical frequency combs could have various potential applications in optical communications. First, an approach to achieve tunable optical high-order QAM [19] generation, based on multichannel aggregation and all-optical pilot-tone-based self-homodyne detection, is used in two scenarios: (i) multiple WDM channels with sufficient pilot-tone power, and (ii) a single channel with a low-power pilot tone. Finally, fragmented bandwidth allocation is enabled by reconfigurable channel slicing and stitching.
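For readers unfamiliar with high-order QAM, the snippet below shows the underlying constellation math for 16-QAM with Gray-coded axes. It is only the digital mapping, not the paper's all-optical generation scheme, and the normalization convention is one common choice.

```python
# 16-QAM symbol mapping: 4 bits -> one complex symbol (I from the first two
# bits, Q from the last two), normalized to unit average symbol energy.

GRAY_LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}  # Gray-coded PAM-4

def qam16_map(bits):
    """Map a bit sequence (length divisible by 4) to 16-QAM symbols."""
    assert len(bits) % 4 == 0
    scale = 10 ** 0.5        # raw {+-1, +-3}^2 grid has average energy 10
    symbols = []
    for i in range(0, len(bits), 4):
        b = tuple(bits[i:i + 4])
        symbols.append(complex(GRAY_LEVELS[b[:2]], GRAY_LEVELS[b[2:]]) / scale)
    return symbols

syms = qam16_map([0, 0, 0, 0,  1, 0, 1, 0])
# First symbol is the corner (-3 - 3j)/sqrt(10); second is (3 + 3j)/sqrt(10).
```

Gray coding ensures adjacent constellation points differ in a single bit, which is what keeps the bit error rate low when noise pushes a received symbol into a neighboring decision region.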

#### **2.7 Virtual physiological human initiative**

The Virtual Physiological Human (VPH) is synonymous with a program in computational biomedicine that aims to establish a framework of methods and technologies to investigate the human body as a whole [20, 21]. It is predicated on the transformational character of information technology, brought to bear on that most fundamental of human concerns, our own health and well-being. The VPH is an organized collection of computational frameworks and ICT-based tools for the multilevel modeling and simulation of human anatomy and physiology. Once sufficiently established, the VPH [22] will provide an essential technological foundation for the Physiome Project, for pathology-specific initiatives in translational research, and for vertical solutions for the biomedical industry.

#### **2.8 Brain-computer interfaces**

Research in electroencephalogram (EEG) based brain-computer interfaces (BCIs) has been expanding rapidly over the last few years, owing to a large extent to the multidisciplinary and challenging nature of BCI research. Signal processing and pattern recognition undoubtedly constitute essential components of a BCI system. Signal processing algorithms are applied to the EEG signal to decode mental states that are relevant for BCI operation. In this tutorial, the fundamental BCI concepts, for example brain activity monitoring, BCI operation, and the relevant mental states for BCI, are introduced. The main kinds of relevant mental states for BCI, namely motor imagery (ERD/ERS), steady-state visual evoked potentials (SSVEP) [23], and event-related potentials, are presented along with practical application examples.

EEG processing for mental-state decoding is described in depth. The multivariate nature of the EEG, combined with neuroscience knowledge of hemispheric brain specialization, is profitably exploited to derive optimal combinations of the individual signals composing the EEG [24]. BCIs are named by the kind of brain activity used for control. Several categories of EEG-based BCIs exist, including those based on the P300, the steady-state visual evoked potential (SSVEP), event-related desynchronization (ERD), and slow cortical potentials.
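The ERD feature mentioned above can be sketched numerically: motor imagery attenuates the mu rhythm (8–12 Hz) over motor cortex, so a simple band-power ratio between rest and imagery windows exposes it. The sampling rate and the pure-sinusoid "EEG" below are synthetic assumptions, not real recordings.

```python
# Schematic mu-band (8-12 Hz) power feature for motor-imagery BCIs.
# Sampling rate and signals are synthetic; real EEG needs artifact handling.
import math

FS = 128          # assumed sampling rate in Hz
N = FS            # one-second analysis window

def band_power(x, lo, hi):
    """Sum of DFT power over bins whose frequency lies in [lo, hi] Hz."""
    total = 0.0
    for k in range(N // 2 + 1):
        f = k * FS / N
        if lo <= f <= hi:
            re = sum(x[t] * math.cos(2 * math.pi * k * t / N) for t in range(N))
            im = -sum(x[t] * math.sin(2 * math.pi * k * t / N) for t in range(N))
            total += re * re + im * im
    return total

rest = [math.sin(2 * math.pi * 10 * t / FS) for t in range(N)]        # strong mu
imagery = [0.3 * math.sin(2 * math.pi * 10 * t / FS) for t in range(N)]  # damped mu

# Event-related desynchronization: relative mu-power drop during imagery.
erd = 1.0 - band_power(imagery, 8, 12) / band_power(rest, 8, 12)
# Power scales with amplitude squared, so erd = 1 - 0.3**2 = 0.91 here.
```

A real BCI would compute this per channel and per trial and feed the band powers to a classifier, but the decoded quantity is exactly this kind of relative power change.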

#### **3. Neural networks and computing**

In humans, interactions between neuron circuits, systems and signals among micro-, meso- and macro-scales of brain dynamics underpin the functional organization of the brain that supports our daily life activity. Mathematical, computational and experimental neuroscientists apply a variety of methods, techniques and algorithms, both in animals and humans, ranging from single cell recordings to whole brain imaging, in order to identify the core mechanisms that govern the interactions among these scales. Although our knowledge of neural mechanisms, circuits and networks underlying brain dynamics and functions constantly grows, the integration of this knowledge to provide a conceptual framework of emergent behavior [25] and pattern formation occurring on different levels of spatial organization remains challenging.

#### **4. Big data in bioinformatics**

In biomedical computing, the continuing challenges are the management, analysis, and storage of biomedical data. The Spark architecture enables us to develop suitable and efficient methods to use a large number of images for classification, which can be customized with respect to one another [27]. In medicine, the data encountered are mostly acquired from patients. These data consist of physiological signals, images, and videos. They can be stored or transmitted using appropriate hardware and networks. One of the services used in medicine for the storage and transmission of image data is the picture archiving and communication system (PACS).

Big data technologies are classified into four categories [28, 29]: (1) data storage and retrieval, (2) error identification, (3) data analysis, and (4) platform integration. These categories are related and may overlap; for example, most data-input applications may also support basic data analysis, and vice versa.

#### **5. Image reconstruction and analysis**

The analysis model has recently been exploited as an alternative to the conventional sparse synthesis model for designing image reconstruction methods. Applying a suitable analysis operator to the image of interest yields a cosparse result [30], which enables us to reconstruct the image from undersampled data. Furthermore, priors in the analysis setting are studied theoretically, addressing uniqueness issues for analysis operators in general position and for the particular 2D finite-difference operator. Based on the idea of iterative cosupport detection (ICD), a novel image reconstruction model and an effective algorithm achieve significantly better reconstruction performance.
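The cosparsity being exploited can be seen in one dimension: applying a finite-difference analysis operator to a piecewise-constant signal yields mostly zeros. The signal below is a made-up example, and this only illustrates the structure, not the ICD reconstruction algorithm itself.

```python
# Cosparsity under the 1D finite-difference analysis operator: a
# piecewise-constant signal maps to a mostly-zero coefficient vector.

def finite_difference(x):
    """1D finite-difference analysis operator: (Omega x)_i = x[i+1] - x[i]."""
    return [x[i + 1] - x[i] for i in range(len(x) - 1)]

# Piecewise-constant signal of length 30 with two jumps.
x = [2.0] * 10 + [5.0] * 10 + [3.0] * 10
omega_x = finite_difference(x)

nonzeros = sum(1 for v in omega_x if v != 0.0)
# Only 2 of the 29 analysis coefficients are nonzero (one per jump), so x is
# highly cosparse; this is the prior that permits recovery from
# undersampled measurements.
```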

#### **5.1 Biomedical imaging**

Utilization of computer-aided technologies in tissue engineering research and development has evolved a development of a new field of computer-aided tissue engineering (CATE). Three dimensional (3D) printing is an added substance producing process. This innovation furnishes us with the chance to make 3D structures by including material a layer-by-layer premise, utilizing various types of materials, for example, earthenware production, metals, plastics, and polymers. These days,

**135**

user interaction.

**Figure 1.**

*Tumor MR image taken after 15 days.*

**5.2 Intelligent imaging**

*Advances in Signal and Image Processing in Biomedical Applications*

tissue building examinations are occurring on an across the board premise in the fields of recovery, reclamation, or substitution of blemished or harmed useful living organs and tissues. 3D bio-printing [31] is a flexible developing innovation that is discovering its way through all parts of human life. The capability of 3D printers can be abused in territories of biomedical designing, for example, key research, tranquilize conveyance, testing, and additionally in clinical practice. About all present therapeutic nonorganic inserts, for example, ear prostheses, are made in foreordained sizes and designs that are generally utilized for patients. This method permits more precise customized assembling of gadgets made to the patient's own particulars. Bio-printing is being utilized to make more exact nonbiologic and organic research. Describe a method for hiding data in audio files that employs the manipulation of the phase of selected spectral components of the host audio file we describe a method for hiding data in audio files that employs the manipulation of

For an example, in **Figure 1** automated quantification of tumors remains to quantify signal intensity changes in MR images, and this is a difficult problem because of the artifacts affecting images such as partial volume effects and intensity in homogeneities. Low level segmentation methods such as intensity thresholding, edge detection, region growing, region merging and morphological operation are not well suited for automated quantification of the signal abnormalities as these techniques rely on image operators that analyze intensity, texture or shape locally in each voxel, and therefore too easily mislead by ambiguities in the image or require

Data driven systems have gotten expanding consideration as of late to solve different issues in biomedical imaging. Information driven models and methodologies so forth., give promising execution in picture remaking issues in attractive reverberation imaging, processed tomography, and different modalities in respect

the phase of selected spectral components of the host audio file.

*DOI: http://dx.doi.org/10.5772/intechopen.88759*

*Advances in Signal and Image Processing in Biomedical Applications DOI: http://dx.doi.org/10.5772/intechopen.88759*

**Figure 1.** *Tumor MR image taken after 15 days.*

*Coding Theory*

**3. Neural networks and computing**

zation remains challenging.

**4. Big data in bioinformatics**

examination, or the other way around.

**5.1 Biomedical imaging**

**5. Image reconstruction and analysis**

In humans, interactions between neuron circuits, systems and signals among micro-, meso- and macro-scales of brain dynamics underpin the functional organization of the brain that supports our daily life activity. Mathematical, computational and experimental neuroscientists apply a variety of methods, techniques and algorithms, both in animals and humans, ranging from single cell recordings to whole brain imaging, in order to identify the core mechanisms that govern the interactions among these scales. Although our knowledge of neural mechanisms, circuits and networks underlying brain dynamics and functions constantly grows, the integration of this knowledge to provide a conceptual framework of emergent behavior [25] and pattern formation occurring on different levels of spatial organi-

In biomedical calculation, the nonstop difficulties are: the board, investigation, and capacity of the biomedical information. The Spark engineering enables us to create suitable and productive strategies to use an enormous number of pictures for characterization, which can be redone as for one another [27]. In prescription, the information experienced are for the most part acquired from patients. This information comprise of physiological sign, pictures, and recordings. They can be put away or transmitted utilizing proper equipment and systems. One of the administrations utilized in prescription for the capacity and transmission of picture

The enormous information innovations are ordered into four classes [28, 29]: (1) information stockpiling and recovery, (2) mistake ID, (3) information examination, and (4) stage mix arrangement. These classifications are related and may cover; for example, most information input applications may bolster basic information

The analysis model has recently been exploited as an alternative to the conventional sparse synthesis model for designing image reconstruction techniques. Applying a suitable analysis operator to the image of interest yields a cosparse result [30], which enables us to reconstruct the image from undersampled data. Furthermore, a prior in the analysis setting can be studied theoretically, addressing uniqueness issues in terms of analysis operators in general position and of the specific 2D finite-difference operator. Based on the idea of iterative cosupport detection (ICD), a novel image reconstruction model and an effective algorithm achieve significantly better reconstruction performance.
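As an illustrative sketch (not taken from the chapter), cosparsity under the 2D finite-difference analysis operator can be demonstrated on a piecewise-constant image: applying the operator yields coefficients that are mostly zero, which is the structure undersampled reconstruction methods exploit.

```python
import numpy as np

def finite_difference_analysis(img):
    """Apply a 2D finite-difference analysis operator (horizontal and
    vertical first differences) and stack the coefficients."""
    dx = np.diff(img, axis=1).ravel()  # horizontal differences
    dy = np.diff(img, axis=0).ravel()  # vertical differences
    return np.concatenate([dx, dy])

# A piecewise-constant image is cosparse: most analysis coefficients vanish.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0  # constant square on a constant background

coeffs = finite_difference_analysis(img)
cosparsity = np.sum(coeffs == 0)  # number of zero analysis coefficients
print(cosparsity, coeffs.size)    # most of the 112 coefficients are zero
```

Only the coefficients straddling the square's boundary are nonzero; the large set of known zeros (the cosupport) constrains the image enough to recover it from fewer measurements.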

The use of computer-aided technologies in tissue engineering research and development has given rise to a new field, computer-aided tissue engineering (CATE). Three-dimensional (3D) printing is an additive manufacturing process. This technology provides the opportunity to create 3D structures by adding material on a layer-by-layer basis, using many kinds of materials, for example ceramics, metals, plastics, and polymers. Nowadays, tissue engineering studies are being carried out on a widespread basis in the fields of recovery, restoration, and replacement of defective or damaged functional living organs and tissues. 3D bio-printing [31] is a flexible emerging technology that is finding its way into all aspects of human life. The potential of 3D printers can be exploited in areas of biomedical engineering such as basic research, drug delivery and testing, and clinical practice. Nearly all current medical nonorganic implants, such as ear prostheses, are made in predetermined sizes and shapes for general use across patients; 3D printing permits more precise, customized manufacturing of devices made to a patient's own specifications. Bio-printing is also being used to create more accurate nonbiologic and biologic research models.

For example, in **Figure 1**, automated quantification of tumors requires quantifying signal-intensity changes in MR images. This is a difficult problem because of artifacts affecting the images, such as partial-volume effects and intensity inhomogeneities. Low-level segmentation methods such as intensity thresholding, edge detection, region growing, region merging, and morphological operations are not well suited to automated quantification of signal abnormalities: these techniques rely on image operators that analyze intensity, texture, or shape locally in each voxel, and are therefore too easily misled by ambiguities in the image, or require user interaction.
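A minimal synthetic example (a hypothetical 1-D intensity profile with an assumed linear bias field, not data from the chapter) shows how a global intensity threshold is misled by intensity inhomogeneity even when the underlying signal is trivially separable.

```python
import numpy as np

# Synthetic 1-D intensity profile: a bright "lesion" on a uniform background,
# corrupted by a smooth intensity inhomogeneity (multiplicative bias field).
x = np.arange(100)
signal = np.where((x >= 40) & (x < 50), 2.0, 1.0)  # lesion at positions 40..49
bias = 1.0 + 0.015 * x                             # slowly varying gain
observed = signal * bias

threshold = 1.8  # a global threshold tuned for the unbiased signal
detected = observed > threshold

true_mask = (x >= 40) & (x < 50)
false_positives = np.sum(detected & ~true_mask)
print(false_positives)  # background samples misclassified purely due to the bias
```

The lesion itself is still detected, but far-end background samples cross the threshold solely because of the bias field, which is exactly the failure mode that pushes automated quantification toward model-based rather than purely local methods.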

#### **5.2 Intelligent imaging**

Data-driven systems have received increasing attention in recent years for solving various problems in biomedical imaging. Data-driven models and approaches give promising performance in image reconstruction problems in magnetic resonance imaging, computed tomography, and other modalities, relative to conventional approaches that use hand-crafted models such as the discrete cosine transform and wavelets. The term encompasses the latest approaches for making all parts of the imaging pipeline data-driven, including data acquisition and sampling, image reconstruction, and processing/analysis. Intelligent imaging systems would continuously learn from large datasets on the fly and adapt for speed, efficiency, and image quality.

#### **Figure 2.**

*3D MRI image visualization using 3D Slicer. Image courtesy of the 3D Slicer.*
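For contrast with learned, data-driven models, a hand-crafted sparsity model such as the discrete cosine transform can be sketched as follows (a toy illustration under assumed parameters, not a method from the chapter): keep only the largest-magnitude DCT coefficients and reconstruct.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_approximate(img, keep_frac=0.1):
    """Reconstruct an image from its largest-magnitude 2D DCT coefficients,
    a classic hand-crafted sparsity model."""
    c = dctn(img, norm="ortho")
    k = max(1, int(keep_frac * c.size))
    thresh = np.sort(np.abs(c).ravel())[-k]       # k-th largest magnitude
    c_sparse = np.where(np.abs(c) >= thresh, c, 0.0)
    return idctn(c_sparse, norm="ortho")

# Smooth images are well approximated by a few low-frequency DCT atoms.
y, x_ = np.mgrid[0:32, 0:32]
img = np.cos(2 * np.pi * x_ / 32) + 0.5 * np.cos(2 * np.pi * y / 32)
rec = dct_approximate(img, keep_frac=0.05)
err = np.linalg.norm(rec - img) / np.linalg.norm(img)
print(err)  # small relative error despite discarding 95% of coefficients
```

Data-driven approaches replace this fixed transform with representations learned from training data, which is what allows them to adapt to the statistics of a particular modality.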

A good example of intelligent imaging is the identification of affected and healthy images based on the discriminative capabilities of fundus-image textures. For this purpose, a texture descriptor for retinal eye images has been developed, and area and time consumption have been reduced by means of extended binary patterns (EBP). The main aim is to reduce size and time consumption and also to differentiate age-related macular degeneration (AMD), diabetic retinopathy (DR), and normal fundus images using the retinal background texture, avoiding a prior lesion-segmentation stage, with promising results. The best results of each experiment on the model set are highlighted in tables. This work makes use of the EBP operator; in particular, the performance of EBP was compared with LBP, as shown in **Figure 2**.
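For orientation, the baseline local binary pattern (LBP) operator against which EBP is compared can be sketched for a single 3×3 patch (a minimal illustration; the EBP extension itself is not reproduced here, and the bit ordering is an assumed convention).

```python
import numpy as np

def lbp_code(patch):
    """Basic 8-neighbor local binary pattern code for a 3x3 patch: each
    neighbor is thresholded against the center pixel and the resulting bits
    are packed clockwise from the top-left neighbor."""
    center = patch[1, 1]
    # neighbor coordinates, clockwise starting at top-left
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [1 if patch[i, j] >= center else 0 for i, j in order]
    return sum(b << k for k, b in enumerate(bits))

patch = np.array([[6, 5, 2],
                  [7, 6, 1],
                  [9, 8, 7]])
print(lbp_code(patch))
```

A histogram of such codes over an image region is the texture descriptor; EBP-style extensions enlarge or restructure the neighborhood to capture texture at lower cost.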

#### **5.3 PDE-based image analysis**

The main issue with the empirical mode decomposition (EMD) algorithm is its lack of a theoretical framework, which makes it hard to characterize and evaluate. For the 2-D case, an alternative implementation of the algorithmic definition of the so-called "sifting process" used in the original EMD method has been proposed. This approach is based on partial differential equations (PDEs) and relies on a nonlinear diffusion-based filtering process to solve the mean-envelope estimation problem. In the 1-D case, the efficiency of the PDE-based technique compared with the original algorithmic version of EMD was also illustrated in a recent paper. Several 2-D extensions of the EMD method have been proposed recently; despite some effort, 2-D versions of EMD appear to perform poorly and are very time-consuming. An extension of the PDE-based approach to the 2-D space is therefore described in detail. This approach has been applied to both signal and image decomposition. The obtained results confirm the value of the new PDE-based sifting process for the decomposition of different sorts of data, and its effectiveness enables its use in various signal- and image-processing applications such as denoising, detrending, and texture analysis.
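A minimal 1-D sketch of one sifting iteration of the original EMD (using spline envelopes rather than the PDE formulation discussed above) illustrates the mean-envelope estimation problem that the diffusion-based filtering is meant to solve.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_once(t, x):
    """One sifting iteration of 1-D EMD: interpolate the local maxima and
    minima with cubic splines, estimate the mean envelope, and subtract it."""
    maxima = argrelextrema(x, np.greater)[0]
    minima = argrelextrema(x, np.less)[0]
    upper = CubicSpline(t[maxima], x[maxima])(t)  # upper envelope
    lower = CubicSpline(t[minima], x[minima])(t)  # lower envelope
    mean_env = 0.5 * (upper + lower)
    return x - mean_env, mean_env

# A fast oscillation riding on a slow trend: sifting estimates the trend.
t = np.linspace(0, 1, 500)
x = np.sin(2 * np.pi * 25 * t) + 0.5 * t  # candidate IMF plus linear trend
h, mean_env = sift_once(t, x)

# Away from the boundaries, the mean envelope should track the slow trend.
dev = np.abs(mean_env[50:450] - 0.5 * t[50:450]).max()
print(dev)
```

The boundary regions are where spline envelopes behave worst, which is one motivation for replacing the interpolation step with a PDE/diffusion formulation.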

#### **5.4 Visualization of 3D MRI brain tumor image**

**Figure 2** demonstrates the visualization of a 3D MRI image using 3D Slicer, which is used to perform various analyses of brain tumors at early stages.

#### **5.5 Hyperspectral imaging**

In recent years, hyperspectral imaging (HSI) [32] has emerged as a promising optical technology for biomedical applications, principally for life-sciences research but also aimed at noninvasive diagnosis and image-guided surgery. HSI technologies have been used extensively in medical research, targeting different biological phenomena and various tissue types. Their high spectral resolution over a wide range of wavelengths enables acquisition of spatial data corresponding to different light-interacting biological compounds, and they are capable of providing real-time quantitative data on several biological processes in both healthy and diseased tissues. Spectral sampling and resolution are generally considered the key factors that distinguish HSI from multispectral imaging (MSI): while MSI focuses on discrete and relatively separated wavelength bands, HSI essentially uses very narrow, adjoining spectral bands over a continuous spectral range, so as to reconstruct the spectrum of every pixel in the image.
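As a schematic illustration (synthetic data and hypothetical band centers, not from the chapter), the HSI-versus-MSI distinction amounts to how the spectral axis of the data cube is sampled: HSI keeps a full contiguous spectrum per pixel, while MSI retains only a few widely spaced bands.

```python
import numpy as np

# A hyperspectral cube: rows x cols x contiguous narrow spectral bands.
rng = np.random.default_rng(0)
wavelengths = np.linspace(400, 1000, 120)  # nm, contiguous narrow bands
cube = rng.random((64, 64, wavelengths.size))

# HSI: the full reflectance spectrum is available at every pixel.
spectrum = cube[10, 20, :]

# MSI, by contrast, samples a few widely spaced bands of the same scene.
msi_band_centers = [450, 550, 650, 850]  # nm, hypothetical filter centers
msi_idx = [int(np.argmin(np.abs(wavelengths - c))) for c in msi_band_centers]
msi_pixel = cube[10, 20, msi_idx]

print(spectrum.shape, msi_pixel.shape)
```

Per-pixel spectra like `spectrum` are what enable quantitative discrimination of light-interacting compounds, whereas the four-value `msi_pixel` supports only coarse spectral contrast.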

#### **5.6 Artificial neural networks in image processing**

Medical imaging procedures have long been used in the detection and diagnosis of disease. Microcalcifications and masses are the earliest signs of tumors, and they must be detected using modern techniques. The difficulty of classifying benign and malignant microcalcifications [33] likewise poses a critical problem in medical image processing. Computerized classifiers may be valuable to radiologists in distinguishing benign and malignant patterns. Accordingly, the artificial neural network (ANN) [11, 26], which can serve as a computerized classifier, is examined. In medical image processing, ANNs have been applied to a variety of data classification and pattern recognition tasks and have become a promising classification tool for breast cancer. Different selections of image features will therefore lead to different classification decisions. These approaches can be partitioned into three types: first, methods based on statistics, for instance the support vector machine; second, rule-based methods, for instance decision trees and rough sets; and third, artificial neural networks. Various ANNs have been developed to increase the true positive (TP) detection rate and decrease the false positive (FP) and false negative (FN) detection rates for the best result. The use of wavelets in ANNs, for instance the particle swarm optimized wavelet neural network (PSOWNN), the biorthogonal spline wavelet ANN, the second-order gray-level ANN, and Gabor wavelet ANNs, can improve the sensitivity and specificity obtained in mass and microcalcification detection.
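The TP/FP/FN criteria above reduce to the usual sensitivity and specificity computations; a minimal sketch with hypothetical counts (not results from the chapter):

```python
def sensitivity_specificity(tp, fp, fn, tn):
    """Sensitivity (true positive rate) and specificity (true negative rate)
    from detection counts."""
    sensitivity = tp / (tp + fn)   # fraction of malignant cases detected
    specificity = tn / (tn + fp)   # fraction of benign cases correctly rejected
    return sensitivity, specificity

# Hypothetical counts for a microcalcification classifier
sens, spec = sensitivity_specificity(tp=45, fp=8, fn=5, tn=92)
print(sens, spec)  # prints: 0.9 0.92
```

Raising sensitivity (fewer missed tumors) typically trades off against specificity (more false alarms), which is why the wavelet-based ANN variants above target both at once.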

*DOI: http://dx.doi.org/10.5772/intechopen.88759*




### **6. Discussion**

Biomedical signal and image processing constitutes a distinct area of interest in the educational and research fields of biomedical engineering. With enhanced physiological data, a wide range of research and development in clinical techniques makes use of these ideas in medical applications. With advances in biomedical imaging, the amount of data created by multimodality imaging techniques, e.g., computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, single photon emission computed tomography (SPECT), positron emission tomography (PET), magnetic particle imaging, EEG/MEG, optical microscopy and tomography, photoacoustic tomography, electron tomography, and atomic force microscopy, has grown exponentially, and the nature of such data has progressively become richer. This poses a great challenge: how to develop new advanced imaging methods and computational models for efficient data processing, analysis, and modeling in clinical applications and in understanding the underlying biological processes. Signal and image processing is pervasive in modern biomedical imaging, as it provides fundamental procedures for image formation, enhancement, coding, storage, transmission, analysis, understanding, and visualization from an expanding number of multidimensional sensing modalities. To address this difficulty, common image preprocessing methodologies, for example feature extraction, image fusion, classification, and segmentation, need to be combined with intelligent methods that can cope with the volume and diversity of the data and that are often able to integrate and process information from nonimaging sources.

### **7. Conclusion**

This chapter has mainly focused on signals and the latest techniques in medical image processing, which will create more interest in biomedical research fields. With the latest trends in data acquisition, a wide range of research and development in clinical techniques has been applied in medical applications. In biomedical imaging, data acquisition systems such as computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, single photon emission computed tomography (SPECT), positron emission tomography (PET), optical microscopy, etc., capture images of patients. These systems grow exponentially and generate huge volumes of data containing much useful information. High performance computing (HPC) methods analyze the images and visualize them in 3D, as well as perform pixel-wise analysis, with very little processing time. The major challenges in brain tumor detection are to determine the exact location and shape of the tumor and to distinguish tumor tissue from nontumor tissue. Artificial intelligence (AI) and machine learning (ML) address these challenges, supporting both radiologists and patients.


**Author details**

Mathiyalagan Palaniappan1\* and Manikandan Annamalai2

1 Sri Ramakrishna Engineering College, Coimbatore, Tamilnadu, India

2 Vivekananda College of Technology for Women, Namakkal, Tamilnadu, India

\*Address all correspondence to: mathiyalagan.p@srec.ac.in

© 2019 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


#### **References**

[1] Jin J, Allison BZ, Wang X, Neuper C. A combined brain-computer interface based on P300 potentials and motion-onset visual evoked potentials. Journal of Neuroscience Methods. 2012;**205**:265-276. DOI: 10.1016/j.jneumeth.2012.01.004

[2] Katzenbeisser S, Petitcolas FAP. Information Hiding: Techniques for Steganography and Digital Watermarking. Norwood, MA: Artech House; 2000

[3] Kohl P, Noble D. Systems biology and the virtual physiological human. Molecular Systems Biology. 2009;**292**:1-6. DOI: 10.1038/msb.2009.51

[4] Amini S, Veilleux D, Villemure I. Tissue and cellular morphological changes in growth plate explants under compression. Journal of Biomechanics. 2010;**43**(13):2582-2588

[5] Saini S, Vijay R. Back propagation artificial neural network. In: Proceedings of the 5th International Conference on Communication Systems and Network Technologies; Gwalior, India. April 2015. pp. 1177-1180

[6] Behr J, Choi SM, Grosskop S, Hong H, Nam SA, Peng Y, et al. Modeling, visualization, and interaction techniques for diagnosis and treatment planning in cardiology. Computers & Graphics. 2000;**24**(5):741-753

[7] McInerney T, Terzopoulos D. Deformable models in medical image analysis: A survey. Medical Image Analysis. 1996;**1**(2):91-108

[8] Lustig M, Donoho DL, Santos JM, Pauly JM. Compressed sensing MRI. IEEE Signal Processing Magazine. Mar. 2008;**25**(2):72-82

[9] Wang LV. Multiscale photoacoustic microscopy and computed tomography. Nature Photonics. Sep. 2009;**3**(9):503-509

[10] Beard P. Biomedical photo acoustic imaging. Interface Focus. 2011;**1**(4):602-631

[11] Ghesu FC et al. Marginal space deep learning: Efficient architecture for volumetric image parsing. IEEE Transactions on Medical Imaging. May 2016;**35**(5):1217-1228

[12] Wang G. A perspective on deep imaging. IEEE Access. 2016;**4**:8914-8924

[13] Esteva A et al. Dermatologistlevel classification of skin cancer with deep neural networks. Nature. 2017;**542**(7639):115-118

[14] Okada M. A digital filter for the QRS complex detection. IEEE Transactions on Bio-Medical Engineering BME. 1979;**26**:700-703

[15] Ergun E, Batakçı L. Audio watermarking scheme based on embedding strategy in low frequency components with a binary image. Digital Signal Processing. March 2009;**19**(2):277-286

[16] Kadambe S, Murray R, Boudreaux-Bartels GF. Wavelet transform-based QRS complex detector. IEEE Transactions on Biomedical Engineering. 1999;**46**:838-848

[17] Awad ES. Data interchange across cores of multi-core optical fibers. Optical Fiber Technology. December 2015;**26**(Part B):157-162

[18] Hamilton PS, Tompkins WJ. Quantitative investigation of QRS detection rules using the MIT/BIH arrhythmia database. IEEE Transactions on Biomedical Engineering BME. 1986;**33**:1157-1165


[19] Kahn JM, Ho K-P. Spectral efficiency limits and modulation/detection techniques for DWDM systems. IEEE Journal of Selected Topics in Quantum Electronics. 2004;**10**(2):259-272

[20] Chen SW, Chen HC, Chan HL. A real-time QRS detection method based on moving averaging incorporating with wavelet denoising. Computer Methods and Programs in Biomedicine. 2006;**82**:187-195

[21] He B, Li G, Lian J. A spline Laplacian ECG estimator in realistic geometry volume conductor. IEEE Transactions on Biomedical Engineering. 2002;**49**(2):110-117

[22] Perrin F, Pernier J, Bertrand O, Giard MH, Echallier JF. Mapping of scalp potentials by surface spline interpolation. Electroencephalography and Clinical Neurophysiology. 1987;**66**:75-81

[23] Kawakatsu H. Methods for evaluating pictures and extracting music by 2D DFA and 2D FFT. 19th international conference on knowledge based and intelligent information and engineering systems. Procedia Computer Science. 2015;**60**:834-840

[24] Kawakatsu H. Fluctuation analysis for photographs of tourist spots and music extraction from photographs. In: Lecture Notes in Engineering and Computer Science: Proceedings of the World Congress on Engineering 2014; WCE 2014: 2-4 July, 2014, London, UK. Vol. 1. 2014. pp. 558-561

[25] Manandhar P, Ward A, Allen P, Cotter DJ, Mcwhirter JG, Shepherd TJ. An automated algorithm for measurement of surgical tip excursion in ultrasonic vibration using the spatial 2-dimensional Fourier transform in an optical image. 44th Annual Symposium of the Ultrasonic Industry Association. Physics Procedia. 2016;**87**:139-146

[26] Bhateja V, Patel H, Krishn A, Sahu A, Lay-Ekualille A. Multimodal medical image sensor fusion framework using cascade of wavelet and contourlet transform domains. IEEE Sensors Journal. 2015;**15**(12):6783-6790

[27] Mjahad A, Rosado-Muñoz A, Bataller-Mompeán M, Francés-Víllora JV, Guerrero-Martínez JF. Ventricular fibrillation and tachycardia detection from surface ECG using time-frequency representation images as input dataset for cyber-physical systems. Computer Methods and Programs in Biomedicine. 2017;**141**:119-127

[28] Arenja N, Riffel JH, Djioko CJ, Andre F, Fritz T, Halder M, et al. Right ventricular long axis strain-validation of a novel parameter in non-ischemic dilated cardiomyopathy using standard cardiac magnetic resonance imaging. European Journal of Radiology. 2016;**85**:1322-1328

[29] Mavratzakis A, Herbert C, Walla P. Emotional facial expressions evoke faster orienting responses, but weaker emotional responses at neural and behavioural levels compared to scenes: A simultaneous EEG and facial EMG study. NeuroImage. 2016;**124**:931-946

[30] Vuilleumier P, Pourtois G. Distributed and interactive brain mechanisms during emotion face perception: Evidence from functional neuroimaging. Neuropsychologia. 2007;**45**(1):174-194

[31] Wieser MJ, Brosch T. Faces in context: A review and systematization of contextual influences on affective face processing. Frontiers in Psychology. 2012;**3**:471

[32] Olofsson JK, Nordin S, Sequeira H, Polich J. Affective picture processing: An integrative review of ERP findings. Biological Psychology. 2008;**77**(3):247-265


An integrative review of ERP findings. Biological Psychology. 2008;**77**(3):247-265

[33] Rajeswari J, Jagannath M. Advances in biomedical signal and image processing—A systematic review. Informatics in Medicine Unlocked. DOI: 10.1016/j.imu.2017.04.002

#### Chapter 9

DOI: http://dx.doi.org/10.5772/intechopen.90361

## Phase-Stretch Adaptive Gradient-Field Extractor (PAGE)

Madhuri Suthar and Bahram Jalali

#### Abstract

Emulated by an algorithm, certain physical phenomena have useful properties for image transformation. For example, image denoising can be achieved by propagating the image through the heat diffusion equation. Different stages of the temporal evolution represent a multiscale embedding of the image. Stimulated by the photonic time stretch, a real-time data acquisition technology, the Phase Stretch Transform (PST) emulates 2D propagation through a medium with group velocity dispersion, followed by coherent (phase) detection. The algorithm performs exceptionally well as an edge and texture extractor, in particular in visually impaired images. Here, we introduce a decomposition method that draws inspiration from birefringent diffractive propagation. This decomposition method, which we term Phase-stretch Adaptive Gradient-field Extractor (PAGE), embeds the original image into a set of feature maps that select semantic information at different scales, orientations, and spatial frequencies. We demonstrate applications of this algorithm in edge detection and in the extraction of semantic information from medical images, electron microscopy images of semiconductor circuits, optical characters, and fingerprint images. The code for this algorithm is available at https://github.com/JalaliLabUCLA.

Keywords: computational imaging, physics-inspired algorithms, phase stretch transform, feature engineering, Gabor filter, digital image processing

#### 1. Introduction

Physical phenomena described by partial differential equations (PDE) have inspired a new field in computational imaging and computer vision [1]. Such physics-inspired algorithms based on PDEs have been successful for image smoothening and restoration. Image restoration can be viewed as obtaining the solution to evolution equations by minimizing an energy function. The most popular PDE technique for image smoothening treats the original image as the initial state of a diffusion process and extracts filtered versions from its evolution at different times. This embeds the original image into a family of simpler images at a hierarchical scale. Such a scale-space representation is useful for extracting semantically important information [2]. Physics-based algorithms not only outperform their conventional counterparts, but have also enabled new applications. Usage of these algorithms ranges from feature detection in digital images [3–5], to 3D modeling of objects from 2D images [6, 7], to optical character recognition [8], as well as restoring audio quality [9].
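The diffusion-based scale-space embedding described above can be sketched in a few lines of NumPy. This is a generic explicit finite-difference solver for the heat equation, not code from the chapter; the step count and time step are illustrative assumptions:

```python
import numpy as np

def diffuse(image, steps=10, dt=0.2):
    """Evolve an image under the heat equation du/dt = laplacian(u).

    Each additional step yields a smoother member of the scale-space
    family; the original image is the initial state of the diffusion."""
    u = image.astype(float).copy()
    for _ in range(steps):
        # 5-point discrete Laplacian with periodic boundaries.
        lap = (np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0) +
               np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1) - 4 * u)
        u += dt * lap  # explicit Euler step, stable for dt <= 0.25
    return u
```

Calling `diffuse` with increasing `steps` produces the hierarchical family of progressively simpler images referred to above.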

Phase Stretch Transform (PST) is a physics-inspired algorithm that emulates 2D propagation through a medium with group velocity dispersion, followed by coherent (phase) detection [10, 11]. The algorithm performs exceptionally well as an edge and texture extractor, in particular in visually impaired images [12]. This transform has an inherent equalization ability that supports a wide dynamic range of operation for feature detection [12–14]. It also exhibits superior properties over conventional derivative operators, particularly in terms of feature enhancement in noisy, low-contrast images. These properties have been exploited to develop image processing tools for clinical needs such as a decision support system for radiologists to diagnose pneumothorax [15, 16], for resolution enhancement in brain MRI images [17], single molecule imaging [18], and image segmentation [19].

PST emulates the physics of photonic time stretch [20], a real-time measurement technology that has enabled observation as well as detection of ultrafast, non-repetitive events such as optical rogue waves [21], optical fiber soliton explosions [22], and the birth of mode locking in lasers [23]. Further, by combining photonic time stretch technology with machine learning algorithms, a world-record accuracy has been achieved for classification of cancer cells in the blood stream [24, 25].

The photonic time stretch employs group-velocity dispersion (GVD) in an optical fiber to slow down an analog signal in time by propagating a modulated optical pulse through the time-stretch system, which is governed by the following equation:

$$E_o(z,t) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} \tilde{E}_i(0,\omega) \cdot e^{\frac{-j\beta_2 z \omega^2}{2}} \cdot e^{j\omega t} \, d\omega \tag{1}$$


where $\beta_2$ is the GVD parameter, $z$ is the propagation distance, and $E_o(z,t)$ is the reshaped output pulse at distance $z$ and time $t$. The response of the dispersive element in the time-stretch system can be approximated by a phase propagator $\tilde{K}[\omega] = e^{-j\beta_2 z \omega^2/2}$, which leads to the definition of PST for a discrete 2D signal as follows:

$$\text{PST}\{E_i[x,y]\} \triangleq \measuredangle\left\{\text{IFFT}^2\left\{\text{FFT}^2\{E_i[x,y]\} \cdot \tilde{K}[u,v]\right\}\right\} \tag{2}$$

In the above equations, $E_i[x,y]$ is the input image, $\text{FFT}^2$ is the 2D Fast Fourier Transform, $\text{IFFT}^2$ is the 2D Inverse Fast Fourier Transform, $x$ and $y$ are the spatial variables, and $u$ and $v$ are the spatial frequency variables. The function $\tilde{K}[u,v]$ is called the warped phase kernel and is implemented in the frequency domain for image processing.
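As a concrete illustration, Eq. (2) can be prototyped in NumPy. The quadratic phase kernel below is a simplified stand-in for the warped phase kernel (the published PST uses a different, inverse-tangent-shaped phase profile), and the `strength` parameter is an assumption:

```python
import numpy as np

def pst_sketch(image, strength=0.5):
    """Sketch of Eq. (2): FFT2, multiply by a spectral phase kernel
    K[u, v], IFFT2, then keep the phase angle of the result."""
    rows, cols = image.shape
    u = np.fft.fftfreq(rows)[:, None]   # spatial frequency variable u
    v = np.fft.fftfreq(cols)[None, :]   # spatial frequency variable v
    r2 = u**2 + v**2
    # Quadratic phase stand-in for the warped phase kernel K[u, v].
    kernel = np.exp(-1j * strength * r2 / r2.max())
    spectrum = np.fft.fft2(image) * kernel
    return np.angle(np.fft.ifft2(spectrum))
```

Thresholding the returned phase map keeps the high-frequency edges, as described in the next paragraph.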

PST utilizes GVD to convert a real image into a complex quantity such that the spatial phase after the $\text{IFFT}^2$ operation is a function of frequency. Upon thresholding, the high-frequency edges survive. The phase kernel for the PST is designed by converting the 2D Cartesian frequencies $u$ and $v$ to polar coordinates, which results in a symmetric Cartesian phase kernel. However, because this symmetric kernel discards orientation while digital images are fundamentally two-dimensional, there is an inherent loss of directional information in the features detected by PST. This motivates us to develop a more comprehensive approach that captures angular as well as spatial frequency information in a semantic fashion.

In this chapter, we introduce Phase-stretch Adaptive Gradient-field Extractor (PAGE), a new physics-inspired feature engineering algorithm that computes a feature set comprising edges at different spatial frequencies, orientations, and scales. These filters metaphorically emulate the physics of birefringent (orientation-dependent) diffractive propagation through a physical medium with a specific diffractive property. In such a medium, the dielectric constant, and hence the refractive index, is a function of spatial frequency and of the polarization in the transverse plane. To understand this metaphoric analogy,


we consider an optical pulse with two linearly orthogonal polarizations, $\tilde{E}_x$ and $\tilde{E}_y$, propagating through a dispersive diffractive medium such that

$$\tilde{E}_i(z,t) = \tilde{E}_x + \tilde{E}_y \tag{3}$$

As the propagation constant $\beta = n \cdot 2\pi/\lambda$ is a function of the (spatially varying) refractive index, the two orthogonal polarizations $\tilde{E}_x$ and $\tilde{E}_y$ will have different propagation constants and hence a phase difference at the output, given by the following equation:

$$\Delta\phi = \phi_x - \phi_y = \Delta\beta \cdot L = \frac{\omega_m}{c} \left| n_x - n_y \right| \cdot L \tag{4}$$

By controlling the values of $n_x$ and $n_y$, as well as the dependence of the refractive index on frequency, $n_x(\omega)$ and $n_y(\omega)$, we are able to detect a semantic hyper-dimensional feature set from a 2D image. We demonstrate with several visual examples in the later part of this chapter that the above filter banks can be applied to image processing and computer vision applications such as detection of fabrication artifacts in semiconductor chips, development of clinical decision support systems, and recognition of optical characters or fingerprints. In particular, we show that PAGE features outperform conventional derivative operators as well as directional Gabor filter banks.

Further, we address the dual problem of spatial resolution and dynamic range limitations in an imaging system. In an ideal imaging system, the numerical aperture and the wavelength of the optical setup are the only factors that determine the spatial resolution offered by the modality. Under non-ideal conditions, however, the number of photons collected from a specimen controls its dynamic range (the ratio between the largest and the smallest value of a variable quantity), which in turn also limits the spatial resolution. This leads to the fundamental dual problem of spatial resolution and dynamic range limitations in an imaging modality [26].

Approaches to improve the resolution of an imaging system include wide-field fluorescence microscopy [27, 28], which offers better resolution than confocal fluorescence microscopy [29], and the use of multiple fluorophores [30, 31]. In addition, various image processing techniques, such as multi-scale analysis using wavelets [32, 33], have been proposed for improving the resolution after image acquisition while retaining important visual information. We show later in the chapter that we are able to alleviate this dual problem by incorporating in our algorithm a local adaptive contrast enhancement operator, also known as a Tone Mapping Operator (TMO), which leads to excellent dynamic range.

Other steps of the proposed decomposition method are discussed at length in the next section. The organization of the chapter is as follows. In Section 2, we describe the details of the proposed decomposition method. Experimental results and conclusions are presented in Sections 3 and 4, respectively.

#### 2. Mathematical framework

Different steps of our proposed decomposition method, Phase-stretch Adaptive Gradient-field Extractor (PAGE), for feature engineering are shown in Figure 1. The first step is to apply an adaptive tone mapping operator (TMO) to enhance the local contrast. Next, we reduce the noise by applying a smoothening kernel in the frequency domain (this operation can also be done in the spatial domain). We then apply a spectral phase kernel that emulates birefringent and frequency-channelized diffractive propagation. The final step of PAGE is to apply thresholding and morphological operations on the generated feature vectors in the spatial domain to produce the final output. The PAGE output embeds the original image into a set of feature maps that select semantic information at different scale, orientation, and spatial frequency. We show in Figure 2 how PAGE embeds semantic information at different orientations for an X-ray image of a flower.

#### Figure 1.

Different steps of the phase-stretch adaptive gradient-field extractor (PAGE) algorithm. The pipeline starts with application of tone mapping in the spatial domain. This is followed by a smoothening and a spectral phase operation in the frequency domain. The spectral phase operation is the main component of the PAGE algorithm. The generated hyper-dimensional feature vector is thresholded and post-processed by morphological operations. PAGE embeds the original image into a set of feature maps that select semantic information at different scale, orientation, and spatial frequency.

#### Figure 2.

The phase-stretch adaptive gradient-field extractor (PAGE) feature map of an X-ray image. The original image is shown on the left (A). PAGE embeds the original image into a feature map that selects semantic information at different orientations as shown in (B). The orientation of the edges is encoded into various color values here.

The sequence of steps of our physics-inspired feature extraction method, PAGE, can be represented by the following equations. We first define the birefringent stretch operator $\mathbb{S}\{\cdot\}$ as follows:

$$E_o[x,y] = \mathbb{S}\{E_i[x,y]\} = \text{IFFT}^2\left\{\tilde{K}[u,v,\theta] \cdot \tilde{L}[u,v] \cdot \text{FFT}^2\{\text{TMO}\{E_i[x,y]\}\}\right\} \tag{5}$$

where $E_o[x,y]$ is a complex quantity defined as

$$E_o[x,y] = \left| E_o[x,y] \right| e^{j\theta[x,y]} \tag{6}$$

In the above equations, $E_i[x,y]$ is the input image, $x$ and $y$ are the spatial variables, $\text{FFT}^2$ is the two-dimensional Fast Fourier Transform, $\text{IFFT}^2$ is the two-dimensional Inverse Fast Fourier Transform, TMO is a spatially adaptive Tone Mapping Operator, and $u$ and $v$ are frequency variables. The function $\tilde{K}[u,v,\theta]$ is called the PAGE kernel and the function $\tilde{L}[u,v]$ is a smoothening kernel, both implemented in the frequency domain. For all our simulations here, we consider $\tilde{L}[u,v]$ to be a low-pass Gaussian filter whose cut-off frequency is determined by the sigma of the Gaussian filter ($\sigma_{LPF}$).
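A minimal NumPy construction of such a low-pass Gaussian kernel L[u, v] might look as follows; the normalized-frequency convention and the default sigma are assumptions, not values from the chapter:

```python
import numpy as np

def gaussian_lpf(rows, cols, sigma_lpf=0.1):
    """Frequency-domain Gaussian low-pass kernel L[u, v]; sigma_lpf
    plays the role of the cut-off parameter described in the text."""
    u = np.fft.fftfreq(rows)[:, None]
    v = np.fft.fftfreq(cols)[None, :]
    return np.exp(-(u**2 + v**2) / (2 * sigma_lpf**2))

def smooth(image, sigma_lpf=0.1):
    """Smoothen an image by pointwise spectral multiplication with L[u, v]."""
    lpf = gaussian_lpf(image.shape[0], image.shape[1], sigma_lpf)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * lpf))
```

Note that the kernel equals 1 at the DC frequency, so the image mean is preserved while high-frequency noise is attenuated.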

The PAGE operator $\mathbb{P}\{\cdot\}$ can then be defined as the phase of the output of the stretch operation $\mathbb{S}\{\cdot\}$ applied to the input image $E_i[x,y]$:

$$\mathbb{P}\{E_i[x,y]\} = \measuredangle\{\mathbb{S}\{E_i[x,y]\}\} \tag{7}$$

where $\measuredangle\{\cdot\}$ is the angle operator.
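The chain of Eqs. (5)-(7) can be sketched end to end as below. The contrast step is a plain rescaling stand-in for the TMO, and the oriented phase kernel is a simplified quadratic profile along the steered direction; both are hypothetical choices for illustration, not the published PAGE kernels:

```python
import numpy as np

def page_sketch(image, theta=0.0, sigma_lpf=0.15, strength=1.0):
    """Sketch of Eqs. (5)-(7): TMO stand-in, Gaussian low-pass L[u, v],
    oriented phase kernel K[u, v, theta], then the angle operator."""
    # TMO stand-in: rescale intensities to [0, 1] (the chapter uses CLAHE).
    img = (image - image.min()) / (np.ptp(image) + 1e-12)
    rows, cols = img.shape
    u = np.fft.fftfreq(rows)[:, None]
    v = np.fft.fftfreq(cols)[None, :]
    u_rot = u * np.cos(theta) + v * np.sin(theta)       # steered axis
    lpf = np.exp(-(u**2 + v**2) / (2 * sigma_lpf**2))   # L[u, v]
    kernel = np.exp(-1j * strength * u_rot**2)          # K[u, v, theta]
    spectrum = np.fft.fft2(img) * kernel * lpf          # Eq. (5)
    return np.angle(np.fft.ifft2(spectrum))             # Eq. (7)
```

Sweeping `theta` over a set of orientations yields the hyper-dimensional stack of feature maps, one per steering angle.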

In the next subsections, we discuss each of the above-mentioned kernels in detail and demonstrate the operation of each step using simulation results.

#### 2.1 Tone mapping operator (TMO)

A tone mapping operator (TMO) is applied to enhance the local contrast in the input image $E_i[x,y]$. This technique is a standard method in image processing for addressing the limited contrast of an imaging system while preserving important details, and it thereby helps improve the dynamic range of an imaging system via post-processing. While various TMO operators have been developed for adaptive contrast enhancement, here we implement the TMO step by applying a Contrast Limited Adaptive Histogram Equalization (CLAHE) operator to the input image.
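The core idea behind CLAHE can be seen in plain global histogram equalization, sketched below; CLAHE additionally tiles the image and clips each tile's histogram to limit contrast amplification. This is a generic stand-in, not the chapter's implementation:

```python
import numpy as np

def hist_equalize(image, bins=256):
    """Global histogram equalization for an image with values in [0, 1].

    Each pixel is mapped through the cumulative distribution of image
    intensities, spreading a narrow histogram across the full range."""
    flat = image.ravel()
    hist, edges = np.histogram(flat, bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum().astype(float)
    cdf /= cdf[-1]                     # normalize so the map ends at 1.0
    # Map each pixel through the CDF of its intensity bin.
    idx = np.clip(np.digitize(flat, edges[1:-1]), 0, bins - 1)
    return cdf[idx].reshape(image.shape)
```

Applied to a low-contrast image, this stretches the occupied intensity range toward the full [0, 1] interval.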

We operate on the input image with the TMO first, followed by the smoothening operator (low-pass filter), and not vice versa, for the following reason. Noise in an image is mostly represented by the high-frequency components of the spectrum, and these components can be present at both low and high light levels in the spatial domain. Because a tone mapping operator over-emphasizes low-light-level features [34, 35], it also amplifies image noise, particularly in low-light scenarios. By applying a smoothening filter after the TMO operation, we remove these noise artifacts introduced by the contrast enhancement step. Conversely, if the smoothening kernel were applied first, any residual noise in the input image would be amplified by the subsequent TMO operation. In practice, one may need to alternate between the smoothening step and the TMO before obtaining a final satisfactory result [36].

#### 2.2 Phase-stretch adaptive gradient-field extractor (PAGE) kernel

Phase-stretch adaptive gradient-field extractor (PAGE) filter banks are defined by the PAGE kernel $\tilde{K}[u,v,\theta]$ and are designed to compute semantic information from an image at different orientations and frequencies. The PAGE kernel $\tilde{K}[u,v,\theta]$ consists of a phase filter which is a function of the frequency variables $u$ and $v$, and a steerable angle variable $\theta$ which controls the directionality of the response. We first define the translated frequency variables $u'$ and $v'$:

$$u' = u \cdot \cos\left(\theta\right) + v \cdot \sin\left(\theta\right) \tag{8}$$

$$v' = u \cdot \sin \theta \,\mathrm{/} + v \cdot \cos \left(\theta \right) \tag{9}$$

propagation. The final step of PAGE is to apply thresholding and morphological operations on the generated feature vectors in spatial domain to produce the final output. The PAGE output embeds the original image into a set of feature maps that select semantic information at different scale, orientation, and spatial frequency. We show in Figure 2 how PAGE embeds semantic information at different orien-

The phase-stretch gradient-field extractor (PAGE) feature map of an X-ray image. The original image is shown on the left (A). PAGE embeds the original image into a feature map that selects semantic information at different orientations as shown in (B). The orientation of the edges is encoded into various color values here.

Different steps of the phase-stretch gradient-field extractor (PAGE) algorithm. The pipeline starts with application of tone mapping in the spatial domain. This is followed by a smoothening and a spectral phase operation in the frequency domain. The spectral phase operation is the main component of the PAGE algorithm. The generated hyper-dimensional feature vector is thresholded and post-processed by morphological operations. PAGE embeds the original image into a set of feature maps that select semantic information at different scale,

can be represented by the following equations. We first define the birefringent

Eo½ �¼ x, y j j Eo½ � x, y e

In the above equations, Ei½ � x, y is the input image, x and y are the spatial variables, FFT<sup>2</sup> is the two-dimensional Fast Fourier Transform, IFFT<sup>2</sup> is the

Eo½ �¼ <sup>x</sup>, <sup>y</sup> f g Ei½ � <sup>x</sup>, <sup>y</sup> <sup>¼</sup> IFFT<sup>2</sup> K u <sup>~</sup>½ �� , <sup>v</sup>, <sup>θ</sup> L u <sup>~</sup>½ �� , <sup>v</sup> FFT<sup>2</sup>

where Eo½ � x, y is a complex quantity defined as,

The sequence of steps of our physics-inspired feature extraction method, PAGE,

f g TMO Ef g <sup>i</sup>½ � <sup>x</sup>, <sup>y</sup> (5)

<sup>j</sup>θ½ � <sup>x</sup>,<sup>y</sup> (6)

tations for an X-ray image of a flower.

stretch operator fg as follows:

Figure 1.

Coding Theory

Figure 2.

146

orientation, and spatial frequency.

such that the frequency vector rotates along the origin with θ

$$
\mu' + j\nu' \Leftarrow \mu + j\nu \tag{10}
$$


Phase-Stretch Adaptive Gradient-Field Extractor (PAGE). DOI: http://dx.doi.org/10.5772/intechopen.90361

We then define the PAGE kernel K̃[u, v, θ] as a function of the frequency variables u and v and the steerable angle θ as follows:

$$
\tilde{K}[u, v, \theta] = \tilde{K}[u', v'] = \exp\left\{j \cdot \phi_1(u') \cdot \phi_2(v')\right\} \tag{11}
$$

where

$$\phi_1(u') = \mathbb{S}_{u'} \cdot \frac{1}{\sigma_{u'}\sqrt{2\pi}} \cdot \exp\left(-\left(|u'| - \mu_{u'}\right)^2 / 2\sigma_{u'}^2\right) \tag{12}$$

$$\phi_2(v') = \mathbb{S}_{v'} \cdot \frac{1}{|v'|\,\sigma_{v'}\sqrt{2\pi}} \cdot \exp\left(-\left(\ln\left(|v'|\right) - \mu_{v'}\right)^2 / 2\sigma_{v'}^2\right) \tag{13}$$
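For illustration, Eqs. (8)–(13) can be sketched in numpy. This is a hypothetical implementation, not the authors' reference code: the normalized frequency grid, the function name, and the default parameter values (taken from the rose-image settings quoted later in the chapter) are our assumptions. It also applies the normalization described below, scaling φ1 and φ2 to (0, 1) before multiplying by the strengths S_u′ and S_v′.

```python
import numpy as np

def page_kernel_phase(shape, theta, S_u=2.8, S_v=0.5, mu_u=0.0, mu_v=0.4,
                      sigma_u=0.05, sigma_v=0.7):
    """Phase profile phi1(u') * phi2(v') of Eqs. (8)-(13) on a normalized grid."""
    h, w = shape
    u = np.linspace(-0.5, 0.5, h)[:, None]      # normalized frequency grids
    v = np.linspace(-0.5, 0.5, w)[None, :]
    up = u * np.cos(theta) + v * np.sin(theta)  # Eq. (8)
    vp = -u * np.sin(theta) + v * np.cos(theta) # Eq. (9)
    # Eq. (12): normal (Gaussian) profile in |u'| -- controls the angular spread.
    phi1 = np.exp(-(np.abs(up) - mu_u) ** 2 / (2 * sigma_u ** 2)) / (sigma_u * np.sqrt(2 * np.pi))
    # Eq. (13): log-normal profile in |v'| -- controls the frequency selectivity.
    av = np.abs(vp) + 1e-12                     # guard against log(0) at the origin (our choice)
    phi2 = np.exp(-(np.log(av) - mu_v) ** 2 / (2 * sigma_v ** 2)) / (av * sigma_v * np.sqrt(2 * np.pi))
    # Normalize each profile to (0, 1), then scale by the strengths S_u', S_v'.
    phi1 = S_u * phi1 / phi1.max()
    phi2 = S_v * phi2 / phi2.max()
    return phi1 * phi2

kp = page_kernel_phase((65, 65), theta=np.pi / 4)
```

Looping θ over `np.deg2rad(np.arange(180))` and stacking the resulting kernels would yield the 180-filter bank (1° resolution) used throughout the chapter.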

Two points should be noted here. First, we take the modulus of the translated frequency variables u′ and v′ so that the kernel is symmetric, which is required for a proper phase operation as discussed in [12]. Second, for all the simulation examples here, when we consider a bank of PAGE filters, we first normalize φ1(u′) and φ2(v′) to the range (0, 1) for all values of θ and then multiply the filter banks by S_u′ and S_v′, respectively, to ensure that the amplitude of each filter in the bank is the same.

These filter banks can detect features at a particular frequency and/or in a particular direction. Therefore, by selecting a desired direction and/or frequency, a hyper-dimensional feature map can be constructed. We list in Table 1 all the parameters that control the different functionalities of our proposed decomposition method, PAGE.


The values of these parameters for the Figure 2 simulation result are: S_u′ = 3.4, S_v′ = 1.2, μ_u′ = 0, μ_v′ = 0.4, σ_u′ = 0.05, σ_v′ = 0.7, σ_LPF = 0.1, and Threshold (Min, Max) = (−1, 0.0019). The number of filters considered for a 1° resolution equals 180.

#### Table 1.

| Notation | Variable |
|---|---|
| u and v | Spatial frequency |
| θ | Steerable angle |
| u′ and v′ | Translated spatial frequency |
| φ1(·) | Normal filter |
| φ2(·) | Log-normal filter |
| S_u′ | Strength of φ1 filter |
| S_v′ | Strength of φ2 filter |
| μ_u′ | Mean of normal distribution for φ1 filter |
| μ_v′ | Mean of log-normal distribution for φ2 filter |
| σ_u′ | Sigma of normal distribution for φ1 filter |
| σ_v′ | Sigma of log-normal distribution for φ2 filter |
| σ_LPF | Sigma of Gaussian distribution for L̃[u, v] smoothening kernel |
| Threshold (Min, Max) | Bi-level feature thresholding for morphological operations |

Different parameters of our physics-inspired feature decomposition method PAGE.
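Putting the pieces together, the stretch operator of Eq. (5) can be sketched as follows. This is a minimal, hypothetical numpy version: it assumes the image has already been tone-mapped and that the phase profile φ1(u′)·φ2(v′) is precomputed, models L̃[u, v] as a Gaussian of width σ_LPF, and returns the output phase θ[x, y] of Eq. (6).

```python
import numpy as np

def page_stretch(img_tmo, kernel_phase, sigma_lpf=0.1):
    """Eq. (5): E_o = IFFT2{ K~[u,v,theta] . L~[u,v] . FFT2{ TMO{E_i} } }.
    img_tmo: 2-D float array (already tone-mapped); kernel_phase: phi1(u')*phi2(v')."""
    h, w = img_tmo.shape
    u = np.fft.fftfreq(h)[:, None]  # frequency grids matching numpy's FFT layout
    v = np.fft.fftfreq(w)[None, :]
    L = np.exp(-(u ** 2 + v ** 2) / (2 * sigma_lpf ** 2))  # Gaussian smoothening kernel L~[u,v]
    K = np.exp(1j * kernel_phase)                          # PAGE phase kernel, Eq. (11)
    Eo = np.fft.ifft2(K * L * np.fft.fft2(img_tmo))        # complex output field, Eq. (5)
    return np.angle(Eo)                                    # output phase theta[x,y], Eq. (6)

# Sanity check: with a zero phase kernel the operator reduces to pure low-pass
# filtering of a real, positive image, so the output phase is numerically zero.
img = np.random.default_rng(0).random((32, 32))
theta = page_stretch(img, np.zeros((32, 32)))
```

The default `sigma_lpf=0.1` mirrors the σ_LPF value used in the chapter's simulations.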


Figure 3A–P show the generated phase profiles for φ1(u′) · φ2(v′) that select semantic information at different orientations and frequencies, as described in Eqs. (10)–(13), using PAGE kernels. These phase kernels are applied to the input image spectrum. Using the steerable angle, the directionality of the edge response can be controlled in the output phase of the transformed image. The detected output response for each directional filter is thresholded using a bi-level method. This is done to preserve negative high-amplitude values as well as positive high-amplitude values.
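The bi-level thresholding described above can be sketched as follows; Min and Max play the role of the Threshold (Min, Max) parameters in Table 1 (the function name and the toy values are ours):

```python
import numpy as np

def bilevel_threshold(phase, t_min=-1.0, t_max=0.0019):
    """Keep strong negative AND strong positive responses (bi-level threshold).
    phase: output phase of one directional PAGE filter; returns a boolean edge map."""
    return (phase <= t_min) | (phase >= t_max)

# Toy example: both the large negative and the large positive values survive.
phase = np.array([[-1.5, 0.001, 0.002],
                  [0.0, -0.5, 0.01]])
mask = bilevel_threshold(phase)
# mask -> [[True, False, True], [False, False, True]]
```

The resulting binary maps are what the subsequent morphological operations (edge thinning, isolated-pixel removal) act on.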

#### 2.2.1 Directionality


In order to detect features in a particular direction spread over all the frequency components in the spectrum, we construct the PAGE filter banks by using Eqs. (9)–(13) for K̃[u, v, θ], φ1(u′), and φ2(v′), respectively. By controlling the value of the sigma σ_u′ of the normal distribution for the φ1(u′) filter, we avoid any overlapping of directional filters, as seen in Figure 4.

We first evaluate the performance of these kernels by qualitatively comparing the feature detection of PAGE with PST. The image under analysis is a gray-scale image of a rose. For a better visual understanding of our method, we first compute orthogonal directional responses as shown in Figure 5. We then show results of edge detection using PST and PAGE in Figure 6. The parameter values are strength S_u′ = 2.8, S_v′ = 0.5, μ_u′ = 0, μ_v′ = 0.4, σ_u′ = 0.05, σ_v′ = 0.7, σ_LPF = 0.1, and Threshold (Min, Max) = (−1, 0.0019). The number of filters considered for a 1° resolution equals 180. Morphological operations used for the result shown in Figure 6C include edge thinning and isolated-pixel removal for each directional response. As evident in Figure 6, edges are accurately extracted with our technique. Different colors in the computed edge response indicate the edge directionality.

#### Figure 3.
Phase-stretch gradient-field extractor (PAGE) filter banks. (A)–(P) Phase filter banks as defined in Eqs. (8)–(13) for various frequencies and directions. The frequency variables u and v are normalized from −ω_u to +ω_u and −ω_v to +ω_v, respectively. The center μ_v′ of the phase kernel is gradually increased for control over the frequency distribution. The values for the steerable angle θ considered here are 0, π/4, π/2, and 3π/4.

#### Figure 4.
Phase-stretch gradient-field extractor (PAGE) directional filter banks. (A)–(D) The directional filter banks of PAGE computed using the definition in Eqs. (9)–(13) for steerable angle θ = 0, π/4, π/2, and 3π/4, respectively. By controlling the value of the sigma σ_u′ of the normal filter φ1(u′), the angular spread of the kernel K̃[u, v, θ] can be controlled to avoid any overlapping of directional filters.


#### Figure 5.
Phase-stretch gradient-field extractor (PAGE) directional filter bank responses. The original image is shown in (A). We design two directional PAGE filters here to detect vertical (θ = π/2) and horizontal (θ = 0) edges, as shown in (B) and (C), respectively.

#### Figure 6.
Comparison of feature detection using the phase stretch transform (PST) and the phase-stretch gradient-field extractor (PAGE). The original image is shown in (A). The output edge image obtained using PST, without the support of a directional response, is shown in (B). The edge map obtained using PAGE filter banks that support edge detection at all frequencies is shown in (C). Different color values are used to show the orientation of the edges.

#### 2.2.2 Frequency selectivity

The PAGE filter banks can also be designed to detect edges at a particular frequency by controlling the spread of the log-normal distribution. To demonstrate this functionality, we show the features detected at low and high frequency using the rose image as an example in Figure 7. As seen in the figure, the features detected at low frequency are smoother and those detected at high frequency are sharper.

#### Figure 7.
Feature detection using the phase-stretch gradient-field extractor (PAGE) at low and high frequency: features detected at low frequency are much smoother, whereas for high frequency the features are sharper. This demonstrates the frequency selectivity of feature detection using PAGE.

#### 3. Discussion

#### 3.1 Comparison to Gabor feature extractors

We demonstrate the effectiveness of our decomposition method by comparing against the directional edge response obtained by applying Gabor filter banks to an optical character image. We design 24 Gabor directional filters and combine the responses from each of the filters to generate the image in Figure 8B. As seen in Figure 8C, with PAGE we obtain a better spatial localization of the edge response. By spatial localization, we mean that PAGE inherently has a sharper edge response, as seen in the figure. This is because, unlike the Gabor filters, whose bandwidth is determined by the sigma parameter of the filter, in PAGE the bandwidth of the response is determined by the input image dimension. Therefore, there is better localization of edges with PAGE. The parameter values are strength S_u′ = 2.8, S_v′ = 0.5, μ_u′ = 0, μ_v′ = 0.4, σ_u′ = 0.05, σ_v′ = 0.7, σ_LPF = 0.1, and Threshold (Min, Max) = (−1, 0.0019). The number of filters considered for a 1° resolution equals 180.

#### Figure 8.

Comparison to Gabor feature extractors: features detected using Gabor filters do not have inherent spatial feature localization. With PAGE, the features are sharper, as the bandwidth of the response is determined by the input image dimension.

#### 3.2 Comparison to derivative feature extractors

To demonstrate the superiority of our decomposition method, we compare against the edge response obtained by applying derivative-based operators to the test image shown in Figure 9A. The response of a derivative-based operator is computed using the edge function of the Matlab software (Canny) and is shown in Figure 9B. As seen in Figure 9C, PAGE outperforms derivative-based operators by providing orientation information and capturing low-contrast details. The parameter values are strength S_u′ = 2.7, S_v′ = 0.5, μ_u′ = 0, μ_v′ = 0.4, σ_u′ = 0.05, σ_v′ = 0.7, σ_LPF = 0.1, and Threshold (Min, Max) = (−1, 0.0019). The number of filters considered for a 1° resolution equals 180.
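As a point of reference, a minimal derivative-based detector of the kind compared against here can be sketched in numpy (a plain Sobel gradient magnitude with a single global threshold, simpler than Matlab's Canny; the kernel choice and threshold are our assumptions). Note that its output is a binary map carrying no orientation or frequency channel:

```python
import numpy as np

def sobel_edges(img, thresh=0.25):
    """Derivative-based baseline: Sobel gradient magnitude + a global threshold."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal gradient
    ky = kx.T                                                          # vertical gradient
    pad = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros(img.shape, dtype=float)
    gy = np.zeros(img.shape, dtype=float)
    for i in range(3):          # 3x3 correlation written out with numpy slices
        for j in range(3):
            win = pad[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    mag = np.hypot(gx, gy)
    return mag / mag.max() > thresh   # binary edge map: directionality is lost

# A vertical step edge is detected, but the map has no orientation information.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = sobel_edges(img)
```

Contrast this with PAGE, where each of the 180 directional filters contributes its own response, so the orientation survives into the feature map.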


#### 3.3 Simulation results

We apply our decomposition method to different types of images to show that the directional edge response obtained by PAGE can be used for various computer vision applications. For example, in Figure 10 we show the application of PAGE to a scanning electron microscope (SEM) image of an integrated circuit chip. As seen, the PAGE feature response is able to capture the edges corresponding to the chip layout (even the low-contrast details). Based on the viewing angle (camera position), the layout edges are appropriately rendered in the image as well as in the edge map. This can be used to identify any chip artifacts during the fabrication process. The parameter values for generating the feature map shown in Figure 10 are strength S_u′ = 3.1, S_v′ = 0.9, μ_u′ = 0, μ_v′ = 0.4, σ_u′ = 0.05, σ_v′ = 0.7, σ_LPF = 0.1, and Threshold (Min, Max) = (−1, 0.0042). The number of filters considered for a 1° resolution equals 180.

#### Figure 9.
Comparison to derivative feature extractors: features detected with derivative-based edge operators calculate the directionality based on the horizontal and vertical gradients and do not provide information about the spatial frequency of the edges. PAGE provides both the orientation and the spatial frequency selectivity in the output response.

#### Figure 10.
Fabrication artifact detection using the phase-stretch gradient-field extractor (PAGE) on a scanning electron microscope (SEM) image of an integrated circuit chip. The original image is shown in (A). The output edge image obtained using PAGE filter banks that support edge detection at all frequencies is shown in (B). Different color values are used to show the orientation of the edges that correspond to the chip layout and can be used to detect fabrication artifacts.

We also apply PAGE to detect a directional edge response in an image of a fingerprint, as shown in Figure 11. Not only does PAGE detect a directional edge response, it also has an inherent equalization property that detects low-contrast edges. The parameter values are strength S_u′ = 1.5, S_v′ = 0.4, μ_u′ = 0, μ_v′ = 0.4, σ_u′ = 0.05, σ_v′ = 0.7, σ_LPF = 0.08, and Threshold (Min, Max) = (−1, 0.0019). The number of filters considered for a 1° resolution equals 180.

Next, we show the application of our decomposition method PAGE to extract the edges of vessels from a retinal image in Figure 12. The distribution of vessels based on the orientation of the edges can be used as an important feature to detect any abnormalities in the eye structure. As seen, the PAGE feature response is able to capture both the low-contrast details and the information about the directionality of the vessel edges, which is coded in the form of the color value in RGB space. The parameter values are strength S_u′ = 2.2, S_v′ = 1.1, μ_u′ = 0, μ_v′ = 0.4, σ_u′ = 0.05, σ_v′ = 0.7, σ_LPF = 0.1, and Threshold (Min, Max) = (−1, 0.0019). The number of filters considered for a 1° resolution equals 180.

#### Figure 11.
Fingerprint feature map using the phase-stretch gradient-field extractor (PAGE). The original image is shown in (A). The output edge image obtained using PAGE filter banks that support edge detection at all frequencies is shown in (B). As the edges of the fingerprint rotate, the response value changes (shown here with different color values).

#### Figure 12.
Vessel detection using the phase-stretch gradient-field extractor (PAGE) on an image of a retina. The original image is shown in (A). The output edge image obtained using PAGE filter banks that support edge detection at all frequencies is shown in (B). Different color values are used to show the orientation of the edges. Not only are the low-contrast vessels detected using PAGE, but information on how the direction of the blood flow changes across the eye, based on the vessel distribution, is also extracted.

#### 4. Conclusions

In this chapter, we have presented a new feature engineering method that takes inspiration from the physical phenomenon of birefringence in an optical system. The introduced method, called the Phase-stretch Adaptive Gradient-field Extractor (PAGE), controls the diffractive properties of the simulated medium as a function of spatial location and channelized frequency. When applied to 2D digital images, this method extracts semantic information from the input image at different orientations, scales, and frequencies and embeds this information into a hyper-dimensional feature map. The computed response is compared to that of other directional filters, such as Gabor filters, to demonstrate the superior performance of PAGE. Applications of the algorithm to edge detection and to the extraction of semantic information from medical images, electron microscopy images of semiconductor circuits, optical character images, and fingerprint images are also shown.

#### Acknowledgements

The authors would like to thank Dr. Ata Mahjoubfar for his helpful comments on this work during his post-doctoral studies in Jalali Lab at UCLA. This work was partially supported by the National Institutes of Health (NIH) Grant No. 5R21 GM107924-03 and the Office of Naval Research (ONR) Multi-disciplinary University Research Initiatives (MURI) program on Optical Computing.

#### Conflict of interest

The authors declare no conflict of interest.

#### Author details

Madhuri Suthar<sup>1</sup>* and Bahram Jalali<sup>1,2,3,4</sup>

1 Department of Electrical and Computer Engineering, University of California - Los Angeles, Los Angeles, California, USA

2 California NanoSystems Institute, Los Angeles, California, USA

3 Department of Bioengineering, University of California - Los Angeles, Los Angeles, California, USA

4 Department of Surgery, David Geffen School of Medicine, University of California - Los Angeles, Los Angeles, California, USA

*Address all correspondence to: madhurisuthar@ucla.edu

© 2020 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.




Section 3

Image Compression

