**DIGITAL FILTERS AND SIGNAL PROCESSING**

Edited by **Fausto Pedro García Márquez** and **Noor Zaman**

#### **Digital Filters and Signal Processing**

http://dx.doi.org/10.5772/45654
Edited by Fausto Pedro García Márquez and Noor Zaman

#### **Contributors**

Barmak Honarvar Shakibaei Asli, Raveendran Paramesran, Alexey V. Mokeev, Jan Peter Hessling, Masayuki Kawamata, Shunsuke Yamaki, Masahide Abe, Radu Matei, Daniela Matei, Fumio Itami, Behrouz Nowrouzian, Seyyed Ali Hashemi, Fausto Pedro García Márquez, Raul Ruiz De La Hermosa Gonzalez-Carrato, Jesús María Pinar Perez, Noor Zaman, Muneer Ahmed, Håkan Johansson, Oscar Gustafsson

#### **© The Editor(s) and the Author(s) 2013**

The moral rights of the editor(s) and the author(s) have been asserted.

All rights to the book as a whole are reserved by INTECH. The book as a whole (compilation) cannot be reproduced, distributed or used for commercial or non-commercial purposes without INTECH's written permission. Enquiries concerning the use of the book should be directed to INTECH rights and permissions department (permissions@intechopen.com).

Violations are liable to prosecution under the governing Copyright Law.

Individual chapters of this publication are distributed under the terms of the Creative Commons Attribution 3.0 Unported License, which permits commercial use, distribution and reproduction of the individual chapters, provided the original author(s) and source publication are appropriately acknowledged. If so indicated, certain images may not be included under the Creative Commons license. In such cases users will need to obtain permission from the license holder to reproduce the material. More details and guidelines concerning content reuse and adaptation can be found at http://www.intechopen.com/copyright-policy.html.

#### **Notice**

Statements and opinions expressed in the chapters are those of the individual contributors and not necessarily those of the editors or publisher. No responsibility is accepted for the accuracy of information contained in the published chapters. The publisher assumes no responsibility for any damage or injury to persons or property arising out of the use of any materials, instructions, methods or ideas contained in the book.

First published in Croatia, 2013 by INTECH d.o.o.
eBook (PDF) published by IN TECH d.o.o.
Place and year of publication of eBook (PDF): Rijeka, 2019.
IntechOpen is the global imprint of IN TECH d.o.o.
Printed in Croatia

Legal deposit, Croatia: National and University Library in Zagreb

Additional hard and PDF copies can be obtained from orders@intechopen.com

Digital Filters and Signal Processing
Edited by Fausto Pedro García Márquez and Noor Zaman
p. cm.
ISBN 978-953-51-0871-9
eBook (PDF) ISBN 978-953-51-6289-6

## We are IntechOpen, the first native scientific publisher of Open Access books

3,350+ open access books available
108,000+ international authors and editors
114M+ downloads
Delivered to 151 countries
Our authors are among the top 1% most cited scientists
12.2% of contributors come from the top 500 universities

Selection of our books indexed in the Book Citation Index in Web of Science™ Core Collection (BKCI)

## Interested in publishing with us? Contact book.department@intechopen.com

Numbers displayed above are based on latest data collected. For more information visit www.intechopen.com

## **Meet the editors**

Dr. Fausto Pedro García Márquez received the European Doctorate in Engineering at the School of Industrial Engineers (ETSII) of Ciudad Real, University of Castilla-La Mancha (UCLM, Spain), in March 2004. He received a degree in Engineering from the University of Cartagena in Murcia, Spain (September 1998), and the title of Technical Engineer from the Polytechnic University School at UCLM (September 1995); more recently, he obtained a degree in Business Administration and Management at the Faculty of Law and Social Sciences at UCLM (December 2006). He also holds the titles of Senior Technician in Labour Risk Prevention from UCLM (July 2000) and Transport Specialist from the Polytechnic University of Madrid, Spain (June 2001). He is a Senior Lecturer (with tenure) at UCLM and an Honorary Senior Research Fellow at the University of Birmingham (UK).

Dr. Noor Zaman received his degree in Engineering and his Master's in Computer Science at the University of Agriculture in Faisalabad, and went on to a PhD in Information Technology at Universiti Teknologi PETRONAS (UTP), Malaysia. He is currently a faculty member at the College of Computer Science and Information Technology, King Faisal University, Saudi Arabia.

## Contents

#### **Preface XI**

Chapter 1 **Maintenance Management Based on Signal Processing 1**
Fausto Pedro García Márquez, Raúl Ruiz de la Hermosa González-Carrato, Jesús María Pinar Perez and Noor Zaman

Chapter 2 **Spectral Analysis of Exons in DNA Signals 33**
Noor Zaman, Muneer Ahmed and Fausto Pedro García Márquez

Chapter 3 **Deterministic Sampling for Quantification of Modeling Uncertainty of Signals 53**
Jan Peter Hessling

Chapter 4 **Direct Methods for Frequency Filter Performance Analysis 81**
Alexey Mokeev

Chapter 5 **Frequency Transformation for Linear State-Space Systems and Its Application to High-Performance Analog/Digital Filters 109**
Shunsuke Koshita, Masahide Abe and Masayuki Kawamata

Chapter 6 **A Study on a Filter Bank Structure With Rational Scaling Factors and Its Applications 139**
Fumio Itami

Chapter 7 **Digital Filter Implementation of Orthogonal Moments 157**
Barmak Honarvar Shakibaei Asli and Raveendran Paramesran

Chapter 8 **Two-Rate Based Structures for Computationally Efficient Wide-Band FIR Systems 189**
Håkan Johansson and Oscar Gustafsson




Chapter 9 **Analytical Approach for Synthesis of Minimum L2-Sensitivity Realizations for State-Space Digital Filters 213**
Shunsuke Yamaki, Masahide Abe and Masayuki Kawamata

Chapter 10 **Particle Swarm Optimization of Highly Selective Digital Filters over the Finite-Precision Multiplier Coefficient Space 243**
Seyyed Ali Hashemi and Behrouz Nowrouzian

Chapter 11 **Analytical Design of Two-Dimensional Filters and Applications in Biomedical Image Processing 275**
Radu Matei and Daniela Matei

## Preface

Digital filters, together with signal processing, are being employed in new technologies and information systems and implemented in many different areas and applications. Digital filters and signal processing methods can be deployed at very low cost and adapted to different cases with great flexibility and reliability.

This book presents advanced developments in digital filters and signal processing methods, covering a range of case studies. The chapters convey the essence of the subject, together with the principal approaches and the most recent mathematical models being employed worldwide.

An approach employing digital filters and signal processing methods based on wavelet transforms is presented for application to the maintenance management of wind turbines. It is complemented with other techniques, such as the fast Fourier transform, and leads to a reduction of operating and maintenance costs together with improved availability, reliability and lifetime.
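As a toy illustration of the wavelet idea behind this approach (the names, signal and threshold below are assumptions for the sketch, not the chapter's actual algorithm), a single level of the Haar wavelet transform can denoise a vibration-like measurement before further analysis:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform (x has even length)."""
    s = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation (low-pass) band
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail (high-pass) band
    return s, d

def haar_idwt(s, d):
    """Invert one Haar DWT level (perfect reconstruction)."""
    x = np.empty(2 * len(s))
    x[0::2] = (s + d) / np.sqrt(2.0)
    x[1::2] = (s - d) / np.sqrt(2.0)
    return x

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1024, endpoint=False)
clean = np.sin(2 * np.pi * 5 * t)                  # slow "machine" component
noisy = clean + 0.3 * rng.standard_normal(t.size)  # measurement noise

s, d = haar_dwt(noisy)
d[np.abs(d) < 0.25] = 0.0        # hard-threshold small detail coefficients
denoised = haar_idwt(s, d)

# Thresholding removes high-band noise energy while the slow component
# lives almost entirely in the approximation band.
err_noisy = np.mean((noisy - clean) ** 2)
err_denoised = np.mean((denoised - clean) ** 2)
```

In practice multi-level decompositions and data-driven thresholds are used; this one-level sketch only shows the mechanism.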

Wavelet transforms are also employed for spectral analysis of exons in deoxyribonucleic acid (DNA) signals. These regions are buried in noise created by the mixture of exon and intron nucleotides, and better identification of exons supports a more complete translation from DNA to RNA. Researchers have proposed several techniques based on computational and statistical signal processing concepts, but an optimal solution is still lacking. The target signal is filtered by wavelet transforms to reduce the 1/f diffused noise, and is then processed in a series of computational steps to generate a power spectral density (PSD) estimate. Exons are located with reference to a discrimination measure between introns and exons. The PSD graph gives a clear picture of exon boundaries, comparable with the standard NCBI ranges. The results have been compared with existing approaches, and a significant improvement was found in the identification of exon regions.
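The spectral signature exploited here is the well-known period-3 property of coding regions. A minimal sketch (with a synthetic, perfectly periodic "exon" as an assumption, not real genomic data) maps the DNA string to binary indicator sequences and sums their power spectra; a coding-like region produces a sharp peak at frequency bin N/3:

```python
import numpy as np

def spectrum(seq):
    """Summed power spectrum of the four binary indicator sequences."""
    S = np.zeros(len(seq))
    for base in "ACGT":
        u = np.array([1.0 if c == base else 0.0 for c in seq])
        u -= u.mean()                       # remove the DC component
        S += np.abs(np.fft.fft(u)) ** 2
    return S

exon_like = "ATG" * 100                     # toy exon: exact codon periodicity
S = spectrum(exon_like)
k3 = len(exon_like) // 3                    # frequency bin N/3
peak_ratio = S[k3] / S.mean()               # prominence of the period-3 peak
```

Real sequences give a much weaker (but still detectable) peak, which is why the chapter applies wavelet denoising before the PSD estimation step.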

Statistical signal processing traditionally focuses on the extraction of information from noisy measurements. Typically, parameters or states are estimated by various filtering operations, and the quality of these operations is assessed by evaluating the statistical uncertainty of the result. The processing could, for instance, simulate, correct, modulate, evaluate or control the response of a physical system. A statistical model of the parameters, describing to what degree the dynamic model is known and accurate, is assumed given, rather than being the target of investigation as in system identification. Model uncertainty (of parameters) is then propagated to modeling uncertainty (of the result). Applications include various mechanical and electrical systems described by uncertain differential equations, as well as statistical signal processing. The so-called brute-force Monte Carlo method is the undisputed reference method for propagating model uncertainty; its main disadvantage is slow convergence, i.e. the need for many samples of the model (large ensembles). The use of excitation matrices makes it possible to construct universal generic ensembles. The efficiency of the minimal simplex (SPX) ensemble is high, but so is its third moment. While the standard (STD) ensemble maximizes the range of each parameter, the binary (BIN) ensemble minimizes it by varying all parameters in all samples. The STD is the simplest ensemble and the SPX the most efficient; in the example given, the BIN was the most accurate. For non-parametric models with many parameters, a reduction of the number of samples may be required; elimination of singular values (ESV) and correlated sampling (CRS) are two such techniques. The presented ensembles are not to be confused with random sampling: they are just a few examples of deterministic sampling, and the best ensembles are likely yet to be discovered. It is challenging, but also rewarding, to find novel deterministic sampling strategies. Once the sampling rules are found, the application is just as simple as random sampling, but usually much more efficient. Deterministic sampling is one of very few methods capable of non-linear propagation of uncertainty through large signal processing models.
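The efficiency argument can be made concrete with a minimal sketch (a generic two-point ensemble in the spirit of the standard ensemble; the model, numbers and naming are assumptions, not the chapter's own). For one uncertain parameter, the deterministic ensemble {mu - sigma, mu + sigma} reproduces the parameter's mean and variance exactly and propagates them through a non-linear model with two evaluations, versus tens of thousands for brute-force Monte Carlo:

```python
import numpy as np

def model(a):
    return np.exp(a)                 # toy non-linear model of a signal chain

mu, sigma = 1.0, 0.1                 # parameter mean and standard uncertainty

# Deterministic sampling: 2 model evaluations.
ens = np.array([mu - sigma, mu + sigma])
y = model(ens)
det_mean, det_std = y.mean(), y.std()

# Brute-force Monte Carlo reference: 100000 model evaluations.
rng = np.random.default_rng(2)
ym = model(rng.normal(mu, sigma, 100000))
mc_mean, mc_std = ym.mean(), ym.std()
```

For mildly non-linear models the two estimates agree to within a few percent, at a tiny fraction of the cost; strongly non-linear models need richer ensembles, which is exactly the chapter's subject.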


Direct methods for frequency filter performance analysis are considered. The suggested performance analysis for signal processing methods relies on consistent mathematical models of the input signals and of the analog and digital filter impulse responses, built from a set of continuous/discrete, semi-infinite or finite damped oscillatory components. Simple semi-infinite harmonic and aperiodic signals, compound signals, and impulse responses of any form can be synthesized from this set of components. The uniformity of the mathematical description of signals and filters makes it possible to characterize both in a single compact form, as a set of complex amplitudes, complex frequencies and time parameters, which significantly simplifies the performance analysis of signal processing by analog or digital filters under any possible variation of the input signal parameters. The signals are directly linked with Laplace-transform spectral representations, since the damped oscillatory component is the basis function of the Laplace transform. The application of frequency and frequency-time representations of signals and filters based on the Laplace transform allows simple and effective direct methods to be developed for the performance analysis of signal processing by analog and digital filters. The analysis methods can also be used with mathematical models in which the complex amplitudes and/or complex frequencies are functions of time.
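The key property behind such direct methods can be sketched as follows (an assumed textbook example, not the chapter's own derivation): a damped oscillatory component x(t) = Re{X e^(st)} with complex frequency s = -alpha + j*omega is an eigenfunction of an LTI filter, so its forced response is simply Re{H(s) X e^(st)}, with no time-domain convolution needed. Checking this against a time-domain simulation of a first-order low-pass filter H(s) = 1/(1 + s*tau):

```python
import numpy as np

alpha, omega = 1.0, 2 * np.pi * 5.0
s = -alpha + 1j * omega            # complex frequency of the input component
tau = 0.05                         # filter time constant (pole at -1/tau)
H = 1.0 / (1.0 + s * tau)          # transfer function evaluated directly at s

dt = 1e-4
t = np.arange(0.0, 1.0, dt)
x = np.real(np.exp(s * t))         # damped cosine input (X = 1)

# Exact zero-order-hold discretization of dy/dt = (x - y) / tau.
a = np.exp(-dt / tau)
y_sim = np.zeros_like(x)
for n in range(len(t) - 1):
    y_sim[n + 1] = a * y_sim[n] + (1 - a) * x[n]

y_ref = np.real(H * np.exp(s * t))     # Laplace-domain prediction

# After the filter's own transient dies out, the two agree closely.
err = np.max(np.abs(y_sim[-1000:] - y_ref[-1000:]))
```

This is what makes the compact (complex amplitude, complex frequency) description so efficient: filter performance follows from evaluating H at the component frequencies.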

The frequency transformation for linear state-space systems plays an important role in signal processing from both the theoretical and the practical point of view, and is applied here to high-performance analog/digital filters. The frequency transformation easily allows any kind of frequency-selective filter to be obtained from a given prototype low-pass filter; it is also applied to the design of variable filters that enable real-time tuning of cutoff frequencies, and has thus been widely used in many modern signal processing applications. The use of the state-space representation, one of the well-known internal descriptions of linear systems, is discussed for the analysis of the relationships between analog/digital filters and the frequency transformation. The state-space representation is a powerful tool for the synthesis of high-performance filter structures with low sensitivity, low roundoff noise and high dynamic range. The properties presented are closely related to three elements of linear state-space systems: the controllability Gramian, the observability Gramian and the second-order modes, all known to be very important in the synthesis of high-performance filter structures. This is developed into a technique for the design and synthesis of analog and digital filters with high-performance structures, and extended to variable filters with high-performance structures.
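A minimal transfer-function sketch of the underlying idea (the classical Constantinides low-pass-to-low-pass transformation, given here as an assumed illustration rather than the chapter's state-space formulation): replacing z^-1 by the allpass (z^-1 - alpha)/(1 - alpha z^-1) moves the prototype cutoff theta_p to a new cutoff omega_p while preserving the magnitude shape.

```python
import numpy as np

theta_p = 0.4 * np.pi               # prototype low-pass cutoff
omega_p = 0.2 * np.pi               # desired new cutoff

def prototype(zinv):
    """First-order digital low-pass with its -3 dB point at theta_p (bilinear design)."""
    c = np.tan(theta_p / 2)
    return c * (1 + zinv) / ((c + 1) + (c - 1) * zinv)

# Low-pass-to-low-pass transformation coefficient.
alpha = np.sin((theta_p - omega_p) / 2) / np.sin((theta_p + omega_p) / 2)

def transformed(w):
    """Response of the transformed filter at frequency w."""
    zinv = np.exp(-1j * w)
    zinv_new = (zinv - alpha) / (1 - alpha * zinv)   # allpass substitution
    return prototype(zinv_new)

gain_at_new_cutoff = np.abs(transformed(omega_p))    # should be 1/sqrt(2)
```

The chapter's contribution is to carry out such transformations on the state-space matrices themselves, which is what exposes the Gramians and second-order modes.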


An application in biomedical image processing is presented, employing the analytical design of two-dimensional filters. Various types of 2D filters are approached, both recursive infinite impulse response (IIR) and non-recursive finite impulse response (FIR). The design methods focus on recursive filters, because they are the most efficient. The proposed design methods start from either digital or analog 1D prototypes with a desired characteristic; analog prototypes are preferred, since the design turns out to be simpler and the resulting 2D filters are of lower complexity. The prototype transfer function results from one of the common approximations (Butterworth, Chebyshev, elliptic), and the shape of the prototype frequency response corresponds to the desired characteristic of the final 2D filter. The specific complex frequency transformation from the frequency axis to the complex plane is determined for each type of 2D filter separately, starting from the geometrical specification of its shape in the frequency plane. The 2D filter transfer function results directly in factorized form, which is a major advantage for its implementation. The proposed design method also applies the bilinear transform as an intermediate step in determining the 1D-to-2D frequency mapping. To compensate for the distortions of the filter shape towards the margins of the frequency plane, prewarping is applied, which however increases the filter order. All the proposed design techniques are mainly analytical, but also involve numerical optimization, in particular rational approximations (e.g. Chebyshev-Padé). Some of the designed 2D filters result with complex coefficients; this is not a serious shortcoming, however, since such IIR filters are also used in practice.
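The 1D-to-2D mapping idea can be sketched in its simplest form (an assumed, zero-phase FIR variant built on a uniform frequency grid; the chapter designs recursive filters analytically): a 1D low-pass prototype magnitude is made circularly symmetric via the substitution omega -> sqrt(w1^2 + w2^2).

```python
import numpy as np

N = 64
wc = 0.3 * np.pi                                   # prototype cutoff

def proto_mag(w):
    """1D Butterworth-style magnitude response, order 4."""
    return 1.0 / np.sqrt(1.0 + (w / wc) ** 8)

w = 2 * np.pi * np.fft.fftfreq(N)                  # grid frequencies in [-pi, pi)
W1, W2 = np.meshgrid(w, w, indexing="ij")
H2 = proto_mag(np.sqrt(W1 ** 2 + W2 ** 2))         # circularly symmetric 2D response

h2 = np.real(np.fft.ifft2(H2))                     # zero-phase 2D impulse response

# Filter two test images in the frequency domain: a constant (passband)
# and a checkerboard at the highest spatial frequency (stopband).
flat = np.ones((N, N))
checker = np.indices((N, N)).sum(axis=0) % 2 * 2.0 - 1.0
out_flat = np.real(np.fft.ifft2(np.fft.fft2(flat) * H2))
out_checker = np.real(np.fft.ifft2(np.fft.fft2(checker) * H2))
```

The analytical designs in the chapter achieve the same kind of shaped 2D response with a factorized rational transfer function, which is far cheaper to run than frequency-domain filtering.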

A filter bank structure with rational scaling factors and its applications is presented. The frequency patterns of the filter bank are analysed to show how to synthesize arbitrarily scaled signals. In addition, possible problems with the structure in image scaling are identified. Theoretical conditions for solving these problems are derived through the input-output relation of the filter bank, and a design procedure based on these conditions is provided. Simulation results demonstrate that the quality of the scaled images is comparable to that obtained with typical structures. This sheds light on the potential issues and advantages of utilizing the scheme, as well as traditional ones, in image processing.

Geometric moments (GMs) are an important element of real-time image processing applications. One of the fast methods for generating GMs uses the outputs of cascaded digital filters. A concern with this design, however, is that the outputs of the digital filters, which operate as accumulators, increase exponentially as the moment order increases. New formulations are described that yield lower digital filter output values as the moment order increases, enabling the use of lower output values for higher-order moments. Another approach is considered that reduces the digital filter structure proposed by Hatamian for the computation of geometric moments, leading to their faster computation; the proposed method is modelled using the 2D Z-transform. Recursive methods are used for the computation of Tchebichef moments (TMs) and inverse Tchebichef moments (ITMs), with recurrence relations with respect to both the order and the discrete variable. A digital filter structure is proposed for reconstruction, based on the 2D convolution between the digital filter outputs used in the computation of the TMs and the impulse response of the proposed digital filter. A comparison of the performance of the proposed algorithms with some of the existing methods for computing TMs and ITMs shows that the proposed algorithms are faster. A further concern is the computational cost of obtaining Krawtchouk moments (KMs) from an image: a first approach uses the digital filter outputs to form GMs, from which the KMs are obtained, while a second method obtains the KMs directly from the digital filter outputs.
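The accumulator idea can be seen in a minimal 1D instance (a toy sketch of the principle; the chapter's actual structures and order-reduction formulations differ): feeding a row through cascaded accumulators yields final outputs from which the geometric moments follow algebraically.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.random(64)               # one "image row"
N = len(x)

y1 = np.cumsum(x)                # first accumulator: y1[n] = sum_{k<=n} x[k]
y2 = np.cumsum(y1)               # second accumulator in cascade

m0 = x.sum()                     # zeroth geometric moment (direct)
m1 = (np.arange(N) * x).sum()    # first geometric moment (direct)

# Final accumulator outputs: y1[N-1] = m0 and y2[N-1] = sum_k (N - k) x[k],
# so the first moment can be read off as m1 = N * m0 - y2[N-1].
m0_filter = y1[-1]
m1_filter = N * y1[-1] - y2[-1]
```

Note how y2 already dwarfs the moment it encodes; for higher orders and 2D images this growth is exactly the dynamic-range problem the chapter's reformulations address.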

digital filters are optimized over the discrete multiplier coefficient space, resulting in FRM digital filters which are capable of direct implementation in digital hardware platform without any need for further optimization. A new PSO algorithm is developed to tackle three different problems. In this PSO algorithm, a set of indexed look-up tables (LUTs) of permissible CSD multiplier coefficient values is generated to ensure that in the course of optimization, the multiplier coefficient update operations constituent in the underlying PSO algorithm lead to values that are guaranteed to conform to the desired CSD wordlength, etc. In addition, a general set of constraints is derived in terms of multiplier coefficients to guarantee that the IIR bilinear-LDI interpolation digital subfilters automatically remain BIBO stable throughout the course of PSO algorithm. Moreover, by introducing barren layers, the particles are ensured to automatically remain inside the boundaries of LUTs in

**Dr. Fausto Pedro García Márquez**

Universidad Castilla-La Mancha

Department of Computer Science

College of Computer Science & Information Technology

ETSI Industriales

Preface XI

Ciudad Real, Spain **Dr. Noor Zaman**

King Faisal University Al Ahasa Al Hofuf Kingdom of Saudi Arabia

course of optimization

The two-rate based structures for computationally efficient wide-band FIR systems are done. Regular wide-band finite-length impulse response systems tend to have a very high computational complexity when the bandwidth approaches the whole Nyquist band. It is presented in two-rate based structures which can be used to obtain substantially more efficient wide-band FIR systems. The two-rate based structure is appropriate for so called left-band and right-band systems, which have don't-care bands at the low-frequency and high-frequency regions, respectively. A multi-function system realizations is also considered.

The L2-sensitivity minimization is a technique employed for the synthesis of high-accuracy digital filter structures, which achieves quite low-coefficient quantization error. It can be employed in order to reduce to undesirable finite-word-length (FWL) effects arise due to the coefficient truncation and arithmetic roundoff. It is employed for to the L2-sensitivity minimization problem for second-order digital filters. It can be algebraically solved in closed form, where the L2-sensitivity minimization problem is also solved analytically for arbitrary filter order if second-order modes with the same results. A general expression of the transfer function of digital filters is defined with all second-order modes. It is obtained by a frequency transformation on a first-order prototype FIR digital filter with the absence of limit cycles of the minimum L2-sensitivity realizations, synthesized by selecting an appropriate orthogonal matrix.

The design, realization and discrete particle swarm optimization (PSO) of frequency response masking (FRM) IIR digital filters is done in detail. FRM IIR digital filters are designed by FIR masking digital subfilters together with IIR interpolation digital subfilters. The FIR filter design is straightforward and can be performed by using hitherto techniques. The IIR digital subfilter design topology consists of a parallel combination of a pair of allpass networks so that its magnitude-frequency response matches that of an odd order elliptic minimum Q-factor (EMQF) transfer function. This design is realized using the bilinear-lossless-discrete-integrator (bilinear-LDI) approach, with multiplier coefficient values represented as finite-precision (canonical signed digit) CSD numbers. The FRM digital filters are optimized over the discrete multiplier coefficient space, resulting in FRM digital filters which are capable of direct implementation in digital hardware platform without any need for further optimization. A new PSO algorithm is developed to tackle three different problems. In this PSO algorithm, a set of indexed look-up tables (LUTs) of permissible CSD multiplier coefficient values is generated to ensure that in the course of optimization, the multiplier coefficient update operations constituent in the underlying PSO algorithm lead to values that are guaranteed to conform to the desired CSD wordlength, etc. In addition, a general set of constraints is derived in terms of multiplier coefficients to guarantee that the IIR bilinear-LDI interpolation digital subfilters automatically remain BIBO stable throughout the course of PSO algorithm. Moreover, by introducing barren layers, the particles are ensured to automatically remain inside the boundaries of LUTs in course of optimization

Digital filter structures for the fast computation of image moments are described. This method enables the usage of lower-order digital filter output values for higher-order moments. Another approach to reduce the digital filter structure proposed by Hatamian, in the computation of geometric moments (GMs), which leads to their faster computation, is considered. The proposed method is modelled using the 2-D Z-transform. Recursive methods are used in the computation of Tchebichef moments (TMs) and inverse Tchebichef moments (ITMs): a recurrence relation with respect to the order and one with respect to the discrete variable. A digital filter structure is proposed for reconstruction, based on the 2-D convolution between the digital filter outputs used in the computation of the TMs and the impulse response of the proposed digital filter. A comparison of the performance of the proposed algorithms and some of the existing methods for computing TMs and ITMs shows that the proposed algorithms are faster. A concern in obtaining the Krawtchouk moments (KMs) from an image is the computational cost. The first approach uses the digital filter outputs to form GMs, and the KMs are obtained via the GMs. The second method uses a direct approach to achieve KMs from the digital filter outputs.

Two-rate-based structures for computationally efficient wide-band FIR systems are presented. Regular wide-band finite-length impulse response systems tend to have a very high computational complexity when the bandwidth approaches the whole Nyquist band. Two-rate-based structures can be used to obtain substantially more efficient wide-band FIR systems. The two-rate-based structure is appropriate for so-called left-band and right-band systems, which have don't-care bands at the low-frequency and high-frequency regions, respectively. A multi-function system realization is also considered.


#### **Dr. Fausto Pedro García Márquez**

ETSI Industriales, Universidad Castilla-La Mancha, Ciudad Real, Spain

**Dr. Noor Zaman** Department of Computer Science, College of Computer Science & Information Technology, King Faisal University, Al Ahasa, Al Hofuf, Kingdom of Saudi Arabia

**Chapter 1**


## **Maintenance Management Based on Signal Processing**

Fausto Pedro García Márquez, Raúl Ruiz de la Hermosa González-Carrato, Jesús María Pinar Perez and Noor Zaman

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/52199

## **1. Wind Turbines**

Most wind turbines are three-blade units (Figure 1) [55]. Once the wind drives the blades, the energy is transmitted via the main shaft through the gearbox (supported by the bearings) to the generator. The generator speed must be as near as possible to the optimum for the generation of electricity. At the top of the tower, assembled on a base or foundation, the housing or nacelle is mounted, and its alignment with the direction of the wind is controlled by a yaw system. There is also a pitch system in each blade. This mechanism controls the wind power and is sometimes employed as an aerodynamic brake. The wind turbine features a hydraulic brake to stop itself when needed. Finally, there is a meteorological unit that provides information about the wind (speed and direction) to the control system.

#### **1.1. Maintenance in Wind Turbines**

Maintenance is a key tool to ensure the operation of all components of a set. One of its objectives is to use the available resources efficiently. The classical theory of maintenance was focused on corrective and preventive maintenance [9], but alternatives to both have appeared in recent years. One of them is Condition Based Maintenance (CBM), which ensures the continuous monitoring and inspection of the wind turbine, detecting emerging faults and organizing maintenance tasks that anticipate the failure [59]. Condition Based Maintenance implies the acquisition, processing, analysis and interpretation of data and the selection of proper maintenance actions. This is achieved using condition monitoring systems [27, 28]. Thereby, CBM is presented as a useful technique to improve not only the maintenance but also the safety of the equipment. Byon and Ding [14] or McMillan and Ault [50] have demonstrated its successful application in wind turbines, making CBM

© 2013 García Márquez et al.; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

one of the most employed strategies in this industry. Another example of the maintenance evolution is Reliability Centred Maintenance. It is defined as a process to determine what must be done to ensure that any physical asset keeps working in its operating context [71]. Nowadays it is the most common type of maintenance in many industrial fields [25, 26], and it involves maintenance system functions or identifying failure modes, among other maintenance tasks [52].


**Figure 1.** Main parts of a turbine: (1) blades, (2) rotor, (3) gearbox, (4) generator, (5) bearings, (6) yaw system and (7) tower [36].

#### **1.2. Condition Monitoring applied to Wind Turbines**

Condition Monitoring systems operate from different types of sensors and signal processing equipment. They are capable of monitoring components ranging from blades, gearboxes and generators to bearings or towers. Monitoring can be processed in real time or in packages of time intervals. The procurement of data is critical to detect the occurrence of a problem and to determine a solution to apply. Therefore, the success of a Condition Monitoring system is supported by the number and type of sensors used and by the signal collection and processing.

Any element that performs a rotation is susceptible to being analysed by vibration. In the case of wind turbines, vibration analysis is mainly specialized in the study of gearboxes [48, 49] and bearings [81] [85]. Different types of sensors are required depending on the operating frequency: position transducers, velocity sensors, accelerometers or spectral emitted energy sensors.

Acoustic emissions (AE) describe the sound waves produced when a material undergoes stress as a result of an external force [35]. They can detect the occurrence of cracks in bearings [84] and blades [91] at early stages.

Ultrasonic tests evaluate the structural surface of towers and blades in wind turbines [22] [24]. Consistent with some other techniques, they are capable of locating faults safely.

Oil analysis may determine the occurrence of problems in early stages of deterioration. It is usually a clear indicator of the wearing of certain components. The technique is widely used in the field of maintenance, being important for gearboxes in wind turbines [47].

The thermographic technique is established mainly for monitoring electrical components [72], although its use has been extended to the search for abnormal temperatures on the surfaces of the blades [64]. Using thermography, hot spots can be found due to bad contacts or a system failure. The introduction of online monitoring systems based on the infrared spectrum is common.

There are techniques that, while not so widespread, are also used in the maintenance of wind turbines. In many cases, their performance is heavily influenced by their costs or their excessive specialization, making them not always feasible. Some examples are strain measurements in blades [68]; voltage and current analysis in engines, generators and accumulators [67]; shock pulse methods detecting mechanical shocks for bearings [13]; or radiographic inspections to observe the structural conditions of the [61].

#### **1.3. Signal processing methods**

#### *Fast Fourier Transform (FFT)*


The FFT converts a signal from the time domain to the frequency domain. The use of the FFT also allows its spectral representation [56]. Each frequency range is framed into a particular failure state. It is very useful when periodic patterns are searched for [5]. Vibration analysis also provides information about a particular reason of the fault origin and/or its severity [43]. There is extensive literature demonstrating the development of the method for rolling elements. The FFT of a function *f(x)* is defined as [12]:

$$\int\_{-\infty}^{\infty} f(x)\,e^{-2\pi i x s}\,dx \tag{1}$$

This integral, which is a function of *s*, may be written as *F(s)*, where *F(s)* is the Fourier transform of *f(x)*. Transforming *F(s)* by the same formula, equation (2) is obtained.

$$\int\_{-\infty}^{\infty} F(s)\,e^{-2\pi i x s}\,ds \tag{2}$$

There is a considerable number of publications regarding the diagnosis of faults for rolling machinery that justify the models and patterns based on the Fast Fourier Transform. Misalignment is one of the most commonly observed faults in rotating machines, being the second most common malfunction after unbalance. It may be present because of improper machine assembly, thermal distortion and asymmetry in the applied load. Misalignment causes reaction forces in couplings that are the major cause of machinery vibration. Some authors evaluated numerically the effect of coupling misalignment and suggested the occurrence of strong vibrations at twice the natural frequency [70] [95], although rotating machinery can excite vibration harmonics from two to ten times, depending on the signal pickup locations and directions [53].
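As an illustrative sketch of the FFT-based diagnosis described above, the following fragment builds a synthetic vibration signal with components at once and twice an assumed rotational frequency (as a misalignment signature would produce) and recovers both spectral peaks with NumPy. The sampling rate, rotational frequency and amplitudes are illustrative assumptions, not values from the chapter.

```python
import numpy as np

fs = 1000.0                       # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)   # one second of signal
f_rot = 25.0                      # shaft rotational frequency in Hz (assumed)

# Synthetic vibration: unbalance component at 1x plus a misalignment
# component at 2x the rotational frequency.
x = 1.0 * np.sin(2 * np.pi * f_rot * t) + 0.6 * np.sin(2 * np.pi * 2 * f_rot * t)

# Single-sided amplitude spectrum via the FFT of equation (1).
spectrum = np.abs(np.fft.rfft(x)) * 2 / len(x)
freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)

# The two dominant peaks sit at 1x and 2x the rotational frequency.
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks.tolist()))  # -> [25.0, 50.0]
```

In practice the peak locations would be compared against the known rotational speed, as the fault-frequency tables discussed in this section do.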


Faults do not have a unique nature, and most of the time problems on a smaller scale are linked; e.g. in the case of misalignment, when an angular misalignment is studied, parallel misalignment (a minor fault) needs to be taken into account. Al-Hussain and Redmond reported vibrations for parallel misalignment at the natural frequency from experimental investigations [4].

To facilitate the diagnosis in rolling elements, some companies and researchers tabulate the most common failure modes in the frequency domain, so that the analysis can be carried out more easily. Thus, the appearance of different frequency peaks determines the existence of developing problems such as gaps, unbalances or misalignments, among other circumstances [31]. The great advantage of these tables is that the value of the frequency peak is not a particular value and may be adapted to any situation where the natural frequency (or the rotational speed) is known.

The wavelet transform is a time-frequency technique similar to the Short Time Fourier Transform, although it is more effective when the signal is not stationary. The wavelet transform decomposes an input signal into a set of levels at different frequencies [77]. Wavelet transforms have been applied to fault detection and diagnosis in various wind turbine parts.

A hidden Markov model is a statistical model in which the system being modelled is assumed to be a Markov process with hidden states. A hidden Markov model can be considered as the simplest dynamic Bayesian network [8]. Ocak and Loparo presented its application to bearing fault detection [57].

Statistical methods are used when a statistical study is required. In these cases, common statistical parameters, e.g. the root mean square or the peak amplitude, are employed to diagnose faults. Other parameters range from maximum or minimum values, means and standard deviations to energy ratios or kurtosis. Moreover, trend analysis refers to the collection of information in order to find a trend.

There are many methods that, as happens with the techniques available for CM, are very specific and are therefore used for very specific situations. Filtering methods, for example, are designed to remove any redundant information, eliminating unnecessary overloads in the process. Analysis in the time domain is a way of monitoring wind turbine faults such as inductive imbalances or turn-to-turn faults. Another methodology, the power cepstrum, defined as the inverse Fourier transform of the logarithmic power spectrum [92], reports the occurrence of deterioration through the study of the sidebands. Time synchronous averaging, amplitude demodulation and order analysis are other signal processing methodologies used in wind turbines.
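The power cepstrum can be sketched directly from its definition above, the inverse Fourier transform of the logarithmic power spectrum. In this hedged example, an echo (a stand-in for the periodic spectral structure mentioned above) is added to a broadband signal and shows up as a single cepstral peak at the echo delay; all signal parameters are assumptions for illustration.

```python
import numpy as np

def power_cepstrum(x):
    """Power cepstrum: inverse Fourier transform of the log power spectrum."""
    power_spectrum = np.abs(np.fft.rfft(x)) ** 2
    # A small floor avoids log(0) in empty bins.
    return np.fft.irfft(np.log(power_spectrum + 1e-12), n=len(x))

rng = np.random.default_rng(0)        # seeded so the run is repeatable
fs = 1000                             # sampling rate in Hz (assumed)
n = rng.standard_normal(2000)         # broadband excitation
delay = 50                            # echo delay in samples, i.e. 50 ms (assumed)

# An echo multiplies the power spectrum by a cosine ripple; the cepstrum
# turns that ripple back into a single peak at the echo quefrency.
x = n + 0.5 * np.roll(n, delay)

c = power_cepstrum(x)
q = np.argmax(c[1 : len(c) // 2]) + 1  # dominant quefrency, in samples
print(q)  # -> 50
```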

## **2. Wavelet transform**

The wavelet transform is a method of analysis capable of identifying the local characteristics of a signal in the time and frequency domains. It is suitable for large time intervals where great accuracy is required at low frequencies, and vice versa, e.g. small regions where precision details for a deeper processing are required at higher frequencies [23]. The wavelet transform can be defined over a signal on a temporal base that is filtered successive times and whose average value is zero. These wavelets are irregular and asymmetrical [51]. The transform has many applications in process control and anomaly detection. It enables the analysis of signal structures that depend on time and scale, being a useful method to characterize and identify signals with spectral features, unusual transient events and other properties related to the lack of stationarity. When the frequency range corresponding to each signal is known, the data can be studied in terms of time, frequency and amplitude. It is therefore possible to see which frequencies are present in each time interval, and even to invert the wavelet transform when necessary. Before the wavelet transform, the FFT was able to work with this type of signals in the frequency domain, but without great resolution in the time domain [38].

The wavelet transform of a function *f(t)* is the decomposition of *f(t)* on a set of basis functions *ψs,τ(t)*. It is defined as [88] [66]:

$$\mathcal{W}\_f(s, \tau) = \int f(t)\, \psi\_{s,\tau}^\*(t)\, dt \tag{3}$$

Wavelet transforms are generated from the translation and scale change of a single wavelet function *ψ(t)*, called the *mother wavelet*, which is given by equation (4):

$$
\psi\_{s,\tau}(t) = \frac{1}{\sqrt{s}} \psi\left(\frac{t-\tau}{s}\right) \tag{4}
$$

where *s* is the scale factor, and *τ* is the translational factor.
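Equations (3) and (4) can be evaluated numerically with a short sketch. Here the Mexican-hat (Ricker) function is assumed as the mother wavelet *ψ(t)* — the chapter does not prescribe one — and the integral of equation (3) is approximated by a Riemann sum; the test signal is likewise an assumption.

```python
import numpy as np

def mother_wavelet(t):
    """Mexican-hat (Ricker) wavelet, assumed here as psi(t); any zero-mean
    admissible mother wavelet could be substituted."""
    return (1.0 - t**2) * np.exp(-t**2 / 2.0)

def wavelet_coefficient(f, t, s, tau):
    """Equation (3): W_f(s, tau) = integral of f(t) * conj(psi_s,tau(t)) dt,
    with psi_s,tau(t) = psi((t - tau) / s) / sqrt(s) from equation (4)."""
    psi = mother_wavelet((t - tau) / s) / np.sqrt(s)  # real wavelet: its conjugate is itself
    dt = t[1] - t[0]
    return np.sum(f * psi) * dt                        # Riemann approximation of the integral

t = np.linspace(-10.0, 10.0, 4001)
f = np.exp(-t**2)            # assumed test signal, centred at t = 0

# The coefficient is large where the dilated wavelet overlaps the signal
# (tau = 0) and negligible where it does not (tau = 8).
W_centre = wavelet_coefficient(f, t, s=2.0, tau=0.0)
W_far = wavelet_coefficient(f, t, s=2.0, tau=8.0)
```

Sweeping *s* and *τ* over grids of values yields the familiar scalogram picture of the signal in time and scale.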


The wavelets *ψs,τ(t)* generated from the same mother wavelet function *ψ(t)* have different scale *s* and location *τ*, but the same shape. Scale factors are always *s*>0. The wavelets are dilated when the scale is *s*>1 and contracted when *s*<1. Thus, changing the value of *s* can cover different ranges of frequencies. Large values of the parameter *s* correspond to lower frequency ranges, or a large scale for *ψs,τ(t)*. Small values of *s* correspond to higher frequency ranges, or very small scales.

The wavelet transform can be continuous or discrete. The difference between them is that the continuous transform provides more detailed information but consumes more computation time, while the discrete transform is efficient, with fewer parameters and less computation time [17]. The Discrete Wavelet Transform coefficients are a group of discrete intervals of time and scales. These coefficients are used to formalize a set of features that characterize different types of signals. Any signal can be divided into low-frequency approximations (*A*) and high-frequency details (*D*). The sum of *A* and *D* is always equal to the original signal. The division is done using filters (Figure 2).
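The statement that the sum of the approximation *A* and the detail *D* recovers the original signal can be checked with a minimal one-level Haar split, where *A* holds pairwise averages and *D* pairwise differences; the signal values below are an arbitrary assumption.

```python
import numpy as np

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 3.0])  # assumed test signal

# One-level Haar analysis: pairwise averages (A) and differences (D).
a = (x[0::2] + x[1::2]) / 2.0   # low-frequency approximation coefficients
d = (x[0::2] - x[1::2]) / 2.0   # high-frequency detail coefficients

# Rebuild the full-length approximation and detail signals.
A = np.repeat(a, 2)                                  # [a0, a0, a1, a1, ...]
D = np.repeat(d, 2) * np.tile([1.0, -1.0], len(d))   # [d0, -d0, d1, -d1, ...]

# The sum of the approximation and the detail recovers the original signal.
print(np.allclose(A + D, x))  # -> True
```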

**2.1. Wavelet families**

set by the application.

[34] [30]:

solving boundaries problems [87].

eral trend for almost all types of wavelet families [76].

The concept of wavelet has emerged and evolved during the last decades. Though new fam‐ ilies of wavelet transforms are rapidly increasing, there are a number of them that have been established with more strength over time. In most situations, the use of a particular family is

Maintenance Management Based on Signal Processing

http://dx.doi.org/10.5772/52199


**Figure 2.** Decomposition diagram.

To reduce the computational and mathematical costs due to duplication of data, a sub-sampling is usually performed, retaining half of the information collected from *A* and *D* but without losing information. It is common to accompany this information with a graphical representation where the original signal is divided by low pass and high pass filters [15]. When the signals are complex, the decomposition must go to further levels; two frequency bands are not sufficient. From this need, multilevel filters appear. Multilevel filters repeat the filtering process iteratively on the output signals from the previous level. This leads to the so-called wavelet decomposition trees (Figure 3) [2]. By decomposing a signal into more frequency bands, additional information is obtained. A suitable branch for each signal is highly recommended, as more decompositions do not always mean higher quality results.

**Figure 3.** Wavelet decomposition tree.
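The decomposition tree of Figure 3 can be sketched as below, again using the Haar filters for brevity (an assumption; any analysis filter pair could be substituted). At each level only the approximation is re-filtered, and because the filters are orthonormal, the signal energy is preserved across the sub-bands:

```python
import numpy as np

def haar_step(x):
    # One analysis step: downsampled low-pass (A) and high-pass (D) outputs.
    p = np.asarray(x, dtype=float).reshape(-1, 2)
    return (p[:, 0] + p[:, 1]) / np.sqrt(2), (p[:, 0] - p[:, 1]) / np.sqrt(2)

def wavelet_tree(x, levels):
    """Multilevel decomposition: re-filter the approximation at each level,
    collecting the detail bands D1, D2, ... along the way."""
    details = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_step(a)  # each level halves the band and the sample count
        details.append(d)
    return a, details

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a2, details = wavelet_tree(x, levels=2)
# Orthonormal filters preserve energy across the terminal outputs:
energy = np.sum(a2 ** 2) + sum(np.sum(d ** 2) for d in details)
assert np.isclose(energy, np.sum(x ** 2))
```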

The calculation of the Continuous Wavelet Transform starts with an initial time and a scale value. The result of multiplying the two signals is integrated over the whole time span. Subsequently, this integral is multiplied by the inverse of the square root of the scale value, obtaining a transformed function with normalized energy. This process is iterated until the end of the original signal is reached, and must be repeated for all the scale values that sweep the frequency range to be studied.
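The procedure just described can be written down directly. The sketch below is hypothetical code (a Mexican-hat mother wavelet is used purely for illustration): it multiplies the signal by the dilated and shifted wavelet, integrates over time, and normalises by 1/√s, repeating for every location and scale:

```python
import numpy as np

def cwt_naive(signal, t, psi, scales, taus):
    """Direct implementation of the CWT procedure described in the text:
    for each scale s and location tau, multiply the signal by the scaled
    and shifted wavelet, integrate over time, and normalise by 1/sqrt(s)."""
    dt = t[1] - t[0]
    out = np.empty((len(scales), len(taus)))
    for i, s in enumerate(scales):
        for j, tau in enumerate(taus):
            w = psi((t - tau) / s) / np.sqrt(s)  # dilated, shifted, normalised
            out[i, j] = np.sum(signal * w) * dt  # numerical integration
    return out

# Mexican hat (Ricker) mother wavelet, used here only as an example.
ricker = lambda u: (1.0 - u ** 2) * np.exp(-u ** 2 / 2.0)

t = np.linspace(-5.0, 5.0, 1001)
sig = np.sin(2.0 * np.pi * 1.0 * t)
coeffs = cwt_naive(sig, t, ricker, scales=[0.5, 1.0, 2.0], taus=t[::100])
```

In practice the double loop is replaced by FFT-based convolution, but the naive form matches the description above step by step.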

#### **2.1. Wavelet families**


The concept of the wavelet has emerged and evolved during the last decades. Although new families of wavelet transforms are appearing rapidly, a number of them have become established over time. In most situations, the use of a particular family is set by the application.

Daubechies wavelets are the most used wavelets, representing the foundations of wavelet signal processing and finding application in the Discrete Wavelet Transform. They are defined as a family of orthogonal and smooth basis wavelets characterized by a maximum number of vanishing moments. The degree of smoothness increases with the order. Daubechies wavelets lead to more accurate results in comparison to other wavelet types and also handle boundary problems for finite length signals in an easier way [58] [29] [60] [94]. Daubechies wavelets have no explicit expression except for order 1, which is the Haar wavelet. The inability to express a wavelet by an explicit formula is the general trend for almost all types of wavelet families [76].

As mentioned above, Haar wavelets are Daubechies wavelets of order 1. They are the simplest orthonormal wavelets. The main drawback of Haar wavelets is their discontinuity, a consequence of not solving breaking-point problems for their derivatives. The Haar transform is one of the earliest examples of a wavelet transform, and it is supported by a function that is an odd rectangular pulse pair [33]. Haar functions are widely used for applications such as image coding, edge extraction and binary logic design, and are defined as [46] [41] [34] [30]:

$$H(t) = \begin{cases} 1 & 0 \le t < \frac{1}{2} \\ -1 & \frac{1}{2} \le t < 1 \\ 0 & \text{elsewhere} \end{cases} \tag{5}$$

The main advantages of the Haar wavelet are its accuracy and fast implementation compared with other methods, its simplicity and small computational cost, and its capacity for solving boundary problems [87].
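Equation (5) can be evaluated directly. The snippet below is illustrative (not from the chapter) and checks the defining values together with the zero-mean property that any admissible wavelet must satisfy:

```python
import numpy as np

def haar_psi(t):
    """The Haar mother wavelet of equation (5):
    1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere."""
    t = np.asarray(t, dtype=float)
    return np.where((0.0 <= t) & (t < 0.5), 1.0,
                    np.where((0.5 <= t) & (t < 1.0), -1.0, 0.0))

assert haar_psi(0.25) == 1.0 and haar_psi(0.75) == -1.0 and haar_psi(1.5) == 0.0
# The positive and negative halves cancel, so the wavelet has zero mean:
t = np.linspace(-1.0, 2.0, 3000, endpoint=False)
assert abs(np.mean(haar_psi(t)) * 3.0) < 1e-2
```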

The Symlet wavelet transform is an orthogonal wavelet defined by a scaling filter (a low-pass finite impulse response filter of length *2N* and sum 1). The Symlet wavelet transform is sometimes called SymletN, where *N* is the order. Symlet wavelets are nearly symmetric. Furthermore, they have the highest number of vanishing moments for a given width [7].

Coiflet wavelets are a family of wavelets whose main characteristics are similar to the Symlet ones: a high number of vanishing moments and symmetry. The Coiflet family is also compactly supported, orthogonal and capable of giving good accuracy when the original signal has a distortion. The Coiflet wavelets are defined for 5 orders [18].

Biorthogonal wavelets have become very popular because of their versatility, being capable of supporting symmetric or antisymmetric signals. They perform very well under certain boundary conditions [97]. Moreover, the Biorthogonal wavelet transform is an invertible transform. They have two filter sets, with lowpass filters for reconstruction and highpass filters for decomposition [32].

Along with the Haar wavelets, the Meyer family is one of the exceptions that can be represented by an equation. The Meyer wavelets have numerous applications in the theory of functions, the solution of differential equations, signal processing, etc. [39]. The Meyer family does not have compact support, which is one of its drawbacks. It is defined by equation (6) [44]:

$$\lambda(\omega) = \begin{cases} \frac{\pi}{4} + \theta(\omega - \pi), & \omega \in \left[\frac{2\pi}{3}, \frac{4\pi}{3}\right], \\ \frac{\pi}{4} + \theta\left(\frac{\omega}{2} - \pi\right), & \omega \in \left[\frac{4\pi}{3}, \frac{8\pi}{3}\right], \\ 0, & \omega \in \left[0, \frac{2\pi}{3}\right] \cup \left[\frac{8\pi}{3}, +\infty\right). \end{cases} \tag{6}$$


where *θ*(*ω*) is a continuous and differentiable function equal to *π*/4 for *ω* ≥ *π*/3.

#### **2.2. Wavelet transform applications**

The use of the wavelet transform has developed over the past two decades, focused on process diagnosis and instrumentation. In 1990, Leducq introduced it in the analysis of hydraulic noise for a centrifugal pump [45]. Later, other authors demonstrated its usefulness for the detection of mechanical failures and health monitoring in gears [74] [11] [90] [21] [82] [80]. Cracks in rotors [1], structures [73] [63] [89] [10] or composite plates [75] have been another application area for wavelet transforms; the results showed that the cracks found reduced the rotor speed. In 1994, Newland researched their properties and applications and coined the term harmonic wavelet. Harmonic wavelets are used for ridge and phase identification in signals [54]. The effectiveness of wavelets has also been compared with the envelope detection methodology in the diagnosis of faults in bearings, obtaining results with shorter analysis times [85].

Due to its good time-frequency analysis capabilities, the wavelet transform is well suited to the study of transient processes. Chancey and Flowers [16] managed to discover a relation between vibration patterns and the coefficients of a wavelet. Kang and Birtwhistle [40] and Subramanian, Badrilal and Henry [78] developed techniques to find problems in power transformers. Yacamini [96] proposed a method to detect torsional vibrations in engines and generators from the stator currents.

At present, techniques in the areas mentioned previously are still being developed, but other purposes for wavelet transforms are emerging, such as the classification of linear frequency modulation signals for radar emitter recognition [83] or applications to damage caused by corrosion in chemical process installations [86]. Below, some of the most examined applications in the scientific literature are explained.


The application of wavelet transforms in wind turbines focuses on the implementation of adaptive controllers for wind energy conversion systems. The wavelet transform is capable of providing a good and quick approximation; the controllers studied achieved higher performance under different noise levels [69]. Other works study the monitoring and diagnosis of faults in induction generators with satisfactory results. In these cases a combination of DWTs, accompanied by statistical and energy data, is proposed. The use of the spectral components of decomposed signals is another highly interesting technique of study. Their harmonic content has suitable characteristics to be employed in fault diagnosis as an alternative to conventional methods [3].

Rolling bearings play an important role in rotating machines. The choice of a particular wavelet family is crucial for maintenance and fault diagnosis. The location of peaks in the vibration spectrum can identify a particular fault. Wavelet decomposition trees are a useful tool for this identification: the mean square error extracted from the terminal nodes of a tree reports the failure and its size [17]. There are also studies focused on determining what type of wavelet is suitable for bearing maintenance [79].
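One way to read the terminal-node idea: decompose the signal into a full tree (every node split again at each level), summarise each terminal band by its mean-square value, and compare these features against a healthy baseline. A hedged sketch follows; the Haar filters and depth of 2 are assumptions for illustration:

```python
import numpy as np

def haar_step(x):
    # One analysis step: downsampled low-pass and high-pass outputs.
    p = np.asarray(x, dtype=float).reshape(-1, 2)
    return (p[:, 0] + p[:, 1]) / np.sqrt(2), (p[:, 0] - p[:, 1]) / np.sqrt(2)

def terminal_node_features(x, levels=2):
    """Packet-style tree: split every node at every level, then report the
    mean-square value of each of the 2**levels terminal nodes."""
    nodes = [np.asarray(x, dtype=float)]
    for _ in range(levels):
        nodes = [part for node in nodes for part in haar_step(node)]
    return [float(np.mean(n ** 2)) for n in nodes]

baseline = terminal_node_features(np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]))
assert len(baseline) == 4  # 2**2 terminal nodes at depth 2
```

A fault signature would then show up as a large deviation of one or more terminal-node features from the baseline values.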

The wavelet transform is a good signal analysis method when variation exists in time but not in space. The analysis provides information about the frequency content of the signal, making it a solution for engine failure detection. There are detection algorithms that identify the presence of a fault in working conditions ahead of the shutdown of the system, reducing costs and downtime [19] [20]. These algorithms are independent of the type of engine used. Other studies in this field present methods to detect imbalances in the stator voltage of a three-phase induction motor, in which the wavelet transform of the stator current is analysed. Computationally, these methods are less expensive than other existing ones and can detect faults at an early stage. In the same vein, the monitoring of fatigue damage has been studied [65].

## **3. Condition Monitoring for engine-generator mechanism**

A novel approach for Condition Monitoring based on wavelet transforms is introduced. A system for a mechanism based on an engine and a generator will be shown. It has been designed to represent any similar mechanism located in a wind turbine, generally in the nacelle. These mechanisms are used in cooling devices (generators, gearboxes), electric motors for the service crane, yaw motors, pitch motors (depending on the configuration), pumps (oil, water) according to the sub-system configurations, ventilators, etc. (Figure 4).

A set of faults is induced in different experiments: ski-slope faults, misalignment faults, angular misalignment faults, parallel misalignment faults, rotating looseness faults and external noise faults. Pattern recognition is obtained from the extraction of vibration and acoustic signals. A Fault Detection and Diagnosis method is developed from the patterns of these signals. In order to recognize the patterns, three basic steps have been followed [37]:


**1.** The data acquisition on the testing bench (Figure 5).

**2.** The extraction of the features of the experiment using specific algorithms.

**3.** The decision-making.

A classification has been done to obtain the optimal pattern recognition, employing the data from the Fast Fourier Transform and wavelet transforms applied to the vibration and sound signals respectively.


**Figure 4.** Different locations of a wind turbine where the CM can be used: (1) fans, (2) gear oil pump, (3) oil pump for brake and (4) water cooling pump.

#### **3.1. Case study**

The experiments were made on a mechanism consisting of an engine and a generator linked by an elastic coupling joint. The sensors employed were a current sensor, an ambient temperature sensor, another temperature sensor located at strategic points of the mechanism, a vibration sensor and a sound sensor (microphone). The data obtained by these sensors are stored in a data acquisition board, except for the vibration, which is collected directly with a vibrometer. The software employed was LabView, plus specific software for vibration provided by the manufacturer Kionix. The speed of the engine and its associated frequency were set by a frequency variator, and the energy is dissipated using a resistive element.

The vibration measurement points were two on the engine and two on the generator. The points were located at the end of each machine and as close as possible to the axis, which is the main rotational element of the mechanism (Figure 6).

**Figure 5.** Experimental mechanism.


**Figure 6.** Measuring points.

The experiments lasted an average of 10 seconds each, and every experiment was repeated 3 times. Therefore, for each experiment 12 measurements of temperatures, currents, sound, velocities and vibrations were taken (Figure 7). In the case of vibration, the vibrometer is capable of storing samples for the 'x', 'y' and 'z' axes, in addition to a total measurement for the point studied (Figure 8).

The experiments were carried out in order to identify couplings and misalignments of different degrees. The engine has 4 rubber mounts (silemblocks), while the generator has 3. The engine silemblocks were located at the ends, two on the right side and two on the left side. The generator has them placed in a triangle, two in the area closest to the coupling and one at the end. The first experiment was recorded under fault-free conditions, and the rest of the experiments were performed with silemblocks removed from the engine and the generator in order to create the different degrees of decoupling (Figure 9).

| Experiment | Type of experiment | Data set |
|---|---|---|
| 1 | Free fault conditions | From 1 to 12 |
| 2 | Misalignment removing silemblocks from the right side of the engine | From 13 to 24 |
| 3 | Misalignment removing silemblocks from the right side and the front left one of the engine | From 25 to 36 |
| 4 | Generation of resistance in the coupling | From 37 to 48 |
| 5 | Misalignment removing the silemblock from the right side of the generator | From 49 to 60 |
| 6 | Misalignment removing 2 silemblocks near to the coupling in the generator | From 61 to 72 |
| 7 | Misalignment removing the silemblock from the right side of the generator and one from the left side of the engine | From 73 to 84 |
| 8 | Use of a rigid coupling | From 85 to 96 |

**Table 1.** Experiments (1500 rpm).


**Figure 7.** Data collection in LabView.

**Figure 8.** Data collection with Kionix software (vibration).

The rotational speed is 1500 rpm, i.e. 25 Hz. In order to analyse above the natural frequency, the frequency range was increased from the default 25 Hz to 125 Hz. This guarantees a range 5 times larger than the natural frequency of the engine.



The FFT of each signal has been computed in Matlab. An algorithm that allows the comparison of two signals at a given frequency was created. The main purpose is to compare the pattern conditions with the signals of the rest of the experiments, which represent a fault, and to analyse the peaks found at the natural frequency and its multiples. In some cases it is important to analyse the area located below the natural frequency. Another advantage of the program is that it is possible to obtain the amplitude values for a certain frequency range (Figure 10). With a click on a particular peak, the program provides the data.

**Figure 9.** Misalignments induced by removing silemblocks from the engine and the generator, and experimentation with a rigid coupling.

Values for 25 Hz (natural frequency or 1X), 50 Hz (2X), 75 Hz (3X) and 100 Hz (4X) have been taken into account. Frequencies above these values have been discarded.
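The comparison described above can be sketched as follows. This is not the chapter's Matlab program but a hypothetical Python equivalent: take the FFT amplitude spectrum and read off the peak near each harmonic of the 25 Hz natural frequency (1X to 4X). The band half-width `bw` is an assumed parameter:

```python
import numpy as np

def harmonic_amplitudes(signal, fs, f0, harmonics=(1, 2, 3, 4), bw=1.0):
    """Peak FFT amplitude in a +/- bw band around each multiple of f0."""
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal)) * 2.0 / n  # one-sided amplitude spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return {k: spec[(freqs > k * f0 - bw) & (freqs < k * f0 + bw)].max()
            for k in harmonics}

fs, f0 = 1000.0, 25.0                  # sampling rate; 1X = 25 Hz (1500 rpm)
t = np.arange(0, 2.0, 1.0 / fs)
# Synthetic fault signature: the 2X component dominates the 1X component.
sig = 0.5 * np.sin(2 * np.pi * 25.0 * t) + 1.2 * np.sin(2 * np.pi * 50.0 * t)
amps = harmonic_amplitudes(sig, fs, f0)
assert amps[2] > amps[1]               # 2X above 1X, as in parallel misalignment
```

Comparing `amps` from a pattern (fault-free) record with `amps` from a test record reproduces the peak-by-peak comparison that the program performs.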


**Figure 10.** FFT of a vibration signal.

#### **3.2. Vibration diagnosis and results**

The most common spectra for engine-generator mechanisms are presented, with examples based on the experiments carried out.

#### *Ski-slope fault*

A ski-slope fault appears when the spectrum begins at a high level and then decays slowly (Figure 11). A ski-slope indicates a problem with the quality of the sensor; sometimes it happens because the sensor has experienced a transient during the measurement process. The transient may be mechanical, thermal or electrical.

#### *Misalignment faults*

A misalignment fault appears when the centrelines of coupled shafts do not coincide. If the misaligned shaft centrelines are parallel but not coincident, the misalignment is a parallel misalignment. If the misaligned shafts meet at a point but are not parallel, the misalignment is angular. Most cases are a combination of the two. The diagnosis is based on dominant vibration at the natural frequency (1X) and at twice the rotational rate (2X), with increased levels at higher multiples of the rotational rate (3X, 4X, etc.), acting in the axial, vertical or horizontal directions.
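The diagnostic guidelines in this subsection can be condensed into a simple decision rule. The thresholds below are illustrative assumptions, not values given in the chapter; `amps` maps harmonic order (1X to 4X) to spectral amplitude:

```python
def classify_misalignment(amps):
    """Toy decision rule derived from the guidelines in the text; the
    thresholds are illustrative assumptions only."""
    ratio_2x = amps[2] / amps[1]
    if ratio_2x > 1.0:
        # 2X higher than 1X is typical of parallel misalignment.
        return "parallel misalignment suspected"
    if amps[1] > 2 * max(amps[2], amps[3]):
        # Dominant 1X with smaller 2X/3X suggests angular misalignment.
        return "angular misalignment suspected"
    return "no clear misalignment signature"

assert classify_misalignment({1: 0.5, 2: 1.2, 3: 0.1, 4: 0.05}).startswith("parallel")
assert classify_misalignment({1: 1.0, 2: 0.2, 3: 0.1, 4: 0.05}).startswith("angular")
```

A production system would of course use learned thresholds and the full pattern-recognition pipeline described in Section 3, rather than two fixed rules.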

#### *Angular misalignment fault*


An angular misalignment fault produces a bending moment on both shafts, generating a strong vibration at 1X and weaker components at 2X and 3X in the axial direction. There will also be strong radial components in the vertical and horizontal directions (Figure 11).

#### *Parallel misalignment fault*

A parallel misalignment fault produces a shear force and a bending moment on the coupled end of each shaft. High vibration levels at 2X, as well as at 1X, are produced in the radial direction; most often the 2X component is higher than the 1X. Depending on the coupling, peaks can appear at 3X or 4X, even reaching 8X when the misalignment is severe (Figure 11).

#### *Rotating looseness fault*

A rotating looseness fault creates harmonics or sub-harmonics at every 0.5X; even 1/3-order harmonics are possible (Figure 11).

#### *External noise fault*

It is very common to find a peak in a spectrum that is difficult to analyse. This happens because of vibration from another machine or process, and the peak will typically be at a non-synchronous frequency (Figure 11). External noise can be verified by stopping the machine (or varying its speed) and checking whether the vibration is still present, or by checking nearby machines for the same frequency source.
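A minimal way to express this synchronous versus non-synchronous check in code (the 0.5X grid step also covers the looseness sub-harmonics above; the tolerance and the function name are assumptions, not values given in the text):

```python
def is_synchronous(freq_hz, fundamental_hz=25.0, step=0.5, tol=0.05):
    """True if freq_hz lies on the 0.5X, 1X, 1.5X, ... grid of the fundamental.

    tol is the accepted deviation in units of the order (an assumed value).
    """
    order = freq_hz / fundamental_hz
    nearest = round(order / step) * step
    return nearest > 0 and abs(order - nearest) <= tol

# A 50 Hz peak is synchronous (2X) and a 12.5 Hz peak is the 0.5X sub-harmonic,
# whereas a 33 Hz peak (1.32X) is off the grid and points to external noise.
print(is_synchronous(50.0), is_synchronous(12.5), is_synchronous(33.0))
```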

**Figure 11.** a) Angular misalignment fault (red) and pattern condition (blue), (b) parallel misalignment fault (red) and pattern condition (blue), (c) ski-slope fault (blue) and pattern condition (red) and (d) rotating looseness (blue); and external noise (red).

### **3.3. Vibration results**

As a rule, the natural frequency (1X) has been kept as the reference. Following the same nomenclature, the peaks at 50 Hz, 75 Hz and 100 Hz have been named 2X, 3X and 4X.

Maintenance Management Based on Signal Processing

http://dx.doi.org/10.5772/52199

**Figure 12.** Vibration for point 1.

**Figure 13.** Vibration for point 2.

**Figure 14.** Vibration for point 3.

**Figure 15.** Vibration for point 4.

Vibration patterns are different for the four operating points. The natural frequency, regardless of its amplitude, tends to predominate in the experiments associated with the end points of the set (Figures 12 and 15). Additionally, the generator's closest point to the coupling shows a similar pattern (Figure 14). The second point differs from the rest, yielding its most predominant peaks at the 50 Hz frequency (Figure 13). The vibration analysis must take into consideration not only the appearance of peaks but also their amplitude: the same diagnosis for two experiments can differ in amplitude depending on the severity of the faults found. The main symptoms appear when peaks at 0.5X, 1X, 2X and 3X, sidebands and noise sources are detected. When a failure is studied at an advanced stage, peaks at 4X are noticeable (as with the rigid coupling).


The diagnosis of the experiments reveals that the mechanism has a minor looseness, which in some cases causes a high peak at the natural frequency even under fault-free conditions. This looseness appears because the engine and the generator are not anchored directly to the test bench. The assembly was done on a surface that facilitated the removal of the silemblocks when the experiments required it, e.g. to create different degrees of misalignment. On the other hand, this arrangement intentionally amplifies the vibration, bringing it closer to the actual behavior of the nacelle. These frequency peaks change their trend at 1X as the study advances from the end of the engine to the generator. From point 2, the peaks at frequencies such as 2X and 3X become more significant and sometimes exceed the amplitude of the natural frequency.

The results for experiment 8 are also remarkable. The rigid coupling added causes severe looseness and vibration; the growth of a 4X frequency component and constant noise across the spectrum are observed. Although sidebands, peaks below 1X and high-frequency peaks are usual in this type of experiment, this feature is unique to this last experiment. Initially, a similar diagnosis for cases 1, 4 and 8 was expected, but the behavior has been slightly different for this reason.

### **3.4. Wavelet transform processing approach and results**

Wavelet transforms were employed to analyse the sound signals. As with the Fast Fourier Transform, an algorithm was written in Matlab. This program plots and compares two signals. The data have been transformed into 5 decompositions, named *a4*, *d4*, *d3*, *d2* and *d1*, each with an energy rate associated with the original signal (Figure 16). The algorithm also returns a percentage value per decomposition. These energy values, the attached decomposition levels and the peak amplitudes are examined in order to look for patterns.

Functions in the time domain can be represented as a linear combination of all frequency components present in a signal, where the coefficients are the amount of energy provided by each frequency component to the original signal *s*. The main decomposition is associated with *a4* (the *main* or *mother wavelet*), which usually, though not always, has the highest energy, and has a pattern similar to the original signal. The first (*d4*), second (*d3*), third (*d2*) and fourth (*d1*) transformed signals have decreasing energy rates. Usually *a4* is the low-frequency component of the original signal, while the *di* are the high-frequency components, with *d1* covering the highest frequencies.
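The decomposition and the per-level energy percentages can be sketched as follows. The chapter's Matlab analysis may use a different mother wavelet; the Haar filters here are only a simple self-contained stand-in, and the test signal is an assumption:

```python
import numpy as np

def haar_dwt_energy(signal, levels=4):
    """Four-level Haar DWT; returns the energy share (%) of [a4, d4, d3, d2, d1].

    The orthonormal Haar filters preserve total energy, so the shares sum to 100.
    """
    a = np.asarray(signal, dtype=float)
    details = []
    for _ in range(levels):
        if len(a) % 2:                                # pad to even length
            a = np.append(a, a[-1])
        approx = (a[0::2] + a[1::2]) / np.sqrt(2)     # low-pass (pairwise average)
        detail = (a[0::2] - a[1::2]) / np.sqrt(2)     # high-pass (pairwise difference)
        details.append(detail)                        # first iteration produces d1
        a = approx
    bands = [a] + details[::-1]                       # [a4, d4, d3, d2, d1]
    energies = np.array([np.sum(b ** 2) for b in bands])
    return 100 * energies / energies.sum()

# A slow 2 Hz tone sampled at 256 Hz: the energy concentrates in a4,
# mimicking the point-1 behaviour described later in the chapter.
t = np.arange(256) / 256
shares = haar_dwt_energy(np.sin(2 * np.pi * 2 * t))
print(dict(zip(["a4", "d4", "d3", "d2", "d1"], np.round(shares, 1))))
```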

It is necessary to verify that the experiments performed at 1500 rpm can be extrapolated to other speeds. In the case of wind turbines, most of the engines rotate at speeds close to 3000 rpm. A certain number of tests were done varying from 500 to 3000 rpm (at intervals of 500 rpm) in order to ensure the existence of the proportional pattern.
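The proportionality is direct, since 1X in hertz is simply the shaft speed in revolutions per second; a quick sketch of the harmonic grid at each tested speed (the loop bounds follow the 500-3000 rpm test range stated above):

```python
# 1X in Hz is rpm / 60. At the 1500 rpm used in the experiments this gives
# the 25 Hz fundamental, and the grid scales proportionally at other speeds.
for rpm in range(500, 3001, 500):
    f1x = rpm / 60
    harmonics = [round(k * f1x, 2) for k in (1, 2, 3, 4)]
    print(f"{rpm:4d} rpm -> 1X..4X = {harmonics} Hz")
```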

**Figure 16.** Wavelet decompositions.


**Figure 17.** Energies at different rotational speeds.

The results showed that, regardless of the speed or the point of study, all the graphical representations of the energy decompositions had the same patterns. Figure 17 indicates a similar behavior where only the numerical values change. The largest values correspond to the *main* signals, while the results for decompositions *d1* and *d2* are similar.

Data can be studied according to the evolution of a single point along the different experiments, or by analysing the evolution of the set of points for each experiment. Each row in Figure 18 contains two graphics, one with the amplitude peaks (left) and the other with the energy distribution of the sound signal (right). The first two graphics correspond to the engine end (point 1). The following two are the closest to the coupling (point 2). The third row belongs to the points of the generator next to the coupling (point 3), and finally, the last two graphics are for the end of the generator (point 4).


**Figure 18.** Evolution of the frequency peaks and wavelet energy decompositions for each point in experiment 2.

**Figure 19.** Energy values for point 1.

**Figure 20.** Energy values for point 2.

| Experiment | Main | d4 | d3 | d2 | d1 | Energy |
|---|---|---|---|---|---|---|
| A | 17.19% | 9.10% | 22.12% | 24.95% | 26.63% | 167.9 |
| B | 82.32% | 9.90% | 5.17% | 1.77% | 0.84% | 311.8 |

**Table 2.** Energy distribution for experiments A and B.

Based on the distribution of the energy among the 5 decompositions of every signal, the energy distribution for point 1 (the end of the engine-generator set) follows an almost identical pattern in each experiment: a maximum of energy in the main signal and a minimum in decomposition *d1* or *d2*. In other words, after decomposition the energy most closely resembles the original signal, often exceeding 85% of the total energy, with only a residual percentage remaining for *d1* or *d2*. When the experiments are closer to the generator (points 2, 3 and 4), the energy is distributed among the 5 decompositions rather than concentrated in the *mother wavelet*, as it is for point 1.
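This point-1 behaviour can be expressed as a simple heuristic classifier (the 85% threshold is the figure quoted in the text; the function, its name and the sample share vectors are illustrative):

```python
def energy_pattern(shares, threshold=85.0):
    """Classify a 5-band energy distribution [main, d4, d3, d2, d1] in %.

    Returns whether the energy is concentrated in the main signal, the
    behaviour observed at the end points of the set, or distributed.
    """
    return "concentrated in main" if shares[0] > threshold else "distributed"

# Shares loosely shaped like the two behaviours described in the text
print(energy_pattern([88.0, 6.0, 3.0, 2.0, 1.0]))      # point-1-like
print(energy_pattern([20.0, 10.0, 22.0, 24.0, 24.0]))  # coupling-like
```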

All the decompositions have been registered with their maximum and minimum energy values and their pattern distributions. An example for 2 experiments is shown in Table 2.



Experiment A is associated with point 2, belonging to the engine and situated close to the coupling. Experiment B, however, is related to point 1, the left end of the assembly. Experiment A has its maximum percentage of energy in *d1* and its minimum in *d4*, while experiment B has its maximum in the *main signal* and its minimum in *d1*. The maximum-minimum patterns are *d1-d4* and *main-d1*, respectively. Numerically, the most evenly balanced distribution of energy occurs close to the coupling (experiment A, point 2). The patterns *main-d1* and *main-d2* appear in all the cases for point 1, whereas these maximum-minimum distributions occur less consistently for points 2, 3 and 4; unlike point 1, these points show different patterns across the 8 experiments. Figures 19, 20, 21 and 22 represent the numerical values of the energy per point and experiment. It must be noted that the numerical values are higher or lower depending on the type of experiment.
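The maximum-minimum labelling can be reproduced from the Table 2 rows with a small helper (band order as in the chapter; the function name is illustrative):

```python
def max_min_pattern(shares):
    """Return the 'maximum-minimum' label, e.g. 'main-d1', from 5 energy shares.

    Band order follows the chapter: [main, d4, d3, d2, d1].
    """
    names = ["main", "d4", "d3", "d2", "d1"]
    imax = shares.index(max(shares))
    imin = shares.index(min(shares))
    return f"{names[imax]}-{names[imin]}"

# Rows of Table 2 (energy shares in %)
exp_a = [17.19, 9.10, 22.12, 24.95, 26.63]
exp_b = [82.32, 9.90, 5.17, 1.77, 0.84]
print(max_min_pattern(exp_a))  # d1-d4
print(max_min_pattern(exp_b))  # main-d1
```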

## **4. Conclusions**

Wind turbines are complex systems that require a high level of reliability, availability, maintainability and safety. This chapter has focused on guaranteeing these levels for the mechanisms used in cooling devices for generators and gearboxes, electric motors for the service crane, yaw motors, pitch motors, pumps, ventilators, etc.

The brake mechanism of the engine has been simulated by linking a generator through a coupling joint. The signals collected have been:

**•** Vibration.

**•** Sound.

**•** Current.

**•** Velocity.

**•** Temperature.

The experiments have been done in working conditions, for different points of the mechanism and considering the following failures:

**•** Misalignment removing silemblocks from the right side of the engine.

**•** Misalignment removing silemblocks from the right side and the front left one of the engine.

**•** Misalignment removing the silemblock from the right side of the generator.

**•** Misalignment removing the silemblock from the right side of the generator and one from the left side of the engine.

**•** Misalignment removing 2 silemblocks near to the coupling in the generator.

**•** Induction of resistance in the coupling.

**•** Using a rigid coupling.

A fault detection and diagnosis model based on the Fast Fourier Transform applied to the vibration signals, together with the wavelet transform applied to the sound signals, has been developed. The model correctly detects and diagnoses 100% of the failures considered.

**Figure 21.** Energy values for point 3.

**Figure 22.** Energy values for point 4.


It has been observed that, for the outer ends of the engine and the generator, the appearance of a pronounced peak amplitude at the natural frequency or *2X* (vibration) was associated with maximum energy values for the *main* signal, the closest to the original, and minimum values for the decomposed signals *d1* and *d2* (sound). In contrast, the results obtained close to the coupling did not follow a clear trend, as they were conditioned by the type of experiment. The numerical values of each peak were also taken into account in establishing the pattern recognition, and they differed for each experiment; the same conclusion was reached for the energy values. Different models and results were expected, because the objective was not to find similar patterns between different experiments, and the tests were never performed under identical conditions. The objective was to have different vibration patterns and their associated sound models in order to create a catalogue of possible scenarios for predictive maintenance in the mechanisms. Thus, it is possible to extend the range of possibilities to relate the result of an acoustic signal with the frequency domain using the Fast Fourier Transform.


## **Author details**

Fausto Pedro García Márquez1, Raúl Ruiz de la Hermosa González-Carrato1, Jesús María Pinar Perez1 and Noor Zaman2

1 University of Castilla-La Mancha, Spain

2 CCSIT, King Faisal University, Saudi Arabia

## **References**


[8] Baum, L. E., & Petrie, T. (1966). Statistical inference for probabilistic functions of fi‐ nite state Markov chains. *The Annals of Mathematical Statistics.*, 37(6), 1554-1563.

the objective was not to find similar patterns between different experiments, and the tests were never performed under identical conditions. The objective was to have different vibra‐ tion patterns and their associated sound models in order to create a catalogue of possible scenarios for predictive maintenance in the mechanisms. Thus, it is possible to extend the range of possibilities to relate the result of an acoustic signal with the frequency domain us‐

, Raúl Ruiz de la Hermosa González-Carrato1

[1] Adewusi, S. A., & Al-Bedoor, B. O. (2001). Wavelet analysis of vibration signals of an overhang rotor with a propagating transverse crack. *Journal of Sound and Vibration.*,

[2] Aktas, M., & Turkmenoglu, V. (2010). Wavelet-based switching faults detection in di‐ rect torque control induction motor drives. *Science, Measurement and Technology.*, 4(6),

[3] Al-Ahmar, E., Benbouzid, M. E. H., & Turri, S. (2008). Wind energy conversion sys‐ tems fault diagnosis using Wavelet analysis. *International Review of Electrical Engineer‐*

[4] Al-Hussain, K. M., & Redmond, I. (2002). Dynamic response of two rotors connected by rigid mechanical coupling with parallel misalignment. *Journal of Sound and Vibra‐*

[5] Amidror, I., & Hersch, R. (2009). The role of Fourier theory and of modulation in the prediction of visible moiré effects. *Journal of Modern Optics.*, 56(9), 1103-1118.

[6] Anon. (2005). Managing the wind: Reducing kilowatt-hour costs with condition mon‐

[7] Arora, R., Sharma, L., Birla, N., & Bala, A. (2011). An algorithm for image compres‐ sion using 2D wavelet transforms. *International Journal of Engineering Science & Tech‐*

,

ing the Fast Fourier Transform.

24 Digital Filters and Signal Processing

Fausto Pedro García Márquez1

246(5), 777-793.

*ing.*, 3(4), 646-652.

*tion.*, 249(3), 483-498.

itoring. *Refocus.*, 6(3), 48-51.

*nology.*, 3(4), 2758-2764.

303-310.

1 University of Castilla-La Mancha, Spain

2 CCSIT, King Faisal University, Saudi Arabia

and Noor Zaman2

Jesús María Pinar Perez1

**Author details**

**References**




## **Chapter 2**

## **Spectral Analysis of Exons in DNA Signals**

Noor Zaman, Ahmed Muneer and Fausto Pedro García Márquez

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/52763

## **1. Introduction**

DNA is found in blood cells that carry a nucleus, and is isolated from blood through a series of procedures including heat shock, thermal change and the application of different chemicals. A DNA sequence is organized into chromosomes, which in turn carry genes. Genes have regions that can be translated into protein and regions that make no contribution to protein production. Both kinds of region are made up of the nucleotides Adenine, Thymine, Cytosine and Guanine, and the order of these nucleotides determines the traits, habits and ways of living of all species. With the exponential growth of biological data, there is an enormous amount of such data that needs to be translated to protein; a successful translation yields important information about species.

Comparative analysis of the computational techniques employed on genetic datasets has given very interesting results: species can be distinguished from one another on the basis of their DNA properties, and a correct conversion leads to fruitful results. The literature has shown that direct comparative analysis is not as useful as approximate estimation, and so far no compact solution is available that delivers a robust translation from DNA to RNA.

It is a well-known phenomenon that nucleotide sequences in DNA exhibit a period-three property [3, 11], due to the codon composition and structure of the strand. This fundamental characteristic can be exploited to predict the coding regions that help determine the RNA sequences in DNA. The finding is of immense importance because cell growth and function are determined by the type of protein the cell produces, and it assists drug design and the discovery of genetic disorders caused by mutation in the structure of the nucleotide bases (the order in which they appear along the chain). Many approaches have been proposed in the literature that address this open optimization problem in computational biology.

© 2013 Zaman et al.; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Discrete Fourier Transforms [6, 7, 8] normally suffer from spectral leakage, which prevents an optimal power spectral density estimate. The Short Time Fourier Transform [2, 4], on the other hand, minimizes the leakage and is considered useful when the frequency content is desired together with location information: it can plot the time, amplitude and frequency components of a genetic signal.


Digital filters [5, 7, 13] present the spectral content of the signal around the periodicity property of the coding regions, but do not specify the frequency-time relationship with amplitude.

Dosay-Akbulut [14] emphasized the classification of introns into two groups, based on RNA secondary structure and self-splicing ability in variant species, using PCR.

A. Parent et al. [15] describe the importance of coordination between transcription and RNA processing, showing that the carboxy-terminal domain of RNA polymerase II acts as a common link between the two.

Al Wadi et al. [16] used wavelet transforms for forecasting volatility in experimental results, and M. Hashemi et al. [17] identified *Escherichia coli* O157:H7 isolated from cattle carcasses in the Mashhad abattoir by multiplex PCR.

A. Ali et al. [18] presented a histopathological study for the development of a lung tumor model assessing the anti-neoplastic effect of PMF in rodents.

J. Singh et al. [19] proposed a technique for predicting in vitro drug release mechanisms from extended-release matrix tablets.

## **2. Proposed approach**

The proposed approach consists of a series of components that analyze the DNA signal and enhance the prediction accuracy of genic regions along the DNA sequence. The major steps of the proposed approach are:


**•** Conversion of the target DNA stretch to a digital pattern employing an indicator sequence

**•** Signal extension to a desired length

**•** Decomposition of the signal using wavelet transforms

**•** Calculation of the detail coefficients of the signal

**•** Calculation of the approximate coefficients of the signal at level three

**•** Depiction of the original and synthesized signal at level three

**•** Density estimation of the signal

**•** Signal analysis for denoising

**•** Histogram estimation of the signal

**•** Shannon entropy calculation of the signal

**•** Magnitude and power estimation of the signal

**•** Estimation of exon and intron boundaries

**•** Calculation of the discrimination measure for PSD analysis
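Several of these steps (histogram, density and Shannon entropy estimation) are standard computations. A minimal Python sketch of the entropy step, with an illustrative input string:

```python
from collections import Counter
from math import log2

def shannon_entropy(seq: str) -> float:
    """Shannon entropy (bits/symbol) of a nucleotide sequence."""
    counts = Counter(seq)                     # histogram estimation
    n = len(seq)
    probs = [c / n for c in counts.values()]  # density estimation
    return -sum(p * log2(p) for p in probs)

print(shannon_entropy("ATCG" * 25))  # uniform 4-symbol sequence -> 2.0 bits
```

A uniform distribution over the four bases gives the maximum of 2 bits per symbol; coding and non-coding regions typically differ in this measure.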


As an elaboration, the DNA sequence is passed through a filter that transforms it into a digital pattern. This phase is accomplished employing an indicator sequence with the following weights for the nucleotides:

Adenine (A) = X(A) = 0.260, Thymine (T) = X(T) = 0.375, Guanine (G) = X(G) = 0.125, Cytosine (C) = X(C) = 0.370

The corresponding transform becomes

$$X\_{\text{IndSeq}}[k] = \sum\_{n=1}^{N} x\_{\text{IndSeq}}[n] \, e^{-j2\pi kn/N}, \qquad k = 1, \, 2, \dots, N \tag{1}$$

Indicator sequence
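Equation (1) is a standard DFT of the numeric indicator sequence. A minimal NumPy sketch, using the weights listed above; the test string and the inspected bin are illustrative assumptions:

```python
import numpy as np

# Nucleotide weights of the indicator sequence defined in the text
WEIGHTS = {"A": 0.260, "T": 0.375, "G": 0.125, "C": 0.370}

def indicator_dft(dna: str) -> np.ndarray:
    """Map a DNA string to its indicator sequence and apply Eq. (1)."""
    x = np.array([WEIGHTS[b] for b in dna])
    return np.fft.fft(x)  # X[k] = sum_n x[n] * exp(-j*2*pi*k*n/N)

# A perfectly period-3 test stretch concentrates its energy at k = N/3
X = indicator_dft("ATG" * 60)           # N = 180, so expect a peak at k = 60
power = np.abs(X) ** 2
print(int(np.argmax(power[1:90])) + 1)  # -> 60
```

The sharp peak at k = N/3 is exactly the period-three signature exploited for exon prediction.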

The signal is decomposed employing the wavelet transform of order three at level three:

$$\begin{aligned} y(t) &= A\_1(t) + D\_1(t) \\ &= \sum\_k c A\_2(k) \phi\_{j-2,k}(t) + \sum\_k c D\_2(k) w\_{j-2,k}(t) + \sum\_k c D\_1(k) w\_{j-1,k}(t) \\ &= A\_2(t) + D\_2(t) + D\_1(t) \\ &= A\_3(t) + D\_3(t) + D\_2(t) + D\_1(t) \\ &= \sum\_k c A\_3(k) \phi\_{j-3,k}(t) + \sum\_k c D\_3(k) w\_{j-3,k}(t) + \sum\_k c D\_2(k) w\_{j-2,k}(t) + \sum\_k c D\_1(k) w\_{j-1,k}(t) \end{aligned} \tag{2}$$

3rd order wavelet decomposition

The wavelet decomposition passes the signal through a series of low-pass and high-pass filters that decompose and synthesize it, reducing flicker noise (pink noise).
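The decomposition of Eq. (2) can be reproduced with the PyWavelets package; `db3` matches the third-order Daubechies wavelet described for Figure 7, while the noisy test signal is an assumption of this sketch:

```python
import numpy as np
import pywt

# Illustrative period-3 signal with additive noise
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * np.arange(512) / 3) + 0.1 * rng.standard_normal(512)

# Level-3 decomposition: approximation cA3 plus details cD3, cD2, cD1 (Eq. 2)
cA3, cD3, cD2, cD1 = pywt.wavedec(signal, "db3", level=3)

# Perfect reconstruction from the full coefficient set
rec = pywt.waverec([cA3, cD3, cD2, cD1], "db3")
print(np.allclose(rec, signal))  # -> True
```

Denoising would threshold the detail coefficients before reconstruction; the sketch only verifies the analysis/synthesis filter bank.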

The signal is then convolved with a window function (Kaiser window), defined below:

$$w(n) = \begin{cases} I\_0 \left( \beta \left( 1 - \left( (n - \alpha) / \alpha \right)^2 \right)^{\frac{1}{2}} \right) / I\_0(\beta) & 0 \le n \le M - 1 \\\\ 0 & \text{otherwise} \end{cases} \tag{3}$$

Kaiser window of length 351 bp
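Equation (3) is the standard Kaiser window (I0 is the zeroth-order modified Bessel function and alpha = (M − 1)/2), which NumPy provides directly; the shape parameter beta used below is an illustrative choice, not a value stated in the chapter:

```python
import numpy as np

M = 351      # window length in base pairs, as stated in the text
beta = 8.0   # illustrative shape parameter; the chapter does not fix beta

w = np.kaiser(M, beta)  # evaluates Eq. (3) for n = 0 .. M-1

# The window is symmetric and peaks at 1.0 in the centre (n = alpha = 175)
print(len(w), round(w[M // 2], 3))
```

The windowed signal is then obtained by convolution, e.g. `np.convolve(x, w, mode="same")`.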

Each section of the signal is traversed to calculate absolute and power values. Each segment is plotted on the power spectral graph, keeping the period-three property maintained at each step; the exon boundaries appear as sharp peaks. The final discrimination measure depicts the degree of relevance between exons and introns.


## **3. Results and discussions**

A specimen gene pattern, S. cerevisiae chromosome III (AF099922), has been taken for the experiments on the proposed approach. The gene is passed through the series of steps defined above.

At the processing stage, the dataset is passed through two kinds of filters. The first filter refines the data and outputs a data file that contains only nucleotide characters. The second filter operates on the output of the first and generates a file that contains numeric data, which is fed into the central engine for further processing.

Figure 1 shows a dataset that contains nucleotide characters together with some other characters. Removing the latter is a necessary first step, because feeding such input into the engine would badly degrade performance and bring false results.

**Figure 1.** Preprocessed dataset

Figure 2 represents a data glimpse that contains pure nucleotide characters.

**Figure 2.** Refined dataset


The EIIP indicator sequence transforms the nucleotides into numeric values as per its definition. A part of the signal obtained with the EIIP indicator sequence is shown in Figure 3.

**Figure 3.** Numeric translation of gene F56F11.5 (AF099922)

The binary indicator sequence is formed by replacing each nucleotide with the value 0 or 1, where 1 stands for the presence and 0 for the absence of a particular nucleotide at the specified location in the DNA signal.

Figure 4 gives a glimpse of the binary indicator sequence, which is one of the four parts of the translation of the gene file; only 1's and 0's are visible in this sequence.
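The four binary indicator sequences can be generated in a few lines; the short input string below is illustrative:

```python
import numpy as np

def binary_indicators(dna: str) -> dict[str, np.ndarray]:
    """One 0/1 sequence per nucleotide: 1 marks presence at that position."""
    return {b: np.fromiter((1 if c == b else 0 for c in dna), dtype=int)
            for b in "ATGC"}

ind = binary_indicators("ATGGC")
print(ind["G"])  # [0 0 1 1 0]
```

By construction the four sequences sum to 1 at every position, so any three determine the fourth.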

The complex indicator sequence transforms the sequence into four digital patterns with as‐ sociated weights. It is worth mentioning that this indicator sequence provided close range

Spectral Analysis of Exons in DNA Signals http://dx.doi.org/10.5772/52763 39


**Figure 4.** Binary indicator sequence

The complex indicator sequence is defined by replacing the nucleotides with the values 1, -1, iota and -iota.

Figure 5 shows a portion of gene AF099922 after application of the complex indicator sequence.


**Figure 5.** Complex indicator sequence applied to gene

The complex indicator sequence transforms the sequence into four digital patterns with associated weights. It is worth mentioning that this indicator sequence has provided close-range estimation of nucleotides in the literature.
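One such complex mapping can be sketched as below. Which base receives which of the four values varies across the literature and is not specified here, so this particular pairing is an assumption for illustration:

```python
# Assumed base-to-value pairing; the chapter only states the value set {1, -1, i, -i}.
COMPLEX = {'A': 1 + 0j, 'T': -1 + 0j, 'C': 0 + 1j, 'G': 0 - 1j}

def complex_sequence(dna):
    """Map a DNA string onto the complex indicator values."""
    return [COMPLEX[b] for b in dna.upper()]

z = complex_sequence("ATCG")  # [(1+0j), (-1+0j), 1j, -1j]
```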

This signal is then passed through the steps of a windowed STFT for exonic-prediction spectral analysis. This step extends the signal to a target length so that the spectral analysis can be performed properly over the whole signal.

Figure 6 shows the signal extended to the desired length. The original signal contained 8000 patterns. The convolution method suggests that, for a better approximation, the signal should be extended to 8192 patterns, the next power of two. The signal is mapped using a Kaiser window of length 351 base pairs. The previous power of two, 4096, would truncate the signal below its original length; truncation can degrade the results and yield a faulty approximation that deviates from the standard range of exons.
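The extension step can be sketched as follows (the Kaiser shape parameter `beta` is not given in the chapter, so the value below is an assumption):

```python
import numpy as np

def next_pow2(n):
    # Smallest power of two >= n (8000 -> 8192)
    return 1 << (n - 1).bit_length()

x = np.zeros(8000)                       # stand-in for the 8000-sample DNA signal
target = next_pow2(len(x))               # 8192; truncating to 4096 would lose data
x_ext = np.pad(x, (0, target - len(x)))  # zero-extend rather than truncate

win = np.kaiser(351, beta=8.6)           # Kaiser window, 351 base pairs (beta assumed)
```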

**Figure 6.** Signal extension to desired length

Figure 7 depicts the sketch of the db3 wavelet. The scaling and wavelet functions are shown, the decomposition low-pass and high-pass filters are identified, and the corresponding synthesis low-pass and high-pass filters are shown as well. The sketch demonstrates that the signal should be passed through these filters for further analysis, denoising and enhancement. The upward and downward curves illustrate the convolution of the signal with the window function at the desired nucleotide locations.


**Figure 7.** Wavelet of db3 sketch

Figure 8 shows a wavelet tree for the Shannon entropy of the signal. The tree structure contains nodes depicting different position factors, and colored coefficients for the terminal nodes can be observed. The first rectangle at the top right shows the analyzed signal at different nucleotide positions (the diffusion of bases along the DNA strand). Calculating the Shannon entropy assists in further identification of boundary values for individual nucleotides in the power spectral density estimation graphs.

**Figure 8.** Wavelet tree for Shannon entropy

The digital signal passes through refinement stages. First, the sequence was obtained as raw data and purified so that only the nucleotide bases remain, without degrading factors. This is an important step, because any unwanted characters may lead to a different set of nucleotide values that would deviate from the actual results.

The digital signal under discussion contains 8000 base pairs. The same dataset has been used extensively in the literature by other researchers and serves as a benchmark. The spectral estimation graph reveals that it contains five exonic regions at different nucleotide ranges. Identifying these ranges close to the standard range requires denoising the signal and selecting an appropriate window function for the convolution. The standard convolution multiplies the signal with a portion of the window function; this is why the signal was extended to a power of two to reach the desired length. Each frame of the signal is computed with equal size so that the power spectral graph is uniform in all characteristics.

For the discrete wavelet transform of order three (db3), the signal is decomposed and synthesized. The db3 wavelet produces quickly vanishing coefficients for the approximation and detail patterns.

Figure 9 shows a glimpse of the original signal: 8000 base pairs in the form of a digital pattern. The cumulative histogram of the signal shows the range of weight values assigned to the nucleotide base pairs. Nucleotides with values higher than 0.25 have high frequency, while those between 0.1 and 0.25 have lower frequency. The individual histogram also shows three separate characterizations of nucleotide weight values. The standard deviation is 0.09037, the median absolute deviation is 0.11, and the mean absolute deviation is 0.07843. The maximum is 0.375, the minimum is 0.125, and the average is 0.25.


**Figure 9.** Original signal at level 3
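The decompose-and-synthesize step can be sketched as one level of a discrete wavelet transform. For brevity the sketch uses the Haar wavelet rather than the chapter's db3 (only the filter taps differ); the point illustrated is perfect reconstruction:

```python
import numpy as np

def dwt_haar(x):
    """One analysis level: low/high-pass filtering plus downsampling."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)  # approximation coefficients
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail coefficients
    return approx, detail

def idwt_haar(approx, detail):
    """One synthesis level: upsampling plus reconstruction filtering."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

x = np.array([0.126, 0.134, 0.0806, 0.1335] * 4)  # toy numeric DNA signal
a, d = dwt_haar(x)
x_rec = idwt_haar(a, d)  # reconstructs x exactly
```

In practice a library such as PyWavelets (`pywt.wavedec(x, 'db3', level=3)`) performs the three-level db3 decomposition used in the chapter.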

The histogram shows the frequency of nucleotide bases in the signal. Since the signal was mapped with an enhanced indicator sequence that assigns suitable weights to the nucleotide bases, the histogram of the signal is uniform. Almost half of the signal is diffused in the first band and half in the second band; the first half-band shows the smaller histogram values (frequency components) while the second half-band shows larger ones.

The individual histogram components depend on the individual nucleotide bases; for instance, the numeric value of adenine is 0.260, which is plotted against the numeric values of the other bases in the individual histogram. Depending on the weights assigned to thymine and cytosine, the histogram shape may change.

It is also important to note that the histogram of frequency components reflects the redundancy of bases in the digital pattern. This repetition depends on the order of nucleotides in the DNA sequence, which defines the habits, traits and other characteristics of species.

Figure 10 shows the synthesized signal at level three. Like the original signal, the synthesized signal has the same histogram characteristics: 8000 base pairs in the form of a digital pattern, a cumulative histogram showing the range of weight values assigned to the nucleotide base pairs, high frequency for values above 0.25 and lower frequency between 0.1 and 0.25, and three separate characterizations of nucleotide weight values in the individual histogram. The standard deviation is 0.09037, the median absolute deviation is 0.11, the mean absolute deviation is 0.07843, the maximum is 0.375, the minimum is 0.125, and the average is 0.25.

**Figure 10.** Synthesized signal at level 3


The synthesized signal shows the same histogram even after decomposition: it is perfectly reconstructed by the discrete wavelet transform. The approximation and detail coefficients of the signal are obtained by passing it through a series of filters, which were defined and constructed in Matlab. The decomposed signal is the sum of the approximation and detail coefficients at level three together with the detail coefficients at levels two and one.


The further we decompose the signal, the more loosely packed the components become.

Figure 11 depicts the signal decomposition into approximation and detail coefficients. The symbol s represents the original signal. The approximation and detail coefficients at level three show the reduced complexity of the signal.

**Figure 11.** Signal decomposition

Figure 12 presents the histograms of the approximation and detail coefficients. For the detail coefficients, the concentration of components at level one is lower than at the other levels, at level two the components are more concentrated, and at level three they are the most closely packed. The histograms of the approximation coefficients show the same phenomenon. It can be observed that the original and synthesized signals contain the same number of components, and that the concentrations are uniform across these histograms, which indicates perfect reconstruction of the digital signal.

**Figure 12.** Histogram of signal


Figure 13 shows the density estimation of the approximation and detail coefficients. The density estimate of the original signal shows, in general, the numeric values of the nucleotides present in the digital signal. The approximation coefficients at level three present a sharp peak at about 0.25; the signal remains uniform throughout except for another peak from about 0.37 to the end of the signal. The density estimation of the detail coefficients at level one shows a similar sharp peak around 0.27, and the same peak can be observed around 0.40 at level two. At level three the phenomenon is the same, but the components are more loosely packed than at level two. At a granular level, the components are more tightly packed at level one than at the other levels.

**Figure 13.** Density estimation of signal

Figure 14 shows the resulting denoised signal. The preview of the detail coefficients at level three shows loosely packed signal components. The original signal is shown in red. The thresholded coefficients are shown as vertical bars over the whole nucleotide range (8000 base pairs). The detail-level coefficients show a hierarchy of packed, loosely packed and more loosely packed components, reflecting the gradual improvement of the signal through denoising.

**Figure 14.** Denoised signal
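The thresholding of detail coefficients behind such a denoised signal can be sketched as below. The chapter does not state its threshold rule, so the soft/universal-threshold combination here is an assumption for illustration:

```python
import numpy as np

def soft_threshold(detail, sigma):
    """Shrink detail coefficients toward zero (soft thresholding).

    Uses the universal threshold sigma*sqrt(2*ln(N)) — an assumed rule,
    not necessarily the one used in the chapter.
    """
    t = sigma * np.sqrt(2 * np.log(len(detail)))
    return np.sign(detail) * np.maximum(np.abs(detail) - t, 0.0)

d = np.array([0.9, -0.05, 0.02, -1.1])   # toy detail coefficients
d_dn = soft_threshold(d, sigma=0.05)     # small coefficients -> 0, large ones shrunk
```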


Figure 15 shows the approximation coefficients at level three. A sharp gradual change can be observed in the cumulative histogram, with peaks more pronounced from point one onwards. In the other histogram the peaks are barely visible over roughly the first 0.6 points; there is then a sharp gradual increase of the bars up to a maximum of about 0.07 points, followed by a gradual decrease down to point one, after which the peaks are less pronounced. The approximation coefficients at level three show the signal as loosely packed components.

**Figure 15.** Approximation coefficients at level 3

Figure 16 shows the detail coefficients at level three. A sharp gradual change can be observed in the cumulative histogram, with peaks more pronounced from point one onwards. In the other histogram the peaks are barely visible over roughly the first 0.6 points; there is then a sharp gradual increase of the bars up to a maximum of about 0.07 points, followed by a gradual decrease down to point one, after which the peaks are less pronounced. The detail coefficients at level three show the signal as loosely packed components.

**Figure 16.** Coefficients of detail at level 3

It can be observed that the detail coefficients at level one are packed, showing a higher concentration of nucleotides, while the detail coefficients at level two are loosely packed. The coefficients at level three are more significant than those at the other levels, which shows that the signal has been filtered for refinement. The signal was passed through the series of db3 wavelet filters, which denoised it as a result of the reconstruction.

Table 1 presents the nucleotide ranges of the exons, allowing a comparative analysis of the various approaches; clear differences can be observed. The binary and EIIP methods show a wide difference from the standard range. The complex method performs better than these first two approaches, and the digital filters behave accordingly. The proposed approach yields more significant results than the other prevailing approaches.


**References**

10.1186/1471-2105-11-S1-S50, 2010

for Communication. CODEC 2009, Page(s): 1 - 4, 2009

Networks, Pages: 1029 - 1036, ISBN:978-3-642-02477-1, 2009

Mumbai, India, ISBN:978-1-60558-351-8, 2009

quisition and processing, Malaysia, 2009

Page(s): 1918 - 1921, 2008

[1] Tina P George and Tessamma Thomas, "Discrete wavelet transform de-noising in eu‐ karyotic gene splicing", BMC Bioinformatics, 11(Suppl 1):S50, doi:

Spectral Analysis of Exons in DNA Signals http://dx.doi.org/10.5772/52763 51

[2] Roy, M., Biswas, S. and Barman, S., "Identification and Analysis of Coding and Non‐ coding Regions of a DNA Sequence by Positional Frequency Distribution of Nucleo‐ tides (PFDN) Algorithm", 4th International Conference on Computers and Devices

[3] GuoShuo and Zhu Yi-sheng, "Prediction of Protein Coding Regions by Support Vec‐ tor Machine", International Symposium on Intelligent Ubiquitous Computing and Education, Digital Object Identifier: 10.1109/IUCE.2009.141, Page(s): 185 - 188, 2009 [4] J. Quintanilla-Domínguez, B. Ojeda-Magaña, J. Seijas, A. Vega-Corona, D. Andina, "Edges Detection of Clusters of Microcalcifications with SOM and Coordinate Logic Filters", Proceedings of the 10th International Work-Conference on Artificial Neural

[5] Ruchira. Ajay Jadhav, Roopa Ashok Thorat, "Computer aided breast cancer analysis and detection using statistical features and neural networks", Proceedings of the In‐ ternational Conference on Advances in Computing, Communication and Control,

[6] Muneer Ahmad and Hassan Mathkour, "Multiple Sequence Alignment with GAP consideration by pattern matching technique", International Conference on signal ac‐

[7] Muneer Ahmad and Hassan Mathkour, "Genome sequence analysis by Matlab histo‐ gram comparison between Image-sets of genetic data", WCSET, Hong Kong, 2009 [8] Muneer Ahmad and Hassan Mathkour, "A pattern matching approach for redundan‐

[9] HazrinaYusofHamdani and SitiRohkmahMohdShukri, "Gene prediction system", In‐ ternational Symposium on Information Technology, Volume: 2, Digital Object Identi‐

[10] ShuoGuo and Yi-Sheng Zhu, "An integrative algorithm for predicting protein coding regions", IEEE Asia Pacific Conference on Circuits and Systems,Digital Object Identi‐

[11] Kakumani, R., Devabhaktuni, V., and Ahmad, M.O., "Prediction of protein-coding re‐ gions in DNA sequences using a model-based approach", IEEE International Sympo‐ sium on Circuits and Systems, Digital Object Identifier: 10.1109/ISCAS.2008.4541818,

[12] Akhtar, M., Ambikairajah, E. and Epps, J., "Optimizing period-3 methods for eukary‐ otic gene prediction", IEEE International Conference on Acoustics, Speech and Signal

cy detection in bi-lingual and mono-lingual Corpora", IAENG 2009

fier: 10.1109/ITSIM.2008.4631728, Page(s): 1 - 7, 2008

fier: 10.1109/APCCAS.2008.4746054, Page(s): 438 - 441, 2008

**Table 1.** Range of exons for different methods

## **4. Conclusion**

Bioinformatics is a very rapidly emerging field of research. The genome sequence analysis is an interesting and challenging task that needs great attention. The analysis brings very promising relevance between species. The proposed approach provides a way to better identify the genetic regions in mixture of exon-intron noise. The focus directed to minimize the leakage of frequency contents by adoption of an optimal indicator sequence. We also re‐ duced the signal noise by using Kaiser Window function with length 351 base pairs. The spectral density estimation was enhanced with application of wavelet transforms. The pro‐ posed dimensions reduced the noise and increased the sharp peaks of exons in density graphs. We have observed significant improvement in results as a comparative analysis be‐ tween existing techniques and compared the results with strands NCBI range.

## **Author details**

Noor Zaman1\*, Ahmed Muneer1 and Fausto Pedro García Márquez2

\*Address all correspondence to: nzaman@kfu.edu.sa

\*Address all correspondence to: mmalik@kfu.edu.sa

\*Address all correspondence to: FaustoPedro.Garcia@uclm.es

1 College of Computer Sciences & Information Technology,King Faisal University, Saudi Arabia

2 ETSI Industriales, Universidad Castilla-La Mancha, Ciudad Real, Spain

## **References**

**Method E1 E2 E3 E4 E5**

**Table 1.** Range of exons for different methods

**4. Conclusion**

50 Digital Filters and Signal Processing

**Author details**

Arabia

Noor Zaman1\*, Ahmed Muneer1

\*Address all correspondence to: nzaman@kfu.edu.sa

\*Address all correspondence to: mmalik@kfu.edu.sa

\*Address all correspondence to: FaustoPedro.Garcia@uclm.es

2 ETSI Industriales, Universidad Castilla-La Mancha, Ciudad Real, Spain

Binary Method 656-1206 2406-3106 3806-4406 5306-5806 7106-7706 EIIP Method 706-1206 2206-2906 3906-4406 5206-5806 7206-7706 Complex Method 750-1100 2600-2906 3600-4406 5206-5706 7106-7600 Filter 1 (Anti-notch) 656-1206 2450-3106 3806-4450 5306-5850 7106-7750 Filter 2 (Multistage) 706-1250 2206-2950 3906-4450 5206-5850 7206-7706 proposed Method 750-1050 2450-2906 3950-4380 5206-5600 7220-7680 NCBI Range 928-1039 2528-2857 4114-4377 5465-5644 7255-7605

Bioinformatics is a very rapidly emerging field of research. The genome sequence analysis is an interesting and challenging task that needs great attention. The analysis brings very promising relevance between species. The proposed approach provides a way to better identify the genetic regions in mixture of exon-intron noise. The focus directed to minimize the leakage of frequency contents by adoption of an optimal indicator sequence. We also re‐ duced the signal noise by using Kaiser Window function with length 351 base pairs. The spectral density estimation was enhanced with application of wavelet transforms. The pro‐ posed dimensions reduced the noise and increased the sharp peaks of exons in density graphs. We have observed significant improvement in results as a comparative analysis be‐

and Fausto Pedro García Márquez2

1 College of Computer Sciences & Information Technology,King Faisal University, Saudi

tween existing techniques and compared the results with strands NCBI range.


Processing, Digital Object Identifier: 10.1109/ICASSP.2008.4517686, Page(s): 621 - 624, 2008

**Chapter 3**

**Deterministic Sampling for Quantification of Modeling**

## **Deterministic Sampling for Quantification of Modeling Uncertainty of Signals**

Jan Peter Hessling


Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/52193

## **1. Introduction**

Statistical signal processing [1] traditionally focuses on the extraction of information from noisy measurements. Typically, parameters or states are estimated by various filtering operations. Here, the quality of signal processing operations will be assessed by evaluating the statistical uncertainty of the result [2]. The processing could, for instance, simulate, correct, modulate, evaluate, or control the response of a physical system. Depending on the addressed task and the system, this can often be formulated in terms of a differential or difference *signal processing model equation* in time, with uncertain parameters and driven by an exciting input signal corrupted by noise. The quantity of primary interest may not be the output signal itself but can be extracted from it. If this uncertain dynamic model is linear-in-response, it can be translated into a linear digital filter for highly efficient and standardized evaluation [3]. A statistical model of the parameters, describing to which degree the dynamic model is known and accurate, will be assumed given, instead of being the target of investigation as in system identification [4]. *Model uncertainty* (of parameters) is then *propagated* to *modeling uncertainty* (of the result). The two are to be clearly distinguished: the former relates to the input of the model, while the latter relates to the output.

Quantification of the uncertainty of complex computations is an emerging topic, driven by the general need for quality assessment and the rapid development of modern computers. Applications include various mechanical and electrical problems [5-7] described by uncertain differential equations, as well as statistical signal processing. The so-called brute force Monte Carlo method [8-9] is the indisputable reference method for propagating model uncertainty. Its main disadvantage is its slow convergence, or equivalently, its requirement of many samples of the model (large ensembles). Thus, it cannot be used for demanding complex models. The ensemble size is the key aspect which motivates deterministic sampling. Small ensembles are found by

© 2013 Hessling; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

substituting the random generator with a customized deterministic sampling rule. Since any computerized random generator produces a pseudo-random rather than a truly random sequence, this is equivalent to modifying the random generator to be *accurate* for *small* ensembles of *definite* size, rather than *asymptotically exact* (infinite ensembles). Correctness of very large ensembles is of theoretical but hardly practical interest for complex models, if the convergence to the asymptotic result is very slow.
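The reference method can be sketched numerically. The following is a minimal illustration, not taken from the chapter: model uncertainty in a hypothetical first-order IIR filter is propagated by brute-force Monte Carlo, yielding the time-dependent mean and standard deviation of the output at the cost of a large ensemble.

```python
# Brute-force Monte Carlo propagation of model uncertainty through a digital
# filter. Hypothetical example (not from the chapter): a first-order IIR
# system y[t] = b0*x[t] - a1*y[t-1] with an uncertain feedback parameter a1.
import numpy as np

def iir_filter(b0, a1, x):
    """Evaluate the difference equation y[t] = b0*x[t] - a1*y[t-1]."""
    y = np.zeros(len(x))
    for t in range(len(x)):
        y[t] = b0 * x[t] - (a1 * y[t - 1] if t > 0 else 0.0)
    return y

rng = np.random.default_rng(0)
x = np.zeros(50)
x[0] = 1.0                                 # unit impulse input
a1_samples = rng.normal(-0.5, 0.02, 4000)  # model uncertainty: a1 ~ NRM(-0.5, 0.02)

ensemble = np.array([iir_filter(1.0, a1, x) for a1 in a1_samples])
y_mean = ensemble.mean(axis=0)             # time-dependent mean of the output
y_std = ensemble.std(axis=0)               # time-dependent modeling uncertainty
```

The 4000 model evaluations used here are exactly the cost that deterministic sampling aims to eliminate by replacing the random ensemble with a small, definite one.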

## **2. Modeling uncertainty of signals**

### **2.1. Problem definition**

Suppose the (output) signal *y*(*x*, *t*)∈*R* of interest is generated from the (input) signal *x*(*t*)∈*R* passing through a dynamic system *H* , with parameters *ak* ∈*R*, *bk* ∈*R*,

$$\left[\sum_{k=0}^{u} a_k D^k \right] y = \left[\sum_{k=0}^{v} b_k D^k \right] x, \quad a_0 = 1. \tag{1}$$



The model is given in terms of *n* = *u* + *v* + 1 uncertain parameters, which can be arranged in a column vector *q* = (*b*<sub>0</sub> ⋯ *b<sub>v</sub>* *a*<sub>1</sub> ⋯ *a<sub>u</sub>*)<sup>*T*</sup>. For systems continuous-in-time (CT), *D* = ∂<sub>*t*</sub> is the differential operator in time, while for systems discrete-in-time (DT), *D* = *Δ*<sup>−1</sup> is the negative unit displacement operator, *Δ*<sup>−1</sup>*x<sub>k</sub>* = *x*<sub>*k*−1</sub>. There are several approximate methods to sample CT systems to DT systems, see [3] and references therein. The discretization techniques are beyond the scope of this presentation and DT systems will be assumed. If *u* ≥ 1, there is feedback in the system, which results in an impulse response *h*(*q*, *t*) of infinite duration. For finite accuracy, however, the duration is finite. The system is linear-in-response, *y*(*x* = *αx*<sub>1</sub> + *βx*<sub>2</sub>, *t*) = *αy*(*x*<sub>1</sub>, *t*) + *βy*(*x*<sub>2</sub>, *t*). Most importantly, the system is non-linear-in-parameters if *u* ≥ 1. This is the typical situation addressed here.
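The finite effective duration of a feedback system's impulse response can be illustrated with a small sketch; the one-pole DT system below is an assumption for illustration, not an example from the chapter.

```python
# Impulse response of a one-pole DT system y[t] = x[t] - a1*y[t-1] (u = 1).
# The response h[t] = (-a1)^t never vanishes exactly, but falls below any
# finite tolerance after a finite number of samples.
import numpy as np

a1 = -0.5
h = np.array([(-a1) ** t for t in range(100)])        # h[t] = 0.5^t

tol = 1e-6                                            # required accuracy
effective_duration = int(np.argmax(np.abs(h) < tol))  # first sample below tol
```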

Systems of the form in Eq. 1 may be directly realized as digital filters, *y*(*q*, *x*, *t*) = *h*(*q*) ∗ *x*(*t*), where ∗ denotes the filtering operation. The coefficients *b<sub>k</sub>* and *a<sub>k</sub>* are the numerator and denominator coefficients of the filter with impulse response *h*(*q*), respectively. Its z-transform *H*(*q*, *z*) is obtained with the substitution *Δ* → *z*. The parameterization can be changed to, for instance, gain *K*, poles *p<sub>k</sub>* and zeros *z<sub>k</sub>*, or poles *p<sub>k</sub>* and residues *r<sub>k</sub>*,

$$Y(z) = H(z)X(z): \quad H(q, z) = \frac{\sum_{k=0}^{v} b_k z^{-k}}{\sum_{k=0}^{u} a_k z^{-k}} = K \frac{\prod_k (z - z_k)/(1 - z_k)}{\prod_k (z - p_k)/(1 - p_k)} = \sum_k \frac{r_k}{z - p_k} \tag{2}$$

The parameterization should be carefully chosen as it affects the convergence rate of Taylor expansions (section 3.1) as well as the physical interpretation. The parameters and their statistics are preferably extracted from measurements using system identification techniques [4]. Note that complex-valued poles and zeros are conjugated in pairs [10].
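The re-parameterization in Eq. 2 can be sketched with NumPy's polynomial root finder; the coefficient values below are illustrative assumptions only.

```python
# From transfer-function coefficients {b_k, a_k} to zeros and poles (Eq. 2).
# Coefficient values are assumed for illustration.
import numpy as np

b = np.array([1.0, -0.2])       # numerator: b0 + b1*z^-1
a = np.array([1.0, -1.0, 0.5])  # denominator: a0 + a1*z^-1 + a2*z^-2, a0 = 1

zeros = np.roots(b)             # zeros z_k of H(q, z)
poles = np.roots(a)             # poles p_k of H(q, z)
```

The real-valued coefficients force the complex poles to appear in conjugate pairs, as noted in the text, and here |*p<sub>k</sub>*| < 1, so the filter is stable.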

The problem to be addressed is the statistical evaluation of any function *g*(*y*) of the output *y*(*t*) = *h*(*q*, *t*) ∗ *x*(*t*), given statistical models of *q* and *x*. It will here consist of evaluating its time-dependent mean ⟨*g*(*y*)⟩ and standard deviation ⟨(*g*(*y*) − ⟨*g*(*y*)⟩)<sup>2</sup>⟩<sup>1/2</sup>. Without loss of generality, the analysis will be made for *g*(*y*) = *y*. Digital filtering will be utilized for evaluating samples of the model, i.e. filtering with definite sets of *q* and signals *x*.

### **2.2. Nomenclature**


Statistical expectations of any signal, model or function *g*(*q*) over finite discrete *E* as well as continuous ensembles or probability distributions (no subscript) are defined as,

$$\begin{aligned} \langle g \rangle_E &= \frac{1}{m} \sum_{k=1}^{m} g\left(\hat{q}^{(k)}\right), \\ \langle g \rangle &= \int_Q g(q)\, f_q(q)\, dq. \end{aligned} \tag{3}$$

Samples of *q* are labeled *q̂*, with their components organized in columns. Sample indices will be given as superscripts in parentheses, e.g. *q̂*<sup>(*k*)</sup> is a column vector denoting the *k*-th sample of parameter *q*. Variations from the mean are written as *δq*(*E*) ≡ *q* − ⟨*q*⟩(*E*).

Only uniform (UNI) and normal (NRM) distributions will be utilized. Either the mean and standard deviation, or the interval in brackets, will be given in parentheses, e.g. *q* ~ UNI(0.5, 1/(2√3)) = UNI([0, 1]). Statistical moments *M<sub>i</sub>*<sup>(*k*)</sup> = ⟨(*δq<sub>i</sub>*)<sup>*k*</sup>⟩ carry the information contained in the marginalized probability density functions (pdf) *f<sub>i</sub>*(*δq<sub>i</sub>*) ≡ ∫<sub>*Q*</sub> *f<sub>q</sub>*(*δq*) *dq*<sub>1</sub>⋯*dq*<sub>*i*−1</sub>*dq*<sub>*i*+1</sub>⋯*dq<sub>n</sub>*, where *Q* denotes the sample space. While *M<sub>i</sub>*<sup>(2)</sup> describes the width of *f<sub>i</sub>*(*δq<sub>i</sub>*), *M<sub>i</sub>*<sup>(3)</sup> is related to, but different from, its *skewness* [11]. Further, the shape is reflected in *M<sub>i</sub>*<sup>(4)</sup>, similarly to the *kurtosis* [11]. Since UNI(0,1) and NRM(0,1) are normalized and symmetric, *f<sub>i</sub>*(*δq<sub>i</sub>*) = *f<sub>i</sub>*(−*δq<sub>i</sub>*), *M<sub>i</sub>*<sup>(2)</sup> = 1 and *M<sub>i</sub>*<sup>(3)</sup> = 0. Their differences are first reflected in their fourth moment, *M<sub>i</sub>*<sup>(4)</sup> = 1/(4√5), 1/(2√3) ≈ (0.11, 0.29) for UNI(0,1) and NRM(0,1), respectively. The maximum variation of the parameter *q<sub>i</sub>* is expressed by the range *M<sub>i</sub>*<sup>(∞)</sup> ≡ lim<sub>*k*→∞</sub> |*M<sub>i</sub>*<sup>(*k*)</sup>|<sup>1/*k*</sup> = max(|*δq<sub>i</sub>*|). Dependencies are expressed in mixed moments ⟨(*δq*<sub>*i*1</sub>)<sup>*k*<sub>1</sub></sup>(*δq*<sub>*i*2</sub>)<sup>*k*<sub>2</sub></sup>⋯⟩(*E*). The discussion will be limited to correlations described by the covariance matrix cov(*q*) = ⟨*δq* *δq*<sup>*T*</sup>⟩, where the vector multiplication is an outer product.
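The discrete expectations of Eq. 3 and the covariance as an averaged outer product can be sketched directly; the distribution parameters below are assumed for illustration.

```python
# Ensemble statistics per Eq. 3: expectations as averages over m samples,
# and cov(q) = <dq dq^T> as an averaged outer product of the variations.
import numpy as np

rng = np.random.default_rng(1)
true_cov = np.array([[0.04, 0.01], [0.01, 0.09]])
q = rng.multivariate_normal([1.0, -0.5], true_cov, size=20000)  # samples q^(k)

q_mean = q.mean(axis=0)                               # <q>_E
dq = q - q_mean                                       # variations dq
cov = (dq[:, :, None] * dq[:, None, :]).mean(axis=0)  # <dq dq^T>, outer product
```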

Matrix size will be indicated with subscripts, e.g. *V*<sub>*n*×*m*</sub> is a matrix of *n* rows and *m* columns with elements *V<sub>jk</sub>*, *j* = 1, …, *n* and *k* = 1, …, *m*. The identity matrix will be denoted *I*, while matrices with equal elements (*i*) will have their size attached, (*i*<sub>*n*×*n*</sub>)<sub>*jk*</sub> = *i*. For a matrix (vector) *D*, diag(*D*) is a vector (diagonal matrix) with components (diagonal elements) equal to the diagonal elements (components) of *D*. The trace of a matrix is denoted Tr.


A method will be called intrusive if manipulations of the model are required. For the targeted highly complex models, it will be assumed that the computational cost of their evaluation dominates all other calculations. The efficiency *ρ* of any method will accordingly be defined by the least required number of evaluations of the original model.

### **2.3. Fundamentals of non-linear propagation of uncertainty**

Linearity in parameters (LP) is to be distinguished from linearity in response (LR),

$$\begin{aligned} \text{LR}: \quad y(q, a_1 x_1 + a_2 x_2, t) &= a_1 y(q, x_1, t) + a_2 y(q, x_2, t), \quad \forall x_1, x_2 \\ \text{LP}: \quad y(q_1 + q_2, x, t) &= y(q_1, x, t) + C(x, t)^T (q_2 - q_1), \quad \forall q_1, q_2 \end{aligned} \tag{4}$$

for some vector *C*<sub>*n*×1</sub>. Different concepts of linearity are used, *y*(*q*<sub>1</sub> + *q*<sub>2</sub>, *x*, *t*) ≠ *y*(*q*<sub>1</sub>, *x*, *t*) + *y*(*q*<sub>2</sub>, *x*, *t*) for LP models. Strictly speaking, LP denotes models that are affine, i.e. written as linear combinations of their parameters. Most constructed systems are designed to be as close to LR as possible, while most models are not LP. There is hence no contradiction in non-linear (LP) propagation of uncertainty with linear (LR) digital filters, as here.
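The distinction can be verified numerically. The sketch below uses an assumed first-order feedback system (not from the chapter): the LR identity of Eq. 4 holds exactly, while the LP identity fails because the response is non-linear in the feedback parameter.

```python
# Numerically distinguishing linearity in response (LR) from linearity in
# parameters (LP), cf. Eq. 4, for y[t] = b0*x[t] - a1*y[t-1] with q = (b0, a1).
import numpy as np

def model(q, x):
    b0, a1 = q
    y = np.zeros(len(x))
    for t in range(len(x)):
        y[t] = b0 * x[t] - (a1 * y[t - 1] if t > 0 else 0.0)
    return y

rng = np.random.default_rng(2)
x1, x2 = rng.normal(size=30), rng.normal(size=30)
q1, q2 = np.array([1.0, -0.4]), np.array([0.8, -0.6])

# LR holds: y(q, a1*x1 + a2*x2) == a1*y(q, x1) + a2*y(q, x2)
lr_ok = np.allclose(model(q1, 2 * x1 + 3 * x2),
                    2 * model(q1, x1) + 3 * model(q1, x2))
# LP fails for u >= 1: the response is non-linear in the feedback parameter
lp_ok = np.allclose(model(q1 + q2, x1), model(q1, x1) + model(q2, x1))
```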

For non-linear propagation of uncertainty, the asymmetry of the resulting pdf is central. It can be expressed as a lack of commutation of non-linear propagation and statistical evaluation of a center value (⋅<sub>*C*</sub>), as measured with the *scent* [12],

$$\zeta \equiv y_C(q) - y(q_C). \tag{5}$$

The method for evaluating the center is left unspecified, as there are several alternatives. The most common choice is to use the mean, ⋅<sub>*C*</sub> = ⟨⋅⟩. The lowest order approximation of the scent can then be obtained by calculating the expectation of a Taylor expansion (section 3.1), *ζ* = Tr[cov(*q*) ⋅ *Η*(*y*)]/2, where *Η*(*y*)<sub>*jk*</sub> = ∂<sup>2</sup>*y*/∂*q<sub>j</sub>*∂*q<sub>k</sub>* is the Hessian matrix signal of *y*, evaluated at ⟨*q*⟩. The scent is related to the skewness *γ* = ⟨*δy*<sup>3</sup>⟩/⟨*δy*<sup>2</sup>⟩<sup>3/2</sup>: the *additional* asymmetry caused by the non-linearity of the model is measured with the scent, but differently. The scent addresses how parametric uncertainties are propagated and not how the result is distributed, e.g. *ζ* = 0 for all LP models, for which *γ* may attain any value. A finite scent thus implies that the model is not LP, but not the reverse. The scent should not be confused with bias. Bias is a property of an estimator, while scent is a property of a model. For every model, such as the REF (section 6.1), many different estimators of *y<sub>C</sub>*(*q*) can be used, e.g. the different ensembles in section 5.6, see the result in Fig. 5 (left). Consequently, an unbiased estimator of *y<sub>C</sub>*(*q*) correctly accounts for, rather than ignores, its finite scent, or deviation from *y*(*q<sub>C</sub>*).

The scent is important since *y<sub>C</sub>* and not *y*(*q<sub>C</sub>*) is the main result utilized in applications. The corresponding difference [13] in the standard deviation *M<sub>y</sub>*<sup>(2)</sup> from its linearized approximation ∇*y*<sup>*T*</sup> cov(*q*) ∇*y*, with (∇*y*)<sub>*jk*</sub> = ∂<sub>*j*</sub>*y*(*t<sub>k</sub>*), affects the confidence in the result. Its accuracy is usually less critical. An accurate evaluation of the scent is perhaps the strongest feature of the unscented Kalman filter, which provides the foundation for the presented approach as well as the origin of the term 'scent'.

## **3. Conventional methods**


A brief summary of the most traditional related methods of uncertainty propagation, applicable to signal processing models, is given here together with their pros and cons. Advanced intrusive methods, like polynomial chaos expansions [14-15], not directly related to the proposed method, are omitted.

### **3.1. Taylor expansions**

The indisputable default methods of uncertainty propagation are based on Taylor expansions. These methods are intrusive if the differentiations are made analytically. Convergent series require regular differentiable models, and numerical or analytical complexity makes them error prone. Their applicability is therefore limited for complex models.

The transfer function *H* (*q*, *z*) of the digital filter can be expanded in a Taylor series,

$$\begin{split} \delta H(q, z) &= H(q, z) - H(\langle q \rangle, z) = \sum_{k=1}^{+\infty} \frac{1}{k!} \left(\delta q^{T} \nabla_{q}\right)^{k} H(\langle q \rangle, z) = \delta q^{T} \nabla_{q} H + \frac{1}{2} \sum_{k,l}^{n} \delta q_{k}\, \delta q_{l} \frac{\partial^{2} H}{\partial q_{l} \partial q_{k}} + \dots \\ &= \delta q^{T} E^{(1)}(\langle q \rangle, z) + \mathrm{Tr}\left\{ \left[\delta q\, \delta q^{T}\right] \cdot E^{(2)}(\langle q \rangle, z) \right\} + \dots \end{split} \tag{6}$$

This defines *n* sensitivity systems (column vector) *E<sub>k</sub>*<sup>(1)</sup>(⟨*q*⟩, *z*), *n*(*n* + 1)/2 unique quadratic variation systems (matrix) *E*<sup>(2)</sup>(⟨*q*⟩, *z*), and so on. These variation systems differ (intrusive) from *H*(*q*, *z*), but may nevertheless be realized as digital filters [3,7,10], just as *H*(*q*, *z*). The corresponding variation of *y*(*q*, *x*, *t*) = *h*(*q*, *t*) ∗ *x*(*t*) is given by,

$$\delta y(q, x, t) = \delta q^{T} \left[ e^{(1)}(\langle q \rangle, t) * x(t) \right] + \frac{1}{2} \mathrm{Tr}\left\{ \left[\delta q\, \delta q^{T}\right] \cdot \left[ e^{(2)}(\langle q \rangle, t) * x(t) \right] \right\} + \dots \tag{7}$$

where *e*<sup>(*k*)</sup>(⟨*q*⟩, *t*) are the impulse responses of the systems *E*<sup>(*k*)</sup>(⟨*q*⟩, *z*). Utilizing digital filters with impulse responses *e*<sup>(*k*)</sup>(⟨*q*⟩, *t*), the differentiations are conveniently done *once*, and not repeatedly for every signal *x*(*t*). The linearity in parameters of the model can easily be studied for many different input signals *x*(*t*) by evaluating *e*<sup>(*k*)</sup>(⟨*q*⟩, *t*) ∗ *x*(*t*). Due to the large number of variation systems, however, higher order perturbation analyses rapidly become intractable. The established method is limited to linearization (LIN) [16] (*e*<sup>(1)</sup>). It will always incorrectly yield vanishing scent, *ζ* = 0. A first order estimate of *ζ* is instead given by the expectation of the second term in Eq. 7, *ζ* ≈ Tr[cov(*q*)*Η*(⟨*q*⟩, *t*)]/2, where the matrix of Hessian signals *Η*(⟨*q*⟩, *t*) = *e*<sup>(2)</sup>(⟨*q*⟩, *t*) ∗ *x*(*t*) is obtained with repeated digital filtering.
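A first-order (LIN) propagation can be sketched numerically. In this illustration (assumed model, not from the chapter), central finite differences stand in for the analytic sensitivity systems *e*<sup>(1)</sup>, and the linearized standard deviation is compared against a Monte Carlo reference.

```python
# First-order (LIN) uncertainty propagation, cf. Eq. 7: the output standard
# deviation |dy/da1| * std(a1) from a numerically estimated sensitivity,
# for the assumed one-parameter filter y[t] = x[t] - a1*y[t-1].
import numpy as np

def model(a1, x):
    y = np.zeros(len(x))
    for t in range(len(x)):
        y[t] = x[t] - (a1 * y[t - 1] if t > 0 else 0.0)
    return y

x = np.zeros(20)
x[0] = 1.0                        # impulse input
a1_mean, a1_std = -0.5, 0.02

eps = 1e-6                        # finite-difference step replacing e^(1)
sens = (model(a1_mean + eps, x) - model(a1_mean - eps, x)) / (2 * eps)
y_std_lin = np.abs(sens) * a1_std # linearized std of y(t)

rng = np.random.default_rng(4)
mc = np.array([model(a1, x) for a1 in rng.normal(a1_mean, a1_std, 5000)])
mc_std = mc.std(axis=0)           # Monte Carlo reference
```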

$$f_k(q_k) = \int_Q \phi_1\big((Uq)_1\big)\, \phi_2\big((Uq)_2\big) \cdots \phi_n\big((Uq)_n\big)\, dq_1 \cdots dq_{k-1}\, dq_{k+1} \cdots dq_n = \phi_k(q_k), \quad \text{if } U = I.$$

All *ϕ<sub>k</sub>* are hence mixed according to *U*. Dependencies are thus difficult to account for. One rare exception is provided by the multinomial distribution [9]. It is often better to assign the pdfs to the canonical parameters in the original instead of the canonical basis. The transformation then reads *q̃*: *q* = *U*<sup>*T*</sup>*SU* *q̃*. As required, it leaves cov(*q*) invariant. The marginalization then mixes the assigned pdfs according to *U*<sup>*T*</sup>*SUS*<sup>−1</sup>; it is generally less distorting than *U*<sup>*T*</sup>.

Indeed, if the commutator [*S*, *U*<sup>*T*</sup>] ≡ *SU*<sup>*T*</sup> − *U*<sup>*T*</sup>*S* vanishes, *U*<sup>*T*</sup>*SUS*<sup>−1</sup> = *I*. The transformation *U*<sup>*T*</sup> must satisfy the stronger criterion *U* = *I* to avoid mixing. For any transformation *q* → *Wq*, the degree of mixing can be quantified by a measure *Ψ<sub>W</sub>* ∈ [0, 1] (Eq. 11), built from the minimum and maximum magnitudes of the elements in each row of *W*, with *Ψ<sub>I</sub>* = 0 for no mixing and *Ψ<sub>W</sub>* = 1 for maximal mixing.

A simple example illustrates that the mixing effect can be considerable, even for minute correlations. Assume a model has two parameters with a covariance matrix,

$$\mathrm{cov}(q) = \begin{pmatrix} 0.90 & 0.10 \\ 0.10 & 0.90 \end{pmatrix} \;\Leftrightarrow\; U = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix},\; S^2 = \begin{pmatrix} 1 & 0 \\ 0 & 0.8 \end{pmatrix} \;\Rightarrow\; \begin{cases} S\tilde{q}_1 \sim \mathrm{UNI}(0, 1) \\ S\tilde{q}_2 \sim \mathrm{UNI}(0, \sqrt{0.8}) \end{cases} \tag{12}$$

Large rotations are required because the canonical variances *S*<sub>*jj*</sub><sup>2</sup> are similar, i.e. cov(*q*) is almost *degenerate*. As shown in Fig. 1, the large rotations mix the assigned pdfs *ϕ<sub>k</sub>*(*Sq̃<sub>k</sub>*) to marginal pdfs *f<sub>k</sub>*(*q<sub>k</sub>*) beyond recognition for the transformation *U*<sup>*T*</sup>, but not for *U*<sup>*T*</sup>*SUS*<sup>−1</sup>.

**Figure 1.** Left: The sample space of independent scaled parameters (*I*: *q<sub>k</sub>* = *Sq̃<sub>k</sub>*) (Eq. 12), and of the two transformations (*U*<sup>*T*</sup>) (rotated) and (*U*<sup>*T*</sup>*SUS*<sup>−1</sup>) (skewed and tilted). Right: Assigned pdfs *ϕ<sub>k</sub>*(*Sq̃<sub>k</sub>*) (dashed) and obtained marginal pdfs *f<sub>k</sub>*(*q<sub>k</sub>*) (solid) with mixing 1.00 (*U*<sup>*T*</sup>) and 0.058 (*U*<sup>*T*</sup>*SUS*<sup>−1</sup>), and magnified upper transition region (inset).

Specifying both marginal probability distributions and covariance is either redundant or inconsistent, as the latter is uniquely determined by the former. Nevertheless, this reflects the typical information available for signal processing applications. The moments can be accurately determined [4] for sufficiently large data sets, but the joint distribution *f<sub>q</sub>* is hardly ever known with any precision. Some of its properties are usually assigned, with varying degrees of confidence. For instance, the allowed maximal range *M*<sup>(∞)</sup> of the parameters of digital filters is given by stability constraints. The transformation technique above is well adapted to these facts, since the covariance is prioritized. The transformation *q* = *U*<sup>*T*</sup>*SU* *q̃* will be utilized in section 5.2 to include correlations with limited mixing of the statistics assigned to independent normalized canonical parameters *q̃*.

of *Sq*˜ *<sup>k</sup>*

59

*U* . Since the transformation *U <sup>T</sup> SU S* <sup>−</sup><sup>1</sup>

Deterministic Sampling for Quantification of Modeling Uncertainty of Signals

, :

( ) ( ) 1 1 <sup>2</sup> 2 2

2 2

*Sq Sq*

*Sq*

 

%

, :

f

f

 

<sup>1</sup> *U SUS <sup>T</sup> <sup>T</sup> U*

<sup>1</sup> *U SUS <sup>T</sup>*

2 1 1

å å (11)

*r rc*

*n c r rc*

2

î ë û

 

UNI 0,1 <sup>~</sup>

% (12)

2 are similar, i.e. cov(*q*) is almost

 *<sup>k</sup> <sup>k</sup> k k f q Sq* ~ 

.

to

 f

 f

ff

in Eq. 10 changes accordingly, *U* →*SU <sup>T</sup> S* <sup>−</sup><sup>1</sup>

contains cancelling operations *U* , *U <sup>T</sup>* and *S*, *S* <sup>−</sup><sup>1</sup>

an indicator of mixing of the components of *q* is given by,

#### **3.2. Brute force Monte Carlo**

Monte Carlo (MC) methods [8-9], or *random sampling* of uncertain models, were originally introduced, and phrased 'statistical sampling', by Enrico Fermi already in the 1930s [17]. The MC methods *realize* uncertain signal processing models in finite *ensembles*. Every ensemble consists of a possible set of well-defined model systems, all (usually) having the same structure but slightly different parameter values. In the original, so-called brute force Monte Carlo method, each set of parameters is assigned the output of random generators with appropriate statistics. The convergence to the assigned statistics is very slow [5], but it is asymptotically exact and the required number of samples is essentially independent of the number of parameters. Hence it does not suffer from the curse of dimensionality of many other methods. The outstanding simplicity of application is likely the cause of its popularity, just as the slow convergence, or low efficiency, is the main reason for its failures.
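The brute force approach can be sketched in a few lines of numpy. The model here (the DC gain of a one-pole filter) and its parameter statistics are made-up illustrations, not taken from the chapter:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical model: the DC gain of the one-pole filter y[n] = a*y[n-1] + x[n].
def dc_gain(a):
    return 1.0 / (1.0 - a)

# Brute force MC: realize an ensemble of models with randomly drawn parameter
# values, run each one, and read the statistics off the ensemble of outputs.
a = rng.normal(0.5, 0.01, size=50_000)    # uncertain filter coefficient
g = dc_gain(a)

print(g.mean(), g.std())                  # MC estimates of the gain's mean and std
```

The slow, O(1/√m) convergence of such estimates with the ensemble size m is exactly the inefficiency discussed above.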

In MC, arbitrary distributions and dependencies are usually obtained by means of transformations of samples of elementary distributions. Independent samples *q* ^(*k* ) of any probability density function (pdf) *ϕ*(*x*) can be constructed with the inverse transform method [9]. It consists of calculating the inverse of the cumulative distribution function (cdf) *Φ*(*y*) and generating a uniformly distributed random sequence *z* ^(*k* ) ,

$$
\hat{q}^{(k)} = \Phi^{-1}\left(\hat{z}^{(k)}\right), \quad \Phi\left(y\right) = \int_{-\infty}^{y} \phi\left(x\right) dx, \quad \hat{z}^{(k)} \sim \text{UNI}\left(0, 1\right), \quad k = 1, 2, \dots, m. \tag{8}
$$
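A minimal sketch of Eq. 8, with an exponential pdf chosen as example target because its cdf inverts in closed form (the method applies to any invertible cdf):

```python
import numpy as np

rng = np.random.default_rng(0)

# Example target: phi(x) = exp(-x) for x >= 0, so Phi(y) = 1 - exp(-y)
# and Phi^{-1}(z) = -ln(1 - z).
z = rng.uniform(0.0, 1.0, size=100_000)   # z^(k) ~ UNI(0, 1)
q = -np.log1p(-z)                         # q^(k) = Phi^{-1}(z^(k))

print(q.mean(), q.var())                  # both approach 1 for this pdf
```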

Covariance may be included with an appropriate transformation of samples of *canonical* parameters *q*˜ : *q* =*U <sup>T</sup> Sq*˜ with cov(*q*˜)= *I*,

$$\text{cov}\left(q\right) = \left\langle \delta q\, \delta q^{T} \right\rangle = \left\langle U^{T} S \delta \tilde{q} \left(U^{T} S \delta \tilde{q}\right)^{T} \right\rangle = U^{T} S \left\langle \delta \tilde{q}\, \delta \tilde{q}^{T} \right\rangle S U = U^{T} S^{2} U, \quad \begin{cases} U^{T} U = U U^{T} = I \\ S_{jk} = 0,\ j \neq k \end{cases} \tag{9}$$

The matrices *S*, *U* are found by calculating the eigenvalues (*S* 2) and eigenvectors (*U* ) [11] of cov(*q*). This transformation makes the marginal pdfs *f <sup>k</sup>* (*qk* ) differ substantially from the univariate pdfs *ϕk* of the independent but scaled parameters *Sq*˜ *<sup>k</sup>* ,

$$f_k\left(q_k\right) = \int \phi_1\left(\left[Uq\right]_1\right)\phi_2\left(\left[Uq\right]_2\right)\cdots\phi_k\left(\left[Uq\right]_k\right)\cdots\phi_n\left(\left[Uq\right]_n\right) dq_1 \cdots dq_{k-1}\, dq_{k+1} \cdots dq_n \neq \phi_k\left(q_k\right), \ \text{if } U \neq I. \tag{10}$$


All *ϕk* are hence mixed according to *U* . Dependencies are thus difficult to account for. One rare exception is provided by the multinomial distribution [9]. It is often better to assign the pdfs to the canonical parameters in the original instead of the canonical basis. The transformation then reads *q*˜ : *q* =*U <sup>T</sup> SU q*˜. As required, it leaves cov(*q*) invariant. The marginalization in Eq. 10 changes accordingly, *U* →*SU <sup>T</sup> S* <sup>−</sup><sup>1</sup> *U* . Since the transformation *U <sup>T</sup> SU S* <sup>−</sup><sup>1</sup> of *Sq*˜ *<sup>k</sup>* contains cancelling operations *U* , *U <sup>T</sup>* and *S*, *S* <sup>−</sup><sup>1</sup> , it is generally less distorting than *U <sup>T</sup>* . Indeed, if the commutator [*S*, *U <sup>T</sup>* ] ≡*SU <sup>T</sup>* −*U <sup>T</sup> S* vanishes, *U <sup>T</sup> SU S* <sup>−</sup><sup>1</sup> = *I*. The transformation *U <sup>T</sup>* must satisfy the stronger criterion *U* = *I* to avoid mixing. For any transformation *q* →*Wq*, an indicator of mixing of the components of *q* is given by,

$$\Psi\left(W\right) \equiv \frac{1}{n} \sum_{r=1}^{n} \left(1 - \frac{\max_c \left|W_{rc}\right| - \min_c \left|W_{rc}\right|}{\left\|W_{r,:}\right\|}\right) \in \left[0, 1\right], \quad \left\|W_{r,:}\right\| \equiv \sqrt{\sum_{c=1}^{n} \left|W_{rc}\right|^{2}}. \tag{11}$$

A simple example illustrates that the mixing effect can be considerable, even for minute correlations. Assume a model has two parameters with a covariance matrix,

$$\text{cov}\left(q\right)=\begin{pmatrix}0.90 & 0.10\\0.10 & 0.90\end{pmatrix}\Leftrightarrow U = \frac{1}{\sqrt{2}}\begin{pmatrix}1 & 1\\1 & -1\end{pmatrix}, \ S^2 = \begin{pmatrix}1 & 0\\0 & 0.8\end{pmatrix}\Rightarrow \begin{cases}\phi_1\left(S\tilde{q}_1\right) = \text{UNI}\left(\left[0,1\right]\right)\\\phi_2\left(S\tilde{q}_2\right) = \text{UNI}\left(\left[0,\sqrt{0.8}\right]\right) \end{cases}. \tag{12}$$

Large rotations are required because the canonical variances *S jj* 2 are similar, i.e. cov(*q*) is almost *degenerate*. As shown in Fig. 1, the large rotations mix the assigned pdfs *ϕ<sup>k</sup>* (*Sq*˜ *<sup>k</sup>* ) to marginal pdfs *<sup>f</sup> <sup>k</sup>* (*qk* ) beyond recognition for the transformation *<sup>U</sup> <sup>T</sup>* but not for *<sup>U</sup> <sup>T</sup> SU <sup>S</sup>* <sup>−</sup><sup>1</sup> .

**Figure 1.** Left: The sample space of independent scaled parameters (*I* :*qk* =*Sq*˜ *<sup>k</sup>* ) (Eq. 12), and of the two transformations (*<sup>U</sup> <sup>T</sup>* ) (rotated) and (*<sup>U</sup> <sup>T</sup> SUS* <sup>−</sup>1) (skewed and tilted). Right: Assigned pdfs ϕ*<sup>k</sup>* (*Sq*˜ *<sup>k</sup>* ) (dashed) and obtained marginal pdfs *<sup>f</sup> <sup>k</sup>* (*qk* ) (solid) with mixing Ψ(*<sup>U</sup> <sup>T</sup>* )=1.00 and Ψ(*<sup>U</sup> <sup>T</sup> SUS* <sup>−</sup>1)=0.058, and magnified upper transition region (inset).
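The numbers quoted in Eq. 12 and Figure 1 can be checked numerically. A short numpy sketch (the helper name `mixing_indicator` is mine; it implements Ψ of Eq. 11):

```python
import numpy as np

def mixing_indicator(W):
    """Psi(W) of Eq. 11: 0 means no mixing, 1 means complete mixing."""
    A = np.abs(W)
    return float(np.mean(1.0 - (A.max(axis=1) - A.min(axis=1))
                         / np.sqrt((A**2).sum(axis=1))))

C = np.array([[0.90, 0.10],
              [0.10, 0.90]])              # cov(q) of Eq. 12

lam, V = np.linalg.eigh(C)                # C = V diag(lam) V^T
U = V.T                                   # rows of U are eigenvectors of cov(q)
S = np.diag(np.sqrt(lam))                 # canonical standard deviations

W2 = U.T @ S @ U @ np.linalg.inv(S)       # the transformation U^T S U S^-1

print(mixing_indicator(U.T))              # ~1.00, strong mixing
print(mixing_indicator(W2))               # ~0.058, limited mixing
```

The near-degenerate eigenvalues (1 and 0.8) force the large rotation, which is why Ψ(*U <sup>T</sup>* ) saturates at 1.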


Specifying both marginal probability distributions and covariance is either redundant or inconsistent, as the latter is uniquely determined by the former. Nevertheless, this reflects the typical available information for signal processing applications. The moments can be accurately determined [4] for sufficiently large data sets, but the joint distribution *f* (*q*) is hardly ever known with any precision. Some of its properties are usually assigned, with varying degree of confidence. For instance, the allowed maximal range *M* (*∞*) of the parameters of digital filters is given by stability constraints. The transformation technique above is well adapted to these facts, since the covariance is prioritised. The transformation *q* =*U <sup>T</sup> SU q*˜ will be utilized in section 5.2 to include correlations with limited mixing of the statistics assigned to independent normalized canonical parameters *q*˜.

#### **3.3. Refinements of Monte Carlo**

To increase the efficiency of MC, the original brute force sampling technique has been further developed in mainly two directions: model simplification and sample distribution improvement. In response surface methodology (RSM) [18], the model is replaced by a simple approximate surrogate model. A model of order *v* may be found by applying linear (with respect to *C*) regression at *collocation points* [15] *q* ^(*k* ) =*μ* (*<sup>k</sup>* ) ,

$$H\left(\mu\right) \approx R\left(\mu\right) C, \quad \begin{cases} R_{kj} = R_j\left(\mu^{(k)}\right), & j = 1, 2, \dots, v \\ H_k = H\left(\mu^{(k)}\right) \end{cases}, \quad \begin{cases} C = \begin{pmatrix} C_1 & \cdots & C_v \end{pmatrix}^T \\ \mu = \begin{pmatrix} \mu^{(1)} & \cdots & \mu^{(m)} \end{pmatrix}^T \end{cases}, \quad m \geq v, \tag{13}$$

where *Rj* (*q*) is basis function *j*. Since it may be non-linear, RSM allows for non-linear propagation of uncertainty and may give a substantially different and more accurate result than LIN. If only linear basis functions are used, *Rj* (*q*)=*qj* , RSM becomes equivalent to LIN. The best least squares approximation is directly obtained from Eq. 13 [19],

$$\mathbf{C} = \left(\mathbf{R}^T \mathbf{R}\right)^{-1} \mathbf{R}^T \mathbf{H} \tag{14}$$
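Eqs. 13-14 amount to an ordinary least-squares fit. A small sketch, where the model *H* and the collocation points are made-up examples:

```python
import numpy as np

# Hypothetical scalar model of one parameter, to be replaced by a surrogate.
def H(q):
    return np.exp(q)

mu = np.linspace(-1.0, 1.0, 9)                    # m = 9 collocation points
R = np.column_stack([mu**j for j in range(3)])    # basis 1, q, q^2 (v = 3)

# Eq. 14: C = (R^T R)^{-1} R^T H, computed as an ordinary least-squares fit.
C, *_ = np.linalg.lstsq(R, H(mu), rcond=None)

q = 0.3
approx = C[0] + C[1]*q + C[2]*q**2                # evaluate the surrogate
print(approx, np.exp(q))                          # surrogate vs true model
```

Once *C* is known, the cheap surrogate *R*(*q*)*C* replaces the expensive model inside the MC loop.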

Let RSM(*r*) utilize a complete set of mixed polynomial basis functions up to order *r*. Its least number (*v*) of collocation points grows rapidly with both the number of parameters (*n*) and polynomial order (*r*) [12],

$$v = \sum_{k=0}^{r} w\left\{n, k\right\}, \quad w\left\{n, k\right\} = \sum_{j=1}^{\min\left\{n, k\right\}} \binom{n}{j} \cdot w\left\{j, k-j\right\}, \quad w\left\{j, 0\right\} = 1. \tag{15}$$


In practice, *r* >3 often yields an unacceptable number of samples, see table 1.
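The recursion of Eq. 15 is easy to evaluate directly; the short sketch below reproduces one row of Table 1 and illustrates the blow-up for *r* >3:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def w(n, k):
    """Number of mixed monomials of exact order k in n parameters (Eq. 15)."""
    if k == 0:
        return 1
    return sum(comb(n, j) * w(j, k - j) for j in range(1, min(n, k) + 1))

def v(n, r):
    """Least number of collocation points for RSM(r)."""
    return sum(w(n, k) for k in range(r + 1))

print([v(n, 3) for n in (2, 5, 10, 20)])   # the r = 3 row of Table 1
print(v(20, 5))                            # r = 5: the growth for r > 3
```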


| | *n* =2 | *n* =5 | *n* =10 | *n* =20 |
|---|---|---|---|---|
| *r* =1 | 3 | 6 | 11 | 21 |
| *r* =2 | 6 | 21 | 66 | 231 |
| *r* =3 | 10 | 56 | 286 | 1771 |

**Table 1.** Efficiency ρ =*v* for RSM(*r*), for selected polynomial orders *r* and numbers *n* of parameters.

The distribution of samples may be improved with stratification, as in Latin Hypercube sampling (LHS) [18]. By dividing the sample space into intervals, or strata, representing equal probability, the need for large ensembles is reduced. In LHS, each parameter is sampled exactly once in each of its strata, giving a generalized *latin square* [20]. This selection pushes the samples away from each other and distributes them more evenly. To illustrate the improvement with stratification, sample one parameter *q* ~NRM(0, 1). After division into *m* intervals of equal probability, samples are found with the inverse transform method described in section 3.2 (Eq. 8). As seen in Fig. 2, the convergence improves dramatically. Still, even for *m*=100 samples the second moment (left) varies noticeably. The convergence is generally poorer for higher order moments *M* (*<sup>k</sup>* ) , as shown for *k* =4 (right).

**Figure 2.** The second *M* (2) (left) and fourth *M* (4) (right) moments for stratified (solid) and brute force sampling (∗) of *q* ~NRM(0, 1), compared to a fixed grid (dashed).
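The one-dimensional stratified experiment above can be sketched as follows, combining equal-probability strata with the inverse transform of Eq. 8 (the standard-library `NormalDist` supplies the normal inverse cdf):

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(3)

m = 100
# One random sample inside each of m equal-probability strata (1-D LHS),
# mapped to q ~ NRM(0,1) with the inverse transform method of Eq. 8.
z = (np.arange(m) + rng.uniform(0.0, 1.0, size=m)) / m
q = np.array([NormalDist().inv_cdf(p) for p in z])

print(np.mean(q), np.mean(q**2))   # first and second moment estimates
```

Because every stratum is hit exactly once, the moment estimates scatter far less than brute force MC with the same *m*.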

In this case, it is questionable if 100 samples are sufficient to represent as few as four moments *M* (1) −*M* (4) . The probabilistically evenly distributed fixed grid (dashed) converges more rapidly to the proper statistics. Despite the prevailing tradition, there is no absolute requirement of using a random generator to represent statistical information. Fixed grids are examples of deterministic sampling. Stratification provides an interesting intermediate type of sampling since it is partially deterministic – the strata are constructed deterministically but the samples within each stratum are generated randomly. The construction of a fixed grid requires focus on the most relevant features. To reproduce *M* (1) −*M* (4) *exactly*, a very sparse grid or few deterministic samples are needed,

$$
\hat{q} = \begin{cases}
\begin{pmatrix} \pm 1.376 & \pm 0.325 \end{pmatrix}, & q \sim \text{UNI}(0, 1) \\
\begin{pmatrix} \pm 1.732 & 0\ (\times 4) \end{pmatrix}, & q \sim \text{NRM}(0, 1)
\end{cases} \tag{16}
$$
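The moment matching of Eq. 16 is quick to verify numerically. In the sketch below, the UNI ensemble is taken as targeting a zero-mean, unit-variance uniform distribution (whose fourth moment is 1.8), and the NRM ensemble pads ±√3 with four zeros:

```python
import numpy as np

# The two small deterministic ensembles of Eq. 16.
q_uni = np.array([+1.376, -1.376, +0.325, -0.325])
q_nrm = np.array([+1.732, -1.732, 0.0, 0.0, 0.0, 0.0])   # 0 repeated 4 times

for q in (q_uni, q_nrm):
    # First four moments M1..M4; targets are 0, 1, 0 and the distribution's M4.
    print([round(float(np.mean(q**k)), 3) for k in (1, 2, 3, 4)])
```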


If the problem at hand only depends on these moments, the exact solution will be obtained. The size of such small ensembles must be fixed, no matter how they are generated. Adding or perturbing a single sample would modify the statistics substantially.

## **4. Deterministic sampling**

Deterministic sampling (DS) of uncertain systems is a viable alternative to random sampling (RS). Instead of using random generators, specific DS rules are devised to generate appropriate, but still statistical (Fermi's notation, see section 3.2), ensembles. A rudimentary example illustrates the principle: Assume a model *y*(*q*) depends on one parameter *q* with mean ⟨*q*⟩ and variance ⟨*δq* <sup>2</sup>⟩. To estimate the mean ⟨*y*⟩ and the variance ⟨*δ y* <sup>2</sup>⟩ of the model, the samples (filter parameters) *q* ^(1,2) = ⟨*q*⟩ ± ⟨*δq* <sup>2</sup>⟩ <sup>1/2</sup> are appropriate, since they satisfy the desired statistics, ⟨*q* ^⟩*<sup>E</sup>* = ⟨*q*⟩ and ⟨*δq* ^2⟩*<sup>E</sup>* = ⟨*δq* <sup>2</sup>⟩. The formula for *q* ^(1,2) constitutes the sampling rule, and *q* ^(1,2) is the statistical ensemble containing only two model samples. By paying the computational cost of using more samples and improving the sampling rule, additional moments ⟨*δq <sup>k</sup>* ⟩, *k* >2, or other statistical features can be accounted for.
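The rudimentary two-sample rule can be sketched directly; the model below (a one-pole filter's DC gain) and the parameter statistics are made-up illustrations:

```python
import numpy as np

# Two-sample deterministic ensemble q^(1,2) = <q> +/- sqrt(<dq^2>) for one
# uncertain parameter.
q_mean, q_var = 0.5, 0.01**2
ensemble = q_mean + np.array([+1.0, -1.0]) * np.sqrt(q_var)

def y(q):
    return 1.0 / (1.0 - q)     # hypothetical model: DC gain of y[n] = q*y[n-1] + x[n]

y_ens = y(ensemble)
print(y_ens.mean(), y_ens.var())   # deterministic estimates of <y> and <dy^2>
```

The two samples reproduce the assigned mean and variance of *q* exactly, so no random generator and no large ensemble are needed.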

In deterministic sampling the model evaluations involve no approximations and are *non-invasive*. In many respects, deterministic sampling is constructed and optimized for quantification of modeling uncertainty: Minimal ensembles allow for evaluation of the most numerically demanding models. The model evaluations are exact and non-invasive to fully respect non-linear, deeply hidden parameter dependences. Only the vaguely known statistics of the model are approximated.

#### **4.1. Concepts of deterministic sampling**

DS does not per se specify the goal of sampling, e.g. given mean and covariance of the parameters. In the example at the end of section 3.3, the primary target was the joint pdf of the parameters. In section 4.2, the target is *M* (2) (*q*). In section 5, this will be complemented with additional requirements. DS can also be utilized for direct evaluation of confidence intervals [12]. The targets of various DS methods may differ, but they share the focus on the most influential statistical aspects and on customization. In stark contrast, almost without exception RS targets the joint pdf of the parameters and ignores the final utilization. Adaptation and fixed ensemble sizes provide the principal means to improve the efficiency of sampling.

#### **4.2. Propagation of covariance in the standard unscented Kalman filter**

The reference will be the specific variant of DS used for propagating covariance in what will be referred to as the standard unscented Kalman filter (UKF) [21-23]. The ensemble consists of 2*n* samples, or *sigma-points*,

$$\hat{q}^{(s,k)} \equiv \bar{q} + s \cdot \sqrt{n} \cdot \Delta_{:k}, \quad \Delta \Delta^T = \operatorname{cov}(q), \quad k = 1, 2, \dots, n, \quad s = \pm \tag{17}$$

where *Δ*:*<sup>k</sup>* denotes the *k*-th column of *Δ*. The sampling rule is manifested in the square root calculation of the covariance matrix (*Δ*). As suggested [23], it may be found with a Cholesky factorization [19]. The square root matrix is not unique though – the Cholesky root is upper triangular and thus asymmetric. A more symmetric standard alternative is to evaluate the matrix square root in a canonical basis [24] *Uq* where cov(*Uq*) is diagonal. The canonical variations *Uδq* ^(*s*,*v*) will be unit vectors in the *n* positive and negative directions of the *principal* axes of the covariance matrix, amplified by the marginal standard deviations and, most importantly, by *n*. For many parameters with large covariance, the scaling with *n* may cause the UKF to fail since the scaling is not related to the variability of the parameters, only their total number. A possible solution to the scaling problem is provided by the scaled unscented transformation [25]. However, it is based on Taylor expansions and thus suffers from an approximation problem of the model.
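The two square-root choices can be compared numerically. A sketch with numpy; the 2×2 covariance matrix and mean are illustrative assumptions:

```python
import numpy as np

# Illustrative parameter statistics (assumed, not from the text).
q_mean = np.array([1.0, -2.0])
C = np.array([[2.0, 0.8],
              [0.8, 1.0]])          # cov(q), symmetric positive definite
n = len(q_mean)

# Cholesky root: triangular and hence asymmetric, C = L @ L.T.
L = np.linalg.cholesky(C)

# Symmetric root via the eigendecomposition (numpy's eigh returns the
# eigenvectors in columns, the transpose of the text's row convention).
s2, U = np.linalg.eigh(C)
D = U @ np.diag(np.sqrt(s2)) @ U.T   # symmetric matrix square root of C

# 2n sigma points of Eq. 17: q^(s,k) = q_mean + s * sqrt(n) * D[:, k], s = +/-.
sigma = np.hstack([q_mean[:, None] + s * np.sqrt(n) * D for s in (1.0, -1.0)])

emp_mean = sigma.mean(axis=1)
dev = sigma - emp_mean[:, None]
emp_cov = dev @ dev.T / (2 * n)      # ensemble covariance, reproduces C exactly
```

Both roots square to the same covariance, so either choice satisfies Eq. 17; they differ only in the directions of the sigma-points.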

## **5. Sampling with conservation of moments**

One class of methods of deterministic sampling conserves a limited number of statistical moments. The model parameters are sampled to satisfy these moments and collected in ensembles, similar to how parameters are sampled to fulfill probability distributions in RS.

#### **5.1. Principle**

Deterministic sampling replaces the use of a random generator to represent statistical information. Fixed grids are examples of deterministic sampling. Stratification provides an interesting intermediate type of sampling since it is partially deterministic – the strata are constructed deterministically but the samples within each stratum are generated randomly. The construction of a fixed grid requires focus on the most relevant features. To reproduce *M* (1)−*M* (4) *exactly*, a very sparse grid or few deterministic samples are needed,

$$\hat{q}^{(v)} = \bar{q} + \delta q \cdot \begin{cases} \pm 1.376,\ \pm 0.325, & q \sim \mathrm{UNI}(0,1) \\ \pm 1.732,\ 0\ (\times 4), & q \sim \mathrm{NRM}(0,1) \end{cases}. \tag{16}$$

If the problem at hand only depends on these moments, the exact solution will be obtained. The size of such small ensembles must be fixed, no matter how they are generated. Adding, or perturbing a single sample would modify the statistics substantially.
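These minimal moment-conserving ensembles can be verified directly. A sketch in plain Python; the sample values are the ones quoted for Eq. 16, and the reference fourth moments 9/5 and 3 are those of standardized uniform and normal distributions:

```python
# Moment check of minimal deterministic ensembles that reproduce M(1)-M(4)
# of standardized (zero-mean, unit-variance) distributions.

def moment(samples, k):
    return sum(s ** k for s in samples) / len(samples)

# Four samples for the standardized uniform distribution.
uni = [1.376, -1.376, 0.325, -0.325]
# Six samples for the normal distribution: +/- sqrt(3) and four zeros.
nrm = [1.732, -1.732, 0.0, 0.0, 0.0, 0.0]

uni_moments = [moment(uni, k) for k in (1, 2, 3, 4)]   # ~ [0, 1, 0, 9/5]
nrm_moments = [moment(nrm, k) for k in (1, 2, 3, 4)]   # ~ [0, 1, 0, 3]
```

The odd moments vanish by symmetry; the even moments match to the three-digit precision of the tabulated samples.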

The constraints of satisfying statistical moments constitute an infinite system of equations for the samples *δq* ^ *<sup>i</sup>* (*v*). It can formally be viewed as sampling (denoted by ^) of the joint pdf *<sup>f</sup>* (*q*),

$$\begin{array}{rclclcl}
0 = \overline{\delta q_i} & \equiv & \int \delta q_i \, f(q)\, dq & \doteq & \frac{1}{m}\sum_{v=1}^{m} \delta \hat{q}_i^{(v)} & \equiv & \left\langle \delta \hat{q}_i \right\rangle_E \\
\overline{\delta q_{i1}\delta q_{i2}} & \equiv & \int \delta q_{i1}\delta q_{i2}\, f(q)\, dq & \doteq & \frac{1}{m}\sum_{v=1}^{m} \delta \hat{q}_{i1}^{(v)}\delta \hat{q}_{i2}^{(v)} & \equiv & \left\langle \delta \hat{q}_{i1}\delta \hat{q}_{i2} \right\rangle_E \\
\overline{\delta q_{i1}\delta q_{i2}\delta q_{i3}} & \equiv & \int \delta q_{i1}\delta q_{i2}\delta q_{i3}\, f(q)\, dq & \doteq & \frac{1}{m}\sum_{v=1}^{m} \delta \hat{q}_{i1}^{(v)}\delta \hat{q}_{i2}^{(v)}\delta \hat{q}_{i3}^{(v)} & \equiv & \left\langle \delta \hat{q}_{i1}\delta \hat{q}_{i2}\delta \hat{q}_{i3} \right\rangle_E \\
\vdots & & \vdots & & \vdots & & \vdots
\end{array} \tag{18}$$


The infinite number of equations requires an infinite number of samples. However, it is implicitly assumed that relatively few moments are known and significantly influence the result of interest. Only a few moments then need to be accurately represented by {*δq* ^(*v*) }. Typically, *δqi* and *δqi*1*δqi*<sup>2</sup> are estimated when models are identified [4,7]. In addition, the range *M* (*∞*) or another higher diagonal moment can generally be determined from underlying physical constraints like stability. Clearly, any sampling rule must generate a fixed number of samples and create them simultaneously. The samples are consequently strongly dependent. One obvious sampling method is to solve Eq. 18 numerically for a sufficiently large number of samples *δq* ^, as in Eq. 16. Due to the strong non-linearities, this is quite difficult for a large number of moments but may be feasible for a few moments.
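For a single parameter with a symmetric four-sample ensemble {±*a*, ±*b*}, the system of Eq. 18 truncated to the second and fourth moments even has a closed-form solution. A sketch; the quadratic reduction is an illustrative special case, not the general numerical method:

```python
import math

# Target diagonal moments of a standardized uniform distribution:
M2, M4 = 1.0, 9.0 / 5.0   # variance and fourth moment of UNI on [-sqrt(3), sqrt(3)]

# For the symmetric ensemble {+a, -a, +b, -b}, Eq. 18 reduces to
#   (a^2 + b^2) / 2 = M2   and   (a^4 + b^4) / 2 = M4,
# so u = a^2 and v = b^2 are the roots of t^2 - (u+v)*t + u*v = 0.
s = 2 * M2                       # u + v
p = (s * s - 2 * M4) / 2         # u * v
disc = math.sqrt(s * s / 4 - p)
u, v = s / 2 + disc, s / 2 - disc

a, b = math.sqrt(u), math.sqrt(v)   # recovers ~1.376 and ~0.325 of Eq. 16
```

The same reduction applied to the normal targets (M4 = 3) yields a negative root, so no real four-sample symmetric ensemble exists there, which is why the six-sample ensemble with repeated zeros is used instead.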

#### **5.2. The excitation matrix**

The UKF (section 4.2) utilizes DS with conservation of all first ( *δqi* ) and second ( *δqi*1*δqi*<sup>2</sup> ) statistical moments. The invariance in its *formulation* allows for any additional 'half' unitary transformation *Δ* → *ΔV*, *V* : *V V <sup>T</sup>* = *I*. This results in another equally valid matrix *Δ*˜, since *Δ*˜*Δ*˜*<sup>T</sup>* = (*ΔV*)(*ΔV*)*<sup>T</sup>* = *ΔV V <sup>T</sup>Δ <sup>T</sup>* = *ΔΔ <sup>T</sup>*. Since the transformation *V* is allowed and influences the result, the result of applying the UKF is not unique. The matrix *V* condenses this invariance and provides practical means to manipulate the UKF ensemble. A key feature of *V* is the absence of constraints on *V <sup>T</sup>V*. That makes it possible to stretch *V* 'horizontally' (as long as *V V <sup>T</sup>* = *I*), which corresponds to adding samples (sigma-points). The improved transformation *U <sup>T</sup>SU* (section 3.2) can be applied by also combining *U* with *V*. The square root of the covariance matrix will then read *Δ* = *U <sup>T</sup>SUV* instead of *Δ* = *U <sup>T</sup>SV*,

$$
\hat{\Sigma}_{n \times m} \equiv \bar{q} \cdot \mathbf{1}_{1 \times m} + U^T S U \hat{V}, \quad \hat{V} \equiv \sqrt{m} \cdot V_{n \times m}, \quad V V^T = I, \quad V \cdot \mathbf{1}_{m \times 1} = \mathbf{0}. \tag{19}
$$

The samples *q* ^(*k* ) are here collected in columns of the ensemble matrix *Σ*. The matrix cov(*q*) = *ΔΔ <sup>T</sup>* = *U <sup>T</sup>S* <sup>2</sup>*U* is diagonalized, *S* <sup>2</sup> = eig(cov(*q*)), with the unitary transformation *U* [24]. The normalization factor *m* is included in the *excitation matrix V* ^ to satisfy the correct covariance, *ΣΣ <sup>T</sup>* = *ΔΔ <sup>T</sup>*, just as the factor *n* was included in Eq. 17 (the ensemble is here expanded from *n* to *m* samples). The excitation matrix controls the sampling *beyond* the first and second moments, e.g. the range of the samples. Row *k* of the matrix *SV* ^ can be interpreted as deterministic samples of the pdf *ϕ<sup>k</sup>* (*Sq*˜ *<sup>k</sup>* ), assigned to canonical parameters in RS, see section 3.2. All ensembles will be described with a unique excitation matrix *V* ^.
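Eq. 19 can be exercised numerically. A sketch with numpy; the covariance matrix and the particular construction of *V* (rows of an orthonormal basis orthogonal to the constant vector) are illustrative assumptions:

```python
import numpy as np

# Assumed illustrative statistics: n = 3 parameters, expanded to m = 6 samples.
q_mean = np.array([1.0, 2.0, 3.0])
C = np.array([[2.0, 0.5, 0.1],
              [0.5, 1.0, 0.3],
              [0.1, 0.3, 1.5]])
n, m = 3, 6

# Diagonalization cov(q) = U^T S^2 U (numpy's eigh keeps eigenvectors in
# columns, so Ut below plays the role of U^T in the text).
s2, Ut = np.linalg.eigh(C)
sym_root = Ut @ np.diag(np.sqrt(s2)) @ Ut.T      # U^T S U

# An excitation matrix V (n x m) with V V^T = I and V 1 = 0: take n rows of an
# orthonormal m x m basis whose first vector is the constant vector.
rng = np.random.default_rng(0)
G = np.hstack([np.ones((m, 1)), rng.standard_normal((m, m - 1))])
Q, _ = np.linalg.qr(G)
V = Q[:, 1:n + 1].T                               # n x m
Vhat = np.sqrt(m) * V

# Ensemble matrix of Eq. 19.
Sigma = q_mean[:, None] @ np.ones((1, m)) + sym_root @ Vhat

emp_mean = Sigma.mean(axis=1)
dev = Sigma - emp_mean[:, None]
emp_cov = dev @ dev.T / m
```

The zero row sums of *V* make the ensemble mean exact, and *V V <sup>T</sup>* = *I* makes the ensemble covariance exact, independently of which orthonormal rows are chosen.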

The adopted transformation *U <sup>T</sup>SU S* <sup>−</sup><sup>1</sup> of *SV* ^ distorts all moments higher than the second. This mixing effect is indicated by the index *Ψ*(*U <sup>T</sup>SU S* <sup>−</sup>1) defined in Eq. 11. To diagonalize large matrices cov(*q*), many efficient techniques have been developed. This should not cause any difficulties even for *n* ~ 1000, especially since cov(*q*) usually is either very sparse, or rank deficient for models with many parameters.

#### **5.3. Elimination of singular values**


The rationale for applying the reduction to be presented is that any model is derived from a limited set of experiments, resulting in a usually moderate rank of cov(*q*). If the number of parameters is large, it is thus often (nearly) rank deficient. The widely practiced singular value decomposition (SVD) [19] may then be used to reduce the excitation matrix and hence the number of samples. The most general form of SVD cannot be used here since it renders an asymmetric decomposition cov(*q*) = *U <sup>T</sup>S* <sup>2</sup>*W*, where *U U <sup>T</sup>* = *U <sup>T</sup>U* = *W W <sup>T</sup>* = *W <sup>T</sup>W* = *I* and (*S* <sup>2</sup>)*jk* = *δjk Skk* <sup>2</sup> ≥ 0. Different matrices *U*, *W* allow for decomposition of an arbitrary matrix. For the symmetric matrix cov(*q*), a symmetric SVD *U* = *W* can be found with the less general eigenvalue decomposition [24], according to the spectral theorem [11]. As cov(*q*) is positive definite, all its eigenvalues *Skk* <sup>2</sup> fulfill the requirement of being positive. This is required to directly obtain a real-valued matrix square root *Δ* = *U <sup>T</sup>SUV*.

The ensemble may now be reduced by elimination of singular values (ESV). Choose a threshold *α* and remove row *r* and column *r* from *S* and row *r* from *U* for all *r* such that,

$$\left| S_{rr} \right| < \alpha \cdot \max_{k} \left| S_{kk} \right|, \quad \alpha \ll 1. \tag{20}$$

Proceeding as in many applications of SVD, this reduction (indicated by tilde below) will not change the result significantly, if *α* is small enough. Accordingly, samples are eliminated using the alternative decomposition of the square root of cov(*q*),

$$
\Delta_{n \times m} = \left(U^T\right)_{n \times n} S_{n \times n} V_{n \times m} \approx \left(\tilde{U}^T\right)_{n \times r} \tilde{S}_{r \times r} \tilde{V}_{r \times \tilde{m}} = \tilde{\Delta}_{n \times \tilde{m}}. \tag{21}
$$

Unfortunately, the less distorting transformation *U <sup>T</sup>SU S* <sup>−</sup><sup>1</sup> of *SV* ^ advocated in section 3.2 does not allow for *r* < *n* rows of the matrix *V*. The increase in distortion of *M* (*<sup>k</sup>* >2) indicated by *Ψ*(*U <sup>T</sup>SU S* <sup>−</sup>1) → *Ψ*(*U <sup>T</sup>* ) is less important for the intended use, though. Signal processing models with large numbers of parameters are typically non-parametric and usually describe samples of signals like impulse responses, or noise signals. The required LR property of the system then implies LP. The propagation of covariance is then linear and only the undistorted first and second moments need to be encoded.
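The elimination of singular values can be sketched with numpy; the nearly rank-deficient covariance matrix is an illustrative assumption, and the excitation matrix is taken as the identity for brevity:

```python
import numpy as np

# Assumed illustrative nearly rank-deficient covariance in n = 5 dimensions:
# a rank-2 structure plus a tiny regularizing perturbation.
A = np.array([[ 1.0,  0.2],
              [ 0.5, -1.0],
              [ 0.0,  0.8],
              [ 1.2,  0.3],
              [-0.7,  0.4]])
n = 5
C = A @ A.T + 1e-8 * np.eye(n)

# Symmetric SVD via the eigenvalue decomposition (spectral theorem).
s2, Ut = np.linalg.eigh(C)
order = np.argsort(s2)[::-1]          # sort singular values descending
s2, Ut = s2[order], Ut[:, order]
S = np.sqrt(s2)

# Eliminate singular values below the relative threshold of Eq. 20.
alpha = 1e-3
keep = S >= alpha * S.max()
r = int(keep.sum())                   # reduced rank

# Reduced square root in the spirit of Eq. 21: only r columns remain.
D_red = Ut[:, keep] @ np.diag(S[keep])
err = np.abs(D_red @ D_red.T - C).max()   # small if alpha is small enough
```

Here two of five singular values survive, so the number of required samples shrinks accordingly while the covariance is reproduced up to the discarded perturbation.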


#### **5.4. Correlated sampling of non-parametric models**

A major difference between parametric and non-parametric models is the dimensionality. A conceptual dissimilarity is that non-parametric models usually refer to correlated signals, rather than abstract model structures. The parameters may describe discrete samples of input noise [7], or an impulse response [6]. A common parametric pole-zero model may contain 20 parameters, while a non-parametric model can be expressed in perhaps 1000 parameters. The ensembles of non-parametric models often need to be reduced drastically.

Due to limited resolution, the correlation times of any signal or impulse response are finite. Their 'memory' is thus finite, so sample variations may be regenerated or repeated, as long as the time between repetitions exceeds the correlation time. This *correlated sampling* (CRS) provides efficient and accurate reduction of the ensembles. The minimal number of parameters *n* is then set by the correlation time of the model. Most importantly, the size of the ensemble becomes independent of the size of the model (the length of the signal).

A finite correlation length *τ* ∈ *N* of any model *δx*(*t*) is normally inferred from the decay of its autocorrelation function *C*(*t*, *T* ), where *t* denotes the lag and *T* refers to a non-stationary variation. Here, a global *τ* will be defined through its *l* <sup>2</sup>-norm and determined for a relative truncation threshold *β* (argmin returns the minimizing argument),

$$\tau = \max_{T}\left[\arg\min_{\tau'}\left|\sum_{t=\tau'+1}^{\infty}\left|C(t,T)\right|^{2} - \beta^{2}\sum_{t=0}^{\infty}\left|C(t,T)\right|^{2}\right|\right], \quad C(t,T) \equiv \left\langle \delta x\!\left(T+\frac{t}{2}\right)\delta x\!\left(T-\frac{t}{2}\right)\right\rangle, \quad \beta \ll 1. \tag{22}$$
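The definition of *τ* can be sketched in plain Python for an assumed stationary example with an exponentially decaying autocorrelation (so the maximization over *T* is trivial):

```python
# Correlation length per Eq. 22 for an assumed stationary autocorrelation
# C(t) = rho**t, truncated at a long horizon w.
rho, w = 0.7, 200
C = [rho ** t for t in range(w + 1)]
beta = 0.1                      # relative truncation threshold, beta << 1

total = sum(c * c for c in C)
best_tau, best_err = 0, float("inf")
for tau in range(w + 1):
    # Tail energy beyond lag tau, compared with beta^2 times the total energy.
    tail = sum(C[t] * C[t] for t in range(tau + 1, w + 1))
    err = abs(tail - beta ** 2 * total)
    if err < best_err:
        best_tau, best_err = tau, err
```

For this decay rate the tail energy crosses *β*² of the total around lag 6, which becomes the global correlation length.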

If the model is expressed as a convolution *δx*(*t*)=*h* (*t*)∗*w*(*t*) of an impulse response *h* (*t*) and time-dependent white noise *w*(*t*) as in section 6.2,

$$C(t,T) = \sum_{u=0}^{\infty} \eta^{2}\!\left(T-\left(u+\frac{t}{2}\right)\right) h(u)\, h(u+t) \approx \eta^{2}(T) \sum_{u=0}^{\infty} h(u)\, h(u+t), \quad \eta(t) \equiv \sqrt{\operatorname{var}(w)}, \quad \bar{w} = 0. \tag{23}$$

By padding the model to an integer multiple of *γ* ≥2*τ* samples, it is always possible to choose an excitation matrix partitioned to block-diagonal form,

$$
\hat{V}_{n \times m} = \begin{pmatrix} c\tilde{V}_{\gamma \times \gamma} & 0_{\gamma \times \gamma} & 0_{\gamma \times \gamma} & \cdots \\ 0_{\gamma \times \gamma} & c\tilde{V}_{\gamma \times \gamma} & 0_{\gamma \times \gamma} & \cdots \\ 0_{\gamma \times \gamma} & 0_{\gamma \times \gamma} & c\tilde{V}_{\gamma \times \gamma} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}, \quad \tilde{V}_{\gamma \times \gamma}\tilde{V}_{\gamma \times \gamma}^{T} = \gamma \cdot I_{\gamma}, \quad c = \sqrt{\frac{m}{\gamma}}, \tag{24}
$$

where *V*˜ *<sup>γ</sup>*×*<sup>γ</sup>* is any allowed deterministic sub-ensemble. The factor *c* accounts for the change from *γ* samples of *V*˜ *<sup>γ</sup>*×*<sup>γ</sup>* to the *m* > *γ* samples of *V* ^ *<sup>n</sup>*×*<sup>m</sup>*. By violating the normalization constraint *V* ^ *<sup>n</sup>*×*<sup>m</sup>V* ^ *<sup>n</sup>*×*<sup>m</sup> <sup>T</sup>* = *m* · *I*, the size of the ensemble can be 'compressed' from *m* to *γ* samples by moving all sub-matrices *cV*˜ *<sup>γ</sup>*×*<sup>γ</sup>* to the first block-column and skipping all zeros. The introduced constant *c* drops out as *m* → *γ*,

$$
\hat{V}_{n \times \gamma} \equiv \begin{pmatrix} \tilde{V}_{\gamma \times \gamma} \\ \tilde{V}_{\gamma \times \gamma} \\ \tilde{V}_{\gamma \times \gamma} \\ \vdots \end{pmatrix}, \quad \hat{V}_{n \times \gamma}\hat{V}_{n \times \gamma}^{T} = \gamma \cdot \begin{pmatrix} I_{\gamma \times \gamma} & I_{\gamma \times \gamma} & I_{\gamma \times \gamma} & \cdots \\ I_{\gamma \times \gamma} & I_{\gamma \times \gamma} & I_{\gamma \times \gamma} & \cdots \\ I_{\gamma \times \gamma} & I_{\gamma \times \gamma} & I_{\gamma \times \gamma} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix} \neq \gamma \cdot I_{n \times n}. \tag{25}
$$
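The block-diagonal construction and its compression can be checked with numpy; the sizes and the 2×2 Hadamard-type sub-ensemble are illustrative assumptions:

```python
import numpy as np

# Assumed illustrative sizes: sub-ensemble size gamma = 2 over 3 blocks,
# i.e. n = m = 6 before compression.
gamma, blocks = 2, 3
n = m = gamma * blocks

# Any allowed sub-ensemble with Vt @ Vt.T = gamma * I (here a 2x2 Hadamard matrix).
Vt = np.array([[1.0,  1.0],
               [1.0, -1.0]])

# Block-diagonal excitation matrix of Eq. 24; it satisfies V_hat V_hat^T = m I.
c = np.sqrt(m / gamma)
Vhat_full = np.kron(np.eye(blocks), c * Vt)          # n x m
full_ok = np.allclose(Vhat_full @ Vhat_full.T, m * np.eye(n))

# Compressed ensemble of Eq. 25: stack the sub-ensembles and drop the zeros.
Vhat_cmp = np.vstack([Vt] * blocks)                   # n x gamma
G = Vhat_cmp @ Vhat_cmp.T                             # gamma * I in every block
```

Every *γ*×*γ* block of the compressed product equals *γ·I*, but the full product is no longer proportional to the identity, which is exactly the normalization violation discussed above.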

Accordingly,


$$\operatorname{cov}(x)_{jk} \to \left(U^{T}SU V V^{T}U^{T}SU\right)_{jk} = \begin{cases} \operatorname{cov}(x)_{jk}, & |j-k| \le \gamma/2 \\ \operatorname{cov}(x)_{j,\,k+n\gamma}, & |j-k| > \gamma/2, \quad n = \arg\min_{l \in \mathbb{Z}} |j-k-l\gamma| \end{cases}. \tag{26}$$

The consequence of violating the normalization constraint is that only a limited diagonal band of cov(*x*) is correctly reproduced. If a non-parametric model of a signal is propagated through a system model with impulse response *h* of correlation length *σ* ≤ *γ* / 2, this will nevertheless *not* result in any error of var(*h* ), as it is independent of all faulty elements cov(*ν*) *jk*, | *j* − *k* | > *γ* / 2 ≥ *σ*. To correctly evaluate cov(*h* ) *uv* though, the size of the sub-ensembles *V*˜ *<sup>γ</sup>*×*<sup>γ</sup>* of correlated sampling must fulfill the stronger size constraint *γ* ≥ 2(max(*τ*, *σ*) + |*u* − *v* |). The symmetry of convolutions implies a corresponding result when the non-parametric model describes the impulse response *h* of the system, rather than a signal.

#### **5.5. Combining covariance**

A signal processing model generally includes both parametric and non-parametric sources of uncertainty. For instance, a device (parametric system model) may be fed with a signal corrupted with noise (non-parametric noise model). The question then arises how the two sources *qk* , *xk* of uncertainty can be combined. For propagation of uncertainty through LP models, the combined covariance is given by the Gauss approximation formula [16],

$$\text{cov}\left(y\right) = \sum\_{k} \text{cov}\left(y^{(k)}\right). \tag{27}$$

where cov(*y* (*k*)) is the propagated covariance of *qk*, *xk*. This will cease to apply for non-LP models. There exists no general non-linear summation rule for propagated covariance. A method of summation can be given though, if different ensembles are combined as in RS.
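For LP models the summation rule of Eq. 27 is exact. A minimal sketch (the linear model and the source covariances are invented for illustration) propagates two independent STD ensembles (Eq. 30) through *y* = *Aq* + *Bx* and checks the sum against the analytic result:

```python
import numpy as np

rng = np.random.default_rng(0)
nq, nx = 3, 4
A, B = rng.standard_normal((5, nq)), rng.standard_normal((5, nx))

# Independent source covariances and their matrix square roots
Cq = np.diag([1.0, 0.5, 0.2])
Cx = 0.1 * np.eye(nx)
Lq, Lx = np.linalg.cholesky(Cq), np.linalg.cholesky(Cx)

def std_ensemble(n):
    """STD excitation of Eq. 30: m = 2n samples, (1/m) V V^T = I."""
    return np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])

dq = Lq @ std_ensemble(nq)            # samples of q (columns)
dx = Lx @ std_ensemble(nx)            # samples of x

cov = lambda d: d @ d.T / d.shape[1]  # ensemble covariance (zero mean)
cov_yq = cov(A @ dq)                  # propagated cov(y^(q))
cov_yx = cov(B @ dx)                  # propagated cov(y^(x))

# Eq. 27: cov(y) = sum_k cov(y^(k)) = A Cq A^T + B Cx B^T for this LP model
assert np.allclose(cov_yq + cov_yx, A @ Cq @ A.T + B @ Cx @ B.T)
```

The identity holds to machine precision because the STD ensemble reproduces second moments exactly and the model is linear; for non-LP models the separately propagated covariances no longer add.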

Deterministic Sampling for Quantification of Modeling Uncertainty of Signals
http://dx.doi.org/10.5772/52193


To combine ensembles of parametric (*q*) and non-parametric models (*x*), collect all parameters, *q* → (*q<sup>T</sup>* *x<sup>T</sup>*)<sup>*T*</sup>, and diagonalize the enlarged covariance matrix, cov(*q*) = *U<sup>T</sup>S*<sup>2</sup>*U*. Build *V̂* with two blocks and use CRS (section 5.4) for the non-parametric model,

$$\hat{V}_{(n+k\gamma)\times(m+\nu)} \equiv \begin{pmatrix} \sqrt{1+c^{-1}}\cdot\hat{V}_{n\times m} & 0 \\ 0 & \sqrt{1+c}\cdot\tilde{V}_{\gamma\times\nu} \\ \vdots & \vdots \\ 0 & \sqrt{1+c}\cdot\tilde{V}_{\gamma\times\nu} \end{pmatrix}, \quad c = \frac{m}{\nu}, \quad \hat{V}_{(n+k\gamma)\times(m+\nu)}\left(\hat{V}_{(n+k\gamma)\times(m+\nu)}\right)^{T} = \left(m+\nu\right)\cdot I. \tag{28}$$

The scaling factors $\sqrt{1+c^{\pm 1}}$ may cause a similar scaling problem as the factor *n* in the UKF (section 4.2). Using extended excitation matrices, these factors can be eliminated,

$$\hat{V}_{(n+k\gamma)\times c} \equiv \begin{pmatrix} \hat{A}_{n\times c} \\ \hat{B}_{\gamma\times c} \\ \vdots \\ \hat{B}_{\gamma\times c} \end{pmatrix}, \quad \hat{E}_{(n+\gamma)\times c} \equiv \begin{pmatrix} \hat{A}_{n\times c} \\ \hat{B}_{\gamma\times c} \end{pmatrix}, \quad \hat{E}_{(n+\gamma)\times c}\left(\hat{E}_{(n+\gamma)\times c}\right)^{T} = c\cdot I, \quad c = \max\left(m,\nu\right) > \left(n+\gamma\right). \tag{29}$$

A disadvantage of this summation is that the same type of ensemble must be used for all parameters. Both alternatives combine the statistics of the two models non-linearly. The uncertainties are propagated and combined by evaluating the model for all samples and calculating the desired statistics, just as if the combined ensemble described one model.
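The stated normalization can be checked for a single noise segment (*k* = 1). The sketch below is an illustration only; the block sizes are chosen freely, with an STD ensemble for the parametric part and an orthogonal sub-ensemble for the noise:

```python
import numpy as np

n, gamma = 3, 4                 # parametric block size and noise segment length
m, nu = 2 * n, 4                # STD ensemble size (Eq. 30) and noise sub-ensemble size
c = m / nu

V_par = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])    # V^_{n x m}, V V^T = m * I
rng = np.random.default_rng(2)
V_sub, _ = np.linalg.qr(rng.standard_normal((gamma, nu)))
V_sub *= np.sqrt(nu)                                       # V~ V~^T = nu * I

# Eq. 28 with a single noise segment (k = 1)
top = np.hstack([np.sqrt(1 + 1 / c) * V_par, np.zeros((n, nu))])
bot = np.hstack([np.zeros((gamma, m)), np.sqrt(1 + c) * V_sub])
V = np.vstack([top, bot])

assert np.allclose(V @ V.T, (m + nu) * np.eye(n + gamma))  # stated normalization
```

For *k* > 1 the repeated sub-ensemble blocks share the same samples, so the reproduction is only band-correct, exactly as discussed for CRS in section 5.4.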

#### **5.6. Selected ensembles**

The standard (STD) ensemble employed in the UKF (as defined in section 4.2) utilizes the perhaps simplest possible excitation matrix,

$$
\hat{V}\_{\rm STD} = \sqrt{n} \cdot \begin{pmatrix} I\_{n \times n} & -I\_{n \times n} \end{pmatrix}, \quad m = 2n. \tag{30}
$$

While the ultimate simplicity is its main advantage, the long maximal(!) range *M* (*∞*) is its main disadvantage.

How far the reduction of samples might be driven is illustrated by the minimal simplex (SPX) ensemble,


$$
\hat{V}\_{\text{SPX}} = \sqrt{n+1} \cdot \bot \left\{ \begin{pmatrix} I\_{n \times n} & -1\_{n \times 1} \end{pmatrix} \right\}, \quad m = n+1,\tag{31}
$$

where the operator ⊥ performs classical Gram-Schmidt orthogonalization [11] and normalization of rows. The ensemble is constructed from half the STD ensemble, complemented with one sample 1*n*×1 to cancel the first moments. Since that violates the orthogonality of the rows of *V*, ⊥ must be applied to satisfy *V V<sup>T</sup>* = *I*. The high efficiency of the SPX ensemble is tarnished by its large skewness, or *M* (3). This may give considerable bias of propagated covariance for non-LP models, but is irrelevant for LP models.
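A sketch of the SPX construction, using a QR factorization for the Gram-Schmidt step (the size *n* = 5 is arbitrary):

```python
import numpy as np

n = 5
A = np.hstack([np.eye(n), -np.ones((n, 1))])   # (I_{n x n}  -1_{n x 1}); every row sums to zero

# Row-wise Gram-Schmidt orthonormalization, done here via a QR factorization of A^T
Q, _ = np.linalg.qr(A.T)                       # Q^T has orthonormal rows with the same row space
V_spx = np.sqrt(n + 1) * Q.T                   # Eq. 31, m = n + 1 samples

assert np.allclose(V_spx @ V_spx.T, (n + 1) * np.eye(n))   # (1/m) V V^T = I: second moments
assert np.allclose(V_spx @ np.ones(n + 1), 0.0)            # first moments cancel
```

The rows stay orthogonal to the all-ones vector under Gram-Schmidt, so the first moments cancel; the third moments do not, which is the skewness penalty mentioned above.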

The binary (BIN) ensemble has minimal range to guarantee allowed samples. By varying all parameters with an equal magnitude of one standard deviation in all samples, the diverging factor *n* of the STD is eliminated. Its excitation matrix *V̂*BIN is fundamentally constructed from a standard binary array, with the difference that the allowed levels are ±1 instead of 0, 1 (see rows 1-3 in Eq. 32). It is then complemented with supplementary rows obtained in two ways, by cyclic shifting and mirror imaging,

$$
\hat{V}_{\text{BIN}} = \begin{pmatrix} +1 & -1 & +1 & -1 & +1 & -1 & +1 & -1 & \cdots \\ +1 & +1 & -1 & -1 & +1 & +1 & -1 & -1 & \cdots \\ +1 & +1 & +1 & +1 & -1 & -1 & -1 & -1 & \cdots \\ -1 & +1 & +1 & -1 & -1 & +1 & +1 & -1 & \cdots \\ -1 & -1 & +1 & +1 & +1 & +1 & -1 & -1 & \cdots \\ +1 & -1 & +1 & -1 & -1 & +1 & -1 & +1 & \cdots \\ -1 & +1 & +1 & -1 & +1 & -1 & -1 & +1 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}, \quad m = 2^{\left\lceil (n+5)/4 \right\rceil}. \tag{32}
$$

Cyclic shifts are applied to all original rows except the first, by a quarter of their periodicity. Mirror imaging of a row is defined to change the sign of its second half and is applied to all original rows except the last two, and to all shifted rows except the last. For instance, in Eq. 32 rows 4 and 5 are shifted versions of rows 2 and 3, while rows 6 and 7 are the mirror images of row 1 and the shifted row 4. The supplementary rows reduce the size of the ensemble drastically, with a corresponding improvement of the efficiency. For *n* = 20 parameters, the size drops from roughly 10<sup>6</sup> to 128 samples. That size is acceptable in perspective of the *n* + 1 = 21 samples of the most efficient SPX. Eventually though, the number of samples will grow too large. The BIN can thus only be applied to moderately sized models.
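The shift and mirror rules can be exercised on the first eight columns. The sketch below generates rows 4-7 from the stated rules rather than copying them, and checks the properties the BIN ensemble is designed for:

```python
import numpy as np

# Rows 1-3: a standard binary array with levels +-1 (first eight columns of Eq. 32)
r1 = np.array([+1, -1, +1, -1, +1, -1, +1, -1])
r2 = np.array([+1, +1, -1, -1, +1, +1, -1, -1])
r3 = np.array([+1, +1, +1, +1, -1, -1, -1, -1])

r4 = np.roll(r2, 1)                    # cyclic shift by a quarter of the period of row 2 (4/4 = 1)
r5 = np.roll(r3, 2)                    # cyclic shift by a quarter of the period of row 3 (8/4 = 2)
mirror = lambda r: np.concatenate([r[:4], -r[4:]])   # sign change of the second half
r6, r7 = mirror(r1), mirror(r4)        # mirror images of row 1 and the shifted row 4

V = np.vstack([r1, r2, r3, r4, r5, r6, r7])
assert np.allclose(V @ V.T, 8 * np.eye(7))    # rows are mutually orthogonal
assert np.allclose(V.sum(axis=1), 0)          # first moments cancel

# Ensemble size for n parameters: m = 2^ceil((n+5)/4), i.e. 128 samples for n = 20
assert 2 ** int(np.ceil((20 + 5) / 4)) == 128
```

All seven rows are mutually orthogonal with unit-magnitude levels, so the range is minimal while first and second moments are still reproduced.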

By no means does this brief survey exhaust all possible ensembles. Many criteria for selecting the most appropriate ensemble can be formulated. Here, the first and second moments, parameter ranges and efficiency were in focus.


## **6. Application — Modeling uncertainty of a dynamic device**

The task is to simulate the response of an electrical device such as an amplifier or oscilloscope, in the presence of non-stationary correlated noise on its input. An uncertain LR CT model of the device and its parametric covariance is usually found by applying system identification techniques [4] on calibration measurements [6]. Such a model of the system can be sampled into a digital filter and be described in the pole-zero form in Eq. 2. These standard steps will here be omitted. The system model will instead be assigned to a digital low-pass Butterworth filter, of order 10 and cross-over frequency *f<sub>C</sub>* = 0.1 *f<sub>N</sub>*, *f<sub>N</sub>* being the Nyquist frequency, and described by parameters *K*, *p*<sub>1</sub>, *p*<sub>2</sub>, … *p*<sub>10</sub>, *z*<sub>1</sub>, *z*<sub>2</sub>, … *z*<sub>10</sub>. The complete correlations of complex-conjugated pole (*p*) and zero (*z*) pairs are eliminated by a transformation from *q* = (*z*, *p*) to Re(*q*), Im(*q*) ≥ 0, giving *n* = 21 system model parameters,

$$q \equiv \begin{pmatrix} \text{K} & \text{Re}\begin{pmatrix} z\_1 \end{pmatrix} & \text{Im}\begin{pmatrix} z\_1 \end{pmatrix} \ge 0 & \cdots & \text{Re}\begin{pmatrix} p\_1 \end{pmatrix} & \text{Im}\begin{pmatrix} p\_1 \end{pmatrix} \ge 0 & \cdots & \text{Re}\begin{pmatrix} p\_{10} \end{pmatrix} & \text{Im}\begin{pmatrix} p\_{10} \end{pmatrix} \ge 0 \end{pmatrix}^T. \tag{33}$$
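The counting of the *n* = 21 parameters in Eq. 33 can be illustrated with a standard filter-design routine. The sketch below assumes SciPy's `butter` is available; it is only meant to show the transformation to conjugate-pair representatives, not the chapter's identification procedure:

```python
import numpy as np
from scipy.signal import butter

# Digital low-pass Butterworth, order 10, cross-over at 0.1 of the Nyquist frequency
z, p, K = butter(10, 0.1, output='zpk')

# Keep one member of each complex-conjugated pair, i.e. Re(q) and Im(q) >= 0
p_half = p[p.imag > 0]                 # 5 pole representatives
z_half = np.sort_complex(z)[::2]       # 5 zero representatives (all zeros sit at z = -1)

q = np.concatenate([[K],
                    np.column_stack([z_half.real, np.abs(z_half.imag)]).ravel(),
                    np.column_stack([p_half.real, np.abs(p_half.imag)]).ravel()])
assert q.size == 21                    # n = 21 system model parameters, as in Eq. 33
assert np.all(np.abs(p) < 1)           # a stable digital filter
```

One gain, five zero pairs and five pole pairs, each pair contributing a real and a non-negative imaginary part, gives 1 + 10 + 10 = 21 free parameters.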

To be most general, the non-parametric input noise model is chosen to be correlated/colored and non-stationary. The noise parameter *δxk* represents the noise level at time sample *k*. Its *generating signal* [7] is a Dirac delta function *δjk*, centered at time *k*. The response of a system with impulse response *h* will be *δyj* = *δxk* · (*hj* ∗ *δjk*) = *h<sub>j−k</sub>* · *δxk*. In matrix notation, *δy* = *h̄<sup>T</sup>δx*, where *h̄kj* = *h<sub>j−k</sub>*. Hence,

$$\text{cov}\left(y\right) = \left\langle \delta y \delta y^T \right\rangle = \overline{h}^T \left\langle \delta \mathbf{x} \delta \mathbf{x}^T \right\rangle \overline{h} = \overline{h}^T \text{cov}\left(\mathbf{x}\right) \overline{h} \tag{34}$$

Since the response is linear in noise parameters, it is sufficient to only capture cov(*x*).
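The matrix form *δy* = *h̄<sup>T</sup>δx* is ordinary convolution written out. A quick sketch with an arbitrary short impulse response (the numbers are invented):

```python
import numpy as np

N = 16
h = np.array([0.5, 0.3, 0.15, 0.05])   # an arbitrary short causal impulse response

# h^bar_{kj} = h_{j-k}: each row is the impulse response translated to start at sample k
Hbar = np.zeros((N, N))
for k in range(N):
    Hbar[k, k:k + h.size] = h[: N - k]

x = np.random.default_rng(3).standard_normal(N)
# dy = h^bar^T dx is the (truncated) convolution of h with x
assert np.allclose(Hbar.T @ x, np.convolve(h, x)[:N])

# Eq. 34: the noise covariance propagates as cov(y) = h^bar^T cov(x) h^bar
cov_x = np.diag(np.linspace(0.5, 2.0, N))   # non-stationary white input noise
cov_y = Hbar.T @ cov_x @ Hbar
assert np.allclose(cov_y, cov_y.T)          # symmetric, as a covariance must be
```

Since the output is linear in the noise, the exact Eq. 34 applies regardless of how the system model itself depends on its parameters.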

#### **6.1. Reference ensembles**

Traditionally, any method for uncertainty propagation is evaluated by comparisons with the default method of linearization [10,16], and brute-force random sampling (MC) [9] as state-of-the-art. There are several drawbacks of this approach. Linearization is a coarse approximation for non-LP models and MC suffers from the difficulty of modeling dependencies and low efficiency. An alternative is to construct finite reference ensembles (REF) and by definition let them describe the truth. Their primary advantage is that the finite size of the REF makes it possible to propagate the uncertainty exactly, using all REF samples. A more or less arbitrary REF may be generated randomly, like any MC ensemble. All requirements are also automatically fulfilled since the REF is built of possible realizations. Also, the REF closes the loop as it makes it possible to compare 'true' and approximate samples directly on an equal footing (see Fig. 8). Even though the samples differ substantially, the resulting modeling uncertainties can be similar.


A plausible REF *δqj* for the system model realized as a digital filter is created by randomly generating *m* samples of *n* parameters *qk* from uniform distributions UNI(0, *σk*), with *σk* listed in Fig. 3, top left. The joint pdf will have compact support [11], as required to guarantee stability. The mean is subtracted from all samples to remove the bias of the finite random ensemble, ⟨*δqj*⟩*E* = *δqj* = 0, ∀*j*. The covariance of the REF will have a desirable, more or less random variation for small values of *m*. If the REF samples are arranged in columns of a matrix *Λ̂n*×*m* (as *V̂*), cov(*q*)REF = 1/*m* · *Λ̂Λ̂<sup>T</sup>*. For *m* = 31 > *n* = 21, the strong correlations will expose the methods to severe tests with significant transformations *U*, *S*. The mixing *Ψ* (Eq. 13) using transformation *U<sup>T</sup>SUS*<sup>−1</sup> was considerable, but less than for *U<sup>T</sup>*, see caption of Fig. 3. For the chosen REF, the resulting variations of poles and zeros are displayed in Fig. 3. The obtained variation of the parameters defined in Eq. 33 can be quantified with an averaged correlation index and standard deviation (Fig. 3, bottom left),

$$\xi_{k} \equiv \sqrt{\frac{1}{n-k}\sum_{j=1}^{n-k}\text{cov}\left(q\right)_{j\left(j+k\right)}} \Bigg/ \sqrt{\frac{1}{n}\sum_{j=1}^{n}\sigma_{j}^{2}}, \quad \sigma_{j}^{2} = \text{var}\left(q_{j}\right). \tag{35}$$
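The REF construction and the correlation index of Eq. 35 can be sketched directly. The assigned variations below are placeholder values, not the ones listed in Fig. 3:

```python
import numpy as np

n, m = 21, 31
rng = np.random.default_rng(4)
sigma = rng.uniform(0.05, 0.5, n)                  # assigned variations (placeholder values)

# Random REF with the finite-ensemble bias removed: <dq>_E = 0
Lam = rng.uniform(0.0, sigma[:, None], (n, m))     # q_k ~ UNI(0, sigma_k), samples in columns
Lam -= Lam.mean(axis=1, keepdims=True)
cov_q = Lam @ Lam.T / m                            # cov(q)_REF = (1/m) * Lambda Lambda^T

var_q = np.diag(cov_q)

def xi(k):
    """Averaged correlation index of Eq. 35 (undefined when the band average is negative)."""
    band = np.mean([cov_q[j, j + k] for j in range(n - k)])
    return np.sqrt(band) / np.sqrt(var_q.mean())

assert np.isclose(xi(0), 1.0)                      # lag zero equals one by construction
assert np.allclose(Lam.mean(axis=1), 0.0)
```

At lag zero the index equals one by construction; for lags where the averaged off-diagonal covariance is negative, the square root in Eq. 35 is not defined.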

A REF signal *δxj* for non-stationary correlated noise may conveniently be generated from an autoregressive (AR) process acting on time-dependent zero mean white noise *δw*, *δx* = *ḡ<sup>T</sup>δw*, where *ḡkj* = *g<sub>j−k</sub>* is the matrix of translated impulse responses *g* for the AR process defined by parameters *αk*. Assigning a square wave time-dependence,

$$\sum_{k}\alpha_{k}\,\delta x_{j-k} = \delta w_{j}, \quad \left\langle \delta x\,\delta x^{T}\right\rangle = \bar{g}^{T}\left\langle \delta w\,\delta w^{T}\right\rangle \bar{g} = \bar{g}^{T}\,\text{cov}\left(w\right)\bar{g}, \quad \text{cov}\left(w\right) = \text{diag}\left[\eta^{2}\left(t\right)\right]$$

$$\eta\left(t\right) = N\left[1+\psi\,\theta\!\left(\cos\left(\frac{2\pi t}{T}+\varphi\right)\right)\right], \quad \theta\left(x\right) = \begin{cases} +(-)1, & x>(<)\,0 \\ 0, & x=0 \end{cases} \tag{36}$$
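A sketch verifying that the matrix form *δx* = *ḡ<sup>T</sup>δw* solves the AR recursion, using the second-order coefficients *α* = (1 −0.4 0.6) quoted later in this section (the record length is arbitrary):

```python
import numpy as np

alpha = np.array([1.0, -0.4, 0.6])     # second-order AR model, sum_k alpha_k dx_{j-k} = dw_j
N = 64

def ar_filter(drive):
    """Run the AR recursion with zero initial conditions."""
    x = np.zeros(N)
    for j in range(N):
        past = sum(alpha[k] * x[j - k] for k in range(1, 3) if j >= k)
        x[j] = drive[j] - past
    return x

g = ar_filter(np.eye(N)[0])            # impulse response of the AR process

# Matrix of translated impulse responses, g^bar_{kj} = g_{j-k}
Gbar = np.zeros((N, N))
for k in range(N):
    Gbar[k, k:] = g[: N - k]

dw = np.random.default_rng(5).standard_normal(N)   # white noise record
assert np.allclose(Gbar.T @ dw, ar_filter(dw))     # dx = g^bar^T dw solves the recursion
```

Scaling the white-noise record by the square wave *η*(*t*) of Eq. 36 before filtering then yields the non-stationary correlated REF signal.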

The exact REF result of modelling covariance of noise is given by combining Eqs. 34 and 36,

$$\sigma\_N = \sqrt{\text{Tr}\left(\text{cov}\left(y\right)\right)} = \sqrt{\text{Tr}\left(\overline{h}^T \overline{g}^T \text{diag}\left[\eta^2\right] \overline{g} \overline{h}\right)}.\tag{37}$$

An explicit realization of the REF for the noise model is hence not needed. Specifically, a second order system *α* = (1 −0.4 0.6) with time parameters {*N*, *ψ*, *T*, *φ*} = {0.05, 0.3, 2*f<sub>C</sub>*<sup>−1</sup>, *π*/8} was chosen. The impulse responses of the AR noise system and the system model, and the variation of the noise model are illustrated in Fig. 3, bottom right.

*Deterministic Sampling for Quantification of Modeling Uncertainty of Signals*
Jan Peter Hessling, Measurement Technology, SP Technical Research Institute of Sweden, Borås, Sweden

**Figure 3.** Top: Assigned variations (left) and resulting (*zk* middle, *pk* right) samples of the REF of the system model. Label *P* indicates the pole explored in Fig. 8. Bottom, left: Obtained variations σ*k* (dots) and correlations ξ*k* (bars) of parameters *q* (Eq. 33), with mixing (Eq. 11) Ψ(*U<sup>T</sup>SUS*<sup>−1</sup>) = 0.22 (adopted) and Ψ(*U<sup>T</sup>*) = 0.39. Bottom, right: Impulse responses *h*(*q*, *t*) and *g*(*t*) and time-dependence η(*t*) (Eq. 36) of noise intensity. The correlation lengths λ*h*, λ*g* were determined according to Eqs. 22-23, for β = 0.05.

The 'true' result given by the response for the REFs for the different test signals is shown in Fig. 4. The propagated noise variation *σN* differs substantially from the input square wave *η* (top left) and is almost opposite in phase, due to the response time of about *f<sub>C</sub>*<sup>−1</sup>, see delay of *μS* (top, right and bottom). The signal distortion (*μS*) is strongly dependent on the input signal and decreases with increased regularity/differentiability. The propagated covariance *σS* has a more complex variation (top, right and bottom), as it is larger for the more regular Gaussian (bottom, right) than for the triangular pulse (bottom, left).

**Figure 4.** The mean μ*S* (dashed), the standard deviations σ*S*,*N* (solid) and the scent ζ (thin, solid) for the REFs, for the different test signals (thin, dashed). The subscripts refer to the system (*S*) and noise (*N*) models. The variation of noise intensity is given by η(*t*) (top, left).

#### **6.2. Deterministic sampling**

The error of the scent and the standard deviation for the STD, SPX and BIN ensembles of the system model is displayed in Fig. 5, for all test signals. The low scent of the REF (left: thin, dotted) suggests the model is close to LP. Although the relative errors are large, they are quite small on an absolute scale. The SPX has the largest errors, for the scent as well as the variance. That is likely caused by its skewness being much larger than that of the REF. The BIN has the lowest errors and is thus the best approximate representation of the REF.

(Fig. 3, top left, assigned variations: 2% for *K*, 0.2% for Re *zk*, Im *zk*, and 0.9% for Re *pk*, Im *pk*.)

**Figure 5.** The errors of the scent REF

Deterministic Sampling for Modeling Uncertainty <sup>21</sup>

5 of the system model (solid) for the STD (top), SPX (middle) and BIN (bottom) and the three test signals (thin, dashed). The correct REF (left) and S,REF 6 (right) are included for 7 comparison (thin, dotted). The triangular and Gaussian signals are displaced for clarity. **Figure 5.** The errors of the scent ζ −ζREF (left) and the standard deviation σS−σS,REF (right) of the system model (solid) for the STD (top), SPX (middle) and BIN (bottom) and the three test signals (thin, dashed). The correct ζREF (left) and σS,REF (right) are included for comparison (thin, dotted). The triangular and Gaussian signals are displaced for clarity.

S,REF 4 (right)

(left) and the standard deviation
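All of the deterministic ensembles compared in Fig. 5 share one constraint: they reproduce the known mean and covariance of the parameters exactly. A minimal Python sketch of an STD-type construction illustrates this moment matching (the function `std_ensemble` and the numeric values are illustrative assumptions, not the chapter's code):

```python
import numpy as np

def std_ensemble(mu, cov):
    """STD-type deterministic ensemble: 2n samples, one +/- excitation per
    principal direction, reproducing mean and covariance exactly."""
    n = len(mu)
    L = np.linalg.cholesky(cov)        # cov = L @ L.T
    spread = np.sqrt(n) * L.T          # row i: i-th scaled excitation
    return np.vstack([mu + spread, mu - spread])

# Two correlated model parameters (illustrative values only).
mu = np.array([0.90, 0.29])
cov = np.array([[4e-4, 1e-4],
                [1e-4, 2e-4]])

E = std_ensemble(mu, cov)              # 4 samples for n = 2
assert np.allclose(E.mean(axis=0), mu)
assert np.allclose(np.cov(E, rowvar=False, bias=True), cov)
```

Propagating such an ensemble through a model and taking sample statistics of the outputs is then identical in form to random sampling, only with far fewer samples.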


Deterministic Sampling for Quantification of Modeling Uncertainty of Signals

http://dx.doi.org/10.5772/52193

The errors might appear large, considering *all* ensembles are 'correct', i.e. correctly represent (typically) available accurate information (mean and covariance of parameters). The errors reflect ambiguities caused by the ubiquitous lack of information in signal processing, rather than inadequacies of DS. RS can only produce better results by making further *assumptions*.

The result of applying the ESV and the CRS methods to reduce the SPX ensemble for propagating the noise is displayed in Fig. 6. By choosing sufficiently low thresholds *α* for elimination of singular values (ESV) and *β* for truncation of the correlation lengths (see Eqs. 20, 22), the errors can be made arbitrarily low. As the reduction will decrease accordingly, there is a trade-off between accuracy and efficiency. For the chosen values, CRS is about twice as accurate and twice as efficient as ESV. In contrast to ESV, the number of samples for the CRS method is independent of the number of noise samples. The computational cost thus increases linearly with the length of the noise signal for CRS but quadratically (approximately) for ESV. For ESV to be most efficient, the model covariance needs to be strongly rank deficient. That is not as unlikely as it might appear, since the model usually is derived from a limited amount of experimental results.
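The ESV step, discarding principal directions whose singular values fall below a threshold relative to the largest, can be sketched as follows. This is a hedged illustration: `esv_reduce`, the exponential noise correlation model and the exact thresholding rule are assumptions, not the chapter's implementation of Eq. 20.

```python
import numpy as np

def esv_reduce(cov, alpha=0.1):
    """Keep only principal directions with singular value > alpha * largest,
    and span them with +/- sigma-point pairs (zero mean)."""
    U, s, _ = np.linalg.svd(cov)
    k = int(np.sum(s > alpha * s[0]))      # retained rank
    L = U[:, :k] * np.sqrt(s[:k])          # cov ~= L @ L.T (truncated)
    spread = np.sqrt(k) * L.T
    return np.vstack([spread, -spread])    # 2k samples instead of 2n

# A long noise record with short correlation length gives a strongly
# rank-deficient covariance, the favourable case for ESV.
N = 200
i = np.arange(N)
cov = np.exp(-np.abs(i[:, None] - i[None, :]) / 3.0)
E = esv_reduce(cov, alpha=0.1)
print(E.shape[0], "samples instead of", 2 * N)
```

The retained rank *k*, rather than the record length *N*, then sets the ensemble size, mirroring the rank-deficiency remark above.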


**Figure 6.** The error σN − σN,REF of propagated noise, for the ESV (section 5.3) and CRS (section 5.4) ensemble reduction methods, and the correct σN,REF (thin, ×1 / 10). The thresholds were α = 0.1 (Eq. 20) for ESV, and β = 0.05 (Eq. 22) for CRS. That resulted in *m* = 142 samples for ESV and *m* = 75 for CRS, compared to *m* = 402 of the original SPX.

The summation of the noise and the model covariance is illustrated in Fig. 7. The propagation of the covariance of the system model (*q*) is not LP. The quadratic summation rule (Eq. 27), or Gauss approximation formula [16], is therefore not applicable. Nevertheless, the low scent *ζ* (Fig. 5, left) suggests that both propagations are close to LP. The summation error (*ε*) is hence finite, but quite small. It differs qualitatively from both contributions, indicating that the summation is non-trivial.

**Figure 7.** Summation of covariance: Total (solid), system (dashed) and noise (dotted), for the three test signals (thin, dashed), with the error (ε) of square summation (×10) (Eq. 27).

Finally, the samples of one pole of the derived ensembles are compared to the reference samples of the REF in Fig. 8. The limit (| *z* | = 1) of stability is included to illustrate how close the samples are to being physically forbidden. The construction of the different ensembles is apparent, even though the transformation *T* = *U*<sup>T</sup>*SU S*<sup>−1</sup> distorts the scatter plots (sections 3.2, 5.2) and tilts the principal axes (lines). The samples of the REF are almost evenly distributed. Only four samples of the STD, labelled *p*1, *p*2, *p*3, *p*4, deviate significantly from a dense central cluster, as described by the excitation matrix *V̂*STD (Eq. 30). It is also evident that the SPX originates from half the STD: a small translation required to achieve the correct mean is discernible, while the Gram-Schmidt orthogonalization renders a minor rotation and distortion. The BIN contains comparable variations in all samples; it thus has no central cluster, and its samples are repelled from the principal directions (lines). The statistical differences from the REF concern the shape of the joint pdf. Choosing the best ensemble is thus equivalent to selecting the most appropriate pdf in RS. The BIN seems to resemble the REF scatter plot the most, as verified by its low errors in Fig. 5.

**Figure 8.** The different samples (dots) of the pole marked 'P' in Fig. 3, of the reference (REF), standard (STD), simplex (SPX) and binary (BIN) ensembles. The limit | *z* | = 1 of stability (solid, thick) and lines connecting the primary variations *p*1, *p*2, *p*3, *p*4 of the STD, as well as lines (dashed) to combined excitations of the BIN, are included for reference.
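The stability screening behind Fig. 8, checking each sampled pole against the limit | *z* | = 1, is easy to automate. The pole values below are invented for illustration (roughly in the range plotted in Fig. 8) and are not the chapter's data:

```python
import numpy as np

# Hypothetical samples of one complex pole from a small ensemble.
poles = np.array([0.92 + 0.30j, 0.90 + 0.27j, 0.88 + 0.30j, 0.90 + 0.33j])

margins = 1.0 - np.abs(poles)   # distance to the stability limit |z| = 1
print(f"smallest stability margin: {margins.min():.3f}")
assert np.all(np.abs(poles) < 1.0), "a sample is unstable (physically forbidden)"
```

A negative margin for any sample would flag a physically forbidden ensemble member, which is exactly what plotting the samples against the unit circle reveals visually.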

## **7. Conclusions**

Deterministic sampling remains controversial [27] while random sampling has qualified as a preferred state-of-the-art method for propagating uncertainty. Both result in finite *statistical* [17] ensembles, which are approximate finite representations of the primary statistical models. Their sampling strategies and convergence rates are dramatically different. While deterministic sampling humbly aims at representing the most relevant and best known statistical information, random sampling targets complete control of all features of the ensemble. Such detailed information is rarely known and must instead be more or less blindly assigned. The inevitable consequence is that critical computational resources are spent on propagating, at best, vaguely known details. The numerical power of modern computers is better spent on refinements of the signal processing model (longer time series, higher sampling rates, larger systems, etc.). Refined methods of random sampling have therefore been proposed which either simplify the model or improve the sampling distributions. Compared to deterministic sampling, though, their convergence rates remain low.

It is easy to confuse deterministic sampling with experimental design and optimization [28]. Even though any sample could be a possible outcome of an experiment, deterministic ensembles *represent* rather than *realize* (as random ensembles do) statistical distributions. Instead of associating a joint distribution to the parameters of an uncertain model, it is possible to directly represent their statistics with a deterministic ensemble. That would eliminate the need to interpret abstract distributions and result in complete reproducibility. The critical choice of ensemble would be assigned once and for all in the calibration experiment, with no further need of approximation.

The use of excitation matrices made it possible to construct universal generic ensembles. The efficiency of the minimal SPX ensemble is indeed high, but so is its third moment. While the STD maximizes the range of each parameter, the BIN minimizes it by varying all parameters in all samples. The STD is the simplest ensemble, while the SPX is the most efficient. In the example, the BIN was the most accurate. For non-parametric models with many parameters, reduction of samples may be required; elimination of singular values (ESV) and correlated sampling (CRS) were two such techniques. The presented ensembles are not to be regarded as a fixed method on a par with random sampling. They are nothing but a few examples of deterministic sampling; likely, the best ensembles are yet to be discovered.
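As an illustration of the BIN principle of varying all parameters in all samples, a full factorial sign matrix reproduces the mean and covariance exactly while keeping each individual excursion small. This is a hedged sketch: the chapter's actual BIN excitation matrix (Eq. 30) may be constructed differently.

```python
import numpy as np
from itertools import product

def bin_ensemble(mu, cov):
    """BIN-style ensemble: every parameter is excited in every sample
    (all 2^n sign patterns), so no single excursion is large."""
    n = len(mu)
    L = np.linalg.cholesky(cov)
    V = np.array(list(product([-1.0, 1.0], repeat=n)))  # 2^n sign patterns
    return mu + V @ L.T

mu = np.array([1.0, 0.5, -0.2])
cov = np.diag([0.04, 0.01, 0.09])
E = bin_ensemble(mu, cov)          # 8 samples for n = 3
assert np.allclose(E.mean(axis=0), mu)
assert np.allclose(np.cov(E, rowvar=False, bias=True), cov)
```

The price is exponential growth of the ensemble with the number of parameters, which is one reason sample reduction such as ESV or CRS becomes necessary for models with many parameters.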

It is indeed challenging but also rewarding to find novel deterministic sampling strategies. Once the sampling rules are found, the application is just as simple as random sampling, but usually much more efficient. Deterministic sampling is one of very few methods capable of non-linear propagation of uncertainty through large signal processing models.

## **Author details**


Jan Peter Hessling\*

Measurement Technology, SP Technical Research Institute of Sweden, Borås, Sweden

### **References**

[1] Kay S. Fundamentals of Statistical Signal Processing: Estimation Theory. New Jersey: Prentice Hall; 1993.

[2] Hessling JP. Propagation of dynamic measurement uncertainty. Meas. Sci. Technol. 2011; 22 (10) 105105 (13pp).

[3] Hessling JP. Integration of digital filters and measurements. In: Márquez FPG. (ed.) Digital Filters. Rijeka: InTech; 2011. p. 123-154. Available from http://www.intechopen.com/books/digital-filters/integration-of-digital-filters-and-measurements (accessed 4 July 2012).

[4] Pintelon R, Schoukens J. System Identification: A Frequency Domain Approach. Piscataway, New Jersey: IEEE Press; 2001.

[5] Witteveen JAS. Efficient and Robust Uncertainty Quantification for Computational Fluid Dynamics and Fluid-Structure Interaction. PhD thesis. Delft University of Technology; 2009.

[6] Hale PD, Dienstfrey A, Wang JCM, Williams DF, Lewandowski A, Keenan DA, Clement TS. Traceable Waveform Calibration With a Covariance-Based Uncertainty Analysis. IEEE Trans. Instrum. Meas. 2009; 58 (10) 3554-3568.

[7] Hessling JP. Metrology for non-stationary dynamic measurements. In: Sharma MK. (ed.) Advances in Measurement Systems. Vukovar: InTech; 2010. p. 221-256. Available from http://www.intechopen.com/books/advances-in-measurement-systems/metrology-for-non-stationary-dynamic-measurements (accessed 4 July 2012).

[8] Metropolis N, Ulam S. The Monte Carlo Method. Journal of the American Statistical Association 1949; 44 (247) 335-341.

[9] Rubenstein RY, Kroese DP. Simulation and the Monte Carlo Method, 2nd Ed. New York: John Wiley & Sons Inc.; 2007.

[10] Hessling JP. A novel method of evaluating dynamic measurement uncertainty utilizing digital filters. Meas. Sci. Technol. 2009; 20 (5) 055106 (11pp).

[11] Råde L, Westergren B. Beta Mathematics Handbook, 2nd Ed. Lund, Sweden: Studentlitteratur; 1990.

[12] Hessling JP, Svensson T. Propagation of uncertainty by sampling on confidence boundaries. Accepted for publication in *International Journal for Uncertainty Quantification*.

[13] Hessling JP. Deterministic sampling for propagating model covariance. Submitted for publication.

[14] Lovett T. Polynomial Chaos Simulation of Analog and Mixed-Signal Systems: Theory, Modeling Method, Application. Saarbrucken: Lambert Academic Publishing; 2010.

[15] Li H, Zhang D. Probabilistic collocation method for flow in porous media: Comparisons with other stochastic methods. Water Resources Research 2007; 43 W09409 (13pp).

[16] ISO GUM. Guide to the Expression of Uncertainty in Measurement. Geneva: International Organisation for Standardisation; 1995.

[17] Metropolis N. The Beginning of the Monte Carlo Method. Los Alamos Science special issue 1987; 15 125-130.

[18] Helton J, Davis L. Latin hypercube sampling and the propagation of uncertainty in analyses of complex systems. Reliability Engineering and System Safety 2003; 81 23-69.

[19] Björk Å. Numerical Methods for Least Squares Problems. Philadelphia: SIAM; 1996.

[20] Wikipedia: http://en.wikipedia.org/wiki/Latin\_square (accessed 3 July 2012).

[21] Julier S, Uhlmann J, Durrant-Whyte HF. A new approach for filtering nonlinear systems. Proc. IEEE American Control Conference June 21-23 1995; 1628-1632.

[22] Julier S, Uhlmann J. Unscented filtering and nonlinear estimation. Proceedings of the IEEE March 2004; 92 (3) 401-422.

[23] Simon D. Optimal State Estimation: Kalman, H∞ and Non-linear Approaches. New Jersey: Wiley; 2006.

[24] Matlab with Signal Processing Toolbox, The MathWorks, Inc.

[25] Julier S, Uhlmann J. The scaled unscented transformation. Proceedings of the IEEE American Control Conference 8-10 May 2002; 4555-4559.

[26] Hessling JP. Non-linear propagation and summation of covariance using deterministic sampling. In preparation.

[27] Gustafsson F, Hendeby G. Some Relations between Extended and Unscented Kalman Filters. IEEE Trans. Sign. Proc. 2012; 60 (2) 545-555.

[28] Fisher RA. Statistical Methods, Experimental Design and Scientific Inference. New York: Oxford University Press; 1990.

**Chapter 4**


## **Direct Methods for Frequency Filter Performance Analysis**

## Alexey Mokeev

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/52192

## **1. Introduction**

In automatic control theory, the performance of linear systems is estimated both by direct methods based on determining system performance specifications from the step response and by indirect methods: the pole-zero plot, the magnitude response and integral analysis methods [1,2,3]. However, in many cases these methods yield only a crude estimate of the performance of a linear system (filter). Furthermore, direct characterization of linear system performance specifications (settling time, accuracy, overshoot, etc.) requires a large amount of calculation.

Specifying or estimating signal processing performance criteria is a common task in frequency filter analysis. In some cases it is sufficient to examine the filter behavior at the average statistical parameters of the useful signal and its disturbance. In other cases, for instance for robust filters, the task is considerably more complicated: one needs to determine the limiting values of the signal processing performance specifications over all possible variations of the input signal parameters.

The author proposes using filter analysis methods, developed by him on the basis of spectral representations of the Laplace transform, to solve efficiently the problem of determining signal processing performance specifications of frequency filters under different variations of the input signal parameters [4,5,6]. These methods rely on consistent mathematical models of input signals and filter impulse characteristics as sets of continuous/discrete semi-infinite or finite damped oscillatory components. Such models can describe simple semi-infinite harmonic and aperiodic signals or filter impulse characteristics, compound signals of any form, including signals with composite envelopes, as well as pulse signals (radio and video pulses).

© 2013 Mokeev; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The application of signal/filter frequency and frequency-time representations based on the Laplace transform has allowed the development of simple and effective direct methods for performance analysis of signal processing by analog and digital filters.


To simplify the task of analog and digital filter signal processing performance analysis, the author offers two methods for express performance analysis of signal processing by frequency filters using filter frequency responses based on the Laplace transform: the frequency and frequency-time analysis methods [7].

By means of the Laplace transform spectral representations, the frequency method has in effect been transformed from an indirect into a direct method of signal processing analysis. This method is most effective when only the two main performance specifications, signal processing speed and accuracy, need to be evaluated.

The frequency-time analysis method is applied when one needs to evaluate not only signal processing speed and accuracy but also the history of the transient process in a filter, for instance to control the oscillation of the filter transient. The analysis is performed using frequency responses based on the time-dependent filter transfer function, taken in sections at the complex frequencies of the input signal.

In the case of FIR filters, an effective estimation of signal processing performance specifications can be carried out by 3D analysis of the filter frequency response based on the Laplace transform, in sections at the complex frequencies of the input signal, taking their variation into account. To evaluate signal processing performance specifications for IIR filters, one will additionally need sections of the 3D signal spectrum at the complex frequencies of the filter impulse response [5].

The application of the analog and digital filter analysis methods developed by the author to the performance analysis of signal processing by frequency filters is considered below.

## **2. IIR filters analysis**

#### **2.1. Signal processing performance analysis by analog IIR filters**

Let us consider signal processing performance analysis by IIR filters at semi-infinite input signals on the basis of analysis methods based on the Laplace transform spectral representations.

Three methods of frequency filter analysis are suggested by the author from the standpoint of time-and-frequency representations of signals and linear systems in coordinates of complex frequency [5,6]. Let us consider the first two methods for signal processing performance analysis by frequency filters.

The mathematical description of the generalized input signal and IIR filter in the time and frequency domains for the first (item 2) and second (item 3) analysis methods, together with the mathematical expressions for calculating the forced and free components of the filter reaction by the first method (items 4, 5) and the components of the filter reaction by the second method (item 6), are given in Table 1.

| № | Name | Expression | Remark |
|---|------|------------|--------|
| 1. | Input signal | x(t) = Re(Ẋᵀe^(pt)), X(p) = Re(Ẋᵀ[1/(p − p_n)]_N) | Ẋ = [Ẋ_n]_N = [X_mn e^(−jφ_n)]_N, p = [p_n]_N = [−β_n + jω_n]_N |
| 2. | Filter | g(t) = Re(Ġᵀe^(qt)), K(p) = Re(Ġᵀ[1/(p − ρ_m)]_M) | Ġ = [Ġ_m]_M = [k_m e^(−jϕ_m)]_M, q = [ρ_m]_M = [−α_m + jw_m]_M |
| 3. | Time-dependent transfer function | K(p, t) = ∫₀ᵗ g(τ)e^(−pτ)dτ = Re(Ġᵀ[(1 − e^(−(p−ρ_m)t))/(p − ρ_m)]_M) | |
| 4. | Forced components | y₁(t) = Re(Ẏᵀe^(pt)) | Ẏ = diag(Ẋ)K(p) |
| 5. | Free components | y₂(t) = Re(V̇ᵀe^(qt)) | V̇ = diag(Ġ)X(q) |
| 6. | Filter reaction | y(t) = Re(Ẏ(t)ᵀe^(pt)) | Ẏ(t) = diag(Ẋ)K(p, t) |

**Table 1.** IIR filters analysis


The real part extraction on the right-hand side of the expressions in items 1, 2, 3 for *X*(*p*), *K*(*p*), *K*(*p*, *t*) is carried out with respect to the complex coefficients Ġ_m and ρ_m and does not involve the complex variable *p*.

The first method is a generalization of the complex amplitude method for the definition of the forced and free components of the filter reaction at semi-infinite or finite input signals [6]. The advantages of this method stem from the simple algebraic operations used to determine the parameters of the components of the reaction of a linear system (filter, linear circuit) to an input action described by a set of semi-infinite or finite damped oscillatory components. To analyze a filter, one needs only simple algebraic operations on a set of complex amplitudes and frequencies of the forced and free components of the filter reaction. In this case, there are simple relations between the complex amplitudes of the output signal forced components and the complex amplitudes of the input signal (item 4, Table 1), and between the complex amplitudes of the output signal free components and the complex amplitudes of the filter impulse function (item 5).

The second analysis method applies the time-and-frequency approach to the filter transfer function, i.e. the time-dependent transfer function of the filter is used [6,8]. In that case, instead of two sets of filter reaction components, only one of them may be used.

The analysis methods given in Table 1 make it possible to reduce computational costs effectively, since simple algebraic operations suffice to determine the forced and free components of the filter reaction to an input action given as a set of damped oscillatory components. Therefore, the considered analysis methods for linear systems (filters) can be effectively applied to the performance analysis of signal processing by frequency filters.
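As a concrete illustration, the two analysis methods of Table 1 can be sketched in a few lines of code. The sketch below is not the author's Mathcad program: it is a minimal Python rendering by the editor, assuming the half-sum convention for the Re(·) operator applied to the complex coefficients (i.e. Re(Ġ/(p − ρ)) is evaluated as (Ġ/(p − ρ) + Ġ*/(p − ρ*))/2), and it cross-checks the two methods on a first-order low-pass filter driven by a cosine.

```python
import numpy as np

def spectrum(amps, poles, p):
    # Spectrum of a set of damped oscillatory components (a_n, q_n) at complex
    # frequency p, with Re(.) taken over the complex coefficients only:
    # X(p) = (1/2) * sum_n [ a_n/(p - q_n) + conj(a_n)/(p - conj(q_n)) ]
    amps, poles = np.asarray(amps), np.asarray(poles)
    return 0.5 * np.sum(amps / (p - poles) + np.conj(amps) / (p - np.conj(poles)))

def spectrum_t(amps, poles, p, t):
    # Time-dependent transfer function K(p, t) = int_0^t g(tau) e^(-p tau) dtau
    amps, poles = np.asarray(amps), np.asarray(poles)
    term = lambda a, q: a * (1.0 - np.exp(-(p - q) * t)) / (p - q)
    return 0.5 * np.sum(term(amps, poles) + term(np.conj(amps), np.conj(poles)))

def reaction_method1(X, p, G, rho, t):
    # Method 1: forced amplitudes Y_n = X_n K(p_n) (item 4, Table 1),
    #           free amplitudes   V_m = G_m X(rho_m) (item 5, Table 1)
    forced = sum(Xn * spectrum(G, rho, pn) * np.exp(pn * t) for Xn, pn in zip(X, p))
    free = sum(Gm * spectrum(X, p, rm) * np.exp(rm * t) for Gm, rm in zip(G, rho))
    return np.real(forced) + np.real(free)

def reaction_method2(X, p, G, rho, t):
    # Method 2: a single set of components via K(p, t) (item 6, Table 1)
    return np.real(sum(Xn * spectrum_t(G, rho, pn, t) * np.exp(pn * t)
                       for Xn, pn in zip(X, p)))
```

For the low-pass filter g(t) = αe^(−αt) (Ġ = α, ρ = −α) and the input x(t) = cos(ωt) (Ẋ = 1, p = jω), the two methods produce identical reactions, and the reaction starts from zero, as it must for a filter without direct feedthrough.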

Let us consider a simple example of performance analysis of signal processing by a second-order high-pass filter, relating to a signal processing task in power system protection and automation devices [9,10]. The filter is used to extract a sinusoidal component of commercial frequency and to eliminate the disturbance represented by the free component of the transient process in the controlled object. In this case, a change of the useful signal initial phase is acceptable.

All the initial data and dependencies necessary for the analysis are represented in Table 2. The IIR filter parameters are specified there, together with the mathematical description of the input signal and the specified ranges of variation of the useful signal and disturbance parameters affecting their spectra.

The impulse function of the second-order high-pass filter contains a Dirac delta function, which is taken into account when determining the complex amplitudes of the forced components by defining *K*(*p*) from the impulse function (item 3, Table 2), but which cannot be used when determining the complex amplitudes of the free components of the filter reaction (item 5). To simplify the analysis, the delta function can be represented as the limiting case of the exponential component αe^(−αt) as α → ∞ [6].
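The limiting behavior is easy to check numerically. The snippet below (an illustration added by the editor, not part of the original example) evaluates the spectrum α/(p + α) of the surrogate component αe^(−αt) and shows that it approaches 1, the flat spectrum of δ(t), as α grows:

```python
import numpy as np

# The spectrum of the surrogate alpha * exp(-alpha t) is alpha/(p + alpha);
# for any fixed complex frequency p it tends to 1, the spectrum of delta(t).
p = 1j * 2 * np.pi * 50  # a sample point on the imaginary axis
deviation = {a: abs(a / (p + a) - 1.0) for a in (1e3, 1e5, 1e7)}
```

The deviation from 1 shrinks roughly as |p|/α, so for α several orders above the frequencies of interest the surrogate is indistinguishable from a true delta.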

The analysis results should confirm the following performance criteria of signal processing by a filter:

**1.** the filter settling time should be less than 30 ms at a 5% acceptable total error of signal processing, for any value of the disturbance parameters within the specified range,

**2.** the error at a deviation of the useful signal frequency from the nominal value of 50 Hz within the range ±5 Hz should not exceed 5%,

**3.** the overshoot should not exceed 10%.

As follows from Table 1, simple algebraic operations are applied to determine the complex amplitudes, as well as the forced and free components of the filter output signal.

| № | Name | Expression |
|---|------|------------|
| 1. | Input signal | x(t) = X_m1 cos(ω₁t − φ₁) − X_m2 e^(−β₂t); Ẋ = [X_m1 e^(−jφ₁), X_m2 e^(−jπ)]ᵀ, p = [jω₁, −β₂]ᵀ; X(p) = X_m1 (p cos φ₁ + ω₁ sin φ₁)/(p² + ω₁²) − X_m2/(p + β₂); μ₂ = X_m2/X_m1 = 0÷1, ω₁ = 2π(45÷55), φ₁ = 0÷2π, β₂ = 2÷200 |
| 2. | High-pass filter | K(p) = k₀p²/(p² + 2α₁p + w₁²), k₀ = 1.206; g(t) = k₀δ(t) + Re(Ġ₁e^(ρ₁t)), Ġ = k₁e^(−jϕ₁) = −424.5e^(j0.342), ρ₁ = −α₁ + jw₁ = −165.9 + j117.1 |
| 3. | Forced components of complex amplitudes | Ẏ = diag(Ẋ)K(p) = [Ẋ₁K(jω₁), Ẋ₂K(−β₂)]ᵀ |
| 4. | Forced components | y₁(t) = Re(Ẏᵀe^(pt)) = y₁₁(t) + y₁₂(t), y₁₁(t) = Re(Ẏ₁e^(jω₁t)), y₁₂(t) = Re(Ẏ₂e^(−β₂t)) |
| 5. | Complex amplitudes of free components | V̇ = V̇₁ = X(ρ₁)Ġ₁ |
| 6. | Free components | y₂(t) = Re(V̇₁e^(ρ₁t)) |
| 7. | Error | ε(t) = y₂(t) + y₁₂(t) |

**Table 2.** IIR filter analysis

When using Mathematica, Maple, Matlab, Mathcad or other state-of-the-art mathematical software to determine the forced and free components of the output signal, it is necessary to specify only the complex amplitude vectors of the input signal and the filter impulse function, as well as their complex frequency vectors. All the calculations related to the filter analysis are then carried out automatically. If the complex amplitudes of the filter impulse function need to be determined from a specified transfer function, ready-made formulas may be used [6], which are easily applied in the mathematical software mentioned above.

All the examples in the present chapter are given using the mathematical software Mathcad. Mathcad was chosen for pragmatic reasons, to make the filter analysis examples as clear as possible: in Mathcad, mathematical expressions are written in a form closest to universally accepted mathematical notation [11,12].

An example of the filter computation in Mathcad, for the specified filter parameters and the input signal parameters *X*_m2 = *X*_m1 = 1, ω₁ = 2π·50 rad/s, φ₁ = 0, β₂ = 60 s⁻¹, is given in Figure 1.
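The Mathcad listing of Figure 1 does not survive reproduction here, but the computation it performs can be sketched as follows. This Python version is the editor's approximation: the coefficient values are those quoted in Table 2 (k₀ = 1.206, Ġ₁ = −424.5e^(j0.342), ρ₁ = −165.9 + j117.1), and the half-sum convention is assumed for the Re(·) operator.

```python
import numpy as np

# Second-order high-pass filter, coefficients as quoted in Table 2
k0 = 1.206                        # direct (delta-function) gain
G1 = -424.5 * np.exp(1j * 0.342)  # complex amplitude of the impulse-response component
rho1 = -165.9 + 117.1j            # its complex frequency

# Input x(t) = cos(w1 t) - exp(-b2 t):  X = [1, -1], p = [j w1, -b2]
w1, b2 = 2 * np.pi * 50, 60.0
Xamp = np.array([1.0, -1.0])
pin = np.array([1j * w1, -b2])

def K(p):
    # Filter spectrum: direct gain plus the oscillatory part (half-sum convention)
    return k0 + 0.5 * (G1 / (p - rho1) + np.conj(G1) / (p - np.conj(rho1)))

def X(q):
    # Input-signal spectrum evaluated at the filter pole
    return 0.5 * np.sum(Xamp / (q - pin) + np.conj(Xamp) / (q - np.conj(pin)))

Y = Xamp * K(pin)  # forced complex amplitudes (item 3, Table 2)
V1 = G1 * X(rho1)  # free complex amplitude    (item 5, Table 2)

def y(t):          # total filter reaction
    return np.real(np.sum(Y * np.exp(pin * t))) + np.real(V1 * np.exp(rho1 * t))

def eps(t):        # error (item 7): free + forced disturbance components
    return np.real(V1 * np.exp(rho1 * t)) + np.real(Y[1] * np.exp(pin[1] * t))
```

With these rounded coefficients the reaction starts near zero (the exact value is k₀x(0) = 0) and the error falls below the 5% limit well before 40 ms, in line with the settling-time requirement stated above.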




For determining or estimating performance specifications of signal processing by the investigated IIR filter, one needs either to extend the program (Figure 1) or to reduce the amount of calculation by simplifying the analysis task. Let us consider the second option first.

The easiest operation is to define the signal processing error of the filter at a deviation of the useful sinusoidal signal frequency within ±5 Hz of the nominal 50 Hz. As is known, this error may be determined from the filter amplitude-frequency response. In this case the value of the filter amplitude-frequency response over the frequency range 2π(45÷55) rad/s lies between 0.95 and 1.038. Thus, the filter meets the signal processing performance requirement mentioned above.
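This check is easy to reproduce from the component description of the filter. The snippet below (editor's sketch; coefficients from Table 2, half-sum convention assumed for the Re(·) operator) evaluates |K(jω)| at the band edges:

```python
import numpy as np

k0, G1, rho1 = 1.206, -424.5 * np.exp(1j * 0.342), -165.9 + 117.1j

def K_mag(f_hz):
    # Magnitude of the filter frequency response at f_hz hertz
    p = 1j * 2 * np.pi * f_hz
    return abs(k0 + 0.5 * (G1 / (p - rho1) + np.conj(G1) / (p - np.conj(rho1))))
```

K_mag(45), K_mag(50) and K_mag(55) come out close to 0.95, 1.0 and 1.038 respectively, matching the figures quoted above.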

The filter settling time can be defined by the total damping of the free component (τ₁) and of the forced component of the exponential disturbance (τ₂), if the latter is not eliminated by the filter to the necessary level. The times τ₁ and τ₂ can be defined according to Table 1 (items 6 and 4).

The damping times of the free component τ₁ and of the disturbance forced component τ₂ to the required 5% level may be determined on the basis of the expressions given in Table 2.


The overshoot level in the filter can be determined by a further extension of the program in the Mathcad example shown in Figure 1.

The performance analysis results for signal processing by the investigated IIR filter are represented in Table 3. Performance specifications determined by a traditional method, from the step response of a second-order low-pass filter, are given in Table 3 as well. In this case the description of the low-pass filter was obtained from the investigated second-order high-pass filter by applying a well-known frequency transformation [14].


**Figure 1.** IIR filters analysis using Mathcad software

$$\tau_1 = -\frac{1}{\alpha_1} \ln \left( \frac{0.05}{k_1 \left| X(\rho_1) \right|} \right), \qquad \tau_2 = -\frac{1}{\beta_2} \ln \left( \frac{0.05}{\mu_2 \left| K(-\beta_2) \right|} \right). \tag{1}$$

The variables τ₁ and τ₂ depend not only on the filter parameters, which are constant, but also on the signal parameters. Taking into account the particularities of the controlled object [9,10], let us assume the worst case μ₂ = 1, φ₁ = 0, ω₁ = 2π·50 rad/s, and consider the dependence of τ₁ and τ₂ on β₂.
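Under these worst-case assumptions, the dependence of τ₁ and τ₂ on β₂ per expression (1) can be sketched as follows (editor's rendering; filter coefficients from Table 2, with τ₂ clamped at zero when the disturbance forced component never exceeds the error level):

```python
import numpy as np

k0, G1, rho1 = 1.206, -424.5 * np.exp(1j * 0.342), -165.9 + 117.1j
k1, alpha1 = abs(G1), -rho1.real              # k1 = 424.5, alpha1 = 165.9
w1, eps_lim, mu2 = 2 * np.pi * 50, 0.05, 1.0  # worst case: mu2 = 1, phi1 = 0

def K(p):
    # Filter spectrum (half-sum convention for the Re(.) operator)
    return k0 + 0.5 * (G1 / (p - rho1) + np.conj(G1) / (p - np.conj(rho1)))

def settling_times(b2):
    # X(rho1) for the input x(t) = cos(w1 t) - exp(-b2 t) with phi1 = 0
    X_r1 = rho1 / (rho1**2 + w1**2) - 1.0 / (rho1 + b2)
    tau1 = -np.log(eps_lim / (k1 * abs(X_r1))) / alpha1           # eq. (1), free component
    tau2 = max(0.0, -np.log(eps_lim / (mu2 * abs(K(-b2)))) / b2)  # eq. (1), forced component
    return tau1, tau2
```

For β₂ = 60 s⁻¹ both times come out near 20–24 ms, consistent with the 30 ms settling-time requirement; for small β₂ (e.g. 10 s⁻¹) the disturbance forced component stays below the 5% level and τ₂ = 0.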

A precise settling time τ can be determined through the total signal processing error ε(t): for t ≥ τ the condition |ε(t)| ≤ ε_lim must hold, where ε_lim = 0.05.

Let us consider an estimation of the total damping time τ from the specified 5% values of τ₁ and τ₂. In this case τ < τ₁ + τ₂. If the filter is designed correctly, then |y₁₂(t)| ≤ ε_lim at t ≥ τ₁, that is, the disturbance is eliminated to the specified level by the end of the transient process in the filter, and τ ≈ τ₁. A somewhat conservative estimate of the filter settling time can be obtained from the sum of the moduli of the envelopes of the filter transient free component and the disturbance forced component [13]. The dependence of τ on β₂ can be determined quite easily by a minor extension of the program in the Mathcad example represented in Figure 1.

The dependencies of τ, τ₁ and τ₂ on the exponential disturbance damping coefficient β₂ are shown in Figure 2.

**Figure 2.** The dependence of the filter settling time on the damping coefficient β₂

The corresponding envelope estimates at the 0,05 level have the form

*τ*1 = (1/*α*1)·ln(*k*1*X*˙ /0,05),  *τ*2 = (1/*β*2)·ln(*K*˙ *μ*2/0,05). (1)


**Figure 1.** IIR filters analysis using Mathcad software

For *β*2 = 2÷35 s-1 the initial level of the forced disturbance component is below the acceptable error, |*Y*˙2| ≤ *ε*lim, so *τ*2 = 0. The filter settling time is then defined mostly by the damping of the filter's own transient process, that is, by *τ*1. Within the range *β*2 = 74÷125 s-1 the forced disturbance component damps more slowly than the filter's own transient process, so *τ*2 > *τ*1. At *β*2 ≤ 107 s-1 the filter settling time *τ* is less than both *τ*1 and *τ*2, while at *β*2 > 107 s-1 it is longer than either of them. This is due to the signs of the filter reaction components that define the signal processing error, as well as to the values of the components' complex frequencies.
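The interplay of *τ*1 and *τ*2 can be illustrated with the envelope estimates: *τ*1 from the sum of the free-component envelope moduli, *τ*2 from the forced disturbance envelope *μ*2·|*K*(−*β*2)|·*e*−*β*2*t*. A sketch with the same illustrative filter data as before (so the exact numeric ranges quoted above are not reproduced):

```python
import cmath
import math

# Illustrative filter data (assumptions, not the chapter's exact design).
RHO = [complex(-165.9, 117.1), complex(-165.9, -117.1)]
G = [424.4 * cmath.exp(-0.342j), 424.4 * cmath.exp(0.342j)]
P1 = complex(0.0, 2.0 * cmath.pi * 50.0)   # useful 50 Hz component
EPS_LIM = 0.05

def K(p):
    return sum(g / (p - r) for g, r in zip(G, RHO))

def tau1(beta2, mu2=1.0):
    """Free components: sum of envelope moduli at t = 0, damped at alpha = |Re rho|."""
    amp = sum(abs(g * x / (p - r))
              for x, p in ((1.0, P1), (mu2, complex(-beta2, 0.0)))
              for g, r in zip(G, RHO))
    alpha = -RHO[0].real
    return 0.0 if amp <= EPS_LIM else math.log(amp / EPS_LIM) / alpha

def tau2(beta2, mu2=1.0):
    """Forced disturbance component: mu2 * |K(-beta2)| * exp(-beta2 * t)."""
    amp = abs(mu2 * K(complex(-beta2, 0.0)))
    return 0.0 if amp <= EPS_LIM else math.log(amp / EPS_LIM) / beta2

for b2 in (10.0, 60.0, 120.0):
    print(b2, round(tau1(b2), 4), round(tau2(b2), 4))
```

With this data *τ*2 shrinks as *β*2 grows, while *τ*1 is dominated by the fixed filter damping, which is the qualitative behaviour described above.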

The overshoot level in a filter can be determined by a similar modification of the Mathcad program shown in Figure 1.
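An overshoot estimate by direct sampling of the reaction to the useful component alone can be sketched as follows (same illustrative filter data; the peak is searched on a fine time grid rather than via the envelope):

```python
import cmath

# Illustrative filter data (assumptions, not the chapter's exact design).
RHO = [complex(-165.9, 117.1), complex(-165.9, -117.1)]
G = [424.4 * cmath.exp(-0.342j), 424.4 * cmath.exp(0.342j)]
P1 = complex(0.0, 2.0 * cmath.pi * 50.0)

def K(p):
    return sum(g / (p - r) for g, r in zip(G, RHO))

def y_useful(t):
    """Reaction to the unit useful component: forced part plus free part."""
    forced = (K(P1) * cmath.exp(P1 * t)).real
    free = sum((-g / (P1 - r) * cmath.exp(r * t)).real for g, r in zip(G, RHO))
    return forced + free

steady = abs(K(P1))                                   # steady-state amplitude
peak = max(abs(y_useful(i * 1e-5)) for i in range(8000))   # first 0.08 s
overshoot = 100.0 * (peak - steady) / steady
print("overshoot, % =", round(overshoot, 2))
```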

The performance analysis results for signal processing by the investigated IIR filter are presented in Table 3. For comparison, Table 3 also gives the performance specifications determined by a traditional method, namely from the step response of a second-order low-pass filter. In this case the description of the low-pass filter was obtained from the investigated second-order high-pass filter by applying a well-known frequency transformation [14].



Direct Methods for Frequency Filter Performance Analysis
http://dx.doi.org/10.5772/52192

| № | Name | Expressions | Remark |
|---|------|-------------|--------|
| 1. | Input signal | *x*(*k*) = Re(*X*˙ T *Z*(*p*, *k*)); Χ(*z*) = Re(*X*˙ T ‖*z*/(*z* − *zn*)‖N) | *X*˙ = ‖*Xmn e*−*j*φ*n*‖N, *p* = ‖−*βn* + *jωn*‖N, *zn* = *e*pnT |
| 2. | Filter | *g*(*k*) = Re(*G*˙ T *Z*(*q*, *k*)); *K*(*z*) = Re(*G*˙ T ‖*z*/(*z* − ζ*m*)‖M) | *G*˙ = ‖*km e*−*j*ϕ*m*‖M, *q* = ‖ρ*m*‖M = ‖−*αm* + *jωm*‖M, ζ*m* = *e*ρmT |
| 3. | Time dependent transfer function | *K*(*z*, *k*) = Re(*G*˙ T ‖*z*(1 − *e*ρmkT *z*−*k*)/(*z* − ζ*m*)‖M) | *z* = *e*pT, *Z*(*p*, *k*) = *e*pkT, *T* – discrete sampling step |
| 4. | Forced components | *y*1(*k*) = Re(*Y*˙ T *Z*(*p*, *k*)) | *Y*˙ = diag(*X*˙ )*K*(*z*) |
| 5. | Free components | *y*2(*k*) = Re(*V*˙ T *Z*(*q*, *k*)) | *V*˙ = diag(*G*˙ )Χ(*z*) |
| 6. | Filter reaction | *y*(*k*) = Re(*Y*˙ (*k*)T *Z*(*p*, *k*)) | *Y*˙ (*k*) = diag(*X*˙ )*K*(*z*, *k*) |

**Table 4.** IIR digital filter analysis

**Table 3.** Signal processing performance for a second-order high-pass filter

As follows from Table 3, in the considered example there are substantial differences between the performance specifications obtained by the traditional method and those obtained by direct determination.

#### **2.2. IIR filter analysis by extending the time-and-frequency approach to the filter transfer function**

As follows from Table 1 (items 3 and 6), the performance of a filter can be estimated by extending the time-and-frequency approach to the filter transfer function, in other words, by using the time dependent transfer function of the filter [5,6].

In that case, in contrast to the first method, where two groups of filter reaction components are used (forced and free components), only the first group is used. The information about the transient process is contained in the time dependent complex amplitudes of the filter reaction *Y*˙ (*t*). As *t* → ∞ this set of complex amplitudes tends to the complex amplitudes of the forced components, *Y*˙ (*t*) → *Y*˙ .
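A minimal sketch of this construction: *K*(*p*, *t*) starts from zero, tends to the ordinary *K*(*p*), and the single time dependent complex amplitude *X*·*K*(*p*, *t*) reproduces the whole reaction (forced plus free) to the component *X*·*e*pt. Filter data are the same illustrative assumptions as before.

```python
import cmath

# Illustrative filter data (assumptions).
RHO = [complex(-165.9, 117.1), complex(-165.9, -117.1)]
G = [424.4 * cmath.exp(-0.342j), 424.4 * cmath.exp(0.342j)]

def K(p):
    return sum(g / (p - r) for g, r in zip(G, RHO))

def K_t(p, t):
    """Time dependent transfer function:
    K(p,t) = sum_m G_m * (1 - exp(-(p - rho_m) * t)) / (p - rho_m)."""
    return sum(g * (1.0 - cmath.exp(-(p - r) * t)) / (p - r) for g, r in zip(G, RHO))

p1 = complex(0.0, 2.0 * cmath.pi * 50.0)
X = 1.0 + 0j
t = 0.004
# Reaction via the time dependent complex amplitude Y(t) = X * K(p, t):
y_via_Kt = (X * K_t(p1, t) * cmath.exp(p1 * t)).real
# The same reaction as forced + free components (first method):
y_direct = (X * K(p1) * cmath.exp(p1 * t)).real + sum(
    (-g * X / (p1 - r) * cmath.exp(r * t)).real for g, r in zip(G, RHO))
print(abs(y_via_Kt - y_direct))   # the two formulations agree
```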

The dependences needed to determine the components of a filter output signal are presented in Table 1; an example of high-pass filter analysis for the specified parameters of a useful signal is given in the bottom part of fig. 1.

The advantage of this method is that it yields the envelope of every component of a filter output signal, from which the total envelope of the filter output signal and the variation law of its initial phase can be determined. This information can be used effectively for performance analysis of signal processing by frequency filters, for instance when determining the overshoot level (oscillativity) of a transient process in a filter.

### **2.3. IIR filter express-analysis method**

It follows from Table 1 that the quality indexes of filter operation can be estimated from the interim calculation results *K*(*p*) and Χ(*q*), that is, from the spectral representations of the signals and of the filter impulse function in complex frequency coordinates. Another, no less effective, approach relies on the interim result of the second analysis method, the time dependent transfer function *K*(*p*, *t*). Thus, the use of filter frequency characteristics and a signal spectrum in complex frequency coordinates significantly increases the effectiveness of the frequency methods of performance analysis for frequency filter operation [7].
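The sections named below are simply evaluations of *K*(*p*) on and off the imaginary axis: *p* = *jω* gives the ordinary frequency response, while *p* = −*γ* gives the gain seen by purely exponential (damped) components, which is exactly what a damped disturbance excites. A sketch with the same illustrative data:

```python
import cmath

# Illustrative filter data (assumptions).
RHO = [complex(-165.9, 117.1), complex(-165.9, -117.1)]
G = [424.4 * cmath.exp(-0.342j), 424.4 * cmath.exp(0.342j)]

def K(p):
    return sum(g / (p - r) for g, r in zip(G, RHO))

for f in (10.0, 50.0, 250.0):        # section p = j*omega
    print("f =", f, "|K| =", round(abs(K(complex(0.0, 2.0 * cmath.pi * f))), 3))
for gamma in (10.0, 60.0, 120.0):    # section p = -gamma
    print("gamma =", gamma, "|K| =", round(abs(K(complex(-gamma, 0.0))), 3))
```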

The filter express-analysis methods, including performance analysis of signal processing, were developed from the investigation of 3D and 4D frequency responses [7]. To estimate the settling time and the accuracy of signal processing for the example given in fig. 1, it is enough to consider the sections *p* = *jω* and *p* = −*γ* of the 3D frequency response *K*(*p*), as well as the section *p* = −*α*1 + *jω*1 of the input signal spectrum according to the Laplace transform.

The express-analysis methods mentioned above can be effectively applied to FIR filters as well; a detailed explanation is given later in this chapter.

| № | Name | Step response | Direct estimation |
|---|------|---------------|-------------------|
| 1. | Settling time, s | 0,0172 | 0,0275 |
| 2. | Maximum error in the steady-state mode, % | 0 | 5 |
| 3. | Maximum overshoot level, % | 1,17 | 11,81 |
| 4. | Additional error at a frequency variation of a useful signal, % | – | 5 |

**Table 3.** Signal processing performance for a second-order high-pass filter

### **2.4. Digital IIR filters analysis**


By digital filters this chapter means discrete filters. In many cases this is justified, for instance when using microcontrollers or digital signal processors with high word length, and especially microprocessors with support for floating-point operations [15,16].

The analysis of discrete filters has much in common with the analysis of their analog filter prototypes; a small difference arises only in the transition from images to originals. The main expressions for determining the components of a digital filter output signal, when the filter input signal is a set of discrete semi-infinite damped oscillatory components, are given in Table 4.

An example of digital filter analysis, continuing the example of the analog filter prototype analysis (fig. 1), is presented in fig. 3. The mathematical description of the digital filter was obtained by the method of invariant impulse responses at the discrete sampling step *T* = 0,0005 s.
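The method of invariant impulse responses can be sketched directly in the chapter's component terms: the analog complex frequencies ρ*m* map to z-plane poles ζ*m* = *e*ρmT, and the discrete impulse function is exactly the sampled analog one. Filter data are illustrative assumptions; *T* = 0,0005 s is the chapter's sampling step.

```python
import cmath

# Illustrative analog prototype (assumptions).
RHO = [complex(-165.9, 117.1), complex(-165.9, -117.1)]
G = [424.4 * cmath.exp(-0.342j), 424.4 * cmath.exp(0.342j)]
T = 0.0005                                # discrete sampling step, s

ZETA = [cmath.exp(r * T) for r in RHO]    # z-plane images of the analog poles

def g_analog(t):
    return sum((g * cmath.exp(r * t)).real for g, r in zip(G, RHO))

def g_digital(k):
    """Impulse-invariant discrete impulse function: g_d[k] = g(kT)."""
    return sum((g * z ** k).real for g, z in zip(G, ZETA))

def K_z(z):
    """Discrete transfer function K(z) = sum_m G_m * z / (z - zeta_m)."""
    return sum(g * z / (z - zt) for g, zt in zip(G, ZETA))

# Invariance check: the discrete impulse function samples the analog one.
err = max(abs(g_digital(k) - g_analog(k * T)) for k in range(200))
print("max sample mismatch:", err)
```

Since Re ρ*m* < 0, the mapped poles satisfy |ζ*m*| < 1, so the stability of the prototype carries over to the digital filter.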





**Figure 3.** IIR digital filter analysis using Mathcad software


### **2.5. IIR filters analysis at finite signals**

| № | Name | Expressions | Remark |
|---|------|-------------|--------|
| 1. | Input signal | *x*(*t*) = Re(*X*˙ T *e*P(Ct−t) − *X*˙ ′T *e*P(Ct−t′)); Χ(*p*) = Re(*X*˙ T ‖*e*−ptn/(*p* − *pn*)‖N − *X*˙ ′T ‖*e*−pt′n/(*p* − *pn*)‖N) | *X*˙ = ‖*Xmn e*−*j*φ*n*‖N, *p* = ‖−*βn* + *jωn*‖N, *P* = diag(*p*), *C* = ‖1‖N, *X*˙ ′ = diag(*X*˙ )*e*P(t−t′), *t* = ‖*tn*‖N, *t*′ = ‖*t*′*n*‖N |
| 2. | Filter | *g*(*t*) = Re(*G*˙ T *e*qt) | *G*˙ = ‖*km e*−*j*ϕ*m*‖M, *q* = ‖ρ*m*‖M = ‖−*αm* + *jωm*‖M |
| 3. | Time dependent transfer function | *K*(*p*, *t*) = Re(*G*˙ T ‖(1 − *e*−(p−ρm)t)/(*p* − ρ*m*)‖M) | |
| 4. | Forced components | *y*1(*t*) = Re(*Y*˙ T *e*P(Ct−t) − *Y*˙ ′T *e*P(Ct−t′)) | *Y*˙ = diag(*X*˙ )*K*(*p*), *Y*˙ ′ = diag(*X*˙ ′)*K*(*p*) |
| 5. | Free components | *y*2(*t*) = Re(∑*n* *V*˙ *n*T *e*q(t−tn) − ∑*n* *V*˙ ′*n*T *e*q(t−t′n)) | *V*˙ = ‖*G*˙ *m*Χ*n*(ρ*m*)‖M,N, *V*˙ ′ = ‖*G*˙ *m*Χ′*n*(ρ*m*)‖M,N |
| 6. | Filter reaction | *y*(*t*) = Re(*Y*˙ (*t*)T *e*P(Ct−t) − *Y*˙ ′(*t*)T *e*P(Ct−t′)) | *Y*˙ (*t*) = diag(*X*˙ )*K*(*p*, *Ct* − *t*), *Y*˙ ′(*t*) = diag(*X*˙ ′)*K*(*p*, *Ct* − *t*′) |

**Table 5.** IIR filter analysis at compound finite signals

Performance analysis of the processing of finite signals by IIR filters, represented as a set of damped oscillatory components of finite duration, can be performed on the basis of the dependences for IIR filters at semi-infinite signals [5,6].

All the needed expressions were obtained from the expressions of Table 1 by using a time shift and the principle of additivity.

Let us consider the IIR filter analysis at compound input signals, given as a set of sequentially adjacent finite signals [17]. The calculation of the IIR filter reaction for this case is presented in Table 5.

In this case every component of the input signal generally has a different shift and a different duration; *V*˙ *n* denotes the *n*-th column of the matrix *V*˙ .

The expressions presented in Table 5 can be significantly simplified for the IIR filter analysis at a finite signal, for instance when a finite signal composed of *N* components with equal duration and the same time shift is injected at its input [6,17].

An example of filter calculation, analogous to the example of fig. 1 but using an input signal of finite duration, is given in fig. 4.

The dependences given in this section can be effectively used not only for the analysis of the passage of a particular finite signal through an IIR filter, but also for performance analysis of signal processing by filters.

**Figure 4.** IIR filter analysis at a finite signal
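The time-shift-and-additivity construction of a finite component can be sketched as the difference of two shifted semi-infinite components; the switch-off instant 0,0448 s echoes the chapter's Mathcad listing, the rest is an illustrative assumption.

```python
import cmath

# One damped oscillatory component of finite duration [t1, t2):
# x(t) = Re(X e^{p(t-t1)}) F(t-t1) - Re(X1 e^{p(t-t2)}) F(t-t2),
# where X1 carries the amplitude forward to the switch-off instant t2.
X = 1.0 + 0j
p = complex(-60.0, 2.0 * cmath.pi * 50.0)
t1, t2 = 0.0, 0.0448

def step(t):
    """Heaviside function (denoted F(t) in the Mathcad listings)."""
    return 1.0 if t >= 0.0 else 0.0

def x_finite(t):
    X1 = X * cmath.exp(p * (t2 - t1))                      # amplitude at t2
    on = (X * cmath.exp(p * (t - t1))).real * step(t - t1)
    off = (X1 * cmath.exp(p * (t - t2))).real * step(t - t2)
    return on - off

print(round(x_finite(0.02), 4))   # inside [t1, t2): the damped oscillation
print(x_finite(0.06))             # after t2: cancels to zero by construction
```

Passing each of the two semi-infinite pieces through the filter separately and summing the reactions is exactly the additivity argument behind Table 5.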

For digital IIR filter analysis at finite input signals, analogous mathematical operations are applied. All the necessary dependences may be obtained on the basis of the formulas from Table 4 [13].


| № | Name | Expressions | Remark |
|---|------|-------------|--------|
| 1. | Input signal | *x*(*t*) = Re(*X*˙ T *e*P(Ct−t) − *X*˙ ′T *e*P(Ct−t′)) | *X*˙ = ‖*X*˙ *n*‖N, *p* = ‖*pn*‖N, *P* = diag(*p*), *C* = ‖1‖N, *X*˙ ′ = diag(*X*˙ )*e*P(t−t′), *t* = ‖*tn*‖N, *t*′ = ‖*t*′*n*‖N |
| 2. | FIR filter: impulse function, time dependent transfer function | *g*(*t*) = Re(*G*˙ T *e*qt − *G*˙ ′T *e*q(t−T1)); *K*1(*p*, *t*) = Re(*G*˙ T ‖(1 − *e*−(p−ρm)t)/(*p* − ρ*m*)‖M); *K*2(*p*, *t*) = Re(*G*˙ ′T ‖(1 − *e*−(p−ρm)(t−T1))/(*p* − ρ*m*)‖M); *K*(*p*, *t*) = *K*1(*p*, *t*) − *K*2(*p*, *t*)*e*−pT1 | *G*˙ = ‖*G*˙ *m*‖M, *q* = ‖ρ*m*‖M, *G*˙ ′ = diag(*G*˙ )*e*QT, *Q* = diag(*q*), *T* = ‖*Tm*‖M |
| 3. | Filter reaction | *y*(*t*) = Re(*Y*˙ 1(*t*)T *e*P(Ct−t) − *Y*˙ 2(*t*)T *e*P(Ct−t′) − *Y*˙ 3(*t*)T *e*P(C(t−T1)−t) + *Y*˙ 4(*t*)T *e*P(C(t−T1)−t′)) | *Y*˙ 1(*t*) = diag(*X*˙ )*K*1(*p*, *Ct* − *t*), *Y*˙ 2(*t*) = diag(*X*˙ )*K*1(*p*, *Ct* − *t*′), *Y*˙ 3(*t*) = diag(*X*˙ )*K*2(*p*, *Ct* − *t*), *Y*˙ 4(*t*) = diag(*X*˙ )*K*2(*p*, *Ct* − *t*′) |

**Table 7.** FIR filter analysis at input signal as a set of finite components

## **3. FIR filters performance analysis**

#### **3.1. Particularities of the analysis**

The mathematical description of FIR filters can be obtained on the basis of the IIR filter description (Table 1) by using twice as many filter impulse function components.


**Table 6.** FIR filters analysis

The additional components have the same set of complex frequencies and differ in time shift and in the values of the complex amplitudes, chosen so as to ensure the finiteness of the filter impulse characteristic [5,6]. With this approach to the mathematical description, IIR filters are special cases of FIR filters.

The input-output dependences for FIR filters can be obtained from the analogous dependences for IIR filters by using time shift and the principle of additivity [6].

In contrast to IIR filters, FIR filters have transient processes of their own of finite duration, defined by the filter length. This to a certain extent simplifies the performance analysis of signal processing by this type of filter, especially when using the suggested express-analysis methods for signal processing performance of FIR filters.
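The finite own transient of an FIR filter is easy to exhibit: for a length-*N* filter the output settles exactly *N* − 1 samples after the input does, because from that point on the delay line holds only steady input. A sketch with an illustrative length-20 averaging filter:

```python
# Length-N averaging FIR (illustrative choice of coefficients).
N = 20
h = [1.0 / N] * N

def fir(x):
    """Direct-form convolution of the input with the FIR coefficients."""
    return [sum(h[m] * x[k - m] for m in range(N) if 0 <= k - m < len(x))
            for k in range(len(x))]

x = [1.0] * 100          # unit step input
y = fir(x)
# During the first N-1 samples the output ramps; from sample N-1 on it is steady.
print(y[0], y[N - 2], y[N - 1], y[50])
```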

#### **3.2. Analog FIR filters**


| № | Name | Expressions | Remark |
|---|------|-------------|--------|
| 1. | Input signal | *x*(*t*) = Re(*X*˙ T *e*pt); Χ(*p*) = Re(*X*˙ T ‖1/(*p* − *pn*)‖N) | *X*˙ = ‖*Xmn e*−*j*φ*n*‖N, *p* = ‖−*βn* + *jωn*‖N |
| 2. | FIR filter | *g*(*t*) = Re(*G*˙ T *e*qt − *G*˙ ′T *e*Q(Ct−T)); *K*1(*p*) = Re(*G*˙ T ‖1/(*p* − ρ*m*)‖M); *K*2(*p*) = Re(*G*˙ ′T ‖*e*−pTm/(*p* − ρ*m*)‖M); *K*3(*p*) = Re(*G*˙ ′T ‖1/(*p* − ρ*m*)‖M); *K*(*p*) = *K*1(*p*) − *K*2(*p*) | *G*˙ = ‖*km e*−*j*ϕ*m*‖M, *q* = ‖ρ*m*‖M = ‖−*αm* + *jωm*‖M, *G*˙ ′ = diag(*G*˙ )*e*QT, *Q* = diag(*q*), *T* = ‖*Tm*‖M, *C* = ‖1‖M |
| 3. | Forced components | *y*1(*t*) = Re(*Y*˙ T *e*pt) | *Y*˙ = diag(*X*˙ )*K*(*p*) |
| 4. | Free components | *y*2(*t*) = *y*3(*t*) + *y*4(*t*); *y*3(*t*) = Re(*V*˙ T *e*qt − *V*˙ ′T *e*Q(Ct−T)); *y*4(*t*) = *y*1(*t*) − Re(*U*˙ T *e*pt − ∑*m* *U*˙ ′T *e*p(t−Tm)) | *V*˙ = diag(*G*˙ )Χ(*q*), *V*˙ ′ = diag(*G*˙ ′)Χ(*q*), *U*˙ = diag(*X*˙ )*K*1(*p*), *U*˙ ′*n* = *X*˙ *nK*3(*pn*) |
| 5. | Filter reaction | *y*(*t*) = *y*1(*t*) + *y*2(*t*) | |
| 6. | Time dependent transfer function | *K*1(*p*, *t*) = Re(*G*˙ T ‖(1 − *e*−(p−ρm)t)/(*p* − ρ*m*)‖M); *K*2(*p*, *t*) = Re(*G*˙ ′T ‖(1 − *e*−(p−ρm)(t−Tm))/(*p* − ρ*m*)‖M) | |
| 7. | Filter reaction | *y*(*t*) = Re(*Y*˙ (*t*)T *e*pt − *Y*˙ ′(*t*)T *e*P(Ct−T)) | *Y*˙ (*t*) = diag(*X*˙ )*K*1(*p*, *t*), *Y*˙ ′(*t*) = diag(*X*˙ )*K*2(*p*, *t*) |

**Table 6.** FIR filters analysis

The basic expressions for FIR filter analysis, when a set of semi-infinite damped oscillatory components is injected at the filter input, are given in Table 6.

Due to the characteristics of FIR filters, besides the forced and free components the filter output signal contains a third group of components, *y*4(*t*), which is conventionally assigned to the free components in Table 6 [6].


#### **Table 7.** FIR filter analysis at input signal as a set of finite components

The basic expressions for FIR filter analysis, when the signal injected at the filter input is a set of damped oscillatory components of finite duration, are presented in Table 7. The calculation of the filter reaction is given in Table 7 using only the second analysis method, for the case when the durations of all the components of the filter impulse function are equal to *T*1. The filter analysis based on the first method is presented in detail in the author's papers [6,18].

Let us consider an analysis example of FIR filters used in one of the most promising classes of intelligent electronic devices (IED), Phasor Measurement Units (PMU) [19].
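As a toy stand-in for a PMU averaging filter of the kind discussed below, a one-period mean at the rated frequency passes the constant component with unit gain and nulls integer harmonics of 50 Hz. The sampling rate and window length here are assumptions, not the chapter's design.

```python
import math

F0 = 50.0                 # fundamental frequency, Hz
FS = 4000.0               # assumed sampling rate, Hz
N = int(FS / F0)          # one-cycle averaging window: 80 samples

def one_cycle_average(x):
    """Averaging FIR: mean of the last N samples (one 50 Hz period)."""
    return [sum(x[k - N + 1:k + 1]) / N for k in range(N - 1, len(x))]

t = [k / FS for k in range(400)]
const = [1.0] * len(t)                                       # constant component
h2 = [math.cos(2.0 * math.pi * 2.0 * F0 * tk) for tk in t]   # 2nd harmonic

print(one_cycle_average(const)[-1])     # constant component passes with gain 1
print(abs(one_cycle_average(h2)[-1]))   # integer harmonics of 50 Hz are nulled
```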

An averaging FIR filter *Ka*(*p*) should isolate the constant component (*y*˙(*t*) = *X*˙ 1 at *ω*1 = *ω*0) or the low-frequency component (*y*˙(*t*) = *X*˙ 1(*t*) at *ω*1 ≠ *ω*0). The filters should suppress higher harmonics and a damped oscillatory component with the complex frequency *p* = −*β*0 + *jω*0.

The considered filters should have low sensitivity to changes of the damping coefficient *β*0 in the range 10÷200 s-1 and of the frequency *ω*1 = 2*π*(50 ± 5) rad/s. The acceptable static error of signal processing should not exceed 0,5 %, and the acceptable dynamic error at *t* ≥ *T*1

FIR filter analysis is performed at input signal of a device as a set of semi-infinite or finite damped oscillatory components according to the algebraic expressions from the Table 7 and

An example of FIR filter analysis using Mathcad at compound input signals as a set of

1 e <sup>p</sup> <sup>r</sup> <sup>¾</sup> - ( - )<sup>t</sup> - <sup>é</sup>

û <sup>+</sup>

:= <sup>r</sup> ( ) -22.9881 j 62.3049 ×+ -23.2599 j 186.8944 ×+ <sup>T</sup> := T1 0.051 := TF T1 T1( )T :=

fx Xk K1 p1k t t1k -, ( ) <sup>×</sup> X1k K1 p1k t t2k -, ( ) ×, p1k , t1k , t2k , ,<sup>t</sup> ( ) fx Xk K2 p1k t t1k -, ( ) <sup>×</sup> X1k K2 p1k t t2k -, ( ) ×, p1k , t1k +, T1 t2k +, T1,<sup>t</sup> ( ) - å ( )

<sup>û</sup> fx X¾( )

<sup>û</sup> <sup>é</sup> - <sup>ë</sup> <sup>ù</sup> å <sup>û</sup>

0 0.5 1 1.5 2 2.5 3 3.5

t

W G1m,rm, <sup>p</sup> t T1 -, å ( )


¾ ( ) <sup>k</sup> , t1k , t2k , ,<sup>t</sup>

<sup>T</sup> :=

ù

<sup>k</sup> K2 p2k t t1k -, ( ) <sup>×</sup> X2k K2 p2k t t2k -, ( ) ×, p2k , t1k +, T1 t2k <sup>é</sup> +, T1,<sup>t</sup> <sup>ë</sup> <sup>ù</sup>

ù ú

ù

T

Direct Methods for Frequency Filter Performance Analysis

http://dx.doi.org/10.5772/52192

95

<sup>ë</sup> <sup>ù</sup> := <sup>û</sup>

ORIGIN 1 := j 1 := w - 1 2p×:= 50 w0 2p×:= 50 b0 20 := r1 j×:= w1 r2 -:= b0 p0 j×:= w0 b1 5 := dw:= j 2× p2

¾ p r ¾ -

<sup>T</sup> := t1 ( <sup>t</sup><sup>1</sup> <sup>t</sup><sup>1</sup> <sup>t</sup><sup>1</sup> <sup>t</sup><sup>2</sup> <sup>t</sup><sup>2</sup> <sup>t</sup><sup>3</sup> <sup>t</sup><sup>4</sup> <sup>t</sup><sup>4</sup> <sup>t</sup><sup>5</sup> <sup>t</sup><sup>5</sup> <sup>t</sup>5)

¾ ( )

<sup>ë</sup> <sup>ù</sup> <sup>=</sup> +: <sup>×</sup> <sup>û</sup>

oscillatory component with the complex frequency *p* = −*β*<sup>0</sup> + *jω*0.

sequentially adjacent finite signals is given on the fig.6.

W G( r, , <sup>p</sup>,t) <sup>G</sup>

:= r r1 3 r1 ( × 5 r1 × r1 r2 0 r1 -b1 j×+ w1 r1 r1 dw+ r1 dw- )

t2 ( t2 t2 t2 t3 t3 t4 t5 t5 t6 t6 t6)

fx Zk Zk <sup>e</sup> rk t2 <sup>k</sup> t1 ( - <sup>k</sup>) ×, <sup>r</sup> <sup>k</sup> , t1k , t2k , ,<sup>t</sup> <sup>é</sup>

fx Xk Xk <sup>e</sup>

**FIR filter** G 80.4832ej 4.2732 <sup>×</sup> <sup>×</sup> 37.932ej 0.5887 <sup>×</sup> ( <sup>×</sup> )

W Gm,<sup>r</sup> <sup>m</sup>, <sup>p</sup>,<sup>t</sup> å ( ) =

p1 <sup>k</sup> t2 <sup>k</sup> t1 ( - <sup>k</sup>) ×:= X2k <sup>X</sup>

fx X¾( )

:= K2 p t ( ) , <sup>1</sup>

ë

ë

ë

é ê ë

= ×:=

= ×:=

z t( ) <sup>1</sup> 2 1 K k

x t( ) <sup>1</sup> 2 1 K k

K1 p t ( ) , <sup>1</sup> 2 1 M m

X1k Xk <sup>e</sup>

1 K k

1 K k

= :=

2

**Figure 5.** FIR filter analysis using Mathcad software

0

2

4

= :=

y1 t( )

y2 t( )

zz t( ) y t( ) - y t( )

t1 0 := t2 0.2 := t3 0.6 := t4 0.7 := t5 1.8 := t6 3.6 :=

Xk 2 Zk ×:= p1k <sup>r</sup>

<sup>k</sup> -:= p0 **Input signal filter**

p1 <sup>k</sup> t2 <sup>k</sup> t1 ( - <sup>k</sup>) ×, p1k , t1k , t2k , ,<sup>t</sup> <sup>é</sup>

G1 diag G( ) er×T1 ×:= M length G( ) := m 1M..:=

¾( ) k e p2 <sup>k</sup> t2 <sup>k</sup> t1 ( - <sup>k</sup>) ×:=

<sup>k</sup> K1 p2k t t1k -, ( ) <sup>×</sup> X2k K1 p2k t t2k -, ( ) ×, p2k , t1k , t2k <sup>é</sup> , ,<sup>t</sup> <sup>ë</sup> <sup>ù</sup>

**Function** fx X1 X2 ( , , <sup>p</sup>,t1,t2,t) X1 ep t t1 <sup>×</sup>( ) - × F× ( ) t t1 - X2 ep t t2 <sup>×</sup>( ) - × F×- ( ) t t2 - é

<sup>p</sup> r- 1 e- ( )t <sup>p</sup> r- - <sup>é</sup> <sup>ë</sup> <sup>ù</sup> <sup>×</sup> <sup>û</sup> <sup>G</sup>

**Input signal device** Z 0.3 e- <sup>j</sup>× p 0.5 <sup>×</sup> -0.1 0.05 e- <sup>j</sup>× p 0.5 <sup>×</sup> -2.5 2.5 0 2 e- <sup>j</sup>× p 0.5 <sup>×</sup> -<sup>2</sup> <sup>e</sup>

ù <sup>û</sup> fx Z¾( ) <sup>k</sup> <sup>Z</sup> ¾( ) k e r <sup>¾</sup> ( ) <sup>k</sup> t2 <sup>k</sup> t1 ( - <sup>k</sup>) ×, <sup>r</sup>

é ê ë

<sup>ú</sup> å <sup>û</sup>

<sup>k</sup> -:= p0 p2k <sup>r</sup>

ù <sup>û</sup> fx X¾( ) <sup>k</sup> <sup>X</sup> ¾( ) k e p2 <sup>k</sup> t2 <sup>k</sup> t1 ( - <sup>k</sup>) ×, p2k , t1k , t2k , ,<sup>t</sup> <sup>é</sup>

T

2 1 M m

= :=

y t( ) 0.5 y1 t( ) y2 t( ) ×:= ( ) + Tz 0.03 := zz t( ) if t Tz := ( ) > , z t Tz ( ) - , 0

<sup>ù</sup> å <sup>û</sup>

ë

<sup>û</sup> <sup>+</sup> <sup>é</sup>

<sup>T</sup> := K length Z( ) := k 1K..:=

should not be higher than 3%.

the Table 8.

A brief description of the basic algorithm for PMU signal processing, using an analog system-prototype as an example, is given in item 1 of Table 8 [20].



An input signal of intelligent electronic devices is represented by a set of complex amplitudes *Z*˙ and frequencies *r*, as well as by time parameters when a signal model as a set of finite components is used. Let us restrict the signal model to a single exponential component, a useful sinusoidal component of commercial frequency *ω*1 (nominal value *ω*0 = 2*π*50 rad/sec) and higher harmonics. More complicated signal models are considered in the papers [6,20].

An averaging FIR filter *Ka*(*p*) should isolate the constant component (*y*˙(*t*) = *X*˙ 1 at *ω*1 = *ω*0) or the low-frequency component (*y*˙(*t*) = *X*˙ 1(*t*) at *ω*1 ≠ *ω*0). The filters should suppress higher harmonics and a damped oscillatory component with the complex frequency *p* = −*β*0 + *jω*0.

The considered filters should have low sensitivity to changes of the damping coefficient *β*0 in the range 10–200 sec<sup>-1</sup> and of the frequency *ω*1 = 2*π*(50 ± 5) rad/sec. The acceptable static error of signal processing should not exceed 0.5%, and the acceptable dynamic error at *t* ≥ *T*1 should not exceed 3%.

FIR filter analysis is performed for a device input signal given as a set of semi-infinite or finite damped oscillatory components, according to the algebraic expressions from Table 7 and Table 8.

An example of FIR filter analysis using Mathcad at a compound input signal given as a set of sequentially adjacent finite signals is shown in fig.6.

**Figure 5.** FIR filter analysis using Mathcad software


| № | Name | Diagram/expressions |
|---|------|---------------------|
| 1 | Block scheme of algorithm | block scheme of the analog system-prototype (multiplication by the reference signal *e*<sup>−*jω*0*t*</sup> followed by the averaging FIR filter) |
| 2 | Signal description | *z*(*t*) = Re(*Z*˙<sup>T</sup>*e*<sup>*rt*</sup>) = 0.5(*Z*˙<sup>T</sup>*e*<sup>*rt*</sup> + *Z*¯<sup>T</sup>*e*<sup>*r*¯*t*</sup>), *Z*˙ = (*Z*0 *Z*˙1 *Z*˙2 … *Z*˙*K*−1)<sup>T</sup>, *r* = (−β0 *j*ω1 *j*2ω1 … *j*(*K*−1)ω1)<sup>T</sup>, *R* = diag(*r*), where *Z*¯, *r*¯ are the complex-conjugate vectors; for a set of finite components *z*(*t*) = 0.5(*Z*˙<sup>T</sup>*e*<sup>*R*(*Ct*−*t*)</sup> − *Z*˙′<sup>T</sup>*e*<sup>*R*(*Ct*−*t*′)</sup> + *Z*¯<sup>T</sup>*e*<sup>*R*¯(*Ct*−*t*)</sup> − *Z*¯′<sup>T</sup>*e*<sup>*R*¯(*Ct*−*t*′)</sup>) |
| 3 | Input signal of a filter | *x*˙(*t*) = 2*z*(*t*)*e*<sup>−*jω*0*t*</sup> |
| 4 | Algorithm | *X*˙1(*t*) = ∫<sub>*t*−*T*1</sub><sup>*t*</sup> *x*˙(τ)*g*(*t* − τ)*d*τ = 2∫<sub>*t*−*T*1</sub><sup>*t*</sup> *z*(τ)*e*<sup>−*jω*0τ</sup>*g*(*t* − τ)*d*τ |
| 5 | Average FIR filter | *g*(*t*) = Re(*G*˙<sup>T</sup>*e*<sup>*qt*</sup> − *G*˙′<sup>T</sup>*e*<sup>*q*(*t*−*T*1)</sup>), *G*˙′ = diag(*G*˙)*e*<sup>*qT*1</sup>, *T* = (*T*1 *T*1)<sup>T</sup>; *G*˙ = (80.48*e*<sup>*j*4.273</sup> 37.93*e*<sup>*j*0.5887</sup>)<sup>T</sup>, *q* = (−22.99 + *j*62.30 −23.26 + *j*186.9)<sup>T</sup>, *T*1 = 0.051 sec |

**Table 8.** IED algorithm

Each of the five sets of finite components represented corresponds to a particular power system regime. The first set is related to the normal regime of a power system and is represented by sections of a sinusoidal component of industrial frequency and higher harmonics. The second set of finite signals corresponds to an accidental regime and consists of finite sinusoidal and exponential components; the third set represents a no-current regime; the fourth set is connected to a change of the PMU input signal envelope due to automation operation; the fifth set represents a power swing regime.

A filter processes a complex signal *x*˙(*t*), formed by multiplying the PMU input signal *z*(*t*) by a reference signal *e*<sup>−*jω*0*t*</sup>, which shifts the signal spectrum to the left for the formation of the orthogonal components of the complex amplitude of the sinusoidal signal with industrial frequency *ω*1.
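The demodulation-and-averaging step can be sketched numerically. In the following Python fragment a plain rectangular one-period moving average stands in for the chapter's averaging filter *Ka*(*p*), and the sampling rate is an assumed value; the factor 2 in the demodulated signal restores the amplitude of the complex envelope:

```python
import numpy as np

# PMU algorithm sketch: multiply z(t) by exp(-j*w0*t), then average with a
# FIR filter to extract the complex amplitude of the 50 Hz component.
# The rectangular one-period window is a stand-in for the chapter's filter.
fs = 10000.0                     # sampling rate, Hz (assumed)
w0 = 2*np.pi*50.0                # nominal angular frequency, rad/s
T1 = 1/50.0                      # filter length: one nominal period, s
t = np.arange(0, 0.2, 1/fs)

Xm, phi = 3.0, 0.6               # true amplitude and phase of the useful signal
z = Xm*np.cos(w0*t + phi) + 0.4*np.cos(3*w0*t + 1.1)   # plus a 3rd harmonic

x = 2*z*np.exp(-1j*w0*t)         # spectrum shift; factor 2 restores amplitude
N = int(round(T1*fs))            # samples per window
g = np.ones(N)/N                 # rectangular averaging impulse response
X1 = np.convolve(x, g)[:t.size]  # running estimate of the complex amplitude

est = X1[-1]                     # steady-state estimate
print(abs(est), np.angle(est))   # amplitude ≈ 3.0, phase ≈ 0.6
```

Averaging over exactly one nominal period cancels both the double-frequency image of the useful component and the shifted harmonic terms, so the estimate equals *Xm e*<sup>*jφ*</sup> up to rounding; the chapter's filter additionally shapes the suppression of the damped oscillatory component, which a plain moving average does not.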

A plot of the PMU input signal *z*(*t*) and a plot of the module of the filter output signal | *y*˙(*t*)| , taking into account the filter group delay time, are represented in fig.6. As follows from the figure, the plot | *y*˙(*t*)| is close to the envelope of the sinusoidal component of the PMU input signal.

It appears from the plot that the investigated filter has a distinctive feature: the absence of overshoot (oscillation) of the transient process in its traditional sense, i.e. at a stepwise growth of the signal, and the presence of overshoot at a stepwise reduction of the signal.

In fact, due to the particular characteristics of the impulse function, an overshoot in the traditional sense is absent in the considered FIR filter, since at the end of the transient process the signal behavior in the filter is close to an aperiodic process. Oscillation is instead noticeable at the initial stage of the transient process.

In this example, the analysis of the FIR filter and of the signal processing algorithm as a whole is carried out only at fixed values of the input signal parameters. With some straightforward improvement, analogous to the example in fig.1, it is possible to determine performance specifications for FIR filter signal processing at any variation of the useful signal and disturbance parameters. However, it is much easier to use the specific express-analysis methods.


### **3.3. Express-analysis methods for signal processing performance**

Let us consider an example of express-analysis of the signal processing performance of a FIR filter whose mathematical description is given in Table 7. In addition, let us consider the signal processing performance analysis of a filter with the impulse function *g*2(*t*) = *g*1(*T*1 − *t*) (Fig.7).

To check adherence to the mentioned conditions, it is enough to consider the amplitude-frequency response of the filters in complex frequency coordinates.

The amplitude-frequency response of the first and second filters is given in Fig.8. The multiplier *e*<sup>−*βT*1</sup> at *β* ≠ 0 accounts for the attenuation of forced damped components by the moment the transient process in the filter ends. The speed of the FIR filter is determined by its length *T*1, since noise damped components are suppressed to the level of the acceptable dynamic error during a time equal to the filter length.

**Figure 6.** Impulse characteristics of the filters


**Figure 7.** Amplitude-frequency response of the filter

To check compliance of the filters with the specified quality indexes of signal processing for the specified input signal, it is enough to consider two sections of the 3D amplitude-frequency response: the sections *p* = *j*2*πf* (Fig.6) and *p* = −*β* + *jω*0 (Fig.8). As follows from Fig.6, the examined filters have the same amplitude-frequency response in the section *p* = *j*2*πf* and ensure the required quality of useful signal processing and of higher harmonics suppression. A completely different situation appears in the case of eliminating a damped oscillatory component with frequency *p*0 = −*β*0 + *jω*0.

**Figure 8.** Frequency response of the filters in the section *p* = −β + *j*ω<sup>0</sup>

The filters 1 and 2 have different characteristics in the section *p* = −*β* + *jω*0 of the 3D frequency response (Fig.7), and the second filter does not ensure compliance with the specified requirements for suppressing the component of the input signal with the complex frequency *p*0 = −*β*0 + *jω*0. This results in the ambiguity of using the traditional frequency response of FIR filters with an asymmetric impulse response for the analysis of aperiodic signals.
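The difference between the two sections of the 3D amplitude-frequency response is easy to verify numerically. The sketch below uses an illustrative damped-cosine impulse function *g*1 (not the chapter's filter) and its time reverse *g*2(*t*) = *g*1(*T*1 − *t*): on the imaginary axis *p* = *j*2*πf* the two magnitude responses coincide, while in the section *p* = −*β*0 + *jω*0 they differ strongly:

```python
import numpy as np

# Illustrative damped-cosine impulse function g1 (not the chapter's filter)
# and its time reverse g2(t) = g1(T1 - t).
T1, w0, b0 = 0.05, 2*np.pi*50, 50.0
t = np.linspace(0, T1, 200001)
dt = t[1] - t[0]
g1 = np.exp(-80*t)*np.cos(w0*t)
g2 = g1[::-1]                          # time-reversed impulse response

def K(g, p):
    """Transfer function K(p) = integral over [0, T1] of g(t)*exp(-p*t)."""
    f = g*np.exp(-p*t)
    return ((f[0] + f[-1])/2 + f[1:-1].sum())*dt

# Section p = j*2*pi*f: identical magnitude responses
for f_hz in (10.0, 50.0, 150.0):
    pj = 2j*np.pi*f_hz
    assert abs(abs(K(g1, pj)) - abs(K(g2, pj))) < 1e-9

# Section p = -beta0 + j*w0: strongly different suppression of the
# damped oscillatory component
p0 = -b0 + 1j*w0
r = abs(K(g2, p0))/abs(K(g1, p0))
print(f"|K2(p0)|/|K1(p0)| = {r:.2f}")
assert r > 2.0
```

The identity on the imaginary axis follows from *K*2(*p*) = *e*<sup>−*pT*1</sup>*K*1(−*p*): for a real impulse response this preserves the magnitude only at *p* = *jω*, which is exactly the ambiguity of the traditional frequency response noted above.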

| № | Method | Meaning |
|---|--------|---------|
| 1 | Method of differential equation discretization | *km* = *T*, *a* = 0, z1*m* = 1 / (1 − ρ*mT*), ρ1*m* = ln(z1*m*) / *T* |
| 2 | Method of invariant impulse responses | *km* = *T*, *a* = 0, z2*m* = *e*<sup>ρ*mT*</sup>, ρ2*m* = ρ*m* |
| 3 | Bilinear transformation | *km* = *T* / (2 − ρ*mT*), *a* = 1, z3*m* = (2 + ρ*mT*) / (2 − ρ*mT*), ρ3*m* = ln(z3*m*) / *T* |

**Table 9.** Transition methods

**Figure 9.** Amplitude-frequency response of the filter

If, along with estimating the speed and accuracy of signal processing, the history of the transient processes in a filter must be appraised, for instance, to control the oscillation of the transient process, it is advisable to conduct the analysis using the amplitude-frequency response based on the time-dependent filter transfer function.

For the analysis of suppressing a damped oscillatory component with the complex frequency *p*0 = −*β*0 + *jω*0, it is necessary to fix the imaginary part, since *ω*0 = const. The plot proportional to the product | *K*2(−*β* + *jω*0, *t*)|*e*<sup>−*βt*</sup> is given in Fig.10. In the case of the complex frequency *p* = −*β* + *jω*0 this plot equals the envelope (curves 1 and 2) of the reaction of the second filter (curve 3) to the examined input impact at *β*0 = 50 sec<sup>-1</sup>.

The presented graph illustrates that the second filter does not ensure the suppression of the input signal component with the complex frequency *p*0 = −*β*0 + *jω*0 by the moment the free components in the filter die out, for most of the specified values of *β*0.

#### **3.4. Digital FIR filters**

As for IIR filters (Tables 1 and 4), input-output dependencies for a digital (discrete) FIR filter may be obtained by discretization of the expressions for the analog FIR filter and transition from the Laplace transform to the Z-transform [13].

When mathematical software (Matlab, Mathcad, etc.) is used for digital FIR filter analysis, it is enough to have the data about the complex amplitudes and frequencies of the input signal, *X*˙ (*X*˙ ' for finite signals) and *p* (or *z* = *e*<sup>*pT*</sup>), and of the filter, *G*˙ , *G*˙ ' and *q* (or *z* = *e*<sup>*qT*</sup>), as well as the set of parameters which define the duration of the filter impulse function components and the beginning and duration of the input signal components.

In some cases only the mathematical description of an analog filter-prototype is specified. In the case of FIR filters, different methods of transition from the analog filter-prototype description to the digital filter description are applied, for instance, the method of differential equation discretization, the method of invariant impulse responses, and the bilinear transformation [14]. These methods are extended to the case of FIR filters in the author's paper [13].

The transfer function of an analog filter-prototype with finite impulse response, according to Table 6, can be described in the following way

$$K(p) = \text{Re}\left(\dot{\mathbf{G}}^T \left[\frac{1}{p - \rho\_m}\right]\_M - \dot{\mathbf{G}}^T \left[\frac{1}{p - \rho\_m} e^{-pT\_m}\right]\_M\right) \tag{2}$$

The expression for the digital filter transfer function is



$$K(\mathbf{z}) = \text{Re}\left(\dot{\mathbf{G}}^{\text{T}} \left[\frac{k\_m \left(z + a\right)}{z - \mathbf{z}\_m}\right]\_M - \dot{\mathbf{G}}^{\text{T}} \left[\frac{k\_m \left(z + a\right)}{z - \mathbf{z}\_m} z^{-N\_m}\right]\_M\right) \tag{3}$$

where *km* and *a* are coefficients, z*m* is the *m*-th pole of the system function, *NmT* is the duration of the *m*-th impulse function component, and *T* is the discrete sampling step.

All the mentioned constants, except *Nm*, depend on the transition method applied. The values of these coefficients for the three transition methods mentioned above are given in Table 9.



The additional indexes of the constants in Table 9, corresponding to the order numbers of the transition methods, are given for the constants which do not coincide with the analog filter-prototype parameters.
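The pole mappings of Table 9 can be checked numerically for a single impulse-function component 1/(*p* − ρ*m*); the sampling step and the analog pole below are assumed illustrative values:

```python
import numpy as np

# Assumed illustrative values:
T = 1e-3                        # discrete sampling step, s
rho = -100 + 2j*np.pi*50        # analog pole rho_m of one component

# 1. Method of differential equation discretization (backward difference)
z1 = 1/(1 - rho*T)
# 2. Method of invariant impulse responses
z2 = np.exp(rho*T)
# 3. Bilinear transformation (with k_m = T/(2 - rho*T), a = 1)
z3 = (2 + rho*T)/(2 - rho*T)

# A stable analog pole (Re rho < 0) stays inside the unit circle
for z in (z1, z2, z3):
    assert abs(z) < 1

# For small T all three mappings agree to first order: z ≈ 1 + rho*T
for z in (z1, z2, z3):
    assert abs(z - (1 + rho*T)) < abs(rho*T)**2

print(z1, z2, z3)
```

The three mappings differ only at second order in ρ*mT*, which is why the choice of transition method matters mainly for poles that are fast relative to the sampling step.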

## **4. Performance express-analysis at modulated signal**

#### **4.1. Mathematical description of signals**

The signals as a set of semi-infinite or finite damped oscillatory components were considered above. Compound signals of different forms, including compound periodic and quasi-periodic signals, nonstationary signals and signals with compound envelopes, can be synthesized on the basis of the collection of components mentioned above [5,6]. These models also make it possible to describe the majority of impulse signals widely used in radio engineering (radio pulses and video pulses).

The analysis methods considered above can be applied to signals of that kind.

However, the more general case is obviously more interesting, when signals with compound dependencies of the envelope and of the signal total phase are applied. Semi-infinite or finite signals with compound envelopes, the signals most frequently used in radio engineering, can be described by the following model

$$x(t) = \text{Re}\left(\dot{X}\_1(t)e^{p\_1(t)t}\right) \tag{4}$$

important to note that even general signal approximation (1) and (2) by Prony's method for

Another option for solution of the filter analysis at signal type (1) is connected to modification of the suggested analysis methods. Modification of the second analysis method is represented

Let us consider IIR filter analysis at input signals (1), as well as for special cases, when

To develop the necessary dependences one can use expressions, obtained for IIR filter at injection on its input a set of finite damped oscillatory components. Let us decompose the signal into time steps, during which signal time dependent parameters are mostly constant.

In case of an even step of signal decomposition one will obtain the following expression to

) - D D - D -+D

*yt X e K p t n t e e K p t n t e* (6)

As before, only algebraic operations with complex amplitudes, frequencies and values of time dependent transfer function on complex frequency of an input signal are applied for the filter

The same approach can be used for digital filters as well – one should replace continuous time *t* to discrete time *kT* , and instead of transfer function *K*(*p*, *t*) of an analog filter one should use a transfer function *K*(*z*, *k*)in the expression. In this case, if one assumes *Δt* to be equal to discrete sampling step *T* , it enables to take into account the errors of analog-to-digital converter

The filter analysis on the basis of the expression (3) is approximate and can be considered as numerical method. To determine explicit dependencies one need to perform passage to the

The required dependency of input-output can be defined also by using the convolution integral by substitution of the expression for the input signal (1). Performing discretization with the passage to the limit, the following input-output dependency for an analog filter-prototype can

*p nt p tnt p t p tn t*

<sup>æ</sup> <sup>æ</sup> <sup>ö</sup> <sup>=</sup> <sup>ç</sup> <sup>ç</sup> -D - - + D <sup>÷</sup> <sup>ç</sup> <sup>è</sup> <sup>ø</sup> <sup>è</sup> å & 1 1 <sup>1</sup> <sup>1</sup> <sup>1</sup> <sup>1</sup>

1 1 1

*n n n*

( ) Re , , 1 *n n <sup>n</sup> <sup>n</sup>*

( ) ( ) ( ( ) ) ( ( ) )

(*nΔt*), *pn*<sup>1</sup> = *p*1(*nΔt*), *N* =*tk* / *Δt*, *tk* - duration of a finite signal (beginning of the

are corresponding to a modulation

Direct Methods for Frequency Filter Performance Analysis

http://dx.doi.org/10.5772/52192

101

calculations enables to apply the considered filter analysis methods.

amplitude *X*˙ (*t*)= *Xm*(*t*)*<sup>e</sup>* <sup>−</sup> *<sup>j</sup>φ* and phase *X*˙ (*t*)= *Xme* <sup>−</sup> *<sup>j</sup>φ*(*t*)

In this case time step *Δt* can be even and uneven.

signal coincides with zero reading on time).

## **4. Performance express-analysis at modulated signal**

### **4.1. Mathematical description of signals**

The signals as a set of semi-infinite or finite damped oscillatory components were considered above. Compound signals of different forms, including compound periodical and quasiperiodic signals, nonstationary signals and signals with compound envelopes, can be synthesized on the basis of the collection of components mentioned above [5,6]. The mentioned models also make it possible to describe the majority of impulse signals widely applicable in radio engineering (radio pulses and video pulses). The analysis methods considered above can be applied for that kind of signals.

However, the more general case is obviously more interesting, when signals with compound dependencies of the envelope and of the signal total phase are applied. Semi-infinite or finite signals with compound envelopes, which are the most frequently used signals in radio engineering, can be described by the following model

$$x(t) = \text{Re}\left(\dot{X}_1(t)\, e^{p_1(t)t}\right) \tag{4}$$

or in the general case

$$\mathbf{x}(t) = \text{Re}\left(\dot{\mathbf{X}}(t)^{\text{T}} e^{\mathbf{P}(t)(\mathbf{C}t - \mathbf{t})}\right) \tag{5}$$

For the considered signal models, for instance the signal (1), finite duration can be specified by the finite duration of the complex amplitude $\dot{X}_1(t)$.

Using the model (1), signals with amplitude, phase and frequency modulation, which are commonly used in radio engineering, can be described. Similar models can also describe input signals in sound processing systems and in automation devices of power systems during electromechanical transients.

It is known that when the amplitude, frequency or initial phase of the signal (1) are time functions, it is impossible to define the law of variation of the output amplitude and initial phase of a filter from the values of its amplitude-frequency and phase-frequency responses at the input signal frequency [21,22]. Thus, the analysis methods considered above cannot be applied directly.

There are exceptional cases when, for some variation laws $\dot{X}_1(t)$ and $p_1(t)$, signals of the type (2) can be transformed into signals described by a set of semi-infinite or finite damped oscillatory components [5,6].

However, signals (1) and (2) can be decomposed into a set of damped oscillatory components using Prony's method and its modifications [23]. Leaving aside the issue of decomposition into "real" components by the mentioned methods, it is important to note that even a general approximation of the signals (1) and (2) by Prony's method makes it possible to apply the considered filter analysis methods.

Another option for the solution of the filter analysis problem at signals of type (1) is connected with a modification of the suggested analysis methods. A modification of the second analysis method is presented below.

### **4.2. Filter analysis**
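Before applying the dependences of this section, a signal of type (1) can, if required, first be decomposed into damped oscillatory components by Prony's method, as noted in section 4.1. A minimal, noiseless Python sketch (the sampling step, model order and component parameters are arbitrary illustrative choices, not values from the chapter):

```python
import numpy as np

# Noiseless test signal: two finite damped oscillatory components,
# x[k] = Re(A1*exp(p1*k*dt)) + Re(A2*exp(p2*k*dt))
dt = 1e-3
k = np.arange(200)
p_true = np.array([-30 + 2j*np.pi*50, -50 + 2j*np.pi*120])  # complex frequencies
A_true = np.array([1.0 - 0.5j, 0.4 + 0.2j])                 # complex amplitudes
x = sum((A*np.exp(p*dt*k)).real for A, p in zip(A_true, p_true))

order = 4  # two conjugate pairs -> 4 real prediction coefficients

# Step 1: linear prediction (Prony), x[k] = -sum_{m=1..order} c[m-1]*x[k-m]
rows = np.column_stack([x[order - m:len(x) - m] for m in range(1, order + 1)])
c, *_ = np.linalg.lstsq(rows, -x[order:], rcond=None)

# Step 2: roots of the characteristic polynomial give the discrete poles
z = np.roots(np.concatenate(([1.0], c)))
p_est = np.log(z)/dt               # continuous complex frequencies p = ln(z)/dt

# Step 3: complex amplitudes by least squares on the Vandermonde matrix
V = np.exp(np.outer(k*dt, p_est))
a_est, *_ = np.linalg.lstsq(V, x.astype(complex), rcond=None)
```

With noisy data the linear-prediction step is normally over-determined with a higher model order or replaced by a total-least-squares variant — the modifications referred to above [23].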

Let us consider the IIR filter analysis at input signals (1), as well as for the special cases when the amplitude $\dot{X}(t) = X_m(t)e^{-j\varphi}$ or the phase $\dot{X}(t) = X_m e^{-j\varphi(t)}$ corresponds to a modulation signal.

To develop the necessary dependences, one can use the expressions obtained for an IIR filter at the injection of a set of finite damped oscillatory components into its input. Let us decompose the signal into time steps during which the time-dependent signal parameters are approximately constant. The time step *Δt* can be uniform or non-uniform.

In the case of a uniform decomposition step, one obtains the following expression for the filter output signal

$$y(t) = \text{Re}\left(\sum\_{n=0}^{N-1} \dot{X}\_{n1} e^{p\_{n1} n\Delta t} \left( K\left(p\_{n1}, t - n\Delta t\right) e^{p\_{n1}\left(t - n\Delta t\right)} - e^{p\_{n1}\Delta t} K\left(p\_{n1}, t - (n+1)\Delta t\right) e^{p\_{n1}\left(t - (n+1)\Delta t\right)} \right) \right) \tag{6}$$

where $\dot{X}_{n1} = \dot{X}_1(n\Delta t)$, $p_{n1} = p_1(n\Delta t)$, $N = t_k / \Delta t$, and $t_k$ is the duration of the finite signal (the beginning of the signal coincides with the zero time reading).

As before, only algebraic operations with complex amplitudes, complex frequencies and values of the time-dependent transfer function at the complex frequency of the input signal are applied for the filter analysis at input signals of type (1).
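Expression (6) can be checked numerically. The sketch below uses a first-order low-pass analog prototype, for which $K(p,t) = \frac{a}{a+p}\left(1 - e^{-(a+p)t}\right)$ for $t \ge 0$ is known in closed form, and an amplitude-modulated input; the filter, the envelope and all parameter values are illustrative choices, not values from the chapter:

```python
import numpy as np

a = 2*np.pi*100.0                  # first-order low-pass: h(t) = a*exp(-a*t)
p1 = 2j*np.pi*50.0                 # constant complex frequency of the input
X1 = lambda t: 1.0 + 0.5*np.cos(2*np.pi*5.0*t)   # slowly varying real envelope
tk = 0.2                           # duration of the finite input signal

def K(p, t):
    """Time-dependent transfer function of the prototype; zero for t < 0."""
    tc = np.maximum(np.asarray(t, dtype=float), 0.0)
    return a/(a + p)*(1.0 - np.exp(-(a + p)*tc))

def y_expr6(t, step):
    """Output assembled from switched-on/switched-off segment responses, eq. (6)."""
    N = int(round(tk/step))
    y = np.zeros_like(t)
    for n in range(N):
        t0, t1 = n*step, (n + 1)*step
        seg = (K(p1, t - t0)*np.exp(p1*np.maximum(t - t0, 0.0))
               - np.exp(p1*step)*K(p1, t - t1)*np.exp(p1*np.maximum(t - t1, 0.0)))
        y += (X1(t0)*np.exp(p1*t0)*seg).real
    return y

# Reference: trapezoidal numerical convolution with the impulse response h(t)
dt = 2e-5
t = np.arange(0.0, tk, dt)
x = (X1(t)*np.exp(p1*t)).real
h = a*np.exp(-a*t)
y_ref = dt*np.convolve(h, x)[:len(t)] - 0.5*dt*(h[0]*x + h*x[0])
y6 = y_expr6(t, step=1e-3)
err = np.max(np.abs(y6 - y_ref))/np.max(np.abs(y_ref))
```

The error is dominated by the staircase approximation of the envelope and decreases as the decomposition step is reduced.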

The same approach can be used for digital filters as well: one should replace continuous time *t* with discrete time *kT*, and use the transfer function *K*(*z*, *k*) instead of the transfer function *K*(*p*, *t*) of an analog filter in the expression. In this case, if *Δt* is taken equal to the sampling step *T*, the errors of the analog-to-digital converter and the finite word length of the microprocessor can be taken into account in the filter analysis.
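The discrete-time counterpart can be verified exactly. In the sketch below (an arbitrary stable second-order section and an arbitrary damped oscillatory input, both purely illustrative), *K*(*z*, *k*) is accumulated as the partial z-transform of the impulse response, and the formula output $\text{Re}(\dot{X}_1 K(z_1,k) z_1^k)$ matches direct filtering:

```python
import numpy as np
from scipy.signal import lfilter

# An arbitrary stable second-order IIR section (illustrative values)
b = np.array([0.2, 0.1, 0.05])
a = np.array([1.0, -1.1, 0.3])          # poles at z = 0.5 and z = 0.6

Kn = 400
h = lfilter(b, a, np.r_[1.0, np.zeros(Kn - 1)])   # impulse response h[0..Kn-1]

# Damped oscillatory input x[k] = Re(X1*z1**k), switched on at k = 0
X1 = 1.0 - 0.5j
z1 = 0.97*np.exp(2j*np.pi*0.05)
k = np.arange(Kn)
x = (X1*z1**k).real

# Time-dependent transfer function K(z1,k) = sum_{m=0}^{k} h[m]*z1**(-m)
Kz = np.cumsum(h*z1**(-k))

y_formula = (X1*Kz*z1**k).real     # Re(X1*K(z1,k)*z1**k)
y_direct = lfilter(b, a, x)        # direct filtering of the same input
```

The agreement is exact (up to rounding) because for a causal input switched on at $k=0$ the convolution sum truncates at $m=k$, which is precisely what $K(z_1,k)$ accumulates.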

The filter analysis on the basis of the expression (3) is approximate and can be considered a numerical method. To determine explicit dependencies, one needs to perform the passage to the limit *Δt* → 0.

The required input-output dependency can also be defined by using the convolution integral, substituting the expression for the input signal (1). Performing the discretization with the passage to the limit, the following input-output dependency for an analog filter-prototype can be obtained

$$y(t) = \operatorname{Re} \left( \int\_0^t \dot{X}\_1(t - \tau) e^{p\_1(\tau)t} K'(p\_1(\tau), \tau) d\tau \right) \tag{7}$$

where

$$K'\left(p_1(t), t\right) = \frac{dK\left(p_1(t), t\right)}{dt} \tag{8}$$

Direct Methods for Frequency Filter Performance Analysis

http://dx.doi.org/10.5772/52192


In the case of amplitude modulation, phase modulation or their combination, the input-output expression simplifies significantly

$$y(t) = \text{Re}\left[\left(\int\_0^t \dot{X}\_1(t-\tau)K'(p\_1,\tau)d\tau\right)e^{p\_1t}\right] \tag{9}$$

For an IIR filter, the following expression for the derivative of the time-dependent transfer function holds

$$K'(p_1, t) = \frac{dK(p_1, t)}{dt} = \text{Re}\left(\left[\dot{G}_m e^{-(p_1 - \rho_m)t}\right]_M\right) \tag{10}$$

In the case of $\dot{X}_1(t) = \dot{X}_1$, the input-output dependency coincides with the one obtained before for the IIR filter (item 6, Table 1)

$$y(t) = \text{Re}\left(\dot{X}_1 K(p_1, t)e^{p_1 t}\right) \tag{11}$$

It follows from the comparison of the input-output dependencies (9) and (11) that in the second case the complex amplitude of the IIR filter output signal under an input injection in the form of a damped oscillatory component is determined by multiplying the signal complex amplitude by the value of the time-dependent transfer function at the input signal frequency, $\dot{Y}(t) = \dot{X}_1 K(p_1, t)$, while in the first case a more complicated dependency takes place: $\dot{Y}(t) = \int_0^t \dot{X}_1(t-\tau)\, K'(p_1, \tau)\, d\tau$.

These dependencies enable filter analysis, including performance analysis of signal processing, at input signals of composite form. Solving problems of this kind is relevant not only for radio engineering and communication systems, but also for other industries.
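For a filter whose time-dependent transfer function is known in closed form, the dependency (9) can be checked numerically. For the first-order low-pass prototype $h(t) = a\,e^{-at}$ one has $K'(p, t) = h(t)e^{-pt}$, so (9) is the convolution integral regrouped around the carrier $e^{p_1 t}$. A sketch (all parameter values are illustrative):

```python
import numpy as np

a = 2*np.pi*100.0                 # first-order low-pass: h(t) = a*exp(-a*t)
p1 = 2j*np.pi*40.0                # constant complex frequency (amplitude modulation)
X1 = lambda t: 1.0 + 0.5*np.sin(2*np.pi*3.0*t)   # real time-varying envelope

dt = 1e-4
t = np.arange(0.0, 0.3, dt)

# Direct convolution of the filter with x(t) = Re(X1(t)*exp(p1*t))
x = (X1(t)*np.exp(p1*t)).real
h = a*np.exp(-a*t)
y_conv = dt*np.convolve(h, x)[:len(t)]

# Expression (9): y(t) = Re( (int_0^t X1(t-u)*K'(p1,u) du) * exp(p1*t) )
Kprime = h*np.exp(-p1*t)          # K'(p1,u) = h(u)*exp(-p1*u)
inner = dt*np.convolve(Kprime, X1(t))[:len(t)]
y_eq9 = (inner*np.exp(p1*t)).real
```

Both sides use the same rectangular rule, so they agree to rounding error; the point of (9) is that the slowly varying envelope computation is separated from the carrier $e^{p_1 t}$.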

For example, for the analysis of the frequency filters used in PMU considered in sections 3.2 and 3.3, according to a new version of the IEEE C37.118.2 standard it is necessary to test the mentioned devices not only at stepwise changes of the amplitude and initial phase of the sinusoidal input component at power frequency, but also at PMU input signals during electromechanical transients in a power system. In the mentioned regimes of power system operation, the envelopes and total phases of the controlled currents and voltages are time functions.

For FIR filter analysis at input actions (1), the input-output expression can be obtained on the basis of the dependences given in Table 6. Similar dependences for IIR filters and FIR filters can be obtained from the dependences given before for the first analysis method.

## **5. The application of the analysis methods**

Let us consider possible areas of application for the suggested performance analysis methods for signal processing by frequency filters, which are used in intelligent electronic devices of power systems, in automation devices, in radio engineering and communication systems, as well as in other fields of engineering where digital signal processing is commonly applied.

The prospects of power system development in the near future are related to the implementation of Smart Grid technology and the application of automatic control and regulation systems of a new generation. Power system control improvement involves wide application of fast-acting IED based on synchronized measurement of current and voltage phasors of the fundamental harmonic, on the basis of the IEEE C37.118-2011 and IEC 61850-90-5 standards.

An up-to-date IED should ensure high-performance signal processing in conditions of intense electromagnetic and electromechanical transients. The mathematical model of an IED input signal in normal and emergency regimes of a power system can in some cases be represented by a set of semi-infinite or finite damped oscillatory components, and in other cases by analogous models in which the complex amplitudes and frequencies of the mentioned components are time functions.

Most power system IED should ensure signal processing performance at any possible combination of input signal parameters. The suggested analysis methods make it possible to solve effectively the problems of determining performance specifications for the frequency filters used in power system IED. Examples of the performance analysis of signal processing by frequency filters, and of signal processing algorithms for power system IED based on the phasor measurement technology, are considered in sections 3 and 4 of the present chapter. An example of IIR filter analysis for general devices of relay protection and automation is given in section 2.

The analysis methods for analog IIR filters considered in the chapter can also be applied to linear circuit analysis. It is important to note that for the absolute majority of microprocessor control and measuring systems the information sources are of analogue nature. In this case, equivalent circuits based on linear electric circuits are used for the analysis of controlled objects; illustrative examples are the equivalent circuits of power systems, power plants, electric grids and power-supply systems. Application of the suggested analysis methods enables a consistent approach to the regime analysis of the controlled object operation and to the analysis of analog filter-prototypes and digital filters [5,6].

The suggested analysis methods may be applied for performance analysis of signal processing by frequency filters in up-to-date measuring devices, automation devices, radio devices, in communications systems, sound processing systems and other devices, where digital signal processing is commonly used.

Advantages and particularities of the suggested analysis methods are related to uniform analysis methods for analog filter-prototypes (IIR filters and FIR filters), as well as for digital filters.

The majority of pulse signals (radio and video pulses), which are commonly used in radio engineering, can be described by a set of finite damped oscillatory components. For performance analysis of pulse signal processing by IIR filters and FIR filters, the analysis methods considered in sections 2 and 3 of the chapter can be applied.

The author suggests using the methods considered in section 4 of the present chapter for performance analysis of modulated signal processing by filters.

The synthesis methods mentioned above can also be effectively applied to typical signal filtering problems, including the extraction of a useful signal against white noise.

In this case, realizations of white noise can be described as a set of time-shifted, fast-damping exponential components of different signs. The initial values and the appearance times of the mentioned exponential components are random variables whose distribution laws ensure that the white noise has the specified spectral characteristics.

## **6. Performance analysis of signal processing as a step of filter synthesis**

Guaranteeing the necessary quality of signal processing is utterly important in frequency filter synthesis. The application of the approaches considered above for signals of type (1) and filter impulse characteristics of type (2) makes it possible to abandon the traditional approach of formulating the requirements on the filter amplitude-frequency response in different regions (pass band, stop band, transition region). In filter synthesis it is enough to lay down the requirements on the filter frequency characteristics, based on the Laplace transform, at the complex frequencies of the input signal, with an allowance for their change. Thus, according to the approach described above, the signal processing performance analysis is one of the steps of filter synthesis.

This approach makes it possible to formalize the task of filter synthesis and to obtain optimal solutions in combination with methods of multicriterion optimization with limitations [5]. In the multicriterion optimization, the values of the filter frequency responses, and in some cases the values of the input signal spectrum, based on the Laplace transform spectral representations at the complex frequencies of the filter impulse function and of the input signal [5,20], are used as the limitations. Filters synthesized in this way ensure the specified performance specifications for signal processing at any possible variation of the input signal parameters.
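As a toy illustration of posing the requirements directly at complex frequencies of the input signal (a minimal least-squares sketch, not the author's multicriterion procedure; the FIR length, sampling period and the signal/disturbance complex frequencies are hypothetical):

```python
import numpy as np

M, T = 64, 1e-3                     # FIR length and sampling period (hypothetical)
m = np.arange(M)

# Hypothetical complex frequencies of the useful component and of a disturbance
p_sig = -20.0 + 2j*np.pi*50.0
p_dist = -100.0 + 2j*np.pi*150.0

def K_row(p):
    """K(p) = sum_m h[m]*exp(-p*m*T): the Laplace-transform-based frequency
    response of the FIR filter evaluated at a complex frequency p."""
    return np.exp(-p*m*T)

E = np.vstack([K_row(p_sig), K_row(p_dist)])
d = np.array([1.0, 0.0])            # pass the useful component, null the disturbance

# Real taps: stack real/imag parts; lstsq returns the minimum-norm solution
A = np.vstack([E.real, E.imag])
rhs = np.concatenate([d, np.zeros(2)])
h, *_ = np.linalg.lstsq(A, rhs, rcond=None)

K_sig = K_row(p_sig) @ h
K_dist = K_row(p_dist) @ h
```

In the multicriterion formulation [5], inequality limitations over ranges of complex frequencies would replace these two equality rows by a constrained optimization.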

Along with frequency filter synthesis, this approach gives an opportunity to perform time window synthesis for the short-time Fourier transform, as well as synthesis of father and mother wavelets, for cases when an input signal can be represented by a set of semi-infinite or finite damped oscillatory components [5,20].

## **7. Conclusion**

Effective methods of performance analysis for signal processing, based on the Laplace transform spectral representations and on the uniformity of the mathematical models of input signals and filter impulse functions as sets of continuous/discrete semi-infinite or finite damped oscillatory components, were developed for a direct determination of the performance specifications of IIR filters and FIR filters. Simple semi-infinite harmonic and aperiodic signals, compound signals and impulse characteristics of any form can be synthesized on the basis of the set of components mentioned above, including signals with composite envelopes as well as pulse signals (radio and video pulses). The uniformity of the mathematical description of the signals and filters allows, on the one hand, a consistent and compact form of their characterization as a set of complex amplitudes, complex frequencies and time parameters. On the other hand, it significantly simplifies the performance analysis of signal processing by analog or digital filters at any possible variation of the useful signal and disturbance parameters, by significantly reducing the amount of calculations.

The analysis methods can also be used for mathematical models in which the complex amplitudes and/or complex frequencies are time functions.

To simplify the task of the analysis, two methods are suggested for performance express-analysis of signal processing by frequency filters using filter frequency responses based on the Laplace transform: the frequency and the frequency-time analysis methods.

The application of the suggested methods for performance analysis of signal processing as one of the steps of filter synthesis makes it possible to automate the design of filters with low sensitivity to signal parameter changes within a specified range.

## **Author details**

Alexey Mokeev

Northern (Arctic) Federal University, Russia

### **References**

[1] Voronov, A. A. (1985). Basic principles of automatic control theory: Special linear and nonlinear systems. *Moscow: Mir Publishers*, 319.

[2] Nise, N. S. (2004). Control System Engineering. *NJ: John Wiley & Sons*, 969.

[3] Ogata, K. (1997). Modern Control Engineering. *NJ: Prentice Hall.*

[4] Mokeev, A. V. (2007). Spectral expansion in coordinates of complex frequency application to analysis and synthesis filters. *Tampere: TICSP Report 37*, 159-167.

[5] Mokeev, A. V. (2011). Application of spectral representations in coordinates of complex frequency for the digital filter analysis and synthesis. In: *Márquez, F. P. G., editor. Digital Filters. Rijeka: InTech*, 27-52.

[6] Mokeev, A. V. (2008). Signal processing in intellectual electronic devices of electric power systems. Vol. 3, *The signal and system spectral expansion in coordinates of complex frequency. Arkhangelsk: ASTU*, 196.

[7] Mokeev, A. V. (2011). Quality analysis of signal processing using digital filters. *International Siberian IEEE Conference. Krasnoyarsk*, 106-109.

[8] Kharkevich, A. A. (1960). Spectra and Analysis. *New York: Consultants Bureau*, 222.

[9] Vanin, V. K., & Pavlov, G. M. (1991). Relay Protection of Computer Components. *Moscow: Énergoatomizdat.*

[10] Prévé, C. (2006). Protection of Electrical Networks. *AREVA: Mâcon*, 512.

[11] Maxfield, B. (2006). Engineering with MathCad: Using MathCad to Create and Organize Your Engineering Calculations. *Butterworth-Heinemann*, 494.

[12] Pritchard, P. (2011). Mathcad: A Tool for Engineering Problem Solving. *McGraw-Hill*, 203.

[13] Mokeev, A. V. (2008). Signal processing in intellectual electronic devices of electric power systems. Vol. 4, *The mathematical description of digital systems. Arkhangelsk: ASTU*, 201.

[14] Smith, S. W. (2002). Digital Signal Processing: A Practical Guide for Engineers and Scientists. *Newnes*, 672.

[15] Ifeachor, E. C., & Jervis, B. W. (2002). Digital Signal Processing: A Practical Approach. *2nd edition. Pearson Education*, 933.

[16] Lyons, R. G. (2004). Understanding Digital Signal Processing. *Prentice Hall PTR*, 665.

[17] Mokeev, A. V. (2009). Analysis of digital filters used for preprocessing of signals of relay protection. *Electromehanica*, 4, 37-42.

[18] Mokeev, A. V. (2009). Frequency filters analysis on the basis of features of signal spectral representations in complex frequency coordinates. *Scientific and Technical Bulletin of SPbSPU*, 2, 61-68.

[19] Altuve Ferrer, H. J., & Schweitzer, E. O. (2010). Modern Solutions for Protection, Control, and Monitoring of Electric Power Systems. *SEL*, 400.

[20] Mokeev, A. V. (2011). Signal processing algorithms for intelligent electronic devices using phasor measurement technology. *In 2011 Proc. Int. Actual Trends in Development of Power System Protection and Automation (CIGRE-2011).*

[21] Fink, L. M. (1984). Signals, Interferences and Errors. *Moscow: Radio and Svyaz*, 256.

[22] Gonorovsky, I. S. (1981). Radio circuits and signals. *Moscow: Mir Publishers*, 639.

[23] Marple, S. L. (1987). Digital Spectral Analysis with Applications. *NJ: Prentice Hall.*


**Chapter 5**

**Frequency Transformation for Linear State-Space Systems and Its Application to High-Performance**

Frequency transformation is one of the well-known techniques for the design of analog and digital filters [1, 2]. This technique is based on variable substitution in a transfer function and allows us to easily convert a given prototype low-pass filter into any kind of frequency-selective filter, such as low-pass filters of different cutoff frequencies, high-pass filters, band-pass filters, and band-stop filters. It is also well known that the transformed filters retain some properties of the prototype filter, such as the stability and the shape of the magnitude response. For example, if a prototype filter is stable and has the Butterworth magnitude response, any filter given by the frequency transformation is also stable and of the Butterworth characteristic. Due to this useful fact, the frequency transformation is suitable not only for filter design but also for real-time tuning of cutoff frequencies, which can be applied to the design of variable filters [3] and to adaptive notch filtering [4, 5]. Hence the frequency transformation plays important roles in many modern applications of signal processing from both the theoretical and practical points of view.

The purpose of this chapter is to provide further insights into the theory of frequency transformation from the viewpoint of *internal* properties of filters. In many textbooks on digital signal processing, the frequency transformation is discussed in terms of only the input-output properties, i.e. properties of the transfer function. In other words, few results have been reported on the relationship between the frequency transformation and the internal properties. As is well known, the internal properties of filters are closely related to the problem of how we should construct a filter structure for a given transfer function, and this problem must be carefully considered in order to obtain analog filters of high dynamic range and low sensitivity [6–12] or digital filters of high accuracy with respect to finite-wordlength effects [13–25]. Hence it is worthwhile to investigate the frequency transformation from the viewpoint of the internal properties, and to extend the results to some practical applications.

> © 2013 Koshita et al.; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

**Systems and Its Application to High-Performance**

**Analog/Digital Filters**

**Analog/Digital Filters**

Masayuki Kawamata

http://dx.doi.org/10.5772/52197

**1. Introduction**

Shunsuke Koshita, Masahide Abe and

Additional information is available at the end of the chapter

Additional information is available at the end of the chapter

Shunsuke Koshita, Masahide Abe and Masayuki Kawamata

**Provisional chapter**

### **Frequency Transformation for Linear State-Space Systems and Its Application to High-Performance Analog/Digital Filters Frequency Transformation for Linear State-Space Systems and Its Application to High-Performance Analog/Digital Filters**

Shunsuke Koshita, Masahide Abe and Masayuki Kawamata Shunsuke Koshita, Masahide Abe and Masayuki Kawamata

Additional information is available at the end of the chapter

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/52197

## **1. Introduction**

Frequency transformation is one of the well-known techniques for design of analog and digital filters [1, 2]. This technique is based on variable substitution in a transfer function and allows us to easily convert a given prototype low-pass filter into any kind of frequency selective filter such as low-pass filters of different cutoff frequencies, high-pass filters, band-pass filters, and band-stop filters. It is also well-known that the transformed filters retain some properties of the prototype filter such as the stability and the shape of the magnitude response. For example, if a prototype filter is stable and has the Butterworth magnitude response, any filter given by the frequency transformation is also stable and of the Butterworth characteristic. Due to this useful fact, the frequency transformation is suitable not only to the filter design but also to the real-time tuning of cutoff frequencies, which can be applied to design of variable filters [3] and to adaptive notch filtering [4, 5]. Hence the frequency transformation plays important roles in many modern applications of signal processing from both the theoretical and practical points of view.
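As a quick illustration of this shape-preserving property, the following sketch (using SciPy, which is an assumption of this illustration and not part of the chapter) converts a third-order analog Butterworth low-pass prototype into a high-pass filter and checks that the 3 dB point lands at the new cutoff:

```python
import numpy as np
from scipy import signal

# 3rd-order analog Butterworth low-pass prototype with cutoff 1 rad/s.
b, a = signal.butter(3, 1.0, btype="low", analog=True)

# LP-HP frequency transformation s <- wo/s, new cutoff wo = 2 rad/s.
bh, ah = signal.lp2hp(b, a, wo=2.0)

# The transformed filter is 3 dB down at its new cutoff, exactly as the
# prototype is at 1 rad/s: the Butterworth shape is retained.
_, h_lp = signal.freqs(b, a, worN=[1.0])
_, h_hp = signal.freqs(bh, ah, worN=[2.0])
print(abs(h_lp[0]), abs(h_hp[0]))  # both approximately 0.7071
```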

The purpose of this chapter is to provide further insights into the theory of frequency transformation from the viewpoint of *internal* properties of filters. In many textbooks on digital signal processing, the frequency transformation is discussed in terms of only the input-output properties, i.e. properties on the transfer function. In other words, few results have been reported about the relationship between the frequency transformation and the internal properties. As is well-known, the internal properties of filters are closely related to the problem of how we should construct a filter structure of a given transfer function, and this problem must be carefully considered in order to obtain analog filters of high dynamic range and low sensitivity [6–12] or digital filters of high accuracy with respect to finite wordlength effects [13–25]. Hence it is worthwhile to investigate the frequency transformation from the viewpoint of the internal properties, and to extend the results to some practical applications.


In order to discuss the frequency transformation from the viewpoint of the internal properties of filters, we make use of the state-space representation. The state-space representation is one of the well-known internal descriptions of linear systems and, in addition, it provides a powerful tool for synthesis of analog/digital filter structures with the aforementioned high-performance. The results from our discussion are twofold. First, we reveal many useful properties of frequency transformation in terms of the state-space representation. The properties to be presented here are closely related to the following three elements of linear state-space systems: the controllability Gramian, the observability Gramian, and the second-order modes. These three elements are known to be very important in characterization of internal properties of analog/digital filters and synthesis of high-performance filter structures. Second, we apply this result to the technique of design and synthesis of analog and digital filters with high performance structures. To be more specific, we present simple and unified frameworks for design and synthesis of analog/digital filters that simultaneously realize the change of frequency characteristics and attain the aforementioned high-performance. Furthermore, we extend this result to variable filters with high-performance structures.


The chapter is organized as follows. Section 2 reviews the fundamentals of the state-space representation of linear systems, including analog filters and digital filters. Section 3 introduces the classical theory of frequency transformation. Sections 4 and 5 are the main theme of this chapter. In Section 4 we discuss the frequency transformation by using the state-space representation and reveal insightful relationships between the frequency transformation and the internal properties of filters. In Section 5 we extend this theory and present new useful methods for design and synthesis of high-performance analog/digital filters.

## **2. State-space representation, Gramians and second-order modes**

In this section we introduce state-space representation of linear systems. In addition, we introduce the aforementioned three elements on the internal properties—controllability Gramian, observability Gramian, and second-order modes—and we address how these elements are applied to synthesis of high-performance filter structures. We will present these topics for digital filters and analog filters, respectively.

## **2.1. State-space representation of digital filters**

Consider the following state-space equations for an *N*-th order stable single-input/single-output linear discrete-time system:

$$\begin{aligned} \mathbf{x}(n+1) &= A\mathbf{x}(n) + \mathbf{b}u(n) \\ y(n) &= \mathbf{c}\mathbf{x}(n) + du(n) \end{aligned} \tag{1}$$

where *u*(*n*), *y*(*n*) and *x*(*n*) ∈ ℜ*N*×<sup>1</sup> denote the scalar input, the scalar output and the state vector, respectively, and *A* ∈ ℜ*N*×*N*, *b* ∈ ℜ*N*<sup>×</sup>1, *c* ∈ ℜ1×*<sup>N</sup>* and *d* ∈ ℜ1×<sup>1</sup> are constant coefficients. Throughout this chapter we assume that the system is stable, controllable and observable. If this state-space system represents a digital filter, each entry of *x*(*n*)


corresponds to each output of delay elements of the filter. Taking the *z*-transform of (1), we have

$$\begin{aligned} zX(z) &= AX(z) + b\mathcal{U}(z) \\ Y(z) &= cX(z) + d\mathcal{U}(z) \end{aligned} \tag{2}$$

from which the transfer function *H*(*z*) is described in terms of (*A*, *b*, *c*, *d*) as

$$H(z) = d + \mathbf{c}(zI\_N - A)^{-1}\mathbf{b} \tag{3}$$

where *I<sup>N</sup>* denotes the *N* × *N* identity matrix.
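Formula (3) can be checked numerically; the sketch below (NumPy/SciPy assumed, with an arbitrary example system) evaluates *d* + *c*(*zI* − *A*)<sup>−1</sup>*b* at one point and compares it with the rational transfer function produced by scipy.signal.ss2tf:

```python
import numpy as np
from scipy import signal

# An arbitrary stable 2nd-order example system (A, b, c, d).
A = np.array([[0.5, 0.2], [-0.1, 0.3]])
b = np.array([[1.0], [0.5]])
c = np.array([[1.0, -1.0]])
d = np.array([[0.25]])

z = 0.9 + 0.4j                              # an arbitrary evaluation point
H_direct = (d + c @ np.linalg.solve(z * np.eye(2) - A, b))[0, 0]  # formula (3)

# Compare with the numerator/denominator polynomials from ss2tf.
num, den = signal.ss2tf(A, b, c, d)
H_tf = np.polyval(num[0], z) / np.polyval(den, z)
print(np.allclose(H_direct, H_tf))          # True
```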


It is well-known that the transfer function *H*(*z*) is invariant under nonsingular transformation matrices *T* ∈ ℜ<sup>*N*×*N*</sup> of the state: if *x*(*n*) is transformed into *x̄*(*n*) = *T*<sup>−1</sup>*x*(*n*), then the state-space system (*A*, *b*, *c*, *d*) is also transformed into the following set (*Ā*, *b̄*, *c̄*, *d̄*):

$$(\overline{A}, \overline{b}, \overline{c}, \overline{d}) = (T^{-1} A \mathbf{T}, T^{-1} b, \mathbf{c} \mathbf{T}, d). \tag{4}$$

It is easy to show that the transfer function of this new set is the same as that of (*A*, *b*, *c*, *d*). Therefore, many structures exist for a digital filter with a given transfer function *H*(*z*). This nonsingular transformation is called similarity transformation.
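A minimal numerical sketch of this invariance (NumPy assumed; the matrices are arbitrary examples, not taken from the chapter):

```python
import numpy as np

# An arbitrary example system and an arbitrary nonsingular T.
A = np.array([[0.5, 0.1], [-0.2, 0.3]])
b = np.array([[1.0], [0.5]])
c = np.array([[1.0, -1.0]])
d = 0.2
T = np.array([[1.0, 2.0], [0.5, -1.0]])      # det = -2, so T is nonsingular

Ti = np.linalg.inv(T)
Ab, bb, cb = Ti @ A @ T, Ti @ b, c @ T       # similarity transformation (4)

def H(A, b, c, d, z):
    """Transfer function d + c (zI - A)^{-1} b evaluated at a point z."""
    return (d + c @ np.linalg.solve(z * np.eye(A.shape[0]) - A, b))[0, 0]

z = 1.1 + 0.7j
print(np.isclose(H(A, b, c, d, z), H(Ab, bb, cb, d, z)))   # True
```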

We next introduce the controllability Gramian, the observability Gramian, and the second-order modes. For the system (*A*, *b*, *c*, *d*), the solutions *K* and *W* to the following Lyapunov equations are called the controllability Gramian and the observability Gramian, respectively:

$$\mathbf{K} = \mathbf{A}\mathbf{K}\mathbf{A}^T + \mathbf{b}\mathbf{b}^T$$

$$\mathbf{W} = \mathbf{A}^T\mathbf{W}\mathbf{A} + \mathbf{c}^T\mathbf{c}.\tag{5}$$

The Gramians *K* and *W* are symmetric and positive definite, i.e. *K* = *K*<sup>*T*</sup> > 0 and *W* = *W*<sup>*T*</sup> > 0, because the system (*A*, *b*, *c*, *d*) is assumed to be stable, controllable and observable. Then, the eigenvalues of the matrix product *KW* are all positive. We denote these eigenvalues as *θ*<sub>1</sub><sup>2</sup>, *θ*<sub>2</sub><sup>2</sup>, ··· , *θ*<sub>*N*</sub><sup>2</sup> and assume that *θ*<sub>1</sub><sup>2</sup> ≥ *θ*<sub>2</sub><sup>2</sup> ≥ ··· ≥ *θ*<sub>*N*</sub><sup>2</sup>. Their positive square roots *θ*<sub>1</sub> ≥ *θ*<sub>2</sub> ≥ ··· ≥ *θ*<sub>*N*</sub> are called the second-order modes of the system. In the literature on control system theory, the second-order modes are also called Hankel singular values because *θ*<sub>1</sub>, *θ*<sub>2</sub>, ··· , *θ*<sub>*N*</sub> are equal to the nonzero singular values of the Hankel operator of *H*(*z*).

The two Gramians and the similarity transformation *x̄*(*n*) = *T*<sup>−1</sup>*x*(*n*) are simply related as follows: the controllability/observability Gramians (*K̄*, *W̄*) of the system in (4) are given by

$$(\overline{\mathbf{K}}, \overline{\mathbf{W}}) = (T^{-1}\mathbf{K}T^{-T}, T^{T}\mathbf{W}T). \tag{6}$$

On the other hand, the second-order modes are invariant under similarity transformation because of the following relationship

$$
\overline{\mathbf{K}} \overline{\mathbf{W}} = \mathbf{T}^{-1} (\mathbf{K} \mathbf{W}) \mathbf{T}. \tag{7}
$$


Hence it follows that the Gramians depend on realizations of the system, while the second-order modes depend only on the transfer function.

In the literature on synthesis of filter structures [13–25], it is shown that the two Gramians and the second-order modes play central roles in analysis and optimization of filter performance such as the roundoff noise and the coefficient sensitivity. In other words, given the transfer function of a digital filter, we can formulate some cost functions with respect to the aforementioned filter performance in terms of the two Gramians (*K*,*W*), and a filter structure of high performance can be obtained by constructing the two Gramians appropriately in such a manner that they optimize or sub-optimize the corresponding cost functions.

An example of high-performance digital filter structures is the balanced form [15, 16, 18, 23, 25]. This form consists of the two Gramians given by

$$\mathbf{K} = \mathbf{W} = \boldsymbol{\Theta} \tag{8}$$

where Θ is the diagonal matrix consisting of the second-order modes, i.e.

$$\boldsymbol{\Theta} = \text{diag}(\theta\_1, \theta\_2, \dots, \theta\_N). \tag{9}$$

Another example is the minimum roundoff noise structure [13, 14, 16, 17], which consists of the two Gramians that satisfy the following relationships

$$\begin{aligned} \mathbf{W} &= \left(\frac{1}{N} \sum\_{i=1}^{N} \theta\_i\right)^2 \mathbf{K} \\ K\_{ii} &= 1 \end{aligned} \tag{10}$$

where *Kii* denotes the *i*-th diagonal entry of *K*.

Finally, we address the significance of the second-order modes from two practical aspects. First, it is known in the literature that the second-order modes describe the optimal values of the aforementioned cost functions. Therefore, it follows that the optimal performance is determined by the second-order modes of a given transfer function. Another important feature of the second-order modes can be seen in the field of the balanced model reduction [26–28], where it is shown that the second-order modes provide the upper bound of the approximation error between the reduced-order system and the original system.

## **2.2. State-space representation of analog filters**


An *N*-th order linear continuous-time system (including analog filter) can be described by the following state-space representation

$$\begin{aligned} \frac{d\mathbf{x}(t)}{dt} &= \mathbf{A}\mathbf{x}(t) + \mathbf{b}u(t) \\ y(t) &= \mathbf{c}\mathbf{x}(t) + du(t) \end{aligned} \tag{11}$$

where *u*(*t*), *y*(*t*) and *x*(*t*) ∈ ℜ*N*×<sup>1</sup> are the scalar input, the scalar output and the state vector of the system, respectively, and *A* ∈ ℜ*N*×*N*, *b* ∈ ℜ*N*<sup>×</sup>1, *c* ∈ ℜ1×*<sup>N</sup>* and *d* ∈ ℜ1×<sup>1</sup> are constant coefficients. The system (*A*, *b*, *c*, *d*) is assumed to be stable, controllable and observable. If this system represents a continuous-time analog filter that comprises *N* integrators, the state vector corresponds to the output signals of these integrators.

Taking the Laplace transform of (11) leads to

$$\begin{aligned} sX(s) &= AX(s) + b\mathcal{U}(s) \\ Y(s) &= cX(s) + d\mathcal{U}(s) \end{aligned} \tag{12}$$

which results in the following transfer function

$$H(s) = d + \mathbf{c}(sI\_N - A)^{-1}\mathbf{b}.\tag{13}$$

Similar to the discrete-time case, the transfer function is invariant under similarity transformation: if *x*(*t*) is transformed by a nonsingular matrix *T* ∈ ℜ<sup>*N*×*N*</sup> into *T*<sup>−1</sup>*x*(*t*), then the new state-space system (*T*<sup>−1</sup>*AT*, *T*<sup>−1</sup>*b*, *cT*, *d*) is an equivalent realization to (*A*, *b*, *c*, *d*) of the transfer function *H*(*s*). Therefore, many circuit topologies exist for an analog filter with a given transfer function *H*(*s*).

The controllability Gramian *K* and the observability Gramian *W* of a continuous-time state-space system are respectively obtained as the solutions to the following Lyapunov equations:

$$\mathbf{A}\mathbf{K} + \mathbf{K}\mathbf{A}^T + \mathbf{b}\mathbf{b}^T = \mathbf{0}\_{N \times N}$$

$$\mathbf{A}^T\mathbf{W} + \mathbf{W}\mathbf{A} + \mathbf{c}^T\mathbf{c} = \mathbf{0}\_{N \times N} \tag{14}$$

where **0***N*×*<sup>N</sup>* denotes the *N* × *N* zero matrix. By the assumption of the stability, controllability and observability of (*A*, *b*, *c*, *d*), the Gramians *K* and *W* are shown to be symmetric and positive definite. Then, as in the discrete-time case, the second-order modes *θ*1, *θ*2, ··· , *θ<sup>N</sup>* are obtained as the positive square roots of the eigenvalues of *KW*.

The relationship of similarity transformations to the Gramians and the second-order modes in the continuous-time case is the same as that in the discrete-time case. The new Gramians (*K̄*, *W̄*) of the transformed continuous-time system given by a similarity transformation *T* are shown to be (*T*<sup>−1</sup>*KT*<sup>−*T*</sup>, *T*<sup>*T*</sup>*W T*), and thus the Gramians depend on realizations of the system. On the other hand, the second-order modes are invariant because *K̄W̄* = *T*<sup>−1</sup>(*KW*)*T* holds.

As in the discrete-time case, the Gramians and the second-order modes of continuous-time systems play important roles in synthesis of filter structures of high performance [6–12]. A high-performance structure can be obtained by optimizing or sub-optimizing a prescribed cost function in terms of the controllability and observability Gramians. Such a cost function can be seen as a measure of the dynamic range and the sensitivity of an analog filter. In addition, the optimal values of such cost functions are determined by the second-order modes.

## **3. Frequency transformation**

## **3.1. Frequency transformation of digital filters**

Frequency transformation of digital filters can be seen in the work of Oppenheim [29] and Constantinides [2]. The work of Oppenheim applies to finite impulse response (FIR) transfer functions, whereas the work of Constantinides applies to infinite impulse response (IIR) transfer functions. In this chapter, the frequency transformation of digital filters is restricted to the work of Constantinides.

Now let *H*(*z*) be the transfer function of a given *N*-th order digital low-pass filter. The frequency transformation in the discrete-time case is defined as

$$H(F(z)) = H(z)|\_{z^{-1} \leftarrow 1/F(z)}\tag{15}$$


which results in a new composite transfer function *H*(*F*(*z*)). The function 1/*F*(*z*) for this transformation is defined as an *M*-th order stable all-pass function of the form

$$\frac{1}{F(z)} = \pm z^{-M} \frac{G(z^{-1})}{G(z)}$$

$$G(z) = 1 + \sum\_{k=1}^{M} g\_k z^{-k}.\tag{16}$$

The well-known typical frequency transformations make use of the following four types of all-pass functions

$$\begin{split} \frac{1}{F\_{\rm LP}(z)} &= \frac{z^{-1} - \tilde{\xi}}{1 - \tilde{\xi}z^{-1}} \\ \frac{1}{F\_{\rm HP}(z)} &= -\frac{z^{-1} + \tilde{\xi}}{1 + \tilde{\xi}z^{-1}} \\ \frac{1}{F\_{\rm BP}(z)} &= -\frac{z^{-2} - \frac{2\tilde{\xi}\eta}{\eta+1}z^{-1} + \frac{\eta-1}{\eta+1}}{1 - \frac{2\tilde{\xi}\eta}{\eta+1}z^{-1} + \frac{\eta-1}{\eta+1}z^{-2}} \\ \frac{1}{F\_{\rm BS}(z)} &= \frac{z^{-2} - \frac{2\tilde{\xi}\eta}{1+\eta}z^{-1} + \frac{1-\eta}{1+\eta}}{1 - \frac{2\tilde{\xi}\eta}{1+\eta}z^{-1} + \frac{1-\eta}{1+\eta}z^{-2}} \end{split} \tag{17}$$

which respectively correspond to the low-pass-low-pass (LP-LP), low-pass-high-pass (LP-HP), low-pass-band-pass (LP-BP) and low-pass-band-stop (LP-BS) transformations. The parameters *ξ* and *η* determine the cutoff frequencies of the transformed filters. On the block diagram of a digital filter, the frequency transformation means that each delay element *z*−<sup>1</sup> in *H*(*z*) is replaced<sup>1</sup> with an all-pass filter 1/*F*(*z*).

## **3.2. Frequency transformation of analog filters**

6 Digital Filters and Signal Processing

**3. Frequency transformation**

restricted to the work of Constantinides.

**3.1. Frequency transformation of digital filters**

frequency transformation in the discrete-time case is defined as

modes.

all-pass functions

As in the discrete-time case, the Gramians and the second-order modes of continuous-time systems play important roles in synthesis of filter structures of high performance [6–12]. A high-performance structure can be obtained by optimizing or sub-optimizing a prescribed cost function in terms of the controllability and observability Gramians. Such a cost function can be seen as a measure of the dynamic range and the sensitivity of an analog filter. In addition, the optimal values of such cost functions are determined by the second-order

Frequency transformation of digital filters can be seen in the work of Oppenheim [29] and Constantinides [2]. The work of Oppenheim is applied to finite impulse response (FIR) transfer functions, whereas the work of Constantinides is applied to infinite impulse response (IIR) transfer functions. In this chapter, the frequency transformation of digital filters is

Now let *H*(*z*) be the transfer function of a given *N*-th order digital low-pass filter. The

which results in a new composite transfer function *H*(*F*(*z*)). The function 1/*F*(*z*) for this

*<sup>F</sup>*(*z*) <sup>=</sup> <sup>±</sup>*z*−*<sup>M</sup> <sup>G</sup>*(*z*−1)

The well-known typical frequency transformations make use of the following four types of

1 − *ξz*−<sup>1</sup>

1 + *ξz*−<sup>1</sup>

<sup>1</sup> <sup>−</sup> <sup>2</sup>*ξη*

<sup>1</sup> <sup>−</sup> <sup>2</sup>*ξη*

*M* ∑ *k*=1 *G*(*z*)

*<sup>η</sup>*+<sup>1</sup> *<sup>z</sup>*−<sup>1</sup> <sup>+</sup> *<sup>η</sup>*−<sup>1</sup>

*<sup>η</sup>*+<sup>1</sup> *<sup>z</sup>*−<sup>1</sup> <sup>+</sup> *<sup>η</sup>*−<sup>1</sup>

<sup>1</sup>+*<sup>η</sup> <sup>z</sup>*−<sup>1</sup> <sup>+</sup> <sup>1</sup>−*<sup>η</sup>*

<sup>1</sup>+*<sup>η</sup> <sup>z</sup>*−<sup>1</sup> <sup>+</sup> <sup>1</sup>−*<sup>η</sup>*

*η*+1

*<sup>η</sup>*+<sup>1</sup> *<sup>z</sup>*−<sup>2</sup>

1+*η*

transformation is defined as an *M*-th order stable all-pass function of the form

*G*(*z*) = 1 +

1

1

1

1

1

*<sup>F</sup>*LP(*z*) <sup>=</sup> *<sup>z</sup>*−<sup>1</sup> <sup>−</sup> *<sup>ξ</sup>*

*<sup>F</sup>*HP(*z*) <sup>=</sup> <sup>−</sup> *<sup>z</sup>*−<sup>1</sup> <sup>+</sup> *<sup>ξ</sup>*

*<sup>F</sup>*BP(*z*) <sup>=</sup> <sup>−</sup> *<sup>z</sup>*−<sup>2</sup> <sup>−</sup> <sup>2</sup>*ξη*

*<sup>F</sup>*BS(*z*) <sup>=</sup> *<sup>z</sup>*−<sup>2</sup> <sup>−</sup> <sup>2</sup>*ξη*

*<sup>H</sup>*(*F*(*z*)) = *<sup>H</sup>*(*z*)|*z*−1←1/*F*(*z*) (15)

*gkz*<sup>−</sup>*k*. (16)

<sup>1</sup>+*<sup>η</sup> <sup>z</sup>*−<sup>2</sup> (17)

Let *H*(*s*) be the transfer function of a given *N*-th order analog low-pass filter. The frequency transformation of analog filters is defined as the following variable substitution [1]

$$H(F(s)) = H(s)|\_{s^{-1} \leftarrow 1/F(s)}.\tag{18}$$

Hence the frequency transformation yields a new composite transfer function *H*(*F*(*s*)) from the prototype transfer function *H*(*s*). In general, the cutoff frequency of the prototype low-pass filter is set to be 1 rad/s. From a circuit point of view, the substitution *s*−<sup>1</sup> ← 1/*F*(*s*) means that each integrator 1/*s* in the prototype filter *H*(*s*) is replaced with another system with the transfer function 1/*F*(*s*).

The transformation function 1/*F*(*s*) is defined as the following Foster reactance function [1]

$$\frac{1}{F(s)} = \frac{z(s)}{p(s)} = G \frac{(s^2 + \omega\_{\rm z1}^2)(s^2 + \omega\_{\rm z2}^2)(s^2 + \omega\_{\rm z3}^2) \cdots}{s(s^2 + \omega\_{\rm p1}^2)(s^2 + \omega\_{\rm p2}^2)(s^2 + \omega\_{\rm p3}^2) \cdots} \tag{19}$$

where *G* > 0 and 0 ≤ *ω*z1 < *ω*p1 < *ω*z2 < *ω*p2 < *ω*z3 < *ω*p3 < ··· . The Foster reactance functions are determined in such a manner that the difference between the degrees of *p*(*s*) and *z*(*s*) is 1, i.e. | deg *p*(*s*) − deg *z*(*s*)| = 1. In the case of the well-known typical LP-LP, LP-HP, LP-BP and LP-BS transformations, the reactance functions are respectively given by

$$\begin{split} \frac{1}{F\_{\rm LP}(s)} &= \frac{G}{s} \\ \frac{1}{F\_{\rm HP}(s)} &= Gs \\ \frac{1}{F\_{\rm BP}(s)} &= \frac{Gs}{s^2 + \omega\_{\rm p1}^2} \\ \frac{1}{F\_{\rm BS}(s)} &= \frac{G(s^2 + \omega\_{\rm z1}^2)}{s}. \end{split} \tag{20}$$

The parameters *G*, *ω*p1 and *ω*z1 determine the cutoff frequencies of the transformed filters.

<sup>1</sup> To be precise, replacing *z*−<sup>1</sup> with another transfer function often yields a delay-free loop. In this case, some extra processing such as reformulation of the coefficients of the transformed filter is required after this replacement.

It is important to note that the Foster reactance functions are classified into two categories—strictly proper reactance functions and improper reactance functions2. In the typical frequency transformations of (20), 1/*F*LP(*s*) and 1/*F*BP(*s*) correspond to strictly proper reactance functions, whereas 1/*F*HP(*s*) and 1/*F*BS(*s*) are improper reactance functions.
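As a concrete illustration of the substitution (18)–(20), the following sketch (not from the chapter; the Butterworth prototype and the parameter values *G* and *ω*p1 are arbitrary choices) checks numerically that the LP-BP reactance maps the prototype's DC gain to the center frequency *ω*p1, since *F*BP(*jω*p1) = 0:

```python
import numpy as np

# Prototype: 2nd-order analog Butterworth low-pass, cutoff 1 rad/s
# (an illustrative choice, not taken from the chapter).
def H(s):
    return 1.0 / (s**2 + np.sqrt(2) * s + 1.0)

G, wp1 = 0.5, 3.0  # transform parameters (arbitrary)

def F_bp(s):
    # 1/F_BP(s) = G s / (s^2 + wp1^2), hence F_BP(s) = (s^2 + wp1^2) / (G s)
    return (s**2 + wp1**2) / (G * s)

# The substitution s^{-1} <- 1/F_BP(s) turns H(s) into the band-pass H(F_BP(s)).
# At the center frequency s = j*wp1 the reactance vanishes, F_BP(j*wp1) = 0,
# so the band-pass filter takes the prototype's DC value there.
center_gain = H(F_bp(1j * wp1))
assert np.isclose(center_gain, H(0))  # equals 1 for this prototype

# Away from the passband the response is strongly attenuated.
assert abs(H(F_bp(1j * 0.1))) < 1e-3
assert abs(H(F_bp(1j * 100.0))) < 1e-3
```

The same evaluation on a dense grid of *jω* points reproduces the full band-pass magnitude response of the composite filter.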


## **4. State-space analysis of frequency transformation**

In this section, we discuss the frequency transformation from the viewpoint of the internal properties. In other words, we show many interesting results of the frequency transformation in terms of the state-space representation.

This research has its roots in the work of Mullis and Roberts [30], where they presented a simple state-space formulation of frequency transformation for digital filters and they proved an important property of the second-order modes—they are invariant under frequency transformation. In addition, they provided practical impacts of these results on the design and synthesis of high-performance digital filters.

In this chapter we start with introducing this work, and then we further extend this result and present other theoretical results on the relationship between the frequency transformation and the state-space representation of discrete-time systems. In addition, we also present similar results for continuous-time systems.

## **4.1. State-space formulation of frequency transformation for digital filters and invariance of second-order modes**

Mullis and Roberts [30] first presented an explicit state-space representation of frequency transformation as follows. Let (*A*, *b*, *c*, *d*) be a state-space representation of a given prototype filter *H*(*z*). Then, the transfer function *H*(*F*(*z*)) that is given by the frequency transformation (15) with an *M*-th order all-pass function 1/*F*(*z*) can be explicitly described by

$$H(F(z)) = \mathcal{D} + \mathcal{C}(zI\_{MN} - \mathcal{A})^{-1}\mathcal{B} \tag{21}$$

with the following coefficients

$$\begin{aligned} \mathcal{A} &= I\_N \otimes \alpha + [A(I\_N - \delta A)^{-1}] \otimes (\beta\gamma) \\ \mathcal{B} &= [(I\_N - \delta A)^{-1}b] \otimes \beta \\ \mathcal{C} &= [c(I\_N - \delta A)^{-1}] \otimes \gamma \\ \mathcal{D} &= d + \delta c(I\_N - \delta A)^{-1}b \end{aligned} \tag{22}$$

where (*α*, *β*, *γ*, *δ*) is an arbitrary state-space representation of 1/*F*(*z*), and ⊗ stands for the Kronecker product for matrices.

<sup>2</sup> A rational function *G*(*s*) = *N*(*s*)/*D*(*s*) is called strictly proper if deg*N*(*s*) < deg*D*(*s*). On the other hand, *G*(*s*) is called improper if deg*N*(*s*) > deg*D*(*s*). Since the Foster reactance functions given by (19) always satisfy | deg *p*(*s*) − deg *z*(*s*)| = 1, there does not exist any reactance function such that deg *p*(*s*) = deg *z*(*s*).

The significance of the description given by (22) lies in the fact that, by using this description, we can easily carry out the frequency transformation on a state-space structure as well as a transfer function. Also, note that this description does not include any delay-free loop.
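The description (21)–(22) can be checked numerically against the direct substitution (15). The sketch below is illustrative only: the prototype realization and the value of *ξ* are arbitrary assumptions, and the first-order LP-HP all-pass is realized in controllable canonical form. It builds the transformed system with Kronecker products and compares its frequency response with the prototype evaluated at the warped frequency point:

```python
import numpy as np

# Prototype 2nd-order digital low-pass in state space (illustrative values).
A = np.array([[0.3, 0.2], [0.0, 0.4]])
b = np.array([[1.0], [0.5]])
c = np.array([[0.4, 0.3]])
d = 0.2

# First-order LP-HP all-pass 1/F_HP(z) = -(z^{-1} + xi)/(1 + xi z^{-1}),
# realized in state space as (alpha, beta, gamma, delta), so M = 1.
xi = 0.5
alpha, beta, gamma, delta = -xi, 1.0, xi**2 - 1.0, -xi

# Transformed filter via (22).
N = A.shape[0]
T = np.linalg.inv(np.eye(N) - delta * A)
calA = np.kron(np.eye(N), alpha) + np.kron(A @ T, beta * gamma)
calB = np.kron(T @ b, beta)
calC = np.kron(c @ T, gamma)
calD = d + delta * (c @ T @ b).item()

def tf(Am, bm, cm, dm, z):
    # Transfer function d + c (zI - A)^{-1} b evaluated at a point z.
    n = Am.shape[0]
    return dm + (cm @ np.linalg.inv(z * np.eye(n) - Am) @ bm).item()

# Check against the direct substitution (15): z^{-1} <- 1/F_HP(z).
for w in (0.3, 1.0, 2.5):
    z = np.exp(1j * w)
    v = -(1 / z + xi) / (1 + xi / z)          # value of 1/F_HP at e^{jw}
    assert np.isclose(tf(calA, calB, calC, calD, z), tf(A, b, c, d, 1 / v))
```

The agreement at every frequency point confirms that (22) realizes exactly the composite transfer function *H*(*F*(*z*)), with no delay-free loop appearing in the construction.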


In addition to the above state-space formulation, Mullis and Roberts also described the Gramians and the second-order modes of the transformed system $(\mathcal{A}, \mathcal{B}, \mathcal{C}, \mathcal{D})$. The two Gramians, which are respectively denoted by $\mathcal{K}$ and $\mathcal{W}$, are given as follows:

$$\begin{aligned} \mathcal{K} &= K \otimes Q \\ \mathcal{W} &= W \otimes Q^{-1} \end{aligned} \tag{23}$$

where *Q* is the controllability Gramian of the all-pass system (*α*, *β*, *γ*, *δ*). From this relationship we easily see

$$\mathcal{K}\mathcal{W} = (KW) \otimes I\_M \tag{24}$$

which means that the matrix product $\mathcal{K}\mathcal{W}$ has the same eigenvalues as *KW* with multiplicity *M*. This shows that the second-order modes of transformed filters are the same as those of a given prototype filter. Hence the second-order modes of digital filters are invariant under frequency transformation.
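This invariance is easy to confirm numerically. In the hedged sketch below (prototype values are arbitrary; the Gramians are approximated by truncated series sums, which is valid for stable systems), the second-order modes of an LP-HP transformed filter coincide with those of the prototype:

```python
import numpy as np

# Prototype 2nd-order digital low-pass (illustrative values).
A = np.array([[0.3, 0.2], [0.0, 0.4]])
b = np.array([[1.0], [0.5]])
c = np.array([[0.4, 0.3]])

def gramians(A, b, c, terms=400):
    # K = sum_k A^k b b^T (A^T)^k,  W = sum_k (A^T)^k c^T c A^k  (stable A).
    K = np.zeros_like(A); W = np.zeros_like(A); P = np.eye(len(A))
    for _ in range(terms):
        K += P @ b @ b.T @ P.T
        W += P.T @ c.T @ c @ P
        P = P @ A
    return K, W

# LP-HP transformed system built as in (21)-(22) with the first-order
# all-pass 1/F_HP(z) = -(z^{-1} + xi)/(1 + xi z^{-1}), so M = 1.
xi = 0.5
alpha, beta, gamma, delta = -xi, 1.0, xi**2 - 1.0, -xi
T = np.linalg.inv(np.eye(2) - delta * A)
A2 = np.kron(np.eye(2), alpha) + np.kron(A @ T, beta * gamma)
b2 = np.kron(T @ b, beta)
c2 = np.kron(c @ T, gamma)

K, W = gramians(A, b, c)
K2, W2 = gramians(A2, b2, c2)

# Eigenvalues of KW (squared second-order modes) are invariant, cf. (24).
m1 = np.sort(np.linalg.eigvals(K @ W).real)
m2 = np.sort(np.linalg.eigvals(K2 @ W2).real)
assert np.allclose(m1, m2)
```

For an LP-BP or LP-BS transformation (*M* = 2), the same check would return each eigenvalue of *KW* twice, as stated above.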

The practical benefit of this invariance property is discussed as follows. As stated in Section 2, the second-order modes determine the optimal values of cost functions with respect to finite wordlength effects. In [30], using the fact that the minimum roundoff noise is characterized by the second-order modes, it was proved that the minimum attainable value of the roundoff noise of digital filters is independent of the filter characteristics that are controlled by the frequency transformation. A similar conclusion can be drawn for the balanced model reduction: the upper bound of the approximation error due to the balanced model reduction is invariant under frequency transformation.

Furthermore, in the case of the LP-LP transformation, the work of [30] also presents the specific state-space-based frequency transformation that can preserve the optimal realizations. This specific transformation is given by

$$\begin{aligned} \mathcal{A} &= (\xi I\_N + A)(I\_N + \xi A)^{-1} \\ \mathcal{B} &= \sqrt{1 - \xi^2}\,(I\_N + \xi A)^{-1}b \\ \mathcal{C} &= \sqrt{1 - \xi^2}\,c(I\_N + \xi A)^{-1} \\ \mathcal{D} &= d - \xi c(I\_N + \xi A)^{-1}b. \end{aligned} \tag{25}$$

By setting the prototype state-space filter (*A*, *b*, *c*, *d*) to be the optimal realization and applying (25), we can obtain arbitrary low-pass filters that have the same optimal realization as the prototype filter.
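A quick numerical check of (25) follows (a sketch with arbitrary prototype values and an arbitrary *ξ*, not the chapter's own example). It verifies that the LP-LP transformed realization has exactly the same controllability and observability Gramians as the chosen prototype realization:

```python
import numpy as np

# Prototype realization (illustrative values) and its Gramians.
A = np.array([[0.3, 0.2], [0.0, 0.4]])
b = np.array([[1.0], [0.5]])
c = np.array([[0.4, 0.3]])

def gramians(A, b, c, terms=400):
    # Truncated-series Gramians, valid for stable A.
    K = np.zeros_like(A); W = np.zeros_like(A); P = np.eye(len(A))
    for _ in range(terms):
        K += P @ b @ b.T @ P.T
        W += P.T @ c.T @ c @ P
        P = P @ A
    return K, W

# LP-LP transformation (25) with an arbitrary parameter xi.
xi = 0.3
T = np.linalg.inv(np.eye(2) + xi * A)
A2 = (xi * np.eye(2) + A) @ T
b2 = np.sqrt(1 - xi**2) * T @ b
c2 = np.sqrt(1 - xi**2) * c @ T
# (the direct term d - xi c (I + xi A)^{-1} b does not affect the Gramians)

K, W = gramians(A, b, c)
K2, W2 = gramians(A2, b2, c2)
assert np.allclose(K2, K) and np.allclose(W2, W)
```

Starting from a balanced or minimum-roundoff-noise prototype therefore yields a low-pass filter with the same high-performance structure.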

In the rest of this section, we will provide our results that are derived by further extending these results.


## **4.2. Gramian-preserving frequency transformation for digital filters**

Here we pay special attention to the controllability and observability Gramians, and we provide a new state-space formulation of frequency transformation that can keep these Gramians invariant. This new state-space-based frequency transformation is called the Gramian-preserving frequency transformation [31] and includes the formulation of (25) as a special case.

Before showing the mathematical formulation of the Gramian-preserving frequency transformation, we first discuss how the Gramian-preserving frequency transformation is related to design and synthesis of digital filters. Simple examples for design/synthesis of low-pass, high-pass, band-pass and band-stop filters are given in Fig. 1. Here, suppose that we are given a prototype low-pass filter with the transfer function *H*(*z*), as shown at the left of this figure. Also, let the controllability/observability Gramians of this prototype filter be *K* and *W*, respectively. Then, by applying the Gramian-preserving frequency transformation to this prototype filter, we can convert this filter into other arbitrary low-pass, high-pass, band-pass and band-stop filters that have the *same* controllability/observability Gramians as those of the prototype filter<sup>3</sup>. Now, recalling that high-performance structures can be obtained by appropriate choice of the Gramians, we notice that the Gramian-preserving frequency transformation is a very powerful technique for simultaneous design and synthesis of high-performance digital filters. That is, if we prepare the structure of a given prototype low-pass filter as a high-performance one such as the balanced form and the minimum roundoff noise form, the Gramian-preserving frequency transformation enables us to obtain other types of filters with the same high-performance structure. This fact is also true for analog filters, as will be shown later in the next subsection.


**Figure 1.** Gramian-preserving frequency transformation.


We now present the mathematical formulation of the Gramian-preserving frequency transformation. Given a prototype state-space digital filter (*A*, *b*, *c*, *d*) with the transfer function *H*(*z*) and an *M*-th order all-pass function 1/*F*(*z*), the following description provides the Gramian-preserving frequency transformation to produce the composite transfer function *H*(*F*(*z*)):

$$\begin{aligned} \mathcal{A} &= \widetilde{\alpha} \otimes I\_N + (\widetilde{\beta}\widetilde{\gamma}) \otimes [A(I\_N - \widetilde{\delta}A)^{-1}] \\ \mathcal{B} &= \widetilde{\beta} \otimes [(I\_N - \widetilde{\delta}A)^{-1}b] \\ \mathcal{C} &= \widetilde{\gamma} \otimes [c(I\_N - \widetilde{\delta}A)^{-1}] \\ \mathcal{D} &= d + \widetilde{\delta}c(I\_N - \widetilde{\delta}A)^{-1}b \end{aligned} \tag{26}$$

where the set $(\widetilde{\alpha}, \widetilde{\beta}, \widetilde{\gamma}, \widetilde{\delta})$ is a state-space representation of 1/*F*(*z*) with the controllability/observability Gramians equal to the identity matrix, i.e.

$$\widetilde{\alpha}\widetilde{\alpha}^T + \widetilde{\beta}\widetilde{\beta}^T = \widetilde{\alpha}^T\widetilde{\alpha} + \widetilde{\gamma}^T\widetilde{\gamma} = I\_M. \tag{27}$$

This relationship means that the set $(\widetilde{\alpha}, \widetilde{\beta}, \widetilde{\gamma}, \widetilde{\delta})$ is a balanced form. It should be noted that such a set always exists if 1/*F*(*z*) is stable.
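For the second-order case *M* = 2, the balanced-form condition (27) can be checked directly on the normalized-lattice realization given at the end of this section in (29). In this sketch the lattice coefficients are arbitrary values with magnitude below one, and the "+" sign choice is taken:

```python
import numpy as np

x1, x2 = 0.6, -0.4                    # lattice coefficients, |xi_i| < 1
h1, h2 = np.sqrt(1 - x1**2), np.sqrt(1 - x2**2)

# Normalized-lattice state-space realization of a 2nd-order all-pass.
alpha = np.array([[-x1, -h1 * x2],
                  [ h1, -x1 * x2]])
beta  = np.array([[h1 * h2], [x1 * h2]])
gamma = np.array([[0.0, h2]])
delta = x2

# Balanced-form condition (27): both Gramians equal the identity.
I2 = np.eye(2)
assert np.allclose(alpha @ alpha.T + beta @ beta.T, I2)
assert np.allclose(alpha.T @ alpha + gamma.T @ gamma, I2)

# The realization is lossless: |1/F(e^{jw})| = 1 on the unit circle.
for w in (0.0, 0.7, 2.0):
    z = np.exp(1j * w)
    H = delta + (gamma @ np.linalg.inv(z * I2 - alpha) @ beta).item()
    assert np.isclose(abs(H), 1.0)
```

The check passes for any |*ξ*1|, |*ξ*2| < 1, reflecting the fact that the normalized lattice of a stable all-pass function is always balanced.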

Now we turn our attention to the mathematical formulation of the Gramians of $(\mathcal{A}, \mathcal{B}, \mathcal{C}, \mathcal{D})$, which are respectively denoted by $\mathcal{K}$ and $\mathcal{W}$. They are given in terms of the Gramians of the prototype filter as follows:

$$\begin{aligned} \mathcal{K} &= I\_M \otimes K \\ \mathcal{W} &= I\_M \otimes W \end{aligned} \tag{28}$$

which means that $\mathcal{K}$ and $\mathcal{W}$ become block diagonal matrices with *M* diagonal blocks all equal to *K* and *W*, respectively. Therefore, as stated earlier, the transformed filter has the same Gramians as the prototype filter, repeated with multiplicity *M*. Hence (26) preserves the Gramians under frequency transformation.

<sup>3</sup> In the case of LP-BP and LP-BS transformations, the transformed filters have the same Gramians with multiplicity 2 as those of the prototype filter. This is because the all-pass functions 1/*F*BP(*z*) and 1/*F*BS(*z*) are second-order functions and the orders of *H*(*F*BP(*z*)) and *H*(*F*BS(*z*)) become twice as high as that of *H*(*z*).

We next discuss the Gramian-preserving frequency transformation from a realization point of view. From (27), we first see that realization of the Gramian-preserving frequency transformation requires us to construct the structure of the all-pass filter 1/*F*(*z*) appropriately such that its state-space representation becomes a balanced form. Although formulation of the balanced form is known to be non-unique for a given transfer function, we presented a useful technique [31]: given an all-pass transfer function 1/*F*(*z*), its normalized lattice structure becomes a balanced form, which enables us to realize the Gramian-preserving frequency transformation. This is derived from the fact that 1/*F*(*z*) is all-pass.

Now, recall that the frequency transformation of digital filters means that each delay element in a prototype filter is replaced with an all-pass filter (and delay-free loops, if any, are eliminated after this replacement)<sup>4</sup>. In view of this, we can conclude that the Gramian-preserving frequency transformation is interpreted as the replacement of each delay element in the prototype filter with the all-pass filter that has the normalized lattice structure. Figure 2 illustrates this scheme. Given a state-space prototype filter as in Fig. 2(a), we carry out the aforementioned replacement and we obtain the transformed state-space filter as in Fig. 2(b). The all-pass filter that is included in this structure consists of *M* lattice sections Ψ1, ··· , Ψ*M*, and each section Ψ*i* is given as in Fig. 2(c). The variable *ξi* for 1 ≤ *i* ≤ *M* denotes the *i*-th lattice coefficient for 1/*F*(*z*), and $\hat{\xi}\_i = \sqrt{1 - \xi\_i^2}$.

Frequency Transformation for Linear State-Space Systems and Its Application to High-Performance Analog/Digital Filters
http://dx.doi.org/10.5772/52197

Finally, we provide the mathematical formulation of the Gramian-preserving frequency transformation based on the normalized lattice structure. The normalized lattice structure of 1/*F*(*z*) can be given by the following state-space representation:

$$\begin{aligned}
\bar{\alpha} &= \begin{pmatrix}
-\xi_1 & -\bar{\xi}_1\xi_2 & -\bar{\xi}_1\bar{\xi}_2\xi_3 & \cdots & -\bar{\xi}_1\bar{\xi}_2\bar{\xi}_3\cdots\bar{\xi}_{M-2}\,\xi_{M-1} & -\bar{\xi}_1\bar{\xi}_2\bar{\xi}_3\cdots\bar{\xi}_{M-1}\,\xi_M \\
\bar{\xi}_1 & -\xi_1\xi_2 & -\xi_1\bar{\xi}_2\xi_3 & \cdots & -\xi_1\bar{\xi}_2\bar{\xi}_3\cdots\bar{\xi}_{M-2}\,\xi_{M-1} & -\xi_1\bar{\xi}_2\bar{\xi}_3\cdots\bar{\xi}_{M-1}\,\xi_M \\
0 & \bar{\xi}_2 & -\xi_2\xi_3 & \cdots & -\xi_2\bar{\xi}_3\bar{\xi}_4\cdots\bar{\xi}_{M-2}\,\xi_{M-1} & -\xi_2\bar{\xi}_3\bar{\xi}_4\cdots\bar{\xi}_{M-1}\,\xi_M \\
\vdots & \vdots & \ddots & & \vdots & \vdots \\
0 & 0 & 0 & \cdots & \bar{\xi}_{M-1} & -\xi_{M-1}\xi_M
\end{pmatrix} \\
\bar{\beta} &= \begin{pmatrix} \bar{\xi}_1\bar{\xi}_2\bar{\xi}_3\cdots\bar{\xi}_M & \xi_1\bar{\xi}_2\bar{\xi}_3\cdots\bar{\xi}_M & \xi_2\bar{\xi}_3\cdots\bar{\xi}_M & \cdots & \xi_{M-2}\bar{\xi}_{M-1}\bar{\xi}_M & \xi_{M-1}\bar{\xi}_M \end{pmatrix}^T \\
\bar{\gamma} &= \begin{pmatrix} 0 & 0 & 0 & \cdots & 0 & \pm\bar{\xi}_M \end{pmatrix} \\
\bar{\delta} &= \pm\xi_M.
\end{aligned} \tag{29}$$

Therefore, substitution of (29) into (26) carries out the Gramian-preserving frequency transformation. Note that the state-space representation (*Ā*, *B̄*, *C̄*, *D̄*) given in this way becomes sparse due to the many zero entries in *ᾱ* and *γ̄*. To be precise, the set (*Ā*, *B̄*, *C̄*, *D̄*) has in total (*M* − 1)*N*(*MN* − *M*/2) zero entries; hence this state-space filter is well suited to implementation.

<sup>4</sup> Note that the mathematical formulation of the Gramian-preserving frequency transformation (26) is derived after the elimination of delay-free loops. Therefore, (26) does not suffer from the problem of delay-free loops. See [30] for the details.

**Figure 2.** Gramian-preserving frequency transformation: (a) prototype state-space filter, (b) transformed state-space filter, and (c) a normalized lattice section Ψ*i*.
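The parameterization (29) can be checked numerically. The sketch below (helper name is ours; the ± signs are taken as +) builds (ᾱ, β̄, γ̄, δ̄) from given lattice coefficients and verifies that the realization matrix is orthogonal, that both Gramians equal the identity (i.e., the realization is balanced), and that the resulting filter is all-pass on the unit circle:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def normalized_lattice_allpass(xi):
    """Build (alpha_bar, beta_bar, gamma_bar, delta_bar) of eq. (29), '+' signs."""
    xi = np.asarray(xi, dtype=float)
    M = len(xi)
    xb = np.sqrt(1.0 - xi**2)                    # xi_bar_k = sqrt(1 - xi_k^2)
    x0 = np.concatenate([[1.0], xi])             # x0[i-1] = xi_{i-1}, with xi_0 := 1
    A = np.zeros((M, M))
    for i in range(1, M + 1):
        if i >= 2:
            A[i-1, i-2] = xb[i-2]                # subdiagonal entry xi_bar_{i-1}
        for j in range(i, M + 1):
            A[i-1, j-1] = -x0[i-1] * np.prod(xb[i-1:j-1]) * xi[j-1]
    b = np.array([[x0[i-1] * np.prod(xb[i-1:])] for i in range(1, M + 1)])
    c = np.zeros((1, M)); c[0, -1] = xb[-1]
    return A, b, c, xi[-1]

A, b, c, d = normalized_lattice_allpass([0.3, -0.4, 0.2])
M = A.shape[0]

# The (M+1)x(M+1) realization matrix is orthogonal ...
R = np.block([[A, b], [c, np.array([[d]])]])
assert np.allclose(R @ R.T, np.eye(M + 1))

# ... hence both Gramians are the identity: the realization is balanced.
assert np.allclose(solve_discrete_lyapunov(A, b @ b.T), np.eye(M))
assert np.allclose(solve_discrete_lyapunov(A.T, c.T @ c), np.eye(M))

# And the transfer function has unit magnitude on the unit circle (all-pass).
for w in np.linspace(0.1, 3.0, 5):
    z = np.exp(1j * w)
    H = (c @ np.linalg.inv(z * np.eye(M) - A) @ b).item() + d
    assert np.isclose(abs(H), 1.0)
```

Orthogonality of the realization matrix is exactly what makes the normalized lattice balanced: the discrete Lyapunov equations for the controllability and observability Gramians are then solved by the identity matrix.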

#### **4.3. Results for analog filters**


In the case of analog filters, little had been reported on the state-space analysis of frequency transformation. Our work [32–34], on the other hand, has derived many results that are similar to those for the discrete-time case. Here we introduce these results.

We first present a state-space formulation of frequency transformation for analog filters. One thing to be noted here is that, as stated in Section 3.2, the frequency transformation functions (i.e. Foster reactance functions) are classified into strictly proper functions and improper functions. In this chapter we focus on the case of strictly proper reactance functions, which include the LP-LP and the LP-BP transformations.

Now consider a state-space representation (*A*, *b*, *c*, *d*) of a given prototype low-pass filter with the transfer function *H*(*s*). Also, let (A, B, C, D) be a state-space representation of *H*(*F*(*s*)), where 1/*F*(*s*) denotes a strictly proper Foster reactance function. Then, (A, B, C, D) can be given in terms of (*A*, *b*, *c*, *d*) as follows:

$$\begin{aligned} \mathcal{A} &= I_N \otimes \alpha + A \otimes (\beta\gamma) \\ \mathcal{B} &= b \otimes \beta \\ \mathcal{C} &= c \otimes \gamma \\ \mathcal{D} &= d \end{aligned} \tag{30}$$


where the set (*α*, *β*, *γ*) shown here is an arbitrary state-space representation of 1/*F*(*s*), i.e.

$$1/F(s) = \gamma (sI_M - \alpha)^{-1} \beta \tag{31}$$

and *M* is the order of 1/*F*(*s*), i.e. *M* = deg*p*(*s*) in (19). Note that the *d*-term in a state-space representation of 1/*F*(*s*) becomes zero because the reactance function is strictly proper. Therefore, the state-space-based frequency transformation given here is simpler than the discrete-time case (22).

Next we discuss the second-order modes of analog filters under frequency transformation. Let (*K*,*W*) and (K,W) be the controllability/observability Gramians of (*A*, *b*, *c*, *d*) and (A, B, C, D), respectively. Using (30), we can prove the following property:

$$\begin{aligned} \mathcal{K} &= K \otimes P^{-1} \\ \mathcal{W} &= W \otimes P \end{aligned} \tag{32}$$

where *P* is the positive definite matrix that satisfies the following relationship called the lossless positive-real lemma:

$$\begin{aligned} \alpha^T P + P\alpha &= \mathbf{0}_{M \times M} \\ P\beta &= \gamma^T. \end{aligned} \tag{33}$$

From (32) we easily see

$$\mathcal{K}\mathcal{W} = (KW) \otimes I_M \tag{34}$$

which proves that the second-order modes of analog filters are invariant under frequency transformation.
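These relationships are easy to verify numerically. The sketch below uses arbitrary illustrative values: a stable prototype (A, b, c, d) and a one-section LP-BP reactance 1/F(s) = G₁s/(s² + ω²ₚ₁) with a P solving the lossless positive-real lemma (33); it then checks (30), (32), and (34) with SciPy:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative stable prototype (A, b, c, d); values are arbitrary.
A = np.array([[-1.0, 1.0], [-1.0, -1.0]])
b = np.array([[1.0], [0.0]])
c = np.array([[0.0, 1.0]])
d = 0.1
N = A.shape[0]

# One-section LP-BP reactance 1/F(s) = G1*s/(s^2 + wp1^2) and a P solving (33).
G1, wp1 = 1.0, 2.0
alpha = np.array([[0.0, G1], [-wp1**2 / G1, 0.0]])
beta = np.array([[G1], [0.0]])
gamma = np.array([[1.0, 0.0]])
P = np.diag([1.0 / G1, G1 / wp1**2])
M = alpha.shape[0]
assert np.allclose(alpha.T @ P + P @ alpha, 0) and np.allclose(P @ beta, gamma.T)

# Transformed filter per (30).
calA = np.kron(np.eye(N), alpha) + np.kron(A, beta @ gamma)
calB = np.kron(b, beta)
calC = np.kron(c, gamma)

# The transformed filter realizes H(F(s)) at an arbitrary test point.
s0 = 1.5 + 0.7j
F = lambda s: (s**2 + wp1**2) / (G1 * s)
H = lambda s: (c @ np.linalg.inv(s * np.eye(N) - A) @ b).item() + d
Ht = (calC @ np.linalg.inv(s0 * np.eye(M * N) - calA) @ calB).item() + d
assert np.isclose(Ht, H(F(s0)))

# Gramians: solve_continuous_lyapunov(a, q) solves a X + X a^H = q.
K = solve_continuous_lyapunov(A, -b @ b.T)
W = solve_continuous_lyapunov(A.T, -c.T @ c)
calK = solve_continuous_lyapunov(calA, -calB @ calB.T)
calW = solve_continuous_lyapunov(calA.T, -calC.T @ calC)

assert np.allclose(calK, np.kron(K, np.linalg.inv(P)))        # eq. (32)
assert np.allclose(calW, np.kron(W, P))                       # eq. (32)
assert np.allclose(calK @ calW, np.kron(K @ W, np.eye(M)))    # eq. (34)
```

The last assertion makes the invariance concrete: the eigenvalues of 𝒦𝒲 are exactly those of *KW*, each repeated *M* times, so the second-order modes are unchanged by the transformation.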

We now present the Gramian-preserving frequency transformation for analog filters. Let (*Ā*, *B̄*, *C̄*, *D̄*) be the state-space filter that is given by this transformation. Then, (*Ā*, *B̄*, *C̄*, *D̄*) is formulated as


$$\begin{aligned} \bar{\mathcal{A}} &= \bar{\alpha} \otimes I_N + (\bar{\beta}\bar{\gamma}) \otimes A \\ \bar{\mathcal{B}} &= \bar{\beta} \otimes b \\ \bar{\mathcal{C}} &= \bar{\gamma} \otimes c \\ \bar{\mathcal{D}} &= d \end{aligned} \tag{35}$$

where (*ᾱ*, *β̄*, *γ̄*) is a state-space representation of 1/*F*(*s*) that satisfies *P* = *I<sub>M</sub>* in (33), i.e.

$$\begin{aligned} \bar{\alpha}^T + \bar{\alpha} &= \mathbf{0}_{M \times M} \\ \bar{\beta} &= \bar{\gamma}^T. \end{aligned} \tag{36}$$

For (*Ā*, *B̄*, *C̄*, *D̄*) described as above, the controllability/observability Gramians (*K̄*, *W̄*) are found to be

$$\begin{aligned} \bar{\mathcal{K}} &= I_M \otimes K \\ \bar{\mathcal{W}} &= I_M \otimes W. \end{aligned} \tag{37}$$

Needless to say, this relationship is the same as in the discrete-time case (28). Hence the Gramians of a prototype state-space filter are preserved under this transformation.

As in the discrete-time case, the formulation of (*ᾱ*, *β̄*, *γ̄*) is known to be non-unique. In [34], we presented a closed-form representation of (*ᾱ*, *β̄*, *γ̄*) that is well suited to circuit implementation. In order to derive this representation, we first rewrite the Foster reactance function (19) as the following partial-fraction expansion

$$\frac{1}{F(s)} = \sum_{i=1}^{L} \frac{G_i s}{s^2 + \omega_{pi}^2} + \frac{G_0}{s} \tag{38}$$

where *G*<sub>1</sub>, ··· , *G<sub>L</sub>* and *G*<sub>0</sub> are all real and nonnegative, and *L* = ⌊*M*/2⌋, i.e. *L* is the largest integer less than or equal to *M*/2. Note that *G*<sub>0</sub> = 0 holds if *M* is even. Also, note that the first term on the right-hand side of (38) vanishes if *M* = 1. Now we can formulate the desired state-space representation of 1/*F*(*s*) by using the parameters of (38). The formulation depends on the value of *M*, i.e. the order of 1/*F*(*s*). For even *M*, we give the desired state-space representation, which is denoted by (*ᾱ*<sub>even</sub>, *β̄*<sub>even</sub>, *γ̄*<sub>even</sub>), as follows:

$$\begin{aligned} \bar{\alpha}_{\mathrm{even}} &= \operatorname{block\ diag}\left(\mathbf{\Omega}_{p1}, \mathbf{\Omega}_{p2}, \ldots, \mathbf{\Omega}_{pL}\right) \\ \bar{\beta}_{\mathrm{even}} &= \begin{pmatrix} \bar{\psi}_1^T & \bar{\psi}_2^T & \cdots & \bar{\psi}_L^T \end{pmatrix}^T \\ \bar{\gamma}_{\mathrm{even}} &= \bar{\beta}_{\mathrm{even}}^T \end{aligned} \tag{39}$$

where **Ω**<sub>p*i*</sub> ∈ ℜ<sup>2×2</sup> and *ψ̄<sub>i</sub>* ∈ ℜ<sup>2×1</sup> for *i* = 1, 2, ··· , *L* are respectively given by

$$\begin{aligned} \mathbf{\Omega}_{pi} &= \begin{pmatrix} 0 & \omega_{pi} \\ -\omega_{pi} & 0 \end{pmatrix} \\ \bar{\psi}_i &= \begin{pmatrix} \sqrt{G_i} \\ 0 \end{pmatrix}. \end{aligned} \tag{40}$$


If *M* is odd, we give the desired state-space representation (*ᾱ*<sub>odd</sub>, *β̄*<sub>odd</sub>, *γ̄*<sub>odd</sub>) as

$$\begin{aligned} \bar{\alpha}_{\mathrm{odd}} &= \begin{pmatrix} \bar{\alpha}_{\mathrm{even}} & \mathbf{0}_{2L \times 1} \\ \mathbf{0}_{1 \times 2L} & \mathbf{0}_{1 \times 1} \end{pmatrix} \\ \bar{\beta}_{\mathrm{odd}} &= \begin{pmatrix} \bar{\beta}_{\mathrm{even}}^T & \sqrt{G_0} \end{pmatrix}^T \\ \bar{\gamma}_{\mathrm{odd}} &= \bar{\beta}_{\mathrm{odd}}^T. \end{aligned} \tag{41}$$

Note that the above expression reduces to (*ᾱ*<sub>odd</sub>, *β̄*<sub>odd</sub>, *γ̄*<sub>odd</sub>) = (0, √*G*<sub>0</sub>, √*G*<sub>0</sub>) if *M* = 1. By direct calculation it is easy to prove that the state-space representations (39) and (41) satisfy the transfer function 1/*F*(*s*) given by (38) for even *M* and odd *M*, respectively, and that they also satisfy *P* = *I<sub>M</sub>* in the lossless positive-real lemma, i.e.

$$\begin{aligned} \bar{\alpha}_{\mathrm{even}}^T + \bar{\alpha}_{\mathrm{even}} &= \mathbf{0}_{M \times M} \\ \bar{\beta}_{\mathrm{even}} &= \bar{\gamma}_{\mathrm{even}}^T \\ \bar{\alpha}_{\mathrm{odd}}^T + \bar{\alpha}_{\mathrm{odd}} &= \mathbf{0}_{M \times M} \\ \bar{\beta}_{\mathrm{odd}} &= \bar{\gamma}_{\mathrm{odd}}^T. \end{aligned} \tag{42}$$

This result shows that (39) and (41) offer the closed-form expression for the Gramian-preserving frequency transformation.
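The closed form can be checked numerically for one resonant section (L = 1, M = 2; parameter values below are illustrative): the sketch verifies the lossless positive-real conditions (42), that (39)–(40) reproduce the corresponding term of (38), and that the transformation (35) leaves the Gramians as stated in (37):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# One resonant section (L = 1, M = 2): 1/F(s) = G1*s/(s^2 + wp1^2) as in (38).
G1, wp1 = 0.8, 3.0
alpha_e = np.array([[0.0, wp1], [-wp1, 0.0]])        # Omega_p1, eq. (40)
beta_e = np.array([[np.sqrt(G1)], [0.0]])            # psi_bar_1, eq. (40)
gamma_e = beta_e.T                                   # eq. (39)

# Lossless positive-real conditions with P = I_M, eq. (42).
assert np.allclose(alpha_e.T + alpha_e, 0)
assert np.allclose(beta_e, gamma_e.T)

# The realization reproduces the corresponding term of (38).
s0 = 0.4 + 1.1j
val = (gamma_e @ np.linalg.inv(s0 * np.eye(2) - alpha_e) @ beta_e).item()
assert np.isclose(val, G1 * s0 / (s0**2 + wp1**2))

# Gramian preservation under the transformation (35), checked per (37).
A = np.array([[-1.0, 1.0], [-1.0, -1.0]])            # illustrative stable prototype
b = np.array([[1.0], [0.0]])
c = np.array([[0.0, 1.0]])
N, M = A.shape[0], alpha_e.shape[0]
Ab = np.kron(alpha_e, np.eye(N)) + np.kron(beta_e @ gamma_e, A)
Bb = np.kron(beta_e, b)
Cb = np.kron(gamma_e, c)

K = solve_continuous_lyapunov(A, -b @ b.T)
W = solve_continuous_lyapunov(A.T, -c.T @ c)
assert np.allclose(solve_continuous_lyapunov(Ab, -Bb @ Bb.T), np.kron(np.eye(M), K))
assert np.allclose(solve_continuous_lyapunov(Ab.T, -Cb.T @ Cb), np.kron(np.eye(M), W))
```

Because ᾱ is skew-symmetric and β̄ = γ̄ᵀ, the cross terms in the transformed Lyapunov equations cancel, which is exactly why the prototype Gramians reappear block-wise.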

Finally, we discuss the physical interpretation of the Gramian-preserving frequency transformation, which brings further insight from circuit theory. As in the discrete-time case, we first discuss the Gramian-preserving frequency transformation in terms of the block diagram. As illustrated in Fig. 3, the Gramian-preserving frequency transformation for analog filters is derived from the model of Fig. 3(b), which is given by replacing the integrators in the prototype filter of Fig. 3(a) with an appropriate state-space representation (*ᾱ*, *β̄*, *γ̄*) of the Foster reactance function 1/*F*(*s*). Here, we have to consider how the circuit topology of the set (*ᾱ*, *β̄*, *γ̄*) is constructed. In order to answer this, consider again the partial-fraction expansion of a strictly proper Foster reactance function 1/*F*(*s*) as in (38). This expression is well known as the *LC* driving-point impedance function corresponding to the first Foster canonical form [1], which is realized by the series connection of a capacitor of capacitance 1/*G*<sub>0</sub> and *L* parallel combinations of an inductor of inductance *G<sub>i</sub>*/*ω*<sub>p*i*</sub><sup>2</sup> and a capacitor of capacitance 1/*G<sub>i</sub>*.

Figure 4(a) shows the circuit representation of 1/*F*(*s*), where 1/*F*(*s*) is related to *V* and *I* as 1/*F*(*s*) = *V*(*s*)/*I*(*s*). This circuit is easily expressed in state-space form as



$$\frac{1}{F(s)} = \sum_{i=1}^{L} \gamma_i (sI_2 - \alpha_i)^{-1} \beta_i + \gamma_0 (sI_1 - \alpha_0)^{-1} \beta_0 \tag{43}$$

**Figure 3.** Gramian-preserving frequency transformation for analog filters: (a) prototype state-space filter, and (b) transformed state-space filter.

where the subsystems (*αi*, *βi* , *<sup>γ</sup>i*) for 1 <sup>≤</sup> *<sup>i</sup>* <sup>≤</sup> *<sup>L</sup>* and (*α*0, *<sup>β</sup>*0, *<sup>γ</sup>*0) are found to be

$$\begin{aligned} \alpha_i &= \begin{pmatrix} 0 & G_i \\ -\omega_{pi}^2/G_i & 0 \end{pmatrix} \\ \beta_i &= \begin{pmatrix} G_i \\ 0 \end{pmatrix} \\ \gamma_i &= \begin{pmatrix} 1 & 0 \end{pmatrix} \\ \alpha_0 &= 0 \\ \beta_0 &= G_0 \\ \gamma_0 &= 1 \end{aligned} \tag{44}$$

**Figure 4.** Construction of desired state-space model of 1/*F*(*s*) for Gramian-preserving frequency transformation: (a) *LC* circuit representation of 1/*F*(*s*), (b) state-space model of the *LC* circuit, and (c) desired state-space model.

with their state vectors *Xi*(*s*) and *X*0(*s*) defined as

$$\begin{aligned} X_i(s) &= \begin{pmatrix} V_i(s) & -I_i(s) \end{pmatrix}^T \\ X_0(s) &= V_0. \end{aligned} \tag{45}$$
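As a quick numerical check, the subsystems of (44) reproduce the corresponding partial-fraction terms of (38) (parameter values below are illustrative):

```python
import numpy as np

G1, G0, wp1 = 0.8, 0.5, 3.0
s0 = 0.7 + 0.9j                                       # arbitrary test point

# i-th LC section of (44):
alpha_i = np.array([[0.0, G1], [-wp1**2 / G1, 0.0]])
beta_i = np.array([[G1], [0.0]])
gamma_i = np.array([[1.0, 0.0]])
sect = (gamma_i @ np.linalg.inv(s0 * np.eye(2) - alpha_i) @ beta_i).item()
assert np.isclose(sect, G1 * s0 / (s0**2 + wp1**2))   # i-th term of (38)

# Series-capacitor subsystem (alpha_0, beta_0, gamma_0) = (0, G_0, 1):
assert np.isclose(1.0 * (s0 - 0.0)**-1 * G0, G0 / s0) # G_0/s term of (38)
```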

Figure 4(b) shows the state-space model of 1/*F*(*s*) described as above. Substituting (44) into (33), we obtain the solutions *P<sub>i</sub>* and *P*<sub>0</sub> to the lossless positive-real lemma for (*α<sub>i</sub>*, *β<sub>i</sub>*, *γ<sub>i</sub>*) and (*α*<sub>0</sub>, *β*<sub>0</sub>, *γ*<sub>0</sub>) as follows:

$$\begin{aligned} P_i &= \operatorname{diag}(1/G_i,\; G_i/\omega_{pi}^2) \\ P_0 &= 1/G_0. \end{aligned} \tag{46}$$

From (46) we see that the state-space model of Fig. 4(b) does not satisfy *P* = *I<sub>M</sub>* in (33). Hence it is necessary to modify the structure of this model such that *P<sub>i</sub>* = *I*<sub>2</sub> and *P*<sub>0</sub> = *I*<sub>1</sub> hold. To this end, we consider the following nonsingular matrices

$$\begin{aligned} T_i &= \operatorname{diag}(\sqrt{G_i},\; \omega_{pi}/\sqrt{G_i}), \quad 1 \le i \le L \\ T_0 &= \sqrt{G_0}. \end{aligned} \tag{47}$$

Note that these matrices satisfy *T<sub>i</sub>T<sub>i</sub><sup>T</sup>* = *P<sub>i</sub>*<sup>−1</sup> and *T*<sub>0</sub>*T*<sub>0</sub><sup>T</sup> = *P*<sub>0</sub><sup>−1</sup>. Using these matrices, we apply the similarity transformation to (44), which results in the new structure (*α*′<sub>*i*</sub>, *β*′<sub>*i*</sub>, *γ*′<sub>*i*</sub>) and (*α*′<sub>0</sub>, *β*′<sub>0</sub>, *γ*′<sub>0</sub>) as

$$\begin{aligned} \alpha_i' &= T_i^{-1}\alpha_i T_i = \begin{pmatrix} 0 & \omega_{pi} \\ -\omega_{pi} & 0 \end{pmatrix} \\ \beta_i' &= T_i^{-1}\beta_i = \begin{pmatrix} \sqrt{G_i} \\ 0 \end{pmatrix} \\ \gamma_i' &= \gamma_i T_i = \begin{pmatrix} \sqrt{G_i} & 0 \end{pmatrix} \\ \alpha_0' &= T_0^{-1}\alpha_0 T_0 = 0 \\ \beta_0' &= T_0^{-1}\beta_0 = \sqrt{G_0} \\ \gamma_0' &= \gamma_0 T_0 = \sqrt{G_0} \end{aligned} \tag{48}$$

and its corresponding model is given by Fig. 4(c). Then, it immediately follows that this modified structure satisfies *P* = *I<sub>M</sub>* in (33) and coincides with the desired state-space representations (39) and (41).

The above discussion shows that the desired structure of Fig. 4(c) is obtained by applying the similarity transformation based on (47) to the first Foster canonical form for *LC* impedance networks. Here, it turns out that the nonsingular matrices *T<sub>i</sub>*'s and *T*<sub>0</sub> serve as the scaling matrices that convert the matrices *P<sub>i</sub>*'s and *P*<sub>0</sub> into the identity matrices. Therefore, we conclude that our proposed Gramian-preserving frequency transformation is derived from a state-space system of which the integrators are replaced with 1/*F*(*s*), where the structure of 1/*F*(*s*) is constructed as the scaled version of the first Foster canonical form for *LC* impedance networks. It is interesting to note that this construction of Fig. 4(c) is similar to the realization of orthonormal ladder filters [7]: the orthonormal ladder filters are obtained by applying the

Figure 4(b) shows the state-space model of 1/*F*(*s*) described as above. Substituting (44) into (33), we obtain the solutions *P<sup>i</sup>* and *P*<sup>0</sup> to the lossless positive-lemma for (*αi*, *β<sup>i</sup>* , *γi*) and <sup>126</sup> Digital Filters and Signal Processing Frequency Transformation for Linear State-Space Systems and Its Application to High-Performance Analog/Digital Filters 19 Frequency Transformation for Linear State-Space Systems and Its Application to High-Performance Analog/Digital Filters http://dx.doi.org/10.5772/52197 127

(*α*0, *β*0, *γ*0) as follows:

18 Digital Filters and Signal Processing

G<sup>1</sup>

R1(s)

G<sup>2</sup>

R2(s)

G<sup>L</sup>

RL(s)

R0(s)

1/G<sup>1</sup> G1/ω<sup>2</sup> p1

V<sup>1</sup> V<sup>2</sup>

V

G<sup>2</sup> <sup>−</sup>ω<sup>2</sup>

G<sup>L</sup> −ω<sup>2</sup>

G<sup>1</sup> −ω<sup>2</sup>

p1/G<sup>1</sup>

p2/G<sup>2</sup>

<sup>p</sup><sup>L</sup>/G<sup>L</sup>

(b)

with their state vectors *Xi*(*s*) and *X*0(*s*) defined as

G<sup>0</sup>

I

1/G<sup>2</sup> G2/ω<sup>2</sup> p2

I<sup>1</sup> I<sup>2</sup> I<sup>L</sup>

(a)

1/G<sup>L</sup>

V<sup>L</sup>

GL/ω<sup>2</sup> pL

R1(s)

 G<sup>1</sup>

R2(s)

 G<sup>2</sup>

RL(s)

 G<sup>L</sup>

**Figure 4.** Construction of desired state-space model of 1/*F*(*s*) for Gramian-preserving frequency transformation: (a) *LC* circuit

Figure 4(b) shows the state-space model of 1/*F*(*s*) described as above. Substituting (44) into

*Vi*(*s*) −*Ii*(*s*)

*T*

*X*0(*s*) = *V*0. (45)

representation of 1/*F*(*s*), (b) state-space model of the *LC* circuit, and (c) desired state-space model.

*Xi*(*s*) =

(33), we obtain the solutions *P<sup>i</sup>* and *P*<sup>0</sup> to the lossless positive-lemma for (*αi*, *β<sup>i</sup>*

1/G<sup>0</sup>

ωp1 −ωp1

 G<sup>1</sup>

 G<sup>2</sup>

 G<sup>L</sup>

, *γi*) and

ωp2 −ωp2

ωp<sup>L</sup> −ωp<sup>L</sup>

(c)

 G<sup>0</sup>

R0(s)

 G<sup>0</sup> V0

$$P_i = \mathrm{diag}(1/G_i,\ G_i/\omega_{\mathrm{p}i}^2)$$

$$P_0 = 1/G_0. \tag{46}$$

From (46) we see that the state-space model of Fig. 4(b) does not satisfy $P = I_M$ in (33). Hence it is necessary to modify the structure of this model such that $P_i = I_2$ and $P_0 = I_1$ hold. To this end, we consider the following nonsingular matrices

$$\begin{aligned} T_i &= \mathrm{diag}(\sqrt{G_i},\ \omega_{\mathrm{p}i}/\sqrt{G_i}), \quad 1 \le i \le L \\ T_0 &= \sqrt{G_0}. \end{aligned} \tag{47}$$

Note that these matrices satisfy $T_i T_i^T = P_i^{-1}$ and $T_0 T_0^T = P_0^{-1}$. Using these matrices, we apply the similarity transformation to (44), which results in the new structure $(\alpha_i', \beta_i', \gamma_i')$ and $(\alpha_0', \beta_0', \gamma_0')$ as

$$\begin{aligned} \alpha_i' &= T_i^{-1} \alpha_i T_i = \begin{pmatrix} 0 & \omega_{\mathrm{p}i} \\ -\omega_{\mathrm{p}i} & 0 \end{pmatrix} \\ \beta_i' &= T_i^{-1} \beta_i = \begin{pmatrix} \sqrt{G_i} \\ 0 \end{pmatrix} \\ \gamma_i' &= \gamma_i T_i = \begin{pmatrix} \sqrt{G_i} & 0 \end{pmatrix} \\ \alpha_0' &= T_0^{-1} \alpha_0 T_0 = 0 \\ \beta_0' &= T_0^{-1} \beta_0 = \sqrt{G_0} \\ \gamma_0' &= \gamma_0 T_0 = \sqrt{G_0} \end{aligned} \tag{48}$$

and its corresponding model is given by Fig. 4(c). Then, it immediately follows that this modified structure satisfies $P = I_M$ in (33) and coincides with the desired state-space representations (39) and (41).
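To make the scaling step concrete, a small numerical sketch follows. The section values $G_i$ and $\omega_{\mathrm{p}i}$ are chosen arbitrarily for illustration, and the displayed form of the lossless positive-real lemma conditions is our reading of (33), which is stated earlier in the chapter; the sketch verifies that $T_i$ from (47) indeed maps the solution $P_i$ of (46) to the identity matrix:

```python
import numpy as np

# Arbitrary sample values for one second-order Foster section (assumed,
# for illustration only).
G, w = 0.8, 3.0

# Scaled (primed) section from (48) and the scaling matrix from (47).
T = np.diag([np.sqrt(G), w / np.sqrt(G)])
alpha_p = np.array([[0.0, w], [-w, 0.0]])
beta_p = np.array([[np.sqrt(G)], [0.0]])
gamma_p = np.array([[np.sqrt(G), 0.0]])

# Undo the scaling to recover the unscaled Foster section of (44).
Tinv = np.linalg.inv(T)
alpha = T @ alpha_p @ Tinv
beta = T @ beta_p
gamma = gamma_p @ Tinv

# P_i from (46) solves the lossless positive-real lemma for this section
# (here assumed in the form alpha^T P + P alpha = 0, gamma^T = P beta) ...
P = np.diag([1.0 / G, G / w**2])
assert np.allclose(alpha.T @ P + P @ alpha, 0.0)
assert np.allclose(gamma.T, P @ beta)

# ... and the scaling matrix satisfies T T^T = P^{-1}, so the scaled
# section carries the identity solution, as required by P = I_M in (33).
assert np.allclose(T @ T.T, np.linalg.inv(P))
assert np.allclose(T.T @ P @ T, np.eye(2))
```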

The above discussion shows that the desired structure of Fig. 4(c) is obtained by applying the similarity transformation based on (47) to the first Foster canonical form for *LC* impedance networks. Here, it turns out that the nonsingular matrices $T_i$ and $T_0$ serve as scaling matrices that convert the matrices $P_i$ and $P_0$ into identity matrices. Therefore, we conclude that our proposed Gramian-preserving frequency transformation is derived from a state-space system whose integrators are replaced with 1/*F*(*s*), where 1/*F*(*s*) is constructed as a scaled version of the first Foster canonical form for *LC* impedance networks. It is interesting to note that this construction of Fig. 4(c) is similar to the realization of orthonormal ladder filters [7]: the orthonormal ladder filters are obtained by applying $L_2$ scaling to the structure of singly-terminated *LC* ladder networks, whereas the structure of Fig. 4(c) is obtained by applying another type of scaling, which makes use of the solutions to the lossless positive-real lemma, to the Foster canonical form for *LC* networks.

Before concluding this section, it should be noted again that the above results apply to the case of strictly proper reactance functions that include the LP-LP and the LP-BP transformations. For details of the improper reactance functions such as the LP-HP and the LP-BS transformations, see [32–34].

## **5. Application to design and synthesis of high-performance filters**

This section applies the results of the previous section to the design and synthesis of high-performance analog and digital filters. Emphasis is on tunable filters, and we present a simple method for obtaining state-space-based tunable filters with high-performance structures.

## **5.1. High-performance digital filters**

Here we apply the Gramian-preserving frequency transformation to the design and synthesis of a variable band-pass filter with a high-performance structure [35]. The variable band-pass filter presented here is assumed to have a fixed bandwidth and a tunable center-frequency. Such a band-pass filter requires the simplified LP-BP transformation with the following all-pass function:

$$\frac{1}{F_{\mathrm{BP}}(z)} = -z^{-1}\,\frac{z^{-1} - \xi_{\mathrm{BP}}}{1 - \xi_{\mathrm{BP}} z^{-1}} \tag{49}$$


where *ξ*BP = cos *ω*BP and *ω*BP is the desired center-frequency of the passband in the variable band-pass filter. The desired state-space representation of (49) in order to carry out the Gramian-preserving frequency transformation (i.e. the state-space representation of (49) with the normalized lattice structure) is found to be

$$\begin{aligned} \alpha &= \begin{pmatrix} \xi_{\mathrm{BP}} & 0 \\ \sqrt{1-\xi_{\mathrm{BP}}^2} & 0 \end{pmatrix} \\ \beta &= \begin{pmatrix} \sqrt{1-\xi_{\mathrm{BP}}^2} \\ -\xi_{\mathrm{BP}} \end{pmatrix} \\ \gamma &= \begin{pmatrix} 0 & -1 \end{pmatrix} \\ \delta &= 0. \end{aligned} \tag{50}$$

Substituting (50) into (26), we obtain the state-space representation of the variable band-pass filter as


$$\begin{aligned} \widetilde{A} &= \begin{pmatrix} \xi_{\mathrm{BP}} I_N & -\sqrt{1-\xi_{\mathrm{BP}}^2}\,A \\ \sqrt{1-\xi_{\mathrm{BP}}^2}\,I_N & \xi_{\mathrm{BP}} A \end{pmatrix} \\ \widetilde{B} &= \begin{pmatrix} \sqrt{1-\xi_{\mathrm{BP}}^2}\,b \\ -\xi_{\mathrm{BP}} b \end{pmatrix} \\ \widetilde{C} &= \begin{pmatrix} \mathbf{0}_{1\times N} & -c \end{pmatrix} \\ \widetilde{D} &= d \end{aligned} \tag{51}$$

and we can easily control the center-frequency of this filter by changing the value of *ξ*BP in (51).
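As a concrete sketch of this substitution (with a small, arbitrary stable prototype and an arbitrary $\xi_{\mathrm{BP}}$, both chosen only for illustration), the realization (51) has exactly the transfer function obtained from *H*(*z*) by replacing every delay $z^{-1}$ with the all-pass $1/F_{\mathrm{BP}}(z)$ of (49):

```python
import numpy as np

# Small arbitrary stable prototype (A, b, c, d) -- illustration only.
A = np.array([[0.5, 0.1, 0.0],
              [-0.2, 0.4, 0.1],
              [0.0, -0.1, 0.3]])
b = np.array([[1.0], [0.5], [-0.3]])
c = np.array([[0.2, -0.4, 0.1]])
d = 0.2
N = 3

xi = 0.6                          # arbitrary center-frequency parameter
eta = np.sqrt(1.0 - xi**2)
I = np.eye(N)

# Variable band-pass filter (51).
At = np.block([[xi * I, -eta * A], [eta * I, xi * A]])
Bt = np.vstack([eta * b, -xi * b])
Ct = np.hstack([np.zeros((1, N)), -c])

for w in (0.3, 1.1, 2.5):
    z = np.exp(1j * w)
    g = -(1 / z) * (1 / z - xi) / (1 - xi / z)   # 1/F_BP(z) from (49)
    assert np.isclose(abs(g), 1.0)               # (49) is indeed all-pass
    # H(z) with z^{-1} replaced by g, versus the realization (51).
    H_sub = d + g * (c @ np.linalg.solve(I - g * A, b)).item()
    H_bp = d + (Ct @ np.linalg.solve(z * np.eye(2 * N) - At, Bt)).item()
    assert np.isclose(H_bp, H_sub)
```

The agreement holds at every frequency, which is exactly what makes (51) a frequency transformation rather than a redesign.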

Now we present a design/synthesis example. The prototype filter used here is the fourth-order elliptic low-pass filter with the following transfer function:

$$H(z) = \frac{0.0101 - 0.0362z^{-1} + 0.0524z^{-2} - 0.0362z^{-3} + 0.0101z^{-4}}{1 - 3.7895z^{-1} + 5.4142z^{-2} - 3.4553z^{-3} + 0.8310z^{-4}}.\tag{52}$$

The peak-to-peak ripple, the minimum stopband attenuation and the passband-edge frequency of this filter are 0.5 dB, 40 dB and 0.05*π* rad, respectively. We choose the state-space representation (*A*, *b*, *c*, *d*) of this prototype filter as follows:

$$A = \begin{pmatrix} 0.9838 & -0.1007 & -0.0165 & -0.0171 \\ 0.1007 & 0.9582 & -0.1029 & -0.0273 \\ -0.0165 & 0.1029 & 0.9336 & -0.1015 \\ 0.0171 & -0.0273 & 0.1015 & 0.9139 \end{pmatrix}$$

$$\begin{aligned} b &= \begin{pmatrix} 0.1490 & -0.1953 & 0.1669 & -0.0995 \end{pmatrix}^T \\ c &= \begin{pmatrix} 0.1490 & 0.1953 & 0.1669 & 0.0995 \end{pmatrix} \\ d &= 0.0101. \end{aligned} \tag{53}$$

The controllability/observability Gramians of this realization are calculated as

$$\mathbf{K} = \mathbf{W} = \text{diag}(0.8850, 0.6124, 0.2761, 0.0817), \tag{54}$$

which shows that this realization is a balanced realization.

Applying (51) to (53) yields the eighth-order variable band-pass filter. It can be easily checked that, for any *ξ*BP, the Gramians of this band-pass filter become the same as (54) with multiplicity 2, i.e.

$$\mathbf{K} = \mathbf{W} = \text{diag}(0.8850, 0.6124, 0.2761, 0.0817, 0.8850, 0.6124, 0.2761, 0.0817). \tag{55}$$

Therefore, the variable band-pass filter keeps the balanced form regardless of the location of the center-frequency.
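This Gramian-preserving behavior is easy to reproduce numerically. The following sketch (tolerances are loose where the four-digit rounding of (53) matters) solves the discrete Lyapunov equations for the prototype and for the transformed filter with an arbitrary $\xi_{\mathrm{BP}}$:

```python
import numpy as np
from scipy.linalg import block_diag, solve_discrete_lyapunov

# Prototype realization (53) (coefficients rounded to four digits).
A = np.array([[0.9838, -0.1007, -0.0165, -0.0171],
              [0.1007,  0.9582, -0.1029, -0.0273],
              [-0.0165, 0.1029,  0.9336, -0.1015],
              [0.0171, -0.0273,  0.1015,  0.9139]])
b = np.array([[0.1490], [-0.1953], [0.1669], [-0.0995]])
c = np.array([[0.1490, 0.1953, 0.1669, 0.0995]])

# K solves A K A^T - K + b b^T = 0; W likewise with (A^T, c^T c).
K = solve_discrete_lyapunov(A, b @ b.T)
W = solve_discrete_lyapunov(A.T, c.T @ c)
assert np.allclose(K, W, atol=0.05)        # balanced realization, cf. (54)

# Variable band-pass filter (51) for an arbitrary xi_BP.
xi = 0.5
eta = np.sqrt(1.0 - xi**2)
I4 = np.eye(4)
At = np.block([[xi * I4, -eta * A], [eta * I4, xi * A]])
Bt = np.vstack([eta * b, -xi * b])
Ct = np.hstack([np.zeros((1, 4)), -c])

Kt = solve_discrete_lyapunov(At, Bt @ Bt.T)
Wt = solve_discrete_lyapunov(At.T, Ct.T @ Ct)

# Gramians preserved with multiplicity 2, cf. (55).
assert np.allclose(Kt, block_diag(K, K), atol=1e-6)
assert np.allclose(Wt, block_diag(W, W), atol=1e-6)
```

The block-diagonal identity holds exactly for any stable prototype, not just for this example; only the comparison with the printed values of (54) is limited by rounding.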


Figures 5(a), (b), (c) and (d) show the magnitude responses of our proposed variable filter for *ξ*BP = −0.8, −0.4, 0.5 and 0.9, respectively. For comparison purposes, the magnitude responses in the case of the cascaded direct form are also shown, and all the coefficients of these two variable filters are quantized to 10 fractional bits. From Figs. 5(a)–(d) we see that our proposed variable filter shows very good agreement with the ideal magnitude responses for all *ξ*BP. This result confirms that our proposed variable filter exhibits high accuracy for all tunable characteristics because the state-space representation of the prototype filter is constructed appropriately with respect to the Gramians. On the other hand, the magnitude responses of the cascaded direct form are degraded in all cases, and the degradation is extremely large for *ξ*BP = 0.9. As is well known, direct form digital filters are very sensitive to quantization effects. In addition, since variable digital filters in direct form do not take the controllability/observability Gramians into account, the performance of the direct form with respect to quantization effects depends highly on the frequency characteristics. These facts show the utility of our proposed method.

**Figure 5.** Magnitude responses of the eighth-order variable band-pass digital filters: (a) Responses for *ξ*BP = −0.8. (b) Responses for *ξ*BP = −0.4. (c) Responses for *ξ*BP = 0.5. (d) Responses for *ξ*BP = 0.9.


## **5.2. High-performance analog filters**


Here we will design and synthesize a variable analog band-pass filter by using the Gramian-preserving frequency transformation. In the LP-BP transformation, we use the second-order Foster reactance function 1/*F*BP(*s*) as in (20). Therefore we apply (39) to (35), which results in the following state-space formulation of the desired variable analog band-pass filter:

$$\begin{aligned} \widetilde{A} &= \begin{pmatrix} G A & \omega_{\mathrm{p}1} I_N \\ -\omega_{\mathrm{p}1} I_N & \mathbf{0}_{N\times N} \end{pmatrix} \\ \widetilde{B} &= \begin{pmatrix} \sqrt{G}\,b \\ \mathbf{0}_{N\times 1} \end{pmatrix} \\ \widetilde{C} &= \begin{pmatrix} \sqrt{G}\,c & \mathbf{0}_{1\times N} \end{pmatrix} \\ \widetilde{D} &= d. \end{aligned} \tag{56}$$

As a design/synthesis example, here we use the following prototype low-pass filter:

$$H(s) = \frac{1}{s^3 + 2s^2 + 2s + 1}.\tag{57}$$

This transfer function is the third-order Butterworth low-pass filter with a cutoff frequency of 1 rad/s. We give the state-space representation of this prototype filter as the following orthonormal ladder structure [7]:

$$\begin{aligned} A &= \begin{pmatrix} 0 & a_1 & 0 \\ -a_1 & 0 & a_2 \\ 0 & -a_2 & -a_3 \end{pmatrix} \\ b &= \begin{pmatrix} 0 \\ 0 \\ b_3 \end{pmatrix} \\ c &= \begin{pmatrix} c_1 & 0 & 0 \end{pmatrix} \\ d &= 0 \end{aligned} \tag{58}$$

with

$$(a\_1, a\_2, a\_3, b\_3, c\_1) = (0.7071, 1.2247, 2.0000, 0.7979, 1.4472). \tag{59}$$

From (58) and (59), the controllability/observability Gramians of this filter are found to be

$$\begin{aligned} K &= I_3 \\ W &= \begin{pmatrix} 16.4493 & 9.3052 & 3.7988 \\ 9.3052 & 9.8696 & 5.3723 \\ 3.7988 & 5.3723 & 3.2899 \end{pmatrix}. \end{aligned} \tag{60}$$
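These values can be reproduced with a short numerical sketch. One caveat: the chapter's Gramian normalization is defined earlier in the book; from the printed values of (60) (note 9.8696 = π²) we infer a factor of 2π relative to the raw Lyapunov solutions, and that inference is an assumption of this sketch:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Orthonormal ladder realization (58)-(59).
a1, a2, a3, b3, c1 = 0.7071, 1.2247, 2.0000, 0.7979, 1.4472
A = np.array([[0.0,  a1, 0.0],
              [-a1, 0.0,  a2],
              [0.0, -a2, -a3]])
b = np.array([[0.0], [0.0], [b3]])
c = np.array([[c1, 0.0, 0.0]])

# The realization reproduces (57): H(s) = c (sI - A)^{-1} b.
for s in (0.5j, 2.0, 1.0 + 1.0j):
    H_ss = (c @ np.linalg.solve(s * np.eye(3) - A, b)).item()
    assert np.isclose(H_ss, 1.0 / (s**3 + 2 * s**2 + 2 * s + 1), atol=1e-3)

# Lyapunov equations: A K + K A^T + b b^T = 0 and A^T W + W A + c^T c = 0.
K = solve_continuous_lyapunov(A, -b @ b.T)
W = solve_continuous_lyapunov(A.T, -c.T @ c)

# Up to the assumed 2*pi normalization, K is the identity and W matches (60).
assert np.allclose(2 * np.pi * K, np.eye(3), atol=1e-3)
W60 = np.array([[16.4493, 9.3052, 3.7988],
                [9.3052,  9.8696, 5.3723],
                [3.7988,  5.3723, 3.2899]])
assert np.allclose(2 * np.pi * W, W60, atol=0.05)
```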

**Figure 6.** Prototype filter based on transconductance-capacitor integrators.

As seen above, the controllability Gramian of the orthonormal ladder structure becomes the identity matrix. This property brings high performance with respect to dynamic range and sensitivity. Figure 6 illustrates the block diagram of this filter structure based on transconductance-capacitor integrators, where the normalized capacitance distribution is given by

$$(C_{\mathrm{p}1},\ C_{\mathrm{p}2},\ C_{\mathrm{p}3}) = C_{\mathrm{p}}\,(0.3091,\ 0.3957,\ 0.2952) \tag{61}$$

and *C*p is the unitless value of the total capacitance when expressed in farads. The specification of (61) is determined according to the following rule [10]:

$$\begin{aligned} C_{\mathrm{p}i} &= \frac{\sqrt{\eta_i w_{ii} k_{ii}}}{\sum_j \sqrt{\eta_j w_{jj} k_{jj}}} \\ \eta_i &= \sum_j |a_{ij}|. \end{aligned} \tag{62}$$
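Rule (62) can be checked directly against (61), a small sketch using the diagonal Gramian entries of (60), $k_{ii} = 1$ from $K = I_3$, and the $\eta_i$ computed from the *A* matrix of (58):

```python
import numpy as np

# A matrix of the orthonormal ladder prototype (58)-(59).
a1, a2, a3 = 0.7071, 1.2247, 2.0000
A = np.array([[0.0,  a1, 0.0],
              [-a1, 0.0,  a2],
              [0.0, -a2, -a3]])

w_diag = np.array([16.4493, 9.8696, 3.2899])   # w_ii from (60)
k_diag = np.ones(3)                            # k_ii = 1 since K = I_3
eta = np.sum(np.abs(A), axis=1)                # eta_i = sum_j |a_ij|

# Normalized capacitance distribution from rule (62).
frac = np.sqrt(eta * w_diag * k_diag)
frac /= frac.sum()

# Reproduces the distribution printed in (61).
assert np.allclose(frac, [0.3091, 0.3957, 0.2952], atol=1e-3)
```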


As is seen from (58) and Fig. 6, the structure of this prototype filter is very sparse and suitable for circuit implementation. This is another benefit of the orthonormal ladder structure.

Applying (56) to this prototype filter, we finally obtain the state-space representation of the variable band-pass filter, and its corresponding circuit realization is given by Fig. 7. It can be easily shown that the controllability/observability Gramians $(\widetilde{K}, \widetilde{W})$ of this band-pass filter become

$$\begin{aligned} \widetilde{K} &= \mathrm{block\,diag}(K, K) = I_6 \\ \widetilde{W} &= \mathrm{block\,diag}(W, W) \end{aligned} \tag{63}$$


**Figure 7.** Band-pass filter given by Gramian-preserving frequency transformation.

for arbitrary values of *G* and *ω*p1. It follows from this result that the Gramian-preserving frequency transformation easily produces the band-pass filter with the orthonormal ladder structure for arbitrary center frequency and bandwidth. Therefore, by controlling the parameters of *G* and *ω*p1, we can realize tunable band-pass filters with the orthonormal ladder structure.
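The same check as in the digital case carries over. A numerical sketch (arbitrary *G* and *ω*p1, chosen only for illustration) confirms that (56) leaves the Gramians of the orthonormal ladder prototype intact with multiplicity 2, as stated in (63):

```python
import numpy as np
from scipy.linalg import block_diag, solve_continuous_lyapunov

# Orthonormal ladder prototype (58)-(59).
a1, a2, a3, b3, c1 = 0.7071, 1.2247, 2.0000, 0.7979, 1.4472
A = np.array([[0.0, a1, 0.0], [-a1, 0.0, a2], [0.0, -a2, -a3]])
b = np.array([[0.0], [0.0], [b3]])
c = np.array([[c1, 0.0, 0.0]])

G, wp1 = 0.5, 2.0              # arbitrary bandwidth/center-frequency values
I3 = np.eye(3)

# Variable analog band-pass filter (56).
At = np.block([[G * A, wp1 * I3], [-wp1 * I3, np.zeros((3, 3))]])
Bt = np.vstack([np.sqrt(G) * b, np.zeros((3, 1))])
Ct = np.hstack([np.sqrt(G) * c, np.zeros((1, 3))])

# Continuous-time Lyapunov solutions for prototype and band-pass filter.
K = solve_continuous_lyapunov(A, -b @ b.T)
W = solve_continuous_lyapunov(A.T, -c.T @ c)
Kt = solve_continuous_lyapunov(At, -Bt @ Bt.T)
Wt = solve_continuous_lyapunov(At.T, -Ct.T @ Ct)

# Gramians preserved as in (63): block diag(K, K) and block diag(W, W).
assert np.allclose(Kt, block_diag(K, K), atol=1e-6)
assert np.allclose(Wt, block_diag(W, W), atol=1e-6)
```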

The high-performance of this band-pass filter can be demonstrated by not only calculation of the Gramians, but also numerical evaluation of the dynamic range. For details, see [34] and the references therein.

## **6. Conclusion**


In this chapter we have introduced insightful and useful results on the classical frequency transformation of analog and digital filters. While most of the known results on the frequency transformation are described in terms of transfer functions, the results given in this chapter are based on the state-space representation, which has revealed many useful properties with respect to the performance of filters, a performance that is dominated by the internal properties as well as the input-output relationship. In particular, the Gramian-preserving frequency transformation is very attractive for the design and synthesis of high-performance filters. Using this new frequency transformation, we have presented variable analog/digital filters that retain high performance regardless of changes in the frequency characteristics.

In addition to the aforementioned work, some other results on the frequency transformation have been reported in the literature. One of them is the state-space formulation of 2-D frequency transformation [36], which presents an explicit state-space-based frequency transformation for 2-D digital filters. Also, Yan et al. [37, 38] extended this work to formulations of more general 2-D frequency transformation. Moreover, in [39] we have revealed the invariance property of the second-order modes of 2-D separable denominator digital filters under frequency transformation. Proof of this invariance property in the case of 2-D non-separable denominator digital filters is still an open problem. Derivation of the Gramian-preserving frequency transformation in the 2-D case is also an open problem.

Another interesting topic is the transformations based on "lossy" functions. In both the cases of analog frequency transformation and digital frequency transformation, the required transformation functions have the lossless property. On the other hand, it is theoretically possible to use lossy functions for transformation. Motivated by this, in [33, 40] we presented the state-space analysis of lossy transformations and revealed that the second-order modes are decreased under such transformations. Development of a practical application of this property is a future work.

## **Author details**

Shunsuke Koshita⋆, Masahide Abe and Masayuki Kawamata

⋆ Address all correspondence to: kosita@mk.ecei.tohoku.ac.jp

Department of Electronic Engineering, Graduate School of Engineering, Tohoku University, Sendai, Japan

## **References**

[9] J. Harrison and N. Weste, "Energy storage and gramians of ladder filter realisations," in *Proc. IEEE Int. Symp. Circuits and Systems*, May 2001, pp. I–29–I–32.

[10] D. P. W. M. Rocha, "Optimal design of analogue low-power systems, a strongly directional hearing-aid adapter," Ph.D. dissertation, Delft University of Technology, Delft, The Netherlands, Apr. 2003.

[11] S. A. P. Haddad, S. Bagga, and W. A. Serdijn, "Log-domain wavelet bases," *IEEE Trans. Circuits Syst. I*, vol. 52, no. 10, pp. 2023–2032, Oct. 2005.

[12] A. N. Akansu, W. A. Serdijn, and I. W. Selesnick, "Emerging applications of wavelets: A review," *Physical Communication*, vol. 3, no. 1, pp. 1–18, Mar. 2010.

[13] C. T. Mullis and R. A. Roberts, "Synthesis of minimum roundoff noise fixed point digital filters," *IEEE Trans. Circuits Syst.*, vol. CAS-23, no. 9, pp. 551–562, Sept. 1976.

[14] S. Y. Hwang, "Minimum uncorrelated unit noise in state-space digital filtering," *IEEE Trans. Acoust., Speech, Signal Processing*, vol. ASSP-25, no. 4, pp. 273–281, Aug. 1977.

[15] V. Tavşanoğlu and L. Thiele, "Optimal design of state-space digital filters by simultaneous minimization of sensitivity and roundoff noise," *IEEE Trans. Circuits Syst.*, vol. 31, no. 10, pp. 884–888, Oct. 1984.

[16] M. Kawamata and T. Higuchi, "A unified approach to the optimal synthesis of fixed-point state-space digital filters," *IEEE Trans. Acoust., Speech, Signal Processing*, vol. ASSP-33, no. 4, pp. 911–920, Aug. 1985.

[17] L. Thiele, "On the sensitivity of linear state-space systems," *IEEE Trans. Circuits Syst.*, vol. 33, no. 5, pp. 502–510, May 1986.

[18] M. Iwatsuki, M. Kawamata, and T. Higuchi, "Statistical sensitivity and minimum sensitivity structures with fewer coefficients in discrete time linear systems," *IEEE Trans. Circuits Syst.*, vol. CAS-37, no. 1, pp. 72–80, Jan. 1990.

[19] G. Li, B. D. O. Anderson, M. Gevers, and J. E. Perkins, "Optimal FWL design of state-space digital systems with weighted sensitivity minimization and sparseness consideration," *IEEE Trans. Circuits Syst. I*, vol. 39, no. 5, pp. 365–377, May 1992.

[20] W.-Y. Yan and J. B. Moore, "On *L*2-sensitivity minimization of linear state-space systems," *IEEE Trans. Circuits Syst. I*, vol. 39, no. 8, pp. 641–648, Aug. 1992.

[21] T. Hinamoto, S. Yokoyama, T. Inoue, W. Zeng, and W.-S. Lu, "Analysis and minimization of *L*2-sensitivity for linear systems and two-dimensional state-space filters using general controllability and observability Gramians," *IEEE Trans. Circuits Syst. I*, vol. 49, no. 9, pp. 1279–1289, Sept. 2002.

[22] T. Hinamoto, K. Iwata, and W.-S. Lu, "*L*2-sensitivity minimization of one- and two-dimensional state-space digital filters subject to *L*2-scaling constraints," *IEEE Trans. Signal Processing*, vol. 54, no. 5, pp. 1804–1812, May 2006.

Another interesting topic is the transformations based on "lossy" functions. In both the cases of analog frequency transformation and digital frequency transformation, the required transformation functions have the lossless property. On the other hand, it is theoretically possible to use lossy functions for transformation. Motivated by this, in [33, 40] we presented the state-space analysis of lossy transformations and revealed that the second-order modes are decreased under such transformations. Development of a practical application of this

Department of Electronic Engineering, Graduate School of Engineering, Tohoku University,

[2] A. G. Constantinides, "Spectral transformations for digital filters," *Proc. IEE*, vol. 117,

[3] G. Stoyanov and M. Kawamata, "Variable digital filters," *RISP Journal of Signal*

[4] J. A. Chambers and A. G. Constantinides, "Frequency tracking using constrained adaptive notch filters synthesised from allpass sections," *Proc. IEE (part F)*, vol. 137,

[5] V. DeBrunner and S. Torres, "Multiple fully adaptive notch filter design based on allpass sections," *IEEE Signal Processing Lett.*, vol. 48, no. 2, pp. 550–552, Feb. 2000.

[6] W. M. Snelgrove and A. S. Sedra, "Synthesis and analysis of state-space active filters using intermediate transfer functions," *IEEE Trans. Circuits Syst.*, vol. CAS-33, no. 3,

[7] D. A. Johns, W. M. Snelgrove, and A. S. Sedra, "Orthonormal ladder filters," *IEEE*

[8] G. Groenewold, "The design of high dynamic range continuous-time integratable bandpass filters," *IEEE Trans. Circuits Syst.*, vol. CAS-38, no. 8, pp. 838–852, Aug.

*Trans. Circuits Syst.*, vol. CAS-36, no. 3, pp. 337–343, Mar. 1989.

[1] W.-K. Chen, Ed., *The Circuits and Filters Handbook*. CRC Press, 1995.


[23] S. Yamaki, M. Abe, and M. Kawamata, "A closed form solution to *L*2-sensitivity minimization of second-order state-space digital filters," *IEICE Trans. Fundamentals*, vol. E91-A, no. 5, pp. 1268–1273, May 2008.

[36] S. Koshita and M. Kawamata, "State-space formulation of frequency transformation for 2-D digital filters," *IEEE Signal Processing Lett.*, vol. 11, no. 10, pp. 784–787, Oct.

Frequency Transformation for Linear State-Space Systems and Its Application to High-Performance Analog/Digital

Filters

137

http://dx.doi.org/10.5772/52197

[37] S. Yan, L. Xu, and Y. Anazawa, "A two-stage approach to the establishment of state-space formulation of 2-D frequency transformation," *IEEE Signal Processing Lett.*,

[38] S. Yan, N. Shiratori, and L. Xu, "Simple state-space formulations of 2-D frequency transformation and double bilinear transformation," *Multidimensional Systems and*

[39] S. Koshita and M. Kawamata, "Invariance of second-order modes under frequency transformation in 2-D separable denominator digital filters," *Multidimensional Systems*

[40] S. Koshita, M. Abe, and M. Kawamata, "Analysis of second-order modes of linear discrete-time systems under bounded-real transformations," *IEICE Trans.*

2004.

vol. 14, no. 12, pp. 960–963, Dec. 2007.

*Signal Processing*, vol. 21, no. 1, pp. 3–23, Mar. 2010.

*and Signal Processing*, vol. 16, no. 3, pp. 305–333, July 2005.

*Fundamentals*, vol. E90-A, no. 11, pp. 2510–2515, Nov. 2007.


[36] S. Koshita and M. Kawamata, "State-space formulation of frequency transformation for 2-D digital filters," *IEEE Signal Processing Lett.*, vol. 11, no. 10, pp. 784–787, Oct. 2004.

28 Digital Filters and Signal Processing

pp. 17–32, Feb. 1981.

1982.

Apr. 1976.

vol. E91-A, no. 5, pp. 1268–1273, May 2008.

vol. E91-A, no. 7, pp. 1697–1705, July 2008.

"Advances in Design and Control", 2005, vol. DC-06.

*Fundamentals*, vol. E91-A, no. 10, pp. 3014–3021, Oct. 2008.

*Trans. Fundamentals*, vol. E90-A, no. 7, pp. 1481–1486, July 2007.

ASSP-24, no. 6, pp. 538–550, Dec. 1976.

vol. 58, no. 3, pp. 493–506, Mar. 2011.

[23] S. Yamaki, M. Abe, and M. Kawamata, "A closed form solution to *L*2-sensitivity minimization of second-order state-space digital filters," *IEICE Trans. Fundamentals*,

[24] ——, "A closed form solution to *L*2-sensitivity minimization of second-order state-space digital filters subject to *L*2-scaling constraints," *IEICE Trans. Fundamentals*,

[25] ——, "Derivation of the class of digital filters with all second-order modes equal,"

[26] B. C. Moore, "Principal component analysis in linear systems: Controllability, observability, and model reduction," *IEEE Trans. Automat. Contr.*, vol. AC-26, no. 1,

[27] L. Pernebo and L. M. Silverman, "Model reduction via balanced state space representations," *IEEE Trans. Automat. Contr.*, vol. AC-27, no. 2, pp. 382–387, Apr.

[28] A. C. Antoulas, *Approximation of large-scale dynamical systems*. SIAM Book series

[29] A. V. Oppenheim, W. F. G. Mecklenbrauker, and R. M. Mersereau, "Variable cutoff linear phase digital filters," *IEEE Trans. Circuits Syst.*, vol. CAS-23, no. 4, pp. 199–203,

[30] C. T. Mullis and R. A. Roberts, "Roundoff noise in digital filters: Frequency transformations and invariants," *IEEE Trans. Acoust., Speech, Signal Processing*, vol.

[31] S. Koshita, S. Tanaka, M. Abe, and M. Kawamata, "Gramian-preserving frequency transformation for linear discrete-time state-space systems," *IEICE Trans.*

[32] M. Kawamata, Y. Mizukami, and S. Koshita, "Invariance of second-order modes of linear continuous-time systems under typical frequency transformations," *IEICE*

[33] S. Koshita, Y. Mizukami, T. Konno, M. Abe, and M. Kawamata, "Analysis of second-order modes of linear continuous-time systems under positive-real transformations," *IEICE Trans. Fundamentals*, vol. E91-A, no. 2, pp. 575–583, Feb. 2008.

[34] S. Koshita, M. Abe, and M. Kawamata, "Gramian-preserving frequency transformation and its application to analog filter design," *IEEE Trans. Circuits Syst. I*,

[35] S. Koshita, K. Miyoshi, M. Abe, and M. Kawamata, "Realization of variable band-pass/band-stop IIR digital filters using Gramian-preserving frequency transformation," in *Proc. IEEE Int. Symp. on Circuits Syst.*, May 2010, pp. 2698–2701.

*IEEE Trans. Signal Processing*, vol. 59, no. 11, pp. 5236–5242, Nov. 2011.


**Chapter 6**


© 2013 Itami; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


## **A Study on a Filter Bank Structure With Rational Scaling Factors and Its Applications**

Fumio Itami

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/52391

## **1. Introduction**

Filter banks decompose signals into multiple sub-bands so that various processes can be performed before the original signals are reconstructed [1]-[8]. It has been demonstrated that sub-band processing with filter banks improves the performance of numerous image processing applications such as image recognition, watermarking, image coding, and so on [9]-[14].

For example, image recognition accuracy is improved with sub-band processing. A watermark is typically embedded into a middle frequency band in order to make the scheme not only robust to compression but also secure against attacks on the watermark. Thus, it is highly desirable that analysis filters have excellent frequency characteristics to appropriately decompose signals in such applications.

In sub-band coding, on the other hand, we usually focus more on various visual issues proper to compression, such as checkerboard artifacts and blocking effects, than on the frequency characteristics of the analysis filters. Coding gain is also one of the significant issues in image coding. Therefore, various constraints are imposed on the filters to deal with such problems in the design. Accordingly, the design is usually performed with non-linear optimization, which increases its design complexity.

Multi-resolution processing, which is an extension of sub-band processing, is also in demand in a number of applications. Multi-resolution schemes such as the wavelet transform not only decompose the frequencies of signals but also yield various lower-resolution versions of the original signals by gradually reducing the sampling rate with filter banks.

Meanwhile, image scaling has become an increasingly important process as the means of displaying images have diversified [15]-[17]. Image scaling changes the resolution of images more flexibly than multi-resolution processing, although it does not decompose the frequency components of signals. Typically it consists of up/down samplers and a filter which suppresses the imaging or aliasing components caused by the samplers.


Under these circumstances, in this chapter we discuss complex processes in which sub-band processing and scaling are carried out sequentially. For instance, when watermarked images are scaled, or scaling is applied to decoded images, a filter bank and a scaling structure are designed separately and implemented sequentially. However, they are structurally similar in that they both involve samplers and filters. This implies that one can integrate them in order to implement them at lower cost.

Therefore, this chapter provides a profound discussion on a filter bank structure that yields a computationally effective scheme for such complex processing. We introduce a simple filter bank structure which, for computational simplicity, directly synthesizes the analysis-decomposed signals to produce a scaled signal, rather than reconstructing the original signal as in traditional perfect reconstruction filter banks.

First, we discuss the frequency decomposition characteristics of the filter bank to show how to obtain arbitrarily scaled signals. In [18] we briefly clarified the bandwidth of the synthesis filters needed to obtain the scaled signal from the analysis part of the filter bank. However, that discussion is not sufficient, since the behavior of the aliasing components, which can significantly affect the filter bank performance, was not clarified. This chapter discusses the issue so that the features of the filter bank become manifest. It is shown that the aliasing problem is similar to that of perfect reconstruction filter banks.

Next, theoretical conditions for addressing the above aliasing problem are given through the input-output relation of the filter bank. We also discussed a strategy to make the filter bank equivalent to the direct scaling structure by deriving such theoretical conditions in [18]. However, we observed that the number of conditions is proportional to the number of aliasing components, which can lead to time-consuming design or complicated optimization. Moreover, the conditions are usually not satisfied when all the filters have linear phase, so that the derived filter banks do not have excellent performance.

In this chapter, we discuss the conditions from the viewpoint of the aliasing components to verify that they are the same regardless of the aliasing component, so that the number of equations to solve is significantly reduced. A design procedure based on the conditions is also discussed. We mention that the procedure is applicable to the various image processing tasks mentioned above. In addition, it is demonstrated through simulation results that the quality of scaled images is comparable to that of the sequential structure even though not all the filters have linear phase characteristics. Finally, we discuss potential issues and advantages in making use of the scheme, as well as traditional ones, in practical image processing.

## **2. Image scaling**

Image scaling, which changes the resolution of images, has become an increasingly important process as the means of displaying images have diversified. In this context, various image processing tasks such as image coding, watermarking and recognition are also applied before image scaling, so that in practice a filter bank is often followed sequentially by a scaling structure, since filter banks are used in many image processing applications for their sub-band decomposition performance.

Typically, the direct scaling structure consists of up/down samplers and a filter which suppresses the imaging or aliasing components caused by the samplers. Similarly, the filter bank structure also involves samplers and filters. This indicates that one can integrate both structures in order to implement them with lower computational complexity than that of the sequential structure shown in Figure 1.

**Figure 1.** The typical sequential structure.
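As a rough sketch of this direct scaling structure, the following pure-Python fragment up-samples by *U*, applies a crude anti-imaging low-pass filter, and down-samples by *D*. The three-tap moving-average filter and the factors *U* = 3, *D* = 2 are illustrative choices, not taken from the chapter.

```python
def upsample(x, U):
    """Insert U-1 zeros between samples (expander)."""
    y = []
    for s in x:
        y.append(s)
        y.extend([0.0] * (U - 1))
    return y

def downsample(x, D):
    """Keep every D-th sample (decimator)."""
    return x[::D]

def fir_filter(x, h):
    """Direct-form FIR convolution, truncated to the input length."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, hk in enumerate(h):
            if n - k >= 0:
                acc += hk * x[n - k]
        y.append(acc)
    return y

def rational_scale(x, U, D, h):
    """Change the sampling rate by the rational factor U/D."""
    return downsample(fir_filter(upsample(x, U), h), D)

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
U, D = 3, 2
h = [1.0 / 3.0] * 3           # crude anti-imaging low-pass (toy choice)
y = rational_scale(x, U, D, h)
print(len(x), "->", len(y))   # 6 -> 9: the rate is changed by U/D = 3/2
```

In a real system `h` would be a properly designed low-pass filter with cutoff min(π/U, π/D); the moving average here only illustrates where the filter sits in the chain.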


For this, we provide a study of the simple filter bank structure depicted in Figure 2, which is a simplified version of the structure given in [18]. Please note that the sampling rate at the output of both structures in Figure 1 and Figure 2 is the same when *R* = *U* and *S* = *D*. In addition, the number of channels in both systems is the same when *N* = *D*. Thus, we assume these relations in this chapter.

The aim of this chapter is to demonstrate that the computational complexity of Figure 2 for scaling is lower than that of Figure 1, while its scaling performance is comparable to that of Figure 1.

**Figure 2.** The introduced filter bank structure.

## **3. A discussion on the computational complexity of the filter bank for scaling**

First, we discuss the computation cost of the introduced filter bank drawn in Figure 2 as well as that of the sequential structure shown in Figure 1.

The sequential structure first decomposes the input signal into sub-band frequency components with down-sampling by the factor *D*. Next, the decomposed signals are synthesized after up-sampling by *D*. Such sub-band processing with the filter bank is followed by the direct scaling structure, where further up-sampling by *U* is carried out. Thus, the sampling rate increases with two successive up-sampling processes. After this, the sampling rate decreases with down-sampling by *D*.

On the other hand, the introduced filter bank structure also decomposes the input into multiple sub-band signals with down-sampling by *D* first, and then synthesizes the signals with up-sampling by *U*.

Accordingly, we see that the sequential structure requires two down-sampling and two up-sampling processes, while the introduced filter bank structure is carried out with only one down-sampling and one up-sampling process. Such a structural difference can lead to different computation times for scaling signals.


If we assume that both systems have analysis filters of the same length, then the difference in length between the two systems lies in their synthesis parts. This difference also causes different computation times. We also discuss these concerns in detail, with designs and simulations of both systems.
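The stage-by-stage sample counts behind this comparison can be sketched as follows; this back-of-the-envelope model ignores filter lengths and assumes an input of *N* samples with *N* divisible by *D*.

```python
def sequential_stages(N, U, D):
    """Figure 1: filter bank (down by D, up by D) followed by direct scaling (up by U, down by D)."""
    return [N, N // D, N, N * U, N * U // D]

def integrated_stages(N, U, D):
    """Figure 2: analysis down-sampling by D, synthesis up-sampling by U."""
    return [N, N // D, N * U // D]

N, U, D = 12, 3, 2
seq = sequential_stages(N, U, D)
integ = integrated_stages(N, U, D)
print("sequential:", seq)    # [12, 6, 12, 36, 18]
print("integrated:", integ)  # [12, 6, 18]
# Both end at N*U/D samples, but the sequential structure also filters at
# the intermediate rates N and N*U, hence its higher computational cost.
```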

## **4. A discussion on the filter bank structure for scaling**

## **4.1. The frequency decomposition and synthesis characteristics of the filter bank**

Next, we discuss the performance of the frequency decomposition and synthesis with the filter bank. Figure 3(a) shows an original signal *X*(*z*), where the bold line represents π [rad] on the frequency axis. Figure 3(b) shows the frequency decomposition characteristics with the analysis low-pass filter *H*0(*z*) followed by down-sampling by the factor *D*. We also show the counterpart for the high-pass filter *HD*-1(*z*) and down-sampling in Figure 3(c). The frequency components colored with dots represent the aliasing components. Please note that these decomposition characteristics in the analysis part are the same as those of traditional (perfect reconstruction) filter banks.


**Figure 3.** The frequency decomposition and synthesis in the filter bank.

The analysis signals, such as those in Figure 3(b) and (c), are described in the z-domain as

$$Y_{c,i}(z) = \frac{1}{D} \sum_{r=0}^{D-1} X\left(W^r z^{\frac{1}{D}}\right) H_i\left(W^r z^{\frac{1}{D}}\right) \quad (i = 0, 1, \cdots, D-1) \tag{1}$$

where


$$W = e^{-j\frac{2\pi}{D}} \tag{2}$$
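Equation (1) can be checked numerically in the DFT domain. The sketch below takes the analysis filter as *Hi*(*z*) = 1 (an assumption made only to keep the check short), so the relation reduces to the decimation/aliasing identity: the (*N*/*D*)-point DFT of the decimated signal equals the average of *D* shifted copies of the *N*-point DFT of the input.

```python
import cmath

def dft(x):
    """Naive N-point discrete Fourier transform."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

N, D = 8, 2
x = [1.0, 3.0, -2.0, 0.5, 4.0, -1.0, 2.0, 0.0]   # arbitrary test signal
y = x[::D]                                        # down-sampling by D

X = dft(x)                                        # N-point spectrum of x
Y = dft(y)                                        # (N/D)-point spectrum of y

# Eq. (1) with H_i = 1: Y_k = (1/D) * sum_r X_{k + r*N/D}
for k in range(N // D):
    alias_sum = sum(X[k + r * (N // D)] for r in range(D)) / D
    assert abs(Y[k] - alias_sum) < 1e-9
print("Eq. (1) aliasing identity verified for D =", D)
```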

These analysis signals are up-sampled by the factor *U* and then filtered with the filters *Gi*(*z*) in the synthesis part, which is illustrated in Figure 3(d) and (e) for the case of *G*0(*z*) and *GD*-1(*z*), respectively. Please notice that the pass band of the filters *Gi*(*z*) is not the same as that of *Hi*(*z*), since the factors *U* and *D* are not the same either; this is what yields scaled signals.

These signals in the synthesis part are represented as

$$Y_{t,i}(z) = \frac{1}{D} \sum_{r=0}^{D-1} X\left(W^r z^{\frac{U}{D}}\right) H_i\left(W^r z^{\frac{U}{D}}\right) \quad (i = 0, 1, \cdots, D-1) \tag{3}$$

The eventual scaled signal, which is the output of the filter bank, is drawn in Figure 3(f) and also represented as

$$Y_s(z) = \sum_{i=0}^{D-1} Y_{t,i}(z) G_i(z) \tag{4}$$
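A minimal time-domain sketch of the flow that Eqs. (1)-(4) describe: each channel filters with *Hi*, down-samples by *D*, up-samples by *U*, filters with *Gi*, and the channel outputs are summed to form the scaled signal. The two-channel sum/difference filters below are illustrative stand-ins, not the chapter's designed filters.

```python
def fir(x, h):
    """Direct-form FIR convolution, truncated to the input length."""
    return [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
            for n in range(len(x))]

def down(x, D):
    return x[::D]

def up(x, U):
    y = []
    for s in x:
        y.append(s)
        y.extend([0.0] * (U - 1))
    return y

def filter_bank_scale(x, H, G, U, D):
    """Analysis (Hi, down by D) directly synthesized at the scaled rate (up by U, Gi)."""
    channels = [up(down(fir(x, h), D), U) for h in H]
    out = [fir(c, g) for c, g in zip(channels, G)]
    return [sum(vals) for vals in zip(*out)]   # Eq. (4): sum over channels

H = [[0.5, 0.5], [0.5, -0.5]]   # toy low-pass / high-pass analysis pair
G = [[1.0, 1.0], [1.0, -1.0]]   # toy synthesis pair
x = [float(n) for n in range(8)]
U, D = 3, 2
y = filter_bank_scale(x, H, G, U, D)
print(len(x), "->", len(y))     # 8 -> 12: output at U/D times the input rate
```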

These are possible issues to address when we design the filter banks that serve as scal‐

A Study on a Filter Bank Structure With Rational Scaling Factors and Its Applications

http://dx.doi.org/10.5772/ 52391

145

**Figure 4.** The frequency characteristics of the direct scaling structure

**5. The theoretical conditions for the improvement of the scaling**

Typically, traditional perfect reconstruction filter banks are designed to satisfy perfect recon‐ struction conditions completely or approximately so that aliasing will be eliminated. Simi‐ larly, we provide theoretical conditions for the introduced filter banks to reduce aliasing.

We see, from Figure 3 and 4, that aliasing in Figure 3 is reduced if the eventual signal shown in Figure 3(f) is the same as Figure 4(c). In other words, it is desired that the introduced filter banks precisely work as the direct scaling structures. Therefore, a design scheme for the fil‐ ter banks that approximately or exactly serve as the direct scaling structures is provided in

ing systems.

**performance**

this section.

In this manner, one can obtain a scaled signal by using the filter bank, and the scaled signal is completely reconstructed when the ideal filters are used in the filter bank. However, it is not possible to use such filters, as in traditional filter banks, and the aliasing components af‐ fect the scaling performance with the filter bank.

Therefore, we provide a discussion on the aliasing components in the next, which manifests the scaling behavior of the filter bank.

#### **4.2. The aliasing components of the filter bank**

The output signal of the filter bank is rewritten as

$$Y_s(z) = \sum_{i=0}^{D-1} G_i(z) \frac{1}{D} \sum_{r=0}^{D-1} X\left(W^{r} z^{\frac{U}{D}}\right) H_i\left(W^{r} z^{\frac{U}{D}}\right) \tag{5}$$
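Equations (3)–(5) describe the analysis/down-sampling/up-sampling/synthesis chain of Figure 2. The following is a minimal time-domain sketch of that chain (our own illustration, not the authors' implementation; the two-channel sum/difference filters are placeholders, not a designed bank):

```python
import numpy as np

def fb_scale(x, analysis, synthesis, D, U):
    """Rational scaling by U/D with a D-channel filter bank (Figure 2):
    filter with H_i(z), downsample by D, upsample by U, filter with G_i(z), sum."""
    y = None
    for h, g in zip(analysis, synthesis):
        v = np.convolve(x, h)[::D]      # analysis filtering + downsampling by D
        u = np.zeros(len(v) * U)
        u[::U] = v                      # upsampling by U (zero insertion)
        w = np.convolve(u, g)           # synthesis filtering G_i(z)
        y = w if y is None else y + w   # sum over the D channels
    return y

# Illustrative two-channel example (D = 2, U = 3) with placeholder
# sum/difference filters; output length is roughly len(x) * U / D.
x = np.arange(64.0)
y = fb_scale(x, [np.array([0.5, 0.5]), np.array([0.5, -0.5])],
                [np.array([1.0, 1.0]), np.array([1.0, -1.0])], D=2, U=3)
```

With ideal (or well-designed) filters this chain would reproduce the direct scaling structure; with the placeholder filters above it only demonstrates the signal flow and the resulting length change.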

When *U* = *D*, that is, when a perfect reconstruction filter bank is used, the pass-band of the synthesis filters is the same as that of the analysis filters. In this case, "serious" aliasing caused by a "mismatch" of the filter bands is avoided, and only minor aliasing caused by the non-ideal filters exists in the adjacent areas between the aliasing components in the frequency domain.

On the other hand, when *U* ≠ *D*, which is necessary to obtain an arbitrarily scaled signal, the pass-band of the synthesis filters differs from that of the analysis filters, and serious aliasing occurs. Indeed, it can be observed that the pass-band of $G_i(z)$ overlaps with the aliasing components in the above equation and also in Figure 3.

However, it can also be verified in the frequency domain that such serious aliasing is essential to obtaining a scaled signal, since the aliasing components themselves partly form the scaled signal in this case.

In this sense, one can conclude that even though the input signal is scaled, only the minor aliasing between the aliasing components caused by the non-ideal filters needs to be taken into account when designing the filter banks, just as in traditional filter banks.

We also compare the direct scaling structure in the sequential structure shown in Figure 1 with the filter bank of Figure 2 in terms of aliasing. In the direct scaling structure, up-sampling by the factor *U* is followed by down-sampling by *D*. Hence, when *U* > *D*, which means the input signal is extended, the down-sampling is not critical sampling but oversampling. We illustrate this with Figure 4.

On the other hand, the filter bank structure performs down-sampling by *D* first, which is critical sampling. This implies that the aliasing caused by down-sampling in Figure 1 is less than that in Figure 2. Thus, it is also concluded that the filter bank structures suffer more aliasing than the direct scaling structures do when the scaling factor is larger than 1.

These are possible issues to address when we design the filter banks that serve as scaling systems.

**Figure 4.** The frequency characteristics of the direct scaling structure


## **5. The theoretical conditions for the improvement of the scaling performance**

Typically, traditional perfect reconstruction filter banks are designed to satisfy perfect reconstruction conditions completely or approximately so that aliasing will be eliminated. Similarly, we provide theoretical conditions for the introduced filter banks to reduce aliasing.

We see from Figures 3 and 4 that the aliasing in Figure 3 is reduced if the eventual signal shown in Figure 3(f) is the same as that in Figure 4(c). In other words, it is desired that the introduced filter banks work precisely as the direct scaling structures. Therefore, a design scheme for filter banks that approximately or exactly serve as the direct scaling structures is provided in this section.

#### **5.1. The input-output relation in the filter banks and the direct scaling structures in the frequency domain**

First, we give the relation between the input and output of the filter banks and the direct scaling structures. The output signal of the direct scaling structure in the sequential structure shown in Figure 1 is written as

$$Y_d(z) = \frac{1}{D} \sum_{r=0}^{D-1} X\left(W^{Ur} z^{\frac{U}{D}}\right) H\left(W^{r} z^{\frac{1}{D}}\right). \tag{6}$$
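Equation (6) is just the transform-domain description of the direct scaling chain of Figure 1: up-sample by *U*, filter with $H(z)$, down-sample by *D*. A minimal sketch with a short placeholder filter (our illustration, not a filter from the chapter):

```python
import numpy as np

def direct_scale(x, h, U, D):
    """Direct scaling structure of Figure 1: upsample by U,
    filter with H(z), downsample by D."""
    u = np.zeros(len(x) * U)
    u[::U] = x                   # insert U - 1 zeros between samples
    w = np.convolve(u, h)        # interpolation / anti-imaging filter H(z)
    return w[::D]                # keep every D-th sample

# Example with U = 3, D = 2 and a placeholder length-3 linear phase filter.
y = direct_scale(np.arange(8.0), np.array([0.5, 1.0, 0.5]), U=3, D=2)
```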


Please note that we usually use a perfect reconstruction filter bank in Figure 1; therefore, it is assumed that the output of the filter bank is the same as its input.

On the other hand, the output signal $Y_s(z)$ of the filter bank shown in Figure 2 is represented by equation (4).

If the signal $Y_d(z)$ is equal to the output signal $Y_s(z)$, then

$$X\left(W^{Ur} z^{\frac{U}{D}}\right) H\left(W^{r} z^{\frac{1}{D}}\right) = X\left(W^{r} z^{\frac{U}{D}}\right) \sum_{i=0}^{D-1} H_i\left(W^{r} z^{\frac{U}{D}}\right) G_i(z) \quad (r = 0, 1, \cdots, D-1) \tag{7}$$

in which

$$X\left(W^{Ur} z^{\frac{U}{D}}\right) = X\left(W^{r} z^{\frac{U}{D}}\right) \tag{8}$$

hold in some cases without any manipulation, or hold in the other cases after changing the order of *r* on the left and right sides of (8); this depends on the values of *U* and *D*. The number of conditions (7) is proportional to the number of aliasing components, *D* − 1, and the parameters in the conditions are $H(z)$, $H_i(z)$ and $G_i(z)$. Moreover, it is observed that the conditions are mostly difficult to satisfy when all the filters have linear phase characteristics, i.e., symmetric coefficients. Therefore, we give a more detailed discussion of the conditions to address these issues next.

#### **5.2. A discussion on the conditions**

First, we discuss the equations (8) to verify that they hold. We assume that *U* > *D*, but the following discussion extends straightforwardly to the case where *U* < *D*. Since the equations

$$W^{r} = W^{nD+r} \tag{9}$$

hold, the equations


$$Ur = nD + r \tag{10}$$

need to hold in order to satisfy the equations (8), where *n* is an arbitrary integer.

If *U* = *D* + 1, then *Ur* = *rD* + *r*, which leads to (10). Thus, the equations (8) hold in this case. When *U* ≠ *D* + 1, (10) does not hold. Hence, we replace *r* on the right side of (10) with $r' = r(U - D)$. Then $nD + r' = Ur + D(n - r)$, which also leads to

$$Ur = nD + r'. \tag{11}$$

If we replace $r'$ with


$$r'' = \mathrm{mod}(r', D) \tag{12}$$

then we also obtain

$$Ur = nD + r''. \tag{13}$$

Here $\mathrm{mod}(a, b)$ represents the remainder when *a* is divided by *b*. Therefore, the equations

$$X\left(W^{Ur} z^{\frac{U}{D}}\right) = X\left(W^{r''} z^{\frac{U}{D}}\right) \tag{14}$$

hold. We see that the equations (14) also hold when *U* = *D* + 1, i.e., (14) are equivalent to (8) when *U* = *D* + 1.
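The index relations (10)–(14) amount to the congruence $Ur \equiv r'' \pmod{D}$ with $r'' = \mathrm{mod}(r(U-D), D)$, which can be checked numerically; the factor pairs below are arbitrary examples, not values from the chapter:

```python
# Numerical check of the relations (10)-(13): Ur is congruent to r'' mod D,
# where r' = r * (U - D) and r'' = mod(r', D).
for U, D in [(3, 2), (2, 3), (7, 5), (11, 5)]:   # arbitrary example factors
    for r in range(D):
        r_prime = r * (U - D)     # r' from eq. (11); may be negative when U < D
        r_pp = r_prime % D        # r'' from eq. (12); Python % is non-negative
        assert (U * r) % D == r_pp
```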

Hence, the equations

$$H\left(W^{r} z^{\frac{1}{D}}\right) = \sum_{i=0}^{D-1} H_i\left(W^{r''} z^{\frac{U}{D}}\right) G_i(z) \quad (r = 0, 1, \cdots, D-1) \tag{15}$$

need to hold in order to satisfy the conditions (7).

Next, we verify that the equations (15) are the same regardless of *r*. When $r'' = r = 0$, the equations (15) are partly rewritten (every *D*-th equation) with the coefficients $h_{i,j}$, $g_{i,j}$ and $h_j$ as

$$\begin{aligned} h_{qD+p} &= h_{0,qD+p}\,g_{0,lU+k} + h_{0,(q+1)D+p}\,g_{0,(l-1)U+k} + \cdots + h_{1,qD+p}\,g_{1,lU+k} + \cdots \\ h_{qD+p+1} &= h_{0,qD+p+1}\,g_{0,lU+k-1} + h_{0,(q+1)D+p+1}\,g_{0,(l-1)U+k-1} + \cdots + h_{1,qD+p+1}\,g_{1,lU+k-1} + \cdots \\ h_{qD+p+2} &= h_{0,qD+p+2}\,g_{0,lU+k-2} + h_{0,(q+1)D+p+2}\,g_{0,(l-1)U+k-2} + \cdots + h_{1,qD+p+2}\,g_{1,lU+k-2} + \cdots \\ h_{qD+p+3} &= h_{0,qD+p+3}\,g_{0,lU+k-3} + h_{0,(q+1)D+p+3}\,g_{0,(l-1)U+k-3} + \cdots + h_{1,qD+p+3}\,g_{1,lU+k-3} + \cdots \\ &\;\;\vdots \end{aligned} \tag{16}$$

where $h_{i,j}$, $g_{i,j}$ and $h_j$ are the coefficients of the filters $H_i(z)$, $G_i(z)$ and $H(z)$, respectively. The order of the rows other than the 0-th row on the right side of (16), however, changes depending on the factors *D* and *U*.

Here we define the following vector

$$\boldsymbol{v}_{r''} = \begin{bmatrix} W^{cr''} & W^{2cr''} & \cdots & W^{(D-1)cr''} \end{bmatrix} \tag{17}$$


in which *c* is a constant, and the $p''$-th row on the right side of (16) is multiplied by the $p''$-th element $W^{p''cr''}$ for each $r''$. For example, the $p''$-th row on the right side of (16) is multiplied by the coefficient $W^{p''c}$ when $r'' = 1$.

Moreover, the $p''$-th row on the right side includes $h_{i,p+qD}$, since the coefficient $h_{i,p+qD}$ has the delay $z^{\frac{U}{D}(p+qD)}$, where $p''$ is related to $p$ by

$$p'' = \mathrm{mod}(U(p+qD), D) \tag{18}$$

If we denote the $p$ that satisfies $p'' = 1$ by $p_s$, then $c = p_s$, since the first row on the right side, which includes $h_{i,p_s+qD}$, is multiplied by the coefficient $W^{p_s}$ when $r'' = 1$. In this manner, the constant $c$ is determined.
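Since $UqD \equiv 0 \pmod{D}$, equation (18) reduces to $p'' = \mathrm{mod}(Up, D)$, the same map as (12). When *U* and *D* are coprime this map permutes $\{0, \ldots, D-1\}$, so the $p$ with $p'' = 1$ (the $p_s$ above) exists and is unique; this coprimality observation is standard modular arithmetic that we add here, illustrated for the two-channel design factors ($D = 2$, $U = 3$):

```python
from math import gcd

# Equation (18) with arbitrary q: p'' = mod(U * (p + q*D), D) = mod(U * p, D).
# When gcd(U, D) = 1 this is a permutation of 0..D-1, so exactly one p
# (called p_s in the text) satisfies p'' = 1.
U, D = 3, 2
assert gcd(U, D) == 1
image = [(U * p) % D for p in range(D)]
assert sorted(image) == list(range(D))   # the map is a permutation
p_s = image.index(1)                     # the unique p with p'' = 1
```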

On the other hand, we also define the following vector

$$\boldsymbol{v}_{r} = \begin{bmatrix} W^{r} & W^{2r} & \cdots & W^{(D-1)r} \end{bmatrix} \tag{19}$$

where the $p''$-th row on the left side of (16) is multiplied by the $p''$-th element $W^{p''r}$ for each $r$. If $r = p_s$ when $r'' = 1$, then $\boldsymbol{v}_{r''} = \boldsymbol{v}_r$ since $c = p_s$. In this case, the equations (16) are the same as those for $r = r'' = 0$. In fact, $r = p_s$ holds when $r'' = 1$, since (18) is equivalent to (12); in other words, the relation between $p''$ and $p$ is the same as that between $r''$ and $r$. Furthermore, $\boldsymbol{v}_{r''} = \boldsymbol{v}_r$ holds for all $r''$ and $r$ under (12). Therefore, we see that the equations (16) are the same regardless of $r$ or $r''$.

However, it is observed that the equations are usually not satisfied when all the filters have linear phase characteristics. Accordingly, we do not impose the symmetry constraints on all the filters. We also see that the filter in the desired direct scaling structure needs to have some delays in order to satisfy the equations. The performance of such filter banks is examined in the next section.

## **6. Design and Simulation Results**


The equations derived in the previous section have the unknown parameters $H_i(z)$, $G_i(z)$, and $H(z)$. The filter $H(z)$ in the direct scaling structure should be known, or designed, in advance. Thus, $H_i(z)$ and $G_i(z)$ are obtained by solving the equations with a pre-designed desired $H(z)$ so that the filter bank structure will be exactly or approximately equivalent to the direct scaling structure.
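Because the conditions of the previous section are linear in the synthesis coefficients, one practical way to obtain $G_i(z)$ from a pre-designed $H(z)$ and given analysis filters $H_i(z)$ is least squares over a frequency grid. The sketch below is our own illustration of that idea, not the authors' exact design procedure, and the filters passed to it are placeholders:

```python
import numpy as np

def fir_freq(c, z):
    """Evaluate a causal FIR filter sum_n c[n] z^{-n} at complex points z."""
    c = np.asarray(c, dtype=complex)
    n = np.arange(len(c))
    return (c[None, :] * z[:, None] ** (-n[None, :])).sum(axis=1)

def design_synthesis_ls(h, analysis, D, U, Lg, n_freq=64):
    """Least-squares sketch of solving the conditions for G_i(z):
    for each alias index r, enforce
        H(W^r z^{1/D}) = sum_i H_i(W^{r''} z^{U/D}) G_i(z)
    on a grid of z = e^{jw}, with r'' = mod(r (U - D), D)."""
    W = np.exp(-2j * np.pi / D)
    w = np.linspace(-np.pi, np.pi, n_freq, endpoint=False)
    z = np.exp(1j * w)
    A_blocks, b_blocks = [], []
    for r in range(D):
        rpp = (r * (U - D)) % D                  # r'' from eqs. (11)-(12)
        zl = W ** r * np.exp(1j * w / D)         # points W^r z^{1/D}
        zu = W ** rpp * np.exp(1j * w * U / D)   # points W^{r''} z^{U/D}
        cols = [fir_freq(hi, zu) * z ** (-n)     # column for coefficient g_{i,n}
                for hi in analysis for n in range(Lg)]
        A_blocks.append(np.stack(cols, axis=1))
        b_blocks.append(fir_freq(h, zl))
    A = np.concatenate(A_blocks)
    b = np.concatenate(b_blocks)
    g, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Keep the real part; for real-coefficient inputs the imaginary
    # residue of the solution should be small.
    return g.real.reshape(len(analysis), Lg)

# Placeholder example with the chapter's two-channel sizes: a length-3
# direct scaling filter, length-6 analysis filters, length-9 synthesis filters.
gs = design_synthesis_ls([0.5, 1.0, 0.5],
                         [[0.25, 0.5, 0.5, 0.5, 0.25, 0.0],
                          [0.25, -0.5, 0.5, -0.5, 0.25, 0.0]],
                         D=2, U=3, Lg=9)
```

The returned array has one row of *Lg* coefficients per synthesis filter; whether the conditions are satisfied exactly or only approximately then depends on the chosen filters and lengths, as discussed above.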

The concrete design procedure depends on the applications in which the filter banks are utilized. For example, several constraints should be imposed on the analysis filters $H_i(z)$ so that the filter banks have no DC leakage, high coding gain, etc., when the filter banks are exploited in sub-band image coding.

The analysis filters should have high attenuation in the stop band, i.e., excellent frequency decomposition performance, when the filter banks are employed for watermarking, since the watermarks are often embedded only into one narrow sub-band that is robust to attacks.

These examples show that the analysis filters typically suffer from severe constraints. In this section, we consider the case where the analysis and synthesis filters are designed separately, i.e., only the $G_i(z)$ are determined so that the equations are satisfied with not only a pre-designed $H(z)$ but also given $H_i(z)$, in order not to place much of a burden on $H_i(z)$. On the other hand, we do not impose the symmetry constraints on $G_i(z)$, so that the equations are satisfied more precisely.

With this design policy, we here design filter banks. As examples, two-channel and three-channel filter banks are designed and explored. In the two-channel case the scaling factor is 3/2, and in the three-channel case it is 2/3. The length of the linear phase analysis filters is 6 in the two-channel case and 8 in the three-channel one. The length of the obtained (non-linear-phase) synthesis filters is 9 and 6, respectively. The length of the linear phase filter in the direct scaling structure is 3 and 2, respectively, and it has third-band and half-band amplitude characteristics, respectively. This is summarized in Table 1.

Please note that in the introduced filter bank structures, the direct scaling filters are not required for scaling, since the filter banks perform not only sub-band processing but also scaling. The direct scaling filters are used only to design the filter banks. The round brackets "(·)" are used in this sense in Table 1. It is observed that the direct scaling filters have some delays so that the conditions will be satisfied.


**Table 1.** The length of the designed filters in the introduced filter banks.

| The number of channels | The scaling factors | Analysis filters | Synthesis filters | Direct scaling filter |
|---|---|---|---|---|
| 2 | 3/2 | 6 | 9 | (3) |
| 3 | 2/3 | 8 | 6 | (2) |

For comparison, we also design nearly perfect reconstruction filter banks in the two-channel and three-channel cases, where the length of the analysis filters is 6 and 8, respectively. Please note that the sequential structure consists of the perfect reconstruction filter bank and the same direct scaling structure (except for the delays). This is also shown in Table 2. In the sequential structures, the coefficients of the analysis filters partially have a few zero values to satisfy the perfect reconstruction conditions. The sequential structures also require direct scaling filters to perform the scaling process.

We also explored the elapsed time of both systems in this case, which is also shown in the

We also explore the performance of image-scaling in the two-channel and three-channel cas‐ es. Several well-known test images ( 256×256 ) are used as the input to the filter banks. Table 4 shows the PSNR[dB] of the output of the introduced filter banks against that of the se‐ quential structures. It is observed that the obtained scaled images with the introduced filter banks are comparable to that of the sequential structures. However, it is not certain whether the proposed approach can yield the optimal solution for image-scaling. A detailed discus‐

2 147.60 148.88 145.40 146.64 146.56 3 122.80 123.96 121.02 122.12 121.82

The introduced filter banks as well as the sequential structures can be applicable to practical image processing applications. In this simulation, we examine the performance of both the systems in watermarking, where it is assumed that the watermark is embedded into a subband of the original image and then the watermarked image is rescaled, and that the water‐ mark is extracted from the rescaled image subsequently. We illustrate both the systems in

**Table 4.** The PSNR[dB] of the introduced filter banks against the sequential structures.

watermarking with Figure 5 and Figure 6 in which the scaling factor is 3/2.

**Figure 5.** The sequential structure employed for watermarking.

**Figure 6.** The introduced filter bank structure for watermarking.

**LENA BARBARA BOAT Building Lighthouse**

A Study on a Filter Bank Structure With Rational Scaling Factors and Its Applications

http://dx.doi.org/10.5772/ 52391

151

round brackets "(· )", where the length of the analysis filters is also 30, in Table 3.

sion on this issue is future work.

**The number of channels**


**Table 2.** The length of the designed filters in the sequential structures

The elapsed computation time required for signal scaling in both the sequential structure and the introduced filter bank is also compared. The computation is carried out on a com‐ puter with a 2.2 GHz processor. In this comparison, we examine the elapsed time with vari‐ ous scaling factors and length of the filters.

For example, the elapsed time is 0.2030, and 0.2660 in the introduced structure and the se‐ quential one designed above, respectively, when *D* =2 , *U* =3 and the input image size is 256×256 .

In addition, Table 3 shows the elapsed time[s] when *D* =5 , *U* =4, 6, 9, 11 , the length of the analysis filters in both systems is 10, and 30, the length of the filter in the direct scaling struc‐ ture is equal to the factor *U* , and the input image size is 256×256 .


**Table 3.** Comparison of the elapsed time[s] between the proposed and sequential schemes

It is observed that the elapsed time is proportional to the factor *U* and also the length of the filters, and that the elapsed time of the introduced filter banks is more than that of the se‐ quential structures when the synthesis filters are long with *U* = 9, 11. However, this draw‐ back is improved if each branch of the filter banks is completely carried out in parallel, i.e., the computation time of the filter banks is the same as that of each branch in the filter banks. We also explored the elapsed time of both systems in this case, which is also shown in the round brackets "(· )", where the length of the analysis filters is also 30, in Table 3.

We also explore the performance of image-scaling in the two-channel and three-channel cas‐ es. Several well-known test images ( 256×256 ) are used as the input to the filter banks. Table 4 shows the PSNR[dB] of the output of the introduced filter banks against that of the se‐ quential structures. It is observed that the obtained scaled images with the introduced filter banks are comparable to that of the sequential structures. However, it is not certain whether the proposed approach can yield the optimal solution for image-scaling. A detailed discus‐ sion on this issue is future work.


**Table 4.** The PSNR[dB] of the introduced filter banks against the sequential structures.

For comparison, we also design nearly perfect reconstruction filter banks in the two-channel and three-channel case, where the length of the analysis filters is 6, and 8, respectively. Please note that the sequential structure consists of the perfect reconstruction filter bank and the same direct scaling structure (except the delays). This is also shown in Table2.In the se‐ quential structures, the coefficients of the analysis filters partially have a few zero-values to satisfy the perfect reconstruction conditions. The sequential structures also require direct

| The number of channels | The scaling factor | Analysis filters | Synthesis filters | Direct scaling filters |
|---|---|---|---|---|
| 2 | 3/2 | 6 | 6 | 3 |
| 3 | 2/3 | 8 | 9 | 2 |

**Table 2.** The length of the designed filters in the sequential structures

The elapsed computation time required for signal scaling in the sequential structure and in the introduced filter bank is also compared. The computation is carried out on a computer with a 2.2 GHz processor. In this comparison, we examine the elapsed time with various scaling factors and filter lengths.

For example, the elapsed time is 0.2030 s in the introduced structure and 0.2660 s in the sequential one designed above when *D* = 2, *U* = 3 and the input image size is 256×256.

In addition, Table 3 shows the elapsed time [s] when *D* = 5 and *U* = 4, 6, 9, 11, the length of the analysis filters in both systems is 10 and 30, the length of the filter in the direct scaling structure is equal to the factor *U*, and the input image size is 256×256.

| The factor *U* | Elapsed time [s], Proposed | Elapsed time [s], Sequential | Synthesis filter length, Proposed | Synthesis filter length, Sequential |
|---|---|---|---|---|
| 4 | 0.375, 0.406 (0.281) | 0.437, 0.500 (0.359) | 8, 24 | 10, 30 |
| 6 | 0.390, 0.453 (0.297) | 0.453, 0.531 (0.390) | 12, 36 | 10, 30 |
| 9 | 0.437, 0.578 (0.375) | 0.484, 0.563 (0.421) | 18, 54 | 10, 30 |
| 11 | 0.453, 0.641 (0.391) | 0.500, 0.593 (0.485) | 22, 66 | 10, 30 |


**Table 3.** Comparison of the elapsed time[s] between the proposed and sequential schemes


The introduced filter banks, as well as the sequential structures, are applicable to practical image processing tasks. In this simulation, we examine the performance of both systems in watermarking, where the watermark is embedded into a sub-band of the original image, the watermarked image is rescaled, and the watermark is subsequently extracted from the rescaled image. Both systems are illustrated for watermarking in Figure 5 and Figure 6, in which the scaling factor is 3/2.

**Figure 5.** The sequential structure employed for watermarking.

**Figure 6.** The introduced filter bank structure for watermarking.

Please notice that the subsequent extracting stage in both systems requires down-sampling and also the use of a scaled version of the original image, in order to extract the watermark embedded in a sub-band of the original image. The only difference between the two systems is that the sub-band signals are synthesized and then rescaled sequentially in Figure 5, while they are rescaled in each sub-band and then synthesized in Figure 6. This also holds for other image processing applications such as image coding, recognition, etc.
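The embed-then-extract idea can be sketched in a simplified 1D form. The sketch below uses a one-level Haar decomposition and omits the rescaling step, so the names, the strength parameter `alpha`, and the sub-band choice are all illustrative assumptions, not the chapter's filter bank:

```python
import numpy as np

def haar_analyze(x):
    """One-level Haar analysis: return (low, high) sub-bands."""
    e, o = x[0::2], x[1::2]
    return (e + o) / np.sqrt(2.0), (e - o) / np.sqrt(2.0)

def haar_synthesize(lo, hi):
    """Exact inverse of haar_analyze."""
    x = np.empty(2 * len(lo))
    x[0::2] = (lo + hi) / np.sqrt(2.0)
    x[1::2] = (lo - hi) / np.sqrt(2.0)
    return x

def embed(x, w, alpha=0.1):
    """Embed watermark w into the high-frequency sub-band of x."""
    lo, hi = haar_analyze(x)
    return haar_synthesize(lo, hi + alpha * w)

def extract(y, x_orig, alpha=0.1):
    """Recover the watermark using the original signal x_orig."""
    _, hi_marked = haar_analyze(y)
    _, hi_orig = haar_analyze(x_orig)
    return (hi_marked - hi_orig) / alpha

rng = np.random.default_rng(0)
x = rng.standard_normal(64)                 # stand-in for an image row
w = rng.integers(0, 2, 32).astype(float)    # bi-level watermark
recovered = extract(embed(x, w), x)
```

Without any rescaling the extraction is exact; the PSNR losses reported in Table 5 come precisely from the re-sampling and filtering that the scaling stage inserts between embedding and extraction.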


A Study on a Filter Bank Structure With Rational Scaling Factors and Its Applications

http://dx.doi.org/10.5772/ 52391


|  | Watermark1, Proposed structure | Watermark1, Sequential structure | Watermark2, Proposed structure | Watermark2, Sequential structure |
|---|---|---|---|---|
| PSNR [dB] | 34.38 | 34.56 | 34.66 | 34.88 |

**Table 5.** The PSNR[dB] of the extracted watermark against the original watermark.


To demonstrate this, we use several original test images (256×256) and the bi-level watermark images (128×128) shown in Figure 7 and Figure 8 in both structures. The watermark is embedded into the high-frequency band of the original image because the low-frequency band is known to be visually susceptible to the watermarking process.

**Figure 7.** The bi-level image used as Watermark1.

**Figure 8.** The bi-level image used as Watermark2.

Table 5 shows the PSNR [dB] of the extracted watermark against the original watermark in both systems, in which the original image "LENA" is used. The watermarks obtained with the introduced filter banks are comparable to those of the sequential structures, while the computation time of the proposed filter bank is less than that of the sequential structures, as shown above.



We see, however, that the watermarks are not extracted with very high PSNR in either system, because re-sampling or filtering is carried out in the image-scaling and watermark-extracting stages.

Typically, watermarks can also be degraded or eliminated by attacks such as filtering, scaling, etc. Thus, they are more sensitive to such attacks than in usual watermarking (without scaling). A detailed discussion is also left for future work.

## **7. Conclusion**


This chapter has discussed a filter bank structure which directly synthesizes the analysis-decomposed signals to produce a scaled signal, leading to computational simplicity in complex processing such as sub-band processing followed by scaling.

First, we have discussed the frequency decomposition characteristics of the filter bank to show how to obtain arbitrarily scaled signals. The behavior of the aliasing components has also been examined, since it can significantly affect filter bank performance, as in traditional perfect reconstruction filter banks.

Next, theoretical conditions for designing a filter bank that is equivalent to the direct scaling structure have been given via the input-output relation of the filter bank. It has been noted that the number of conditions is proportional to the number of aliasing components, and that the conditions are mostly not satisfied when all the filters have linear phase. Therefore, we have reformulated the conditions so that they are the same regardless of the aliasing components, which means that the number of equations to solve is significantly reduced.

In addition, it has been demonstrated through simulation results that the computation time required for signal scaling with the proposed scheme is usually less than that of the sequential structures, and that the quality of the scaled images is comparable to that of the sequential structures even though not all the filters have linear phase characteristics.

Finally, we have discussed potential issues and advantages in employing the proposed scheme as well as traditional ones in practical image processing such as watermarking.

The findings of this research are that the solution derived using the proposed approach yields scaling and watermarking performance comparable to that of the usual sequential structures, which consist of filter banks and direct scaling processes, while the computation time required for scaling is less than that of the sequential scheme and the design procedure is considerably simplified.

## **Author details**

Fumio Itami\*

Address all correspondence to: itami@sit.ac.jp

Faculty of Engineering, Saitama Institute of Technology, Japan

## **References**

[1] Vaidyanathan, P. P. (1993). *Multirate Systems and Filter Banks*. Englewood Cliffs, NJ: Prentice-Hall.

[2] Vaidyanathan, P. P., & Kirac, A. (1998). Results on optimal biorthogonal filter banks. *IEEE Transactions on Circuits Syst. II*, Aug., 45.

[3] Soman, A. K., Vaidyanathan, P. P., & Nguyen, T. Q. (1993). Linear-phase orthogonal filter banks. *IEEE Transactions on Signal Processing*, Dec., 41.

[4] Strang, G., & Nguyen, T. (1996). *Wavelets and Filter Banks*. Wellesley, MA: Wellesley-Cambridge.

[5] Vetterli, M., & Kovacevic, J. (1995). *Wavelets and Subband Coding*. Englewood Cliffs, NJ: Prentice-Hall.

[6] Vetterli, M., & Herley, C. (1992). Wavelets and filter banks: Theory and design. *IEEE Transactions on Signal Processing*, Sep., 40, 2207-2232.

[7] Kovacevic, J., & Vetterli, M. (1993). Perfect Reconstruction Filter Banks with Rational Sampling Factors. *IEEE Transactions on Signal Processing*, Jun., 41, 2047-2066.

[8] Gopinath, R. A., & Burrus, C. S. (1994). On Upsampling, Downsampling, and Rational Sampling Rate Filter Banks. *IEEE Transactions on Signal Processing*, Apr., 42, 812-824.

[9] Wang, Y., Doherty, J., & Van Dyck, R. A. (2002). Wavelet-Based Watermarking Algorithm for Ownership Verification of Digital Images. *IEEE Transactions on Image Processing*, Feb., 11, 77-88.

[10] Lin, C. Y., Wu, M., Bloom, J. A., Cox, I. J., Miller, M. L., & Lui, Y. M. (2001). Rotation, scale, and translation resilient watermarking for images. *IEEE Transactions on Image Processing*, May, 10.

[11] Hernandez, J., Amado, M., & Perez-Gonzalez, F. (2000). DCT-domain watermarking techniques for still images: detector performance analysis and a new structure. *IEEE Transactions on Image Processing*, Jan., 9, 55-68.

[12] Pitas, I. (1998). A method for watermark casting on digital image. *IEEE Transactions on Circuits Syst. Video Technol.*, Oct., 8, 775-780.

[13] Woods, J. W., & O'Neil, S. (1986). Subband Coding of Images. *IEEE Transactions on Acoustics, Speech, and Signal Processing*, Oct., 34, 1278-1288.

[14] Oraintara, S., Tran, T. D., & Nguyen, T. Q. (2003). A Class of Regular Biorthogonal Linear-Phase Filterbanks: Theory, Structure, and Application in Image Coding. *IEEE Transactions on Signal Processing*, Dec., 51, 3220-3235.

[15] Unser, M., Aldroubi, A., & Eden, M. (1993). B-Spline Signal Processing: Part I-Theory. *IEEE Transactions on Signal Processing*, Feb., 41, 821-833.

[16] Unser, M., Aldroubi, A., & Eden, M. (1993). B-Spline Signal Processing: Part II-Efficient Design and Applications. *IEEE Transactions on Signal Processing*, Feb., 41, 834-848.

[17] Yang, S., & Nguyen, T. Q. (2002). Interpolated Mth-Band Filters for Image Size Conversion. *IEEE Transactions on Signal Processing*, Dec., 50.

[18] Itami, F., Watanabe, E., & Nishihara, A. (2006). Multirate Filter Bank-based Conversion of Image Resolution. *Proc. of IEEE Asia Pacific Conference on Circuits and Systems*, Dec., 1234-1237.


**Chapter 7**

**Digital Filter Implementation of Orthogonal Moments**

Barmak Honarvar Shakibaei Asli and Raveendran Paramesran

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/52191

> © 2013 Asli and Paramesran; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## **1. Introduction**

Accuracy in detection and robustness in performance are integral considerations in image-related research. However, since applications of such research often have real-time constraints, speed of computation, and its corollary, time saving, have also become increasingly important in the research agenda.

The importance of geometric moment (GM) invariants first received research attention when Hu [1] introduced them in his study. As they capture global features, GMs have been widely used in applications such as object classification, image and shape analysis, and edge detection [2-8]. However, since GMs are non-orthogonal, a set of continuous orthogonal moments, including Zernike and Legendre moments, was introduced by Teague [9]. Their desirable qualities of orthogonality, speed of computation, and robustness have found a wider range of applications, such as character recognition, two-dimensional (2D) direction-of-arrival estimation, and trademark segmentation and retrieval [10-13]. However, these continuous moments pose certain problems in computation: they require coordinate transformation and a suitable approximation of the continuous moment integrals. The transformation and approximation they require create additional opportunities for error in the computation of the feature descriptors. This recognition of the error potential in continuous orthogonal moments led to a new set of discrete orthogonal moments, such as Tchebichef, Krawtchouk, and Hahn moments, as these require neither coordinate transformation nor a discretization step in the moment computation [14-18].
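For reference, the discrete GM of order (p + q) of an image f(x, y) is the double sum of x^p y^q f(x, y) over the pixel grid; a minimal sketch of this standard definition:

```python
import numpy as np

def geometric_moment(image, p, q):
    """Discrete geometric moment m_pq = sum_x sum_y x**p * y**q * f(x, y)."""
    xs = np.arange(image.shape[0], dtype=np.float64) ** p
    ys = np.arange(image.shape[1], dtype=np.float64) ** q
    return xs @ image @ ys

f = np.ones((4, 4))
m00 = geometric_moment(f, 0, 0)   # total mass of the image
m10 = geometric_moment(f, 1, 0)   # first-order moment along x
```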

The computation of both continuous and discrete orthogonal moments can be realized directly or via the GMs. The complexity of deriving higher-order GMs using a direct calculation approach raises a significant challenge for real-time applications. A number of studies address this issue using different approaches. Hatamian [19] proposed a novel approach that uses 2D all-pole digital filters with separable impulse responses to implement the first 16 GMs. Wong and Siu [20] moved the delay element in the basic filter structure proposed in [19] from the feed-forward to the feedback path. Their method considered moment computation up to the third order. Using a similar structure of cascaded digital filters, Kotoulas and Andreadis showed that orthogonal moments can be generated from their outputs [21]. To reduce the chip area, they introduced an overflow counter in each of the basic filter structures [22]. However, the plurality of large bit-width adders in the digital filter structure presents a major challenge to chip synthesis, placement, and the routing process. Taking a different tack, Al-Rawi [23] generalized the relationship between the moments and the digital filter outputs using a recurrence formula.

to make them exact. To circumvent the problems arising from these two facts, we are proposing the use of only positive coefficient multipliers which then makes it possible to use lower digital filter outputs as the order of the moment increases in generation of GMs. As will be discussed in greater detail in Section 3, the proposed method is based on the formulation and under‐ standing of the impulse response of the digital filters, the unit step function to be used and their relationship with GMs. This formulation makes it possible for the digital filter outputs to be evaluated at earlier instances at *N* , *N* - 1, *N* - 2, …, *N* - *p*, where the lowest digital filter output value, sampled at *N* - *p* is for the highest order. Meanwhile, this set of output

Digital Filter Implementation of Orthogonal Moments

http://dx.doi.org/10.5772/52191

159

Recently, another types of discrete orthogonal moments such as Krawtchouk, dual Hahn, Racah, Meixner and Hahn moments have been introduced in image analysis community [16]– [17]. It was shown that they have better image representation capability than the continuous

One main difficulty concerning the use of moments as feature descriptors is their high computational complexity. To solve this problem, a number of fast algorithms have been reported in the literature [14]–[19]. Most of them concentrated on the fast computation of geometric moments and continuous orthogonal moments. This work examines various aspects; both theory and applications of image moment implementation using digital filter structures. Since these aspects can be discussed rather independently, we devote each chapter to the discussion of one particular aspect of moment structures. The following is a summary

**• Section 2: Orthogonal Moments.** Numerous types of orthogonal polynomial, both in 1D and 2D, have been described in traditional mathematical literature. In this chapter we present a survey of orthogonal moments that are of importance in image analysis. The literature on orthogonal moments is very broad, namely in the area of practical applications, and our survey has no claim on completeness. We divide orthogonal polynomials and orthogonal moments into two basic groups. The polynomials *orthogonal on a rectangle* originate from 1D orthogonal polynomials whose 2D versions were created as products of 1D polynomials in *x* and *y*. The main advantage of the moments orthogonal on a rectangle is that they preserve the orthogonality even on the sampled image. They can be made scaleinvariant but creating rotation invariants from them is very complicated. The polynomials *orthogonal on a disk* are intrinsically 2D functions. They are constructed as products of a radial factor (usually a 1D orthogonal polynomial) and angular factor which is usually a kind of harmonic function. When implementing these moments, an image must be mapped into a disk of orthogonality which creates certain resampling problems. On the other hand, moments orthogonal on a disk can easily be used for construction of rotation invariants

**• Section 3: A New Formulation of Geometric Moments from Lower Output Values of Digital Filters.** In this chapter we propose a new method to accelerate geometric moment's computation using digital filters based on the lower output values. It is shown in this chapter a brief reviews of the digital filter methods employed in [19] and [20]. First, a description of the proposed method that includes the theoretical formulation of the relationship between

values starts to decrease after *p* / 2 moment orders.

because they change under rotation in a simple way.

orthogonal moments.

of contents of the sections.

Two facts demand our attention at this point. One, in all the aforementioned literature, the basic concept involved in the computation of moments using the digital filter structure has remained unchanged over the past three decades [19]-[20] even though this digital filter structure was formulated when only low orders of moments were used. Two, many recent works have involved increasingly higher moment orders. They include, among others, invariant image watermarking (30 orders) [24], moving object reconstruction (55 orders) [25], and hand shape verification (60 orders) [26].

Two models have been proposed to deal with higher order moments. The usage of cascaded all-pole digital filters in generating higher order GMs for the formulation of orthogonal moments has been successfully explored by Kotoulas and Andreadis*.* In their work [21]-[22], the 40th and 70th high order Zernike and Tchebichef moments, respectively were obtained from the digital filter outputs and their transform coefficients. Their digital filter was based on the feedforward model, and for a *N* × *N* image, the digital filter output values for row filtering are sampled at *N* + 2, *N* + 3, *N* + 4, …, *N* + *p* + 2 , where *p* is the maximum order. The algorithm they propose, however, is undermined by a computational problem. At these time instances, the digital filter output values are much larger when compared to the earlier time instances. This is because the digital filter operates as an accumulator: their output values increase as the number of digital filters directly related to the order increases. The sample of digital filter output values obtained from the row filtering for each row are then used as inputs to the respective digital filters arranged in the column filtering. This further increases the size of the final digital filter outputs. Additionally, since the digital filter used is an approximation except for the first two orders, coefficients are then multiplied from second order onwards to make them exact. The coefficients, both positive and negative, are determined from the impulse response of the digital filter. The model proposed by Wong and Siu [20], though the digital filter outputs for all orders are sampled at their respective *N* , also suffers from a similar problem. The current approaches, it is clear, thus suffer from an increase in computational complexity arising from the large increase in the digital filter output values as the order increases [33]-[34].

The method of computation we propose in this work attempts to reduce computational complexity – and save time through a reduction in the number of additions – by addressing the problems arising from: 1) The increase in digital filter output values as the order of moment's increases and 2) the consequent use of positive and negative coefficient multipliers to make them exact. To circumvent the problems arising from these two facts, we are proposing the use of only positive coefficient multipliers which then makes it possible to use lower digital filter outputs as the order of the moment increases in generation of GMs. As will be discussed in greater detail in Section 3, the proposed method is based on the formulation and under‐ standing of the impulse response of the digital filters, the unit step function to be used and their relationship with GMs. This formulation makes it possible for the digital filter outputs to be evaluated at earlier instances at *N* , *N* - 1, *N* - 2, …, *N* - *p*, where the lowest digital filter output value, sampled at *N* - *p* is for the highest order. Meanwhile, this set of output values starts to decrease after *p* / 2 moment orders.

all-pole digital filters with separable impulse responses to implement the first 16 GMs. Wong and Siu [20] moved the delay element in the basic filter structure as proposed in [19] from the feed-forward to the feedback path. Their method considered moment computation of up to the third order. Using a similar structure of cascaded digital filters, Kotoulas and Andreadis showed orthogonal moments can be generated from their outputs [21]. To reduce the chip area size, they introduced an overflow counter in each of the basic filter structures [22]. However, the plurality of large bit-width adders in the digital filter structure presents a major challenge to chip synthesis, placement and the routing process. Taking a different tack, Al-Rawi [23] generalized the relationship between the moments and the digital filter outputs using a

Two facts demand our attention at this point. One, in all the aforementioned literature, the basic concept involved in the computation of moments using the digital filter structure has remained unchanged over the past three decades [19]-[20] even though this digital filter structure was formulated when only low orders of moments were used. Two, many recent works have involved increasingly higher moment orders. They include, among others, invariant image watermarking (30 orders) [24], moving object reconstruction (55 orders) [25],

Two models have been proposed to deal with higher order moments. The usage of cascaded all-pole digital filters in generating higher order GMs for the formulation of orthogonal moments has been successfully explored by Kotoulas and Andreadis*.* In their work [21]-[22], the 40th and 70th high order Zernike and Tchebichef moments, respectively were obtained from the digital filter outputs and their transform coefficients. Their digital filter was based on the feedforward model, and for a *N* × *N* image, the digital filter output values for row filtering are sampled at *N* + 2, *N* + 3, *N* + 4, …, *N* + *p* + 2 , where *p* is the maximum order. The algorithm they propose, however, is undermined by a computational problem. At these time instances, the digital filter output values are much larger when compared to the earlier time instances. This is because the digital filter operates as an accumulator: their output values increase as the number of digital filters directly related to the order increases. The sample of digital filter output values obtained from the row filtering for each row are then used as inputs to the respective digital filters arranged in the column filtering. This further increases the size of the final digital filter outputs. Additionally, since the digital filter used is an approximation except for the first two orders, coefficients are then multiplied from second order onwards to make them exact. The coefficients, both positive and negative, are determined from the impulse response of the digital filter. The model proposed by Wong and Siu [20], though the digital filter outputs for all orders are sampled at their respective *N* , also suffers from a similar problem. The current approaches, it is clear, thus suffer from an increase in computational complexity arising from the large increase in the digital filter output values as the order

The method of computation we propose in this work attempts to reduce computational complexity, and save time through a reduction in the number of additions, by addressing the problems arising from: 1) the increase in digital filter output values as the order of moments increases, and 2) the consequent use of positive and negative coefficient multipliers.


158 Digital Filters and Signal Processing



Recently, other types of discrete orthogonal moments, such as Krawtchouk, dual Hahn, Racah, Meixner and Hahn moments, have been introduced in the image analysis community [16]–[17]. It was shown that they have better image representation capability than the continuous orthogonal moments.

One main difficulty concerning the use of moments as feature descriptors is their high computational complexity. To solve this problem, a number of fast algorithms have been reported in the literature [14]–[19]. Most of them concentrate on the fast computation of geometric moments and continuous orthogonal moments. This work examines various aspects, both theoretical and applied, of image moment implementation using digital filter structures. Since these aspects can be discussed rather independently, we devote each section to the discussion of one particular aspect of moment structures. The following is a summary of the contents of the sections.




Digital Filter Implementation of Orthogonal Moments (http://dx.doi.org/10.5772/52191)

## **2. Moment functions**

Geometric moments (GMs) and complex moments (CMs) are the simplest among moment functions, with the kernel function defined as a product of powers of the pixel coordinates. These types of moments are non-orthogonal, and because of this the inverse GM or CM formulation is not possible in general. On the other hand, orthogonal moment functions have been widely used for image analysis. The orthogonal moment functions are based on orthogonal polynomials such as the Legendre, Zernike, Tchebichef and Krawtchouk polynomials. All these moment functions play an important role, with the underlying polynomials defined over either continuous or discrete domains.

#### **2.1. Geometric and complex moments**

GMs are defined with the basis kernel $\{x^p y^q\}$. The $(p+q)$th order two-dimensional GMs are denoted by $m_{pq}$, and can be expressed as

$$m_{pq}=\iint_{-\infty}^{+\infty} x^{p} y^{q} f(x, y)\,dx\,dy \tag{1}$$

where $f(x, y)$ is the image intensity function. GMs of low orders have an intuitive meaning: $m_{00}$ is the "mass" of the image (for binary images, $m_{00}$ is the area of the object), while $m_{01}/m_{00}$ and $m_{10}/m_{00}$ define the *center of gravity* or *centroid* of the image. Second-order moments $m_{02}$ and $m_{20}$ describe the "distribution of mass" of the image with respect to the coordinate axes; in mechanics they are called the *moments of inertia*. Another popular mechanical quantity, the *radius of gyration* with respect to an axis, can also be expressed in terms of moments, as $\sqrt{m_{02}/m_{00}}$ and $\sqrt{m_{20}/m_{00}}$, respectively.
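As a minimal numerical illustration (the helper below is ours, not part of the chapter), the mass and centroid of a discrete grey-level image follow directly from the definition of $m_{pq}$, with $x$ as the column index and $y$ as the row index:

```python
def mass_and_centroid(img):
    """Return m00 and the centroid (m10/m00, m01/m00) of img[y][x]."""
    m00 = m10 = m01 = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            m00 += v          # zeroth-order moment: total "mass"
            m10 += x * v      # first-order moment about x
            m01 += y * v      # first-order moment about y
    return m00, (m10 / m00, m01 / m00)
```

For an image with all its mass in one pixel, the centroid is that pixel's coordinates.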

Another popular choice of the polynomial basis, $(x+iy)^p (x-iy)^q$ where $i$ is the imaginary unit, leads to *complex moments*


$$c_{pq}=\iint_{-\infty}^{+\infty} (x+iy)^{p}(x-iy)^{q} f(x, y)\,dx\,dy \tag{2}$$

GMs and CMs carry the same amount of information. Each CM can be expressed in terms of GMs as

$$c_{pq}=\sum_{k=0}^{p}\sum_{l=0}^{q}\binom{p}{k}\binom{q}{l}(-1)^{q-l}\, i^{\,p+q-k-l}\, m_{k+l,\,p+q-k-l} \tag{3}$$

and vice versa

the digital filter outputs and GMs is provided. Second, we discuss the computational complexity with both an artificial and two real images, and the computational time. Finally, the study is concluded with some suggestions for future research.

**• Section 4: A Reduced 2D Digital Filter Structure for Fast Computation of Geometric Moments.** It is shown in this chapter how to design a reduced 2D digital filter grid for the fast computation of geometric moments. For this design, the 1D and 2D all-pole digital filter design procedure using the Z-transform properties is described, together with the recurrence equations for the desired filter outputs. The work reviews the digital filter designs used in [3] and [4], and shows the implementation of the proposed architecture. Finally, the computation results of the proposed method and of the method used in [3] are illustrated.

**• Section 5: Conclusions.** The presentation is concluded, summarizing the contents of the work and discussing possibilities which may be open for future research.


$$m_{pq}=\sum_{k=0}^{p}\sum_{l=0}^{q}\binom{p}{k}\binom{q}{l}\frac{(-1)^{q-l}}{2^{p+q}\, i^{\,q}}\, c_{k+l,\,p+q-k-l} \tag{4}$$

CMs are introduced because they behave favorably under image rotation. This property can be advantageously employed when constructing invariants with respect to rotation.
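As a quick numerical check of eq. (3), the sketch below (the helper names are ours, not from the chapter) computes the geometric moments of a small image, converts them into a complex moment, and verifies the identity $c_{11} = m_{20} + m_{02}$ that follows from $(x+iy)(x-iy) = x^2 + y^2$. Here `img[y][x]` holds intensities, with $x$ the column index:

```python
from math import comb

def geometric_moments(img, max_order):
    """All m_pq with p + q <= max_order of img[y][x]."""
    return {(p, q): sum(v * x**p * y**q
                        for y, row in enumerate(img)
                        for x, v in enumerate(row))
            for p in range(max_order + 1) for q in range(max_order + 1 - p)}

def complex_moment(m, p, q):
    """c_pq from a dict of geometric moments via eq. (3)."""
    return sum(comb(p, k) * comb(q, l) * (-1) ** (q - l)
               * 1j ** (p + q - k - l) * m[(k + l, p + q - k - l)]
               for k in range(p + 1) for l in range(q + 1))
```

The conversion can also be cross-checked against the direct discrete sum of eq. (2).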

#### **2.2. Orthogonal moments**

If the polynomial basis $\{p_{kj}(x, y)\}$ is orthogonal, i.e. if its elements satisfy the condition of orthogonality

$$\iint_{G} P_{pq}(x, y)\,P_{mn}(x, y)\,dx\,dy=0 \tag{5}$$

or weighted orthogonality

$$\iint_{G} w(x, y)\,P_{pq}(x, y)\,P_{mn}(x, y)\,dx\,dy=0 \tag{6}$$

for any indices $p \neq m$ or $q \neq n$, we speak about *orthogonal (OG) moments*. $G$ is the area of orthogonality.

In theory, all polynomial bases of the same degree are equivalent because they generate the same space of functions. Any moment with respect to a certain basis can be expressed in terms of moments with respect to any other basis. From this point of view, OG moments of any type are equivalent to geometric moments.

However, a significant difference appears when considering stability and computational issues in a discrete domain. Standard powers are nearly dependent both for small and large values of the exponent and increase rapidly in range as the order increases. This leads to correlated geometric moments and to the need for high computational precision. Using lower precision results in unreliable computation of geometric moments. OG moments can capture the image features in an improved, non-redundant way. They also have the advantage of requiring lower computing precision, because we can evaluate them using recurrence relations, without expressing them in terms of standard powers.

Unlike geometric moments, OG moments are coordinates of *f* in the polynomial basis in the common sense used in linear algebra. Thanks to this, the image reconstruction from OG moments can be performed easily as

$$f(x, y)=\sum_{j}\sum_{k} M_{kj}\, P_{kj}(x, y) \tag{7}$$


#### **2.3. Continuous moments**

Some orthogonal moments are defined in terms of a continuous variable, such as the Legendre, Zernike and Gaussian-Hermite moments. All of these continuous moments are based on the corresponding continuous orthogonal polynomials. Here, we discuss these polynomial functions and moments.

#### *2.3.1. Legendre moments*

There have been many works describing the use of the Legendre moments in image processing, e.g. references [29] and [30], among many others. The *Legendre moments* are defined as

$$\lambda_{pq}=\frac{(2p+1)(2q+1)}{4}\iint_{-1}^{+1} P_{p}(x)\,P_{q}(y)\,f(x, y)\,dx\,dy; \quad p, q = 0, 1, 2, \ldots \tag{8}$$

where $P_p(x)$ is the $p$th degree *Legendre polynomial* (expressed by the so-called Rodrigues formula)

$$P_{p}(x)=\frac{1}{2^{p}\, p!}\,\frac{d^{p}}{dx^{p}}\left(x^{2}-1\right)^{p} \tag{9}$$

and the image $f(x, y)$ is mapped into the square $(-1, 1) \times (-1, 1)$. The Legendre polynomials of low degrees expressed in terms of $x^p$ are

$$\begin{aligned} P_0(x) &= 1,\\ P_1(x) &= x,\\ P_2(x) &= \tfrac{1}{2}\left(3x^2 - 1\right),\\ P_3(x) &= \tfrac{1}{2}\left(5x^3 - 3x\right),\\ P_4(x) &= \tfrac{1}{8}\left(35x^4 - 30x^2 + 3\right). \end{aligned} \tag{10}$$

The relation of orthogonality is

$$\int\_{-1}^{1} P\_p(\mathbf{x}) P\_q(\mathbf{x}) d\mathbf{x} = \frac{2}{2q+1} \delta\_{pq} \tag{11}$$

The recurrence relation, which can be used for efficient computation of the Legendre polynomials, is

$$\begin{aligned} P_0(x) &= 1,\\ P_1(x) &= x,\\ P_{p+1}(x) &= \frac{2p+1}{p+1}\, x\, P_p(x) - \frac{p}{p+1}\, P_{p-1}(x). \end{aligned} \tag{12}$$
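A minimal sketch of the recurrence in eq. (12) (the function name is ours, not from the chapter):

```python
def legendre_all(p_max, x):
    """Return [P_0(x), ..., P_pmax(x)] via the three-term recurrence of eq. (12)."""
    vals = [1.0, x]                       # P_0 and P_1
    for p in range(1, p_max):
        # P_{p+1} = ((2p+1) x P_p - p P_{p-1}) / (p+1)
        vals.append(((2 * p + 1) * x * vals[p] - p * vals[p - 1]) / (p + 1))
    return vals[:p_max + 1]
```

At $x = 0.5$ this reproduces the closed forms of eq. (10), e.g. $P_4(0.5) = \tfrac{1}{8}(35 \cdot 0.5^4 - 30 \cdot 0.5^2 + 3)$.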

#### *2.3.2. Zernike moments*


*Zernike moments* (ZMs) were introduced into image analysis about 30 years ago by Teague [9], who used ZMs to construct rotation invariants. He used the fact that ZMs keep their magnitude under arbitrary rotation. He also showed that the Zernike invariants of the second and third orders are equivalent to the Hu invariants when expressed in terms of geometric moments. He presented the invariants up to the eighth order in explicit form, but gave no general rule for deriving them. Later, Wallin [31] described an algorithm for the formation of rotation invariants of any order. Numerical properties of ZMs and their possible applications in image processing have been studied as well. ZMs of the $p$th order with repetition $q$ are defined as

$$A_{pq}=\frac{p+1}{\pi}\int_{0}^{2\pi}\!\!\int_{0}^{1} V_{pq}^{*}(r, \theta)\, f(r, \theta)\, r\,dr\,d\theta, \quad p=0, 1, 2, \ldots\quad q=-p,\, -p+2,\, \ldots,\, p \tag{13}$$

i.e. the difference $p - |q|$ is always even. The asterisk denotes the complex conjugate. The *Zernike polynomials* are defined as products

$$V_{pq}(r, \theta)=R_{pq}(r)\,e^{iq\theta} \tag{14}$$

where the radial part is

$$R_{pq}(r)=\sum_{k=|q|,\,|q|+2,\,\ldots}^{p} B_{pqk}\, r^{k} \tag{15}$$

The coefficients

$$B_{pqk}=\frac{(-1)^{\frac{p-k}{2}}\left(\frac{p+k}{2}\right)!}{\left(\frac{p-k}{2}\right)!\left(\frac{k+q}{2}\right)!\left(\frac{k-q}{2}\right)!} \tag{16}$$

can be used for conversion from geometric moments,

$$A_{pq}=\frac{p+1}{\pi}\sum_{k=|q|,\,|q|+2,\,\ldots}^{p}\;\sum_{j=0}^{\frac{k-|q|}{2}}\;\sum_{l=0}^{|q|}\binom{\frac{k-|q|}{2}}{j}\binom{|q|}{l}\, w^{l}\, B_{pqk}\, m_{k-2j-l,\,2j+l} \tag{17}$$

where

$$w=\begin{cases} -i, & q>0\\ \phantom{-}i, & q\le 0 \end{cases} \tag{18}$$
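The radial part of eqs. (15)-(16) can be evaluated directly from the coefficients $B_{pqk}$; a small sketch (the function name is ours, not from the chapter):

```python
from math import factorial

def zernike_radial_direct(p, q, r):
    """R_pq(r) via eqs. (15)-(16); R_pq vanishes when p - |q| is odd."""
    q = abs(q)                       # R_{p,-q}(r) = R_{p,q}(r)
    if (p - q) % 2:
        return 0.0
    total = 0.0
    for k in range(q, p + 1, 2):     # k = |q|, |q|+2, ..., p
        b = ((-1) ** ((p - k) // 2) * factorial((p + k) // 2)
             / (factorial((p - k) // 2) * factorial((k + q) // 2)
                * factorial((k - q) // 2)))
        total += b * r ** k
    return total
```

For instance, $R_{20}(r) = 2r^2 - 1$ and $R_{42}(r) = 4r^4 - 3r^2$ fall out of the summation.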


The Zernike polynomials satisfy the relation of orthogonality

$$\int_{0}^{2\pi}\!\!\int_{0}^{1} V_{pq}^{*}(r, \theta)\, V_{kl}(r, \theta)\, r\,dr\,d\theta=\frac{\pi}{p+1}\,\delta_{kp}\,\delta_{lq} \tag{19}$$

The recurrence relation for the radial part is

$$R_{pq}(r)=\frac{2\,p\,r}{p+q}\,R_{p-1,\,q-1}(r)-\frac{p-q}{p+q}\,R_{p-2,\,q}(r) \tag{20}$$

Computation of the Zernike polynomials by this formula must commence with

$$R_{pp}(r)=R_{p,-p}(r)=r^{p}, \qquad p=0, 1, \ldots$$
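The recurrence of eq. (20), seeded with $R_{pp}(r) = r^p$, can be sketched as follows (a naive recursive illustration of ours; a production implementation would tabulate the intermediate polynomials instead of recomputing them):

```python
def zernike_radial(p, q, r):
    """R_pq(r) via the recurrence of eq. (20), started from R_pp(r) = r**p."""
    q = abs(q)                     # R_{p,-q}(r) = R_{p,q}(r)
    if p == q:
        return r ** p              # starting values of the recurrence
    if (p - q) % 2:                # R_pq vanishes when p - |q| is odd
        return 0.0
    # eq. (20): R_pq = 2pr/(p+q) R_{p-1,q-1} - (p-q)/(p+q) R_{p-2,q}
    return (2.0 * p * r / (p + q) * zernike_radial(p - 1, q - 1, r)
            - (p - q) / (p + q) * zernike_radial(p - 2, q, r))
```

The recurrence reproduces the closed forms obtained from eqs. (15)-(16), e.g. $R_{40}(r) = 6r^4 - 6r^2 + 1$.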

#### **2.4. Discrete moments**

There is a group of orthogonal polynomials defined directly on a series of points, and therefore they are especially suitable for digital images. Examples of such polynomials are the Tchebichef, Krawtchouk, Hahn, dual Hahn and Meixner polynomials.

#### *2.4.1. Tchebichef moments*

The 2D TMs of order $(p+q)$ of an image intensity function $f(n, m)$ with size $N \times M$ are defined as

$$T_{pq}=A(p, N)\,A(q, M)\sum_{n=0}^{N-1}\sum_{m=0}^{M-1} t_{p}(n)\, t_{q}(m)\, f(n, m) \tag{21}$$

where $t_p(n)$ is the $p$th order orthogonal discrete Tchebichef polynomial defined by [8]

$$t_{p}(n)=p!\sum_{k=0}^{p}(-1)^{p-k}\binom{N-1-k}{p-k}\binom{p+k}{p}\binom{n}{k} \tag{22}$$

and

$$A(p, N)=\frac{\beta(p, N)}{\rho(p, N)}$$

where $\beta(p, N)$ is a normalization factor. The simplest choice of this factor is $N^p$. The recurrence relation of Tchebichef polynomials with respect to the chosen order is:

$$(p+1)\,t_{p+1}(n)-(2p+1)(2n-N+1)\,t_{p}(n)+p\left(N^{2}-p^{2}\right)t_{p-1}(n)=0 \tag{23}$$

where $p \ge 1$ and the first two polynomials are $t_0(n) = 1$ and $t_1(n) = 2n - N + 1$. The orthogonality property satisfies the following squared norm:

$$
\rho(p, N) = (2\,p)! \binom{N+p}{2\,p+1} \tag{24}
$$
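The recurrence of eq. (23) gives an efficient way to tabulate $t_p(n)$ without evaluating the binomial sum of eq. (22); a short sketch (the function name is ours, not from the chapter):

```python
def tchebichef(p, n, N):
    """t_p(n) via the recurrence of eq. (23), with t_0 = 1 and t_1 = 2n - N + 1."""
    t_prev, t_cur = 1.0, 2.0 * n - N + 1
    if p == 0:
        return t_prev
    for k in range(1, p):
        # (k+1) t_{k+1} = (2k+1)(2n-N+1) t_k - k (N^2 - k^2) t_{k-1}
        t_prev, t_cur = t_cur, (((2 * k + 1) * (2 * n - N + 1) * t_cur
                                 - k * (N ** 2 - k ** 2) * t_prev) / (k + 1))
    return t_cur
```

The resulting polynomials are orthogonal over the points $n = 0, \ldots, N-1$, which can be verified numerically.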

#### *2.4.2. Krawtchouk moments*


The $n$th order classical Krawtchouk polynomial is defined as

$$K_{n}(x;\,p,\,N)=\sum_{k=0}^{n} a_{k,n,p}\, x^{k}={}_{2}F_{1}\!\left(-n,\,-x;\,-N;\,\frac{1}{p}\right) \tag{25}$$

where $x, n = 0, 1, 2, \ldots, N$, $N > 0$, $p \in (0, 1)$ and ${}_2F_1(\cdot)$ is the hypergeometric function

$${}_{2}F_{1}(a,\,b;\,c;\,z)=\sum_{k=0}^{\infty}\frac{(a)_{k}(b)_{k}}{(c)_{k}}\,\frac{z^{k}}{k!} \tag{26}$$

and $(x)_k$ is the Pochhammer symbol given by

$$(x)_{k}=x(x+1)(x+2)\cdots(x+k-1),\quad k\ge 1, \qquad (x)_{0}=1 \tag{27}$$

The normalized and weighted Krawtchouk polynomials $\{\bar{K}_n(x;\,p,\,N)\}$ are defined as [10]

$$\bar{K}_{n}(x;\,p,\,N)=K_{n}(x;\,p,\,N)\sqrt{\frac{w(x;\,p,\,N)}{\rho(n;\,p,\,N)}} \tag{28}$$

where the weight function, *w*(∙ ) and the square norm, *ρ*(∙ ) are given as

$$w(x; p, N) = \binom{N}{x} p^x (1 - p)^{N - x} \tag{29}$$

and

$$\rho(n;\,p,\,N)=\left(\frac{p-1}{p}\right)^{n}\frac{n!}{(-N)_{n}} \tag{30}$$
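Eqs. (25)-(30) can be checked numerically. The sketch below (helper names are ours, not from the chapter) evaluates the terminating series of eq. (25) and verifies the weighted orthogonality $\sum_x w(x;\,p,\,N)\,K_n K_m = \rho(n;\,p,\,N)\,\delta_{nm}$ for a small case:

```python
from math import comb, factorial

def poch(a, k):
    """Pochhammer symbol (a)_k of eq. (27)."""
    out = 1.0
    for i in range(k):
        out *= a + i
    return out

def krawtchouk(n, x, p, N):
    """K_n(x; p, N) as the terminating hypergeometric series of eqs. (25)-(26)."""
    return sum(poch(-n, k) * poch(-x, k) / (poch(-N, k) * factorial(k))
               * (1.0 / p) ** k
               for k in range(n + 1))

def weight(x, p, N):
    """Weight function w(x; p, N) of eq. (29)."""
    return comb(N, x) * p ** x * (1 - p) ** (N - x)

def norm(n, p, N):
    """Squared norm rho(n; p, N) of eq. (30)."""
    return ((p - 1) / p) ** n * factorial(n) / poch(-N, n)
```

For $p = 0.5$ and $N = 4$, $K_1(x) = 1 - x/2$ and $\rho(2) = 1/6$, which the assertions below confirm.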

The normalized and weighted Krawtchouk polynomials have the following three-term recurrence relation:


**Figure 1.** Single-pole filter structure [19] in feedforward path.

The 1D cascaded filter structure is shown in Figure 2, where $H_0(z) = \frac{1}{z-1}$.

$$p(n-N)\,\bar{K}_{n+1}(x;\,p,\,N)=A\left[p(N-2n)+n-x\right]\bar{K}_{n}(x;\,p,\,N)-B\,n(1-p)\,\bar{K}_{n-1}(x;\,p,\,N) \tag{31}$$

where

$$A = \sqrt{\frac{(1 - p)(n + 1)}{p(N - n)}}$$

$$B = \sqrt{\frac{(1 - p)^2 (n + 1) n}{p^2 (N - n) (N - n + 1)}}$$

with

$$\bar{K}_0(x;\,p,\,N)=\sqrt{w(x;\,p,\,N)}, \qquad \bar{K}_1(x;\,p,\,N)=\left(1-\frac{x}{Np}\right)\sqrt{w(x;\,p,\,N)}.$$

The 2D Krawtchouk moment of order (*n* + *m*) of an image intensity function *f* (*x*, *y*) with size *N* ×*M* is defined as [16]

$$Q_{nm}=\sum_{x=0}^{N-1}\sum_{y=0}^{M-1}\bar{K}_{n}(x;\,p_{1},\,N-1)\,\bar{K}_{m}(y;\,p_{2},\,M-1)\,f(x, y) \tag{32}$$

The orthogonality property leads to the following inverse moment transform

$$f(x, y)=\sum_{n=0}^{N-1}\sum_{m=0}^{M-1} Q_{nm}\,\bar{K}_{n}(x;\,p_{1},\,N-1)\,\bar{K}_{m}(y;\,p_{2},\,M-1) \tag{33}$$

If only the moments of order up to $(N_{max}, M_{max})$ are computed, then the reconstructed image in (33) can be approximated by

$$\tilde{f}(x, y)=\sum_{n=0}^{N_{max}}\sum_{m=0}^{M_{max}} Q_{nm}\,\bar{K}_{n}(x;\,p_{1},\,N-1)\,\bar{K}_{m}(y;\,p_{2},\,M-1) \tag{34}$$

#### **3. Formulation of geometric moments using digital filters**

This section first reviews the generation of GMs using digital filter structure as proposed by Hatamian [19]. This is then followed by a review of the improved version used by Wong and Siu [20].

#### **3.1. Hatamian's model**


166 Digital Filters and Signal Processing


Hatamian proposed the all-pole digital filter structure to compute GMs up to the 16th order. The one-dimensional GM of order $p$ for an $N$-length sequence $x[n]$ is defined in this model as

$$m_p = \sum_{n=0}^{N-1} n^p\, x[n] \tag{35}$$

One reason for using digital filters as a moment generator is based on the convolution of the aforementioned sequence with the impulse response $n^p u[n]$, where $u[n]$ is the unit step function. Hence, the output of the digital filter $y_p$ evaluated at the point $n = N - 1$ can be expressed as:

$$y_p[N-1] = \sum_{k=0}^{N-1} x[k]\, (N-1-k)^p \tag{36}$$

where $x[n]$ is the reversed sequence. Figure 1 shows the structure of a single-pole digital filter with transfer function $\frac{1}{z-1}$, which is equivalent to an accumulator with unity feedback. This accumulator has a delay in the feed-forward path. For $p$ cascaded filters, the corresponding transfer function is given by

$$H_p(z) = \frac{1}{(z-1)^{p+1}} \tag{37}$$

**Figure 1.** Single-pole filter structure [19] in feedforward path.

The 1D cascaded filter structure is shown in Figure 2, where $H_0(z) = \frac{1}{z-1}$.
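The convolution view above can be checked numerically. The following sketch (illustrative Python, not the chapter's MATLAB code) confirms that convolving the reversed sequence with the kernel $n^p u[n]$ and sampling at $n = N - 1$ reproduces the directly computed GMs of (35):

```python
# Sketch of the moment-as-convolution identity of Sec. 3.1: sampling the
# output of the impulse response h_p[n] = n^p u[n] at n = N-1, with the
# reversed sequence as input, yields the geometric moment m_p of (35).
def moments_direct(x, pmax):
    # m_p = sum_n n^p x[n] for p = 0..pmax, per (35)
    return [sum(n ** p * v for n, v in enumerate(x)) for p in range(pmax + 1)]

def moment_via_filter(x, p):
    N = len(x)
    xr = x[::-1]  # reversed input sequence, per (36)
    # y_p[N-1] = sum_k xr[k] * (N-1-k)^p  (convolution sampled at n = N-1)
    return sum(xr[k] * (N - 1 - k) ** p for k in range(N))

x = [3, 1, 4, 1, 5, 9, 2, 6]
assert [moment_via_filter(x, p) for p in range(4)] == moments_direct(x, 3)
```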


**Figure 2.** Cascading of single-pole filters for generating moments up to order *p*.

The GMs and the all-pole digital filter outputs are related by the following expression:

$$m_p = \sum_{r=0}^{p} C_{p,r}\, y_r \tag{38}$$


Digital Filter Implementation of Orthogonal Moments

http://dx.doi.org/10.5772/52191


where $y_r$ is the $r$-th digital filter output and $C_{p,r}$ is a matrix of coefficients directly obtained from the impulse responses of the all-pole digital filters, as given in [27]:

$$C_{p,r} = \begin{cases} 0 & ; \; r > p \\ (-1)^p & ; \; r = 0, \; p \ge 0 \\ r\,C_{p-1,r-1} - (r+1)\,C_{p-1,r} & ; \; r > 0, \; p > 0 \end{cases} \tag{39}$$

For a two dimensional image, the relationship between the GMs and digital filters outputs can be expanded to

$$m_{p,q} = \sum_{r=0}^{p} C_{p,r} \sum_{s=0}^{q} C_{q,s}\, y_{r,s} \tag{40}$$
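The recurrence (39) can be sanity-checked numerically. The chapter does not restate the exact sampling instant behind (38), but one convention consistent with the recurrence is $y_r = \sum_n x[n] \binom{n+r}{r}$, under which (38) reduces to the polynomial identity $n^p = \sum_{r=0}^{p} C_{p,r} \binom{n+r}{r}$. The sketch below (an illustration with that inferred convention, not the authors' code) verifies the identity:

```python
# Consistency check of the C_{p,r} recurrence (39). The sampling
# convention y_r = sum_n x[n] C(n+r, r) is inferred, not quoted from the
# chapter; under it, (38) holds iff n^p = sum_r C_{p,r} C(n+r, r).
from math import comb

def C(p, r, memo={}):
    if (p, r) in memo:
        return memo[(p, r)]
    if r > p:
        v = 0
    elif r == 0:
        v = (-1) ** p
    else:
        v = r * C(p - 1, r - 1) - (r + 1) * C(p - 1, r)
    memo[(p, r)] = v
    return v

# verify n^p = sum_r C_{p,r} * C(n+r, r) for small orders
for p in range(6):
    for n in range(10):
        assert sum(C(p, r) * comb(n + r, r) for r in range(p + 1)) == n ** p
```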

However, the all-pole filter structure has delays in the feed-forward path; hence it causes an increase in the computation time.

#### **3.2. Wong and Siu's model**

To overcome this problem, Wong and Siu [20] moved the delay unit to the feedback path of the filter, as shown in Figure 3. The transfer function is given as

$$H_p(z) = \left(\frac{1}{1 - z^{-1}}\right)^{p+1} \tag{41}$$
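Each factor $1/(1 - z^{-1})$ in (41) is a running-sum accumulator, so the impulse response of $H_p(z)$ is the binomial sequence $\binom{n+p}{p} u[n]$, the form that reappears in (46). A brief sketch, for illustration only:

```python
# Impulse response of H_p(z) = (1/(1 - z^{-1}))^{p+1}: each factor is a
# running sum, so p+1 cascaded accumulators applied to a unit impulse
# yield the binomial sequence h_p[n] = C(n+p, p).
from itertools import accumulate
from math import comb

def impulse_response(p, length):
    h = [1] + [0] * (length - 1)      # unit impulse delta[n]
    for _ in range(p + 1):            # p+1 feedback accumulators
        h = list(accumulate(h))
    return h

for p in range(4):
    assert impulse_response(p, 8) == [comb(n + p, p) for n in range(8)]
```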

**Figure 3.** Single-pole filter structure [20] in feedback path.



In this case, however, the GMs are obtained from the digital filter outputs using a different coefficient matrix, as shown below:

$$m_p = \sum_{r=0}^{p} D_{p,r}\, y_r \tag{42}$$

where the coefficients $D_{p,r}$ are obtained from the following recurrence formula, shown in [23]:

$$D_{p,r} = \begin{cases} 0 & ; \; p > 0, \; r = 0 \\ 1 & ; \; p = 0, \; r = 0 \\ r\left(D_{p-1,r-1} - D_{p-1,r}\right) & ; \; \text{otherwise} \end{cases} \tag{43}$$
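As with (39), the recurrence (43) can be checked against a polynomial identity. Assuming the sampling convention $y_r = \sum_n x[n]\binom{n+r-1}{r}$ (an inference, not quoted from the chapter), (42) reduces to $n^p = \sum_{r=0}^{p} D_{p,r} \binom{n+r-1}{r}$, which the sketch below verifies:

```python
# Consistency check of the D_{p,r} recurrence (43). The sampling
# convention y_r = sum_n x[n] C(n+r-1, r) is an inferred assumption;
# under it, (42) holds iff n^p = sum_r D_{p,r} C(n+r-1, r).
from math import comb

def D(p, r, memo={}):
    if (p, r) in memo:
        return memo[(p, r)]
    if r == 0:
        v = 1 if p == 0 else 0
    elif r > p:
        v = 0                          # outside the coefficient triangle
    else:
        v = r * (D(p - 1, r - 1) - D(p - 1, r))
    memo[(p, r)] = v
    return v

# verify n^p = sum_r D_{p,r} * C(n+r-1, r) for small orders (n >= 1)
for p in range(6):
    for n in range(1, 10):
        assert sum(D(p, r) * comb(n + r - 1, r) for r in range(p + 1)) == n ** p
```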

For a two dimensional image, the relationship between the GMs and the improved digital filters structure outputs are related by

$$m_{p,q} = \sum_{r=0}^{p} D_{p,r} \sum_{s=0}^{q} D_{q,s}\, y_{r,s} \tag{44}$$

#### **3.3. Proposed method based on the lower output values of digital filters**

We begin with the one-dimensional case, whose results can easily be extended to two dimensions. Consider a digital filter with impulse response $h_p[n] = n^p u[n]$, where $u[n]$ is the unit step function and $p$ is the moment order. Now, assume the input of the digital filter is given as $x[n]$, as mentioned in (3.2). Based on the convolution theorem, the output $y_p[n]$ is therefore

$$y_p[n] = x[n] * h_p[n] \tag{45}$$

Using the digital filter shown in Figure 3 and changing the unit step function to $u[n+p]$ to accommodate the sampling of the digital filter outputs at earlier instances, we get the following:

$$y_p[n] = x[n] * \left[\binom{n+p}{p} u[n+p]\right] = \sum_{k=0}^{n+p} x[k] \binom{n-k+p}{p} \tag{46}$$

Substituting $n = N - p$ yields

$$y_p[N-p] = \sum_{k=0}^{N} x[k] \binom{N-k}{p} \tag{47}$$

Expanding the binomial coefficient and using the Stirling numbers, we get

$$y_p[N-p] = \frac{1}{p!} \sum_{k=0}^{N} \sum_{i=0}^{p} s_1(p, i)\, x[k]\, (N-k)^i \tag{48}$$


where *s*<sup>1</sup> (*p*, *i*) are the Stirling numbers of the first kind [28], which satisfy

$$\frac{(N-k)!}{(N-k-p)!} = \sum_{i=0}^{p} s_1(p, i)\, (N-k)^i \tag{49}$$

Using (3.2), we can rewrite (3.15) in terms of GMs as follows:

$$y_p[N-p] = \frac{1}{p!} \sum_{i=0}^{p} s_1(p, i)\, m_i \tag{50}$$

Now by taking the inverse of (3.16), the GMs can be obtained in terms of the digital filter outputs thus:

$$m_p = \sum_{r=0}^{p} r!\, s_2(p, r)\, y_r[N-r] \tag{51}$$

where *s*<sup>2</sup> (*p*, *r*) are the Stirling numbers of the second kind [28], and the Stirling numbers of the first and second kind can be considered to be inverses of one another:

$$\sum_{p=0}^{\max\{i,r\}} (-1)^{p-r}\, s_1(p, i)\, s_2(r, p) = \delta_{ir} \tag{52}$$

where *δir* is the Kronecker delta.
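The pipeline of (47) and (51) can be verified end-to-end on a small sequence: compute the early-sampled outputs $y_r[N-r]$ directly from the sum in (47), combine them with Stirling numbers of the second kind, and compare against the moments $\sum_k x[k](N-k)^p$ that (48) targets. A sketch for illustration (the input is treated as already reversed, per the convention of Sec. 3.1):

```python
# Sketch (illustrative Python, not the authors' MATLAB) of the proposed
# 1D pipeline: early-sampled filter outputs per (47), combined with
# Stirling numbers of the second kind per (51).
from math import comb, factorial

def s2(p, r):
    # Stirling numbers of the second kind, s2(p, r)
    if p == 0 and r == 0:
        return 1
    if p == 0 or r == 0:
        return 0
    return r * s2(p - 1, r) + s2(p - 1, r - 1)

x = [3, 1, 4, 1, 5, 9, 2, 6]   # input, treated as already reversed
N = len(x) - 1                 # (47) sums over k = 0..N

def y(r):
    # y_r[N - r] = sum_k x[k] C(N - k, r), eq. (47)
    return sum(x[k] * comb(N - k, r) for k in range(N + 1))

for p in range(4):
    m_direct = sum(x[k] * (N - k) ** p for k in range(N + 1))
    m_filter = sum(factorial(r) * s2(p, r) * y(r) for r in range(p + 1))
    assert m_filter == m_direct   # eq. (51)
```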

Notice now that, for order $p$, the digital filter outputs are sampled at $N - p$, unlike the previous works, which sampled at $N$ or later instances of $N$ [19]-[21]. As the order $p \ge 2$ is reached, the digital filter output values begin to decrease. This allows the use of low-value digital filter outputs for the formulation of GMs. The 2D moments can be obtained by expanding the 1D model for the digital filter outputs as follows:

$$m_{p,q} = \sum_{r=0}^{p} \sum_{s=0}^{q} r!\, s!\, s_2(p, r)\, s_2(q, s)\, y_{rs}[N-r,\, N-s] \tag{53}$$
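The same check extends separably to the 2D combination (53); a small illustrative sketch:

```python
# Separable 2D check of (53) on a small image (illustrative sketch;
# the image is treated as already reversed along both axes).
from math import comb, factorial

def s2(p, r):
    # Stirling numbers of the second kind
    if p == 0 and r == 0:
        return 1
    if p == 0 or r == 0:
        return 0
    return r * s2(p - 1, r) + s2(p - 1, r - 1)

x = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
N = len(x) - 1

def y(r, s):
    # y_rs[N - r, N - s] = sum_{k,l} x[k][l] C(N-k, r) C(N-l, s)
    return sum(x[k][l] * comb(N - k, r) * comb(N - l, s)
               for k in range(N + 1) for l in range(N + 1))

for p in range(3):
    for q in range(3):
        direct = sum(x[k][l] * (N - k) ** p * (N - l) ** q
                     for k in range(N + 1) for l in range(N + 1))
        combined = sum(factorial(r) * factorial(s) * s2(p, r) * s2(q, s) * y(r, s)
                       for r in range(p + 1) for s in range(q + 1))
        assert combined == direct   # eq. (53)
```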

#### **3.4. Experimental studies**

A set of experiments were carried out to validate the theoretical framework developed in the previous sections and to evaluate the performance of the proposed structure. This section is divided into 3 parts. In the first subsection, an artificial image of size 4×4 is used to generate GMs up to third order. The computational complexity of three algorithms – the algorithms of [19], [20] and the proposed method – is then analyzed and discussed in the second subsection. In the third subsection, the speed of the proposed method is compared with the speed achieved using [19] and [20].

#### *3.4.1. Artificial test image*


An artificial test image of size 4×4 was used to prove the validity of the proposed approach. In this case, the digital filter outputs up to the third order were generated. The intensity function of the test image is given in the following matrix:

$$x[m, n] = \begin{bmatrix} 111 & 114 & 109 & 101 \\ 106 & 102 & 107 & 110 \\ 103 & 116 & 113 & 108 \\ 112 & 104 & 105 & 115 \end{bmatrix}$$


The difference between the digital filter output values for [20] and the proposed structure is shown in Table 1. It is clear that, for orders higher than one, the proposed output values become much lower than those of [20] as the order increases.


| Digital filter outputs | Filter structure output values [20] | Proposed structure output values |
|---|---|---|
| $y_{00}$ | 1736 | 1736 |
| $y_{01}$ | 4326 | 4326 |
| $y_{02}$ | 8651 | 4325 |
| $y_{03}$ | 15148 | 2172 |
| $y_{10}$ | 4358 | 4358 |
| $y_{11}$ | 10870 | 10870 |
| $y_{12}$ | 21750 | 10880 |
| $y_{20}$ | 8742 | 4384 |
| $y_{21}$ | 21811 | 10941 |
| $y_{30}$ | 15331 | 2205 |

**Table 1.** A comparison of filter output values between [20] and the proposed method for an artificial image, *x*[*m*, *n*], up to third order.

#### *3.4.2. Computational complexity*

The advantage of the proposed method lies in the smaller digital filter output values as compared to [19] and [20]. However, it is still useful to study the computational complexity of these three methods and the direct method in terms of the number of additions and multiplications. The proposed method, [19] and [20] consist of two main steps: the digital filter outputs are obtained from the respective digital filter structure, and these outputs are then linearly combined to compute the GMs.

For a grayscale image of size $N \times N$ and GMs up to order $s$, where $s = p + q$, the numbers of additions and multiplications for the proposed method, [19] and [20] are shown in Table 2.





| Algorithm | Additions (digital filter stages) | Additions (filter outputs to GMs) | Multiplications (filter outputs to GMs) |
|---|---|---|---|
| [19] | $(s+1)\left(N+\frac{s+2}{2}\right)(N+1)$ | $\frac{s(s+1)(s+2)(s+7)}{24}$ | $\frac{s^4+10s^3+23s^2-34s-24}{24}$ |
| [20] | $(s+1)\left(N+\frac{s+2}{2}\right)N$ | $\frac{s(s-1)(s^2+3s+14)}{24}$ | $\frac{s(s-1)(s^2+3s+14)}{24}$ |
| Proposed method | $(s+1)\left(N(N+2)-\frac{(s+2)(s-3)}{6}\right)$ | $\frac{s(s-1)(s^2+3s+14)}{24}$ | $\frac{s(s-1)(s^2+3s+14)}{24}$ |

**Table 2.** Complexity analysis of GMs computation using digital filters for an image of size *N* × *N* and maximum order *s* = *p* + *q*.

It can be seen that even though the complexity of the linear combination stage for the proposed method is the same as that of [20], there is a saving in the number of additions at the digital filter stage. This is clearly shown in the following example: for $N = 512$ and $s = 45$, the number of additions needed in the filter stage for [20] is 12612096, while the proposed method requires just 12090594 additions. A summary of the complexity comparison for all three methods to compute GMs up to the 45th order is shown in Table 3.


| Algorithm | Additions | Multiplications |
|---|---|---|
| [19] | 12847524 | 210795 |
| [20] | 12791451 | 179355 |
| Proposed method | 12269949 | 179355 |

**Table 3.** Complexity analysis of GMs computation using digital filters for image of size 512 × 512 and *s* = 45.

For a 128×128 grayscale image, the advantage of the proposed filter structure as compared to [19] and [20] is clearly depicted in Figure 4.

**Figure 4.** Number of additions for a 128 × 128 grey-scaled image.
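The two filter-stage addition counts quoted for $N = 512$, $s = 45$ can be reproduced from closed-form expressions. Note that the exact algebraic forms below are reconstructed from the surrounding text and the quoted totals (an inference, not verbatim from the chapter), but they match the stated numbers exactly:

```python
# Reproduce the quoted filter-stage addition counts for N = 512, s = 45.
# The closed forms below are reconstructed (an inference, not quoted
# verbatim from the chapter), but they match the stated totals exactly.
N, s = 512, 45

adds_wong_siu = (s + 1) * (N + (s + 2) / 2) * N                  # [20]
adds_proposed = (s + 1) * (N * (N + 2) - (s + 2) * (s - 3) / 6)  # proposed

assert adds_wong_siu == 12612096
assert adds_proposed == 12090594
```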

#### *3.4.3. Speed performance and comparison studies*


The computation speed of the proposed method is compared with [19] and [20]. CPU elapsed time is used in the evaluation in all the performed numerical experiments. The codes were all written in MATLAB 7, the simulations were run on a 3 GHz Intel Core 2 machine with 2 GB of RAM, and the average time is used in the discussion. The images used for the experiments were the Pepper and Lena images, shown in Figure 5.

Table 4 shows the simulation time for GMs computation on the Pepper image of size 128×128 for orders of 5 to 55, in a step of 10. The same experiment was repeated on the Lena image of size 512×512. The simulation times are shown in Table 5.


| Moment order (*s* = *p* + *q*) | [19] | [20] | Proposed method |
|---|---|---|---|
| 5 | 0.4 | 0.39 | 0.388 |
| 15 | 1.12 | 1.104 | 1.05 |
| 25 | 2.03 | 1.97 | 1.81 |
| 35 | 3.28 | 3.14 | 2.82 |
| 45 | 5.15 | 4.87 | 4.32 |
| 55 | 7.96 | 7.48 | 6.63 |

**Table 4.** CPU elapsed time in milliseconds for the 128 × 128 Pepper's test image.


| Moment order (*s* = *p* + *q*) | [19] | [20] | Proposed method |
|---|---|---|---|
| 5 | 6.03 | 6.02 | 6.00 |
| 15 | 16.264 | 16.222 | 16.018 |
| 25 | 26.817 | 26.722 | 26.13 |
| 35 | 37.866 | 37.677 | 36.494 |
| 45 | 49.663 | 49.325 | 47.343 |
| 55 | 62.542 | 61.982 | 58.99 |

**Table 5.** CPU elapsed time in milliseconds for the 512 × 512 Lena's test image.

The tables clearly show that the proposed method requires less time than [19] and [20] to compute GMs of the same order. They also show that the time saving increases as the order of moments increases. In Figure 6 and Figure 7 we provide a comparison of the required CPU time between [19], [20] and the proposed method.

**Figure 5.** Test images: (a) Peppers and (b) Lena.


**Figure 6.** Linear scale of CPU time in seconds for the 128 × 128 gray-scale peppers image.



**Figure 7.** Linear scale of CPU time in seconds for the 512 × 512 gray-scale Lena image.

## **4. A reduced 2D digital filter structure for fast computation of geometric moments**

In the previous works, the linear transformation that relates the outputs of the digital filter structure to the moments is performed on an external PC. Finding a relationship between the geometric or orthogonal moments and the digital filter outputs has, however, remained the main concern of such works [19]-[20].

In all the aforementioned literature, the basic concept involved in the computation of geometric and other types of moments using the accumulator grid structure has remained unchanged over the past decades.

First, we summarize the design of all-pole digital filters using 1-D and 2-D Z-transforms [32]. We believe that establishing the correspondence between the time domain and the Z-domain is one of the most essential issues in the study of digital image moments. This section presents the results of a study aimed at formulating a new method to reduce the filter resources used in the digital filter structure. This study, though not exhaustive, provides a suitable classification framework that underlies a new approach to digital filter design for the computation of moments.

#### **4.1. Reduced digital filter structure**

Unlike the previous researchers, who used a 1-D Z-transform to derive a 2-D digital filter structure by cascading the filters in both the rows and the columns, we use a 2-D definition of the Z-transform to obtain the impulse response of the filter, which leads to a reduced digital filter structure as compared with [19]. The 2-D Z-transform of the image $f[m, n]$ is given as:

$$F(z_1, z_2) = \sum_{m=1}^{+\infty} \sum_{n=1}^{+\infty} f[m, n]\, z_1^{-m} z_2^{-n} \tag{54}$$

The impulse response for a 2-D image becomes:


$$h_{p,q}[m, n] = \binom{m+p}{p} u[m]\, \binom{n+q}{q} u[n] \tag{55}$$

and the transfer function *H <sup>p</sup>*,*q*(*z*1, *z*2) , in the 2-D Z-transform domain for this filter structure is shown as

$$H_{p,q}(z_1, z_2) = \frac{1}{\left(1 - z_1^{-1}\right)^{p+1} \left(1 - z_2^{-1}\right)^{q+1}} \tag{56}$$

Using the above transfer function, the relationship between the input and the output of the digital filter is:

$$Y_{p,q}(z_1, z_2) = \frac{X(z_1, z_2)}{\left(1 - z_1^{-1}\right)^{p+1} \left(1 - z_2^{-1}\right)^{q+1}} \tag{57}$$

Based on (4.4), the zero order of the digital filter output is derived as

$$Y_{00}(z_1, z_2) = \frac{X(z_1, z_2)}{\left(1 - z_1^{-1}\right)\left(1 - z_2^{-1}\right)} \tag{58}$$

Thereafter, a recurrence relationship between the previous and the next outputs of each digital filter for the rows and columns, as shown in Figure 8, can be obtained:

$$Y_{p+1,q}(z_1, z_2) = \frac{Y_{p,q}(z_1, z_2)}{1 - z_1^{-1}} \tag{59}$$

$$Y\_{p,q+1}(z\_1, z\_2) = \frac{Y\_{p,q}(z\_1, z\_2)}{1 - z\_2^{-1}} \tag{60}$$

By taking the inverse 2-D Z-transform of (4.6) and (4.7), we get the following:

**Figure 8.** Recurrence relationship between the digital filter outputs of the row and column.

$$y_{p+1,q}[m, n] = y_{p+1,q}[m-1, n] + y_{p,q}[m, n] \tag{61}$$

$$\mathbf{y}\_{p,q+1}\mathbf{\ulcorner}m,\ \mathbf{n}\mathbf{\ulcorner} = y\_{p,q+1}\mathbf{\ulcorner}m,\ \mathbf{n}\mathbf{\ulcorner} - \mathbf{1}\mathbf{\upuparrow} + y\_{p,q}\mathbf{\updownarrow}m,\ \mathbf{n}\mathbf{\updownarrow} \tag{62}$$
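Recurrences (61) and (62) state that each filter stage is simply a running sum along one image axis. As an illustrative sketch (not from the chapter; NumPy's `cumsum` stands in for the accumulator stages), one row stage followed by one column stage yields the zeroth-order output, whose endpoint is the sum of all pixels:

```python
import numpy as np

def row_stage(y):
    # eq. (61): y[m, n] = y[m-1, n] + y_in[m, n] -> running sum over the row index m
    return np.cumsum(y, axis=0)

def col_stage(y):
    # eq. (62): y[m, n] = y[m, n-1] + y_in[m, n] -> running sum over the column index n
    return np.cumsum(y, axis=1)

x = np.array([[5, 3, 6],
              [2, 4, 1]])

y00 = col_stage(row_stage(x))   # one accumulator per axis, cf. eq. (58)
print(y00[-1, -1])              # 21, the zeroth-order geometric moment of x
```

Higher-order outputs follow by chaining further `row_stage`/`col_stage` calls, exactly as the cascaded accumulators in the filter structure.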



Digital Filter Implementation of Orthogonal Moments

http://dx.doi.org/10.5772/52191


The implementation of the proposed digital filter structure for generating moments up to order (*p* + *q*) is shown in Figure 9.

**Figure 9.** Reduced digital filters for generating the 2-D geometric moments up to (*p* + *q*) order.

**Figure 10.** Digital filters for generating the 2-D geometric moments up to (*p* + *q*) order used in [19] and [20].


Compared with the digital filter structure used in [19] and [20], shown in Figure 10, the difference lies in where the outputs *y*<sub>*p*,*q*</sub> and *y*<sub>*p*</sub> are produced. In the proposed model, the outputs *y*00, *y*10, ..., *y*<sub>*p*0</sub> occur at the row filters, whereas in [19] and [20] they occur at the column filters. Hence, the saving in digital filters achieved by the proposed method is determined by the maximum order *p*. For example, in the design of 40th-order geometric moments, where the maximum *p* is 40, the proposed method uses 40 fewer digital filters than [19] and [20].

In our proposed method, we begin by showing the relationship between the digital filter outputs and the geometric moments by considering a 1-D image. The first few outputs of the digital filter are

$$\begin{aligned} y_0[N] &= \sum_{k=1}^{N} x[k]\, h_0[N-k] = \mu_0 \\ y_1[N] &= \sum_{k=1}^{N} x[k]\, h_1[N-k] = \sum_{k=1}^{N} x[k](N-k+1) = (N+1)\mu_0 - \mu_1 \\ y_2[N] &= \sum_{k=1}^{N} x[k]\, h_2[N-k] = \sum_{k=1}^{N} x[k]\,\tfrac{1}{2}(N-k+1)(N-k+2) = \tfrac{1}{2}(N+1)(N+2)\mu_0 - \tfrac{1}{2}(2N+3)\mu_1 + \tfrac{1}{2}\mu_2 \end{aligned} \tag{63}$$

Hence, by solving these equations, the above geometric moments can be obtained in terms of the digital filter outputs.

$$\begin{aligned} \mu_0 &= y_0 \\ \mu_1 &= (N+1)y_0 - y_1 \\ \mu_2 &= (N+1)^2 y_0 - (2N+3)y_1 + 2y_2 \end{aligned} \tag{64}$$
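Relations (63) and (64) can be checked numerically. The following illustrative sketch (not from the chapter) computes the cascade outputs using the accumulator impulse response *h*<sub>*p*</sub>[*n*] = C(*n*+*p*, *p*) and then recovers the first three moments exactly:

```python
from math import comb

x = [5, 3, 6, 2, 4, 1]   # 1-D test signal x[1..N]
N = len(x)

# y_p[N] = sum_{k=1..N} x[k] h_p[N-k], with h_p[n] = C(n+p, p), cf. eq. (63)
y = [sum(x[k - 1] * comb(N - k + p, p) for k in range(1, N + 1)) for p in range(3)]

# Geometric moments mu_p = sum_k k^p x[k]
mu = [sum(k ** p * x[k - 1] for k in range(1, N + 1)) for p in range(3)]

# Relations of eq. (64) hold exactly
assert mu[0] == y[0]
assert mu[1] == (N + 1) * y[0] - y[1]
assert mu[2] == (N + 1) ** 2 * y[0] - (2 * N + 3) * y[1] + 2 * y[2]
print(mu)   # [21, 63, 239]
```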

In matrix notation, the above relationship between the geometric moments and the digital filter outputs is expressed as

$$\mu_P = \mathbf{C}_N\, Y_P \tag{65}$$


where *μ<sub>P</sub>* is the geometric moment vector, *Y<sub>P</sub>* is the digital filter output vector, and *C<sub>N</sub>* is a square matrix whose elements are given by the recurrence relationship

$$\mathbf{C}_{pr} = \begin{cases} 0 & ;\; p < r \\ (-1)^p\, p! & ;\; p = r \\ (N+1)^p & ;\; r = 0 \\ -p\, \mathbf{C}_{p-1,r-1} + p \displaystyle\sum_{k=r}^{p-1} \frac{\mathbf{C}_{p-1,k}}{2^{k-r+1} \cdot 3^{\left\lfloor \frac{k-r+1}{2} \right\rfloor}} & ;\; \text{otherwise} \end{cases} \tag{66}$$

where ⌊·⌋ represents the *floor function*.
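The recurrence (66) can be checked with exact rational arithmetic: its rows for *p* ≤ 2 must reproduce the coefficients appearing in (64). A hedged Python sketch (the name `build_C` is illustrative, not from the chapter):

```python
from fractions import Fraction
from math import factorial

def build_C(P, N):
    # C[p][r] per the four cases of eq. (66); Fractions keep the sums exact
    C = [[Fraction(0)] * (P + 1) for _ in range(P + 1)]
    for p in range(P + 1):
        for r in range(p + 1):
            if p == r:
                C[p][r] = Fraction((-1) ** p * factorial(p))
            elif r == 0:
                C[p][r] = Fraction((N + 1) ** p)
            else:
                s = sum(C[p - 1][k] / (2 ** (k - r + 1) * 3 ** ((k - r + 1) // 2))
                        for k in range(r, p))
                C[p][r] = -p * C[p - 1][r - 1] + p * s
    return C

N = 5
C = build_C(2, N)
assert C[1][0] == N + 1 and C[1][1] == -1          # mu_1 = (N+1) y_0 - y_1
assert C[2][0] == (N + 1) ** 2                      # mu_2 coefficients of eq. (64)
assert C[2][1] == -(2 * N + 3) and C[2][2] == 2
```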

Following the same procedure used in obtaining the 1-D relationship between the digital filter outputs and geometric moments, it can also be extended to a 2-D image. This is done by taking the transpose of the digital filter outputs and the transpose of the matrix *C<sub>N</sub>*. For a 2-D image of dimensions *M* × *N*, the geometric moments can be obtained from

$$\mu_{pq} = \mathbf{C}_M\, \mathbf{Y}_{pq}^{T}\, \mathbf{C}_N^{T} \tag{67}$$

Also, the summation forms of (65) and (67) can be written as:

$$\mu_p = \sum_{r=0}^{p} \mathbf{C}_{pr}\, y_r \tag{68}$$

$$\mu_{pq} = \sum_{r=0}^{p} \sum_{s=0}^{q} \mathbf{C}_{pr}\, \mathbf{C}_{qs}\, y_{sr} \tag{69}$$
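As a sanity check of the 2-D relation, the following illustrative sketch (not from the chapter; the first array axis is taken as *m*) computes the endpoint filter outputs of an *M*×*N* image directly from the accumulator impulse responses, applies the coefficient rows of (64) along each dimension as in (67) (up to the index convention), and compares against moments computed from their definition:

```python
import numpy as np
from math import comb

x = np.array([[5, 3, 6],
              [2, 4, 1]])
M, N = x.shape
P = 2                                    # per-axis moment orders 0..2

# Endpoint outputs y_{a,b} = sum_{m,n} x[m,n] C(M-m+a, a) C(N-n+b, b)
Y = np.array([[sum(int(x[i, j]) * comb(M - 1 - i + a, a) * comb(N - 1 - j + b, b)
                   for i in range(M) for j in range(N))
               for b in range(P + 1)]
              for a in range(P + 1)])

def coeff(L):
    # Coefficient rows of eq. (64) for a dimension of length L
    return np.array([[1, 0, 0],
                     [L + 1, -1, 0],
                     [(L + 1) ** 2, -(2 * L + 3), 2]])

mu = coeff(M) @ Y @ coeff(N).T           # cf. eq. (67)

m = np.arange(1, M + 1)[:, None]
n = np.arange(1, N + 1)[None, :]
mu_direct = np.array([[(m ** p * n ** q * x).sum() for q in range(P + 1)]
                      for p in range(P + 1)])
assert np.array_equal(mu, mu_direct)
print(mu[0, 0], mu[1, 0], mu[0, 1], mu[1, 1])   # 21 28 42 55
```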

#### **4.2. Experimental results**

In this subsection, we begin with an example of determining the geometric moments up to third order for a 2-D image using the proposed method and the existing method [19]. Figure 11 shows the proposed digital filter structure and the structure used in [19]. As can be seen from Figure 11(a) and (b), the proposed filter structure uses three fewer digital filters. The difference between them is determined by the maximum order *p*. As the number of moment orders increases, the saving in digital filters grows. For example, if geometric moments up to order forty are used, the number of digital filters is reduced by forty compared with existing methods.

**Figure 11.** Digital filter structure for generating up to third order geometric moments for 2-D image. (a) proposed model and (b) existing model.

#### *4.2.1. Artificial test image*

Artificial test images of small size are used to demonstrate the validity of the proposed architecture. To illustrate the workings, an artificial image of size 2×3 is used to generate geometric moments up to the third order. The intensity function of the test image is represented by the following input matrix:

$$x[m, n] = \begin{bmatrix} 5 & 3 & 6 \\ 2 & 4 & 1 \end{bmatrix}$$

Since the input image has dimension 2×3, the endpoint of the filter outputs is 6. Therefore, all outputs generated by the row filters are defined at different intervals up to 6, as represented in Table 6. Table 7 shows the outputs generated by the column filters for the two sampled instants (3, 6). In this method the input image must be reversed, denoted as *x <sup>R</sup>*.
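For this 2×3 test image, the complete third-order moment set of eq. (70) can be checked directly from the definition *μ*<sub>*pq*</sub> = Σ<sub>*m*</sub> Σ<sub>*n*</sub> *m*<sup>*p*</sup> *n*<sup>*q*</sup> *x*[*m*, *n*]; an illustrative check (not from the chapter):

```python
import numpy as np

x = np.array([[5, 3, 6],
              [2, 4, 1]])              # the artificial 2x3 test image
m = np.arange(1, 3)[:, None]           # row indices m = 1, 2
n = np.arange(1, 4)[None, :]           # column indices n = 1, 2, 3

mu = {(p, q): int((m ** p * n ** q * x).sum())
      for p in range(4) for q in range(4) if p + q <= 3}

# Values of eq. (70)
assert mu[0, 0] == 21 and mu[0, 1] == 42 and mu[0, 2] == 98 and mu[0, 3] == 252
assert mu[1, 0] == 28 and mu[1, 1] == 55 and mu[1, 2] == 125
assert mu[2, 0] == 42 and mu[2, 1] == 81
assert mu[3, 0] == 70
```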


| *n* | 0 | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|---|
| *x <sup>R</sup>* | 0 | 1 | 4 | 2 | 6 | 3 | 5 |
| *y*0 | 0 | 1 | 5 | 7 | 6 | 9 | 14 |
| *y*1 | 0 | 1 | 6 | 13 | 6 | 15 | 29 |
| *y*2 | 0 | 1 | 7 | 20 | 6 | 21 | 50 |
| *y*3 | 0 | 1 | 8 | 28 | 6 | 27 | 77 |

**Table 6.** State table of the method used in [19] for obtaining the row filter outputs.

| *n* | 3 | 6 | *n* | 3 | 6 | *n* | 3 | 6 | *n* | 3 | 6 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| *y*0 | 7 | 14 | *y*1 | 13 | 29 | *y*2 | 20 | 50 | *y*3 | 28 | 77 |
| *y*00 | 7 | 21 | *y*10 | 13 | 42 | *y*20 | 20 | 70 | *y*30 | 28 | 105 |
| *y*01 | 7 | 28 | *y*11 | 13 | 55 | *y*21 | 20 | 90 |  |  |  |
| *y*02 | 7 | 35 | *y*12 | 13 | 68 |  |  |  |  |  |  |
| *y*03 | 7 | 42 |  |  |  |  |  |  |  |  |  |

**Table 7.** State table of the method used in [19] for obtaining the column filter outputs.

The highlighted outputs can be collected as the *Y* matrix in (65); together with the coefficients matrix (*D*) defined in (3.9), they generate the third-order moment set of the artificial image as follows:

$$\begin{aligned} \mu_{00} &= 21 & \mu_{01} &= 42 & \mu_{02} &= 98 & \mu_{03} &= 252 \\ \mu_{10} &= 28 & \mu_{11} &= 55 & \mu_{12} &= 125 \\ \mu_{20} &= 42 & \mu_{21} &= 81 \\ \mu_{30} &= 70 \end{aligned} \tag{70}$$

If we derive the moments of the same example with the proposed method, the endpoint of the procedure is also 6; the resulting digital filter outputs are listed in Tables 8 and 9. In this case, according to Figure 11(a), the number of digital filters is reduced by the maximum order *p*. Furthermore, this method does not require reversing the input sequence.

| *n* | 0 | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|---|
| *x <sup>T</sup>* | 0 | 5 | 2 | 3 | 4 | 6 | 1 |
| *y* | 0 | 5 | 7 | 3 | 7 | 6 | 7 |
| *y <sup>T</sup>* | 0 | 5 | 3 | 6 | 7 | 7 | 7 |
| *y*00 | 0 | 5 | 8 | 14 | 7 | 14 | 21 |
| *y*10 | 0 | 5 | 13 | 27 | 7 | 21 | 42 |
| *y*20 | 0 | 5 | 18 | 45 | 7 | 28 | 70 |
| *y*30 | 0 | 5 | 23 | 68 | 7 | 35 | 105 |

**Table 8.** State table of the proposed method for obtaining the row filter outputs.

| *n* | 3 | 6 | *n* | 3 | 6 | *n* | 3 | 6 | *n* | 3 | 6 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| *y*00 | 14 | 21 | *y*10 | 27 | 42 | *y*20 | 45 | 70 | *y*30 | 68 | 105 |
| *y*01 | 14 | 35 | *y*11 | 27 | 69 | *y*21 | 45 | 115 |  |  |  |
| *y*02 | 14 | 49 | *y*12 | 27 | 96 |  |  |  |  |  |  |
| *y*03 | 14 | 63 |  |  |  |  |  |  |  |  |  |

**Table 9.** State table of the proposed method for obtaining the column filter outputs.

Based on (66) and (67), we can derive the digital filter output matrix (*Y*) and the coefficient matrices (*C<sub>M</sub>*, *C<sub>N</sub>*), respectively. The set of third-order moments is then obtained as in (70).

#### *4.2.2. Real test image*

The images used for the experiments were the Pepper (128×128) and Lena (512×512) images shown in Figure 12(a) and (b). As shown in Figure 13(a) and (b), the number of additions for the digital filter processes is compared between [19] and the proposed method.

**Figure 12.** Test images (a) Pepper and (b) Lena.

**Figure 13.** Comparison of the number of additions in the digital filter part for [19] and the proposed method for (a) Pepper image and (b) Lena image.

## **5. Conclusions**

Orthogonal moments are widely used in image analysis and as pattern features in pattern classification. The computation of orthogonal moments can be achieved via geometric moments. Hatamian introduced cascaded digital filters, where each filter operates as an accumulator to generate geometric moments.

One of the weaknesses in using the outputs of the cascaded digital filters to generate the GMs is that the filter outputs increase exponentially as the orders of the moments increase. This work proposes a new formulation to solve this problem by sampling at earlier instances of *N*, *N* − 1, …, *N* + *p*, where *p* is the maximum moment order for an *N* × *N* image. This step then paved the way to use a set of lower digital filter output values. The work demonstrates the efficacy and validity of the new algorithm in two ways: one, by comparing its computational complexity with that of two state-of-the-art models proposed by Kotoulas and Andreadis [21] and Wong and Siu [20], and two, by carrying out a set of experiments on speed of computation comparing the results obtained using the proposed method with those obtained using existing methods. A number of findings indicate the superiority of the proposed method: (1) a saving of as much as 45% is achieved by the proposed method in the number of additions as the moment order approaches *N* when compared with the existing methods, and (2) this leads to less computational time for the proposed method to derive the GMs. This work has focused on the software implications of the algorithm. If the proposed method is implemented on FPGA or ASIC based platforms, a great saving in terms of bit-widths will be realized.

We also showed a reduced method where the number of digital filters used in generating the geometric moments is reduced by the maximum order *p* when compared with the existing methods. The proposed method is modeled using the 2-D Z-transform, the theoretical framework for the proposed reduced digital filter structure is developed, and the experimental results validate the performance.

## **Author details**

Barmak Honarvar Shakibaei Asli and Raveendran Paramesran

Electrical Engineering Department, University of Malaya, Kuala Lumpur, Malaysia

## **References**

[1] M. K. Hu. Visual pattern recognition by moment invariants, IRE Trans. Information Theory 8: 1962, 179–187.

[2] H. S. Hsu. Moment preserving edge detection and its application to image data compression, Opt. Eng. 32: 1993, 1596–1608.

[3] M. I. Heywood. Fractional central moment method for moment-invariant object classification, Proc. Inst. Elect. Eng. 142: 1995, 213–219.

[4] S. Ghosal and R. Mehrotra. Orthogonal moment operators for subpixel edge detection, Pattern Recognition 26: 1993, 295–306.

[5] S. O. Belkasim. Pattern recognition with moment invariants—A comparative study and new results, Pattern Recognition 24: 1991, 1117–1138.

[6] J. Flusser. Pattern recognition by affine moment invariants, Pattern Recognition 26: 1993, 167–174.

[7] J. F. Boyce and W. J. Hossack. Moment invariants for pattern recognition, Pattern Recognition Letters 1: 1983, 451–456.

[8] Dong Xu and Hua Li. Geometric moment invariant. Pattern Recognition 41: 2008, 240–249.

[9] M. R. Teague. Image analysis via the general theory of moments, J. Optical Society of America 70: 1980, 920–930.

[10] A. Goshtasby. Template matching in rotated images, IEEE Transactions on Pattern Analysis and Machine Intelligence PAMI-7: 1985, 338–344.

[11] V. Markandey and R. J. P. de Figueiredo. Robot sensing techniques based on high-dimensional moment invariants and tensors, IEEE Transactions on Robotics and Automation 8: 1992, 186–195.

[12] Y. S. Kim, W. Y. Kim. Content-based trademark retrieval system using visually salient feature, Image and Vision Computing 16: 1998, 931–939.

[13] C. W. Chong, P. Raveendran, R. Mukundan. A comparative analysis of algorithms for fast computation of Zernike moments, Pattern Recognition 36: 2003, 731–742.

[14] Yap P. T., Paramesran R., Ong S. H. Image analysis using Hahn moments, IEEE Transactions on Pattern Analysis and Machine Intelligence PAMI-11: 2007, 2057–2062.

[15] R. Mukundan, S. H. Ong, and P. A. Lee. Image analysis by Tchebichef moments, IEEE Transactions on Image Processing 10: 2001, 1357–1364.

[16] P. T. Yap, R. Paramesran, and S. H. Ong. Image analysis by Krawtchouk moments, IEEE Transactions on Image Processing 12: 2003, 1367–1377.

[17] G. Wang and S. Wang. Recursive computation of Chebyshev moment and its inverse transform, Pattern Recognition 39: 2006, 47–56.

[18] Guojun Zhang, Zhu Luo, Bo Fu, Bo Li, Jiaping Liao, Xiuxiang Fan, Zheng Xi. A symmetry and bi-recursive algorithm of accurately computing Krawtchouk moments, Pattern Recognition Letters 31: 2010, 548–554.

[19] M. Hatamian. A real-time two-dimensional moment generating algorithm and its single chip implementation, IEEE Transactions on Acoustics, Speech and Signal Processing ASSP-34: 1986, 546–553.

[20] Wong W.-H. and Siu W.-C. Improved digital filter structure for the fast moments computation, Proceedings of IEE on Vision, Image and Signal Processing 146: 1999, 73–79.

[21] L. Kotoulas and I. Andreadis. Fast computation of Chebyshev moments, IEEE Transactions on Circuits and Systems for Video Technology 16: 2006, 884–888.

[22] L. Kotoulas and I. Andreadis. Real-time computation of Zernike moments, IEEE Transactions on Circuits and Systems for Video Technology 15: 2005, 801–809.

[23] M. Al-Rawi. Fast Zernike moments. Journal on Real-Time Image Processing 3: 2008, 86–96.

[24] H. S. Kim and H. Lee. Invariant image watermark using Zernike moments, IEEE Transactions on Circuits and Systems for Video Technology 13: 2003, 766–775.

[25] S. P. Prismall, M. S. Nixon, and J. N. Carter. On moving object reconstruction by moments, in 13th British Machine Vision Conference, 2002, pp. 73–82.

[26] G. Amayeh, G. Bebis, A. Erol, and M. Nicolescu. Peg-free hand shape verification using high order Zernike moments, in Proceedings of the Conference on Computer Vision and Pattern Recognition Workshop, 2006, pp. 17–22.

[27] M. Al-Rawi, Y. Jie. Practical fast computation of Zernike moments, Journal of Computer Science and Technology 17: 2002, 181–188.

[28] Hayes M. H. Schaum's Outline of Theory and Problems of Digital Signal Processing, McGraw-Hill, 1999, New York.

[29] Zhang, H., Shu, H., Luo, L. and Dillenseger, J. L. A Legendre orthogonal moment based 3D edge operator, Science in China Series G: Physics, Mechanics and Astronomy, vol. 48, no. 1, 2005, pp. 1–13.

[30] R. Mukundan and K. R. Ramakrishnan. Fast computation of Legendre and Zernike moments, Pattern Recognition 28: 1995, 1433–1442.

[31] Wallin, A. and Kübler, O. Complete sets of complex Zernike moment invariants and the role of the pseudoinvariants, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17, no. 11, 1995, pp. 1106–1110.

[32] Barmak Honarvar, Raveendran Paramesran, Kim Han-Thung, Kah-Hyong Chang. A reduced 2-D digital filter structure for fast implementation of geometric moments, 4th International Conference on Computer and Electrical Engineering (ICCEE), Singapore, 2011.

[33] Chern-Loon Lim, Barmak Honarvar, Kim Han Thung, Raveendran Paramesran. Fast computation of exact Zernike moments using cascaded digital filters, Information Sciences, Vol. 181, No. 17, 2011, pp. 3638–3651.

[34] Kah-Hyong Chang, Raveendran Paramesran, Barmak Honarvar Shakibaei Asli and Chern-Loon Lim. Efficient Hardware Accelerators for the Computation of Tchebichef Moments, IEEE Transactions on Circuits and Systems for Video Technology, Vol. 22, No. 3, 2012, pp. 414–425.


**Chapter 8**

**Provisional chapter**

**Two-Rate Based Structures for Computationally**

**Two-Rate Based Structures for Computationally**

Many digital signal processing (DSP) systems tend to have a very high computational complexity when they target a large part of the Nyquist band. This corresponds to a wide-band system with one or several so called don't-care bands approaching zero. Examples of such systems include frequency selective filters, fractional-delay filters, and differentiators. This chapter considers finite-length impulse response (FIR) filters due to their attractive implementation features. In particular, they can be implemented with non-recursive structures. In contrast to infinite-length impulse response (IIR) filters, they are therefore always automatically stable and have no bound on the maximal sampling rate,

For frequency-selective wide-band FIR filters, the frequency-response masking (FRM) technique can be employed for complexity reductions due to its use of sparse (namely periodic) subfilters, see [3–10]. For other functions, the FRM technique cannot be used directly, and one therefore has to seek other methods to reduce the complexity. This chapter discusses such a method which utilizes a two-rate technique, but only for the derivation of efficient single-rate structures. The basic two-rate approach was originally introduced in [11] and has since then been exploited and extended for various contexts as detailed in [12–19] and to be reviewed in this chapter. For single-function systems, it is however necessary to combine the two-rate technique with the FRM approach in order to achieve an overall complexity reduction. For multi-function realizations, complexity savings may be obtained without incorporating the FRM approach but it offers further complexity savings in such cases, as exemplified in [19]. Recent results have shown that the two-rate approach offers dramatic complexity reductions for wide-band systems, especially when combined with the

> ©2012 Johansson and Gustafsson, licensee InTech. This is an open access chapter distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. © 2013 Johansson and Gustafsson; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

© 2013 Johansson and Gustafsson, licensee InTech. This is a paper distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use,

distribution, and reproduction in any medium, provided the original work is properly cited.

**Efficient Wide-Band FIR Systems**

**Efficient Wide-Band FIR Systems**

Håkan Johansson and Oscar Gustafsson

Additional information is available at the end of the chapter

Håkan Johansson and Oscar Gustafsson

http://dx.doi.org/10.5772/52198

10.5772/52198

**1. Introduction**

see [1, 2].

FRM approach.

Additional information is available at the end of the chapter

**Provisional chapter**

## **Two-Rate Based Structures for Computationally Efficient Wide-Band FIR Systems Efficient Wide-Band FIR Systems**

**Two-Rate Based Structures for Computationally**

Håkan Johansson and Oscar Gustafsson Additional information is available at the end of the chapter

Håkan Johansson and Oscar Gustafsson

Additional information is available at the end of the chapter 10.5772/52198

http://dx.doi.org/10.5772/52198

## **1. Introduction**

Many digital signal processing (DSP) systems tend to have a very high computational complexity when they target a large part of the Nyquist band. This corresponds to a wide-band system with one or several so called don't-care bands approaching zero. Examples of such systems include frequency selective filters, fractional-delay filters, and differentiators. This chapter considers finite-length impulse response (FIR) filters due to their attractive implementation features. In particular, they can be implemented with non-recursive structures. In contrast to infinite-length impulse response (IIR) filters, they are therefore always automatically stable and have no bound on the maximal sampling rate, see [1, 2].

For frequency-selective wide-band FIR filters, the frequency-response masking (FRM) technique can be employed for complexity reductions due to its use of sparse (namely periodic) subfilters, see [3–10]. For other functions, the FRM technique cannot be used directly, and one therefore has to seek other methods to reduce the complexity. This chapter discusses such a method which utilizes a two-rate technique, but only for the derivation of efficient single-rate structures. The basic two-rate approach was originally introduced in [11] and has since then been exploited and extended for various contexts as detailed in [12–19] and to be reviewed in this chapter. For single-function systems, it is however necessary to combine the two-rate technique with the FRM approach in order to achieve an overall complexity reduction. For multi-function realizations, complexity savings may be obtained without incorporating the FRM approach but it offers further complexity savings in such cases, as exemplified in [19]. Recent results have shown that the two-rate approach offers dramatic complexity reductions for wide-band systems, especially when combined with the FRM approach.

© 2013 Johansson and Gustafsson; licensee InTech. This is an open access chapter distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## **1.1. Chapter outline**

Following this introduction, Section 2 considers the two-rate based structure that is appropriate for so-called left-band and right-band systems, which have don't-care bands in the low-frequency and high-frequency regions, respectively. Section 3 discusses the extension to so-called mid-band systems, which have don't-care bands in both the low-frequency and high-frequency regions. In Section 4, multi-function system realizations are considered, whereas Section 5 gives more implementation details. Finally, Section 6 concludes the chapter.

## **2. Two-rate based structure for left-band and right-band systems**

This section will first revisit FIR filters and their computational complexity. After that, the two-rate based structure for left-band and right-band systems will be discussed.

## **2.1. Complexity of FIR filters**

Consider a causal FIR filter with an impulse response *h*(*n*), transfer function

$$H(z) = \sum\_{n=0}^{N\_H} h(n)z^{-n},\tag{1}$$



and frequency response

$$H(e^{j\omega}) = \sum\_{n=0}^{N\_H} h(n)e^{-j\omega n}.\tag{2}$$

The order of the system is *NH* and the impulse response duration (length) is *NH* + 1. A direct-form implementation of the filter, corresponding directly to the convolution

$$y(n) = \sum\_{k=0}^{N\_H} x(n-k)h(k),\tag{3}$$

where *x*(*n*) is the input and *y*(*n*) the output, requires *NH* + 1 multiplications and *NH* additions to compute each output sample *y*(*n*). In the case of a linear-phase frequency response, *h*(*n*) is symmetric or anti-symmetric, which reduces the number of multiplications to roughly *NH*/2.<sup>1</sup>

The filter order required is determined by the application and specification. For example, for frequency selective filters, the order is inversely proportional to the transition band

<sup>1</sup> An even-order symmetric (anti-symmetric) linear-phase filter requires *NH* /2 + 1 (*NH* /2) multiplications whereas an odd-order linear-phase filter requires (*NH* + 1)/2 multiplications.

(don't-care band) ∆ = *ω<sup>s</sup>* − *ωc*, where *ω<sup>c</sup>* and *ω<sup>s</sup>* denote the passband and stopband edges, respectively, see [20, 21]. Hence, when the don't-care band decreases towards zero, the order increases rapidly. Then, using a direct-form realization, the computational complexity may become intolerable as it follows the filter order. The same trend exists also for other functions that are not frequency selective filters, like differentiation and integration, as seen in [22].
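To make this trend concrete, the sketch below uses Kaiser's classic order-estimation formula as an illustrative stand-in (the detailed estimates are given in [20, 21]): shrinking the don't-care band ∆ by a factor of ten raises the estimated order by roughly the same factor.

```python
import math

# Kaiser's estimate of the required minimax FIR order, for passband and
# stopband ripples delta_p, delta_s and transition (don't-care) width dw [rad].
def kaiser_order(delta_p, delta_s, dw):
    return (-20 * math.log10(math.sqrt(delta_p * delta_s)) - 13) / (
        14.6 * dw / (2 * math.pi))

n_wide = kaiser_order(0.01, 0.01, 0.2 * math.pi)     # wide don't-care band
n_narrow = kaiser_order(0.01, 0.01, 0.02 * math.pi)  # ten times narrower
assert abs(n_narrow - 10 * n_wide) < 1e-9            # order ~ 1/Delta
```

The estimate is linear in 1/∆, which is the "inversely proportional" behavior referred to above.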

#### **2.2. Two-rate based structure**

To reduce the complexity, we consider here a structure that is derived via a two-rate approach, seen in Fig. 1. This structure is efficient for left-band systems (like a differentiator) targeting the frequency region *ω* ∈ [0, *ωc*], 0 < *ω<sup>c</sup>* < *π*. The same structure can also be used for right-band systems targeting the band *ω* ∈ [*ωc*, *π*], 0 < *ω<sup>c</sup>* < *π*. The only difference will appear in the design, and we will therefore focus on the left-band case in this chapter, and only comment upon the right-band case in the design section.

For a left-band specification, the basic idea is to first interpolate the input signal *x*(*n*) by two through upsampling by two followed by a lowpass filter with transfer function *F*(*z*) 2. Then, a subsequent filter with transfer function *G*(*z*) follows that performs the actual function. Finally, downsampling by two takes place to retain the original sampling rate. Using multi-rate theory, see [23], it is readily shown that this scheme corresponds to a linear and time-invariant (LTI) system with a transfer function *H*(*z*) that equals the 0th polyphase component of the cascaded filter *F*(*z*)*G*(*z*), i.e.,

$$H(z) = F\_0(z)G\_0(z) + z^{-1}F\_1(z)G\_1(z) \tag{4}$$

where


$$F(z) = F\_0(z^2) + z^{-1} F\_1(z^2) \tag{5}$$

and

$$G(z) = G\_0(z^2) + z^{-1} G\_1(z^2). \tag{6}$$

The final realization is thus a single-rate structure. A two-rate technique is only used to derive efficient structures. It is noted here that the order and delay of the overall filter *H*(*z*) is *NH* = (*NF* + *NG*)/2 and *DH* = (*DF* + *DG*)/2, respectively. This can be understood by noting that *F*(*z*) and *G*(*z*) can be viewed as operating (in principle) at two times the input rate, because the structure is derived by sandwiching *F*(*z*)*G*(*z*) between upsampling and downsampling by two.

<sup>2</sup> The same function can be achieved by sampling the underlying analog signal with a higher sampling rate instead of sampling it slower and then use interpolation in the digital domain. However, this also increases the requirements on the analog-to-digital converters which are power-hungry components and in many cases one of the bottlenecks in overall systems. It is therefore often preferred to perform interpolation in the digital domain.
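This equivalence is easy to verify numerically. The sketch below uses arbitrary random impulse responses (not designed filters) to check Eq. (4), the order relation *NH* = (*NF* + *NG*)/2, and the equivalence of the multirate chain to the single-rate filter *H*(*z*):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(7)                    # F(z), order N_F = 6
g = rng.standard_normal(5)                    # G(z), order N_G = 4

# Polyphase parts, Eqs. (5)-(6): F(z) = F0(z^2) + z^{-1} F1(z^2), same for G(z)
f0, f1 = f[0::2], f[1::2]
g0, g1 = g[0::2], g[1::2]

# Eq. (4): H(z) = F0(z)G0(z) + z^{-1} F1(z)G1(z)
p0 = np.convolve(f0, g0)
p1 = np.convolve(f1, g1)
h_eq4 = np.zeros(max(len(p0), len(p1) + 1))
h_eq4[:len(p0)] += p0
h_eq4[1:len(p1) + 1] += p1

# H(z) as the 0th polyphase component of the cascade F(z)G(z)
h_cascade = np.convolve(f, g)[0::2]
assert np.allclose(h_eq4, h_cascade)
assert len(h_cascade) - 1 == (6 + 4) // 2     # N_H = (N_F + N_G)/2

# Signal-level check: upsample by 2 -> F(z) -> G(z) -> downsample by 2
x = rng.standard_normal(64)
up2 = np.zeros(2 * len(x)); up2[::2] = x
y_chain = np.convolve(np.convolve(up2, f), g)[0::2]
y_lti = np.convolve(x, h_cascade)
assert np.allclose(y_chain, y_lti)
```

The downsampler keeps only even-indexed samples, which is exactly why only the 0th polyphase component of *F*(*z*)*G*(*z*) survives.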


| **Overall filter** *H*(*z*) | **Half-band filter** *F*(*z*) | *G*(*z*) |
|---|---|---|
| Type I, even order 2(*m* + *p* + 1) | Type I, order 4*m* + 2 | Type I, order 4*p* + 2 |
| Type II, odd order 2(*m* + *p*) + 1 | Type I, order 4*m* + 2 | Type I, order 4*p* |
| Type III, even order 2(*m* + *p* + 1) | Type I, order 4*m* + 2 | Type III, order 4*p* + 2 |
| Type IV, odd order 2(*m* + *p*) + 1 | Type I, order 4*m* + 2 | Type III, order 4*p* |

**Table 1.** Linear-phase filter types.


**Figure 1.** (a) Two-rate approach. (b) Equivalent LTI system when *F*(*z*) is an HB filter. (c) and (d) Polyphase component *F*0(*z*) of the HB filter *<sup>F</sup>*(*z*) = *<sup>F</sup>*0(*z*<sup>2</sup>) + *<sup>z</sup>*−*DF* when realized with the FRM approach for an even-order (c) and odd-order (d) masking filter.

#### *2.2.1. Filter Types*

It is possible and efficient to let *F*(*z*) be a linear-phase half-band (HB) FIR filter 3. Such a filter has a symmetric impulse response and every second impulse response value is zero, except the center tap which equals unity for an interpolation filter that preserves the signal energy. This corresponds to a pure-delay polyphase component *<sup>F</sup>*1(*z*), namely *<sup>F</sup>*1(*z*) = *<sup>z</sup>*−(*DF*−1)/2,

<sup>3</sup> If the delay is of importance, one may need to use a nonlinear-phase (approximately linear-phase) low-delay HB filter *F*(*z*) instead. Further, if there are additional requirements in the don't-care band, like attenuation requirements at *ω* = *π*, a general filter *F*(*z*) must be used instead, i.e., a non-HB filter. See [19] for details.



where *DF* is the delay of *F*(*z*), which is always an odd integer. When *F*(*z*) is a linear-phase filter, *G*(*z*) is of the same type as that of the overall filter *H*(*z*), i.e., a linear-phase filter (nonlinear-phase filter) when *H*(*z*) is a linear-phase filter (nonlinear-phase filter). In this section, we focus on linear-phase filters. In the next section, nonlinear-phase applications are considered.

When *F*(*z*) is a linear-phase HB filter, it is a symmetric Type I filter with the odd-integer delay *DF*. Its delay contribution, *DF*/2, to the overall delay *DH* is therefore an integer plus a half. It is consequently the delay contribution *DG*/2 of *G*(*z*) that determines whether the overall delay is an integer or an integer plus a half. As *DG*/2 then must be an integer or an integer plus a half to obtain an overall linear-phase filter, *DG* must be an integer. Consequently, *G*(*z*) is either a Type I or Type III linear-phase FIR filter, i.e., an even-order filter with a symmetric or anti-symmetric impulse response. In other words, the type and order of *G*(*z*) determines the type and order of the overall filter, as summarized in Table 1. A formal proof of these facts is given in [17].

The order of *G*(*z*) is thus somewhat restricted as it cannot take on all even orders. However, the effective order of *G*(*z*) can be reduced by two by setting its first and last impulse response value to zero. In this way, two multiplications and additions may be saved in some cases. For the HB filter *F*(*z*), it does not make sense to try to reduce the effective order in this way, as its impulse response is always zero for odd indexes of *n* (except for the center tap).
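The HB property underlying this discussion is easy to demonstrate. The sketch below builds an HB interpolation filter of order 4*m* + 2 by a Hamming-windowed sinc design (the window-based design is only an illustration, not the design method used in this chapter) and checks that the odd polyphase branch collapses to a pure delay:

```python
import numpy as np

# Windowed-sinc half-band (HB) interpolation filter, cutoff pi/2.
m = 7
k = np.arange(-(2 * m + 1), 2 * m + 2)       # symmetric time index, 4m+3 taps
f = np.sinc(k / 2) * np.hamming(len(k))      # causal after shifting by 2m+1
D_F = 2 * m + 1                              # delay of F(z), an odd integer

# Every second tap is (numerically) zero except the center tap, which is one,
# so F1(z) collapses to the pure delay z^{-(D_F-1)/2}.
f1 = f[1::2]                                 # odd polyphase branch
assert abs(f[D_F] - 1.0) < 1e-12
assert np.allclose(np.delete(f1, (D_F - 1) // 2), 0.0)
```

Because sinc(*n*/2) vanishes at all non-zero even arguments, windowing cannot destroy the zero pattern, and the filter order 4*m* + 2 matches the *F*(*z*) column of Table 1.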

## **2.3. Complexity reduction**

Assume that *H*(*ejω*) is to approximate a desired function *D*(*jω*) in the band *ω* ∈ [0, *ωc*]. Due to the principle of interpolation by two in the two-rate based scheme, the effective bandwidth of *G*(*z*) is *ωc*/2, and thus always less than *π*/2. The complexity of *G*(*z*) alone will therefore be substantially lower than that of a regular direct-form realization of *H*(*z*). (This will be discussed in more detail in the design example considered later in Section 2.4). However, the overall complexity is also determined by the filter *F*(*z*). The requirement on this filter is roughly the same as that of the overall filter *H*(*z*) and its complexity is therefore relatively high. In other words, a major part of the overall complexity is moved to the filter *F*(*z*) and thus to *F*0(*z*) in Fig. 1. Therefore, for a single-function system, there will not be any computational savings using this approach straightforwardly. This is because we can equally well combine the three subfilters into one single conventional filter.

Nevertheless, overall savings can indeed be obtained by utilizing additional complexity saving techniques for the lowpass frequency selective HB filter *F*(*z*). Specifically, by realizing *F*(*z*) as an FRM filter, see [3–10], we can express the transfer function as

$$F(z) = 2A(z^L)B\_0(z) + 2[z^{-LD\_A} - A(z^L)]B\_1(z) \tag{7}$$

where *A*(*zL*) is a periodic model filter and [*z*−*LDA* − *A*(*zL*)] is its complement, whereas *B*0(*z*) and *B*1(*z*) are masking filters. Specifically, in the case of an HB filter, as detailed in [5, 7], *A*(*z*) is given as

$$A(z) = A\_0(z^2) + 0.5z^{-D\_A},\tag{8}$$



with *DA* being the delay of *A*(*z*), whereas the masking filters are related according to

$$B\_1(z) = z^{-D\_B} - (-1)^{D\_B} B\_0(-z),\tag{9}$$

with *DB* being the delay of *B*0(*z*). One then finds that *F*0(*z*) becomes, for *DB* even:

$$F\_0(z) = 2z^{-(LD\_A+1)/2}B\_{01}(z) + 2[2B\_{00}(z) - z^{-D\_B/2}]A\_0(z^L) \tag{10}$$

and, for *DB* odd:

$$F\_0(z) = 2z^{-(LD\_A - 1)/2} B\_{00}(z) + 2[2B\_{01}(z) - z^{-(D\_B - 1)/2}]A\_0(z^L) \tag{11}$$

where *B*00(*z*) and *B*01(*z*) are the polyphase components of *B*0(*z*), i.e., *B*0(*z*) = *B*00(*z*2) + *<sup>z</sup>*−1*B*01(*z*2). The resulting structures for *<sup>F</sup>*0(*z*) are depicted in Fig. 1(c) and (d). More details can be found in [7].
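Since Eq. (10) is a structural identity, it can be checked with arbitrary coefficients. The sketch below uses random subfilters (with *DB* even and *LDA* odd, as required for an HB *F*(*z*)), builds *F*(*z*) from Eqs. (7)–(9), and compares its 0th polyphase component with the right-hand side of Eq. (10):

```python
import numpy as np

def up(c, L):                 # c(z) -> c(z^L): insert L-1 zeros between taps
    out = np.zeros(L * (len(c) - 1) + 1)
    out[::L] = c
    return out

def padd(p, q):               # add polynomials in z^{-1} of unequal length
    r = np.zeros(max(len(p), len(q)))
    r[:len(p)] += p
    r[:len(q)] += q
    return r

def delay(d):                 # coefficients of z^{-d}
    out = np.zeros(d + 1)
    out[d] = 1.0
    return out

rng = np.random.default_rng(3)
L, D_A, D_B = 5, 3, 4                        # D_B even, L*D_A odd
a0 = rng.standard_normal(4)                  # A0(z), arbitrary
b0 = rng.standard_normal(2 * D_B + 1)        # B0(z) with delay D_B

A = padd(up(a0, 2), 0.5 * delay(D_A))                          # Eq. (8)
b0_neg = b0 * (-1.0) ** np.arange(len(b0))                     # B0(-z)
B1 = padd(delay(D_B), -((-1.0) ** D_B) * b0_neg)               # Eq. (9)

A_L = up(A, L)
F = padd(2 * np.convolve(A_L, b0),
         2 * np.convolve(padd(delay(L * D_A), -A_L), B1))      # Eq. (7)

F0 = F[0::2]                                  # 0th polyphase component of F(z)
b00, b01 = b0[0::2], b0[1::2]                 # polyphase components of B0(z)
rhs = padd(2 * np.convolve(delay((L * D_A + 1) // 2), b01),
           2 * np.convolve(padd(2 * b00, -delay(D_B // 2)), up(a0, L)))  # Eq. (10)

assert np.allclose(F0, rhs)
```

The remaining odd polyphase component carries only the term *z*−(*LDA*+*DB*), consistent with *F*1(*z*) being a pure delay for an HB filter.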

As seen, *<sup>F</sup>*0(*z*) makes use of three subfilters, of which *<sup>A</sup>*0(*zL*) is periodic for an integer *L* > 1. A periodic filter is a sparse filter, meaning it has many zero-valued filter coefficients. Specifically, only every *<sup>L</sup>*th impulse response value of *<sup>A</sup>*0(*zL*) is non-zero. Consequently, a linear-phase filter *A*(*zL*) of order *NA* requires roughly only *NA*/(2*L*) multiplications and *NA*/*L* additions. In this way, substantial overall savings can be obtained as compared to the conventional direct-form structures.
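For instance, with the model-filter order and period used later in Example 1 (random coefficients, purely for illustration):

```python
import numpy as np

# A(z^L): upsampling the model filter a(n) by L leaves only every L-th tap
# non-zero, so the multiplication count scales with N_A/L rather than N_A.
rng = np.random.default_rng(1)
L, N_model = 5, 22
a = rng.standard_normal(N_model + 1)
a = (a + a[::-1]) / 2                    # symmetric (linear phase)

A_L = np.zeros(N_model * L + 1)          # coefficients of A(z^L)
A_L[::L] = a
N_A = len(A_L) - 1                       # order N_A = 110

assert N_A == N_model * L
assert np.all(A_L[np.mod(np.arange(N_A + 1), L) != 0] == 0)  # sparse taps
```

Here only 23 of the 111 taps are non-zero, and the symmetry roughly halves the multiplications again, in line with the *NA*/(2*L*) count above.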

#### **2.4. Design**

Filters are typically designed in the minimax (Chebyshev) sense or least-squares sense, or possibly combinations thereof, see [24–26]. The goal of this chapter is to demonstrate that the complexity (number of multiplications and additions) can be reduced when using the two-rate based structures instead of regular structures. This will be done by designing both filter classes to meet the same specification and then comparing the resulting complexities.<sup>4</sup> To this end, the selection of approximation type is irrelevant, as long as one uses the same for both filter classes. In this chapter, we use minimax design, but other designs can of course be used as well after some minor modifications.

For minimax design, the maximum of the modulus of an error function *E*(*jω*) is minimized. The error function is typically given as

$$E(j\omega) = \mathcal{W}(\omega)[H(e^{j\omega}) - D(j\omega)], \quad \omega \in \Omega. \tag{12}$$

<sup>4</sup> Another type of comparison is to study the approximation error differences between two solutions having the same filter implementation complexity. However, such a comparison is appropriate when using two different design methods applied to the same filter class (structure) which does not apply here.

where *D*(*jω*) is a desired function to be approximated in the frequency band Ω by the filter frequency response *H*(*ejω*), whereas *W*(*ω*) is a positive weighting function. A conventional FIR filter, with the frequency response in the form of (2), is then designed by solving the following approximation problem.

**Approximation problem:** Given *NH*, find the unknowns *h*(*n*) and *δ* to minimize *δ* subject to

$$|E(j\omega)| \le \delta.\tag{13}$$

For a linear-phase filter, we also have the additional symmetry constraints *h*(*n*) = *h*(*N* − *n*) or *h*(*n*) = −*h*(*N* − *n*).

For a conventional filter, the problem above is a convex optimization problem which has a unique global optimum. It can be found using linear programming, see [27], or the more efficient McClellan-Parks-Rabiner algorithm given in [28]. In practice, one usually has a specification on the desired approximation error *δ*, say *δe*. The filter will meet this specification if *δ* after the optimization satisfies *δ* ≤ *δe*.
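As a concrete sketch of this conventional design flow (hypothetical band edges and target error; SciPy's `remez` implements the McClellan-Parks-Rabiner algorithm):

```python
import numpy as np
from scipy import signal

# Equiripple (minimax) lowpass design, then a check of the achieved
# approximation error delta against a target specification delta_e.
N_H = 50                                          # filter order (length N_H + 1)
h = signal.remez(N_H + 1, [0.0, 0.20, 0.25, 0.5], [1.0, 0.0], fs=1.0)

w, H = signal.freqz(h, worN=8192)                 # w in rad/sample, [0, pi)
mag = np.abs(H)
delta = max(np.max(np.abs(mag[w <= 0.20 * 2 * np.pi] - 1.0)),
            np.max(mag[w >= 0.25 * 2 * np.pi]))

delta_e = 0.01                                    # example specification, -40 dB
assert delta <= delta_e                           # specification is met
```

Lowering `N_H` toward the estimated minimum order makes `delta` approach `delta_e` from below, which is how the smallest feasible order is found in practice.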

For the two-rate based filters, the design becomes more intricate because it contains cascaded and parallel subfilters. This means that the unknowns are not *h*(*n*) but instead *f*(*n*) and *g*(*n*), in general, and *a*(*n*), *b*0(*n*), and *g*(*n*) when *F*(*z*) is realized as an FRM filter. Hence, conventional design methods can no longer be used. Moreover, due to the cascaded subfilters, we are now facing a nonlinear (nonconvex) optimization problem, which means that an overall globally optimum solution cannot be guaranteed. Nevertheless, if carefully designed, even a locally optimum solution for a two-rate based structure can be substantially less complex than the corresponding globally optimum direct-form structure. To ensure a good local optimum, the overall two-rate based filters are designed in three steps as explained below. Although *F*(*z*) should here be an FRM HB filter in order to achieve any savings, we will first explain the essential design steps in terms of a regular HB filter for the sake of simplicity. After that, the necessary modifications required for an FRM design will be pointed out.

#### *2.4.1. Basic Three-Step Design Procedure*

Given the desired function


$$D(j\omega) = e^{-j\omega(N\_G + N\_F)/4} D\_0(j\omega) \tag{14}$$

and bandwidth *ω* ∈ [0, *ωc*], *ω<sup>c</sup>* < *π*, as well as a targeted approximation error *δe*, perform the following three-step procedure for each combination of filter orders *NG* and *NF* around estimated required orders *<sup>N</sup><sup>G</sup>* and *<sup>N</sup><sup>F</sup>*:

**(1)** Design the regular FIR filter *G*(*z*), which gives *G*0(*z*) and *G*1(*z*) after polyphase decomposition. It is done by minimizing the maximum of |*EG*(*jω*)| in the band *ω* ∈ [0, *ωc*/2] where <sup>5</sup>

$$E\_G(j\omega) = G(e^{j\omega}) - e^{-j\omega N\_G/2} D\_0(j2\omega). \tag{15}$$

<sup>5</sup> For a right-band specification, the band for *G*(*z*) is *ω* ∈ [*ωc*/2, *π*/2].

**(2)** Design a regular lowpass HB FIR filter *F*(*z*), which gives *F*0(*z*) and *F*1(*z*) = *z*<sup>−(*DF*−1)/2</sup>, *DF* = *NF*/2, after polyphase decomposition. It is done by minimizing the maximum of |*EF*(*jω*)| in the band *ω* ∈ [*π* − *ωc*/2, *π*], where<sup>6</sup>

$$E\_F(j\omega) = F(e^{j\omega}).\tag{16}$$

http://dx.doi.org/10.5772/52198


Two-Rate Based Structures for Computationally Efficient Wide-Band FIR Systems


**(3)** Use *F*0(*z*), *G*0(*z*), and *G*1(*z*) obtained above as the initial solution in a further nonlinear optimization routine that solves the approximation problem stated in (13). If the resulting approximation error *δ* is smaller than *δ<sup>e</sup>* after the optimization, store the result.
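As a hedged sketch of the kind of designs performed in Steps 1 and 2, the Step 2 lowpass *F*(*z*) can be approximated with SciPy's equiripple (minimax) routine. The bandwidth *ωc* and the order *NF* below are illustrative assumptions, and this is a plain equiripple lowpass with half-band-symmetric band edges rather than an exact half-band design:

```python
import numpy as np
from scipy import signal

wc = 0.8 * np.pi   # assumed overall bandwidth (illustrative)
N_F = 38           # assumed order of F(z), i.e. N_F + 1 taps

# Passband [0, wc/2], stopband [pi - wc/2, pi]; with fs = 2 the band
# edges are expressed relative to the Nyquist frequency (= pi).
edges = [0.0, (wc / 2) / np.pi, (np.pi - wc / 2) / np.pi, 1.0]
f = signal.remez(N_F + 1, edges, [1, 0], fs=2)

# Stopband ripple, i.e. max |F(e^{jw})| over [pi - wc/2, pi], cf. (16).
w, H = signal.freqz(f, worN=4096)
delta_s = np.max(np.abs(H[w >= np.pi - wc / 2]))
```

The resulting filter is linear phase (symmetric coefficients), and the stopband ripple `delta_s` is what the error function (16) measures.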

The estimated orders required, *N̂G* and *N̂F*, can be found by separately designing *G*(*z*) and *F*(*z*) to approximate their respective desired functions (as given in Steps 1 and 2, respectively) with the same tolerance as the overall targeted error, i.e., *δe* (or similarly as in [17–19]). As the bandwidth of *G*(*z*) is always below *π*/2, its order is typically below 12 for approximation errors down to some −100 dB, provided a smooth function is targeted, like a differentiator or integrator. Hence, the value of *N̂G* is readily found by designing all low-order filters *G*(*z*) using conventional techniques and then setting *N̂G* to the lowest order for which the approximation error |*EG*(*jω*)| is below *δe*. As to the lowpass HB filter *F*(*z*), the value *N̂F* can be found via well-known formulas for order estimation, see [20, 21], and a few designs around the estimated value.
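The order estimate for *F*(*z*) can be mimicked with the Kaiser-window estimate available in SciPy, here used only as a stand-in for the formulas of [20, 21]; the bandwidth below is an illustrative assumption:

```python
import numpy as np
from scipy import signal

wc = 0.8 * np.pi                     # assumed bandwidth (illustrative)
delta_e = 0.01                       # targeted approximation error
atten_db = -20 * np.log10(delta_e)   # 40 dB
width = (np.pi - wc) / np.pi         # transition width, 1 = Nyquist
numtaps, beta = signal.kaiserord(atten_db, width)
N_F_hat = numtaps - 1                # estimated order of F(z)
```

A few trial designs around `N_F_hat` then settle the final order, as described above.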

Regarding the designs, the problems in Steps 1 and 2 are convex, and thus have unique global optima, provided they are formulated in accordance with the approximation problem stated earlier in this section. These problems can be solved using any regular solver for such problems. As *F*(*z*) is a linear-phase filter, it can alternatively be designed using the efficient McClellan-Parks-Rabiner algorithm given in [28]. The problem in Step 3 is nonlinear because of the cascaded subfilters. In the examples of this chapter, we use the general-purpose nonlinear-optimization routine *fminimax* in MATLAB together with the real-rotation theorem, see [29], to solve the problem. The real-rotation theorem states that minimizing | *f* | is equivalent to minimizing ℜ{ *f ej*Θ}, ∀Θ ∈ [0, 2*π*]. The optimization problem is then solved with *ω* and Θ discretized to dense enough grids. A few hundred and 10–20 points, respectively, are typically sufficient in practice.
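The real-rotation idea can be illustrated on a toy problem (not the chapter's actual Step 3 optimization): when the error is linear in the coefficients, the constraints ℜ{*E*(*jω*)*e<sup>j</sup>*<sup>Θ</sup>} ≤ *δ* on discretized grids of *ω* and Θ form a linear program. The 4-tap fractional-delay target below is an assumption chosen only to keep the example small:

```python
import numpy as np
from scipy.optimize import linprog

N = 4        # number of taps (assumption)
tau = 1.5    # desired response D(jw) = e^{-jw*tau} (assumption)
w = np.linspace(0.0, 0.6 * np.pi, 200)                    # frequency grid
thetas = np.linspace(0.0, 2 * np.pi, 16, endpoint=False)  # rotation grid

A = np.exp(-1j * np.outer(w, np.arange(N)))   # E(jw) = A c - d, linear in c
d = np.exp(-1j * w * tau)

# Re{(A c - d) e^{j theta}} <= delta for every (w, theta) pair.
rows, rhs = [], []
for th in thetas:
    r = np.exp(1j * th)
    rows.append(np.column_stack([np.real(A * r), -np.ones(len(w))]))
    rhs.append(np.real(d * r))

obj = np.zeros(N + 1)
obj[-1] = 1.0                                  # minimize delta
res = linprog(obj, A_ub=np.vstack(rows), b_ub=np.concatenate(rhs),
              bounds=[(None, None)] * N + [(0, None)])
c, delta = res.x[:N], res.x[-1]
max_err = np.max(np.abs(A @ c - d))            # true modulus on the w grid
```

With 16 rotation angles, the true modulus can exceed the LP optimum by at most a factor 1/cos(*π*/16) ≈ 1.02, which is why a coarse Θ grid already suffices in practice.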

#### *2.4.2. Modifications When Using an FRM Filter F*(*z*)

When *F*(*z*) is an FRM filter, we can use essentially the same design steps as outlined above. However, a difference is that *F*(*z*) is now realized in terms of the two subfilters *A*(*zP*) and *B*0(*z*) or, equivalently, *F*0(*z*) is now realized in terms of the three subfilters *A*0(*zL*), *B*00(*z*), and *B*01(*z*). This means that three parameters, *NA*, *NB*, and *L*, instead of only one parameter, *NF*, need to be estimated. Given the same approximation error and band edges as before, *F*(*z*) as well as *NA*, *NB*, and *L* can be obtained as outlined in [7]. It is noted here that the design of *F*(*z*) in Step 2 now corresponds to a nonconvex problem due to the cascaded subfilters in the FRM approach. In [7], this is solved via initial linear optimizations and a further nonlinear optimization, similar to the approach given above for the two-rate based structure.

<sup>6</sup> For a right-band specification, a highpass filter *F*(*z*) is designed instead, in the stopband *ω* ∈ [0, *ωc*/2].

**Figure 2.** Magnitude response and approximation error of the two-rate and FRM based filter in Example 1. Throughout this chapter, we have used *T* = 1 for simplicity.

#### **2.5. Examples**


Consider a first-degree differentiator with the desired function [4]

$$D(j\omega) = e^{-j\omega(N\_G + N\_F)/4}\, j\omega \tag{17}$$

in the frequency region *ωT* ∈ [0, *ωc*], 0 < *ω<sup>c</sup>* < *π*. This function can be approximated by a Type III linear-phase FIR filter, i.e., by a filter of even order and with an anti-symmetric impulse response, see [4].

*Example 1: ω<sup>c</sup>* = 0.95*π*, and *δ<sup>e</sup>* = 0.01 (−40 dB). Using a conventional differentiator, the specification is met by a 60th-order filter, which requires 30 multiplications and 59 additions in an implementation. Using instead the two-rate and FRM based approach with *L* = 5, we can meet the specification with filter orders 22, 18, and 2 for *A*(*z*), *B*0(*z*), and *G*(*z*), respectively. The corresponding overall realization requires 17 multiplications and 31 additions. Thus, multiplication and addition savings of 43% and 47%, respectively, are achieved. The savings are, however, dependent on the bandwidth *ω<sup>c</sup>*, as will be illustrated below in Example 2. As always when using linear-phase FRM filters, the price to pay is a somewhat increased delay and a few more delay elements. In this example, the delay is increased from 30 to 32 samples, whereas the number of delay elements is increased from 60 to 64. The increase is thus only about 7%. The overall filter frequency response is plotted in Fig. 2.
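The savings quoted in Example 1 follow directly from the stated operation counts; a minimal check of the arithmetic:

```python
# Operation counts quoted in Example 1.
conv_mult, conv_add = 30, 59    # 60th-order conventional differentiator
two_mult, two_add = 17, 31      # two-rate and FRM based realization

mult_saving = 1 - two_mult / conv_mult    # about 43%
add_saving = 1 - two_add / conv_add       # about 47%
delay_increase = 64 / 60 - 1              # delay elements: 60 -> 64
```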

*Example 2:* Figure 3 shows the number of multiplications required for the conventional direct-form filter and the two-rate based filter, both approximating first-degree Type III differentiators with approximation errors of *δ* = 0.01, 0.001, 0.0001 (−40, −60, −80 dB). As the plots reveal, the complexity savings obtained with the two-rate based filter increase substantially as the bandwidth approaches *π*. The break-even point is somewhere around *ω<sup>c</sup>* = 0.8*π*, from which the savings increase approximately linearly with increasing bandwidth. In the region between 0.8*π* and 0.98*π*, the savings go from around zero up to some 65%. Similar savings are obtained for the number of additions, as it is proportional to the number of multiplications. Again, the price to pay for the arithmetic-complexity reductions is a moderate increase of the delay and the number of delay elements, typically between some 5% and 20%.

From the results in [17, 22], the number of multiplications required for a regular Type III differentiator can be estimated as

$$
\hat{M}\_{\text{regular}} = \pi \frac{0.810[-\log\_{10}(\delta\_e)]^{0.919}}{\pi - \omega\_c}. \tag{18}
$$


For the two-rate based differentiators, we have instead from [17]

$$
\hat{M}\_{\text{tworate}} = \pi \frac{0.884[-\log\_{10}(\delta\_e)]^{0.852}}{\pi - 0.956\,\omega\_c}. \tag{19}
$$
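Evaluating (18) and (19) with the constants as printed, for Example 1's specification (*ωc* = 0.95*π*, *δe* = 0.01), yields values close to the 30 and 17 multiplications quoted there:

```python
import numpy as np

def M_regular(wc, delta_e):
    # estimate (18)
    return np.pi * 0.810 * (-np.log10(delta_e)) ** 0.919 / (np.pi - wc)

def M_tworate(wc, delta_e):
    # estimate (19)
    return np.pi * 0.884 * (-np.log10(delta_e)) ** 0.852 / (np.pi - 0.956 * wc)

m_reg = M_regular(0.95 * np.pi, 0.01)
m_two = M_tworate(0.95 * np.pi, 0.01)
```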


Comparing the two expressions, we see that the main difference is the multiplicative constant 0.956 in front of *ω<sup>c</sup>* in the latter expression. This explains why the savings increase with increasing bandwidth, as illustrated in Fig. 3.

#### **2.6. Generalization to** *M* > 2

The two-rate based scheme can readily be extended to the one depicted in Fig. 4(a) where the interpolation factor is an arbitrary integer *M*. Here, the basic principle is thus to first interpolate with *M* via the interpolation filter *F*(*z*). Then the actual function is again approximated by *G*(*z*). Finally, downsampling by *M* occurs. Using multi-rate theory, one finds again that this structure has the LTI system equivalent seen in Fig. 4(b). That is, the overall transfer function is

$$H(z) = F\_0(z)G\_0(z) + \sum\_{m=1}^{M-1} z^{-1} F\_m(z) G\_{M-m}(z),\tag{20}$$

where *Fm*(*z*) and *Gm*(*z*) are polyphase components of *F*(*z*) and *G*(*z*) in the polyphase representations

$$F(z) = \sum\_{m=0}^{M-1} z^{-m} F\_m(z^M), \quad G(z) = \sum\_{m=0}^{M-1} z^{-m} G\_m(z^M). \tag{21}$$

Using an *M*th-band interpolation filter *F*(*z*), the generalized scheme is also appropriate for left-band and right-band systems. It has turned out, though, that *M* = 2 is typically the most efficient choice, which is why that case has been considered in detail in this section. The reason is that the additional cost of *F*(*z*) exceeds the additional savings of *G*(*z*) when going from *M* = 2 to *M* > 2, which in turn is due to the fact that the complexity of *G*(*z*) is already very low for *M* = 2. A more detailed discussion on this is found in [17].
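The single-rate equivalence (20)–(21) can be verified numerically with arbitrary (here random) impulse responses; this is only a check of the identity, not a filter design:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 3
f = rng.standard_normal(10)   # arbitrary F(z) (random: we only test (20))
g = rng.standard_normal(8)    # arbitrary G(z)
x = rng.standard_normal(50)

# Multirate path: upsample by M, filter with F then G, downsample by M.
xu = np.zeros(len(x) * M)
xu[::M] = x
y_mr = np.convolve(g, np.convolve(f, xu))[::M]

# Single-rate path via (20)-(21): H(z) = F0 G0 + z^{-1} sum_m Fm G_{M-m}.
Fm = [f[m::M] for m in range(M)]
Gm = [g[m::M] for m in range(M)]
terms = [(np.convolve(Fm[0], Gm[0]), 0)]
terms += [(np.convolve(Fm[m], Gm[M - m]), 1) for m in range(1, M)]
h = np.zeros(max(len(t) + dly for t, dly in terms))
for t, dly in terms:
    h[dly:dly + len(t)] += t
y_sr = np.convolve(h, x)
```

Equivalently, *H*(*z*) is the zeroth polyphase component of *F*(*z*)*G*(*z*), which is what the first assertion below checks.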


**Figure 3.** Number of multiplications versus bandwidth *ωcT* for the conventional direct-form filter (dashed line) and proposed two-rate based filter (solid line), for a first-degree Type III differentiator with the approximation errors *δ<sup>c</sup>* = 0.01, 0.001, 0.0001. Throughout this chapter, we have used *T* = 1 for simplicity.

**Figure 4.** (a) Two-rate approach with arbitrary *M*. (b) Equivalent single-rate realization.

#### **3. Two-rate based structure for mid-band systems**

This section extends the results to mid-band systems which target the region *ω* ∈ [*ωc*1, *ωc*2], 0 < *ωc*<sup>1</sup> < *ωc*<sup>2</sup> < *π*. Example applications include fractional-degree differentiators and integrators, see [30–33]. For later discussions, we define the don't-care bands ∆<sup>1</sup> and ∆<sup>2</sup> as

$$
\Delta\_1 = \omega\_{c1}, \quad \Delta\_2 = \pi - \omega\_{c2}. \tag{22}
$$

In principle, we can again make use of the scheme in Fig. 4 with a lowpass filter *F*(*z*) but it is not efficient for mid-band systems. This is because the filter *G*(*z*) then needs to approximate the desired function in the band between *ωc*1/*M* = ∆1/*M* and *ωc*2/*M* = (*π* − ∆2)/*M*. Although this implies that the width of the upper don't-care band of *G*(*z*) is increased substantially to roughly (*M* − 1)*π*/*M* instead of the original ∆<sup>2</sup> = *π* − *ωc*2, its lower don't-care band, ∆1/*M* = *ωc*1/*M*, becomes *M* times narrower. This means that the complexity of *G*(*z*) may thereby even increase, not decrease. In the left-band case, this is not a problem as there is no don't-care band to the left.

The width of both the lower and the upper don't-care bands of *G*(*z*) can be increased by using a bandpass filter *F*(*z*) instead of a lowpass filter. This also means that we have to use *M* > 2. Again, it appears that the most efficient case is the lowest possible *M*, which is here *M* = 3. The reason for this is two-fold. First, odd values of *M* make it possible to center the passband of *G*(*z*) around *π*/2, which maximizes the minimum of its lower and upper don't-care bands. Second, the complexity of *F*(*z*) alone reduces with reduced *M*, in accordance with the discussion in [17] for the left-band case. In addition, the use of *M* = 3 instead of *M* > 3 makes it possible to double the amount of sparsity of *F*(*z*), and thus its efficiency, by expressing it as a periodic filter.

Here, *G*(*z*) is to approximate *D*(*j*(*ωM* − (*K* − 1)*π*)) in the frequency region *ω* ∈ [*ω*<sup>(*G*)</sup><sub>*c*1</sub>, *ω*<sup>(*G*)</sup><sub>*c*2</sub>], where

$$
\omega\_{c1}^{(G)} = (K - 1)\pi/M + \Delta\_1/M, \quad \omega\_{c2}^{(G)} = K\pi/M - \Delta\_2/M, \tag{23}
$$

with *K* being an appropriately chosen odd integer. For *M* = 3, one should use *K* = −1. After the downsampling by *M*, the above region is mapped to the targeted region *ω* ∈ [*ωc*1, *ωc*2]. Further, *F*(*z*) is to approximate *M* in the same region as that of *G*(*z*) and zero in the corresponding image bands created in the upsampling. Hence, *F*(*z*) is here a bandpass filter with passband and stopband edges at

$$
\omega\_{c1}^{(F)} = \omega\_{c1}^{(G)}, \quad \omega\_{c2}^{(F)} = \omega\_{c2}^{(G)} \tag{24}
$$

**Table 2.** Results of Example 3.

| | *NH* | *NG* | *NF* | *DH* | DE | Mult | Add |
|---|---|---|---|---|---|---|---|
| Regular | 124 | – | – | 62 | 124 | 125 | 124 |
| Two-rate, regular *F*(*z*) = *P*(*z*<sup>2</sup>) | 126 | 10 | 372 | 63 | 126 | 73 | 134 |
| Two-rate, FRM *F*(*z*) = *P*(*z*<sup>2</sup>) | 140 | 10 | 412 | 70 | 140 | 41 | 59 |


and

$$
\omega\_{s1}^{(F)} = (K - 1)\pi/M - \Delta\_1/M, \quad \omega\_{s2}^{(F)} = K\pi/M + \Delta\_2/M, \tag{25}
$$

respectively. Moreover, with *M* = 3 and *K* = −1, *F*(*z*) is a symmetric bandpass filter centered on *π*/2. Consequently, it can be expressed as

$$F(z) = 3P(z^2) \tag{26}$$

where *P*(*z*) is a unity-gain-passband third-band highpass filter. The polyphase decomposition of *P*(*z*) is then *P*(*z*) = 1/3 + *z*<sup>−1</sup>*P*<sub>1</sub>(*z*<sup>3</sup>) + *z*<sup>−2</sup>*P*<sub>2</sub>(*z*<sup>3</sup>), which leads to *F*(*z*) = 1 + 3*z*<sup>−2</sup>*P*<sub>1</sub>(*z*<sup>6</sup>) + 3*z*<sup>−4</sup>*P*<sub>2</sub>(*z*<sup>6</sup>) and the polyphase components

$$F\_0(z) = 1, \quad F\_1(z) = 3z^{-1} P\_2(z^2), \quad F\_2(z) = 3P\_1(z^2). \tag{27}$$

A filter *F*(*z*) of the form above requires roughly only one third of the complexity of a general filter of the same order.
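The decomposition (26)–(27) can be verified numerically; the two-tap branches *P*1 and *P*2 below are arbitrary placeholders:

```python
import numpy as np

rng = np.random.default_rng(7)
P1 = rng.standard_normal(2)   # arbitrary polyphase branches of P(z)
P2 = rng.standard_normal(2)

# P(z) = 1/3 + z^{-1} P1(z^3) + z^{-2} P2(z^3)
p = np.zeros(6)
p[0] = 1 / 3
p[1::3] = P1                  # taps 1 and 4
p[2::3] = P2                  # taps 2 and 5

# F(z) = 3 P(z^2): spread by 2, scale by 3
F = np.zeros(2 * len(p) - 1)
F[::2] = 3 * p

F0, F1, F2 = F[0::3], F[1::3], F[2::3]   # polyphase w.r.t. M = 3
```

The components come out exactly as in (27): *F*0(*z*) = 1, *F*1(*z*) = 3*z*<sup>−1</sup>*P*2(*z*<sup>2</sup>), and *F*2(*z*) = 3*P*1(*z*<sup>2</sup>).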



#### **3.1. Complexity savings**

As opposed to the case of linear-phase overall filters considered in Section 2, we can here achieve complexity savings without using additional FRM techniques. The reason is two-fold. First, as seen above, *F*(*z*) = 3*P*(*z*<sup>2</sup>) is already sparse. Second, as a mid-band system is often a nonlinear-phase system, the filter coefficients are not symmetric. By using the two-rate based structure, symmetry can partially be utilized, as *F*(*z*) is a symmetric filter whereas only the low-order *G*(*z*) is unsymmetric. As to the sparsity, the degree of sparseness can be increased by realizing *P*(*z*) as an FRM third-band filter. Details are given in [18].

#### **3.2. Examples**

*Example 3:* Consider the approximation of a fractional-degree differentiator with the desired function *D*(*jω*) = *e*<sup>−*jω*(*NG*+*NF*)/4</sup>(*jω*)<sup>0.5</sup> in the frequency band *ω* ∈ [0.02*π*, 0.98*π*] and for an approximation error of *δ<sup>e</sup>* = 0.01. Figure 5 shows the frequency response and approximation error of the two-rate based design. The filter has been designed using essentially the same three-step procedure described earlier, after minor appropriate modifications, as detailed in [18]. Table 2 gives the results for the conventional direct-form realization and for the two-rate based realizations, both with a sparse regular bandpass filter and with a sparse FRM bandpass filter. The quantity *DH* denotes the integer part of the group delay, whereas DE denotes the number of delay elements. As seen from the table, substantial savings are achieved using the two-rate based structures, especially when the FRM technique is also utilized. As usual when using the FRM technique, one has to pay a price in the form of a somewhat increased delay. It is also noted that the savings increase/decrease with increased/decreased bandwidth (decreased/increased width of the don't-care bands). This is in line with the basic two-rate based scheme and was exemplified earlier in Example 2.

### **4. Multi-function systems**

In this section, we will discuss the extension to the realization of multi-function systems. The two-rate based approach is even more efficient for such systems, as the same *F*(*z*), and thus the same *F*0(*z*), is shared between all functions. We will illustrate this for Farrow-structure based (see [34]) variable fractional-delay (VFD) filters. As an example will reveal, the two-rate based structure offers dramatic complexity reductions in this application, even without using the additional FRM approach. By also incorporating the FRM approach, further complexity savings can be obtained.

#### **4.1. Variable fractional-delay filters**

Variable fractional-delay filters find applications in many different contexts like interpolation, resampling, delay estimation, and signal reconstruction, see [35–40].

**Figure 5.** Fractional-degree differentiator responses in Example 3 using the two-rate based structure with an FRM bandpass filter targeting *ωc*<sup>1</sup> = 0.02*π* and *ωc*<sup>2</sup> = 0.98*π*.

The VFD filter, with transfer function *H*(*z*, *d*), should, for *z* = *ej<sup>ω</sup>*, approximate the ideal VFD filter frequency response

$$D(j\omega, d) = e^{-j\omega(D\_H + d)}\tag{28}$$


http://dx.doi.org/10.5772/52198


Two-Rate Based Structures for Computationally Efficient Wide-Band FIR Systems

where *DH* is a fixed delay which usually is an integer or an integer plus a half. Further, *d* is the fractional delay. The ideal response should be approximated in the band *ω* ∈ [0, *ωc*], 0 < *ω<sup>c</sup>* < *π*, and for all fractional delays *d* ∈ [−0.5, 0.5] meaning that a whole sampling period (interval) is covered. In general, the sampling period is *T*, but we have used *T* = 1 in this chapter for simplicity.

Using the Farrow structure, *H*(*z*, *d*) is expressed in the form

$$H(z,d) = \sum\_{k=0}^{L} d^k H\_k(z). \tag{29}$$

where *Hk*(*z*) are fixed subfilters which, essentially, realize the weighted differentiators *e*−*jωDH* × (−*jω*)*k*/*k*!. This follows immediately by truncating the Taylor series expansion of *e*−*jωd*, see [41]. Further, when there are no restrictions on the fixed part of the delay, it is possible and efficient to use linear-phase subfilters *Hk*(*z*), thus with symmetric or antisymmetric impulse responses. We then have *DH* = *NH*/2, and the following two different cases. When the *Gk*(*z*) are of even order *NG*, they are of Type I (Type III) for even (odd) values of *k*. This results in an integer *DH*. In the odd-order case, the *Gk*(*z*) are instead of Type II (Type IV) for even (odd) values of *k*. In this case, *DH* is an integer plus a half. In both cases, the impulse responses are symmetric (anti-symmetric) for even (odd) values of *k*, thus *gk*(*n*)=(−1)*kgk*(*NG* − *n*).

**Figure 6.** Farrow structure realizing the VFD filter transfer function in (29).

Figure 6 shows the regular Farrow structure realizing (29). As seen, the problem amounts to realizing the *L* + 1 differentiator functions with ideal responses (−*jωT*)*k*/*k*!. In other words, it essentially corresponds to the realization of a multi-function system, although, in this case, the partial outputs are finally combined via the FD multiplications to form only one output. Using now the two-rate based approach introduced in Section 2.2, each *Hk*(*z*) is realized as

$$H\_k(z) = 2F\_0(z)G\_{k0}(z) + z^{-(D\_F+1)/2}G\_{k1}(z) \tag{30}$$

where *<sup>F</sup>*0(*z*) and *<sup>z</sup>*−(*DF*−1)/2 are again the polyphase components of a linear-phase HB interpolation filter *F*(*z*) with a passband gain of two and delay *DF*, whereas *Gk*0(*z*) and *Gk*1(*z*) are the polyphase components of the subfilters *Gk*(*z*). This follows from sandwiching the filter *F*(*z*)*G*(*z*, *d*) between the upsampler and downsampler, where *G*(*z*, *d*) approximates an FD filter in the region [0, *ωc*/2]. That is,

$$G(z,d) = \sum\_{k=0}^{L} d^k G\_k(z). \tag{31}$$
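Both (29) and (31) have the Farrow form: a polynomial in *d* whose coefficients are the outputs of fixed subfilters. A minimal software sketch of evaluating such a structure with Horner's rule (the subfilter coefficients below are arbitrary placeholders, not a designed VFD filter):

```python
def fir(h, x):
    """Direct convolution with zero initial state."""
    return [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
            for n in range(len(x))]

def farrow(subfilters, x, d):
    """y[n] = sum_k d^k (H_k x)[n], with the d-powers folded in Horner style."""
    branches = [fir(h, x) for h in subfilters]   # fixed, d-independent filtering
    y = branches[-1]
    for v in reversed(branches[:-1]):            # y <- d*y + v_k
        y = [d * yi + vi for yi, vi in zip(y, v)]
    return y
```

Changing *d* per output sample costs only the *L* multiply-and-add steps of the chain; the subfilter outputs themselves are reused unchanged.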

The overall realization is shown in Fig. 7. It is noted that *F*(*z*) again can be realized using the FRM approach in order to further reduce the complexity, as demonstrated in [19]. In this case, *F*0(*z*) is again realized as in Fig. 1(c) or (d).
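As a toy illustration of how each branch filter is assembled per (30) using plain impulse-response (polynomial) arithmetic; the coefficient lists and the delay value below are placeholders, not designed filters.

```python
def polymul(a, b):
    """Coefficient-wise product of two transfer-function polynomials."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def polyphase_2(h):
    """Split h(z) = h0(z^2) + z^-1 h1(z^2) into its two polyphase parts."""
    return list(h[::2]), list(h[1::2])

def assemble_hk(f0, gk0, gk1, df):
    """Impulse response of H_k(z) = 2 F0(z) Gk0(z) + z^-((DF+1)/2) Gk1(z)."""
    a = [2.0 * c for c in polymul(f0, gk0)]
    b = [0.0] * ((df + 1) // 2) + list(gk1)
    n = max(len(a), len(b))
    a += [0.0] * (n - len(a))
    b += [0.0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]
```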

#### **4.2. Design examples**

*Example 4:* We consider the design of a VFD filter with a bandwidth of *ω<sup>c</sup>* = 0.9*π*. The filter has been designed using essentially the same three-step procedure described earlier. More details are given in [19]. Tables 3 and 4 summarize the results, where the number of multiplications and additions covers all fixed subfilters assuming appropriate use of direct-form and transposed direct-form realizations. In addition, *L* general multipliers and adders are needed for implementing the FD multiply-and-add chain, but this is required in all VFD filter structures. Further, the NRMS and *δgd* values given in the tables indicate the normalized root-mean-square error and maximum group-delay error as defined by (33) and (36), respectively, in [42], whereas *δ<sup>e</sup>* denotes the maximum of the modulus of the complex error. Further, DE denotes the number of delay elements. It is seen from the table that the two-rate based structure is considerably more efficient than the regular Farrow structure.


**Figure 7.** Two-rate based structure realizing the VFD filter transfer function in (29) with *Hk* (*z*) as in (30).

| Linear-Phase | *L* | *NH* | *NE* | *NF* | *NA* | *NB* | *P* |
|---|---|---|---|---|---|---|---|
| Reg. Farrow, WLS [44] | 7 | 73 | n/a | n/a | n/a | n/a | n/a |
| Hybrid, WLS [42] | 7 | 117 | n/a | n/a | n/a | n/a | n/a |
| Simplified [43] | 9 | 73 | n/a | n/a | n/a | n/a | n/a |
| Two-rate based | 7 | 75 | 12 | 138 | n/a | n/a | n/a |
| Two-rate based with FRM | 7 | 87 | 12 | 162 | 46 | 26 | 3 |

**Table 3.** Results of Example 4.

| Linear-Phase | *DH* | DE | Mult | Add | NRMS (%) | *δe* [dB] | *δgd* |
|---|---|---|---|---|---|---|---|
| Reg. Farrow, WLS [44] | 36.5 | 73 | 191 | 374 | 0.00023 | −100.04 | 0.000446 |
| Hybrid, WLS [42] | 58.5 | 117 | 148 | 285 | 0.00021 | −100.21 | 0.000395 |
| Simplified [43] | 36.5 | 73 | 91 | 158 | not reported | −102.14\* | not reported |
| Two-rate based | 37.5 | 75 | 80 | 149 | 0.00019 | −102.96 | 0.000318 |
| Two-rate based with FRM | 43.5 | 87 | 71 | 129 | 0.00019 | −104.15 | 0.000263 |

**Table 4.** Results of Example 4. \*Magnitude and phase delay errors.

It is also more efficient than two alternative approaches whose results are also included in the table, namely the hybrid structure in [42] and the structure in [43], which meets roughly the same specification. It is also seen that the extended structure that utilizes the FRM technique offers further complexity reductions. The price to pay in this case is however a slight increase of the delay and delay elements, but the figures are still considerably smaller than for the structure in [42]. Compared with the regular Farrow structure in [44] and the one in [43], one has to pay the moderate price of a delay and delay element increase of some 3% using the basic structure in Fig. 7(b) and 19% using the extended structure incorporating the FRM approach.

**Figure 8.** Two-rate and FRM based VFD filter in Example 4. Error and group delay responses for 11 evenly distributed values of *d* between −0.5 and 0.5.

## **5. Multiple-constant multiplication techniques for the subfilter implementations**

This section will discuss implementation details, design trade-offs, and comparisons when the multiplications in the filters are implemented using multiple constant multiplication (MCM) techniques, which realize a number of constant multiplications using only shifts, adders, and subtracters. The situation here differs, though, from the most commonly considered transposed direct-form filter realization, as the proposed structures consist of cascaded subfilters. This section will therefore elaborate on these issues and provide design examples. The focus here is on VFD filters using the two-rate based structure without the additional FRM approach.

For dedicated hardware implementations, one can take advantage of MCM techniques to reduce the implementation cost. Multiplications by constant coefficients can be performed using adders, subtracters, and shifts. As adders and subtracters have approximately the same implementation complexity, we will refer to both as adders. Efficient realization of constant multiplications is an active research area, and much effort has been focused on the case where one input data stream is multiplied by several constant coefficients. This problem has mainly been motivated by single-rate FIR filters, where for a transposed direct-form FIR filter the input is multiplied by several coefficients, see [45–49]. The resulting implementation of several multiplications is denoted a multiplier block, as in [45].
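As a minimal illustration of shift-and-add multiplication, the sketch below uses canonical signed-digit (CSD) recoding of a single coefficient; this is not the RAG-n algorithm of [45], which additionally shares intermediate results between coefficients.

```python
def csd(c):
    """Canonical signed-digit digits of a positive integer c, returned as
    (sign, shift) pairs with c == sum(sign << shift); no two adjacent
    nonzero digits, so the term count is minimal for this recoding."""
    terms, shift = [], 0
    while c:
        if c & 1:
            digit = 2 - (c & 3)       # +1 if c % 4 == 1, else -1
            terms.append((digit, shift))
            c -= digit                # may ripple a carry upwards
        c >>= 1
        shift += 1
    return terms

def mul_shift_add(x, c):
    """Multiply x*c using only shifts and additions/subtractions."""
    return sum(sign * (x << shift) for sign, shift in csd(c))

# 93 = 0b1011101 has five nonzero bits, but its CSD form
# 93 = 128 - 32 - 4 + 1 has four terms, i.e., three adders.
```

MCM algorithms such as RAG-n go further by reusing partial sums across coefficients, which is why the adder counts reported below are smaller than per-coefficient CSD costs.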

Work has also been done for sampling rate change with an integer factor in [50] and rational factor in [51], where it was shown that FIR filters in parallel can be implemented either using one multiplier block or by using a constant matrix multiplication block, as in [52–55], with the first approach requiring more delay elements than the latter. As a Farrow filter also is composed of several FIR filters in parallel we have the same implementation alternatives here, not only the single multiplier block case as reported in [56]. This has been extensively discussed in [57]. In Fig. 9, the approach to implement the subfilters proposed in [56] is shown. This approach typically requires few additions for the multiplier block. However, a separate set of registers is required for each subfilter and the number of structural adders

is high. Alternatively, in Fig. 10, an approach based on the observation in [50] and further discussed in [57] is shown. Here, only one set of registers is required and the structural adders of the subfilters are merged into the matrix-vector multiplication.

**Figure 9.** Realization of Farrow filter using transposed direct form subfilters resulting in a multiplier block and several sets of registers.

**Figure 10.** Realization of Farrow filter using direct form subfilters resulting in a constant matrix multiplication and a single set of registers.

The Farrow filter part of the two-rate based structure in Fig. 7 can be implemented similarly to what is shown in Figs. 9 or 10. For transposed direct form subfilters, as in Fig. 9, the corresponding structure would have two inputs, and, hence, result in a constant matrix multiplication. Using direct form subfilters, as in Fig. 10, requires two sets of registers, one for each input. For the HB filter it is convenient to use a direct form subfilter as the delayed input values are easily obtained from the registers. We note that the input to the lower branch subfilters in Fig. 7 is just a delayed version of the input, which is available from the upper branch subfilter *F*0(*z*). Therefore, it is possible to use the registers of the HB filter as registers for direct form subfilters. The resulting structure is illustrated in Fig. 11.

Naturally, it is also possible to use a transposed direct form HB filter and/or transposed direct form subfilters in the Farrow filter part. From a complexity point of view, a transposed direct form HB filter will have the same number of adders and registers. However, it will not be possible to share registers as shown in Fig. 11.
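The behavioral equivalence of the two forms, and the different register placement that drives the sharing argument above, can be sketched as follows (a software model, not a hardware description):

```python
def fir_direct(h, x):
    """Direct form: the registers form a tapped delay line on the input."""
    regs = [0.0] * (len(h) - 1)
    y = []
    for xn in x:
        taps = [xn] + regs
        y.append(sum(c * t for c, t in zip(h, taps)))
        regs = taps[:-1]
    return y

def fir_transposed(h, x):
    """Transposed direct form: the registers hold partial sums of products."""
    regs = [0.0] * (len(h) - 1)
    y = []
    for xn in x:
        y.append(h[0] * xn + (regs[0] if regs else 0.0))
        regs = [h[k + 1] * xn + (regs[k + 1] if k + 1 < len(regs) else 0.0)
                for k in range(len(regs))]
    return y
```

In the direct form the registers hold delayed inputs, which is why the HB filter's delay line can also feed the lower-branch subfilters in Fig. 11; in the transposed form the registers hold filter-specific partial sums and cannot be shared.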

**Figure 11.** Realization of the filter structure in Fig. 7 using direct form subfilters resulting in a sum-of-products block and a constant matrix multiplication.

## **5.1. Example and comparisons**


*Example 5:* We consider a specification where the bandwidth is 0.9*π* and the modulus of the complex error should be below 0.0042. To meet this specification, the Farrow structure in Fig. 6, with subfilters jointly optimized as outlined in detail in [41], requires 45 fixed multipliers, 88 fixed adders, and 5 variable multipliers. The two-rate based structure in Fig. 7, with subfilters jointly optimized as detailed in [14], requires 30 fixed multipliers, 53 fixed adders, and 5 variable multipliers. Thus, in terms of number of multiplications and additions, the two-rate based structure is superior.

To refine the comparison when MCM techniques are applied we must quantize the filter coefficients. For a relative comparison, one can use simple rounding. We found that the original Farrow structure requires 11 bits to fulfil the requirements whereas the structure in Fig. 7 requires 13 fractional bits. The slightly larger number of bits for the two-rate approach is explained by the fact that a cascaded filter must meet the requirements which leads to a somewhat more stringent requirement on the subfilters, at least when simple rounding is used.

For the regular Farrow structure in Fig. 6, together with the realization in Fig. 9, a total of 33 adders are required for the multiplier block using the RAG-n algorithm in [45]. This is an optimal result since there are 33 different (odd) coefficients, as discussed in [58], and, hence, there is no need to apply the slightly more efficient algorithms in [47–49]. Furthermore, 80 structural adders and 118 registers are required for the FIR subfilters. Further, five general multipliers and four additional adders are required (for both the regular Farrow and two-rate based filters). Alternatively, using the proposed structure in Fig. 10, the constant matrix multiplication can be realized with 96 adders using the algorithm in [53]. In addition, 26 structural adders are required. One observation is that separating the symmetric and anti-symmetric subfilters may reduce the complexity, as some algorithms work better for fewer columns. If this is utilized, the number of adders can be reduced to 107 by applying the algorithm in [52] to the resulting two matrices and adding and subtracting the results. The number of registers is now decreased to 30, whereas the number of general multipliers and additional adders is constant.

For the two-rate based structure in Fig. 7, the HB filter requires 43 structural adders and 29 registers. The sum of products is realized by computing the corresponding multiplier block with RAG-n and transposing it. For the constant matrix multiplication, 38 adders are required. This number is not reduced by separating the symmetric and anti-symmetric subfilters. A total of six additional registers are required, as well as the five general multipliers and four additional adders.

| Filter structure | Mult | Add | Registers |
|---|---|---|---|
| Farrow with transposed direct-form subfilters | 5 | 117 | 118 |
| Farrow with direct-form subfilters | 5 | 111 | 30 |
| Proposed two-rate based structure | 5 | 85 | 35 |

**Table 5.** Results of Example 5.

The results are summarized in Table 5. It is seen that the two-rate based structure still has the lowest complexity for most implementation technologies, as five registers will typically be less complex to implement than 26 adders. Furthermore, whereas the use of transposed direct form subfilters for the Farrow filter, as proposed in [50, 57], reduces the number of adders related to the multiplication, it is for the two-rate approach still more efficient to use direct form subfilters.

## **6. Conclusion**

This chapter has reviewed recent two-rate based structures and their design for obtaining efficient wide-band FIR systems. Left-band, right-band, and mid-band systems, as well as single-function and multi-function systems, were covered. Several design examples were given, for differentiators and VFD filters (a special case of multi-function systems), revealing dramatic complexity savings for wide-band specifications. More details can be found in [12–19].

## **Author details**

Håkan Johansson and Oscar Gustafsson

Division of Electronics Systems, Department of Electrical Engineering, Linköping University, Sweden

## **References**

[1] M. Renfors and Y. Neuvo. The maximum sampling rate of digital filters under hardware speed constraints. *IEEE Trans. Circuits Syst.*, 28(3):196–202, Mar. 1981.

[2] A. Fettweis. On assessing robustness of recursive digital filters. *Eur. Trans. Telecomm. Relat. Technol.*, 1(2):103–109, Mar.–Apr. 1990.

[3] Y. C. Lim. Frequency-response masking approach for the synthesis of sharp linear phase digital filters. *IEEE Trans. Circuits Syst.*, CAS-33(4):357–364, Apr. 1986.

[4] T. Saramäki. Finite impulse response filter design. In S.K. Mitra and J.F. Kaiser, editors, *Handbook for Digital Signal Processing*, chapter 4, pages 155–277. New York: Wiley, 1993.

[5] T. Saramäki, Y. C. Lim, and R. Yang. The synthesis of half-band filter using frequency-response masking technique. *IEEE Trans. Circuits Syst. II*, 42(1):58–60, Jan. 1995.

[6] P. S. R. Diniz, L. C. R. de Barcellos, and S. L. Netto. Design of high-resolution cosine-modulated transmultiplexers with sharp transition band. *IEEE Trans. Signal Processing*, 52(5):1278–1288, May 2004.

[7] H. Johansson. Two classes of frequency-response masking linear-phase FIR filters for interpolation and decimation. *Circuits, Syst., Signal Processing*, 25(2):175–200, Apr. 2006.

[8] R. Bregovic, Y. C. Lim, and T. Saramäki. Frequency-response masking-based design of nearly perfect-reconstruction two-channel FIR filterbanks with rational sampling factors. *IEEE Trans. Circuits Syst. I: Regular Papers*, 55(7):2002–2012, July 2008.

[9] Y. Wei and Y. Lian. Frequency-response masking filters based on serial masking schemes. *Circuits, Syst., Signal Processing*, 29(1):7–24, Feb. 2010.

[10] J. Yli-Kaakinen and T. Saramäki. An efficient algorithm for the optimization of FIR filters synthesized using the multistage frequency-response masking approach. *Circuits, Syst., Signal Processing*, 30(1):157–183, Feb. 2011.

[11] N. P. Murphy, A. Krukowski, and I. Kale. Implementation of a wide-band integer and fractional delay element. *Electron. Lett.*, 30(20):1658–1659, 1994.

[12] G. Jovanovic-Dolecek and J. Diaz-Carmona. One structure for wide-bandwidth and high-resolution fractional delay filter. In *Proc. IEEE Int. Conf. Electr. Circuits Syst.*, 2002.

[13] E. Hermanowicz. On designing a wideband fractional delay filter using the Farrow approach. In *Proc. XII European Signal Processing Conf.*, Vienna, Austria, Sept. 6–10 2004.

[14] E. Hermanowicz and H. Johansson. On designing minimax adjustable wideband fractional delay FIR filters using two-rate approach. In *Proc. European Conf. Circuit Theory Design*, Cork, Ireland, Aug. 29–Sept. 1 2005.

[15] H. Johansson, O. Gustafsson, K. Johansson, and L. Wanhammar. Adjustable fractional-delay FIR filters using the Farrow structure and multirate techniques. In *Proc. IEEE Asia Pacific Conf. Circuits Syst.*, 2006.

[16] J. Diaz-Carmona and G. Jovanovic-Dolecek. Frequency-based optimization design for fractional delay FIR filters with software-defined radio applications. *Int. J. Digital Multimedia Broadcasting*, 53(6):1–6, June 2010.

[17] Z. U. Sheikh and H. Johansson. A class of wide-band linear-phase FIR differentiators using a two-rate approach and the frequency-response masking technique. *IEEE Trans. Circuits Syst. I: Regular Papers*, 58(8):1827–1839, Aug. 2011.

[18] Z. U. Sheikh and H. Johansson. Efficient wide-band FIR LTI systems derived via multi-rate techniques and sparse bandpass filters. *IEEE Trans. Signal Processing*, 60(7):3859–3863, July 2012.

10.5772/52198

211

http://dx.doi.org/10.5772/52198

[35] F. M. Gardner. Interpolation in digital modems–Part I: Fundamentals. *IEEE Trans.*

Two-Rate Based Structures for Computationally Efficient Wide-Band FIR Systems

[36] T. I Laakso, V. Välimäki, M. Karjalainen, and U. K. Laine. Splitting the unit delay–Tools for fractional delay filter design. *Signal Processing Mag.*, 13(1):30–60, Jan. 1996.

[37] S. R. Dooley and A. K. Nandi. On explicit time delay estimation using the Farrow

[38] H. Johansson and P. Löwenborg. Reconstruction of nonuniformly sampled bandlimited signals by means of digital fractional delay filters. *IEEE Trans. Signal Processing*,

[39] M. Olsson, H. Johansson, and P. Löwenborg. Time-delay estimation using Farrow-based fractional-delay FIR filters: filter approximation vs. estimation errors.

[40] S. Tertinek and C. Vogel. Reconstruction of nonuniformly sampled bandlimited signals using a differentiator-multiplier cascade. *IEEE Trans. Circuits Syst. I: Regular papers*,

[41] H. Johansson and P. Löwenborg. On the design of adjustable fractional delay FIR filters.

[42] T.-B. Deng. Hybrid structures for low-complexity variable fractional delay filters. *IEEE*

[43] J. Yli-Kaakinen and T. Saramäki. A simplified structure for FIR filters with an adjustable fractional delay. In *Proc. IEEE Int. Symp. Circuits Syst.*, New Orleans, USA,

[44] T.-B. Deng. Symmetric structures for odd-order maximally flat and weighted-least-squares variable fractional-delay filters. *IEEE Trans. Circuits Syst.*

[45] A. G. Dempster and M. D. Macleod. Use of minimum-adder multiplier blocks in FIR

[46] R. I. Hartley. Subexpression sharing in filters using canonic signed digit multipliers.

[47] Y. Voronenko and M. Püschel. Multiplierless multiple constant multiplication. *ACM*

[48] O. Gustafsson. A difference based adder graph heuristic for multiple constant multiplication problems. In *Proc. IEEE Int. Symp. Circuits Syst.*, pages 1097–1100, New

[49] L. Aksoy, E.O. Günes, and P. Flores. Search algorithms for the multiple constant multiplications problem: Exact and approximate. 34(5):151–162, August 2010.

digital filters. *IEEE Trans. Circuits Syst. II*, 42(9):569–577, Sept. 1995.

In *Proc. XIV European Signal Processing Conf.*, Florence, Italy, Sept. 4–8 2006.

*Comm.*, 41(3):502–508, Mar. 1993.

50(11):2757–2767, Nov. 2002.

55(8):2273–2286, Sept. 2008.

May 27–30 2007.

*Trans. Algorithms*, 2006.

Orleans, USA, May 27–30, 2007.

structure. *Signal Processing*, 72:53–57, Jan. 1999.

*IEEE Trans. Circuits Syst. II*, 50(4):164–169, Apr. 2003.

*I: Regular Papers*, 54(12):2718–2732, Dec. 2007.

*IEEE Trans. Circuits Syst. II*, 43(10):677–688, Oct. 1996.

*Trans. Circuits Syst. I: Regular Papaers*, 57(4):897–910, Apr. 2010.


[35] F. M. Gardner. Interpolation in digital modems–Part I: Fundamentals. *IEEE Trans. Comm.*, 41(3):502–508, Mar. 1993.

22 Digital Filters and Signal Processing

2012.

Aug. 29–31 2011.

*appear*, 60(7):3859–3863, July 2012.

[18] Z. U. Sheikh and H. Johansson. Efficient wide-band FIR LTI systems derived via multi-rate techniques and sparse bandpass filters. *IEEE Trans. Signal Processing, to*

[19] H. Johansson and E. Hermanowicz. Two-rate based low-complexity variable fractional-delay FIR filter structures. *IEEE Trans. Circuits Syst. I: Regular papers, to appear*,

[20] J. F. Kaiser. Nonrecursive digital filter design using *I*0-sinh window function. In *Proc.*

[21] K. Ichige, M Iwaki, and R. Ishii. Accurate estimation of minimum filter length for optimum FIR digital filters. *IEEE Trans. Circuits Syst. II*, 47(10):1008–1016, Oct. 2000.

[22] Z. U. Sheikh, A. Eghbali, and H. Johansson. Linear-phase FIR digital differentiator order estimation. In *Proc. European Conf. Circuit Theory Design*, Linköping, Sweden,

[24] L. Rabiner, J. McClellan, and T. Parks. FIR digital filter design technique using weighted-Chebyshev approximation. *IEEE Proc.*, 63(4):595–610, Apr. 1975.

[25] G. Mollova. Compact formulas for least-squares design of digital differentiators.

[26] J. J. Shyu, S. C. Pei, and Y. D. Huang. Least-squares design of variable maximally linear FIR differentiators. *IEEE Trans. Signal Processing*, 57(11):4568–4573, Nov. 2009.

[28] J. H. McClellan, T. W. Parks, and L.R. Rabiner. A computer program for designing optimum FIR linear phase digital filters. *IEEE Trans. Audio Electroacoust.*,

[30] Y. Q. Chen and K. L. Moore. Discretization schemes for fractional-order differentiators

[31] R. S. Barbosa, J. A. T. Machado, and M. F. Silva. Time-domain design of fractional differintegrators using least-squares. *Signal Processing*, 86(10):2567–2581, Oct. 2006.

[32] C. C. Tseng and S.-L. Lee. Design of fractional order digital differentiator using radial basis function. *IEEE Trans. Circuits Syst. I: Regular Papers*, 57(7):1708–1718, July 2010.

[33] B. T. Krishna. Studies on fractional order differentiators and integrators: A survey.

[34] C. W. Farrow. A continuously variable delay element. In *Proc. IEEE Int. Symp., Circuits,*

[27] S. G. Nash and A. Sofer. *Linear and Nonlinear Programming*. McGraw-Hill, 1996.

[29] T. W. Parks and C. S. Burrus. *Digital Filter Design*. John Wiley and Sons, 1987.

and integrators. *IEEE Trans. Circuits Syst. I.*, 49:363–367, Mar. 2002.

*Syst.*, volume 3, pages 2641–2645, Espoo, Finland, June 7–9 1988.

[23] P. P. Vaidyanathan. *Multirate Systems and Filter Banks*. Prentice Hall, 1993.

*Int. Symp. Circuits Syst.*, volume 3, pages 20–23, Apr. 1974.

*Electron. Lett.*, 35(20):1695–1697, Sept. 1999.

*Signal Processing*, 91(3):386–426, Mar. 2011.

AU-21:506–526, Dec. 1973.


[50] O. Gustafsson and A. G. Dempster. On the use of multiple constant multiplication in polyphase FIR filters and filter banks. In *Proc. Nordic Signal Processing Symp.*, pages 53–56, Espoo, Finland, June 9–11, 2004.

[51] O. Gustafsson and H. Johansson. Efficient implementation of FIR filter based rational sampling rate converters using constant matrix multiplication. In *Proc. Fortieth Asilomar Conf. Signals, Systems and Computers ACSSC '06*, pages 888–891, 2006.

[52] A. G. Dempster, O. Gustafsson, and J. O. Coleman. Towards an algorithm for matrix multiplier blocks. In *Proc. European Conf. Circuit Theory Design*, Krakow, Poland, Sept. 1–4, 2003.

[53] M. D. Macleod and A. G. Dempster. A common subexpression elimination algorithm for low-cost multiplierless implementation of matrix multipliers. *Electronics Lett.*, 40(11):651–652, May 2004.

[54] N. Boullis and A. Tisserand. Some optimizations of hardware multiplication by constant matrices. *IEEE Trans. Computers*, 54(10):1271–1282, Oct. 2005.

[55] L. Aksoy, E. Costa, P. Flores, and J. Monteiro. Optimization algorithms for the multiplierless realization of linear transforms. *ACM Trans. Design Automation Electronic Syst.*, 17(1), article 3, Jan. 2012.

[56] A. G. Dempster and N. P. Murphy. Efficient interpolators and filter banks using multiplier blocks. *IEEE Trans. Signal Processing*, 48(1):257–261, Jan. 2000.

[57] M. Abbas, O. Gustafsson, and H. Johansson. On the fixed-point implementation of fractional-delay filters based on the Farrow structure. *IEEE Trans. Circuits Syst. I: Regular Papers*, 2012, to appear.

[58] O. Gustafsson. Lower bounds for constant multiplication problems. *IEEE Trans. Circuits Syst. II*, 54(11):974–978, Nov. 2007.

**Chapter 9**

## **Analytical Approach for Synthesis of Minimum** *L*2**-Sensitivity Realizations for State-Space Digital Filters**

Shunsuke Yamaki, Masahide Abe and Masayuki Kawamata

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/52194

> © 2013 Yamaki et al.; licensee InTech. This is an open access chapter distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## **1. Introduction**

In fixed-point implementations of digital filters, undesirable finite-word-length (FWL) effects arise due to coefficient truncation and arithmetic roundoff. These FWL effects must be kept as small as possible because they may cause serious degradation of the filter characteristics. The *L*2-sensitivity is one of the evaluation functions that quantify the coefficient quantization effects of state-space digital filters [1–14]. *L*2-sensitivity minimization is therefore a beneficial technique for the synthesis of high-accuracy digital filter structures, which achieve very low coefficient quantization error.

For the *L*2-sensitivity minimization problem, Yan *et al.* [1] and Hinamoto *et al.* [2] proposed solutions based on iterative calculations. Both solutions in [1] and [2] attempt to solve nonlinear equations by successive approximation. Since these solutions are not analytical, they do not guarantee that the *L*2-sensitivity converges to the minimum *L*2-sensitivity. Analytical solutions to the *L*2-sensitivity minimization problem are therefore needed in order to guarantee that these conventional methods indeed reach the minimum *L*2-sensitivity.

This chapter presents an analytical approach to the synthesis of the minimum *L*2-sensitivity realizations for state-space digital filters. In Section 3, we derive closed-form solutions to the *L*2-sensitivity minimization problem for second-order digital filters [12, 13]. This problem can be converted into that of finding the roots of a fourth-degree polynomial equation with constant coefficients, which can be solved algebraically in closed form. Next, in Section 4, we reveal that the *L*2-sensitivity minimization problem can be solved analytically for arbitrary filter order if the second-order modes are all equal [14]. We derive a general expression for the transfer function of digital filters with all second-order modes equal, and show that this general expression is obtained by a frequency transformation on a first-order prototype FIR digital filter. Furthermore, in Section 5, we show the absence of limit cycles in the minimum *L*2-sensitivity realizations [11]. The minimum *L*2-sensitivity realization without limit cycles can be synthesized by selecting an appropriate orthogonal matrix in the coordinate transformation matrix.

## **2. Preliminaries**

This section lays the groundwork for the main topics of this chapter, which appear in later sections. In Subsection 2.1, we begin with an introduction to state-space digital filters. Subsection 2.2 introduces the *L*2-sensitivity. Subsection 2.3 explains coordinate transformations, the operation that changes the structure of a state-space digital filter while keeping its transfer function invariant. Subsection 2.4 formulates the *L*2-sensitivity minimization problem.

#### **2.1. State-space digital filters**

It is beneficial to introduce the state-space representation for the synthesis of high-accuracy digital filters. For a given *N*th-order transfer function *H*(*z*), a state-space digital filter can be described by the following state-space equations:

$$\mathbf{x}(n+1) = \mathbf{A}\mathbf{x}(n) + \mathbf{b}u(n) \tag{1}$$

$$y(n) = \mathbf{c}\mathbf{x}(n) + du(n) \tag{2}$$


where *x*(*n*) ∈ ℜ*N*×<sup>1</sup> is a state variable vector, *u*(*n*) ∈ ℜ is a scalar input, *y*(*n*) ∈ ℜ is a scalar output, and *A* ∈ ℜ*N*×*N*, *b* ∈ ℜ*N*<sup>×</sup>1, *c* ∈ ℜ1×*N*, *d* ∈ ℜ are real constant matrices called coefficient matrices. The block diagram of the state-space digital filter (*A*, *b*, *c*, *d*) is shown in Fig. 1(a). In the case of second-order digital filters, the block diagram in Fig. 1(a) can be rewritten as shown in Fig. 1(b). The transfer function *H*(*z*) is described in terms of the coefficient matrices (*A*, *b*, *c*, *d*) as

**Figure 1.** Block diagram of a state-space digital filter: (a) *N*th-order digital filter; (b) second-order digital filter.

$$H(z) = \mathbf{c}(z\mathbf{I} - \mathbf{A})^{-1}\mathbf{b} + d.\tag{3}$$

In this chapter, the state-space representation (*A*, *b*, *c*, *d*) is assumed to be a minimal realization of *H*(*z*), that is, the state-space representation (*A*, *b*, *c*, *d*) is controllable and observable. The transfer function *H*(*z*) in Eq. (3) can be rewritten as

$$H(z) = c \frac{\text{adj}(zI - A)}{\det(zI - A)} b + d. \tag{4}$$

We see from the above equation that the poles of *H*(*z*) are the solutions of the characteristic equation det(*zI* − *A*) = 0, that is, the eigenvalues of the coefficient matrix *A*. Since we assume that the transfer function *H*(*z*) is stable, the absolute values of all eigenvalues of the coefficient matrix *A* are less than unity. It follows from Eq. (4) that the absolute values of the poles of *H*(*z*) are less than unity.
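As a small numerical sketch of Subsection 2.1 (not from the chapter; the coefficient matrices below are hypothetical example values), the following pure-Python code simulates the state-space recursion of Eqs. (1)–(2) for a second-order filter and confirms that the poles, i.e. the eigenvalues of *A*, lie inside the unit circle and satisfy det(*zI* − *A*) = 0:

```python
import cmath

# Hypothetical stable second-order realization (A, b, c, d)
A = [[0.0, 1.0],
     [-0.5, 1.0]]
b = [0.0, 1.0]
c = [0.3, 0.2]
d = 0.1

def eig2(M):
    """Eigenvalues of a 2x2 matrix from its characteristic polynomial."""
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = cmath.sqrt(tr * tr - 4.0 * det)
    return [(tr + disc) / 2.0, (tr - disc) / 2.0]

poles = eig2(A)
# Stability: all poles strictly inside the unit circle
assert all(abs(p) < 1.0 for p in poles)
# The poles are the roots of the characteristic equation det(zI - A) = 0
for p in poles:
    char = (p - A[0][0]) * (p - A[1][1]) - A[0][1] * A[1][0]
    assert abs(char) < 1e-12

# Simulate Eqs. (1)-(2) for an impulse input u(n) = delta(n)
x = [0.0, 0.0]
h = []
for n in range(8):
    u = 1.0 if n == 0 else 0.0
    y = c[0] * x[0] + c[1] * x[1] + d * u        # Eq. (2)
    x = [A[0][0] * x[0] + A[0][1] * x[1] + b[0] * u,
         A[1][0] * x[0] + A[1][1] * x[1] + b[1] * u]  # Eq. (1)
    h.append(y)
assert h[0] == d                                  # h(0) = d
assert abs(h[1] - (c[0] * b[0] + c[1] * b[1])) < 1e-12  # h(1) = c b
```

The impulse-response checks h(0) = *d* and h(1) = *cb* follow directly from expanding the recursion.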

#### **2.2.** *L*2**-sensitivity**

The *L*2-sensitivity is one of the measures that evaluate the coefficient quantization errors of digital filters. The *L*2-sensitivity of the filter *H*(*z*) with respect to the realization (*A*, *b*, *c*, *d*) is defined by

$$\begin{split} S(\mathbf{A}, \mathbf{b}, \mathbf{c}) &= \sum\_{k=1}^{N} \sum\_{l=1}^{N} \frac{1}{2\pi} \int\_{0}^{2\pi} \left| \frac{\partial H(e^{j\omega})}{\partial a\_{kl}} \right|^{2} d\omega \\ &+ \sum\_{k=1}^{N} \frac{1}{2\pi} \int\_{0}^{2\pi} \left| \frac{\partial H(e^{j\omega})}{\partial b\_{k}} \right|^{2} d\omega + \sum\_{l=1}^{N} \frac{1}{2\pi} \int\_{0}^{2\pi} \left| \frac{\partial H(e^{j\omega})}{\partial c\_{l}} \right|^{2} d\omega \\ &= \left\| \frac{\partial H(z)}{\partial \mathbf{A}} \right\|\_{2}^{2} + \left\| \frac{\partial H(z)}{\partial \mathbf{b}} \right\|\_{2}^{2} + \left\| \frac{\partial H(z)}{\partial c} \right\|\_{2}^{2} \end{split} \tag{5}$$

where $\|\cdot\|_2$ denotes the *L*2-norm. The derivatives of the transfer function *H*(*z*) with respect to the coefficient matrices are described by

$$\frac{\partial H(z)}{\partial \mathbf{A}} = \mathbf{G}^T(z)\mathbf{F}^T(z), \ \frac{\partial H(z)}{\partial \mathbf{b}} = \mathbf{G}^T(z), \ \frac{\partial H(z)}{\partial \mathbf{c}} = \mathbf{F}^T(z) \tag{6}$$

where *F*(*z*) and *G*(*z*) are defined by

$$\mathbf{F}(z) = (z\mathbf{I} - \mathbf{A})^{-1}\mathbf{b} \tag{7}$$

$$\mathbf{G}(z) = \mathbf{c}(z\mathbf{I} - \mathbf{A})^{-1} \tag{8}$$

respectively. Substituting Eqs. (6) into Eq. (5), the *L*2-sensitivity can be rewritten as

$$\begin{split} S(\mathbf{A}, \mathbf{b}, \mathbf{c}) &= \left\| \frac{\partial H(\mathbf{z})}{\partial \mathbf{A}} \right\|\_{2}^{2} + \left\| \frac{\partial H(\mathbf{z})}{\partial \mathbf{b}} \right\|\_{2}^{2} + \left\| \frac{\partial H(\mathbf{z})}{\partial \mathbf{c}} \right\|\_{2}^{2} \\ &= \left\| \mathbf{G}^{T}(\mathbf{z}) \mathbf{F}^{T}(\mathbf{z}) \right\|\_{2}^{2} + \left\| \mathbf{G}^{T}(\mathbf{z}) \right\|\_{2}^{2} + \left\| \mathbf{F}^{T}(\mathbf{z}) \right\|\_{2}^{2}. \end{split} \tag{9}$$
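Eq. (9) can be evaluated numerically by sampling the integrals of Eq. (5) on the unit circle. The sketch below (not from the chapter; hypothetical coefficients, pure Python) uses the fact that the entries of $G^T(z)F^T(z)$ are $G_k F_l$, so $\|\partial H/\partial A\|$ integrand equals $\|F\|^2\|G\|^2$:

```python
import cmath

# Hypothetical second-order realization (d does not enter Eq. (5))
A = [[0.0, 1.0], [-0.5, 1.0]]
b = [0.0, 1.0]
c = [0.3, 0.2]

def FG(z):
    """F(z) = (zI - A)^{-1} b and G(z) = c (zI - A)^{-1}, Eqs. (7)-(8)."""
    m = [[z - A[0][0], -A[0][1]], [-A[1][0], z - A[1][1]]]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    inv = [[ m[1][1] / det, -m[0][1] / det],
           [-m[1][0] / det,  m[0][0] / det]]
    F = [inv[0][0] * b[0] + inv[0][1] * b[1],
         inv[1][0] * b[0] + inv[1][1] * b[1]]
    G = [c[0] * inv[0][0] + c[1] * inv[1][0],
         c[0] * inv[0][1] + c[1] * inv[1][1]]
    return F, G

def l2_sensitivity(M=4096):
    """Riemann-sum approximation of Eq. (5) over M points on the unit circle."""
    s = 0.0
    for k in range(M):
        F, G = FG(cmath.exp(2j * cmath.pi * k / M))
        nF = sum(abs(v) ** 2 for v in F)
        nG = sum(abs(v) ** 2 for v in G)
        # sum_{k,l} |dH/da_{kl}|^2 = |F|^2 |G|^2 since (G^T F^T)_{kl} = G_k F_l
        s += nF * nG + nF + nG
    return s / M

S = l2_sensitivity()
assert S > 0.0
# The periodic integrand is smooth, so the sum converges rapidly in M
assert abs(l2_sensitivity(2048) - S) < 1e-6
```

Because the integrand is a smooth periodic function of ω (all poles lie strictly inside the unit circle), the rectangle rule converges very quickly.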

We can express the *L*2-sensitivity *S*(*A*, *b*, *c*) by using a complex integral as [1]

$$\begin{split} S(\mathbf{A}, \mathbf{b}, \mathbf{c}) &= \text{tr}\left[\frac{1}{2\pi j} \oint\_{|z|=1} \mathbf{F}(z) \mathbf{G}(z) (\mathbf{F}(z) \mathbf{G}(z))^\dagger \frac{dz}{z}\right] \\ &+ \text{tr}\left[\frac{1}{2\pi j} \oint\_{|z|=1} \mathbf{G}^\dagger(z) \mathbf{G}(z) \frac{dz}{z}\right] + \text{tr}\left[\frac{1}{2\pi j} \oint\_{|z|=1} \mathbf{F}(z) \mathbf{F}^\dagger(z) \frac{dz}{z}\right]. \end{split} \tag{10}$$

Applying Parseval's relation to Eq. (10), Hinamoto *et al.* expressed the *L*2-sensitivity in terms of the general Gramians as [2]

$$S(\mathbf{A}, \mathbf{b}, \mathbf{c}) = \text{tr}(\mathbf{W}\_0)\text{tr}(\mathbf{K}\_0) + \text{tr}(\mathbf{W}\_0) + \text{tr}(\mathbf{K}\_0) + 2\sum\_{i=1}^{\infty} \text{tr}(\mathbf{W}\_i)\text{tr}(\mathbf{K}\_i). \tag{11}$$

The general controllability Gramian *K<sup>i</sup>* and the general observability Gramian *W<sup>i</sup>* in Eq. (11) are defined as the solutions to the following Lyapunov equations:

$$\mathbf{K}\_{i} = \mathbf{A}\mathbf{K}\_{i}\mathbf{A}^{T} + \frac{1}{2} \left(\mathbf{A}^{i}\mathbf{b}\mathbf{b}^{T} + \mathbf{b}\mathbf{b}^{T}(\mathbf{A}^{T})^{i}\right) \tag{12}$$

$$\mathbf{W}\_{i} = \mathbf{A}^{T}\mathbf{W}\_{i}\mathbf{A} + \frac{1}{2}\left(\mathbf{c}^{T}\mathbf{c}\mathbf{A}^{i} + (\mathbf{A}^{T})^{i}\mathbf{c}^{T}\mathbf{c}\right) \tag{13}$$

for *i* = 0, 1, 2, ··· , respectively. The general controllability and observability Gramians are natural expansions of the controllability and observability Gramians, respectively. Letting *i* = 0 in Eqs. (12) and (13), we have the Lyapunov equations for the controllability Gramian *K*<sup>0</sup> and the observability Gramian *W*<sup>0</sup> as follows:

$$\mathbf{K}\_0 = \mathbf{A}\mathbf{K}\_0\mathbf{A}^T + \mathbf{b}\mathbf{b}^T \tag{14}$$

$$\mathbf{W}\_0 = \mathbf{A}^T \mathbf{W}\_0 \mathbf{A} + \mathbf{c}^T \mathbf{c}.\tag{15}$$
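To make Eqs. (11)–(15) concrete, the sketch below (not from the chapter; hypothetical coefficients, pure Python) computes the general Gramians by fixed-point iteration of the Lyapunov equations and checks that the truncated series of Eq. (11) matches a direct frequency-domain evaluation of Eq. (5):

```python
import cmath

# Hypothetical stable second-order realization (rho(A) < 1)
A = [[0.0, 1.0], [-0.5, 1.0]]
b = [0.0, 1.0]
c = [0.3, 0.2]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

def madd(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def lyap(M, Q):
    """Fixed-point iteration for X = M X M^T + Q; converges since rho(M) < 1."""
    X = [[0.0, 0.0], [0.0, 0.0]]
    Mt = transpose(M)
    for _ in range(300):
        X = madd(matmul(matmul(M, X), Mt), Q)
    return X

def tr(X):
    return X[0][0] + X[1][1]

bbT = [[b[i] * b[j] for j in range(2)] for i in range(2)]
cTc = [[c[i] * c[j] for j in range(2)] for i in range(2)]

def gramians(i):
    """General Gramians K_i and W_i from Eqs. (12)-(13)."""
    Ai = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(i):
        Ai = matmul(Ai, A)  # A^i
    half = lambda X, Y: [[0.5 * (X[p][q] + Y[p][q]) for q in range(2)] for p in range(2)]
    Ki = lyap(A, half(matmul(Ai, bbT), matmul(bbT, transpose(Ai))))
    Wi = lyap(transpose(A), half(matmul(cTc, Ai), matmul(transpose(Ai), cTc)))
    return Ki, Wi

# Truncated series of Eq. (11); the terms decay geometrically with i
K0, W0 = gramians(0)
S11 = tr(W0) * tr(K0) + tr(W0) + tr(K0)
for i in range(1, 60):
    Ki, Wi = gramians(i)
    S11 += 2.0 * tr(Wi) * tr(Ki)

# Direct frequency-domain evaluation of Eq. (5) for comparison
def S_freq(M=4096):
    s = 0.0
    for k in range(M):
        z = cmath.exp(2j * cmath.pi * k / M)
        m = [[z - A[0][0], -A[0][1]], [-A[1][0], z - A[1][1]]]
        det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
        inv = [[ m[1][1] / det, -m[0][1] / det],
               [-m[1][0] / det,  m[0][0] / det]]
        F = [inv[0][0] * b[0] + inv[0][1] * b[1], inv[1][0] * b[0] + inv[1][1] * b[1]]
        G = [c[0] * inv[0][0] + c[1] * inv[1][0], c[0] * inv[0][1] + c[1] * inv[1][1]]
        nF = sum(abs(v) ** 2 for v in F)
        nG = sum(abs(v) ** 2 for v in G)
        s += nF * nG + nF + nG
    return s / M

Sf = S_freq()
assert abs(S11 - Sf) < 1e-6
```

Note that with *i* = 0, `gramians(0)` reduces exactly to Eqs. (14)–(15).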

The controllability Gramian *K*<sup>0</sup> and the observability Gramian *W*<sup>0</sup> are positive definite symmetric, and the eigenvalues $\theta_i^2$ ($i = 1, \cdots, N$) of the matrix product *K*<sup>0</sup>*W*<sup>0</sup> are all positive. The second-order modes are defined as the square roots of these eigenvalues, i.e., the *θi*'s, as follows [3, 5]:

$$(\theta\_1, \dots, \theta\_N) = \sqrt{\text{Eigenvalues of } \mathbf{K}\_0 \mathbf{W}\_0}. \tag{16}$$

In the field of digital signal processing, the controllability and observability Gramians are also called the covariance and noise matrices of the filter (*A*, *b*, *c*, *d*), respectively.
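The second-order modes of Eq. (16) can be illustrated numerically. The sketch below (not from the chapter; hypothetical coefficients, pure Python) solves the Lyapunov equations (14)–(15) by fixed-point iteration and takes the square roots of the eigenvalues of *K*0*W*0:

```python
import math

# Hypothetical stable second-order realization
A = [[0.0, 1.0], [-0.5, 1.0]]
b = [0.0, 1.0]
c = [0.3, 0.2]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

def lyap(M, Q):
    """Fixed-point iteration for X = M X M^T + Q (rho(M) < 1)."""
    X = [[0.0, 0.0], [0.0, 0.0]]
    Mt = transpose(M)
    for _ in range(300):
        MXMt = matmul(matmul(M, X), Mt)
        X = [[MXMt[i][j] + Q[i][j] for j in range(2)] for i in range(2)]
    return X

K0 = lyap(A, [[b[i] * b[j] for j in range(2)] for i in range(2)])            # Eq. (14)
W0 = lyap(transpose(A), [[c[i] * c[j] for j in range(2)] for i in range(2)]) # Eq. (15)

# Second-order modes: square roots of the eigenvalues of K0 W0, Eq. (16)
P = matmul(K0, W0)
trP = P[0][0] + P[1][1]
detP = P[0][0] * P[1][1] - P[0][1] * P[1][0]
disc = math.sqrt(trP * trP - 4.0 * detP)  # real: the eigenvalues of K0 W0 are real
theta = sorted(math.sqrt(0.5 * (trP + s * disc)) for s in (1.0, -1.0))
assert theta[0] > 0.0 and theta[1] >= theta[0]
```

Since the realization is controllable and observable, *K*0 and *W*0 are positive definite and the resulting *θi*'s are strictly positive.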

#### **2.3. Coordinate transformations**


It is well known that the number of state-space realizations of a transfer function is infinite since the choice of the state variable vector *x*(*n*) is not unique. We can change the structure of a state-space digital filter by an operation called *coordinate transformation* while keeping the transfer function invariant.

Let *T* be a nonsingular *N* × *N* real matrix. If a coordinate transformation defined by

$$\bar{x}(n) = T^{-1}x(n) \tag{17}$$

is applied to a filter structure (*A*, *b*, *c*, *d*), we obtain a new filter structure which has the following coefficient matrices:

$$(\overline{A}, \overline{b}, \overline{c}, \overline{d}) = (T^{-1} A \mathbf{T}, T^{-1} b, \mathbf{c} \mathbf{T}, d). \tag{18}$$

It should be noted that the coordinate transformation does not affect the transfer function *H*(*z*), that is,

$$\begin{split} \overline{H}(z) &= \overline{c}(zI - \overline{A})^{-1}\overline{b} + \overline{d} \\ &= c(zI - A)^{-1}b + d \\ &= H(z). \end{split} \tag{19}$$

This implies that infinitely many filter structures exist for a given transfer function *H*(*z*), since there are infinitely many nonsingular *N* × *N* matrices. Therefore, one can synthesize infinitely many filter structures by coordinate transformations while keeping the transfer function invariant.

Under the coordinate transformation by the nonsingular matrix *T*, the general controllability Gramian *K<sub>i</sub>* and the general observability Gramian *W<sub>i</sub>* are transformed into the Gramians *K̄<sub>i</sub>* and *W̄<sub>i</sub>* given by

$$(\overline{\mathbf{K}}_{i}, \overline{\mathbf{W}}_{i}) = (\mathbf{T}^{-1}\mathbf{K}_{i}\mathbf{T}^{-T}, \mathbf{T}^{T}\mathbf{W}_{i}\mathbf{T}) \tag{20}$$

respectively. Letting *i* = 0 in Eq. (20) yields

$$(\overline{\mathbf{K}}\_0, \overline{\mathbf{W}}\_0) = (T^{-1}\mathbf{K}\_0 T^{-T}, T^T \mathbf{W}\_0 \mathbf{T}) \tag{21}$$

From Eq. (21), we have

$$\overline{\mathbf{K}}\_0 \overline{\mathbf{W}}\_0 = T^{-1} \mathbf{K}\_0 \mathbf{W}\_0 T \tag{22}$$


which shows that *K*0*W*0 has the same eigenvalues before and after the coordinate transformation. Thus, the second-order modes defined by Eq. (16) are invariant under coordinate transformations, which implies that the second-order modes depend only on the transfer function.
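A quick numerical check of Eqs. (17)–(22) is straightforward: transform a realization by a nonsingular *T*, then compare the transfer function, the transformed Gramians, and the eigenvalues of the Gramian product. The filter and the matrix *T* below are hypothetical example values.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Hypothetical filter (A, b, c, d) and an arbitrary nonsingular T
A = np.array([[0.5, 0.4], [-0.4, 0.5]])
b = np.array([[1.0], [0.5]])
c = np.array([[0.3, -0.2]])
d = 0.1
T = np.array([[1.0, 0.7], [0.2, 1.5]])
Ti = np.linalg.inv(T)

# Eq. (18): transformed realization (T^{-1} A T, T^{-1} b, c T, d)
A2, b2, c2 = Ti @ A @ T, Ti @ b, c @ T

def H(z, A, b, c, d):
    """Transfer function H(z) = c (zI - A)^{-1} b + d."""
    return (c @ np.linalg.inv(z * np.eye(len(A)) - A) @ b + d).item()

z = np.exp(1j * 0.3)  # a sample point on the unit circle

# Gramians before and after the transformation (cf. Eq. (21))
K0 = solve_discrete_lyapunov(A, b @ b.T)
W0 = solve_discrete_lyapunov(A.T, c.T @ c)
K0t = solve_discrete_lyapunov(A2, b2 @ b2.T)
W0t = solve_discrete_lyapunov(A2.T, c2.T @ c2)
```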

#### **2.4.** *L*2**-sensitivity minimization problem**

The value of the *L*2-sensitivity depends not only on the transfer function *H*(*z*) but also on the coordinate transformation matrix *T*. The *L*2-sensitivity of the filter (*T*<sup>−1</sup>*AT*, *T*<sup>−1</sup>*b*, *cT*, *d*) can be expressed in terms of the complex integral as

$$\begin{split} S(T^{-1}AT, T^{-1}b, cT) &= \text{tr}\left[\frac{1}{2\pi j}\oint_{|z|=1} T^{-1}F(z)G(z)TT^{T}(F(z)G(z))^{\dagger}T^{-T}\frac{dz}{z}\right] \\ &\quad + \text{tr}\left[\frac{1}{2\pi j}\oint_{|z|=1} T^{T}G^{\dagger}(z)G(z)T\frac{dz}{z}\right] + \text{tr}\left[\frac{1}{2\pi j}\oint_{|z|=1} T^{-1}F(z)F^{\dagger}(z)T^{-T}\frac{dz}{z}\right] \end{split} \tag{23}$$

or in terms of the general Gramians as

$$\begin{split} S(T^{-1}AT, T^{-1}b, cT) &= \text{tr}(T^{T}\mathbf{W}_0T)\text{tr}(T^{-1}\mathbf{K}_0T^{-T}) + \text{tr}(T^{T}\mathbf{W}_0T) + \text{tr}(T^{-1}\mathbf{K}_0T^{-T}) \\ &\quad + 2\sum_{i=1}^{\infty} \text{tr}(T^{T}\mathbf{W}_iT)\text{tr}(T^{-1}\mathbf{K}_iT^{-T}). \end{split} \tag{24}$$

The *L*2-sensitivity (23) can be expressed as a function of the positive definite symmetric matrix *P* as follows [1]:

$$\begin{split} S(\mathbf{P}) &= \text{tr}\left[\frac{1}{2\pi j}\oint\_{|z|=1} F(z)\mathbf{G}(z)\mathbf{P}(\mathbf{F}(z)\mathbf{G}(z))^\dagger \mathbf{P}^{-1}\frac{dz}{z}\right] \\ &+ \text{tr}\left[\frac{1}{2\pi j}\oint\_{|z|=1} \mathbf{G}^\dagger(z)\mathbf{G}(z)\mathbf{P}\frac{dz}{z}\right] + \text{tr}\left[\frac{1}{2\pi j}\oint\_{|z|=1} \mathbf{F}(z)\mathbf{F}^\dagger(z)\mathbf{P}^{-1}\frac{dz}{z}\right] \end{split} \tag{25}$$

where *P* = *TT<sup>T</sup>*. Similarly, the *L*2-sensitivity (24) can be expressed as a function of the positive definite symmetric matrix *P* as follows [2]:

$$\begin{split} S(\mathcal{P}) &= \text{tr}(\mathbf{W}\_0 \mathbf{P}) \text{tr}(\mathbf{K}\_0 \mathbf{P}^{-1}) + \text{tr}(\mathbf{W}\_0 \mathbf{P}) + \text{tr}(\mathbf{K}\_0 \mathbf{P}^{-1}) \\ &+ 2 \sum\_{i=1}^{\infty} \text{tr}(\mathbf{W}\_i \mathbf{P}) \text{tr}(\mathbf{K}\_i \mathbf{P}^{-1}) \end{split} \tag{26}$$

where *P* = *TT<sup>T</sup>*. The problem we consider here is to derive *the optimal positive definite symmetric matrix P*opt, which gives the global minimum of *S*(*P*) as follows:

$$\left. \frac{\partial \mathcal{S}(\mathcal{P})}{\partial \mathcal{P}} \right|\_{\mathcal{P} = \mathcal{P}\_{\text{opt}}} = \mathbf{0}.\tag{27}$$

If one can obtain the optimal positive definite symmetric matrix *P*opt, the optimal coordinate transformation matrix *T*opt is given by

$$T\_{\rm opt} = P\_{\rm opt}^{\frac{1}{2}} \mathcal{U} \tag{28}$$

where *U* is an *arbitrary* orthogonal matrix. This implies that infinitely many minimum *L*2-sensitivity realizations exist for a given digital filter *H*(*z*): the minimum *L*2-sensitivity realizations have a degree of freedom corresponding to orthogonal transformations.
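The freedom expressed by Eq. (28) is easy to verify numerically: since *U* is orthogonal, *T*opt*T*opt<sup>T</sup> = *P*opt regardless of the choice of *U*. A minimal sketch, with a hypothetical *P*opt and an arbitrary rotation as *U*:

```python
import numpy as np
from scipy.linalg import sqrtm

# Hypothetical positive definite symmetric matrix P_opt (example values)
P_opt = np.array([[2.0, 0.3],
                  [0.3, 1.0]])

phi = 0.7  # any angle: every 2x2 rotation is an orthogonal U
U = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])

T_opt = sqrtm(P_opt).real @ U   # Eq. (28): T_opt = P_opt^{1/2} U
P_back = T_opt @ T_opt.T        # equals P_opt for every orthogonal U
```

Because *P* = *TT<sup>T</sup>* is all that enters *S*(*P*), every such *T*opt attains the same minimum sensitivity.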

## **3. Analytical solutions to the** *L*2**-sensitivity minimization problem for second-order digital filters**

This section presents an analytical synthesis of the minimum *L*2-sensitivity realizations for second-order digital filters. We propose closed form solutions to the *L*2-sensitivity minimization problem of second-order state-space digital filters. The proposed closed form solutions greatly reduce the computation time and guarantee that the *L*2-sensitivity obtained by the iterative algorithm of the conventional method converges to the theoretical minimum. We show that the *L*2-sensitivity is expressed by a simple linear combination of exponential functions, so that the minimum *L*2-sensitivity realization is obtained by solving a fourth-degree polynomial equation with constant coefficients in closed form, without iterative calculations [12, 13].

#### **3.1. Problem formulation**


We adopt the balanced realization (*A*b, *b*b, *c*b, *d*b) as the initial realization to synthesize the minimum *L*2-sensitivity realization. The coefficient matrices (*A*b, *b*b, *c*b, *d*b) satisfy symmetric properties as follows:

$$A_{\mathbf{b}}^{T} = \Sigma A_{\mathbf{b}} \Sigma, \quad c_{\mathbf{b}}^{T} = \Sigma b_{\mathbf{b}} \tag{29}$$

where **Σ** is a signature matrix defined as follows:

$$\Sigma = \text{diag}(\sigma_1, \cdots, \sigma_N), \ \sigma_i = \pm 1 \ (i = 1, \cdots, N). \tag{30}$$


We exploit the symmetric properties of the balanced realization in order to simplify the *L*2-sensitivity formulation and minimization in the following discussion. Under this condition, the *L*2-sensitivity *S*(*P*) in Eq. (26) is rewritten as

$$\begin{split} S(\mathbf{P}) &= \text{tr}(\mathbf{W}\_0^{(\mathbf{b})} \mathbf{P}) \text{tr}(\mathbf{K}\_0^{(\mathbf{b})} \mathbf{P}^{-1}) + \text{tr}(\mathbf{W}\_0^{(\mathbf{b})} \mathbf{P}) + \text{tr}(\mathbf{K}\_0^{(\mathbf{b})} \mathbf{P}^{-1}) \\ &+ 2 \sum\_{i=1}^{\infty} \text{tr}(\mathbf{W}\_i^{(\mathbf{b})} \mathbf{P}) \text{tr}(\mathbf{K}\_i^{(\mathbf{b})} \mathbf{P}^{-1}) \end{split} \tag{31}$$

and thus, the *L*2-sensitivity minimization problem is formulated as follows:

$$\min\_{\mathbf{P}} S(\mathbf{P}) \text{ in Eq. (31)}\tag{32}$$

where *P* is an arbitrary positive definite symmetric matrix.

We derive the optimal positive definite symmetric matrix *P*opt which gives the global minimum of the *L*2-sensitivity *S*(*P*) in Eq. (31).

#### **3.2. Second-order digital filters**

Consider second-order digital filters with complex conjugate poles given by

$$H(z) = \frac{\alpha}{z - \lambda} + \frac{\alpha^*}{z - \lambda^*} + d \tag{33}$$

where (*λ*, *λ*<sup>∗</sup>) are *complex conjugate* poles, *α* is a *complex* scalar, and *d* is a *real* scalar<sup>1</sup>. We define scalar parameters *P*, *Q*, and *R* as follows:

$$P = \frac{|\alpha|}{1 - |\lambda|^2} \tag{34}$$

$$R + jQ = \frac{\alpha}{1 - \lambda^2} \tag{35}$$

which can be calculated directly from the transfer function *H*(*z*). The closed form expression of the balanced realization of the filter *H*(*z*) is given as follows [15]:

$$
\begin{bmatrix}
A_{\mathrm{b}} & b_{\mathrm{b}} \\
c_{\mathrm{b}} & d_{\mathrm{b}}
\end{bmatrix} = \begin{bmatrix}
\lambda_{\mathrm{r}} - \frac{\kappa - \kappa^{-1}}{2}\lambda_{\mathrm{i}} & \frac{\kappa + \kappa^{-1}}{2}\lambda_{\mathrm{i}} & \mu_1 + \mu_2 \\
-\frac{\kappa + \kappa^{-1}}{2}\lambda_{\mathrm{i}} & \lambda_{\mathrm{r}} + \frac{\kappa - \kappa^{-1}}{2}\lambda_{\mathrm{i}} & \mu_1 - \mu_2 \\
\mu_1 + \mu_2 & -(\mu_1 - \mu_2) & d
\end{bmatrix} \tag{36}
$$

<sup>1</sup> Also for such second-order digital filters, we can derive analytical solutions to the *L*2-sensitivity minimization problem [13].

where


$$\begin{cases} \lambda = \lambda_{\mathrm{r}} + j\lambda_{\mathrm{i}}, \; \alpha = \alpha_{\mathrm{r}} + j\alpha_{\mathrm{i}}, \; \kappa = \sqrt{\dfrac{P+Q}{P-Q}}, \\ \mu_1 = \sqrt{\dfrac{\kappa(|\alpha| - \alpha_{\mathrm{i}})}{2}}, \; \mu_2 = \sqrt{\dfrac{|\alpha| + \alpha_{\mathrm{i}}}{2\kappa}} \operatorname{sign}(\alpha_{\mathrm{r}}). \end{cases} \tag{37}$$

Using the parameters *<sup>P</sup>*, *<sup>Q</sup>*, and *<sup>R</sup>*, the controllability Gramian *<sup>K</sup>*(b) <sup>0</sup> and the observability Gramian *<sup>W</sup>*(b) <sup>0</sup> of the balanced realization (*A*b, *<sup>b</sup>*b, *<sup>c</sup>*b, *<sup>d</sup>*b) can be expressed as follows:

$$\begin{aligned} \mathbf{K}\_0^{(\mathbf{b})} &= \mathbf{W}\_0^{(\mathbf{b})} = \boldsymbol{\Theta} \\ \boldsymbol{\Theta} &= \text{diag}(\theta\_1, \theta\_2) \end{aligned} \tag{38}$$

$$\boldsymbol{\Theta} = \text{diag}\left(\sqrt{P^2 - Q^2} + R, \ \sqrt{P^2 - Q^2} - R\right). \tag{39}$$
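Equations (34)–(39) can be checked numerically: building the balanced realization of Eqs. (36)–(37) from a pole λ and residue α and solving the Lyapunov equations should reproduce the diagonal Gramian Θ of Eqs. (38)–(39). The pole and residue below are hypothetical example values.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

lam = 0.5j          # hypothetical complex pole, |lam| < 1
alpha = 0.6 + 0.8j  # hypothetical complex residue

# Eqs. (34)-(35): scalar parameters P, Q, R
P = abs(alpha) / (1.0 - abs(lam) ** 2)
RQ = alpha / (1.0 - lam ** 2)
R, Q = RQ.real, RQ.imag

# Eq. (37)
lr, li = lam.real, lam.imag
ar, ai = alpha.real, alpha.imag
kappa = np.sqrt((P + Q) / (P - Q))
mu1 = np.sqrt(kappa * (abs(alpha) - ai) / 2.0)
mu2 = np.sqrt((abs(alpha) + ai) / (2.0 * kappa)) * np.sign(ar)

# Eq. (36): balanced realization (A_b, b_b, c_b)
Ab = np.array([[lr - (kappa - 1/kappa) / 2 * li, (kappa + 1/kappa) / 2 * li],
               [-(kappa + 1/kappa) / 2 * li, lr + (kappa - 1/kappa) / 2 * li]])
bb = np.array([[mu1 + mu2], [mu1 - mu2]])
cb = np.array([[mu1 + mu2, -(mu1 - mu2)]])

K0 = solve_discrete_lyapunov(Ab, bb @ bb.T)
W0 = solve_discrete_lyapunov(Ab.T, cb.T @ cb)

# Eqs. (38)-(39): both Gramians should equal Theta
Theta = np.diag([np.sqrt(P**2 - Q**2) + R, np.sqrt(P**2 - Q**2) - R])
```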

#### **3.3. Property of the positive definite symmetric matrix** *P*

In this subsection, we consider a property of the positive definite symmetric matrix *P*. The following two theorems lead to a symmetric property of the optimal positive definite symmetric matrix *P*opt [1].

**Theorem 1.** [9] *L*2*-sensitivity S*(*P*) *has the unique global minimum, which is achieved by a positive definite symmetric matrix P*opt *satisfying*

$$\left. \frac{\partial \mathcal{S}(\mathcal{P})}{\partial \mathcal{P}} \right|\_{\mathcal{P} = \mathcal{P}\_{\text{opt}}} = \mathbf{0}. \tag{40}$$

✷

**Theorem 2.** [1] *If a positive definite symmetric matrix P*opt *satisfies*

$$\left. \frac{\partial \mathcal{S}(\mathcal{P})}{\partial \mathcal{P}} \right|\_{\mathcal{P} = \mathcal{P}\_{\text{opt}}} = \mathbf{0} \tag{41}$$

*then the positive definite symmetric matrix* **Σ***P*opt<sup>−1</sup>**Σ** *also satisfies*

$$\frac{\partial S(\mathcal{P})}{\partial \mathcal{P}}\Big|\_{\mathcal{P} = \boldsymbol{\Sigma}\boldsymbol{P}\_{\mathrm{opt}}^{-1}\boldsymbol{\Sigma}} = \mathbf{0} \tag{42}$$

*for the signature matrix* **Σ** *which satisfies Eq. (29).* ✷

The derivative *∂S*(*P*)/*∂P* is given by differentiating *S*(*P*) in Eq. (31) with respect to *P* as

$$\frac{\partial \mathcal{S}(\mathbf{P})}{\partial \mathbf{P}} = (1 + \text{tr}(\mathbf{K}\_0^{(\mathbf{b})} \mathbf{P}^{-1})) \mathbf{W}\_0^{(\mathbf{b})} + 2 \sum\_{i=1}^{\infty} \text{tr}(\mathbf{K}\_i^{(\mathbf{b})} \mathbf{P}^{-1}) \mathbf{W}\_i^{(\mathbf{b})}$$

$$-\mathbf{P}^{-1} \left( (1 + \text{tr}(\mathbf{W}\_0^{(\mathbf{b})} \mathbf{P})) \mathbf{K}\_0^{(\mathbf{b})} + 2 \sum\_{i=1}^{\infty} \text{tr}(\mathbf{W}\_i^{(\mathbf{b})} \mathbf{P}) \mathbf{K}\_i^{(\mathbf{b})} \right) \mathbf{P}^{-1}. \tag{43}$$

From Theorems 1 and 2, it follows that the optimal positive definite symmetric matrix *P*opt has the following symmetric property [1]:

$$P\_{\rm opt} = \Sigma P\_{\rm opt}^{-1} \Sigma \tag{44}$$


for the signature matrix **Σ** which satisfies Eq. (29). We will thus search for the optimal positive definite symmetric matrix *P*opt among the positive definite symmetric matrices *P* which satisfy

$$P = \Sigma P^{-1} \Sigma.\tag{45}$$

#### **3.4. Closed form expression of the positive definite symmetric matrix** *P*

In this subsection, we consider the case of second-order digital filters and give the closed form expression of the positive definite symmetric matrix *P*. When we restrict ourselves to second-order state-space digital filters, we can give a closed form expression of the positive definite symmetric matrix *P* which satisfies Eq. (45), considering the form of the signature matrix **Σ**, which falls into the following two cases:

$$\begin{cases} \Sigma = \pm \text{diag}(1, 1) = \pm I \\ \Sigma = \pm \text{diag}(1, -1). \end{cases} \tag{46}$$

For each case, we next consider the closed form expression of the positive definite symmetric matrix *P* which satisfies Eq. (45).

In the case of **Σ** = ±*I*, Eq. (44) yields *P*opt = *P*opt<sup>−1</sup>, that is, *P*opt = *I*. This means that the minimum *L*2-sensitivity realization is synthesized without any coordinate transformation of the balanced realization, that is, the initial realization. In other words, the minimum *L*2-sensitivity realization is the balanced realization itself:

$$(A_{\rm opt}, b_{\rm opt}, c_{\rm opt}, d_{\rm opt}) = (A_{\mathrm{b}}, b_{\mathrm{b}}, c_{\mathrm{b}}, d_{\mathrm{b}}). \tag{47}$$

Thus, no further discussion of this case is needed, since the minimum *L*2-sensitivity realization is already achieved by the balanced realization.

On the other hand, in case of **Σ** = ±diag(1, −1), the authors have derived the closed form expression of a positive definite symmetric matrix *P* which satisfies Eq. (45) as follows:

$$P = \begin{bmatrix} \cosh(p) & \sinh(p) \\ \sinh(p) & \cosh(p) \end{bmatrix} \tag{48}$$

where *p* is a real scalar variable [12, 13].
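One can verify directly that the parametrization of Eq. (48) satisfies Eq. (45) for **Σ** = ±diag(1, −1): since cosh²(*p*) − sinh²(*p*) = 1, inverting *P* only flips the sign of the off-diagonal entries, and **Σ** flips them back. A minimal numerical sketch:

```python
import numpy as np

Sigma = np.diag([1.0, -1.0])  # signature matrix for the case Sigma = diag(1, -1)

def P_of(p):
    """Eq. (48): hyperbolic parametrization of P; note det P = 1."""
    return np.array([[np.cosh(p), np.sinh(p)],
                     [np.sinh(p), np.cosh(p)]])

p = 0.37                                  # any real scalar
P = P_of(p)
P_sym = Sigma @ np.linalg.inv(P) @ Sigma  # right-hand side of Eq. (45)
```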


#### **3.5. Closed form expression of the** *L*2**-sensitivity** *S*(*P*)

In this subsection, we give the closed form expression of the *L*2-sensitivity *S*(*P*) in Eq. (31) for second-order digital filters. We first express the general Gramians of the balanced realization (*A*b, *b*b, *c*b, *d*b) as follows:

$$\begin{split} \mathbf{K}_{i}^{(\mathbf{b})} &= \frac{1}{2} \left( \mathbf{A}_{\mathbf{b}}^{i} \mathbf{K}_{0}^{(\mathbf{b})} + \mathbf{K}_{0}^{(\mathbf{b})} (\mathbf{A}_{\mathbf{b}}^{T})^{i} \right) \\ &= \frac{1}{2} \left( \mathbf{A}_{\mathbf{b}}^{i} \boldsymbol{\Theta} + \boldsymbol{\Theta} (\mathbf{A}_{\mathbf{b}}^{T})^{i} \right) \end{split} \tag{49}$$

$$\begin{split} \mathbf{W}_{i}^{(\mathbf{b})} &= \frac{1}{2} \left( \mathbf{W}_{0}^{(\mathbf{b})} \mathbf{A}_{\mathbf{b}}^{i} + (\mathbf{A}_{\mathbf{b}}^{T})^{i} \mathbf{W}_{0}^{(\mathbf{b})} \right) \\ &= \frac{1}{2} \left( \boldsymbol{\Theta} \mathbf{A}_{\mathbf{b}}^{i} + (\mathbf{A}_{\mathbf{b}}^{T})^{i} \boldsymbol{\Theta} \right). \end{split} \tag{50}$$

We express the *L*2-sensitivity *S*(*P*) by substituting Eqs. (49) and (50) into Eq. (31) as follows:

$$S(\mathbf{P}) = \text{tr}(\boldsymbol{\Theta}\boldsymbol{P})\text{tr}(\boldsymbol{\Theta}\boldsymbol{P}^{-1}) + \text{tr}(\boldsymbol{\Theta}\boldsymbol{P}) + \text{tr}(\boldsymbol{\Theta}\boldsymbol{P}^{-1}) + 2\sum\_{i=1}^{\infty} \text{tr}(\boldsymbol{\Theta}\boldsymbol{A}\_{\mathbf{b}}^{i}\boldsymbol{P})\text{tr}(\boldsymbol{\Theta}(\boldsymbol{A}\_{\mathbf{b}}^{T})^{i}\boldsymbol{P}^{-1}).\tag{51}$$

The *L*2-sensitivity *S*(*P*) in Eq. (51) can be expressed more simply. Exploiting the symmetric properties of the coefficient matrix *A*b and the matrix *P* given in Eqs. (29) and (45), respectively, we can rewrite the *L*2-sensitivity *S*(*P*) as

$$S(\mathsf{P}) = 2\mathrm{tr}(\boldsymbol{\Theta}\boldsymbol{\mathsf{P}}) - \left(\mathrm{tr}(\boldsymbol{\Theta}\boldsymbol{\mathsf{P}})\right)^2 + 2\sum\_{i=0}^{\infty} \left(\mathrm{tr}(\boldsymbol{\Theta}\boldsymbol{A}\_{\mathsf{b}}^i\boldsymbol{\mathsf{P}})\right)^2. \tag{52}$$

In order to give the closed form expression of the *L*2-sensitivity *S*(*P*) in Eq. (52), it is necessary to derive the closed form expressions of matrices *A*b, **Θ**, and *P*. The closed form expressions of matrices *A*<sup>b</sup> and **Θ** are given in Eqs. (36) and (39), respectively. The closed form expression of the positive definite symmetric matrix *P* is given in Eq. (48). Substituting the closed form expressions of matrices *A*b, **Θ**, and *P* into Eq. (52) gives the closed form expression of the *L*2-sensitivity *S*(*p*) as

$$S(\mathcal{P}) = S(p) = \sum\_{n=-2}^{2} s\_n e^{np}.\tag{53}$$

It is remarkable that Eq. (53) is a simple linear combination of exponential functions which contains no infinite summations. The coefficients *s<sub>n</sub>* are easily computed directly from the transfer function *H*(*z*) [12, 13].

#### **3.6. Synthesis of minimum** *L*2**-sensitivity realizations**

The parameter *p* which minimizes *S*(*p*) in Eq. (53) can be derived by solving the following equation with respect to *p*:

$$\frac{\partial S(p)}{\partial p} = \sum\_{n=-2}^{2} n s\_n e^{np} = 0. \tag{54}$$

http://dx.doi.org/10.5772/52194

Analytical Approach for Synthesis of Minimum *L2*-Sensitivity Realizations for State-Space Digital Filters
Letting *β* = *e<sup>p</sup>* gives

$$\sum\_{n=-2}^{2} n s\_n \beta^n = 0.\tag{55}$$

The above equation is a fourth-degree polynomial equation in *β* with constant coefficients. In 1545, G. Cardano stated in his book *Ars Magna* (translated edition: [16]) that there exists a formula for the solutions of fourth-degree polynomial equations. Therefore, Eq. (55) can be solved analytically. Eq. (55) has four solutions, from which the positive real solution *β*opt = *e*<sup>*p*opt</sup> is adopted to derive the optimal positive definite symmetric matrix *P*opt as

$$\begin{split} \mathbf{P}\_{\text{opt}} &= \begin{bmatrix} \cosh(p\_{\text{opt}}) & \sinh(p\_{\text{opt}}) \\ \sinh(p\_{\text{opt}}) & \cosh(p\_{\text{opt}}) \end{bmatrix} \\ &= \frac{1}{2} \begin{bmatrix} \beta\_{\text{opt}} + \beta\_{\text{opt}}^{-1} & \beta\_{\text{opt}} - \beta\_{\text{opt}}^{-1} \\ \beta\_{\text{opt}} - \beta\_{\text{opt}}^{-1} & \beta\_{\text{opt}} + \beta\_{\text{opt}}^{-1} \end{bmatrix} . \end{split} \tag{56}$$

The diagonalization of the optimal positive definite symmetric matrix *P*opt = *T*opt*T*opt<sup>*T*</sup> is given by

$$\begin{split} \mathbf{P}\_{\mathrm{opt}} &= \begin{bmatrix} \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{bmatrix} \begin{bmatrix} \boldsymbol{\beta}\_{\mathrm{opt}} & \boldsymbol{0} \\ \boldsymbol{0} & \boldsymbol{\beta}\_{\mathrm{opt}}^{-1} \end{bmatrix} \begin{bmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{bmatrix} \\ &\equiv \mathbf{R}^{T} \mathbf{B}\_{\mathrm{opt}} \mathbf{R}. \end{split} \tag{57}$$

Once the optimal positive definite symmetric matrix *P*opt is derived, the optimal coordinate transformation matrix *T*opt is calculated as

$$T\_{\rm opt} = P\_{\rm opt}^{\frac{1}{2}} \mathcal{U} \tag{58}$$

where *U* is an arbitrary orthogonal matrix. Note that the optimal coordinate transformation matrix *T*opt is not unique because of the non-uniqueness of matrix *U*. By letting *U* = *I* in Eq. (58), for instance, one of the optimal coordinate transformation matrices *T*opt = *P*opt<sup>1/2</sup> is given by

$$\begin{split} T\_{\rm opt} &= \mathbf{P}^{\frac{1}{2}}\_{\rm opt} \\ &= \mathbf{R}^{T} \mathbf{B}^{\frac{1}{2}}\_{\rm opt} \mathbf{R} \\ &= \begin{bmatrix} \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{bmatrix} \begin{bmatrix} \beta^{\frac{1}{2}}\_{\rm opt} & 0 \\ 0 & \beta^{-\frac{1}{2}}\_{\rm opt} \end{bmatrix} \begin{bmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{bmatrix} \\ &= \frac{1}{2} \begin{bmatrix} \beta^{\frac{1}{2}}\_{\rm opt} + \beta^{-\frac{1}{2}}\_{\rm opt} & \beta^{\frac{1}{2}}\_{\rm opt} - \beta^{-\frac{1}{2}}\_{\rm opt} \\ \beta^{\frac{1}{2}}\_{\rm opt} - \beta^{-\frac{1}{2}}\_{\rm opt} & \beta^{\frac{1}{2}}\_{\rm opt} + \beta^{-\frac{1}{2}}\_{\rm opt} \end{bmatrix}. \end{split} \tag{59}$$

We can give a geometrical interpretation of the above optimal coordinate transformation matrix *T*opt. In Eq. (59), *R* is an orthogonal matrix, which represents a *π*/4 [rad] rotation of the coordinate axes, and *B*opt is a positive definite diagonal matrix, which represents a simple scaling of each coordinate axis. Eq. (59) shows that the minimum *L*2-sensitivity realization is obtained by the following operations on the coordinate axes of the initial balanced realization: −*π*/4 [rad] rotation, simple scaling, and +*π*/4 [rad] rotation. Finally, the minimum *L*2-sensitivity realization (*A*opt, *b*opt, *c*opt, *d*opt) is synthesized as follows:

$$\left[\begin{array}{c|c} \mathbf{A}\_{\rm opt} & \mathbf{b}\_{\rm opt} \\ \hline \mathbf{c}\_{\rm opt} & d\_{\rm opt} \end{array}\right] = \left[\begin{array}{c|c} \mathbf{T}\_{\rm opt}^{-1} \mathbf{A}\_{\rm b} \mathbf{T}\_{\rm opt} & \mathbf{T}\_{\rm opt}^{-1} \mathbf{b}\_{\rm b} \\ \hline \mathbf{c}\_{\rm b} \mathbf{T}\_{\rm opt} & d\_{\rm b} \end{array}\right] \tag{60}$$

and the minimum *L*2-sensitivity *S*min, which is achieved by *p*opt = log(*β*opt), is expressed by substituting *p*opt into Eq. (53) as

$$S\_{\rm min} = \sum\_{n=-2}^{2} s\_n \beta\_{\rm opt}^n. \tag{61}$$
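Since Eqs. (56)–(59) give *P*opt = *T*opt*T*opt<sup>*T*</sup>, the closed form of *T*opt can be sanity-checked numerically. The following Python sketch is not part of the original derivation, and the value used for *β*opt is purely illustrative:

```python
import math

def p_opt(beta):
    """P_opt of Eq. (56) with p_opt = log(beta)."""
    p = math.log(beta)
    return [[math.cosh(p), math.sinh(p)],
            [math.sinh(p), math.cosh(p)]]

def t_opt(beta):
    """T_opt = R^T B_opt^(1/2) R, the closed form of Eq. (59)."""
    rb, ri = math.sqrt(beta), 1.0 / math.sqrt(beta)
    return [[(rb + ri) / 2, (rb - ri) / 2],
            [(rb - ri) / 2, (rb + ri) / 2]]

beta = 0.8568                      # illustrative beta_opt
T, P = t_opt(beta), p_opt(beta)

# T_opt is symmetric here, so T_opt * T_opt^T is T squared.
TT = [[sum(T[i][k] * T[j][k] for k in range(2)) for j in range(2)]
      for i in range(2)]
for i in range(2):
    for j in range(2):
        assert abs(TT[i][j] - P[i][j]) < 1e-12
```

The assertions confirm the identity *T*opt*T*opt<sup>*T*</sup> = *P*opt elementwise for the chosen *β*.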

#### **3.7. Numerical examples**


We present numerical examples to demonstrate the validity of the proposed method. Consider a second-order digital filter *H*(*z*) with complex conjugate poles, whose transfer function is given by

$$\begin{split}H(z) &= \frac{\alpha}{z - \lambda} + \frac{\alpha^\*}{z - \lambda^\*} + d\\ &= \frac{0.0396 + 0.0793z^{-1} + 0.0396z^{-2}}{1 - 1.3315z^{-1} + 0.49z^{-2}}\end{split} \tag{62}$$

**Figure 2.** Frequency responses and zero-pole configurations of *H*(*z*).

where *λ* = 0.7 exp(*j*0.1*π*), *α* = 1.6657 − *j*6.3055, and *d* = 1. The frequency responses and zero-pole configurations of digital filter *H*(*z*) are shown in Fig. 2. From the transfer function *H*(*z*), parameters *P*, *Q*, and *R* are calculated as

$$P = 0.5068, \ Q = -0.2947, \ R = 0.25. \tag{63}$$


The coefficients *sn*'s are computed as follows:

$$(s\_{-2}, s\_{-1}, s\_0, s\_1, s\_2) = (0.3345, 0.8246, 0.8987, 0.8246, 0.7951). \tag{64}$$

We solve the following fourth-degree polynomial equation to derive the optimal solution *β*opt:

$$\sum\_{n=-2}^{2} n s\_n \beta^n = 0.\tag{65}$$

The fourth-degree polynomial equation above has the following four solutions:

$$
\beta = 0.8568, -0.6960, -0.3396 \pm j0.7682.\tag{66}
$$

We adopt *β*opt = 0.8568, which is a positive real scalar, to derive the optimal positive definite symmetric matrix *P*opt. We can derive the minimum *L*2-sensitivity realization (*A*opt, *b*opt, *c*opt, *d*opt) in closed form as follows:

$$\left[\begin{array}{c|c} \mathbf{A}\_{\rm opt} & \mathbf{b}\_{\rm opt} \\ \hline \mathbf{c}\_{\rm opt} & d\_{\rm opt} \end{array}\right] = \left[\begin{array}{cc|c} 0.7810 & 0.2451 & 0.4751 \\ -0.2451 & 0.5505 & 0.3061 \\ \hline 0.4751 & -0.3061 & 0.0396 \end{array}\right] \tag{67}$$

of which *L*2-sensitivity *S*min is

$$S\_{\rm min} = \sum\_{n=-2}^{2} s\_n \beta\_{\rm opt}^n = 3.6070. \tag{68}$$

**Figure 3.** Minimum *L*2-sensitivity and convergence behaviors of *L*2-sensitivity in conventional methods [1] and [2].

Figure 3 shows the comparison of our proposed method with the iterative methods reported in [1] and [2], where the initial realization is the balanced realization. Our proposed method achieves the minimum *L*2-sensitivity merely by solving a fourth-degree polynomial equation, without iterative calculations, while the methods in [1] and [2] both require many iterations to reach the minimum *L*2-sensitivity. Furthermore, our proposed method guarantees that the *L*2-sensitivity obtained with the conventional methods in [1] and [2] surely converges to the theoretical minimum.
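The numerical example of Eqs. (64)–(68) can be reproduced with a short script. This is only a sketch: it bisects for the positive real root of Eq. (65) rather than applying the closed-form quartic formula, and the bracket [0.5, 1.0] is an assumption chosen by inspecting the sign of the derivative:

```python
# Coefficients s_n of Eq. (64), indexed n = -2, ..., 2
s = {-2: 0.3345, -1: 0.8246, 0: 0.8987, 1: 0.8246, 2: 0.7951}

def dS(beta):
    """Left-hand side of Eq. (65): sum_n n * s_n * beta^n."""
    return sum(n * s[n] * beta ** n for n in s)

# Eq. (55) is a quartic in beta and could be solved in closed form
# (Cardano/Ferrari); here we simply bisect for the positive real root.
lo, hi = 0.5, 1.0           # dS(0.5) < 0 < dS(1.0)
for _ in range(60):
    mid = (lo + hi) / 2
    if dS(mid) < 0:
        lo = mid
    else:
        hi = mid
beta_opt = (lo + hi) / 2    # the positive real solution of Eq. (66)

# Minimum L2-sensitivity, Eq. (68)
S_min = sum(s[n] * beta_opt ** n for n in s)
print(round(beta_opt, 4), round(S_min, 4))   # → 0.8568 3.607
```

The result matches *β*opt = 0.8568 of Eq. (66) and *S*min = 3.6070 of Eq. (68).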

## **4. Analytical solutions to the** *L*2**-sensitivity minimization problem for digital filters with all second-order modes equal**

This section reveals that the *L*2-sensitivity minimization problem can be solved analytically if second-order modes are all equal. Furthermore, we clarify the general expression of the transfer functions of digital filters with all second-order modes equal [14].

## **4.1. Analytical synthesis of the minimum** *L*2**-sensitivity realizations**

We have discovered that there exist some digital filters whose minimum *L*2-sensitivity realization is equal to the balanced realization. Such digital filters satisfy a sufficient condition summarized in the following theorem:

**Theorem 3.** *If all the second-order modes θi*(*i* = 1, ··· , *N*) *of a digital filter H*(*z*) *are equal, then*

$$(A\_{\rm opt}, b\_{\rm opt}, c\_{\rm opt}, d\_{\rm opt}) = (A\_{\rm b}, b\_{\rm b}, c\_{\rm b}, d\_{\rm b}) \tag{69}$$

*that is, the minimum L*2*-sensitivity realization is equal to the balanced realization.* ✷

*Proof:* The general Gramians of the balanced realization are given by Eqs. (49) and (50), respectively. If all the second-order modes *θi*(*i* = 1, ··· , *N*) satisfy

$$\theta\_i = \theta \ (i = 1, \cdots, N) \tag{70}$$


the controllability and observability Gramians are expressed as

$$\mathbf{K}\_0^{(\mathbf{b})} = \mathbf{W}\_0^{(\mathbf{b})} = \text{diag}(\theta, \cdots, \theta) = \theta \mathbf{I}. \tag{71}$$

Substituting Eq. (71) into Eqs. (49) and (50), the general Gramians are given by

$$\mathbf{K}\_{i}^{(\mathbf{b})} = \mathbf{W}\_{i}^{(\mathbf{b})} = \frac{1}{2}\theta(\mathbf{A}\_{\mathbf{b}}^{i} + (\mathbf{A}\_{\mathbf{b}}^{T})^{i}).\tag{72}$$

We can express the general Gramians as *K*<sup>(b)</sup><sub>*i*</sub> = *W*<sup>(b)</sup><sub>*i*</sub> = **Θ**<sub>*i*</sub>, which is defined by

$$\boldsymbol{\Theta}\_{i} = \frac{1}{2}\theta(\mathbf{A}\_{\mathbf{b}}^{i} + (\mathbf{A}\_{\mathbf{b}}^{T})^{i}) \ (i = 0, 1, \cdots). \tag{73}$$

Substituting *<sup>K</sup>*(b) *<sup>i</sup>* <sup>=</sup> *<sup>W</sup>*(b) *<sup>i</sup>* = **Θ***<sup>i</sup>* into Eq. (43) yields

$$\begin{split}\frac{\partial S(\mathbf{P})}{\partial \mathbf{P}} = \ & (1 + \text{tr}(\boldsymbol{\Theta}\_0 \mathbf{P}^{-1})) \boldsymbol{\Theta}\_0 + 2 \sum\_{i=1}^{\infty} \text{tr}(\boldsymbol{\Theta}\_i \mathbf{P}^{-1}) \boldsymbol{\Theta}\_i \\ & - \mathbf{P}^{-1} \left( (1 + \text{tr}(\boldsymbol{\Theta}\_0 \mathbf{P})) \boldsymbol{\Theta}\_0 + 2 \sum\_{i=1}^{\infty} \text{tr}(\boldsymbol{\Theta}\_i \mathbf{P}) \boldsymbol{\Theta}\_i \right) \mathbf{P}^{-1}. \end{split} \tag{74}$$

It is obvious that

$$\left. \frac{\partial S(P)}{\partial P} \right|\_{P=I} = \mathbf{0} \tag{75}$$

which means that the minimum *L*2-sensitivity realization can be synthesized without any coordinate transformation of the balanced realization, that is, the initial realization. Therefore, it is proved that the minimum *L*2-sensitivity realization is equal to the balanced realization. ✷

#### **4.2. Class of digital filters with all second-order modes equal**

In the previous subsection, we revealed that the *L*2-sensitivity minimization problem can be solved analytically if second-order modes are all equal. We next clarify the class of digital filters with all second-order modes equal. We have newly derived a general expression of the transfer function of *N*th-order digital filters with all second-order modes equal.

#### *4.2.1. General expression*


In Ref. [14], we have newly derived a general expression of the transfer function of *N*th-order digital filters with all second-order modes equal.

**Corollary 1.** *Let the second-order modes of an Nth-order digital filter H*(*z*) *be θi*(*i* = 1, ··· , *N*)*. The second-order modes of the transfer function θH*(*z*) *are given by* |*θ*|*θi*(*i* = 1, ··· , *N*)*, where θ is a nonzero real scalar.*

**Theorem 4.** *The transfer function of an Nth-order digital filter H*(*z*) *with all second-order modes equal such as*

$$
\theta\_1 = \theta\_2 = \dots = \theta\_{N-1} = \theta\_N \tag{76}
$$

*can be expressed as the following form:*

$$H(z) = \theta H\_{\text{AP}}(z) + \rho \tag{77}$$

*where θ is a nonzero real scalar, ρ is a real scalar, and HAP*(*z*) *is an Nth-order all-pass digital filter. The second-order modes of the digital filter H*(*z*) *are given by*

$$\theta\_i = |\theta| \ (i = 1, \cdots, N). \tag{78}$$

✷

#### *4.2.2. Frequency transformation*

Furthermore, it is remarkable that the transfer function *H*(*z*) in Eq. (77) is generally obtained by the frequency transformation on a first-order FIR prototype filter using an *N*th-order all-pass digital filter.

**Remark 1.** *An Nth-order digital filter H*(*z*) *in Eq. (77) is obtained by the frequency transformation such as*

$$H(z) = H\_P(z)|\_{z^{-1} \leftarrow H\_{AP}(z)}\tag{79}$$

*where HP*(*z*) = *<sup>θ</sup>z*−<sup>1</sup> <sup>+</sup> *<sup>ρ</sup> is a first-order prototype FIR digital filter and HAP*(*z*) *is an Nth-order all-pass digital filter.* ✷

The variable substitution in Eq. (79) represents the frequency transformation, where *H*P(*z*) is the prototype digital filter. Block diagrams of the prototype digital filter *H*P(*z*) and the transformed digital filter *H*(*z*) are shown in Fig. 4. These figures show that a block diagram of the transformed digital filter *H*(*z*) can be obtained by simply substituting the all-pass digital filter *H*AP(*z*) for the unit delay *z*<sup>−1</sup>. Theorem 4 and Remark 1 give us the class of digital filters with all second-order modes equal.

**Figure 4.** Block diagrams of prototype digital filter *H*P(*z*) and transformed digital filter *H*(*z*).

#### **4.3. Examples of digital filters with all second-order modes equal**

There are many types of digital filters with all second-order modes equal. We can design various digital filters by setting variables *θ*, *ρ*, and *H*AP(*z*).

#### *4.3.1. The unit delay*

The unit delay is the simplest example of a digital filter with all second-order modes equal. It is obvious that letting *θ* = 1, *ρ* = 0, and *H*AP(*z*) = *z*<sup>−1</sup> in Eq. (77) yields the unit delay *z*<sup>−1</sup>.

#### *4.3.2. First-order digital filters*

Any first-order digital filter can be expressed in the form of Eq. (77). Consider a first-order IIR digital filter given by

$$H\_{IIR}(z) = \frac{b\_0 + b\_1 z^{-1}}{1 + a\_1 z^{-1}} \tag{80}$$


where *b*<sup>0</sup> and *b*<sup>1</sup> are numerator coefficients and *a*<sup>1</sup> is a denominator coefficient. One can easily show that Eq. (80) can be rewritten in the form of Eq. (77), where

$$\theta = \frac{b\_1 - a\_1 b\_0}{1 - a\_1^2}, \; \rho = \frac{b\_0 - a\_1 b\_1}{1 - a\_1^2}, \; H\_{\rm AP}(z) = \frac{a\_1 + z^{-1}}{1 + a\_1 z^{-1}}.\tag{81}$$
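As an illustration, Eq. (81) can be verified numerically. The sketch below uses the coefficients of the first-order IIR filter of Eq. (87) in Section 4.4.2 (*b*0 = *b*1 = 0.25, *a*1 = −0.5) and checks both the decomposition and the all-pass property on the unit circle:

```python
import cmath

# First-order IIR coefficients; these values reproduce Eq. (87)
b0, b1, a1 = 0.25, 0.25, -0.5

# Eq. (81): decomposition H(z) = theta * H_AP(z) + rho
theta = (b1 - a1 * b0) / (1 - a1 ** 2)
rho = (b0 - a1 * b1) / (1 - a1 ** 2)

def H(z):
    return (b0 + b1 / z) / (1 + a1 / z)

def H_AP(z):
    return (a1 + 1 / z) / (1 + a1 / z)

# Check equality and the all-pass property at a few unit-circle points
for w in (0.1, 1.0, 2.5):
    z = cmath.exp(1j * w)
    assert abs(H(z) - (theta * H_AP(z) + rho)) < 1e-12
    assert abs(abs(H_AP(z)) - 1.0) < 1e-12   # all-pass: |H_AP| = 1
print(theta, rho)  # → 0.5 0.5
```

The second-order mode of this filter is |*θ*| = 0.5, consistent with the example in Section 4.4.2.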

#### *4.3.3. All-pass digital filters*

It is obvious that all-pass digital filters are included in the class of digital filters expressed as Eq. (77). The transfer function *H*(*z*) in Eq. (77) is an all-pass digital filter when we let *θ* = 1 and *ρ* = 0.

#### *4.3.4. Multi-notch comb digital filters*

The transfer function of an *N*th-order multi-notch comb digital filter is given by

$$H\_{\rm MN}(z) = \frac{1+a}{2} \frac{1-z^{-N}}{1-az^{-N}}.\tag{82}$$

This filter has *N* notches at the frequency 2*πk*/*N*[rad] for *k* = 1, ··· , *N*. One can easily show that Eq. (82) can be rewritten as the form of Eq. (77) where

$$\theta = \frac{1}{2}, \; \rho = \frac{1}{2}, \; H\_{\rm AP}(z) = \frac{a - z^{-1}}{1 - az^{-1}}. \tag{83}$$
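Eqs. (82) and (83) admit a quick numerical check; the values *a* = 0.9 and *N* = 4 below are illustrative choices, not taken from the chapter:

```python
import cmath
import math

# Multi-notch comb digital filter of Eq. (82); a and N are illustrative
a, N = 0.9, 4

def H_MN(z):
    return (1 + a) / 2 * (1 - z ** -N) / (1 - a * z ** -N)

def H_AP(z):
    # Nth-order all-pass digital filter of Eq. (83), written in z^-N
    return (a - z ** -N) / (1 - a * z ** -N)

theta, rho = 0.5, 0.5            # Eq. (83)

# The filter has N notches at the frequencies 2*pi*k/N [rad]
for k in range(1, N + 1):
    z = cmath.exp(1j * 2 * math.pi * k / N)
    assert abs(H_MN(z)) < 1e-9

# Decomposition H_MN(z) = theta * H_AP(z) + rho, the form of Eq. (77)
for w in (0.3, 1.1, 2.0):
    z = cmath.exp(1j * w)
    assert abs(H_MN(z) - (theta * H_AP(z) + rho)) < 1e-12
```

The assertions confirm both the notch locations and the all-pass decomposition for this parameter choice.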

#### **4.4. Numerical examples**


This subsection gives numerical examples of synthesis of the minimum *L*2-sensitivity realizations for various types of digital filters with all second-order modes equal.

#### *4.4.1. First-order FIR digital filters*

Consider a first-order FIR digital filter *H*FIR(*z*) given by

$$H\_{\rm FIR}(z) = 0.5 + 0.5z^{-1} \tag{84}$$

of which frequency magnitude and phase responses are shown in Fig. 5 (a). The second-order mode of the digital filter *H*FIR(*z*) is *θ* = 0.5. The balanced realization (*A*b, *b*b, *c*b, *d*b), which is equal to the minimum *L*2-sensitivity realization, of *H*FIR(*z*) is derived as

$$
\begin{bmatrix}
\mathbf{A}\_{\rm b} & \mathbf{b}\_{\rm b} \\
\hline
\mathbf{c}\_{\rm b} & d\_{\rm b}
\end{bmatrix} = \begin{bmatrix}
0 & 0.7071 \\
\hline
0.7071 & 0.5
\end{bmatrix} \tag{85}
$$

and the controllability Gramian *K*<sub>0</sub><sup>(b)</sup> and the observability Gramian *W*<sub>0</sub><sup>(b)</sup> are calculated as

$$\mathbf{K}\_0^{(\rm b)} = \mathbf{W}\_0^{(\rm b)} = 0.5. \tag{86}$$
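Since the realization in Eq. (85) has a single state with *A*b = 0, the Lyapunov equations for the Gramians reduce to scalars; a minimal Python check of Eqs. (85) and (86):

```python
# Balanced realization of H_FIR(z) = 0.5 + 0.5 z^-1, Eq. (85).
A, b, c, d = 0.0, 0.7071, 0.7071, 0.5

# One-state Lyapunov equations K = A^2 K + b^2 and W = A^2 W + c^2.
K = b**2 / (1.0 - A**2)
W = c**2 / (1.0 - A**2)

assert abs(K - 0.5) < 1e-4 and abs(W - 0.5) < 1e-4   # Eq. (86)
# The realization reproduces the filter: h(0) = d, h(1) = c*b.
assert abs(d - 0.5) < 1e-12 and abs(c * b - 0.5) < 1e-4
```

The tolerances reflect the four-digit rounding of 1/√2 ≈ 0.7071.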

#### *4.4.2. First-order IIR digital filters*

Consider a first-order IIR digital filter *H*IIR(*z*) given by

$$H\_{\rm IIR}(z) = \frac{0.25 + 0.25z^{-1}}{1 - 0.5z^{-1}}\tag{87}$$

of which frequency magnitude and phase responses are shown in Fig. 5 (b). The second-order mode of the digital filter *H*IIR(*z*) is *θ* = 0.5. The balanced realization (*A*b, *b*b, *c*b, *d*b), which is equal to the minimum *L*2-sensitivity realization, of *H*IIR(*z*) is derived as

$$
\begin{bmatrix}
\mathbf{A}\_{\rm b} & \mathbf{b}\_{\rm b} \\
\hline
\mathbf{c}\_{\rm b} & d\_{\rm b}
\end{bmatrix} = \begin{bmatrix}
0.5 & 0.6124 \\
\hline
0.6124 & 0.25
\end{bmatrix} \tag{88}
$$


Analytical Approach for Synthesis of Minimum *L2*-Sensitivity Realizations for State-Space Digital Filters


and the controllability Gramian *K*<sub>0</sub><sup>(b)</sup> and the observability Gramian *W*<sub>0</sub><sup>(b)</sup> are calculated as

$$\mathbf{K}\_0^{(\rm b)} = \mathbf{W}\_0^{(\rm b)} = 0.5. \tag{89}$$
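The Gramians in Eq. (89) follow from the scalar Lyapunov equations *K* = *A*²*K* + *b*² and *W* = *A*²*W* + *c*²; a short Python check of Eqs. (88) and (89):

```python
# Balanced realization of Eq. (88) for H_IIR(z) of Eq. (87).
A, b, c, d = 0.5, 0.6124, 0.6124, 0.25

# Scalar Lyapunov equations K = A^2 K + b^2 and W = A^2 W + c^2.
K = b**2 / (1.0 - A**2)
W = c**2 / (1.0 - A**2)
assert abs(K - 0.5) < 1e-3 and abs(W - 0.5) < 1e-3   # Eq. (89)

# The realization reproduces H_IIR: h(0) = d, h(n) = c A^(n-1) b.
h = [d] + [c * A**(n - 1) * b for n in range(1, 6)]
# Impulse response of (0.25 + 0.25 z^-1)/(1 - 0.5 z^-1) from its
# difference equation y(n) = 0.5 y(n-1) + 0.25 u(n) + 0.25 u(n-1).
g = [0.25]
prev = 0.25
for n in range(1, 6):
    prev = 0.5 * prev + (0.25 if n == 1 else 0.0)
    g.append(prev)
assert all(abs(p - q) < 1e-3 for p, q in zip(h, g))
```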

#### *4.4.3. All-pass digital filters*

**Figure 5.** Frequency magnitude and phase responses of digital filters with all second-order modes equal: (a) first-order FIR digital filter *H*FIR(*z*), (b) first-order IIR digital filter *H*IIR(*z*), (c) fourth-order all-pass digital filter *H*AP(*z*), (d) fourth-order multi-notch comb digital filter *H*MN(*z*).

Consider a fourth-order all-pass digital filter *H*AP(*z*) given by

$$H\_{\rm AP}(z) = \frac{0.5184 - 1.9805z^{-1} + 3.3350z^{-2} - 2.7507z^{-3} + z^{-4}}{1 - 2.7507z^{-1} + 3.3350z^{-2} - 1.9805z^{-3} + 0.5184z^{-4}}\tag{90}$$

of which poles *λp*(*p* = 1, 2, 3, 4) are given by

$$\begin{cases} \lambda\_1 = 0.9 \exp(j0.2\pi), \ \lambda\_2 = 0.9 \exp(-j0.2\pi), \\ \lambda\_3 = 0.8 \exp(j0.2\pi), \ \lambda\_4 = 0.8 \exp(-j0.2\pi) \end{cases} \tag{91}$$

and of which frequency magnitude and phase responses are shown in Fig. 5 (c). The second-order modes *θ<sup>i</sup>* (*i* = 1, 2, 3, 4) of the all-pass digital filter *H*AP(*z*) are given by

$$(\theta\_1, \theta\_2, \theta\_3, \theta\_4) = (1, 1, 1, 1). \tag{92}$$

The balanced realization (*A*b, *b*b, *c*b, *d*b), which is equal to the minimum *L*2-sensitivity realization, of *H*AP(*z*) is derived as

$$
\begin{bmatrix}
\mathbf{A}\_{\rm b} & \mathbf{b}\_{\rm b} \\
\hline
\mathbf{c}\_{\rm b} & d\_{\rm b}
\end{bmatrix} = \begin{bmatrix}
0.8144 & -0.1106 & 0.2499 & -0.5039 & -0.0903 \\
0.1599 & 0.4698 & -0.7114 & -0.1105 & -0.4853 \\
-0.1616 & 0.2997 & 0.6211 & 0.1062 & -0.6978 \\
0.5318 & 0.0448 & 0.0006 & 0.8453 & 0.0252 \\
\hline
0.0481 & 0.8217 & 0.2137 & -0.0894 & 0.5184
\end{bmatrix} \tag{93}
$$

and the controllability Gramian *K*<sub>0</sub><sup>(b)</sup> and the observability Gramian *W*<sub>0</sub><sup>(b)</sup> are calculated as

$$\mathbf{K}\_0^{(\mathbf{b})} = \mathbf{W}\_0^{(\mathbf{b})} = \text{diag}(1, 1, 1, 1). \tag{94}$$
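The numerator of Eq. (90) is the mirror image of its denominator, which is exactly the all-pass property behind the second-order modes in Eq. (92); a small Python sketch verifying |*H*AP(*e*<sup>*jω*</sup>)| = 1:

```python
import cmath

# Denominator of H_AP(z) in Eq. (90); the numerator is its mirror image,
# which is exactly the all-pass property.
den = [1.0, -2.7507, 3.3350, -1.9805, 0.5184]
num = den[::-1]

def poly(coeffs, z_inv):
    # Evaluate sum_k coeffs[k] * z^-k at the given z^-1.
    return sum(c * z_inv**k for k, c in enumerate(coeffs))

# |H_AP(e^{jw})| on a grid of frequencies.
mags = []
for k in range(16):
    z_inv = cmath.exp(-1j * cmath.pi * k / 16)
    mags.append(abs(poly(num, z_inv) / poly(den, z_inv)))

assert all(abs(m - 1.0) < 1e-9 for m in mags)
```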

#### *4.4.4. Multi-notch comb digital filters*


Consider a fourth-order multi-notch comb digital filter *H*MN(*z*) given by

$$H\_{\rm MN}(z) = \frac{0.9073 - 0.9073z^{-4}}{1 - 0.8145z^{-4}}\tag{95}$$

of which poles *λp*(*p* = 1, 2, 3, 4) are given by

$$\begin{cases} \lambda\_1 = 0.95, \, \lambda\_2 = -0.95, \\ \lambda\_3 = j0.95, \, \lambda\_4 = -j0.95 \end{cases} \tag{96}$$

and of which frequency magnitude and phase responses are shown in Fig. 5 (d). The second-order modes *θ<sup>i</sup>* (*i* = 1, 2, 3, 4) of the multi-notch comb digital filter *H*MN(*z*) are given by

$$(\theta\_1, \theta\_2, \theta\_3, \theta\_4) = (0.5, 0.5, 0.5, 0.5). \tag{97}$$



The balanced realization (*A*b, *b*b, *c*b, *d*b), which is equal to the minimum *L*2-sensitivity realization, of *H*MN(*z*) is derived as

$$
\begin{bmatrix}
\mathbf{A}\_{\rm b} & \mathbf{b}\_{\rm b} \\
\hline
\mathbf{c}\_{\rm b} & d\_{\rm b}
\end{bmatrix} = \begin{bmatrix}
0 & 0 & 0 & 0.8145 & 0.4102 \\
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
\hline
0 & 0 & 0 & -0.4102 & 0.9073
\end{bmatrix} \tag{98}
$$

and the controllability Gramian *K*<sub>0</sub><sup>(b)</sup> and the observability Gramian *W*<sub>0</sub><sup>(b)</sup> are calculated as

$$\mathbf{K}\_0^{(\mathbf{b})} = \mathbf{W}\_0^{(\mathbf{b})} = \text{diag}(0.5, 0.5, 0.5, 0.5). \tag{99}$$
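A quick Python consistency check that the realization in Eq. (98) reproduces the impulse response of *H*MN(*z*) in Eq. (95); the tolerance reflects the four-digit rounding of the printed coefficients:

```python
# Balanced realization of Eq. (98) for the multi-notch comb filter of Eq. (95).
A = [[0.0, 0.0, 0.0, 0.8145],
     [1.0, 0.0, 0.0, 0.0],
     [0.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.0]]
b = [0.4102, 0.0, 0.0, 0.0]
c = [0.0, 0.0, 0.0, -0.4102]
d = 0.9073

# Impulse response from the state-space recursion
# x(n+1) = A x(n) + b u(n), y(n) = c x(n) + d u(n).
x = [0.0] * 4
h_ss = []
for n in range(40):
    u = 1.0 if n == 0 else 0.0
    h_ss.append(sum(ci * xi for ci, xi in zip(c, x)) + d * u)
    x = [sum(A[i][j] * x[j] for j in range(4)) + b[i] * u for i in range(4)]

# Impulse response of H_MN(z) = (0.9073 - 0.9073 z^-4)/(1 - 0.8145 z^-4)
# from its difference equation.
h_tf = []
for n in range(40):
    y = 0.9073 * ((1.0 if n == 0 else 0.0) - (1.0 if n == 4 else 0.0))
    if n >= 4:
        y += 0.8145 * h_tf[n - 4]
    h_tf.append(y)

assert max(abs(p - q) for p, q in zip(h_ss, h_tf)) < 1e-3
```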

## **5. Absence of limit cycles in the minimum** *L*2**-sensitivity realizations**

This section proves the absence of limit cycles of the minimum *L*2-sensitivity realization from the viewpoint of the controllability and observability Gramians. The minimum *L*2-sensitivity realizations have freedom for orthogonal transformations. In other words, minimum *L*2-sensitivity realizations are not unique. We select the minimum *L*2-sensitivity realization without limit cycles among these minimum *L*2-sensitivity realizations. The controllability and observability Gramians of the selected minimum *L*2-sensitivity realization satisfy a sufficient condition for the absence of limit cycles [11].

## **5.1. Theoretical proof of the absence of limit cycles**

For high-order digital filters, we synthesize the minimum *L*2-sensitivity realization by the successive approximation methods in [1] or [2], for example. For second-order digital filters, we can synthesize the minimum *L*2-sensitivity realization by the closed-form solutions proposed in Section 3. In both cases, we can construct the minimum *L*2-sensitivity realization *without limit cycles*.

We begin by reviewing the procedure to synthesize the minimum *L*2-sensitivity realization. We solve the *L*2-sensitivity minimization problem in (32), adopting the balanced realization (*A*b, *b*b, *c*b, *d*b) as an initial realization, and obtain the optimal positive definite symmetric matrix *P*opt. In the case of high-order digital filters, we can derive *P*opt by the successive approximation methods in [1] or [2], for example. In the case of second-order digital filters, we can derive *P*opt analytically as proposed in Section 3.

We can give the diagonalization of the matrix *P*opt as follows:

$$P\_{\rm opt} = \mathbf{R}^T \mathbf{B}\_{\rm opt} \mathbf{R}.\tag{100}$$

Since the matrix *P*opt is positive definite symmetric, it can be diagonalized by an orthogonal matrix *R*, and *B*opt is a positive definite diagonal matrix. The optimal coordinate transformation matrix *T*opt is given by

$$T\_{\rm opt} = \mathbf{P}\_{\rm opt}^{\frac{1}{2}} \mathbf{U}$$

$$= \mathbf{R}^T \mathbf{B}\_{\rm opt}^{\frac{1}{2}} \mathbf{R} \mathbf{U}. \tag{101}$$

In the above expression, *U* is an arbitrary orthogonal matrix. This means that infinitely many minimum *L*2-sensitivity realizations exist for a given digital filter *H*(*z*); the minimum *L*2-sensitivity realizations have freedom of orthogonal transformations. We show that the minimum *L*2-sensitivity realization does not generate limit cycles if we specify the orthogonal matrix as *U* = *R*<sup>*T*</sup>, which yields

$$
\tilde{T}\_{\rm opt} = \mathbf{R}^T \mathbf{B}\_{\rm opt}^{\frac{1}{2}}.\tag{102}
$$
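The construction of Eqs. (100)-(102) can be sketched for a 2 × 2 case. The matrix *P*opt below is an arbitrary illustrative choice, not a value from this chapter:

```python
import math

# Illustrative symmetric positive definite P_opt (NOT a value from the
# chapter; any such 2 x 2 matrix works).
p11, p12, p22 = 2.0, 1.0, 2.0

# Closed-form eigendecomposition P_opt = R^T B_opt R of Eq. (100):
# the columns of V are eigenvectors and R = V^T.
phi = 0.5 * math.atan2(2.0 * p12, p11 - p22)
c, s = math.cos(phi), math.sin(phi)
lam1 = p11 * c * c + 2.0 * p12 * c * s + p22 * s * s
lam2 = p11 * s * s - 2.0 * p12 * c * s + p22 * c * c
V = [[c, -s], [s, c]]

# T_opt~ = R^T B_opt^{1/2} = V diag(sqrt(lam1), sqrt(lam2)), Eq. (102).
s1, s2 = math.sqrt(lam1), math.sqrt(lam2)
T = [[V[0][0] * s1, V[0][1] * s2],
     [V[1][0] * s1, V[1][1] * s2]]

# Sanity check: T~ T~^T = R^T B_opt R = P_opt (the choice U = R^T in
# Eq. (101) leaves P_opt = T~ T~^T invariant).
TTt = [[sum(T[i][k] * T[j][k] for k in range(2)) for j in range(2)]
       for i in range(2)]
assert abs(TTt[0][0] - p11) < 1e-9
assert abs(TTt[0][1] - p12) < 1e-9
assert abs(TTt[1][1] - p22) < 1e-9
```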

**Theorem 5.** *The minimum L*2*-sensitivity realization* (*A*˜ opt, *b*˜ opt, ˜*c*opt, ˜*d*opt)*, obtained by the coordinate transformation by T*˜ *opt such as*

$$(\tilde{\mathbf{A}}\_{\rm opt}, \tilde{\mathbf{b}}\_{\rm opt}, \tilde{\mathbf{c}}\_{\rm opt}, \tilde{d}\_{\rm opt}) = (\tilde{T}\_{\rm opt}^{-1} \mathbf{A}\_{\rm b} \tilde{T}\_{\rm opt},\; \tilde{T}\_{\rm opt}^{-1} \mathbf{b}\_{\rm b},\; \mathbf{c}\_{\rm b} \tilde{T}\_{\rm opt},\; d\_{\rm b}) \tag{103}$$

*does not generate limit cycles.* ✷


*Proof:* Under the coordinate transformation by *T*˜ opt in Eq. (102), the controllability Gramian *K*˜<sub>0</sub><sup>(opt)</sup> and the observability Gramian *W*˜<sub>0</sub><sup>(opt)</sup> of the minimum *L*2-sensitivity realization (*A*˜ opt, *b*˜ opt, ˜*c*opt, ˜*d*opt) are expressed as

$$\begin{split} \tilde{\mathbf{K}}\_{0}^{\text{(opt)}} &= \tilde{T}\_{\text{opt}}^{-1} \mathbf{K}\_{0}^{\text{(b)}} \tilde{T}\_{\text{opt}}^{-T} \\ &= \mathbf{B}\_{\text{opt}}^{-\frac{1}{2}} \mathbf{R} \boldsymbol{\Theta} \mathbf{R}^{T} \mathbf{B}\_{\text{opt}}^{-\frac{1}{2}} \end{split} \tag{104}$$

$$\begin{split} \tilde{\mathbf{W}}\_{0}^{(\text{opt})} &= \tilde{\mathbf{T}}\_{\text{opt}}^{T} \mathbf{W}\_{0}^{(\text{b})} \tilde{\mathbf{T}}\_{\text{opt}} \\ &= \mathbf{B}\_{\text{opt}}^{\frac{1}{2}} \mathbf{R} \boldsymbol{\Theta} \mathbf{R}^{T} \mathbf{B}\_{\text{opt}}^{\frac{1}{2}} \end{split} \tag{105}$$

where **Θ** = diag(*θ*1, ··· , *θN*). From Eqs. (104) and (105), we can derive the relation between the controllability and observability Gramians as follows:

$$\tilde{\mathbf{W}}\_0^{(\text{opt})} = \mathbf{B}\_{\text{opt}} \tilde{\mathbf{K}}\_0^{(\text{opt})} \mathbf{B}\_{\text{opt}}.\tag{106}$$

**Figure 6.** Synthesis of the minimum *L*2-sensitivity realization which does not generate limit cycles.



Eq. (106) is equivalent to a sufficient condition for the absence of limit cycles proposed in Ref. [6]. Therefore, the minimum *L*2-sensitivity realization (*A*˜ opt, *b*˜ opt, ˜*c*opt, ˜*d*opt) does not generate limit cycles. ✷

Theorem 5 shows that we can synthesize the minimum *L*2-sensitivity realization without limit cycles by choosing an appropriate orthogonal matrix *U*. Fig. 6 shows the synthesis procedure of the minimum *L*2-sensitivity realization which does not generate limit cycles. The coefficient matrices of the minimum *L*2-sensitivity realization without limit cycles (*A*˜ opt, *b*˜ opt, ˜*c*opt, ˜*d*opt) are given by

$$
\begin{bmatrix}
\tilde{\mathbf{A}}\_{\rm opt} & \tilde{\mathbf{b}}\_{\rm opt} \\
\hline
\tilde{\mathbf{c}}\_{\rm opt} & \tilde{d}\_{\rm opt}
\end{bmatrix} = \begin{bmatrix}
\mathbf{B}\_{\rm opt}^{-\frac{1}{2}} \mathbf{R} \mathbf{A}\_{\rm b} \mathbf{R}^T \mathbf{B}\_{\rm opt}^{\frac{1}{2}} & \mathbf{B}\_{\rm opt}^{-\frac{1}{2}} \mathbf{R} \mathbf{b}\_{\rm b} \\
\hline
\mathbf{c}\_{\rm b} \mathbf{R}^T \mathbf{B}\_{\rm opt}^{\frac{1}{2}} & d\_{\rm b}
\end{bmatrix}. \tag{107}
$$

#### **5.2. Numerical examples**

We present numerical examples to demonstrate the validity of our proposed method. We synthesize the minimum *L*2-sensitivity realizations of second-order and fourth-order digital filters which do not generate limit cycles.

**Figure 7.** Frequency response of digital filter *H*(*z*) in Eq. (108).


**Figure 8.** Zero-input responses of *H*(*z*) in Eq. (108): (a) minimum *L*2-sensitivity realization, (b) direct form II.

#### *5.2.1. Second-order digital filters*

Consider a second-order narrow-band band-pass digital filter *H*(*z*) given by

$$H(z) = \frac{0.0316 + 0.0602z^{-1} + 0.0316z^{-2}}{1 - 1.4562z^{-1} + 0.81z^{-2}}.\tag{108}$$

The poles of the transfer function *H*(*z*) in Eq. (108) are 0.9 exp(±*j*0.2*π*), which are very close to the unit circle. The frequency response of the digital filter *H*(*z*) in Eq. (108) is shown in Fig. 7. The coefficient matrices of the minimum *L*2-sensitivity realization which is free of limit cycles are derived by Eq. (107) as follows:

$$
\begin{bmatrix}
\tilde{\mathbf{A}}\_{\rm opt} & \tilde{\mathbf{b}}\_{\rm opt} \\
\hline
\tilde{\mathbf{c}}\_{\rm opt} & \tilde{d}\_{\rm opt}
\end{bmatrix} = \begin{bmatrix}
0.7281 & 0.5229 & 0.4146 \\
-0.5351 & 0.7281 & -0.1282 \\
\hline
0.1282 & -0.4146 & 0.0316
\end{bmatrix}. \tag{109}
$$

The controllability Gramian *K*˜<sub>0</sub><sup>(opt)</sup> and the observability Gramian *W*˜<sub>0</sub><sup>(opt)</sup> are given as follows:

$$\tilde{\mathbf{K}}\_0^{(\text{opt})} = \begin{bmatrix} 0.5100 & -0.0870 \\ -0.0870 & 0.4901 \end{bmatrix} \tag{110}$$


$$\tilde{\mathbf{W}}\_0^{(\text{opt})} = \begin{bmatrix} 0.4901 & -0.0870 \\ -0.0870 & 0.5100 \end{bmatrix}. \tag{111}$$

We have to note that the controllability Gramian *K*˜<sub>0</sub><sup>(opt)</sup> and the observability Gramian *W*˜<sub>0</sub><sup>(opt)</sup> satisfy the sufficient condition for the absence of limit cycles given in Eq. (106) with

$$\mathbf{B}\_{\rm opt} = \text{diag}(0.9803, 1.0201). \tag{112}$$

Therefore, (*A*˜ opt, *b*˜ opt, ˜*c*opt, ˜*d*opt) is the minimum *L*2-sensitivity realization without limit cycles.
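Using the printed values of Eqs. (110)-(112), the condition of Eq. (106) can be checked entry by entry, since for a diagonal *B*opt it reads *W*˜<sub>*ij*</sub> = *B*<sub>*i*</sub>*K*˜<sub>*ij*</sub>*B*<sub>*j*</sub>; a minimal Python check:

```python
# Gramians of Eqs. (110) and (111) and B_opt of Eq. (112).
K = [[0.5100, -0.0870], [-0.0870, 0.4901]]
W = [[0.4901, -0.0870], [-0.0870, 0.5100]]
B = [0.9803, 1.0201]

# For diagonal B_opt, Eq. (106) reads W_ij = B_i * K_ij * B_j entrywise.
for i in range(2):
    for j in range(2):
        assert abs(W[i][j] - B[i] * K[i][j] * B[j]) < 1e-3
```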

We demonstrate the absence of limit cycles in the minimum *L*2-sensitivity realization by observing its zero-input response. We calculate the zero-input responses of the minimum *L*2-sensitivity realization and the direct form II, setting the initial state as *x*(0) = [0.8 − 0.8]<sup>*T*</sup>. We let the dynamic range of signals be [−1, 1) and adopt two's complement as the overflow characteristic. The zero-input responses are shown in Figs. 8(a) and 8(b). We assume that each filter coefficient and signal have a 16-bit fixed-point representation, of which the lower 14 bits are fractional bits. In this numerical example, overflow of the state variables occurs in both cases. Since the digital filter *H*(*z*) in Eq. (108) is stable, the effect of the overflow should decay. For the minimum *L*2-sensitivity realization synthesized by our proposed method, the state variables *x*1(*n*) and *x*2(*n*) converge to zero after the overflow, as shown in Fig. 8(a). Therefore, there are no limit cycles. On the other hand, for the direct form II, a large-amplitude autonomous oscillation is observed, as shown in Fig. 8(b). Therefore, the direct form II generates limit cycles.
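The convergence in Fig. 8(a) can be reproduced in simulation. The sketch below deviates from the chapter's setup in one respect: it uses magnitude truncation and saturation overflow, both of which are nonexpansive, instead of two's-complement wrap-around. Since ‖*A*˜ opt‖<sub>2</sub> ≈ 0.906 < 1 for the matrix of Eq. (109), the state norm then contracts at every step and the quantized state reaches exactly zero:

```python
import math

# State matrix of the minimum L2-sensitivity realization, Eq. (109).
A = [[0.7281, 0.5229], [-0.5351, 0.7281]]

Q = 2.0**-14  # quantization step: 14 fractional bits

def quantize(v):
    # Magnitude truncation: round toward zero onto the grid.
    return Q * math.trunc(v / Q)

def saturate(v):
    # Saturation overflow (a nonexpansive substitute for the chapter's
    # two's-complement wrap-around characteristic).
    return max(-1.0, min(v, 1.0 - Q))

# Zero-input response from x(0) = [0.8, -0.8].
x = [0.8, -0.8]
for _ in range(300):
    x = [saturate(quantize(A[i][0] * x[0] + A[i][1] * x[1]))
         for i in range(2)]

# ||x(n)|| shrinks by at least ||A||_2 per step, so the state dies out:
# no zero-input limit cycles under this arithmetic.
assert x == [0.0, 0.0]
```

With wrap-around overflow the decay argument needs the Gramian condition of Eq. (106) rather than plain norm contraction, which is exactly what Theorem 5 supplies.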

#### *5.2.2. High-order digital filters*

We can demonstrate the validity of the proposed method also for high-order digital filters. Consider a fourth-order band-pass digital filter *H*(*z*) given by

$$H(z) = \frac{0.0178 - 0.0252z^{-1} + 0.0173z^{-2} - 0.0252z^{-3} + 0.0178z^{-4}}{1 - 2.6977z^{-1} + 3.5410z^{-2} - 2.3340z^{-3} + 0.7497z^{-4}}\tag{113}$$

The frequency response of the digital filter *H*(*z*) in Eq. (113) is shown in Fig. 9. We obtain the limit-cycle-free minimum *L*2-sensitivity realization (*A*˜ opt, *b*˜ opt, ˜*c*opt, ˜*d*opt) by the successive approximation method:


$$
\begin{bmatrix}
\tilde{\mathbf{A}}\_{\rm opt} & \tilde{\mathbf{b}}\_{\rm opt} \\
\hline
\tilde{\mathbf{c}}\_{\rm opt} & \tilde{d}\_{\rm opt}
\end{bmatrix} = \begin{bmatrix}
0.6028 & 0.6394 & 0.1512 & 0.0655 & 0.0344 \\
-0.6360 & 0.7461 & -0.0655 & -0.0297 & -0.0153 \\
-0.0806 & -0.0283 & 0.6028 & 0.6360 & 0.3950 \\
0.0283 & 0.0118 & -0.6394 & 0.7461 & -0.1423 \\
\hline
0.3950 & 0.1423 & 0.0344 & 0.0153 & 0.0178
\end{bmatrix}. \tag{114}
$$

The controllability Gramian *K*˜<sub>0</sub><sup>(opt)</sup> and the observability Gramian *W*˜<sub>0</sub><sup>(opt)</sup> are given as follows:


$$
\tilde{\mathbf{K}}\_0^{(\text{opt})} = \begin{bmatrix}
0.3531 & -0.0004 & 0.2470 & 0.0155 \\
-0.0004 & 0.3563 & -0.0157 & 0.2470 \\
0.2470 & -0.0157 & 0.5309 & 0.0006 \\
0.0155 & 0.2470 & 0.0006 & 0.5264
\end{bmatrix} \tag{115}
$$

$$
\tilde{\mathbf{W}}\_0^{(\text{opt})} = \begin{bmatrix}
0.5309 & -0.0006 & 0.2470 & 0.0157 \\
-0.0006 & 0.5264 & -0.0155 & 0.2470 \\
0.2470 & -0.0155 & 0.3531 & 0.0004 \\
0.0157 & 0.2470 & 0.0004 & 0.3563
\end{bmatrix} \tag{116}
$$

Note that the controllability Gramian $\tilde{K}_{0}^{(\text{opt})}$ and the observability Gramian $\tilde{W}_{0}^{(\text{opt})}$ satisfy the sufficient condition for the absence of limit cycles given in Eq. (106) with

$$\mathcal{B}\_{\rm opt} = \text{diag}(1.2261, 1.2155, 0.8156, 0.8227). \tag{117}$$

Therefore, ($\tilde{A}_{\text{opt}}, \tilde{b}_{\text{opt}}, \tilde{c}_{\text{opt}}, \tilde{d}_{\text{opt}}$) is the minimum *L*2-sensitivity realization without limit cycles.
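These numerical relationships can be checked directly. The sketch below (using NumPy) verifies that the second-order Gramians in Eqs. (110)–(111) satisfy their Lyapunov equations for the realization in Eq. (109), and that both examples satisfy a diagonal-similarity condition $W_0 = B K_0 B$; reading Eq. (106) in this form is our assumption, inferred from the printed values rather than quoted from the authors' formula.

```python
import numpy as np

# Second-order minimum L2-sensitivity realization, Eq. (109)
A = np.array([[0.7281, 0.5229], [-0.5351, 0.7281]])
b = np.array([[0.4146], [-0.1282]])
c = np.array([[0.1282, -0.4146]])

# Gramians, Eqs. (110)-(111)
K = np.array([[0.5100, -0.0870], [-0.0870, 0.4901]])
W = np.array([[0.4901, -0.0870], [-0.0870, 0.5100]])

# Lyapunov equations: K = A K A^T + b b^T,  W = A^T W A + c^T c
lyap_K = np.abs(A @ K @ A.T + b @ b.T - K).max()
lyap_W = np.abs(A.T @ W @ A + c.T @ c - W).max()

# Assumed form of the limit-cycle condition, Eq. (106): W = B K B
B2 = np.diag([0.9803, 1.0201])                      # Eq. (112)
cond2 = np.abs(B2 @ K @ B2 - W).max()

# Fourth-order example, Eqs. (115)-(117)
K4 = np.array([[0.3531, -0.0004, 0.2470, 0.0155],
               [-0.0004, 0.3563, -0.0157, 0.2470],
               [0.2470, -0.0157, 0.5309, 0.0006],
               [0.0155, 0.2470, 0.0006, 0.5264]])
W4 = np.array([[0.5309, -0.0006, 0.2470, 0.0157],
               [-0.0006, 0.5264, -0.0155, 0.2470],
               [0.2470, -0.0155, 0.3531, 0.0004],
               [0.0157, 0.2470, 0.0004, 0.3563]])
B4 = np.diag([1.2261, 1.2155, 0.8156, 0.8227])      # Eq. (117)
cond4 = np.abs(B4 @ K4 @ B4 - W4).max()

# All residuals stay at the level of the 4-digit rounding of the data
print(lyap_K, lyap_W, cond2, cond4)
```

All four residuals come out below 10⁻³, i.e. at the rounding level of the four-digit values printed in the text.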

We demonstrate the absence of limit cycles in the minimum *L*2-sensitivity realization by observing its zero-input response. We calculate the zero-input responses of the minimum *L*2-sensitivity realization and the direct form II, setting the initial state as *x*(0) = [0.9 0.9 0.9 0.9]*T*. We let the dynamic range of signals be [−1, 1) and adopt two's complement as the overflow characteristic. The zero-input responses are shown in Figs. 10(a) and 10(b). We assume that each filter coefficient and signal has a 16-bit fixed-point representation, of which the lower 13 bits are fractional bits. In this numerical example, overflow of the state variables occurs in both cases. Also in this case, we can confirm that the minimum *L*2-sensitivity realization does not generate limit cycles, whereas a large-amplitude autonomous oscillation is observed in the zero-input response of the direct form II.
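The overflow experiment can be reproduced in outline. The sketch below is an illustrative reconstruction, not the authors' code: the coefficients of *H*(*z*) in Eq. (108) are not reproduced in this excerpt, so the direct form II state matrix is built from the characteristic polynomial *z*² − 1.4562*z* + 0.8099 recovered from *A*opt in Eq. (109), and the Q14 two's-complement arithmetic is modeled by rounding followed by wraparound into [−1, 1).

```python
import numpy as np

FRAC = 14  # fractional bits (the second-order experiment uses Q14)

def quantize_wrap(v):
    """Round to FRAC fractional bits, then wrap into [-1, 1)
    with two's-complement overflow."""
    q = np.round(v * 2**FRAC) / 2**FRAC
    return ((q + 1.0) % 2.0) - 1.0

def zero_input_response(A, x0, steps=100):
    """Iterate x(n+1) = quantize_wrap(A x(n)) from the initial state x0."""
    x = np.asarray(x0, float)
    out = [x]
    for _ in range(steps):
        x = quantize_wrap(A @ x)
        out.append(x)
    return np.array(out)

# Minimum L2-sensitivity state matrix, Eq. (109)
A_min = np.array([[0.7281, 0.5229], [-0.5351, 0.7281]])
# Direct form II (companion) matrix with the same characteristic
# polynomial z^2 - 1.4562 z + 0.8099, recovered from A_min
A_df2 = np.array([[1.4562, -0.8099], [1.0, 0.0]])

r_min = zero_input_response(A_min, [0.9, 0.9])
r_df2 = zero_input_response(A_df2, [0.9, 0.9])
print(np.abs(r_min[-1]).max(), np.abs(r_df2[-1]).max())
```

Plotting the two trajectories reproduces the qualitative comparison of Fig. 8: the decay (or persistence) of the state after overflow is what distinguishes the two structures.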

Analytical Approach for Synthesis of Minimum *L2*-Sensitivity Realizations for State-Space Digital Filters
http://dx.doi.org/10.5772/52194

**Figure 9.** Frequency response of digital filter *H*(*z*) in Eq. (113).

**Figure 10.** Zero-input responses of digital filter *H*(*z*) in Eq. (113): (a) minimum *L*2-sensitivity realization; (b) direct form II.

## **6. Conclusions**

This chapter presents an analytical approach to the synthesis of minimum *L*2-sensitivity realizations for state-space digital filters. The contributions of this chapter are summarized as follows.

Section 3 presents closed-form solutions to the *L*2-sensitivity minimization problem for second-order state-space digital filters. We have shown that the *L*2-sensitivity is expressed as a linear combination of exponential functions, and that the minimum *L*2-sensitivity realization can be synthesized by solving only a fourth-degree polynomial equation, which can be solved analytically.

Section 4 reveals that the *L*2-sensitivity minimization problem can be solved analytically for arbitrary filter order if the second-order modes are all equal. We derive a general expression for the transfer function of digital filters with all second-order modes equal, and show that it is obtained by a frequency transformation of a first-order prototype FIR digital filter.

Section 5 proves the absence of limit cycles in the minimum *L*2-sensitivity realization from the viewpoint of the relationship between the controllability and observability Gramians. Minimum *L*2-sensitivity realizations were originally known as low-coefficient-sensitivity filter structures; we have thus discovered a novel property of the minimum *L*2-sensitivity realizations.

## **Author details**


Shunsuke Yamaki⋆, Masahide Abe and Masayuki Kawamata

<sup>⋆</sup> Address all correspondence to: yamaki@mk.ecei.tohoku.ac.jp

Department of Electronic Engineering, Graduate School of Engineering, Tohoku University, Sendai, Japan




**Chapter 10**

## **Particle Swarm Optimization of Highly Selective Digital Filters over the Finite-Precision Multiplier Coefficient Space**

Seyyed Ali Hashemi and Behrouz Nowrouzian

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/52196

## **1. Introduction**


Digital filters find a wide variety of applications in modern digital signal processing systems [1, 2]. As a result of recent progress in such systems, there is an ever-growing demand for sharp transition-band digital filters. These narrow transition-bandwidth digital filters are usually designed using the frequency response masking (FRM) approach [3]. The computational efficiency of the FRM technique makes it suitable for various applications, e.g. audio signal processing and data compression [4].

The practical design of digital filters is based on optimization for satisfying the given design specifications together with the hardware architecture. The optimization may be carried out in terms of fixed configurations but variable multiplier coefficient values; alternatively, the problem may concern the optimization of the hardware architecture without taking the multiplier coefficient values into consideration.

In order to satisfy the given design specifications, the multiplier coefficient values can be determined in infinite precision by using existing optimization techniques. However, in an actual hardware implementation of the digital filters, the infinite-precision multiplier coefficients must be quantized to their finite-precision counterparts, and these finite-precision multiplier coefficients may no longer satisfy the given design specifications. Consequently, from a hardware implementation point of view, there is a need for finite-precision optimization techniques capable of finding the optimized digital filter rapidly while keeping the computational complexity at a desired level. In principle, there exist two different families of techniques for the optimization of digital filters, namely, gradient-based and heuristic optimization approaches.
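The effect of this quantization step is easy to illustrate: rounding optimized coefficients to a fixed-point grid perturbs the frequency response, which is why the quantized filter may violate the original specifications. A minimal sketch (the windowed-sinc filter and the 8-bit wordlength are illustrative choices, not taken from this chapter):

```python
import numpy as np

def quantize(coeffs, frac_bits):
    """Round coefficients to a fixed-point grid with frac_bits fractional bits."""
    step = 2.0 ** -frac_bits
    return np.round(np.asarray(coeffs) / step) * step

def magnitude_response(h, n_points=512):
    """Magnitude response of an FIR filter h at n_points frequencies on [0, pi)."""
    w = np.linspace(0.0, np.pi, n_points, endpoint=False)
    e = np.exp(-1j * np.outer(w, np.arange(len(h))))
    return np.abs(e @ h)

# Illustrative "infinite-precision" design: 21-tap windowed-sinc lowpass
n = np.arange(-10, 11)
h = 0.25 * np.sinc(0.25 * n) * np.hamming(21)

h_q = quantize(h, 8)  # quantize to 8 fractional bits
dev = np.abs(magnitude_response(h) - magnitude_response(h_q)).max()
print(dev)  # worst-case response deviation introduced by quantization
```

The deviation is bounded by the number of taps times half the quantization step; whether it is acceptable depends on how tight the passband/stopband tolerances are, which motivates the finite-precision re-optimization discussed next.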

Gradient-based optimization techniques have been studied widely. In [5], an integer programming technique was developed for the optimization of digital filters over a discrete multiplier coefficient space. In [6], a Remez exchange algorithm was used for the optimization of FRM finite impulse response (FIR) digital filters, and it was shown that this algorithm may provide a speed advantage over the linear programming approach. However, both of these techniques suffer from sub-optimality problems. In [7], an unconstrained weighted least-squares criterion was used to develop another technique for the optimization of digital filters. Convex optimization approaches such as semi-definite programming [8] and second-order cone programming [9] have also been applied to the optimization of digital filters. However, if a large number of constraints are present, these optimization techniques may become computationally inefficient in terms of time consumption and speed.

© 2013 Hashemi and Nowrouzian; licensee InTech. This is an open access chapter distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Heuristic optimization algorithms have emerged as promising candidates for the design and discrete optimization of digital filters, particularly because they are capable of automatically finding near-optimum solutions while keeping the computational complexity of the algorithm at moderate levels. Simulated annealing (SA) and genetic algorithms (GAs) have been widely used in the design and optimization of digital filters [10–12]. Particle swarm optimization (PSO) and the seeker optimization algorithm (SOA) are two newly developed algorithms suitable for the optimization of various digital filters due to their small number of implementation parameters and high speed of convergence [13, 14]. It was shown that SOA has advantages over PSO in terms of speed of convergence and global search ability [15]. Tabu search (TS) [16], ant colony optimization (ACO) [17], immune algorithms (IA) [18] and differential evolution (DE) [19, 20] are alternative candidates for the optimization of digital filters. All the foregoing techniques allow a robust search of the solution space through a parallel search in all directions without any recourse to gradient information. However, they were developed for infinite-precision optimization of digital filters, which requires the user to perform a quantization step for a hardware implementation.

In [21–23], a technique was developed for finite-precision design and optimization of FRM digital filters using GAs. Finite-precision optimization of FRM FIR digital filters using PSO was studied in [24, 25], and finite-precision optimization of infinite-impulse-response-based (IIR-based) FRM digital filters was studied in [26, 27]. PSO was originally proposed by Kennedy and Eberhart in 1995 as a new intelligent optimization algorithm which simulates the migration and aggregation of a flock of birds seeking food [28]. It adopts a strategy based on a particle swarm and parallel global random search, which may exhibit performance superior to other intelligent algorithms in computational speed and memory. In PSO, a potential candidate solution is represented as a particle in a multidimensional search space, where each dimension represents a distinct optimization variable. The particles in the multidimensional search space are characterized by corresponding fitness values, and they move in the search space towards regions characterized by high fitness values.

The conventional FRM digital filters incorporate FIR interpolation digital subfilters. These digital subfilters are usually of high order, rendering the resulting overall FRM digital filters uneconomical, since they occupy large chip areas and consume high amounts of power in their VLSI hardware implementations. In general, the multiplication operation is the most cost-sensitive part of such an implementation. Therefore, there is every incentive to reduce the number of multiplication operations in the digital filter realization. This problem may be circumvented by employing IIR interpolation digital subfilters [29, 30].

There is a vast body of literature available on the design and optimization of digital IIR filters [31–33]. However, all the aforementioned designs are based on the exact transfer function coefficients, which leads to an uneconomical hardware realization of such filters. In order to realize the constituent IIR interpolation digital subfilters on a hardware platform, the bilinear-lossless-discrete-integrator (bilinear-LDI) digital filter design approach is employed [34]. These digital subfilters are realized as a sum/difference of a pair of bilinear-LDI digital allpass networks. The salient features of the bilinear-LDI digital filters are that they lend themselves to fast two-cycle parallel digital signal processing speeds, while being minimal in the number of digital multiplication operations (and, practically, minimal in the number of digital addition and unit-delay operations).

The starting point in the design of FRM digital filters is to find the multiplier coefficients constituent in the FRM digital filter in infinite precision by using existing gradient-based optimization techniques (e.g. the Parks–McClellan approach [35] for FIR digital filters), followed by a quantization step. The quantization can be performed by constraining the multiplier coefficient values to conform to certain number systems such as the signed power-of-two (SPT) system. SPT is a computationally efficient number system which can further reduce the hardware complexity of the FRM IIR digital filters. In this number system, each multiplier coefficient is represented with only a few non-zero bits within its wordlength, permitting the decomposition of the multiplication operation into a finite series of shift and add operations. Digital filters incorporating SPT multiplier coefficient representation are commonly referred to as *multiplierless* digital filters [36]. However, the SPT representation of a given number is not unique, resulting in redundancy in the multiplier coefficient representation. This redundancy can adversely affect the corresponding computational complexity due to repetitive recourse to compare operations.

The canonical signed digit (CSD) number system is a special case of the SPT number system which circumvents the above redundancy problem by limiting the number of non-zero bits in the representation of the multiplier coefficients. It is usually used in combination with subexpression sharing and elimination, which in turn results in a substantial reduction in the cost of the VLSI hardware implementation of the digital filters [37]. In the CSD number system, no two (or more) non-zero bits can appear consecutively in the representation of the multiplier coefficients, reducing the maximum number of non-zero bits by a factor of two in terms of shift and add operations [38].

After multiplier coefficient quantization, the resulting FRM digital filter may no longer satisfy the given target design specifications. Therefore, the next step in the design of FRM digital filters is to perform a further optimization to make the finite-precision FRM digital filter conform to the design specifications. This can be achieved by resorting to a finite-precision optimization technique such as PSO.

A direct application of the conventional PSO algorithm to the optimization of the above FRM digital filters gives rise to three separate problems:

• The first problem arises because, in the course of optimization, the multiplier coefficient update operations lead to values that may no longer conform to the desired CSD wordlength, etc. (due to the random nature of the velocity and position of the particles). This problem is resolved by generating indexed look-up tables (LUTs) of permissible CSD multiplier coefficient values, and by employing the indices of the LUTs to represent the FRM digital filter multiplier coefficient values.

• The second problem stems from the fact that, in the case of FRM IIR digital filters, the resulting FRM IIR digital filters may no longer be bounded-input bounded-output (BIBO) stable. This problem can be resolved by generation and successive augmentation of template LUTs until the BIBO stability constraints remain satisfied [23].

• Finally, the third problem arises because, even with indexed LUTs, the particles may go over the boundaries of the LUTs in the course of optimization (due to the inherently limited search space). This can be resolved by introducing *barren layers*. A barren layer is a region, with a certain width and certain entries, which is added to the problem space such that the particles tend to shy away from such a region. The width of the barren layers is calculated based on a worst-case scenario that may happen in the particles' movements in the search space. However, the entries of the barren layers are different for different problems and depend on the topology of the search space and the fitness function used in the problem.

This chapter discusses in detail the design, realization and discrete PSO of FRM IIR digital filters. FRM IIR digital filters are formed by FIR masking digital subfilters together with IIR interpolation digital subfilters. The FIR masking subfilter design is straightforward and can be performed by using well-established techniques. The IIR digital subfilter design topology consists of a parallel combination of a pair of allpass networks such that its magnitude-frequency response matches that of an odd-order elliptic minimum Q-factor (EMQF) transfer function. This design is realized using the bilinear-LDI approach, with multiplier coefficient values represented as finite-precision CSD numbers.


The above FRM digital filters are optimized over the discrete multiplier coefficient space, resulting in FRM digital filters which are capable of direct implementation on digital hardware platforms without any need for further optimization. A new PSO algorithm is developed to tackle the three problems identified above. In this PSO algorithm, a set of indexed LUTs of permissible CSD multiplier coefficient values is generated to ensure that, in the course of optimization, the multiplier coefficient update operations constituent in the underlying PSO algorithm lead to values that are guaranteed to conform to the desired CSD wordlength. In addition, a general set of constraints is derived in terms of the multiplier coefficients to guarantee that the IIR bilinear-LDI interpolation digital subfilters automatically remain BIBO stable throughout the course of the PSO algorithm. Moreover, by introducing barren layers, the particles are ensured to automatically remain inside the boundaries of the LUTs in the course of optimization.
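A minimal sketch of such an indexed LUT (our illustration; the chapter's actual construction, wordlengths and constraints are not reproduced here) enumerates all fractional CSD values of a given wordlength with a bounded number of non-zero digits:

```python
from itertools import product

def csd_lut(wordlength, max_nonzero):
    """Sorted LUT of permissible fractional CSD values: `wordlength`
    digits in {-1, 0, +1} with weights 2^-1, 2^-2, ..., no two adjacent
    non-zero digits, and at most `max_nonzero` non-zero digits."""
    values = set()
    for digits in product((-1, 0, 1), repeat=wordlength):
        nonzero = [i for i, d in enumerate(digits) if d != 0]
        if len(nonzero) > max_nonzero:
            continue
        if any(j - i == 1 for i, j in zip(nonzero, nonzero[1:])):
            continue  # adjacent non-zero digits: not a CSD representation
        values.add(sum(d * 2.0 ** -(i + 1) for i, d in enumerate(digits)))
    return sorted(values)
```

The PSO update can then operate on LUT indices rather than on raw coefficient values, so that every update lands on a permissible CSD number.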

## **2. The conventional PSO algorithm**

Let us consider an optimization problem consisting of *N* design variables, and let us refer to each solution as a particle. Let us further consider a swarm of *K* particles in the *N*-dimensional search space. The position of the *k*-th particle in the search space can be assigned an *N*-dimensional position vector *Xk* = {*xk*1, *xk*2,..., *xkN*}. In this way, the element *xkj* (for *j* = 1, 2, . . . , *N*) represents the *j*-th coordinate of the particle *Xk*.

The PSO optimization fitness function maps each particle *Xk* in the search space to a fitness value. In addition, the particle *Xk* is assigned an *N*-dimensional velocity vector *Vk* = {*vk*1, *vk*2,..., *vkN*}. The PSO optimization search is directed towards promising regions by taking into account the velocity vector *Vk* together with the best previous position of the *k*-th particle *Xbestk* = {*xbestk*1, *xbestk*2,..., *xbestkN*}, and the best global position of the swarm *Gbest* = {*gbest*1, *gbest*2,..., *gbestN*} (i.e. the location of the particle with the best fitness value).

The conventional PSO is initialized by spreading the particles *Xk* through the search space in a random fashion. Then, the particles make movements through the search space towards regions characterized by high fitness values with corresponding velocities *Vk*. The movement of each particle is governed by the best previous location of the same particle *Xbestk* , and by the global best location *Gbest*. The velocity of particle movement is determined from the previous best location of the particle, the global best location, and the previous velocity.


**Figure 1.** Movement of Particles in PSO Algorithm


The velocity and position of each particle in the *i*-th iteration throughout the course of PSO are updated in accordance with the equations:

$$v\_{kj}^i = wv\_{kj}^{i-1} + c\_1 r\_1 (x\_{best\_{kj}}^{i-1} - x\_{kj}^{i-1}) + c\_2 r\_2 (g\_{best\_j}^{i-1} - x\_{kj}^{i-1}) \tag{1}$$

$$\text{if} \quad v\_{kj}^i < v\_{min} \quad ; \quad v\_{kj}^i = v\_{min}$$

$$\text{if} \quad v\_{kj}^i > v\_{max} \quad ; \quad v\_{kj}^i = v\_{max}$$

$$x\_{kj}^i = x\_{kj}^{i-1} + v\_{kj}^i \tag{2}$$

The parameter *w* represents an inertia weight; *c*1 and *c*2 are the correction (learning) factors, and *r*1 and *r*2 are random numbers in the interval [0, 1]. The velocity is limited between *vmin* and *vmax* to avoid very large particle movements in the search space, where *vmin* < 0 and *vmax* > 0. Fig. 1 illustrates how the particles move in a two-dimensional search space (*N* = 2). In this figure, two particles are present in the swarm, i.e. *K* = 2.

The first term in the right-hand side of the movement update Eqn. (1), weighted by *w*, signifies the dependence of the current particle velocity on its value in the previous iteration. The second term, weighted by *c*1, signifies an attractor that pulls the particle towards its previous best position. The third term, weighted by *c*2, controls the movement of the particle towards the global best position.

In addition to the update Eqns. (1) and (2), one can limit the coordinates of a particle between two user-defined values *xjmin* and *xjmax* in order to limit the search space. However, this operation increases the complexity and consumes time.
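The update Eqns. (1)-(2), including the velocity clamping, can be sketched as follows (a minimal illustration with hypothetical names, assuming the fitness function is to be maximized):

```python
import random

def pso_step(X, V, X_best, g_best, fitness,
             w=0.7, c1=1.5, c2=1.5, v_min=-1.0, v_max=1.0):
    """One PSO iteration: apply Eqns. (1)-(2) to every coordinate of
    every particle, then refresh the personal and global bests."""
    for k in range(len(X)):
        for j in range(len(X[k])):
            r1, r2 = random.random(), random.random()
            v = (w * V[k][j]
                 + c1 * r1 * (X_best[k][j] - X[k][j])   # pull towards personal best
                 + c2 * r2 * (g_best[j] - X[k][j]))     # pull towards global best
            V[k][j] = min(max(v, v_min), v_max)         # clamp to [v_min, v_max]
            X[k][j] += V[k][j]                          # Eqn. (2)
        if fitness(X[k]) > fitness(X_best[k]):
            X_best[k] = X[k][:]
    g_best = max(X_best, key=fitness)[:]
    return X, V, X_best, g_best
```

By construction the global best fitness is non-decreasing from iteration to iteration, since *Gbest* is only replaced by a strictly better personal best.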

**Figure 2.** FRM Digital Filter Block Diagram

**Figure 3.** Block Diagram Representation of Frequency-Response Masking

#### **3. The conventional FRM design approach**

#### **3.1. Design of lowpass FRM digital filters**

The block diagram in Fig. 2 shows a conventional FRM digital filter, where *Ha*(*z*) represents a FIR interpolation lowpass digital subfilter, and where *Hb*(*z*) represents a power complementary counterpart of *Ha*(*z*) in accordance with

$$|H\_{\mathfrak{a}}(e^{j\omega})|^2 + |H\_{\mathfrak{b}}(e^{j\omega})|^2 = 1 \tag{3}$$

**Table 1.** Edge Frequencies of the Overall FRM FIR Filter and Masking Subfilters

| Filter | Passband Edge | Stopband Edge |
|---|---|---|
| *H*(*z*) (Case I) | (2*I<sub>L</sub>π* + *ω<sub>p</sub>*)/*M* | (2*I<sub>L</sub>π* + *ω<sub>a</sub>*)/*M* |
| *F*<sub>0</sub>(*z*) (Case I) | (2*I<sub>L</sub>π* + *ω<sub>p</sub>*)/*M* | (2(*I<sub>L</sub>* + 1)*π* − *ω<sub>a</sub>*)/*M* |
| *F*<sub>1</sub>(*z*) (Case I) | (2*I<sub>L</sub>π* − *ω<sub>p</sub>*)/*M* | (2*I<sub>L</sub>π* + *ω<sub>a</sub>*)/*M* |
| *H*(*z*) (Case II) | (2*I<sub>L</sub>π* − *ω<sub>a</sub>*)/*M* | (2*I<sub>L</sub>π* − *ω<sub>p</sub>*)/*M* |
| *F*<sub>0</sub>(*z*) (Case II) | (2(*I<sub>L</sub>* − 1)*π* + *ω<sub>a</sub>*)/*M* | (2*I<sub>L</sub>π* − *ω<sub>p</sub>*)/*M* |
| *F*<sub>1</sub>(*z*) (Case II) | (2*I<sub>L</sub>π* − *ω<sub>a</sub>*)/*M* | (2*I<sub>L</sub>π* + *ω<sub>p</sub>*)/*M* |

Here, *z* represents the discrete-time complex frequency, and *ω* represents the corresponding (normalized) real frequency variable. Moreover, *F*0(*z*) and *F*1(*z*) represent FIR masking digital subfilters, while *Ha*(*zM*) and *Hb*(*zM*) represent *M*-fold interpolated versions of *Ha*(*z*) and *Hb*(*z*), respectively. In the case of FIR digital interpolation subfilters, for a linear-phase filter *Ha*(*z*) of order *NFIR*, the relationship between *Hb*(*z*) and *Ha*(*z*) is as follows:

$$H\_b(z) = z^{-(N\_{FIR} + 1)/2} - H\_a(z) \tag{4}$$

and hence *Hb*(*z*) can be implemented by subtracting the output of *Ha*(*z*) from the delayed version of the input, as shown in Fig. 3.

The FRM digital filter in Fig. 2 has an overall transfer function

$$H(z) = H\_{\mathfrak{a}}(z^{M})F\_{\mathfrak{0}}(z) + H\_{\mathfrak{b}}(z^{M})F\_{\mathfrak{1}}(z) \tag{5}$$
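Eqn. (5) can be evaluated numerically as follows (a sketch under the assumption that the subfilters are given as FIR coefficient lists and that *Hb* follows from *Ha* via the delay-complement relation of Eqn. (4); the function names are illustrative):

```python
import numpy as np

def freqz(h, w, M=1):
    """Frequency response of FIR h at frequencies w, with every unit
    delay replaced by z^-M (i.e. the M-fold interpolated H(z^M))."""
    h = np.asarray(h, dtype=float)
    n = np.arange(len(h)) * M
    return np.exp(-1j * np.outer(w, n)) @ h

def frm_response(ha, f0, f1, M, n_freq=512):
    """|H(e^jw)| of the FRM structure H(z) = Ha(z^M)F0(z) + Hb(z^M)F1(z)."""
    w = np.linspace(0.0, np.pi, n_freq)
    N = len(ha) - 1                                # order of Ha(z)
    Ha = freqz(ha, w, M)
    Hb = np.exp(-1j * w * M * (N + 1) / 2) - Ha    # Eqn. (4), interpolated M-fold
    return w, np.abs(Ha * freqz(f0, w) + Hb * freqz(f1, w))
```

With trivial masking filters *F*0(*z*) = *F*1(*z*) = 1 the two branches recombine into a pure delay, so the magnitude response is identically 1; realistic masking filters then carve the desired passband out of the image bands.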




The masking digital subfilters *F*0(*z*) and *F*1(*z*) are employed to suppress the unwanted image bands produced by the interpolated digital subfilters *Ha*(*zM*) and *Hb*(*zM*). The masking filters are made to have equal order (by zero padding) in order to ensure that their phase characteristics are similar. The corresponding interpolated digital subfilters *Ha*(*zM*) and *Hb*(*zM*) can realize transition bands which are a factor of *M* sharper than those of *Ha*(*z*) and *Hb*(*z*), without increasing the number of required non-zero digital multipliers. The magnitude frequency-response of the various subfilters incorporated by the FRM digital filter design approach are shown in Fig. 4.

Here, Case I design is when the transition band of *H*(*z*) is extracted from that of *Ha*(*zM*) and Case II design is when the transition band of *H*(*z*) is extracted from that of *Hb*(*zM*). The edge frequencies of the overall digital FRM filter and its constituent subfilters are given in Table 1, where *IL* represents the number of image lobes to be masked given by:

$$I\_L = \begin{cases} \left\lfloor \frac{M\omega\_p}{2\pi} \right\rfloor & \text{Case I} \\\\ \left\lceil \frac{M\omega\_a}{2\pi} \right\rceil & \text{Case II} \end{cases} \tag{6}$$

where ⌊ ⌋ denotes the largest integer from the lower side, and ⌈ ⌉ signifies the smallest integer from the upper side.
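Eqn. (6) amounts to the following one-liner (a sketch; the function name is ours):

```python
import math

def num_image_lobes(M, omega_p, omega_a, case):
    """Number of image lobes I_L to be masked, per Eqn. (6)."""
    if case == 1:
        return math.floor(M * omega_p / (2 * math.pi))   # Case I
    return math.ceil(M * omega_a / (2 * math.pi))        # Case II
```

For instance, with *M* = 16, *ωp* = 0.3*π* and *ωa* = 0.35*π*, this gives *IL* = 2 for Case I and *IL* = 3 for Case II.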

#### **3.2. Design of bandpass FRM digital filters**

In general, it is possible to extend the conventional FRM approach for the design of bandpass or bandstop FRM digital filters. However, the resulting FRM digital filters are constrained to have identical lower and upper transition bandwidths. In [39], this restriction was relaxed by realizing the bandstop FRM FIR digital filter as a parallel combination of a corresponding pair of lowpass and highpass FIR


**Figure 4.** Magnitude Frequency-Response of FRM Digital Filter. (a) Magnitude Frequency-Response of the Bandedge-Shaping Digital Subfilters *Ha*(*z*) and *Hb*(*z*). (b) Magnitude Frequency-Response of the *M*-Interpolated Complementary Digital Subfilters *Ha*(*z<sup>M</sup>*) and *Hb*(*z<sup>M</sup>*). (c) Magnitude Frequency-Response of the Masking Digital Subfilters *F*<sup>0</sup> (*z*) and *F*<sup>1</sup> (*z*) for Case I. (d) Magnitude Frequency-Response of the Overall FRM Digital Filter *H*(*z*) for Case I. (e) Magnitude Frequency-Response of the Masking Digital Subfilters *F*<sup>0</sup> (*z*) and *F*<sup>1</sup> (*z*) for Case II. (f) Magnitude Frequency-Response of the Overall FRM Digital Filter *H*(*z*) for Case II [3].


**Figure 5.** Bandpass FRM Digital Filter Block Diagram


digital filters. The latter lowpass and highpass FRM digital filters were obtained using a variation of the conventional FRM approach.

Let the desired bandpass FRM digital filter *H*(*z*) have a lower transition bandwidth which is not identical to its upper transition bandwidth. *H*(*z*) can be realized as a cascade combination of a pair of lowpass and highpass FRM digital filters, so that

$$H(z) = H\_{lp}(z)H\_{hp}(z) \tag{7}$$

where *Hlp*(*z*) represents a lowpass and *Hhp*(*z*) represents a highpass FRM digital filter. In this way, *Hlp*(*z*) and *Hhp*(*z*) can be obtained with the help of Eqn. (5) as

$$H\_{lp}(z) = H\_{a\_{lp}}(z^M)F\_{0\_{lp}}(z) + H\_{b\_{lp}}(z^M)F\_{1\_{lp}}(z) \tag{8}$$

$$H\_{hp}(z) = H\_{a\_{hp}}(z^M)F\_{0\_{hp}}(z) + H\_{b\_{hp}}(z^M)F\_{1\_{hp}}(z) \tag{9}$$

The lower transition bandwidth is governed by the constituent transition bandwidth of the highpass FRM digital filter, while the upper transition bandwidth is governed by the constituent transition bandwidth of the lowpass FRM digital filter. The realization of the bandpass FRM digital filter is as shown in Fig. 5.

## **4. Design of FRM digital filters incorporating IIR interpolation digital subfilters**

In the case of FRM IIR digital filters, *Ha*(*z*) and *Hb*(*z*) (in section 3) act as IIR interpolation digital subfilters. The masking filters *F*0(*z*) and *F*1(*z*) are not changed (i.e. they are still equal order FIR digital filters). Therefore, Eqn. (5) is still valid for the FRM IIR digital filter.

The IIR interpolation digital subfilter *Ha*(*z*) is chosen to have an odd order *NIIR*. Odd-ordered elliptic transfer functions can be represented as a sum of or difference between two allpass transfer functions [40]. Therefore, *Ha*(*z*) can be realized as the addition of two allpass digital networks *G*0(*z*) and *G*1(*z*) as follows:

$$H\_a(z) = \frac{G\_0(z) + G\_1(z)}{2} \tag{10}$$

where *G*0(*z*) is odd-ordered and *G*1(*z*) is even-ordered. The interesting fact is that the difference between *G*0(*z*) and *G*1(*z*) results in a filter that is power complementary to *Ha*(*z*), and can subsequently be used as the power complementary interpolation digital subfilter *Hb*(*z*) as in the following:

$$H\_b(z) = \frac{G\_0(z) - G\_1(z)}{2} \tag{11}$$
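Because *G*0(*z*) and *G*1(*z*) are allpass, |*G*0| = |*G*1| = 1 on the unit circle, and Eqns. (10)-(11) give |*Ha*|² + |*Hb*|² = (|*G*0 + *G*1|² + |*G*0 − *G*1|²)/4 = 1, i.e. Eqn. (3) holds automatically. A numerical check with first- and second-order allpass sections (coefficient values chosen arbitrarily for illustration):

```python
import numpy as np

def allpass1(a, w):
    """First-order allpass G(z) = (a + z^-1) / (1 + a z^-1) on z = e^jw."""
    z1 = np.exp(-1j * w)
    return (a + z1) / (1 + a * z1)

w = np.linspace(0.0, np.pi, 256)
G0 = allpass1(0.4, w)                       # odd-order branch
G1 = allpass1(0.3, w) * allpass1(-0.5, w)   # even-order branch
Ha = (G0 + G1) / 2                          # Eqn. (10)
Hb = (G0 - G1) / 2                          # Eqn. (11)
assert np.allclose(np.abs(Ha) ** 2 + np.abs(Hb) ** 2, 1.0)  # Eqn. (3)
```

The check passes for any choice of stable allpass branches, which is exactly why this structure keeps *Ha*(*z*) and *Hb*(*z*) power complementary regardless of coefficient quantization.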

*<sup>G</sup>*0(*zM*)

*A*(*z*)

*B*(*z*)

The advantage of realizing the FRM IIR digital filter as shown in Fig. 8 is that two adders shown in Fig. 7 are removed and they are no longer required. This subsequently simplifies the hardware implementation of the overall FRM IIR digital filter. However, it should be noted that the FIR masking digital subfilters *F*0(*z*) and *F*1(*z*) are made to be equal order using zero padding, and this results in the masking filters being moderately sparse. This is not the case when *A*(*z*) and *B*(*z*) are used instead. Therefore, the gain in hardware that could be achieved by using the realization in Fig. 8 is offset by a greater number of

Particle Swarm Optimization of Highly Selective Digital Filters over the Finite-Precision Multiplier Coefficient Space

**5. Realization of IIR interpolation digital subfilters using Elliptic Filters with**

Bilinear-LDI transformation falls into the category of digital filter realization techniques that transform an analog reference filter to its digital counterpart. Therefore, in order to determine the multiplier coefficient values of the IIR interpolation digital subfilters *Ha*(*z*) and *Hb*(*z*) constituent in the FRM IIR digital filter, a suitable analog reference filter *Ha*(*s*) and its power complementary analog filter *Hb*(*s*) have to be determined, where *s* is the analog frequency domain variable. Once *Ha*(*s*) and *Hb*(*s*) have been determined, the interpolation digital subfilters *Ha*(*z*) and *Hb*(*z*) are derived by using bilinear-LDI

EMQF filters have several advantages for the design of FRM IIR digital filters. The squared ripple in the passband region of *Ha*(*z*) and the squared ripple in the stopband region of *Hb*(*z*) are equal as indicated by Eqn. (3). On the other hand, the squared ripple in the stopband region of *Ha*(*z*) and the squared ripple in the passband region of *Hb*(*z*) are equal. In addition, depending on whether the design specifications require a Case I or Case II FRM technique, either *Ha*(*z*) or *Hb*(*z*) could determine the maximum passband and stopband ripple of the overall FRM IIR digital filter *H*(*z*). Consequently, the interpolation filter *Ha*(*z*) is chosen to have equal passband and stopband squared tolerances. In this way, the resulting *Hb*(*z*) also displays equal passband and stopband squared tolerances. These characteristics can be generalized for the analog reference subfilters *Ha*(*s*) and *Hb*(*s*). Therefore, there is a need for an analog reference filter *Ha*(*s*) that together with its power complement *Hb*(*s*) can exactly satisfy the passband and stopband relations in the FRM IIR filter. EMQF filters can successfully comply with the specifications present in the FRM IIR filter design. In addition, an EMQF transfer function can be easily designed by using bilinear-LDI transformation technique or any other structure consisting of two digital allpass networks in parallel. Furthermore, filters having EMQF transfer functions are

non-zero multiplier coefficients required in the realization of FRM IIR digital filters.

+

http://dx.doi.org/10.5772/52196

253

*<sup>G</sup>*1(*zM*)

**Figure 8.** Alternative Structure of the Overall FRM IIR Digital Filter Fig. 8 shows the block diagram representing Eqn. (15).

**Minimum Q-factor (EMQF)**

minimally sensitive to component variations.

technique (see Section 6).

**Figure 6.** Block Diagram of Interpolation and Complementary Filters as a Parallel Combination of Two Allpass Networks

**Figure 7.** FRM Digital Filter Realization in Terms of Allpass Digital Networks *G*0(*z*) and *G*1(*z*)

It can be easily verified that *Ha*(*z*) and *Hb*(*z*) are power complementary digital filters [29], i.e. they satisfy Eqn. (3). In addition, it is well known that this structure halves the number of multiplier coefficients required for the implementation of FRM digital filters and therefore is the most economical realization since it requires a total of only *NIIR* multiplier coefficients to realize both *Ha*(*z*) and *Hb*(*z*). The overall transfer function of *H*(*z*) given by Eqn. (5) can be expressed as:

$$H(z) = \frac{G\_0(z^M) + G\_1(z^M)}{2} F\_0(z) + \frac{G\_0(z^M) - G\_1(z^M)}{2} F\_1(z) \tag{12}$$

The block diagram in Fig. 6 shows the IIR interpolation digital subfilters *Ha*(*z*) and *Hb*(*z*) realized as a parallel combination of two allpass networks. It should be noted that if *Ha*(*z*) is a lowpass filter, *Hb*(*z*), which is the power complementary of *Ha*(*z*), is a highpass filter. Fig. 7 shows an overall FRM IIR digital filter realization.

One may rearrange the structure in Fig. 7 by using Eqns. (10-11). This can be performed by defining two digital subfilters as follows:

$$A(z) = \frac{F\_0(z) + F\_1(z)}{2} \tag{13}$$

$$B(z) = \frac{F\_0(z) - F\_1(z)}{2} \tag{14}$$

Then *H*(*z*) in Eqn. (12) simplifies to:

$$H(z) = G\_0(z^M)A(z) + G\_1(z^M)B(z) \tag{15}$$
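The algebraic equivalence of Eqns. (12) and (15) can be checked numerically. The sketch below (with arbitrary example coefficient arrays, not a designed filter) evaluates both forms of *H*(*z*) at a point on the unit circle:

```python
import numpy as np

# Numerical check that the rearranged structure of Eqn. (15) equals the
# original form of Eqn. (12). All coefficient values are arbitrary examples.
rng = np.random.default_rng(0)
M = 4                                    # interpolation factor
f0 = rng.standard_normal(8)              # masking filter F0(z) coefficients
f1 = rng.standard_normal(8)              # masking filter F1(z) coefficients
g0 = rng.standard_normal(3)              # branch network G0(z) coefficients
g1 = rng.standard_normal(3)              # branch network G1(z) coefficients

def tf(coeffs, z):
    """Evaluate sum_k c[k] * z^-k at a complex point z."""
    return sum(c * z ** -k for k, c in enumerate(coeffs))

z = np.exp(1j * 0.37)                    # arbitrary point on the unit circle
G0, G1 = tf(g0, z ** M), tf(g1, z ** M)
F0, F1 = tf(f0, z), tf(f1, z)

H12 = (G0 + G1) / 2 * F0 + (G0 - G1) / 2 * F1      # Eqn. (12)
A, B = (F0 + F1) / 2, (F0 - F1) / 2                # Eqns. (13)-(14)
H15 = G0 * A + G1 * B                              # Eqn. (15)
assert np.isclose(H12, H15)
```

Since the rearrangement is purely algebraic, the two forms agree at every frequency point, independent of the coefficient values chosen.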


**Figure 8.** Alternative Structure of the Overall FRM IIR Digital Filter


Fig. 8 shows the block diagram representing Eqn. (15).

The advantage of realizing the FRM IIR digital filter as shown in Fig. 8 is that the two adders shown in Fig. 7 are removed and are no longer required. This simplifies the hardware implementation of the overall FRM IIR digital filter. However, it should be noted that the FIR masking digital subfilters *F*0(*z*) and *F*1(*z*) are made equal in order by zero padding, and this results in the masking filters being moderately sparse. This is not the case when *A*(*z*) and *B*(*z*) are used instead. Therefore, the hardware saving that could be achieved by using the realization in Fig. 8 is offset by the greater number of non-zero multiplier coefficients required in the realization of FRM IIR digital filters.

## **5. Realization of IIR interpolation digital subfilters using Elliptic Filters with Minimum Q-factor (EMQF)**

The bilinear-LDI transformation falls into the category of digital filter realization techniques that transform an analog reference filter into its digital counterpart. Therefore, in order to determine the multiplier coefficient values of the IIR interpolation digital subfilters *Ha*(*z*) and *Hb*(*z*) constituting the FRM IIR digital filter, a suitable analog reference filter *Ha*(*s*) and its power complementary analog filter *Hb*(*s*) have to be determined, where *s* is the analog frequency domain variable. Once *Ha*(*s*) and *Hb*(*s*) have been determined, the interpolation digital subfilters *Ha*(*z*) and *Hb*(*z*) are derived by using the bilinear-LDI technique (see Section 6).

EMQF filters have several advantages for the design of FRM IIR digital filters. The squared ripple in the passband region of *Ha*(*z*) and the squared ripple in the stopband region of *Hb*(*z*) are equal, as indicated by Eqn. (3). Likewise, the squared ripple in the stopband region of *Ha*(*z*) and the squared ripple in the passband region of *Hb*(*z*) are equal. In addition, depending on whether the design specifications require a Case I or Case II FRM technique, either *Ha*(*z*) or *Hb*(*z*) could determine the maximum passband and stopband ripple of the overall FRM IIR digital filter *H*(*z*). Consequently, the interpolation filter *Ha*(*z*) is chosen to have equal passband and stopband squared tolerances. In this way, the resulting *Hb*(*z*) also displays equal passband and stopband squared tolerances. These characteristics can be generalized to the analog reference subfilters *Ha*(*s*) and *Hb*(*s*). Therefore, there is a need for an analog reference filter *Ha*(*s*) that, together with its power complement *Hb*(*s*), can exactly satisfy the passband and stopband relations in the FRM IIR filter. EMQF filters can successfully comply with the specifications present in the FRM IIR filter design. In addition, an EMQF transfer function can be easily realized by using the bilinear-LDI transformation technique, or by any other structure consisting of two digital allpass networks in parallel. Furthermore, filters having EMQF transfer functions are minimally sensitive to component variations.

Despite all the advantages of EMQF filters, they suffer from not being able to independently specify passband and stopband ripples [41],[42] of the filter. Additionally, EMQF filters have exceedingly low passband attenuation.

All the poles of an EMQF transfer function reside on a circle in the *s* domain, so that they all have equal magnitude. Given squared passband and stopband tolerances *δ<sup>p</sup>* and *δa*, respectively, for an EMQF filter, the passband ripple ∆*<sup>p</sup>* and minimum stopband attenuation ∆*<sup>a</sup>* can be obtained as follows [43]:

$$
\Delta\_p = -10 \log(1 - \delta\_p) \tag{16}
$$


$$
\Delta\_a = -10 \log(\delta\_a) \tag{17}
$$
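As a quick sketch, Eqns. (16)-(17) convert the squared tolerances into dB quantities. The helper below (function name and sample tolerances are mine, chosen only for illustration) uses the base-10 logarithm of the dB convention:

```python
import math

# Illustrative helper for Eqns. (16)-(17): convert squared passband/stopband
# tolerances (delta_p, delta_a) into passband ripple and minimum stopband
# attenuation in dB. The numeric tolerances below are arbitrary examples.
def emqf_ripple_db(delta_p, delta_a):
    ripple_p = -10 * math.log10(1 - delta_p)   # Eqn. (16)
    atten_a = -10 * math.log10(delta_a)        # Eqn. (17)
    return ripple_p, atten_a

dp, da = emqf_ripple_db(0.001, 1e-4)
# A small delta_p gives a very small passband ripple; delta_a = 1e-4 maps
# to exactly 40 dB of minimum stopband attenuation.
assert dp < 0.01 and abs(da - 40.0) < 1e-9
```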

The required passband and stopband edge frequencies for the analog reference filter *Ha*(*s*) can be determined using the design specifications along with Table 1. Frequency warping from the digital to the analog domain, and vice versa, has to be taken into account in accordance with:

$$
\Omega\_A = \frac{2}{T} \tan(\frac{\omega\_d T}{2}) \tag{18}
$$

where Ω*<sup>A</sup>* is the analog frequency variable, *ω<sup>d</sup>* is the digital frequency variable, and *T* is the sampling period.
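The prewarping of Eqn. (18) can be sketched as a one-line helper (the function name and sample values are mine, for illustration only):

```python
import math

# Bilinear prewarping of Eqn. (18): map a digital edge frequency omega_d
# (rad/s) to the analog frequency Omega_A that the bilinear transform will
# map back onto it. Sample values are arbitrary.
def prewarp(omega_d, T):
    return (2.0 / T) * math.tan(omega_d * T / 2.0)

T = 1.0
# For small omega_d * T the warping is nearly the identity mapping.
assert abs(prewarp(0.01, T) - 0.01) < 1e-6
```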

Once the transfer function of the analog reference filter *Ha*(*s*) is determined, it is represented as the sum of two allpass analog filters *G*0(*s*) and *G*1(*s*). In addition, *Hb*(*s*), which is the power complement of *Ha*(*s*), is represented as the difference of *G*0(*s*) and *G*1(*s*). The poles of *G*0(*s*) and *G*1(*s*) are determined by cyclically distributing the poles of the reference filter *Ha*(*s*) [43]. In the next section, the bilinear-LDI design technique is used to transform the two allpass networks *G*0(*s*) and *G*1(*s*) into the digital domain.

## **6. Implementation of EMQF interpolation subfilters using bilinear-LDI design approach**

In this section, the design procedure in [34, 44] is briefly explained to design and implement the digital filters *G*0(*z*) and *G*1(*z*) using the bilinear-LDI approach. This approach transforms the analog reference filters *G*0(*s*) and *G*1(*s*) to obtain their digital filter counterparts *G*0(*z*) and *G*1(*z*).

The bilinear frequency transformation maps the analog frequency variable *s* to its digital domain counterpart *z* in accordance with:

$$s = \frac{2}{T} \frac{z - 1}{z + 1} \tag{19}$$

where *T* represents the sampling period, for mapping the transfer function of a prototype reference filter from the analog domain to the digital domain. The bilinear transform maps the left half of the complex *s*-plane to the interior of the unit circle in the *z*-plane. Therefore, BIBO stable filters in the *s* domain are converted to filters in the *z* domain which preserve that stability. Similarly, if the analog reference filter is minimum-phase, this characteristic of the bilinear transform guarantees that the resulting digital filter is also minimum-phase. It also preserves the sensitivity properties of the analog reference filter. However, the bilinear transform may result in a digital filter that has delay-free loops in its implementation. Unfortunately, delay-free loops prevent a digital filter from being realizable on a hardware platform.
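The stability-preserving property of Eqn. (19) is easy to verify pointwise: a pole at *s* = *p* maps to *z* = (1 + *pT*/2)/(1 − *pT*/2), which lies inside the unit circle whenever Re(*p*) < 0. A minimal sketch (pole values are arbitrary examples):

```python
# Quick check of the stability-preserving property of the bilinear transform
# of Eqn. (19): a left-half-plane pole s = p maps to z = (1 + pT/2)/(1 - pT/2)
# inside the unit circle. Pole values below are arbitrary examples.
def bilinear_pole(p, T=1.0):
    return (1 + p * T / 2) / (1 - p * T / 2)

for p in (-0.5, -1 + 2j, -0.1 - 3j):      # BIBO-stable analog poles (Re < 0)
    assert abs(bilinear_pole(p)) < 1.0
```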


The LDI frequency transformation ensures the absence of delay-free loops in the digital implementation and is given by

$$s = \frac{1}{T} \left( z^{\frac{1}{2}} - z^{-\frac{1}{2}} \right) \tag{20}$$

The LDI frequency transformation maps the hardware implementation of the analog reference filter to the digital domain. While the LDI frequency transformation guarantees that there are no delay-free loops in the implementation of the digital filter, it does so at the cost of a digital filter with a poor magnitude-frequency response. Moreover, it is incapable of preserving the BIBO stability properties of the analog reference filter.
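On the unit circle *z* = e^{jωT}, the LDI mapping of Eqn. (20) evaluates to *s* = (2j/*T*) sin(ωT/2): purely imaginary, and close to jω only for small ωT. A small sketch (function name and sample values are mine):

```python
import cmath
import math

# The LDI mapping of Eqn. (20) evaluated on the unit circle z = exp(jwT)
# gives s = (2j/T) * sin(wT/2): purely imaginary, and approximately jw for
# small wT. Sample values are arbitrary.
def ldi_map(w, T=1.0):
    z = cmath.exp(1j * w * T)
    return (z ** 0.5 - z ** -0.5) / T

w, T = 0.2, 1.0
s = ldi_map(w, T)
assert abs(s.real) < 1e-12                                  # imaginary axis
assert abs(s.imag - (2 / T) * math.sin(w * T / 2)) < 1e-12  # warped frequency
```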

The bilinear-LDI approach is a combination of the two aforementioned realization techniques. In the bilinear-LDI transform, a precompensation is applied to the analog reference filter. Then, the conventional LDI design technique is applied to a network resulting from the precompensated analog prototype filter. The precompensation is such that the application of the LDI design technique results in a filter that exactly matches the bilinear frequency transform of the uncompensated analog prototype filter.

The resulting bilinear-LDI digital filters have several desirable features from a hardware realization point of view. They are minimal in the number of digital multiplication operations. Although they are not minimal in the number of digital adders and unit-delays, the additional adders and the additional unit delay lead to certain advantages when the concept of the generalized delay unit is used for the realization of the network [34]. Moreover, the bilinear-LDI digital filters lend themselves to fast two-cycle parallel digital signal processing speeds, and they exhibit exceptionally low passband sensitivity to their multiplier coefficient values, resulting in small coefficient wordlengths.

As discussed in Section 5, the analog reference filter *Ha*(*s*) is decomposed into two allpass analog networks *G*0(*s*) and *G*1(*s*). The digital allpass networks *G*0(*z*) and *G*1(*z*) are obtained from *G*0(*s*) and *G*1(*s*) using the bilinear-LDI design approach.

It should be pointed out that *G*0(*s*) is an odd-ordered allpass function. Therefore, it has a pole on the real axis in the *s* domain. On the other hand, *G*1(*s*) is an even-ordered allpass function. It is well known that an allpass transfer function can be written in the general form [34]:

$$G(s) = \frac{P(-s)}{P(s)}\tag{21}$$

where *P*(*s*) is a Hurwitz polynomial of order, say, *n*˜ . Moreover, *P*(*s*) can be expressed as:

$$P(s) = \text{Ev}P(s) + \text{Od}P(s) \tag{22}$$

where Ev*P*(*s*) denotes the even and Od*P*(*s*) denotes the odd part of *P*(*s*).

**Figure 9.** Signal Flow Graph of *G*(*s*)

By simple manipulation of Eqns. (21) and (22) one can get

$$G(\mathbf{s}) = \tilde{\mathcal{K}} \frac{1 - Z(\mathbf{s})}{1 + Z(\mathbf{s})} \tag{23}$$


**Figure 10.** Voltage Divider Circuit for *g*(*s*)

**Figure 11.** Realization of Impedance *Z*(*s*)


Here, *K*˜ = 1 or −1, and *Z*(*s*) is a realizable reactive impedance given by

$$Z(s) = \begin{cases} \frac{\text{Od}P(s)}{\text{Ev}P(s)} & \text{for even } \tilde{n} \\\\ \frac{\text{Ev}P(s)}{\text{Od}P(s)} & \text{for odd } \tilde{n} \end{cases} \tag{24}$$

where *n*˜ is the order of *G*(*s*) (odd when realizing *G*0(*s*) and even when realizing *G*1(*s*)). The impedance *Z*(*s*) has a zero at *s* = 0 for even *n*˜ and a pole at *s* = 0 for odd *n*˜, while having a zero at *s* = ∞ both for even *n*˜ and for odd *n*˜.

The bilinear-LDI digital realization of *G*(*s*) is achieved by using the following steps:

• The transfer function *G*(*s*) is decomposed in the form

$$G(s) = \tilde{K}[1 - 2g(s)]\tag{25}$$

where

$$\mathbf{g}(\mathbf{s}) = \frac{Z(\mathbf{s})}{1 + Z(\mathbf{s})} \tag{26}$$

Here, *G*(*s*) can be realized as the transfer function of the signal-flow graph in Fig. 9. Furthermore, *g*(*s*) represents a lowpass or highpass analog filter that can be realized as the transfer function of the voltage divider network in Fig. 10.
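The decomposition in Eqns. (23)-(26) can be confirmed numerically for an odd-order case: with *Z*(*s*) = Ev*P*(*s*)/Od*P*(*s*) and *K*˜ = −1, *K*˜[1 − 2*g*(*s*)] reproduces *P*(−*s*)/*P*(*s*). The Hurwitz polynomial and evaluation point below are arbitrary examples:

```python
import numpy as np

# Numerical check of Eqns. (23)-(26) for an odd-order allpass function:
# with Z(s) = EvP(s)/OdP(s) and K = -1, G(s) = K*(1 - 2*g(s)) equals
# P(-s)/P(s). P(s) below is an arbitrary Hurwitz example.
p = np.array([1.0, 2.0, 2.0, 1.0])      # P(s) = s^3 + 2s^2 + 2s + 1

def poly(coeffs, s):
    """Evaluate a polynomial given highest-order-first coefficients."""
    return sum(c * s ** (len(coeffs) - 1 - k) for k, c in enumerate(coeffs))

s = 0.3 + 0.9j                          # arbitrary evaluation point
P, Pm = poly(p, s), poly(p, -s)
Ev, Od = (P + Pm) / 2, (P - Pm) / 2     # even and odd parts of P(s)
Z = Ev / Od                             # Eqn. (24), odd-order case
g = Z / (1 + Z)                         # Eqn. (26)
G = -1 * (1 - 2 * g)                    # Eqns. (23)/(25) with K = -1
assert np.isclose(G, Pm / P)
```

The same identity holds with *Z*(*s*) = Od*P*(*s*)/Ev*P*(*s*) and *K*˜ = 1 in the even-order case.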


Finally, *Z*(*s*) represents a realizable reactance (consisting of capacitors and inductors only) and can be decomposed into its Foster II canonical form, as in Fig. 11, in accordance with

$$Z(\mathbf{s}) = \frac{1}{Y(\mathbf{s})} \tag{27}$$

$$Y(s) = s\mathcal{C}\_1 + \frac{1}{sL\_1} + \sum\_{i=2}^{m} \frac{s\mathcal{C}\_i}{s^2 \mathcal{C}\_i L\_i + 1} \tag{28}$$

where *m* = *n*˜/2 for even *n*˜ and *m* = (*n*˜ + 1)/2 for odd *n*˜, and where *Ci* represent capacitances and *Li* represent inductances (for *i* = 1, 2, . . . , *m*), and inductor *L*<sup>1</sup> is only present for even *n*˜.
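The Foster II admittance of Eqn. (28) is straightforward to evaluate; since the network is lossless LC, the resulting impedance is purely imaginary on the imaginary axis. A sketch (function name and element values are mine, chosen only for illustration):

```python
# Illustrative evaluation of the Foster II admittance of Eqn. (28). On the
# imaginary axis s = jw, a lossless LC reactance has a purely imaginary
# impedance, which the assertion checks. Element values are arbitrary.
def foster_admittance(s, C, L):
    # C[0], L[0] are C1 and L1; the remaining entries are the parallel
    # branches of series-resonant LC sections in the summation of Eqn. (28).
    y = s * C[0] + 1.0 / (s * L[0])
    for Ci, Li in zip(C[1:], L[1:]):
        y += s * Ci / (s**2 * Ci * Li + 1.0)
    return y

s = 1j * 0.7                                # arbitrary point s = jw
Z = 1.0 / foster_admittance(s, C=[1.0, 0.5, 0.25], L=[2.0, 1.0, 0.8])
assert abs(Z.real) < 1e-12                  # pure reactance: Re Z(jw) = 0
```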

• The impedance *Z*(*s*) in Fig. 11 is substituted into Fig. 10 and the precompensation is applied to the resulting network. This amounts to a modification of circuit elements in accordance with:

$$V\_{A\_1}^{'}(\mathbf{s}) = \frac{V\_{A\_1}(\mathbf{s})}{1 - \mathbf{s}T/2} \tag{29}$$

The resistance *r*<sup>0</sup> in Fig. 10 is modified to:

$$r\_0' = z^{\frac{1}{2}} r\_0 \tag{30}$$

and

$$L\_1^{'} = L\_1 \tag{31}$$


$$\mathbf{C}\_{1}^{'} = \mathbf{C}\_{1} + \frac{T}{2} + \frac{T^{2}}{4L\_{1}} + \sum\_{i=2}^{m} \frac{\mathbf{C}\_{i} \frac{T^{2}}{4L\_{i}}}{\mathbf{C}\_{i} + \frac{T^{2}}{4L\_{i}}} \tag{32}$$

$$L\_i^{'} = L\_i \left[ \frac{\mathbf{C}\_i + \frac{T^2}{4L\_i}}{\mathbf{C}\_i} \right]^2 \tag{33}$$

$$\mathbf{C}'\_{i} = \frac{\mathbf{C}\_{i}^{2}}{\mathbf{C}\_{i} + \frac{T^{2}}{4L\_{i}}} \tag{34}$$

with *r*<sup>0</sup> = 1Ω and for *i* = 2, 3, ..., *m*.

• Since the voltage/current signal-flow graph of the precompensated network [34] consists of analog integrators only and has no analog differentiators, it can be used for the bilinear-LDI realization method. Therefore, the analog integrators in the signal-flow graph of the precompensated network are replaced by LDI digital integrators, and, by impedance-scaling, the resulting network is scaled by *z*<sup>−1/2</sup> to eliminate any half-delay elements. The resulting digital network is displayed in Fig. 12. The multiplier coefficients in Fig. 12 are as follows:

$$m\_{L\_i} = \frac{T}{L\_i'} \tag{35}$$

$$m\_{\mathbb{C}\_i} = \frac{T}{\mathbb{C}\_i'} \tag{36}$$

for *i* = 1, 2, ..., *m*.
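The precompensation of Eqns. (31)-(34) followed by the coefficient computation of Eqns. (35)-(36) can be sketched as below. Function names, element values, and the sampling period are arbitrary examples, not a designed filter:

```python
# Sketch of the precompensation of Eqns. (31)-(34) followed by the
# multiplier-coefficient computation of Eqns. (35)-(36).
def precompensate(C, L, T):
    m = len(C)
    C1p = C[0] + T / 2 + T**2 / (4 * L[0]) + sum(
        (C[i] * T**2 / (4 * L[i])) / (C[i] + T**2 / (4 * L[i]))
        for i in range(1, m))                                # Eqn. (32)
    Lp = [L[0]] + [L[i] * ((C[i] + T**2 / (4 * L[i])) / C[i]) ** 2
                   for i in range(1, m)]                     # Eqns. (31), (33)
    Cp = [C1p] + [C[i] ** 2 / (C[i] + T**2 / (4 * L[i]))
                  for i in range(1, m)]                      # Eqn. (34)
    return Cp, Lp

T = 1.0
Cp, Lp = precompensate([1.0, 0.5], [2.0, 1.0], T)  # arbitrary elements
mL = [T / x for x in Lp]     # Eqn. (35)
mC = [T / x for x in Cp]     # Eqn. (36)
assert all(v > 0 for v in mL + mC)   # positive elements give positive coeffs
```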

**Figure 12.** Realization of the Bilinear-LDI Digital Allpass Network *G*(*z*) [34]

### **7. Constraints for guaranteed BIBO stability**


In order for the FRM digital filter consisting of CSD multiplier coefficients *m*ˆ *FRM* to be BIBO stable, it is both necessary and sufficient for the bilinear-LDI IIR interpolation digital subfilters *Ha*(*z*) and *Hb*(*z*) to be BIBO stable. Likewise, in order for the interpolation digital subfilters *Ha*(*z*) and *Hb*(*z*) to be BIBO stable, it is both necessary and sufficient for the bilinear-LDI allpass digital networks *G*0(*z*) and *G*1(*z*) to be BIBO stable. In this way, it is required that the bilinear-LDI digital allpass networks *G*0(*z*) and *G*1(*z*) remain BIBO stable throughout the course of the PSO algorithm.

In the course of the PSO algorithm, the infinite-precision multiplier coefficients *mLi* and *mCi* can only take quantized values *m*ˆ *Li* and *m*ˆ *Ci* that belong to *CSD*(*L*, *l*, *f*). In order for the bilinear-LDI digital allpass networks *G*0(*z*) and *G*1(*z*) to remain BIBO stable, it is required that the values of the corresponding quantized reactive elements *L*ˆ *<sup>i</sup>* and *C*ˆ *i* remain positive [45] in the course of optimization. This is due to the properties of the bilinear frequency transformation from the analog to the digital domain. In order to find the conditions for BIBO stability, and in accordance with Eqns. (35) and (36), one has:

$$
\hat{L}'\_i = \frac{T}{\hat{m}\_{L\_i}} \tag{37}
$$

$$
\hat{\mathbf{C}}'\_i = \frac{T}{\hat{m}\_{\mathbf{C}\_i}} \tag{38}
$$

Moreover, in accordance with Eqns. (31-34), one has:

$$
\hat{L}\_1' = \hat{L}\_1 \tag{39}
$$

$$
\hat{C}\_1' = \hat{C}\_1 + \frac{T}{2} + \frac{T^2}{4\hat{L}\_1} + \sum\_{i=2}^m \frac{\hat{C}\_i \frac{T^2}{4\hat{L}\_i}}{\hat{C}\_i + \frac{T^2}{4\hat{L}\_i}} \tag{40}
$$

$$
\hat{L}\_i' = \hat{L}\_i \left[ \frac{\hat{\mathcal{C}}\_i + \frac{T^2}{4\hat{L}\_i}}{\hat{\mathcal{C}}\_i} \right]^2 \tag{41}
$$

$$
\hat{\mathcal{C}}'\_i = \frac{\hat{\mathcal{C}}^2\_i}{\hat{\mathcal{C}}\_i + \frac{T^2}{4\hat{L}\_i}} \tag{42}
$$

where *L*ˆ <sup>1</sup> = ∞ for odd-ordered allpass network *G*0(*z*).

By substituting Eqns. (37) and (38) into Eqns. (39-42), and by solving the resulting equations for the reactive elements *L*ˆ *<sup>i</sup>* and *C*ˆ *i*, one can obtain

$$\hat{L}\_1 = \frac{T}{\hat{m}\_{L\_1}}\tag{43}$$

$$\hat{C}\_{1} = \frac{T\left\{\frac{4}{\hat{m}\_{C\_{1}}} - \hat{m}\_{L\_{1}} - 4\left(\sum\_{i=2}^{m} \frac{1}{\frac{4}{\hat{m}\_{L\_{i}}} - \hat{m}\_{C\_{i}}}\right) - 2\right\}}{4} \tag{44}$$

$$
\hat{L}\_i = \frac{T(\hat{m}\_{L\_i}\hat{m}\_{C\_i} - 4)^2}{16\hat{m}\_{L\_i}}\tag{45}
$$

Particle Swarm Optimization of Highly Selective Digital Filters over the Finite-Precision Multiplier Coefficient Space
http://dx.doi.org/10.5772/52196



$$
\hat{C}\_i = \frac{-4T}{\hat{m}\_{C\_i}(\hat{m}\_{L\_i}\hat{m}\_{C\_i} - 4)}\tag{46}
$$

From Eqns. (43)-(46), $\hat{L}\_i > 0$ and $\hat{C}\_i > 0$ hold provided that

$$
\hat{m}\_{L\_1} > 0 \tag{47}
$$

$$
\hat{m}\_{L\_i} > 0 \tag{48}
$$

$$
\hat{m}\_{C\_i} < \frac{4}{\hat{m}\_{L\_i}} \tag{49}
$$

$$\hat{m}\_{C\_1} < \frac{4}{\hat{m}\_{L\_1} + 4\left(\sum\_{i=2}^{m} \frac{1}{\frac{4}{\hat{m}\_{L\_i}} - \hat{m}\_{C\_i}}\right) + 2}\tag{50}$$

Then, in order to make the CSD FRM digital filter BIBO stable, it is necessary and sufficient to choose the values of the multiplier coefficients *m*ˆ *FRM* ∈ *CSD*(*L*, *l*, *f*) such that the inequality constraints (47-50) are satisfied. The equations and corresponding condition required for BIBO stability are summarized in Table 2.

In order to make the CSD lowpass digital IIR FRM filter BIBO stable, it is necessary and sufficient to choose the values of the multiplier coefficients *m*ˆ *Li* , *m*ˆ *Ci* ∈ *CSD*(*L*, *l*, *f*) such that the inequality constraints of Table 2 are satisfied.

It should be pointed out that constraint (49) is most stringent when $\hat{m}\_{L\_i}$ is at its largest possible value. Similarly, constraint (50) is most stringent when $\hat{m}\_{L\_1}$, $\hat{m}\_{L\_i}$ and $\hat{m}\_{C\_i}$ are all at their largest possible values (while $\hat{m}\_{L\_i}$ and $\hat{m}\_{C\_i}$ still adhere to the constraint $\hat{m}\_{C\_i} < 4(\hat{m}\_{L\_i})^{-1}$).
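As a concrete illustration, the stability test of Eqns. (47)-(50) can be sketched as a small routine. The list-based coefficient layout (index 0 playing the role of $\hat{m}\_{L\_1}$ and $\hat{m}\_{C\_1}$) is an assumption of this sketch, not part of the original formulation.

```python
def is_bibo_stable(m_L, m_C):
    """Check the BIBO stability constraints (47)-(50) for candidate
    multiplier coefficients; m_L[0] and m_C[0] play the roles of
    m_L1 and m_C1, the remaining entries those of m_Li and m_Ci."""
    # (47) and (48): m_L1 and every m_Li must be positive.
    if any(mL <= 0 for mL in m_L):
        return False
    # (49): m_Ci < 4 / m_Li for i = 2, ..., m.
    if any(mC >= 4.0 / mL for mL, mC in zip(m_L[1:], m_C[1:])):
        return False
    # (50): m_C1 < 4 / (m_L1 + 4 * sum(1 / (4/m_Li - m_Ci)) + 2).
    s = sum(1.0 / (4.0 / mL - mC) for mL, mC in zip(m_L[1:], m_C[1:]))
    return m_C[0] < 4.0 / (m_L[0] + 4.0 * s + 2.0)
```

In a PSO run, such a predicate would simply reject candidate particles that violate the constraints.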

#### **8. Proposed PSO of FRM IIR digital filters**

The proposed particle swarm optimization of BIBO stable FRM IIR digital filters is carried out over the CSD multiplier coefficient space $CSD(L\_{0\,\mathrm{or}\,1}, l\_{0\,\mathrm{or}\,1}, f\_{0\,\mathrm{or}\,1})$, where $L\_{0\,\mathrm{or}\,1}$ represents the multiplier coefficient wordlength, where $l\_{0\,\mathrm{or}\,1}$ represents the maximum number of non-zero digits, and where $f\_{0\,\mathrm{or}\,1}$ represents the number of fractional-part digits (for the FIR or IIR digital subfilters, respectively).



| Element | Equation | Inequality Constraints |
|---|---|---|
| $\hat{C}\_1$ | $\hat{C}\_1 = \frac{T}{4}\left\{\frac{4}{\hat{m}\_{C\_1}} - \hat{m}\_{L\_1} - 4\left(\sum\_{i=2}^{m}\frac{1}{\frac{4}{\hat{m}\_{L\_i}} - \hat{m}\_{C\_i}}\right) - 2\right\}$ (44) | $\hat{m}\_{L\_1} > 0$, $\quad\hat{m}\_{C\_1} < \frac{4}{\hat{m}\_{L\_1} + 4\left(\sum\_{i=2}^{m}\frac{1}{\frac{4}{\hat{m}\_{L\_i}} - \hat{m}\_{C\_i}}\right) + 2}$ |
| $\hat{L}\_i$ | $\hat{L}\_i = \frac{T(\hat{m}\_{L\_i}\hat{m}\_{C\_i} - 4)^2}{16\hat{m}\_{L\_i}}$ (45) | $\hat{m}\_{L\_i} > 0$ |
| $\hat{C}\_i$ | $\hat{C}\_i = \frac{-4T}{\hat{m}\_{C\_i}(\hat{m}\_{L\_i}\hat{m}\_{C\_i} - 4)}$ (46) | $\hat{m}\_{C\_i} < \frac{4}{\hat{m}\_{L\_i}}$ |

**Table 2.** Relations for Elements of Back-Transformed Reactance

The starting point of any stochastic algorithm plays an important role in the convergence behavior of the optimization algorithm [46]. Therefore, it is important to generate the initial swarm in proper positions in the search space rather than complete random generation of the initial population. In order to achieve this, the following technique is employed:

## **8.1. Initiation of PSO**


To start the PSO algorithm from a good position in the search space the infinite precision multiplier coefficient values of the seed particle are generated by using classical techniques as discussed in previous sections. These infinite precision multiplier coefficient values are turned into their finite precision counterparts by simply rounding them to their closest CSD values. This seed particle is used as the center of the swarm and a cloud of particles are generated randomly around the seed particle. It should be noted that the distance of the randomly generated particles should not be far from the seed particle. In this way, the initial swarm contains particles which have high chances of being near the optimal solution. The multiplier coefficient values of the swarm are taken from a set of CSD LUTs which are constructed as follows:
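A minimal sketch of this initialization, assuming the indirect index representation used later in this section (particle positions are integer indices into per-coefficient LUTs) and a hypothetical `spread` parameter bounding how far cloud particles may fall from the seed:

```python
import random

def round_to_lut(coeffs, lut):
    """Map infinite-precision coefficients to the indices of their
    closest CSD values in the (sorted) LUT."""
    return [min(range(len(lut)), key=lambda i: abs(lut[i] - c))
            for c in coeffs]

def initial_swarm(seed_indices, K, spread, lut_sizes):
    """Build a swarm of K index vectors: the seed particle plus a
    random cloud of particles close to it (within +/- spread)."""
    swarm = [list(seed_indices)]
    for _ in range(K - 1):
        swarm.append([
            min(max(idx + random.randint(-spread, spread), 0), n - 1)
            for idx, n in zip(seed_indices, lut_sizes)
        ])
    return swarm
```

Keeping `spread` small realizes the requirement that the random particles stay near the seed, so most of the initial swarm starts close to the classically designed solution.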

## **8.2. FRM IIR digital filter template LUTs**

It is necessary and sufficient to choose the values of the multiplier coefficients, such that the inequality constraints (47-50) are satisfied. In order to achieve this, the LUTs are constructed as follows:


• One LUT is constructed for all multiplier coefficient values $\hat{m}\_{FIR} \in CSD(L\_0, l\_0, f\_0)$ for the masking digital subfilters $F\_0(z)$ and $F\_1(z)$. The values of $L\_0$, $l\_0$ and $f\_0$ are determined empirically based on the amplitude frequency-response of the masking digital subfilters $F\_0(z)$ and $F\_1(z)$.

• A LUT is constructed for all multiplier coefficient values $\hat{m}\_{IIR} \in CSD(L\_1, l\_1, f\_1)$ for the digital allpass networks $G\_0(z)$ and $G\_1(z)$. Once again, the values of $L\_1$, $l\_1$ and $f\_1$ are determined empirically. Also, it is expedient to assume that $\hat{m}\_{IIR}$ have only positive values.

• The above CSD LUT is used to form one size-reduced LUT per multiplier coefficient for the digital allpass networks $G\_0(z)$ and $G\_1(z)$, where each size-reduced LUT initially includes CSD values bounded from below by the smallest representable value belonging to $CSD(L\_1, l\_1, f\_1)$, and

from above by the corresponding value of the finite-wordlength coefficients for the seed particle. The size-reduced LUTs are augmented before PSO process commences. The purpose of this augmentation is to ensure that the exploration space include as many of those CSD multiplier coefficients *m*ˆ *<sup>L</sup>*<sup>1</sup> , *m*ˆ *<sup>C</sup>*<sup>1</sup> , *m*ˆ *Li* and *m*ˆ *Ci* which still satisfy the BIBO stability constraints (47-50).
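The enumeration of one such CSD LUT can be sketched as follows; the digit-weight convention (most significant digit first, with $f$ of the $L$ digits forming the fractional part) is an assumption of this sketch:

```python
from itertools import product

def csd_lut(L, l, f):
    """Enumerate the values of CSD(L, l, f): L signed digits from
    {-1, 0, +1} with no two adjacent non-zero digits, at most l
    non-zero digits, and f fractional digit positions."""
    values = set()
    for digits in product((-1, 0, 1), repeat=L):
        if sum(d != 0 for d in digits) > l:
            continue
        if any(digits[k] and digits[k + 1] for k in range(L - 1)):
            continue  # canonical signed-digit: no adjacent non-zeros
        # digit k carries weight 2**(L - 1 - f - k)
        values.add(sum(d * 2.0 ** (L - 1 - f - k)
                       for k, d in enumerate(digits)))
    return sorted(values)
```

The exhaustive enumeration is affordable here because practical wordlengths $L$ are short; size-reduced per-coefficient LUTs would then be slices of this sorted list.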

The above constructed LUTs are used as template LUTs. There are two problems concerning the PSO of FRM IIR digital filters over the CSD multiplier coefficient space. To overcome these problems, the template LUTs must be further processed. These two problems and the way to solve them are discussed in the following.

#### **8.3. PSO indirect search method**

In PSO, the required new particle position is obtained from the previous position of the particle through the addition of a random (normalized) velocity value. However, by directly applying the conventional PSO to the above optimization over the CSD multiplier coefficients, one may obtain new particle positions whose coordinate values are no longer in *CSD*(*L*<sup>0</sup> or 1, *l*<sup>0</sup> or 1, *f*<sup>0</sup> or <sup>1</sup>). In order to overcome this problem, the optimization search is carried out indirectly via the indices to the LUT CSD values (as opposed to LUT CSD values themselves). In this way, the CSD coordinate values for each particle position are obtained by integer indices to the CSD LUTs. The key point in the indirect search rests with ensuring that the index set is closed, i.e. by ensuring that each index points to a valid CSD value in the LUT, and that the resulting particle in the course of PSO adheres to the prespecified CSD number format.

If the velocity values are replaced by their closest integer values, the update equations become modified to

$$
\hat{v}\_{kj}^{i} = \left[w\hat{v}\_{kj}^{i-1} + c\_1 r\_1 (\hat{x}\_{best\_{kj}}^{i-1} - \hat{x}\_{kj}^{i-1}) + c\_2 r\_2 (\hat{g}\_{best\_j}^{i-1} - \hat{x}\_{kj}^{i-1})\right]^1 \tag{51}
$$

$$
\text{if} \quad \hat{v}\_{kj}^{i} < \hat{v}\_{min} \quad ; \quad \hat{v}\_{kj}^{i} = \hat{v}\_{min}
$$

$$
\text{if} \quad \hat{v}\_{kj}^{i} > \hat{v}\_{max} \quad ; \quad \hat{v}\_{kj}^{i} = \hat{v}\_{max}
$$

$$
\hat{x}\_{kj}^{i} = \hat{x}\_{kj}^{i-1} + \hat{v}\_{kj}^{i} \tag{52}
$$

Here, *x*ˆ*kj*, *v*ˆ*kj*, *x*ˆ*bestkj*, *g*ˆ*bestj* , *v*ˆ*min* and *v*ˆ*max* are all integer values where *v*ˆ*min* < 0 and *v*ˆ*max* > 0. In addition, *w* is limited in the interval [0, 0.5) (as discussed shortly).
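The integer-valued update of Eqns. (51) and (52) can be sketched per coordinate as follows; the function packaging and parameter names beyond those in the text are assumptions of this sketch:

```python
import random

def pso_index_update(x, v, x_best, g_best, w, c1, c2, v_min, v_max):
    """One PSO iteration over integer LUT indices, per Eqns. (51)-(52):
    velocities are rounded to the closest integer and clamped to
    [v_min, v_max], so the new positions remain integer indices."""
    new_v, new_x = [], []
    for j in range(len(x)):
        r1, r2 = random.random(), random.random()
        vj = round(w * v[j]
                   + c1 * r1 * (x_best[j] - x[j])
                   + c2 * r2 * (g_best[j] - x[j]))      # Eqn. (51)
        vj = max(v_min, min(v_max, vj))                 # velocity clamping
        new_v.append(vj)
        new_x.append(x[j] + vj)                         # Eqn. (52)
    return new_x, new_v
```

Because every quantity stays an integer, each new coordinate is again a valid LUT index, which is exactly the closure property the indirect search relies on.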

#### **8.4. Barren layers**

Due to their finite length, the template LUTs inevitably lead to a bounded optimization search space. In order to ensure that the particles do not cross over to the outside of the search space in the course of PSO, the search space is constructed as a combination of two regions, namely the interior and barren layers. The barren layer is constructed to yield relatively low fitness values, and is represented as header and footer in the template LUT. There are two problems concerning the construction of the barren layers:

<sup>1</sup> [*R*] denotes rounding *R* to its closest integer, where *R* is assumed to be a real value.


#### *8.4.1. Barren layer entries*


The first problem in the construction of barren layers concerns how to make the fitness values in the barren layer relatively low. This problem can be resolved by filling the header part by unrealistically large, and the footer part by unrealistically small CSD multiplier coefficient values.

#### *8.4.2. Barren layer width*

The second problem, on the other hand, concerns how to determine the width of the barren layer such that the particles do not cross over to the outside of the search space even under the worst-case scenario. Both problems relate to the number of entries and the CSD values of the entries in the header and footer parts of the template LUTs. To analyze the worst case, consider the *j*-th variable of the *k*-th particle at the boundary of one of the template LUTs in iteration *i* − 1. The worst-case scenario occurs when $\hat{x}^{i-1}\_{kj}$ moves toward the barren layer with the peak permissible velocity ($\hat{v}\_{max}$ for the header, and $\hat{v}\_{min}$ for the footer). If in the *i*-th iteration $\hat{x}^{i}\_{kj}$ is in the footer:

$$\hat{x}\_{best\_{kj}}^{i} > \hat{x}\_{kj}^{i} \tag{53}$$

$$
\hat{g}\_{best\_j}^{i} > \hat{x}\_{kj}^{i} \tag{54}
$$

and if it is in the header:

$$
\hat{x}\_{best\_{kj}}^{i} < \hat{x}\_{kj}^{i} \tag{55}
$$

$$\hat{g}\_{best\_j}^{i} < \hat{x}\_{kj}^{i} \tag{56}$$

Eqns. (53)-(56) show that the velocity of the particle in iteration *i* + 1 tends to move the particle in a direction opposite to the barren regions. Here, the worst case happens when $r\_1 = r\_2 = 0$. In this way, the number of entries $L\_f$ in the footer part and the number of entries $L\_h$ in the header part are determined in accordance with

$$\begin{split} L\_f &= |\hat{v}\_{min}| + [w|\hat{v}\_{min}|] + [w[w|\hat{v}\_{min}|]] + \dots \\ &\le |\hat{v}\_{min}| + \frac{|\hat{v}\_{min}|}{2} + \frac{|\hat{v}\_{min}|}{4} + \dots \\ &= 2|\hat{v}\_{min}| \end{split} \tag{57}$$

$$\begin{split} L\_h &= \hat{v}\_{max} + [w\hat{v}\_{max}] + [w[w\hat{v}\_{max}]] + \dots \\ &\le \hat{v}\_{max} + \frac{\hat{v}\_{max}}{2} + \frac{\hat{v}\_{max}}{4} + \dots \\ &= 2\hat{v}\_{max} \end{split} \tag{58}$$

Let us recall that since 0 ≤ *w* < 0.5,

$$\text{if } \quad v:\text{positive integer} \quad \Rightarrow \quad [wv] \le \frac{v}{2} \tag{59}$$


In addition, after some iterations $\hat{v}^{i+1}\_{kj} = 0$. Otherwise, if $w \ge 0.5$, $\hat{v}^{i+1}\_{kj}$ can never become zero, and the width of the barren layer would be infinite.
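The bounds in Eqns. (57)-(58) can be verified numerically: iterate the worst-case travel $|v| + [w|v|] + [w[w|v|]] + \dots$ and confirm it never exceeds the widths $2|\hat{v}\_{min}|$ and $2\hat{v}\_{max}$. The sketch below is a verification aid, not part of the filter design flow:

```python
def barren_layer_widths(v_min, v_max, w):
    """Return the footer/header widths L_f = 2|v_min|, L_h = 2 v_max of
    Eqns. (57)-(58), after checking that the worst-case travel distance
    (velocity repeatedly scaled by w and rounded) stays within them."""
    assert v_min < 0 < v_max and 0 <= w < 0.5

    def worst_case_travel(speed):
        total = 0
        while speed > 0:
            total += speed
            speed = round(w * speed)  # [w v] <= v/2 for 0 <= w < 0.5
        return total

    assert worst_case_travel(abs(v_min)) <= 2 * abs(v_min)
    assert worst_case_travel(v_max) <= 2 * v_max
    return 2 * abs(v_min), 2 * v_max
```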

The augmented LUTs remain fixed in the course of the PSO, automatically restricting particle movement to the bounded search space. After offsetting the index values inside each particle by the length of the footer barren region $L\_f$, the PSO algorithm is ready to start the optimization of FRM digital filters.

#### **9. Design methodology**

The design methodology for the proposed PSO of BIBO stable bilinear-LDI based FRM IIR digital filters over the CSD multiplier coefficient space can be summarized as follows:

1. *Designing the interpolation digital subfilter*: The first step in determining the interpolation subfilter specifications is to fix the interpolation factor *M* from a pre-specified range. This is done in a way that the order of the FIR masking filters is kept minimal. Using the passband edge frequency $\omega\_p$ and stopband edge frequency $\omega\_a$ and the expressions for boundary frequencies given in Table 1, one can determine the filter case and calculate the approximate passband edge $\tilde{\theta}$ and stopband edge $\tilde{\phi}$ of the digital interpolation lowpass subfilter $H(e^{j\omega})$, for every value of the user-specified range of interpolation factors *M*. The order of the FIR masking filters depends on the minimum distance between consecutive image replicas of either the interpolated subfilter $H\_a(e^{jM\omega})$ or its complement $H\_b(e^{jM\omega})$. Then, the displacement $\lambda\_M$ and the distance $\tilde{D}\_M$ for each interpolation factor *M* are given as:

$$\lambda\_M = \max\left[ |(\frac{\pi}{2} - \tilde{\theta})| , |(\frac{\pi}{2} - \tilde{\phi})| \right] \tag{60}$$


$$
\tilde{D}\_M = \frac{\pi}{M} - \frac{2\lambda\_M}{M} \tag{61}
$$
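The selection of *M* implied by Eqns. (60)-(61) can be sketched as below; `band_edges` is a hypothetical helper standing in for the Table 1 expressions, mapping each candidate *M* to the approximate edges $(\tilde{\theta}, \tilde{\phi})$:

```python
from math import pi

def choose_interpolation_factor(band_edges, M_range):
    """Pick the M in M_range that maximizes the replica distance
    D_M = pi/M - 2*lambda_M/M of Eqns. (60)-(61)."""
    def replica_distance(M):
        theta, phi = band_edges(M)
        lam = max(abs(pi / 2 - theta), abs(pi / 2 - phi))  # Eqn. (60)
        return (pi - 2 * lam) / M                          # Eqn. (61)
    return max(M_range, key=replica_distance)
```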

To minimize the length of the FIR masking filters, the value of *M* that results in the largest value of $\tilde{D}\_M$ is chosen. This determines the optimal interpolation factor *M* as well as the approximate passband edge $\tilde{\theta}$ and stopband edge $\tilde{\phi}$ of the digital interpolation subfilter $H(e^{j\omega})$. EMQF filters have the property of equal squared-magnitude ripple size in the passband and stopband. Therefore, of the two ripple specifications, whichever gives the smallest tolerance in the squared magnitude response determines both the passband ripple $R\_p$ and the stopband attenuation $R\_a$ of the interpolation digital subfilter $H\_a(e^{j\omega})$. The interpolation digital subfilter order $N\_{IIR}$ is then determined using $R\_p$, $R\_a$, $\tilde{\theta}$ and $\tilde{\phi}$. $N\_{IIR}$ must be rounded to the nearest larger odd integer so that it can be implemented by a parallel combination of two allpass networks. With the order $N\_{IIR}$, and the passband and stopband ripples $R\_p$ and $R\_a$ fixed, the ratio of the analog passband edge $\theta\_A$ and stopband edge $\phi\_A$ is a constant *k* given by [47]

$$D = \frac{10^{0.1R\_a} - 1}{10^{0.1R\_p} - 1}$$

$$q = 10^{\frac{-\log(16D)}{N\_{IIR}}}$$


$$\begin{aligned} q &= q\_0 + 2q\_0^5 + 15q\_0^9 + 150q\_0^{13} \\\\ k\_p &= \left[\frac{1 - 2q\_0}{1 + 2q\_0}\right]^2 \\\\ k &= \sqrt{1 - k\_p^2} \end{aligned}$$
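Numerically, the chain above runs from the ripple specifications to the analog edge ratio *k*. In the sketch below, the series $q = q\_0 + 2q\_0^5 + 15q\_0^9 + 150q\_0^{13}$ is inverted for $q\_0$ by fixed-point iteration; that numerical method is an assumption of this sketch, as the text does not prescribe one.

```python
from math import log10, sqrt

def analog_edge_ratio(Rp, Ra, N_iir):
    """Evaluate D, q, q0, k_p and k for given passband ripple Rp (dB),
    stopband attenuation Ra (dB) and (odd) subfilter order N_IIR."""
    D = (10 ** (0.1 * Ra) - 1) / (10 ** (0.1 * Rp) - 1)
    q = 10 ** (-log10(16 * D) / N_iir)
    q0 = q
    for _ in range(50):  # invert q = q0 + 2 q0^5 + 15 q0^9 + 150 q0^13
        q0 = q - 2 * q0 ** 5 - 15 * q0 ** 9 - 150 * q0 ** 13
    kp = ((1 - 2 * q0) / (1 + 2 * q0)) ** 2
    return sqrt(1 - kp ** 2)
```

The iteration converges quickly because $q\_0 < 1$ makes the higher-order series terms tiny.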

In order to satisfy the passband edge specification, the digital passband edge $\omega\_p = \tilde{\theta}$ for Case I filters. The digital stopband edge $\omega\_a$ is then determined using the analog ratio *k*. (Here, the frequency warping from the digital to the analog domain, and vice versa, given by Eqn. (18) needs to be taken into account.) Similarly, $\omega\_a = \tilde{\phi}$ for Case II filters, and $\omega\_p$ can be determined by using the ratio *k*. Also, using the given ripple specifications along with the boundary frequencies described in Table 1, one can determine the transfer functions of the FIR masking filters $F\_0(e^{j\omega})$ and $F\_1(e^{j\omega})$.

2. *Generation of seed FRM digital filter particle*: The seed FRM digital filter particle is formed as follows:
	- A particle with *B*1 coordinates is formed in which each coordinate serves as an index of the corresponding CSD LUT for each multiplier coefficient constituent in the interpolation digital subfilters. For FRM IIR digital filters, the multiplier coefficients correspond to the bilinear-LDI allpass digital networks *G*0(*z*) and *G*1(*z*).
	- A particle with *B*2 coordinates is formed in which each coordinate serves as an index of the corresponding CSD LUT for each multiplier coefficient in the FIR masking digital subfilters *F*0(*z*) and *F*1(*z*).
3. *Generation of Initial Swarm*: An initial swarm of *K* particles is formed by generating a random cloud around the seed particle as discussed in section 8.1.
4. *Fitness Evaluation*: The fitness function for CSD FRM IIR digital filters is defined in accordance with

$$fitness\_{magnitude} = -20\log\left[\max(\varepsilon\_p, \varepsilon\_a)\right] \tag{62}$$

$$fitness\_{group-delay} = \varsigma\_p \tag{63}$$

$$fitness = fitness\_{magnitude} - fitness\_{group-delay} \tag{64}$$

where


$$\varepsilon\_p = \max\_{\omega \in \Delta\omega\_p} \left[ W\_p |H(e^{j\omega}) - 1| \right] \tag{65}$$

$$\varepsilon\_a = \max\_{\omega \in \Delta\omega\_a} \left[ W\_a |H(e^{j\omega})| \right] \tag{66}$$

$$\varsigma\_p = \max\_{\omega \in \Delta\omega\_p} \left[ W\_{gd} |\tau(\omega) - \mu\_\tau| \right] \tag{67}$$


with ∆*ω<sup>p</sup>* representing the passband frequency region(s), with ∆*ω<sup>a</sup>* representing the stopband frequency region(s), and with *τ*(*ω*) representing the group-delay frequency response of the FRM IIR digital filter. Here, *Wp*, *Wa*, and *Wgd* represent weighting factors for the passband and stopband magnitude responses, and for the group-delay response, respectively. Moreover, *µτ* represents the average group-delay over the passband region.
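Given sampled responses over the band grids, the fitness of Eqns. (62)-(67) can be sketched as follows; the sampled-grid representation of $\Delta\omega\_p$ and $\Delta\omega\_a$ is this sketch's assumption:

```python
from math import log10

def frm_fitness(H_pass, H_stop, tau_pass, Wp, Wa, Wgd):
    """Fitness of Eqns. (62)-(64): H_pass/H_stop are complex response
    samples over the passband/stopband grids, tau_pass are group-delay
    samples over the passband, and mu_tau is the average delay."""
    eps_p = max(Wp * abs(h - 1) for h in H_pass)            # Eqn. (65)
    eps_a = max(Wa * abs(h) for h in H_stop)                # Eqn. (66)
    mu_tau = sum(tau_pass) / len(tau_pass)
    sigma_p = max(Wgd * abs(t - mu_tau) for t in tau_pass)  # Eqn. (67)
    fitness_mag = -20 * log10(max(eps_p, eps_a))            # Eqn. (62)
    return fitness_mag - sigma_p                            # Eqns. (63)-(64)
```

Larger values are better: the magnitude term rewards small weighted ripple (in dB), and the group-delay deviation term is subtracted as a penalty.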

In [48], a convenient way to represent digital networks in terms of matrices is presented. This technique can be used to find the magnitude and group-delay frequency responses of the digital network in Fig. 12. Let the input to the digital network in Fig. 12 be $x\_D$ and its output be $y\_D$. In addition, let the output of the *i*-th time delay in Fig. 12 be $x\_i$ and the input to the *i*-th time delay be $y\_i$. The transfer function matrix of the network, **T**, can be found as

$$\mathbf{y} = \mathbf{T}\mathbf{x}\tag{68}$$


where **y** = [*yD*, *y*1, *y*2, ..., *y*2*m*+1]*<sup>t</sup>* <sup>2</sup> and **x** = [*xD*, *x*1, *x*2, ..., *x*2*m*+1]*<sup>t</sup>*, and **T** is a (2*m* + 2) × (2*m* + 2) matrix with the entries obtained as in Eqn. (69).

$$\mathbf{T} = \begin{bmatrix} 0 & 1 & -1 & 0 & 0 & 0 & \dots & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & \dots & 0 & 0 \\ m_{C_1} & m_{C_1} & 1 - m_{C_1}\left(1 + \sum_{i=1}^{m} m_{L_i}\right) & -m_{C_1} & m_{C_1} m_{L_2} & -m_{C_1} & \dots & m_{C_1} m_{L_m} & -m_{C_1} \\ 0 & 0 & m_{L_1} & 1 & 0 & 0 & \dots & 0 & 0 \\ 0 & 0 & m_{C_2} m_{L_2} & 0 & 1 - m_{C_2} m_{L_2} & m_{C_2} & \dots & 0 & 0 \\ 0 & 0 & m_{L_2} & 0 & -m_{L_2} & 1 & \dots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & m_{C_m} m_{L_m} & 0 & 0 & 0 & \dots & 1 - m_{C_m} m_{L_m} & m_{C_m} \\ 0 & 0 & m_{L_m} & 0 & 0 & 0 & \dots & -m_{L_m} & 1 \end{bmatrix} \tag{69}$$

Since *xi* = *z*<sup>−1</sup>*yi*, the transfer function *G*(*z*) = *yD*/*xD* can be found as

$$\mathbf{G}(z) = z^{-1}\mathbf{e}[\mathbf{I} - z^{-1}\mathbf{D}]^{-1}\mathbf{c} \tag{70}$$

where **e** is a row vector and **c** is a column vector of length 2*m* + 1, and where **I** is the identity matrix and **D** is a (2*m* + 1) × (2*m* + 1) matrix in accordance with

$$\mathbf{T} = \begin{bmatrix} 0 & \mathbf{e} \\\\ \mathbf{c} \ \mathbf{D} \end{bmatrix} \tag{71}$$

<sup>2</sup> **X***<sup>t</sup>* denotes the transpose of the matrix **X**.
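Eqns. (70)–(71) amount to a state-space-style evaluation of *G*(*ejω*): partition **T** into (**e**, **c**, **D**) and solve a linear system at each frequency. A minimal numerical sketch (the helper name and the toy first-order **T** in the usage note are assumptions for illustration):

```python
import numpy as np

def allpass_response(T, w):
    """Evaluate G(e^{jw}) = e^{-jw} e [I - e^{-jw} D]^{-1} c, with
    (e, c, D) read off the partition of T given in Eqn. (71)."""
    e = T[0, 1:]            # row vector e
    c = T[1:, 0]            # column vector c
    D = T[1:, 1:]           # the (2m+1) x (2m+1) matrix D
    z1 = np.exp(-1j * w)    # z^{-1} evaluated on the unit circle
    I = np.eye(D.shape[0])
    return z1 * (e @ np.linalg.solve(I - z1 * D, c))
```

For the first-order toy matrix **T** = [[0, 1], [1, *a*]] this reduces to *G*(*z*) = *z*<sup>−1</sup>/(1 − *az*<sup>−1</sup>), which makes the partition easy to check by hand.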

The matrix **T** is also useful in finding the group delay of *H*(*z*). The group-delay of *H*(*ejω*) is given by

$$\tau(\omega) = -\text{Im}\left\{ \frac{1}{H(e^{j\omega})} \frac{dH(e^{j\omega})}{d\omega} \right\} \tag{72}$$
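Eqn. (72) can be sanity-checked numerically: approximate *dH*/*dω* by finite differences, and a pure delay *H*(*ejω*) = *e*<sup>−*jDω*</sup> should yield *τ*(*ω*) = *D* at every frequency. This is an illustrative check, not part of the chapter's derivation:

```python
import numpy as np

def group_delay_fd(H, w):
    """Approximate tau(w) = -Im{(1/H) dH/dw} of Eqn. (72), using a
    central finite-difference estimate of the derivative dH/dw."""
    dH = np.gradient(H, w)      # numerical dH/dw on the grid w
    return -np.imag(dH / H)

# Pure delay H(e^{jw}) = e^{-j5w}: the group delay should be 5 samples.
w = np.linspace(0.1, 3.0, 2001)
H = np.exp(-1j * 5.0 * w)
tau = group_delay_fd(H, w)
```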

With the help of Eqn. (12), the expression *dH*(*ejω*)/*dω* can be written as


$$\frac{dH(e^{j\omega})}{d\omega} = \frac{1}{2}\left[\frac{dG_0(e^{j\omega})}{d\omega}\,(F_0(e^{j\omega}) + F_1(e^{j\omega})) + \frac{d(F_0(e^{j\omega}) + F_1(e^{j\omega}))}{d\omega}\,G_0(e^{j\omega}) + \frac{dG_1(e^{j\omega})}{d\omega}\,(F_0(e^{j\omega}) - F_1(e^{j\omega})) + \frac{d(F_0(e^{j\omega}) - F_1(e^{j\omega}))}{d\omega}\,G_1(e^{j\omega})\right] \tag{73}$$

The derivative of FIR filters can be easily found from their transfer function. In order to find the derivative of the digital allpass networks *G*0(*z*) and *G*1(*z*), the following expression can be used

$$\frac{dG(e^{j\omega})}{d\omega} = -j e^{-j\omega} \sum_{i=1}^{2m+1} G_{xi}(e^{j\omega})\, G_{iy}(e^{j\omega}) \tag{74}$$

where *Gxi*(*z*) is the transfer function between *xD* and *yi*, and where *Giy*(*z*) is the transfer function between *xi* and *yD*. The transfer functions *Gxi*(*z*) and *Giy*(*z*) can be found from the transfer function matrix **T** as follows

$$G\_{\rm xi}(z) = a\_{\rm xi} + z^{-1} \mathbf{e}\_{\rm xi} [\mathbf{I} - z^{-1} \mathbf{D}]^{-1} \mathbf{c} \tag{75}$$

$$G\_{\dot{\mathbf{y}}}(z) = a\_{\dot{\mathbf{y}}} + z^{-1} \mathbf{e} [\mathbf{I} - z^{-1} \mathbf{D}]^{-1} \mathbf{c}\_{\dot{\mathbf{y}}} \tag{76}$$

where *axi* and *aiy* are scalars, **e***xi* is a row vector, and **c***iy* is a column vector of length 2*m* + 1, such that [*axi* **e***xi*] is the *i*-th row of the matrix **T** and [*aiy* **c***<sup>t</sup>iy*]*<sup>t</sup>* is the *i*-th column of the matrix **T**. Having the expressions for *H*(*ejω*) and *dH*(*ejω*)/*dω*, the group delay can be obtained in accordance with Eqn. (72).

The passband and stopband weighting factors *Wp* and *Wa* are easily determined from user specifications. The group-delay weighting factor is set as

$$W_{gd} = \frac{\zeta \times fitness_{magnitude}}{fitness_{group-delay}} \tag{77}$$

where *<sup>ζ</sup>* is a fixed constant such that <sup>0</sup> < *<sup>ζ</sup>* < <sup>1</sup>, and where *fitnessmagnitude* and *fitnessgroup*<sup>−</sup>*delay* are obtained by examining the seed FRM digital filter particle. The weighting factor for the group-delay increases as *ζ* → 1.
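As a quick numeric illustration of Eqn. (77) (the fitness values below are invented for the example):

```python
def group_delay_weight(zeta, fitness_magnitude, fitness_group_delay):
    """W_gd per Eqn. (77); the weight grows as zeta -> 1."""
    assert 0.0 < zeta < 1.0
    return zeta * fitness_magnitude / fitness_group_delay
```

With *fitness<sub>magnitude</sub>* = 4.0 and *fitness<sub>group-delay</sub>* = 2.0, *ζ* = 0.5 gives *Wgd* = 1.0, while *ζ* = 0.9 gives *Wgd* = 1.8, shifting the emphasis toward the group-delay error.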



| Multiplier | CSD Representation | Decimal Value |
|---|---|---|
| *mC*0,1 | 00001.0001̄001̄ | 0.9297 |
| *mC*0,2 | 00010.0001̄01̄0 | 1.9219 |
| *mL*0,2 | 00000.1000010 | 0.5156 |
| *mC*1,1 | 00001.001̄001̄0 | 0.8594 |
| *mC*1,2 | 10000.1̄01̄0000 | 15.375 |
| *mL*1,1 | 00001.0001̄01̄0 | 0.9219 |
| *mL*1,2 | 00000.00101̄01̄ | 0.0859 |

**Table 6.** Digital Multiplier Values for the Lowpass Section of the Bandpass FRM IIR Digital Filter
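The CSD patterns in Table 6 use the digit set {−1, 0, +1}, with an overbar marking a −1 digit. In the converter sketch below, an apostrophe after a digit stands in for the overbar (a purely notational assumption):

```python
def csd_to_decimal(pattern):
    """Convert a CSD string such as "00001.0001'001'" to its decimal
    value; a digit followed by ' counts as -1 (an overbarred digit)."""
    def signed_digits(s):
        out = []
        for ch in s:
            if ch == "'":
                out[-1] = -out[-1]   # apostrophe negates the preceding digit
            else:
                out.append(int(ch))
        return out

    int_part, frac_part = pattern.split(".")
    ints = signed_digits(int_part)
    fracs = signed_digits(frac_part)
    value = sum(d * 2.0 ** (len(ints) - 1 - i) for i, d in enumerate(ints))
    value += sum(d * 2.0 ** (-(i + 1)) for i, d in enumerate(fracs))
    return value
```

For example, "00000.1000010" evaluates to 2<sup>−1</sup> + 2<sup>−6</sup> = 0.515625, which rounds to the tabulated 0.5156 for *mL*0,2.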




#### **10. Application examples**

#### **10.1. Bandpass FRM IIR digital filter design example**

Consider the design of a bandpass FRM IIR digital filter satisfying the magnitude response design specifications given in Table 3 over the CSD multiplier coefficient space.

The parameters for the PSO of the bandpass FRM IIR digital filter are shown in Table 4, and the CSD parameters are presented in Table 5.

Given the design specifications in Table 3, the orders of the digital allpass networks *G*0*lp* (*z*), *G*1*lp* (*z*), *G*0*hp* (*z*) and *G*1*hp* (*z*) are found to be 3, 4, 3 and 4, respectively. In addition, the digital masking subfilters *F*0*lp* (*z*), *F*1*lp* (*z*), *F*0*hp* (*z*) and *F*1*hp* (*z*) have the same lengths as in the previous example, i.e. 24, 42, 25 and 35, respectively, resulting in *N* = 140. In this example a set of fifteen CSD LUTs is required: fourteen LUTs for the multiplier coefficients *mC*0,1 , *mC*0,2 , *mC*0,3 , *mL*0,2 , *mL*0,3 , *mC*1,1 , *mL*1,1 , *mC*1,2 and *mL*1,2 constituent in the digital allpass networks *G*0*lp* (*z*), *G*1*lp* (*z*), *G*0*hp* (*z*) and *G*1*hp* (*z*), and one template LUT for all the multiplier coefficients constituent in the masking digital subfilters *F*0*lp* (*z*), *F*1*lp* (*z*), *F*0*hp* (*z*) and *F*1*hp* (*z*).

Finally, by using the Parks-McClellan approach, the subfilters *F*0*lp* (*z*), *F*1*lp* (*z*), *F*0*hp* (*z*) and *F*1*hp* (*z*) can be designed. Also, by using the EMQF technique, the digital allpass networks *G*0*lp* (*z*), *G*1*lp* (*z*), *G*0*hp* (*z*) and *G*1*hp* (*z*) can be designed. Consequently, the magnitude and group delay frequency responses of the overall infinite-precision bandpass FRM IIR digital filter *H*(*z*) are obtained as shown in Figs. 13 and 14.
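As a concrete illustration of the Parks-McClellan step, scipy's `remez` can design a linear-phase lowpass masking subfilter; the band edges and weighting below are placeholders, not the chapter's actual specifications:

```python
import numpy as np
from scipy.signal import remez, freqz

# Length-24 lowpass masking subfilter; band edges are normalized so
# that 0.5 corresponds to the Nyquist frequency (placeholder values).
h = remez(24, [0.0, 0.18, 0.26, 0.5], [1.0, 0.0])

w, H = freqz(h, worN=2048)                      # w in [0, pi)
passband_ripple = np.max(np.abs(np.abs(H[w < 0.36 * np.pi]) - 1.0))
stopband_peak = np.max(np.abs(H[w > 0.52 * np.pi]))
```

The equiripple behavior of the result is what makes the masking subfilters cheap: all the sharpness of the overall FRM filter comes from the interpolated subfilter, so the masking filters can remain short.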

| Design Specification | Value |
|---|---|
| Maximum Passband Ripple *Ap* | 0.1 [dB] |
| Minimum Stopband Loss *Aa* | 40 [dB] |
| Lower Stopband-Edge Normalized Frequency *ωa*1 | 0.31*π* [Rad] |
| Lower Passband-Edge Normalized Frequency *ωp*1 | 0.33*π* [Rad] |
| Upper Passband-Edge Normalized Frequency *ωp*2 | 0.60*π* [Rad] |
| Upper Stopband-Edge Normalized Frequency *ωa*2 | 0.61*π* [Rad] |
| Normalized Sampling Period *T* | 1 [s] |
| Lowpass Filter Interpolation Factor *Mlp* | 6 |
| Highpass Filter Interpolation Factor *Mhp* | 5 |

**Table 3.** Design Specifications for Bandpass FRM IIR Digital Filter

| *K* | *w* | *c*1 | *c*2 | *v*ˆ*min* | *v*ˆ*max* | *Lf* | *Lh* |
|---|---|---|---|---|---|---|---|
| 700 | 0.4 | 2 | 2 | −5 | 5 | 10 | 10 |

**Table 4.** PSO Design Parameters for Bandpass FRM IIR Digital Filter

| *L*0 | *l*0 | *f*0 | *L*1 | *l*1 | *f*1 |
|---|---|---|---|---|---|
| 11 | 3 | 10 | 12 | 3 | 7 |

**Table 5.** CSD Parameters for Bandpass FRM IIR Digital Filter


**Figure 13.** Magnitude Frequency-Response of the Overall Infinite-Precision Bandpass FRM IIR Digital Filter *H*(*ej<sup>ω</sup>*)

**Figure 14.** Group Delay Frequency-Response of the Overall Infinite-Precision Bandpass FRM IIR Digital Filter *H*(*ej<sup>ω</sup>*)

**Figure 15.** Magnitude Frequency-Response of the Overall CSD Bandpass FRM IIR Digital Filter *H*(*ej<sup>ω</sup>*) Before PSO

**Figure 16.** Group Delay Frequency-Response of the Overall CSD Bandpass FRM IIR Digital Filter *H*(*ej<sup>ω</sup>*) Before PSO



Based on the infinite-precision bandpass FRM IIR digital filter, the corresponding initial CSD FRM IIR digital filter is obtained, having the magnitude and group delay frequency responses shown in Figs. 15 and 16.

By applying the proposed PSO to the initial FRM IIR digital filter, after about 160 iterations the PSO converges to the optimal bandpass FRM IIR digital filter, whose magnitude frequency response is shown in Fig. 17. In addition, Fig. 18 gives a closer look at the magnitude frequency response in the passband region of the bandpass FRM IIR digital filter. Fig. 19 illustrates the group delay frequency response of the optimized bandpass FRM IIR digital filter. The values of the multiplier coefficients for the lowpass and highpass sections of the bandpass FRM IIR digital filter are obtained as summarized in Tables 6 and 7.
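The optimization loop follows the classical particle swarm update of Kennedy and Eberhart [28]. The continuous toy version below only sketches the velocity/position update on a sphere function; the inertia weight and acceleration constants are common textbook values rather than the chapter's settings (cf. Table 4), and the chapter's actual algorithm searches a discrete CSD-indexed coefficient space:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, w=0.7, c1=1.4, c2=1.4,
                 vmin=-5.0, vmax=5.0, seed=0):
    """Minimal continuous PSO (Kennedy-Eberhart velocity/position update
    with inertia weight w and acceleration constants c1, c2)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # particle positions
    v = np.zeros((n_particles, dim))                 # particle velocities
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = np.clip(w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x),
                    vmin, vmax)                      # clamped velocity update
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f                        # update personal bests
        pbest[better] = x[better]
        pbest_f[better] = fx[better]
        gbest = pbest[np.argmin(pbest_f)].copy()     # update global best
    return gbest, float(np.min(pbest_f))

# Toy run on a 4-dimensional sphere function.
best, best_f = pso_minimize(lambda p: float(np.sum(p ** 2)), dim=4)
```

In the finite-precision setting, the position update instead moves indices into the CSD look-up tables, so every candidate filter remains exactly representable in hardware.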


**Figure 17.** Magnitude Frequency-Response of the Overall CSD Bandpass FRM IIR Digital Filter *H*(*ej<sup>ω</sup>*) After PSO

**Figure 18.** Magnitude Frequency-Response of the Passband Region of the Overall CSD Bandpass FRM IIR Digital Filter *H*(*ej<sup>ω</sup>*) After PSO

**Figure 19.** Group Delay Frequency-Response of the Overall CSD Bandpass FRM IIR Digital Filter *H*(*ej<sup>ω</sup>*) After PSO

| Multiplier | CSD Representation | Decimal Value |
|---|---|---|
| *mC*0,1 | 00001.001̄01̄00 | 0.8438 |
| *mC*0,2 | 00010.0001001̄ | 2.0547 |
| *mL*0,2 | 00000.1000001̄ | 0.4922 |
| *mC*1,1 | 00001.01̄00010 | 0.7656 |
| *mC*1,2 | 10000.0100001 | 16.2578 |
| *mL*1,1 | 00001.00001̄01 | 0.9766 |
| *mL*1,2 | 00000.00101̄01̄ | 0.0859 |

**Table 7.** Digital Multiplier Values for the Highpass Section of the Bandpass FRM IIR Digital Filter

| Frequency-Response Characteristic | Before PSO | After PSO |
|---|---|---|
| Maximum Passband Ripple *Ap* | 0.8982 [dB] | 0.0978 [dB] |
| Minimum Stopband Loss *Aa* | 9.1715 [dB] | 40.0172 [dB] |
| Maximum Group Delay | 312 [Samples] | 239 [Samples] |

**Table 8.** Frequency-Response Analysis of the CSD Bandpass FRM IIR Digital Filter Before and After PSO

Table 8 represents the comparison of the CSD bandpass FRM IIR digital filters before and after PSO.

Particle Swarm Optimization of Highly Selective Digital Filters over the Finite-Precision Multiplier Coefficient Space

http://dx.doi.org/10.5772/52196

## **Author details**

Seyyed Ali Hashemi and Behrouz Nowrouzian

Department of Electrical and Computer Engineering, University of Alberta, Edmonton, Alberta, Canada

## **References**

[1] D. R. Wilson, D. R. Corrall, and R. F. Mathias, "The Design and Application of Digital Filters," *IEEE Transactions on Industrial Electronics and Control Instrumentation*, vol. IECI-20, pp. 68–74, 1973.

[2] P. P. Vaidyanathan, "Multirate digital filters, filter banks, polyphase networks, and applications: a tutorial," *Proceedings of the IEEE*, vol. 78, pp. 56–93, 1990.

[3] Y. C. Lim, "Frequency-Response Masking Approach for the Synthesis of Sharp Linear Phase Digital Filters," *IEEE Transactions on Circuits and Systems*, vol. 33, no. 4, pp. 357–364, 1986.

[4] ——, "A Digital Filter Bank for Digital Audio Systems," *IEEE Transactions on Circuits and Systems*, vol. 33, no. 8, pp. 848–849, Aug. 1986.

[5] Y. C. Lim, S. R. Parker, and A. G. Constantinides, "Finite Word Length FIR Filter Design Using Integer Programming over a Discrete Coefficient Space," *IEEE Transactions on Acoustics, Speech and Signal Processing*, vol. 30, pp. 661–664, 1982.

[6] T. Saramaki and Y. C. Lim, "Use of the Remez Algorithm for Designing FIR Filters Utilizing the Frequency-Response Masking Approach," in *1999 IEEE International Symposium on Circuits and Systems, ISCAS '99*, vol. 3, 1999, pp. 449–455.

[7] Y. J. Yu and Y. C. Lim, "FRM Based FIR Filter Design - the WLS Approach," in *2002 IEEE International Symposium on Circuits and Systems, ISCAS 2002*, vol. 3, 2002, pp. III-221–III-224.

[8] W.-S. Lu and T. Hinamoto, "Optimal Design of Frequency-Response-Masking Filters Using Semidefinite Programming," *IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications*, vol. 50, pp. 557–568, 2003.

[9] ——, "Optimal Design of FIR Frequency-Response-Masking Filters Using Second-Order Cone Programming," in *Proceedings of 2003 IEEE International Symposium on Circuits and Systems, ISCAS '03*, vol. 3, 2003, pp. III-878–III-881.

[10] L. Cen and Y. Lian, "Hybrid Genetic Algorithm for the Design of Modified Frequency-Response Masking Filters in a Discrete Space," *Circuits, Systems, and Signal Processing*, vol. 25, pp. 153–174, April 2006.

[11] S. Chen, R. H. Istepanian, and B. L. Luk, "Digital IIR Filter Design Using Adaptive Simulated Annealing," *Digital Signal Processing*, vol. 11, no. 3, pp. 241–251, July 2001.

[12] K. S. Tang, K. F. Man, S. Kwong, and Z. F. Liu, "Design and Optimization of IIR Filter Structure Using Hierarchical Genetic Algorithms," *IEEE Transactions on Industrial Electronics*, vol. 45, no. 3, pp. 481–487, 1998.


[13] C. Dai, W. Chen, and Y. Zhu, "Seeker Optimization Algorithm for Digital IIR Filter Design," *IEEE Transactions on Industrial Electronics*, vol. 57, no. 5, pp. 1710–1718, May 2010.

[14] M. Najjarzadeh and A. Ayatollahi, "FIR Digital Filters Design: Particle Swarm Optimization Utilizing LMS and Minimax Strategies," in *IEEE International Symposium on Signal Processing and Information Technology, ISSPIT 2008*, 2008, pp. 129–132.

[15] C. Dai, W. Chen, Y. Zhu, and X. Zhang, "Seeker Optimization Algorithm for Optimal Reactive Power Dispatch," *IEEE Transactions on Power Systems*, vol. 24, no. 3, pp. 1218–1231, August 2009.

[16] A. Kalinli and N. Karaboga, "A New Method for Adaptive IIR Filter Design Based on Tabu Search Algorithm," *AEU - International Journal of Electronics and Communications*, vol. 59, no. 3, pp. 111–117, May 2005.

[17] N. Karaboga, A. Kalinli, and D. Karaboga, "Designing Digital IIR Filters Using Ant Colony Optimisation Algorithm," *Engineering Applications of Artificial Intelligence*, vol. 17, no. 3, pp. 301–309, April 2004.

[18] A. Kalinli and N. Karaboga, "Artificial Immune Algorithm for IIR Filter Design," *Engineering Applications of Artificial Intelligence*, vol. 18, no. 5, pp. 919–929, Dec 2005.

[19] R. Storn, "Designing Nonstandard Filters with Differential Evolution," *IEEE Signal Processing Magazine*, vol. 22, no. 1, pp. 103–106, Jan 2005.

[20] N. Karaboga, "Digital IIR Filter Design Using Differential Evolution Algorithm," *EURASIP Journal on Applied Signal Processing*, vol. 2005, no. 8, pp. 1269–1276, Jan 2005.

[21] P. Mercier, S. M. Kilambi, and B. Nowrouzian, "Optimization of FRM FIR Digital Filters Over CSD and CDBNS Multiplier Coefficient Spaces Employing a Novel Genetic Algorithm," *Journal of Computers*, vol. 2, no. 7, pp. 20–31, Sept. 2007.

[22] S. Bokhari and B. Nowrouzian, "DCGA Optimization of Lowpass FRM IIR Digital Filters Over CSD Multiplier Coefficient Space," in *52nd IEEE International Midwest Symposium on Circuits and Systems*, August 2009, pp. 573–576.

[23] S. Bokhari, B. Nowrouzian, and S. A. Hashemi, "A novel technique for DCGA optimization of guaranteed BIBO stable IIR-based FRM digital filters over the CSD multiplier coefficient space," in *Proceedings of 2010 IEEE International Symposium on Circuits and Systems (ISCAS)*, 2010, pp. 2710–2713.

[24] S. A. Hashemi and B. Nowrouzian, "Particle swarm optimization of FRM FIR digital filters over the CSD multiplier coefficient space," in *Proceedings of 53rd IEEE International Midwest Symposium on Circuits and Systems (MWSCAS)*, 2010, pp. 1246–1249.

[25] ——, "A novel discrete particle swarm optimization for FRM FIR digital filters," *Journal of Computers*, vol. 7, no. 7, July 2012.

[26] ——, "Discrete particle swarm optimization of magnitude response of IIR-based FRM digital filters," in *Proceedings of 17th IEEE International Conference on Electronics, Circuits, and Systems, ICECS 2010*, December 2010.

[27] ——, "A novel finite-wordlength particle swarm optimization technique for FRM IIR digital filters," in *Proceedings of 2011 IEEE International Symposium on Circuits and Systems (ISCAS)*, May 2011, pp. 2745–2748.

[28] J. Kennedy and R. Eberhart, "Particle Swarm Optimization," in *Proceedings of IEEE International Conference on Neural Networks*, vol. 4, 1995, pp. 1942–1948.

[29] M. D. Lutovac and L. D. Milić, "IIR Filters Based on Frequency-Response Masking Approach," in *Telecommunications in Modern Satellite, Cable and Broadcasting Service, TELSIKS 2001*, Sept. 2001, pp. 163–170.

[30] H. Johansson and L. Wanhammar, "High-speed Recursive Filtering Using the Frequency-Response Masking Approach," in *Proceedings of the IEEE Int. Symposium on Circuits and Systems*, 1997, pp. 2208–2211.

[31] J. Sun, W. Fang, and W. Xu, "A Quantum-Behaved Particle Swarm Optimization With Diversity-Guided Mutation for the Design of Two-Dimensional IIR Digital Filters," *IEEE Transactions on Circuits and Systems II: Express Briefs*, vol. 57, no. 2, pp. 141–145, 2010.

[32] A. Slowik and M. Bialko, "Design and Optimization of IIR Digital Filters with Non-Standard Characteristics Using Particle Swarm Optimization Algorithm," in *14th IEEE International Conference on Electronics, Circuits and Systems, ICECS 2007*, 2007, pp. 162–165.

[33] B. Luitel and G. K. Venayagamoorthy, "Particle Swarm Optimization with Quantum Infusion for the design of digital filters," in *IEEE Swarm Intelligence Symposium, SIS 2008*, 2008, pp. 1–8.

[34] B. Nowrouzian and L. S. Lee, "Minimal Multiplier Realisation of Bilinear-LDI Digital Allpass Networks," in *IEE Proceedings G: Circuits, Devices and Systems*, vol. 136, Jun. 1989, pp. 114–117.

[35] T. Parks and J. McClellan, "Chebyshev Approximation for Nonrecursive Digital Filters with Linear Phase," *IEEE Transactions on Circuit Theory*, vol. CT-19, pp. 189–194, 1972.

[36] Y. C. Lim, R. Yang, D. Li, and J. Song, "Signed Power-of-Two Term Allocation Scheme for the Design of Digital Filters," *IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing*, vol. 46, pp. 577–584, 1999.

[37] R. I. Hartley, "Subexpression Sharing in Filters Using Canonic Signed Digit Multipliers," *IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing*, vol. 43, pp. 677–688, 1996.

[38] A. T. G. Fuller, B. Nowrouzian, and F. Ashrafzadeh, "Optimization of FIR Digital Filters over the Canonical Signed-Digit Coefficient Space Using Genetic Algorithms," in *1998 Midwest Symposium on Circuits and Systems*, 1998, pp. 456–459.



[39] R. Yang, Y. C. Lim, and S. R. Parker, "Design of sharp linear-phase FIR bandstop filters using the frequency-response-masking technique," *Circuits, Systems, and Signal Processing*, vol. 17, no. 1, pp. 1–27, Jan. 1998.

**Chapter 11**

## **Analytical Design of Two-Dimensional Filters and Applications in Biomedical Image Processing**

The field of two-dimensional filters and their design methods has seen considerable development due to its importance in image processing (Lim, 1990; Lu & Antoniou, 1992). There are methods based on numerical optimization, as well as analytical methods relying on 1D prototypes. A commonly-used design technique for 2D filters is to start from a specified 1D prototype filter and transform its transfer function using various frequency mappings in order to obtain a 2D filter with a desired frequency response. These are essentially spectral transformations from the *s* to the *z* plane, followed by *z* to (*z*1, *z*2) mappings, approached in early papers (Chakrabarti & Mitra, 1977; Hirano & Aggarwal, 1978; Harn & Shenoi, 1986; Nie & Unbehauen, 1989). Generally these transformations preserve stability, so various stable recursive 2D filters can be obtained from 1D prototypes. The most common types are directional, fan-shaped, diamond-shaped and circular filters. Diamond filters are commonly used as anti-aliasing filters in the conversion between signals sampled on the rectangular sampling grid and the quincunx sampling grid. Various design methods for diamond-shaped filters were studied in (Tosic, 1997; Lim & Low, 1997; Low & Lim, 1998; Ito, 2010; Matei, 2010).

There are several classes of filters with orientation-selective frequency response, useful in tasks like edge detection, motion analysis, texture segmentation etc. Some relevant papers on directional filters and their applications are (Danielsson, 1980; Paplinski, 1998; Austvoll, 2000). An important class of orientation-selective filters are steerable filters, synthesized as a linear combination of a set of basis filters (Freeman & Adelson, 1991) and steerable wedge filters (Simoncelli & Farid, 1996). A directional filter bank (DFB) for image decomposition in the frequency domain was proposed in (Bamberger, 1992). In (Qunshan & Swamy, 1994) various 2D recursive filters are approached. Fan-shaped, also known as wedge-shaped filters find interesting applications. Design methods for IIR and FIR fan filters are presented in some

> © 2013 Matei and Matei; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.



## **Analytical Design of Two-Dimensional Filters and Applications in Biomedical Image Processing**

Radu Matei and Daniela Matei

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/52195

## **1. Introduction**


The field of two-dimensional filters and their design methods has seen considerable development owing to its importance in image processing (Lim, 1990; Lu & Antoniou, 1992). There are methods based on numerical optimization, as well as analytical methods relying on 1D prototypes. A commonly used design technique for 2D filters is to start from a specified 1D prototype filter and transform its transfer function through various frequency mappings in order to obtain a 2D filter with a desired frequency response. These are essentially spectral transformations from the *s* plane to the *z* plane, followed by *z* to (*z*1, *z*2) mappings, approached in early papers (Chakrabarti & Mitra, 1977; Hirano & Aggarwal, 1978; Harn & Shenoi, 1986; Nie & Unbehauen, 1989). Generally these transformations preserve stability, so various stable recursive 2D filters can be obtained from 1D prototypes. The most common types are directional, fan-shaped, diamond-shaped and circular filters. Diamond filters are commonly used as anti-aliasing filters in the conversion between signals sampled on the rectangular sampling grid and those sampled on the quincunx grid. Various design methods for diamond-shaped filters were studied in (Tosic, 1997; Lim & Low, 1997; Low & Lim, 1998; Ito, 2010; Matei, 2010).

There are several classes of filters with orientation-selective frequency response, useful in tasks like edge detection, motion analysis and texture segmentation. Some relevant papers on directional filters and their applications are (Danielsson, 1980; Paplinski, 1998; Austvoll, 2000). An important class of orientation-selective filters is that of steerable filters, synthesized as a linear combination of a set of basis filters (Freeman & Adelson, 1991), and steerable wedge filters (Simoncelli & Farid, 1996). A directional filter bank (DFB) for image decomposition in the frequency domain was proposed in (Bamberger, 1992). In (Qunshan & Swamy, 1994) various 2D recursive filters are approached. Fan-shaped filters, also known as wedge-shaped filters, find interesting applications. Design methods for IIR and FIR fan filters are presented in some early papers (Kayran & King, 1983; Ansari, 1987). An efficient design method for recursive fan filters is presented in (Zhu & Zhenya, 1990). An implementation of recursive fan filters using all-pass sections is given in (Zhu & Nakamura, 1996). In (Mollova, 1997), an analytical least-squares technique for FIR filters, in particular fan-type, is proposed. Design methods for efficient 2D FIR filters were treated in papers like (Zhu et al., 1999; Zhu et al., 2006). Zero-phase filters were studied as well (Psarakis, 1990). Different types of 2D filters derived from 1D prototypes through spectral transformations were treated in (Matei, 2011a).



We propose in this chapter some new design procedures for particular classes of 2D filters; the described methods are mainly analytical but also include numerical approximations. Various types of 2D filters will be approached, both recursive (IIR) and non-recursive (FIR). The design methods will focus however on recursive filters, since they are the most efficient.

The proposed design methods start from either digital or analog 1D prototypes with a desired characteristic. In this chapter we will mainly use analog prototypes, since the design turns out to be simpler and the resulting 2D filters have lower complexity. The analog prototype filter is described by a transfer function in the complex variable *s*, which can be factorized as a product of elementary functions of first or second order. The prototype transfer function results from a standard approximation (Butterworth, Chebyshev, elliptic), and the shape of its frequency response corresponds to the desired characteristic of the 2D filter.

The next design stage consists in finding the specific complex frequency transformation from the *s* axis to the complex plane (*z*1, *z*2), of the general form *F* : ℂ→ℂ<sup>2</sup>, *s* → *F*(*z*1, *z*2). This mapping will be determined for each type of 2D filter separately, starting from the geometrical specification of its shape in the frequency plane. Once this particular mapping is found, the 2D filter function results directly by applying the transformation to each factor function of the prototype. Thus, the 2D filter transfer function *H*(*z*1, *z*2) is obtained directly in factorized form, which is a major advantage in its implementation. The proposed design method applies the bilinear transform as an intermediate step in determining the 1D to 2D frequency mapping. All the proposed design techniques are mainly analytical but also involve numerical optimization, in particular rational approximations (e.g. Chebyshev-Padé). Some of the designed 2D filters result with complex coefficients. This is not a serious shortcoming, since such IIR filters are also used in practice (Nikolova et al., 2011).

In this chapter we will approach two main classes of 2D filters. The first one comprises three types of orientation-selective filters, as follows: square-shaped (diamond-type) IIR filters, with arbitrary orientation in the frequency plane; fan-type IIR filters with specified orientation and aperture angles; and very selective IIR multi-directional filters (in particular two-directional and three-directional), which are useful in detecting and extracting simultaneously lines with different orientations from an image.

The other class discussed here refers to FIR filters. From this category we will approach zero-phase filters with circular frequency response. Zero-phase filters, with real transfer functions, are often used in image processing since they do not introduce any phase distortions. All these types of 2D filters are analyzed in detail in the following sections.

Stability of two-dimensional recursive filters is also an important issue, and it is much more complicated than for 1D filters. For 2D filters it is in general quite difficult to take stability constraints into account during the approximation stage (O'Connor & Huang, 1978). Therefore, various techniques were developed to separate stability from approximation. If the designed filter turns out unstable, stabilization procedures are needed (Jury et al., 1977). Various stability conditions for 2D filters have been found (Mastorakis, 2000).


The medical image processing field has seen rapid development owing to the value of imaging in assisting and assessing clinical diagnosis (Semmlow, 2004; Berry, 2007; Dougherty, 2011). In particular, the currently used vascular imaging technique is X-ray angiography, mainly in diagnosing cardio-vascular pathologies. A frequent application of cardiac imaging is the localization of narrowed or blocked coronary arteries. Fluorescein angiography is the best technique to view the retinal circulation and is useful for diagnosing retinal or optic nerve conditions and assessing disorders like diabetic retinopathy, macular degeneration and retinal vein occlusions. Many papers approach various methods and techniques aiming at improving angiogram images. In papers like (Frangi et al., 1998), multiscale analysis is used for vessel enhancement and detection. Usual approaches include Hessian-based filtering, relying on the multiscale local structure of an image and the directional features of vessels (Truc et al., 2007). In cardio-vascular imaging, an essential pre-processing task is the enhancement of the coronary arterial tree, commonly using gradient or other local operators. In (Khan et al., 2004) a decimation-free directional filter bank is used. An adaptive vessel detection scheme based on the Gabor filter response is proposed in (Wu et al., 2006). Filtering is an elementary operation in low-level computer vision and a pre-processing stage in many biomedical image processing applications. Some edge-preserving filtering techniques for biomedical image smoothing have been proposed (Rydell et al., 2008; Wong et al., 2004). At the end of this chapter, simulation results are given for biomedical image filtering using some of the proposed 2D filters, namely the directional narrow fan filter with specified orientation and the zero-phase circular filter.

## **2. Analog and digital 1D prototype filters used in 2D filter design**

This section presents the types of analog and digital 1D recursive prototype filters which will be further used to derive the desired 2D filter characteristics. An analog IIR prototype filter of order *N* has a transfer function in variable *s* of the general form:

$$H\_P(\mathbf{s}) = \frac{P(\mathbf{s})}{Q(\mathbf{s})} = \sum\_{i=0}^{M} p\_i \cdot \mathbf{s}^i \bigg/ \sum\_{j=0}^{N} q\_j \cdot \mathbf{s}^j \tag{1}$$

This general transfer function can be factorized into simpler rational functions of first and second order. Such a second-order rational function (biquad) can be written:

$$H\_b(\mathbf{s}) = k \left(\mathbf{s}^2 + b\_1 \mathbf{s} + b\_0\right) \Big/ \left(\mathbf{s}^2 + a\_1 \mathbf{s} + a\_0\right) \tag{2}$$


where generally the second-order polynomials at the numerator and denominator have complex-conjugate roots, and *k* is a constant. For typical approximations – Chebyshev or elliptic – usually *b*<sub>1</sub> = 0, therefore the numerator has imaginary zeros. For odd-order filters, the denominator contains at least one first-order factor (*s* + *α*). An elliptic approximation with very low ripple can be used for an almost maximally-flat low-order filter. Next we consider two such low-pass (LP) prototypes with imposed specifications. The first is an elliptic LP analog filter of order *N* = 6, cutoff frequency *ω*<sub>c</sub> = 0.4*π*, peak-to-peak ripple *Rp* = 0.04 dB and stop-band attenuation *Rs* = 38 dB. Its transfer function can be factorized into three biquad functions like (2): *HP*(*s*) = *k* ⋅ *Hb*1(*s*) ⋅ *Hb*2(*s*) ⋅ *Hb*3(*s*), where *k* = 2.375 and:

$$H\_{b1}(s) = (s^2 + 39.195) \Big/ (s^2 + 0.2221s + 2.8797) \tag{3}$$

$$H\_{b2}(s) = (s^2 + 6.5057) \Big/ (s^2 + 0.9172s + 2.4291) \tag{4}$$

$$H\_{b3}(s) = (s^2 + 4.2217) \Big/ (s^2 + 2.0448s + 1.5454) \tag{5}$$

The frequency response magnitude of this LP filter for *ω* ∈ [−*π*, *π*] is shown in Fig. 1(a).
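As a cross-check, such an elliptic analog prototype and its biquad factors can be reproduced numerically. The sketch below uses SciPy (not part of the chapter) with the specifications above; the grouping of conjugate pole/zero pairs into coefficients of the form (2) is our own illustration, so the printed values are only expected to be close to (3)–(5), not authoritative.

```python
import numpy as np
from scipy import signal

# Order-6 elliptic analog LP prototype with the chapter's specifications:
# ripple Rp = 0.04 dB, stop-band attenuation Rs = 38 dB, cutoff wc = 0.4*pi rad/s.
wc = 0.4 * np.pi
z, p, k = signal.ellip(6, 0.04, 38, wc, analog=True, output='zpk')

# Group complex-conjugate pairs into biquad factors of the form (2):
# denominator s^2 + a1*s + a0 with a1 = -2*Re(p), a0 = |p|^2;
# numerator   s^2 + b0       with b0 = |z|^2 (zeros are imaginary, b1 = 0).
den_biquads = [(-2 * pp.real, abs(pp) ** 2) for pp in p if pp.imag > 0]
num_biquads = [abs(zz) ** 2 for zz in z if zz.imag > 0]
print(den_biquads)  # three (a1, a0) pairs
print(num_biquads)  # three b0 values
```

Running this shows three denominator pairs and three numerator constants, one per biquad, matching the factored structure of *HP*(*s*).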

The second prototype is an elliptic LP analog filter with parameters *N* = 4, *ω*<sub>c</sub> = 0.4*π*, *Rp* = 0.05 dB, *Rs* = 36 dB. Its transfer function is written as a product of two biquad functions like (2): *HP*(*s*) = *k* ⋅ *Hb*1(*s*) ⋅ *Hb*2(*s*), where *k* ≅ 0.01 and:

$$H\_{b1}(s) = (s^2 + 33.385) \Big/ (s^2 + 0.5894s + 2.2398) \tag{6}$$

$$H\_{b2}(s) = (s^2 + 6.4226) \Big/ (s^2 + 1.9691s + 1.5266) \tag{7}$$

The frequency response magnitude of this LP filter for *ω* ∈ [−*π*, *π*] is shown in Fig. 1(b). The simplest analog LP filter has the transfer function *H*<sub>j</sub>(*s*) = *α* / (*s* + *α*), where the value of *α* gives the selectivity (Fig. 1(c)). If the filter characteristic is shifted to a given frequency *ω*<sub>01</sub> ∈ [−*π*, *π*], the transfer function becomes:

$$H\_{jS}(s) = \alpha \Big/ (s + \alpha + j \cdot \omega\_{01}) \tag{8}$$

In Fig. 1(d) the shifted filter response magnitude for *ω*<sub>01</sub> = 0.416*π* is shown. Another useful analog prototype is the selective second-order (resonant) filter with central frequency *ω*<sub>0</sub>:


$$H\_r(s) = \alpha s \Big/ \left(s^2 + \alpha s + \omega\_0^2\right) \tag{9}$$

The transfer function magnitude of such a filter with *α* = 0.1 and *ω*<sub>0</sub> = 1.3 is shown in Fig. 1(e). This filter will be further used as a prototype for two-directional filters.
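A quick numerical check of the resonant prototype (9), using the chapter's values *α* = 0.1 and *ω*<sub>0</sub> = 1.3 (the script itself is our illustration, not the chapter's): the magnitude peaks exactly at *ω*<sub>0</sub>, where |*H*<sub>r</sub>(*jω*<sub>0</sub>)| = 1, and *α* controls the bandwidth.

```python
import numpy as np

# Resonant prototype (9): H_r(s) = a*s / (s^2 + a*s + w0^2), evaluated on s = j*w.
a, w0 = 0.1, 1.3
w = np.linspace(0.01, np.pi, 2000)
s = 1j * w
H = (a * s) / (s ** 2 + a * s + w0 ** 2)

peak_w = w[np.argmax(np.abs(H))]
print(round(peak_w, 2))  # peak location, at w0 = 1.3 up to grid resolution
```

At *ω* = *ω*<sub>0</sub> the real terms of the denominator cancel, leaving *jαω*<sub>0</sub> / *jαω*<sub>0</sub> = 1, which explains the unit peak.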

**Figure 1.** Frequency response magnitudes of: (a) LP elliptic prototype of order 6; (b) LP elliptic prototype of order 4; very selective first-order filter with central frequencies *ω*<sub>0</sub> = 0 (c) and *ω*<sub>0</sub> = 0.416*π* (d); (e) selective band-pass filter with *ω*<sub>0</sub> = 1.3

A useful zero-phase prototype can be obtained from the general function (1) by preserving only the magnitude characteristic of the 1D filter; this prototype will be further used to design 2D zero-phase FIR filters of different types, specifically circular filters, with real-valued transfer functions. In order to obtain a zero-phase filter, we consider the magnitude characteristic of *HP*( *jω*), defined by the absolute value | *HP*( *jω*)| = |*P*( *jω*)| / |*Q*( *jω*)|. We look for a series expansion of the magnitude | *HP*( *jω*)| that approximates it as accurately as possible on the frequency domain [−*π*, *π*]. The most convenient for our purpose is the Chebyshev series expansion, because it yields an efficient approximation of a given function which is uniform along the desired interval. The Chebyshev series in powers of the frequency variable *ω* for a given function on a specified interval can easily be found using symbolic computation software such as MAPLE. However, we will finally need a trigonometric expansion of | *HP*( *jω*)|, namely in cos(*nω*), rather than a polynomial expansion in powers of *ω*. Therefore, prior to the Chebyshev series calculation, we apply the change of variable:

$$
\omega = \arccos(x) \Leftrightarrow x = \cos(\omega) \tag{10}
$$


and so we get the polynomial expansion in variable *x*:

$$\left| H\_P(\arccos(x)) \right| \cong \sum\_{n=0}^{N} a\_n \cdot x^n = a\_0 + a\_1 x + a\_2 x^2 + a\_3 x^3 + \dots + a\_N x^N \tag{11}$$

where the number of terms *N* is chosen large enough to ensure the desired precision. The next step is to substitute back *x* =cos*ω* in the polynomial expression (11), therefore we obtain the factorized function in cos*ω*, with *n* + 2*m*= *N* :

$$\left| H\_P(\omega) \right| \cong \sum\_{n=0}^{N} a\_n \cdot \cos^n(\omega) = k \cdot \prod\_{i=1}^{n} (\cos\omega + a\_i) \cdot \prod\_{j=1}^{m} (\cos^2\omega + a\_{1j} \cos\omega + a\_{2j}) \tag{12}$$
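The procedure in (10)–(12) can also be carried out numerically instead of in MAPLE. The sketch below (our illustration, with an arbitrarily chosen prototype and degree) approximates the magnitude of the simple first-order prototype *H*(*s*) = *α* / (*s* + *α*), i.e. |*H*(*jω*)| = *α* / √(*ω*² + *α*²): it fits a Chebyshev series in *x*, converts it to the power-series coefficients *a*<sub>n</sub> of (11), and substitutes back *x* = cos *ω*.

```python
import numpy as np
from numpy.polynomial import chebyshev as Cheb
from numpy.polynomial import polynomial as Poly

# Magnitude of the first-order prototype H(s) = alpha/(s + alpha) on s = j*w
alpha = 1.0
mag = lambda w: alpha / np.sqrt(w ** 2 + alpha ** 2)

# Change of variable (10): w = arccos(x), so the fit runs over x in [-1, 1]
deg = 30
x = np.cos(np.pi * (np.arange(400) + 0.5) / 400)  # Chebyshev nodes
c = Cheb.chebfit(x, mag(np.arccos(x)), deg)       # Chebyshev-series coefficients
a = Cheb.cheb2poly(c)                             # power-series a_n as in (11)

# Substituting x = cos(w) back yields the trigonometric expansion (12)
w = np.linspace(-np.pi, np.pi, 401)
approx = Poly.polyval(np.cos(w), a)
err = np.max(np.abs(approx - mag(w)))
```

The maximum error `err` over [−*π*, *π*] shrinks as the degree *N* grows, which is the "large enough *N*" condition stated after (11).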

Next let us consider a recursive digital filter of order *N* with the transfer function:

$$H\_P(\mathbf{z}) = \frac{P(\mathbf{z})}{Q(\mathbf{z})} = \sum\_{i=0}^{M} p\_i \cdot \mathbf{z}^i \bigg/ \sum\_{j=0}^{N} q\_j \cdot \mathbf{z}^j \tag{13}$$

This general transfer function with *M* = *N* can be factorized into first and second order rational functions. For an odd order filter, *HP*(*z*) has at least one first-order factor:

$$H\_1(z) = \left(b\_1 z + b\_0\right) / \left(z + a\_0\right) \tag{14}$$

The transfer function also contains second-order (biquad) functions, where in general the numerator and denominator polynomials have complex-conjugated roots:

$$H\_2(z) = \left(b\_2 z^2 + b\_1 z + b\_0\right) \Big/ \left(z^2 + a\_1 z + a\_0\right) \tag{15}$$

We will further use the term *template*, common in the field of cellular neural networks, for the coefficient matrices of the numerator and denominator of a 2D transfer function *H* (*z*1, *z*2).

#### **3. Diamond-type recursive filters**

In this section a design method is proposed for 2D square-shaped (diamond-type) IIR filters. The design relies on an analog 1D maximally-flat low-pass prototype filter. To this filter a frequency transformation is applied, which yields a 2D filter with the desired square shape in the frequency plane. The proposed method combines the analytical approach with numerical approximations.

#### **3.1. Specification of diamond-type filters in the frequency plane**


The standard diamond filter has the shape in the frequency plane shown in Fig. 2 (a). It is a square with a side length of *π*√2, whose axes are tilted by an angle of *φ* =*π* / 4 radians with respect to the two frequency axes. Next we will consider the orientation angle *φ* measured with respect to the *ω*2 (vertical) axis. In this chapter a more general case is approached, i.e. a 2D diamond-type filter with a square shape in the frequency plane but with an arbitrary axis orientation angle, as shown in Fig. 2 (e). We refer to these as diamond-type filters, since they are more general than the diamond filter of Fig. 2 (a).

The diamond-type filter in Fig.2 (e) is derived as the intersection of two oriented low-pass filters whose axes are perpendicular to each other, for which the shape in the frequency plane is given in Fig.2 (c), (d). Correspondingly, the diamond-type filter transfer function *HD*(*z*1, *z*2) results as a product of two partial transfer functions:

$$H\_D(\mathbf{z}\_1, \mathbf{z}\_2) = H\_1(\mathbf{z}\_1, \mathbf{z}\_2) \cdot H\_2(\mathbf{z}\_1, \mathbf{z}\_2) \tag{16}$$
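The geometry behind this construction can be verified with ideal 0/1 responses; the sketch below uses a hypothetical half-bandwidth *B* and checks that at *φ* =*π* / 4 the intersection of the two perpendicular bands is exactly the standard diamond:

```python
import numpy as np

# Ideal (0/1) version of the construction (16): intersect two oriented
# low-pass bands whose axes are perpendicular. B is a hypothetical
# half-bandwidth chosen for illustration.
B = 1.0
phi = np.pi / 4
w = np.linspace(-np.pi, np.pi, 301)
W1, W2 = np.meshgrid(w, w, indexing="ij")

H1 = np.abs(W1 * np.cos(phi) + W2 * np.sin(phi)) <= B     # first band
H2 = np.abs(-W1 * np.sin(phi) + W2 * np.cos(phi)) <= B    # rotated band
HD = H1 & H2                     # product of ideal responses, eq. (16)

# at phi = pi/4 this is the standard diamond |w1| + |w2| <= sqrt(2)*B
diamond = (np.abs(W1) + np.abs(W2)) <= B * np.sqrt(2)
print(np.array_equal(HD, diamond))
```

For other values of *φ* the same intersection gives the tilted square of Fig. 2 (e).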

**Figure 2.** (a) diamond filter; (b) wide-band oriented filter; (c), (d) wide-band oriented filters with orientations forming an angle φ =π / 2; (e) square-shaped filter resulting as the product of the above oriented filters; (f) rhomboidal filter

The frequency characteristic of *H*2(*z*1, *z*2) is ideally identical to that of *H*1(*z*1, *z*2) rotated by an angle of *φ* =*π* / 2. Since this rotation of axes implies the frequency variable change *ω*1→*ω*2, *ω*2→ −*ω*1, the transfer function *H*2(*z*1, *z*2) can be derived from *H*1(*z*1, *z*2) as *H*2(*z*1, *z*2)=*H*1(*z*2, *z*1<sup>−1</sup>). A more general filter belonging to this class is the rhomboidal filter shown in Fig. 2 (f); in this case the two oriented LP filters may have different bandwidths and their axes are no longer perpendicular to each other.
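The variable change can be illustrated on a toy rational function (a hypothetical separable one-pole-per-axis *H*1, not the chapter's filter):

```python
import numpy as np

# Toy separable rational H1(z1, z2), used only to illustrate the
# substitution H2(z1, z2) = H1(z2, 1/z1).
def H1(z1, z2):
    return 1.0 / ((1.0 - 0.5 / z1) * (1.0 - 0.2 / z2))

def H2(z1, z2):
    return H1(z2, 1.0 / z1)

w = np.linspace(-np.pi, np.pi, 81)
W1, W2 = np.meshgrid(w, w, indexing="ij")

# on the unit circle, H2 equals H1 evaluated at the rotated
# frequencies (omega1, omega2) -> (omega2, -omega1)
lhs = H2(np.exp(1j * W1), np.exp(1j * W2))
rhs = H1(np.exp(1j * W2), np.exp(-1j * W1))
print(np.allclose(lhs, rhs))
```

The substitution therefore rotates the frequency response by π/2 without redesigning the filter.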

#### **3.2. Design method for diamond-type filters**

The aim of this section is to find the transfer function *H*2*D*(*z*1, *z*2) of the desired 2D filter using a complex frequency transformation *s* →*F* (*z*1, *z*2). From a prototype *HP*(*s*)=*HP*( *jω*) (which varies on one axis only), a 2D oriented filter is obtained by rotating the axes of the plane (*ω*1, *ω*2) by an angle *φ*. The rotation is defined by the following linear transformation, where *ω*1, *ω*2 are the original frequency variables and *ω̄*1, *ω̄*2 the rotated ones:

$$
\begin{bmatrix} \omega_1 \\ \omega_2 \end{bmatrix} = \begin{bmatrix} \cos\varphi & \sin\varphi \\ -\sin\varphi & \cos\varphi \end{bmatrix} \cdot \begin{bmatrix} \bar{\omega}_1 \\ \bar{\omega}_2 \end{bmatrix} \tag{17}
$$

Analytical Design of Two-Dimensional Filters and Applications in Biomedical Image Processing

http://dx.doi.org/10.5772/52195

The spatial orientation is specified by an angle *φ* with respect to the *ω*1 axis, defined by the 1D to 2D frequency mapping *ω* →*ω*1cos*φ* + *ω*2sin*φ*. By substitution, we obtain the oriented filter transfer function *Hφ*(*ω*1, *ω*2)=*HP*(*ω*1cos*φ* + *ω*2sin*φ*). In the complex plane (*s*1, *s*2) the above frequency transformation becomes:

$$\mathbf{s} \to \mathbf{s}\_1 \cos \varphi + \mathbf{s}\_2 \sin \varphi \tag{18}$$

The oriented filter *Hφ*(*ω*1, *ω*2) has its frequency response magnitude section along the line *ω*1sin*φ* −*ω*2cos*φ* =0 identical with the prototype *HP*(*ω*), and is constant along the perpendicular line (the filter longitudinal axis) *ω*1cos*φ* + *ω*2sin*φ* =0. The usual method to obtain a discrete filter from an analog prototype is the bilinear transform. Taking the sampling interval *T* =1, the bilinear transform for *s*1 and *s*2 in the complex plane (*s*1, *s*2) has the form:

$$s_1 = 2(z_1 - 1) / (z_1 + 1) \qquad s_2 = 2(z_2 - 1) / (z_2 + 1) \tag{19}$$

This method is straightforward, but the resulting 2D filter will present linearity distortions in its shape, which increase towards the limits of the frequency plane as compared to the ideal frequency response. This is mainly due to the so-called frequency warping effect of the bilinear transform, expressed by the continuous-to-discrete frequency mapping:

$$\omega = (2 / T) \cdot \operatorname{arctg}\left(\omega_a T / 2\right) \tag{20}$$

where *ω* is a frequency of the discrete filter and *ωa* is the corresponding frequency of the analog filter. This error can be corrected by applying a pre-warping. Taking *T* =1 in (20), we substitute the mappings:

$$\omega_1 \to 2 \cdot \operatorname{arctg}\left(\omega_1 / 2\right) \qquad \omega_2 \to 2 \cdot \operatorname{arctg}\left(\omega_2 / 2\right) \tag{21}$$
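This warping relation is easy to verify numerically from the bilinear transform itself:

```python
import numpy as np

# Check of the warping relation: mapping s = j*w_a through the inverse
# of the bilinear transform s = 2(z - 1)/(z + 1) puts z on the unit
# circle at the digital frequency w = 2*arctg(w_a / 2).
wa = np.array([0.3, 1.0, 1.7, 5.0])    # arbitrary analog frequencies
z = (2 + 1j * wa) / (2 - 1j * wa)      # inverse bilinear transform
w_digital = np.angle(z)
print(np.allclose(w_digital, 2 * np.arctan(wa / 2)))
```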

In order to include the nonlinear mappings (21) into the frequency transformation, a rational approximation is needed. One of the most efficient is the Chebyshev-Padé approximation, which is uniformly accurate over a specified range. For *ω* ∈ [−*π*, *π* ] we get:

$$\operatorname{arctg}(\omega / 2) \cong 0.4751 \cdot \omega \Big/ \left(1 + 0.05 \cdot \omega^2\right) \tag{22}$$
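A quick numerical check of the approximation (22) over this interval:

```python
import numpy as np

# Compare arctg(omega/2) with its rational Chebyshev-Pade approximant
w = np.linspace(-np.pi, np.pi, 2001)
exact = np.arctan(w / 2.0)
approx = 0.4751 * w / (1.0 + 0.05 * w ** 2)
max_err = np.max(np.abs(exact - approx))
print(max_err)  # stays below ~0.02 over the whole interval
```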

Substituting the nonlinear mappings (21), approximated by (22), into (18) we get the 1D to 2D mapping which includes the pre-warping along both frequency axes:

$$s \to F\_{\varphi}(s\_1, s\_2) = 0.95 \cdot \left[ \frac{s\_1 \cos \varphi}{1 - 0.05 \cdot s\_1^2} + \frac{s\_2 \sin \varphi}{1 - 0.05 \cdot s\_2^2} \right] \tag{23}$$

Applying the bilinear transform (19) along the two axes we obtain the mapping *s* →*Fφ*(*z*1, *z*2) in matrix form, where *z***1** = [1  *z*1  *z*1<sup>2</sup>] and *z***2** = [1  *z*2  *z*2<sup>2</sup>]:

$$\mathbf{s} \rightarrow \mathbf{F}\_{\boldsymbol{\phi}}(\mathbf{z}\_{1}, \mathbf{z}\_{2}) = \mathbf{k} \cdot \mathbf{M}\_{\boldsymbol{\phi}}(\mathbf{z}\_{1}, \mathbf{z}\_{2}) / \mathbf{N}\_{\boldsymbol{\phi}}(\mathbf{z}\_{1}, \mathbf{z}\_{2}) = \mathbf{k} \cdot (\mathbf{z}\_{1} \times \mathbf{M}\_{\boldsymbol{\phi}} \times \mathbf{z}\_{2}^{T}) / (\mathbf{z}\_{1} \times \mathbf{N}\_{\boldsymbol{\phi}} \times \mathbf{z}\_{2}^{T}) \tag{24}$$

Here *k* =1.5233 and the matrices *Mφ* and *Nφ* of size 3×3 are given by:

$$\mathbf{M}_\varphi = \cos\varphi \cdot \begin{bmatrix} -1 & -3 & -1 \\ 0 & 0 & 0 \\ 1 & 3 & 1 \end{bmatrix} + \sin\varphi \cdot \begin{bmatrix} -1 & 0 & 1 \\ -3 & 0 & 3 \\ -1 & 0 & 1 \end{bmatrix}; \quad \mathbf{N}_\varphi = \begin{bmatrix} 1 & 3 & 1 \\ 3 & 9 & 3 \\ 1 & 3 & 1 \end{bmatrix} \tag{25}$$
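As a structural check, the matrices in (25) can be built and the rational form (24) compared against the pre-warped mapping (23) combined with the bilinear substitution (19); since the overall constant depends on normalization conventions, only its constancy across frequencies is asserted:

```python
import numpy as np

# Build M_phi and N_phi from (25) and check that the rational form in
# (24) is proportional to the mapping (23) with s_i given by the
# bilinear transform (19). Only constancy of the ratio is checked.
phi = np.pi / 4
M1 = np.array([[-1, -3, -1], [0, 0, 0], [1, 3, 1]], float)
M2 = np.array([[-1, 0, 1], [-3, 0, 3], [-1, 0, 1]], float)
M = np.cos(phi) * M1 + np.sin(phi) * M2
N = np.array([[1, 3, 1], [3, 9, 3], [1, 3, 1]], float)

def F_direct(z1, z2):
    s1 = 2 * (z1 - 1) / (z1 + 1)
    s2 = 2 * (z2 - 1) / (z2 + 1)
    return 0.95 * (s1 * np.cos(phi) / (1 - 0.05 * s1 ** 2)
                   + s2 * np.sin(phi) / (1 - 0.05 * s2 ** 2))

def F_matrix(z1, z2):
    zv1 = np.array([1, z1, z1 ** 2])
    zv2 = np.array([1, z2, z2 ** 2])
    return (zv1 @ M @ zv2) / (zv1 @ N @ zv2)

pts = [(np.exp(0.3j), np.exp(0.7j)),
       (np.exp(-1.1j), np.exp(0.4j)),
       (np.exp(0.9j), np.exp(-0.2j))]
ratios = np.array([F_direct(z1, z2) / F_matrix(z1, z2)
                   for z1, z2 in pts])
print(ratios)  # the same constant at every frequency pair
```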

Substituting the mapping (24) into the expression (2) of the biquad transfer function *Hb*(*s*) with *b*<sup>1</sup> =0, we get the 2D transfer function *HB*(*z*1, *z*2) in the matrix form:

$$H\_{\mathcal{B}}(\mathbf{z}\_1, \mathbf{z}\_2) = \left(\mathbf{z}\_1 \times \mathbf{B}\_1 \times \mathbf{z}\_2^T\right) \Big/ \left(\mathbf{z}\_1 \times \mathbf{A}\_1 \times \mathbf{z}\_2^T\right) \tag{26}$$

where *z***1** and *z***2** are the vectors:


$$\mathbf{z\_1} = \begin{bmatrix} 1 & z\_1 & z\_1^2 & z\_1^3 & z\_1^4 \end{bmatrix} \quad \mathbf{z\_2} = \begin{bmatrix} 1 & z\_2 & z\_2^2 & z\_2^3 & z\_2^4 \end{bmatrix} \tag{27}$$


and the 5×5 templates *B***1**, *A***1** are given by the expressions:

$$\mathbf{B}\_1 = k^2 \cdot \mathbf{M}\_\phi \ast \mathbf{M}\_\phi + b\_0 \cdot \mathbf{N}\_\phi \ast \mathbf{N}\_\phi \quad \text{;} \quad \mathbf{A}\_1 = k^2 \cdot \mathbf{M}\_\phi \ast \mathbf{M}\_\phi + a\_1 \cdot k \cdot \mathbf{M}\_\phi \ast \mathbf{N}\_\phi + a\_0 \cdot \mathbf{N}\_\phi \ast \mathbf{N}\_\phi \tag{28}$$
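The 5×5 templates of (28) can be formed with a discrete 2D convolution; in the sketch below the values of *b*0, *a*0, *a*1 are placeholders, not those of the chapter's prototype:

```python
import numpy as np
from scipy.signal import convolve2d

# Sketch of (28): the 5x5 templates arise as 2D convolutions of the
# 3x3 matrices from (25). b0, a0, a1 are placeholder biquad values.
phi = np.pi / 4
k, b0, a0, a1 = 1.5233, 0.5, 0.4, 0.9
M1 = np.array([[-1, -3, -1], [0, 0, 0], [1, 3, 1]], float)
M2 = np.array([[-1, 0, 1], [-3, 0, 3], [-1, 0, 1]], float)
M = np.cos(phi) * M1 + np.sin(phi) * M2
N = np.array([[1, 3, 1], [3, 9, 3], [1, 3, 1]], float)

B1 = k ** 2 * convolve2d(M, M) + b0 * convolve2d(N, N)
A1 = (k ** 2 * convolve2d(M, M) + a1 * k * convolve2d(M, N)
      + a0 * convolve2d(N, N))
print(B1.shape, A1.shape)  # both (5, 5), matching the vectors in (27)
```

At *φ* =*π* / 4 both templates come out symmetric, in agreement with the numerical templates printed below.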

For instance, corresponding to the third biquad function *Hb*3(*s*) given by (5), the following 5×5 templates result according to the expressions (28):

$$\mathbf{B}_1 = \begin{bmatrix} 0.2464 & 0.9407 & 1.1418 & 0.4027 & 0.0671 \\ 0.9407 & 3.2233 & 3.8917 & 1.6092 & 0.4027 \\ 1.1418 & 3.8917 & 6.1484 & 3.8917 & 1.1418 \\ 0.4027 & 1.6092 & 3.8917 & 3.2233 & 0.9407 \\ 0.0671 & 0.4027 & 1.1418 & 0.9407 & 0.2464 \end{bmatrix} \quad \mathbf{A}_1 = \begin{bmatrix} 0.0947 & 0.1941 & 0.0732 & -0.0163 & 0.0245 \\ 0.1941 & -0.2738 & -0.7181 & 0.0774 & 0.3112 \\ 0.0732 & -0.7181 & 1.0000 & 2.8851 & 1.2743 \\ -0.0163 & 0.0774 & 2.8851 & 3.6570 & 1.1768 \\ 0.0245 & 0.3112 & 1.2743 & 1.1768 & 0.3131 \end{bmatrix}$$

**Figure 3.** (a) LP correction filter characteristic; frequency response magnitudes and contour plots of: (b), (c) uncorrected diamond-type filter; (d), (e) corrected diamond-type filter

The characteristics of a diamond-type filter with orientation angle *φ* =*π* / 4, based on the prototype filter of order 6 given by the factors (3)-(5), are shown in Fig. 3 (b), (c). As can be noticed, the filter characteristic corrected by pre-warping has good linearity, but it still twists towards the margins of the frequency plane. These marginal linearity distortions can be corrected using an additional LP filter. For instance, we can choose as prototype a 1D elliptic digital filter of order *N* =3, pass-band ripple *Rp* =0.1 dB, stop-band attenuation *Rs* =40 dB and cutoff frequency *ωc* =0.6, which has the coefficients given by the vectors:

$$\mathbf{B}_{C1} = \begin{bmatrix} 0.3513 & 1.01 & 1.01 & 0.3513 \end{bmatrix} \qquad \mathbf{A}_{C1} = \begin{bmatrix} 1 & 0.9644 & 0.6701 & 0.088 \end{bmatrix} \tag{29}$$

The 2D low-pass filter is separable and results by applying the 1D filter successively along the two frequency axes; the 4×4 matrices of the correction filter result as *BC* = *BC*1<sup>T</sup> ⊗ *BC*1, *AC* = *AC*1<sup>T</sup> ⊗ *AC*1, where the symbol ⊗ denotes the outer product of vectors. The correction filter has the following transfer function, where *z***1** = [1  *z*1  *z*1<sup>2</sup>  *z*1<sup>3</sup>] and *z***2** = [1  *z*2  *z*2<sup>2</sup>  *z*2<sup>3</sup>]:

$$H\_{\mathbb{C}}(\mathbf{z}\_1, \mathbf{z}\_2) = \left(\mathbf{z}\_1 \times \mathbf{B}\_{\mathbb{C}} \times \mathbf{z}\_2^T\right) \Big/ \left(\mathbf{z}\_1 \times \mathbf{A}\_{\mathbb{C}} \times \mathbf{z}\_2^T\right) \tag{30}$$
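The separability can be checked numerically; the sketch below builds the outer-product templates from a SciPy elliptic design with the same specifications (SciPy's normalization may differ slightly from the vectors printed in (29)):

```python
import numpy as np
from scipy import signal

# Sketch of the separable 2D correction filter. The 1D prototype uses
# the specs quoted in the text (N = 3, Rp = 0.1 dB, Rs = 40 dB,
# wc = 0.6); the resulting coefficients stand in for (29).
b1d, a1d = signal.ellip(3, 0.1, 40, 0.6)
BC = np.outer(b1d, b1d)        # 4x4 numerator template B_C
AC = np.outer(a1d, a1d)        # 4x4 denominator template A_C

def H1(w, b, a):
    """1D response using the text's vector convention [1 z z^2 z^3]."""
    zv = np.exp(1j * w) ** np.arange(len(b))
    return (zv @ b) / (zv @ a)

def HC(w1, w2):
    """2D response of the correction filter, as in (30)."""
    zv1 = np.exp(1j * w1) ** np.arange(4)
    zv2 = np.exp(1j * w2) ** np.arange(4)
    return (zv1 @ BC @ zv2) / (zv1 @ AC @ zv2)

# separability: H_C(w1, w2) = H1(w1) * H1(w2) by the outer products
ok = np.isclose(HC(0.7, -1.3), H1(0.7, b1d, a1d) * H1(-1.3, b1d, a1d))
print(ok)
```

The outer-product construction guarantees that the 2D response factors exactly into the product of the 1D responses along the two axes.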

**Figure 4.** Diamond-type filters with orientation angle: (a), (b) φ =π / 12; (c), (d) φ =π / 6

The resulting 2D square-shaped correction filter characteristic is shown in Fig. 3 (a) and is almost maximally-flat, as required. The corrected version of the diamond-type filter from Fig. 3 (b), (c) has the magnitude and contour plot shown in Fig. 3 (d), (e). It can be easily noticed that the initial distortions have been eliminated. Another two diamond-type filters, with orientation angles *φ* =*π* / 12 and *φ* =*π* / 6, are shown in Fig. 4 (a)-(d).

## **4. Fan-type recursive filters**

In this section an analytical design method in the frequency domain for 2D fan-type filters is proposed, starting from a 1D analog prototype filter with a transfer function decomposed as a product of elementary functions. Since we envisage designing efficient 2D filters of minimum order, recursive filters are used as prototypes, and the resulting 2D fan-type filters are recursive as well.

In Fig. 5 (a) a general fan-type filter is shown, with an aperture angle ∠*BOD* =*θ*, oriented along an axis *CC*′, its longitudinal axis forming an angle ∠*AOC* =*φ* with the frequency axis *Oω*2. A particular case is the two-quadrant fan filter, shown in Fig. 5 (b). Fig. 5 (c) shows a DFB with an 8-band frequency partition (Bamberger, 1992), an angularly-oriented image decomposition which splits the frequency plane into fan-shaped sub-bands (channels).


**Figure 5.** (a) Ideal fan filter with given aperture, oriented at an angle φ; (b) ideal two-quadrant fan filter; (c) 8-band partition of the frequency plane

The 1D analog filter discussed in section 2 is used as prototype. The general fan-type filter can be derived from a LP prototype using the frequency mapping (Matei & Matei, 2012):

$$\omega \to f_\varphi(\omega_1, \omega_2) = a \cdot \left(\omega_1 \cos\varphi - \omega_2 \sin\varphi\right) \Big/ \left(\omega_1 \sin\varphi + \omega_2 \cos\varphi\right) \tag{31}$$

In (31), *a* =1 / tg(*θ* / 2) is the aperture coefficient, where *θ* is the aperture angle of the fan-type filter. This frequency mapping in the complex variables *s*<sup>1</sup> = *jω*1, *s*<sup>2</sup> = *jω*2 is:

$$s \to f\_{\varphi}(s\_1, s\_2) = j \cdot a \cdot \left(s\_1 \cdot \cos \varphi - s\_2 \cdot \sin \varphi\right) \Big/ \left(s\_1 \cdot \sin \varphi + s\_2 \cdot \cos \varphi\right) \tag{32}$$
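A defining property of the mapping (31) is that it is homogeneous of degree zero, which is what produces the fan (wedge) shaped pass-band; a minimal check, with a hypothetical aperture value:

```python
import numpy as np

# The mapping (31) depends only on the direction of (w1, w2), not on
# the radius: a 1D pass-band in omega therefore maps to a range of
# angles (a wedge) in the (w1, w2) plane. theta is a hypothetical
# aperture angle chosen for illustration.
theta = 0.2 * np.pi
a = 1.0 / np.tan(theta / 2.0)          # aperture coefficient
phi = np.pi / 7

def f(w1, w2):
    num = w1 * np.cos(phi) - w2 * np.sin(phi)
    den = w1 * np.sin(phi) + w2 * np.cos(phi)
    return a * num / den

print(np.isclose(f(0.8, 0.3), f(2.4, 0.9)))  # same value along a ray
```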


**Figure 6.** Frequency response magnitudes and contour plots for: (a) fan-type filter with aperture θ = 0.1π and orientation φ =π / 7; corrected filters with θ = 0.1π, φ =π / 7 (b) and φ = 0 (c)

Applying the same steps as in Section 3.2 in order to obtain a discrete form of the above frequency mapping, using relations (21), (22) and (32) we obtain the 1D to 2D mapping which includes pre-warping along both axes of the frequency plane:

$$s \rightarrow F\_{\varphi}(s\_1, s\_2) = j \cdot a \cdot \frac{\left(s\_1(1 - 0.05s\_2^2)\cos\varphi - s\_2(1 - 0.05s\_1^2)\sin\varphi\right)}{\left(s\_1(1 - 0.05s\_2^2)\sin\varphi + s\_2(1 - 0.05s\_1^2)\cos\varphi\right)}\tag{33}$$

We now apply the bilinear transform (19) along the two axes and obtain the mapping *s* →*Fφ*(*z*1, *z*2) in matrix form, where *z***1** = [1  *z*1  *z*1<sup>2</sup>] and *z***2** = [1  *z*2  *z*2<sup>2</sup>]:

$$s \to F_\varphi(z_1, z_2) = j \cdot a \cdot P_\varphi(z_1, z_2) \Big/ Q_\varphi(z_1, z_2) = j \cdot a \cdot \left(\mathbf{z}_1 \times \mathbf{P}_\varphi \times \mathbf{z}_2^T\right) \Big/ \left(\mathbf{z}_1 \times \mathbf{Q}_\varphi \times \mathbf{z}_2^T\right) \tag{34}$$

and the 3×3 matrices *P*φ and *Q*φ are given by:


$$\mathbf{P}_\varphi = \cos\varphi \cdot \begin{bmatrix} -1 & -3 & -1 \\ 0 & 0 & 0 \\ 1 & 3 & 1 \end{bmatrix} - \sin\varphi \cdot \begin{bmatrix} -1 & 0 & 1 \\ -3 & 0 & 3 \\ -1 & 0 & 1 \end{bmatrix}; \quad \mathbf{Q}_\varphi = \sin\varphi \cdot \begin{bmatrix} -1 & -3 & -1 \\ 0 & 0 & 0 \\ 1 & 3 & 1 \end{bmatrix} + \cos\varphi \cdot \begin{bmatrix} -1 & 0 & 1 \\ -3 & 0 & 3 \\ -1 & 0 & 1 \end{bmatrix} \tag{35}$$

Substituting the mapping (34) into the biquad expression (2) with *b*1 =0, we get the 2D transfer function in matrix form *HW*1(*z*1, *z*2)=(*z***1** × *B*2 × *z***2**<sup>T</sup>) / (*z***1** × *A*2 × *z***2**<sup>T</sup>), similar to (26), where the vectors *z***1**, *z***2** are given by (27). The 5×5 templates *B*2 and *A*2 are given by:

$$\mathbf{B}_2 = b_0 \cdot \mathbf{Q}_\varphi \ast \mathbf{Q}_\varphi - a^2 \cdot \mathbf{P}_\varphi \ast \mathbf{P}_\varphi; \quad \mathbf{A}_2 = a_0 \cdot \mathbf{Q}_\varphi \ast \mathbf{Q}_\varphi - a^2 \cdot \mathbf{P}_\varphi \ast \mathbf{P}_\varphi + j \cdot a \cdot a_1 \cdot \mathbf{P}_\varphi \ast \mathbf{Q}_\varphi \tag{36}$$

The 2D transfer function for each biquad is complex. The characteristics of a fan-type filter designed with this method, based on the prototype filter of order 4 given by (6)-(7), are shown in Fig. 6 (a) for the indicated parameters. As with the diamond-type filter analyzed in the previous section, the fan-type filter characteristic features marginal linearity distortions, which can be corrected using a LP filter similar to the correction filter used in Section 3.2, with the frequency characteristic shown in Fig. 3 (a).
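As a toy illustration of the directional selectivity such filters provide (here with an ideal FFT-domain wedge mask rather than the chapter's recursive designs), the snippet below separates two stripe orientations in a synthetic image:

```python
import numpy as np

# Toy illustration of fan-like directional selectivity using an ideal
# FFT-domain wedge mask, a crude stand-in for the recursive fan
# filters, applied to a synthetic image mixing two stripe orientations.
n = 128
y, x = np.mgrid[0:n, 0:n]
g_v = np.cos(2 * np.pi * 10 * x / n)    # vertical stripes
g_h = np.cos(2 * np.pi * 10 * y / n)    # horizontal stripes
img = g_v + g_h

F = np.fft.fftshift(np.fft.fft2(img))
f = np.fft.fftshift(np.fft.fftfreq(n))
W1, W2 = np.meshgrid(f, f, indexing="ij")   # axis 0 = rows (y)
ang = np.mod(np.arctan2(W2, W1), np.pi)     # spectral angle, mod pi

# wedge around angle 0 keeps spectral energy on the row-frequency
# axis, i.e. the horizontal stripes
mask = (ang < 0.3) | (ang > np.pi - 0.3)
out = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

kept = np.mean(out * g_h)       # correlation with the passed grating
dropped = np.mean(out * g_v)    # correlation with the rejected one
print(kept > 10 * abs(dropped))
```

A recursive fan filter achieves the same kind of angular selection with a rational transfer function instead of an ideal mask.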

(a) (b) (c)

Analytical Design of Two-Dimensional Filters and Applications in Biomedical Image Processing

http://dx.doi.org/10.5772/52195

289


**Figure 7.** (a) Frequency response magnitude and (b) contour plot for the 2-quadrant fan filter

Two corrected fan-type filters with specified parameters have the magnitudes and contour plots shown in Fig.6 (b), (c). The initial distortions have been eliminated. With the same correction filter, we obtain the two-quadrant fan filter, shown in Fig. 7, by setting the aperture angle *θ* =*π* / 2 and orientation angle *φ* =*π* / 4.

### **5. Very selective multidirectional IIR Filters**

In this section a design method based on spectral transformations is proposed for another class of 2D IIR filters, namely multi-directional filters. The design starts from an analog prototype with specified parameters. Applying an appropriate frequency transformation to the 1D transfer function, the desired 2D filter is directly obtained in a factorized form, like the filters designed in the previous sections. For two-directional filters, an example is given of extracting lines with two different orientations from a test image. The spectral transformation used in the case of multi-directional filters is similar to the one presented in the previous section, derived for fan-type filters and given by (34), (35). In this section the design of two-directional and three-directional filters with specified orientation is detailed. The method can be easily generalized to arbitrary multi-directional filters.

**Figure 8.** Ideal shapes of some directional filters in the frequency plane: (a) two-directional filter; (b) three-directional filter; (c) three-band selective filter with ω<sup>01</sup> = 0.59π, ω<sup>02</sup> = −0.36π

#### **5.1. Two-directional fan-type filters**


A two-directional 2D filter is orientation-selective along two directions in the frequency plane. It is based on a selective resonant IIR prototype as given in Section 2. Applying the same frequency transformation *s* → *Fφ*(*z*1, *z*2) derived for fan-type filters and given by (33) to the prototype filter (9), we get the 2D two-directional transfer function *H*2(*z*1, *z*2) in matrix form *H*2(*z*1, *z*2) = (**z**1 × *B*3 × **z**2<sup>T</sup>) / (**z**1 × *A*3 × **z**2<sup>T</sup>), similar to (26), but the 5×5 matrices *B*3, *A*3 now have the form:

$$\mathbf{B}\_3 = a \cdot a\_0 \cdot \mathbf{P}\_{\varphi} \ast \mathbf{Q}\_{\varphi} \; ; \quad \mathbf{A}\_3 = a \cdot a\_0 \cdot \mathbf{P}\_{\varphi} \ast \mathbf{Q}\_{\varphi} + j \left( a^2 \cdot \mathbf{P}\_{\varphi} \ast \mathbf{P}\_{\varphi} - a\_0^2 \cdot \mathbf{Q}\_{\varphi} \ast \mathbf{Q}\_{\varphi} \right) \tag{37}$$

The denominator matrix *A*3 has complex elements. In Fig.9 (a), the contour plot of the frequency response magnitude is shown for a two-directional filter with aperture *θ* = *π* / 6 and orientation *φ* =*π* / 5. As with the previous types of filters, the marginal linearity distortions can be corrected using an additional LP square-shaped filter.

The templates *B*3*C* and *A*3*C* of the corrected filter result by convolution: *B*3*<sup>C</sup>* = *B*3∗ *BC* and *A*3*<sup>C</sup>* = *A*3∗ *AC*. In Fig.9 (b), (c) the frequency response magnitudes and contour plots are displayed for the corrected two-directional filter with specified aperture and orientation. The initial distortions have been eliminated.
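As a numerical sketch of this correction step, the snippet below combines a pair of 5×5 templates with a 3×3 correction pair by full 2D convolution. The template values are random placeholders, not the chapter's design coefficients; only the combination *B*3*C* = *B*3 ∗ *BC*, *A*3*C* = *A*3 ∗ *AC* and the resulting 7×7 template size are illustrated.

```python
import numpy as np

def conv2d_full(t1, t2):
    """Full 2D linear convolution of two filter templates (numpy only)."""
    r1, c1 = t1.shape
    r2, c2 = t2.shape
    out = np.zeros((r1 + r2 - 1, c1 + c2 - 1), dtype=complex)
    for i in range(r1):
        for j in range(c1):
            out[i:i + r2, j:j + c2] += t1[i, j] * t2
    return out

# Placeholder 5x5 directional templates and a 3x3 LP correction pair
# (random stand-ins, not the chapter's actual coefficient values).
rng = np.random.default_rng(0)
B3 = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
A3 = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
BC = np.ones((3, 3)) / 9.0
AC = np.eye(3)

# Corrected templates B3C = B3 * BC, A3C = A3 * AC (2D convolutions)
B3C = conv2d_full(B3, BC)
A3C = conv2d_full(A3, AC)
print(B3C.shape, A3C.shape)  # (7, 7) (7, 7)
```

Since template convolution corresponds to polynomial multiplication, the corrected transfer function is the product of the directional and correction transfer functions.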

The second two-directional filter in Fig.9 (d), (e) is a particular case, being oriented along the two frequency axes (*θ* = *π* / 2, *φ* = *π* / 4), and therefore can be used to detect simultaneously horizontal and vertical lines in an image, as shown in the next section.

#### **5.2. Three-directional fan-type filters**

In order to design a three-directional filter like the one depicted in Fig. 8 (b), we must start from an analog three-band selective filter, like the one with the frequency response shown in Fig. 8 (c).


**Figure 9.** (a) Contour plot of a two-directional filter with θ = π / 6, φ =π / 5; frequency response magnitudes and contour plots of two corrected two-directional filters with parameters: θ = π / 6, φ =π / 5 (b), (c) and θ = π / 2, φ =π / 4 (d), (e)

For a three-directional filter, the middle peak frequency can always be taken *ω*0 = 0, and the other two placed on each side at specified values. The prototype transfer function *HP*(*s*) in the variable *s* will in this case be the sum of three elementary functions:

$$H\_P(s) = \frac{B\_P(s)}{A\_P(s)} = \frac{\alpha}{s+\alpha} + \frac{\alpha}{s+\alpha+j \cdot \omega\_{01}} + \frac{\alpha}{s+\alpha+j \cdot \omega\_{02}}\tag{38}$$

The frequency response of a filter of this kind with parameter values *α* = 0.03, *ω*01 = 0.59*π* and *ω*02 = −0.36*π* is shown in Fig. 8 (c). Substituting the mapping (34) into the expression (8) of the elementary function *HjS*(*s*), we get the 2D transfer function *H*1(*z*1, *z*2) in matrix form: *H*1(*z*1, *z*2) = (**z**1 × *Bb* × **z**2<sup>T</sup>) / (**z**1 × *Ab* × **z**2<sup>T</sup>), where **z**1 = [1  *z*1  *z*1<sup>2</sup>], **z**2 = [1  *z*2  *z*2<sup>2</sup>] and the 3×3 templates *Bb*, *Ab* are given by:

$$\mathbf{B}\_b = \alpha \cdot \mathbf{Q}\_{\varphi} \; ; \quad \mathbf{A}\_b = \alpha \cdot \mathbf{Q}\_{\varphi} + j \cdot \left( a \cdot \mathbf{P}\_{\varphi} + \omega\_0 \cdot \mathbf{Q}\_{\varphi} \right) \tag{39}$$

Each of the three elementary terms in (38) corresponds to a pair of 3×3 templates *Bb* and *A<sup>b</sup>* given by (39). If the three elementary filters are given by the pairs of templates (*Bb*1, *Ab*1), (*Bb*2, *Ab*2) and (*Bb*3, *Ab*3), the templates of size 7×7 of the entire three-directional filter will result by summing up the convolutions of elementary templates:

$$\mathbf{B}\_3 = \mathbf{B}\_{b1} \ast \mathbf{A}\_{b2} \ast \mathbf{A}\_{b3} + \mathbf{A}\_{b1} \ast \mathbf{B}\_{b2} \ast \mathbf{A}\_{b3} + \mathbf{A}\_{b1} \ast \mathbf{A}\_{b2} \ast \mathbf{B}\_{b3} \; ; \quad \mathbf{A}\_3 = \mathbf{A}\_{b1} \ast \mathbf{A}\_{b2} \ast \mathbf{A}\_{b3} \tag{40}$$
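Equation (40) is the usual sum-of-fractions combination over a common denominator, with template convolution playing the role of polynomial multiplication. The sketch below checks this numerically on three random 3×3 template pairs, used as placeholders for the pairs produced by (39).

```python
import numpy as np

def conv2(a, b):
    """Full 2D convolution of two templates (2D polynomial product)."""
    out = np.zeros((a.shape[0] + b.shape[0] - 1,
                    a.shape[1] + b.shape[1] - 1), dtype=complex)
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            out[i:i + b.shape[0], j:j + b.shape[1]] += a[i, j] * b
    return out

def ev(T, z1, z2):
    """Evaluate the 2D polynomial with coefficient template T at (z1, z2)."""
    n, m = T.shape
    return (z1 ** np.arange(n)) @ T @ (z2 ** np.arange(m))

# Three placeholder 3x3 elementary template pairs (Bb_k, Ab_k).
rng = np.random.default_rng(0)
Bb = [rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)) for _ in range(3)]
Ab = [rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)) for _ in range(3)]

# Equation (40): combine the three elementary fractions over A3 = Ab1*Ab2*Ab3.
B3 = (conv2(conv2(Bb[0], Ab[1]), Ab[2]) +
      conv2(conv2(Ab[0], Bb[1]), Ab[2]) +
      conv2(conv2(Ab[0], Ab[1]), Bb[2]))
A3 = conv2(conv2(Ab[0], Ab[1]), Ab[2])

# Sanity check: B3/A3 equals the sum of the three elementary fractions.
z1, z2 = np.exp(0.4j), np.exp(-0.9j)
lhs = ev(B3, z1, z2) / ev(A3, z1, z2)
rhs = sum(ev(Bb[i], z1, z2) / ev(Ab[i], z1, z2) for i in range(3))
print(abs(lhs - rhs) < 1e-6)  # True
```

The 3×3 templates convolved twice indeed yield 7×7 templates, as stated above.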

The numerator *BP*(*s*) of *HP*(*s*) from (38) has the general form:


$$B\_P(s) = \alpha \cdot \left(a\_2 \cdot s^2 + a\_1 \cdot s + a\_0\right) \tag{41}$$

where *a*<sup>2</sup> =3, *a*<sup>1</sup> =6*<sup>α</sup>* <sup>+</sup> *<sup>j</sup>* <sup>⋅</sup>2(*ω*<sup>01</sup> <sup>+</sup> *<sup>ω</sup>*02) and *a*<sup>0</sup> =3*<sup>α</sup>* <sup>2</sup> <sup>−</sup>*ω*<sup>01</sup> <sup>⋅</sup>*ω*<sup>02</sup> <sup>+</sup> *<sup>j</sup>* <sup>⋅</sup>2*α*(*ω*<sup>01</sup> <sup>+</sup> *<sup>ω</sup>*02).

We see that *a*2 is real and *a*0, *a*1 are generally complex. The coefficients *a*0, *a*1 are real only when *ω*02 = −*ω*01, i.e. for frequency values symmetric about the origin. Finally, for any specified set of values *α*, *ω*01, *ω*02 the numerator factorizes as *BP*(*s*) = 3*α* ⋅ (*s* + *r*1)(*s* + *r*2), where *r*1 and *r*2 are complex roots. Therefore the factorized prototype transfer function is:

$$H\_P(s) = \frac{B\_P(s)}{A\_P(s)} = \frac{3\alpha \cdot (s + r\_1)(s + r\_2)}{(s + \alpha)(s + p\_1)(s + p\_2)}\tag{42}$$
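The factorization can be checked numerically with the parameter values used in the text (*α* = 0.03, *ω*01 = 0.59*π*, *ω*02 = −0.36*π*); the roots of *a*2*s*² + *a*1*s* + *a*0 give *r*1, *r*2 up to a sign.

```python
import numpy as np

# Parameters from the chapter's example.
alpha, w01, w02 = 0.03, 0.59 * np.pi, -0.36 * np.pi

a2 = 3.0
a1 = 6 * alpha + 2j * (w01 + w02)
a0 = 3 * alpha**2 - w01 * w02 + 2j * alpha * (w01 + w02)

# Roots of a2*s^2 + a1*s + a0 = 0; with BP(s) = 3*alpha*(s + r1)(s + r2)
# we have r1 = -root1, r2 = -root2.
roots = np.roots([a2, a1, a0])
r1, r2 = -roots[0], -roots[1]

# Compare BP(s) = alpha*(3 s^2 + a1 s + a0) with its factorized form.
s = 0.7 + 0.3j  # arbitrary test point
bp_poly = alpha * (a2 * s**2 + a1 * s + a0)
bp_fact = 3 * alpha * (s + r1) * (s + r2)
print(abs(bp_poly - bp_fact) < 1e-9)  # True
```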

**Figure 10.** Frequency response magnitude (a) and contour plot (b) of a corrected three-directional filter with parameters: θ = 0.23π, φ = 0.27π, ω<sup>01</sup> = 0.59π, ω<sup>02</sup> = −0.36π

At the denominator, we denoted *p*<sup>1</sup> =*α* + *j* ⋅*ω*01, *p*<sup>2</sup> =*α* + *j* ⋅*ω*02. Applying to each factor the frequency transformation *s* →*Fφ*(*z*1, *z*2) given by (34), after some algebraic manipulations, we finally obtain the templates of the three-directional filter as discrete convolutions:

$$\mathbf{B}\_3 = 3\alpha \cdot \mathbf{Q}\_{\varphi} \ast \left( r\_1 \mathbf{Q}\_{\varphi} + ja\mathbf{P}\_{\varphi} \right) \ast \left( r\_2 \mathbf{Q}\_{\varphi} + ja\mathbf{P}\_{\varphi} \right) \tag{43}$$


$$\mathbf{A}\_{3} = \left( \alpha\mathbf{Q}\_{\varphi} + ja\mathbf{P}\_{\varphi} \right) \ast \left( p\_{1}\mathbf{Q}\_{\varphi} + ja\mathbf{P}\_{\varphi} \right) \ast \left( p\_{2}\mathbf{Q}\_{\varphi} + ja\mathbf{P}\_{\varphi} \right) \tag{44}$$

This implies that the transfer function *H*3(*z*1, *z*2) of the 2D three-directional filter, with templates *B*3 and *A*3 of size 7×7, results directly in factorized form, which is an important advantage in implementation. As a general remark on the method, using an analog prototype instead of a digital one, as is usually done, simplifies the design in this case, since the frequency mapping is simpler and leads to a 2D filter of lower complexity. The designed filters have complex coefficients; however, such IIR filters can also be implemented (Nikolova et al., 2011).
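A template pair of this kind can be evaluated directly on the unit bicircle to obtain the frequency response. The sketch below uses random complex placeholder templates; only the matrix-form evaluation *H* = (**z**1 · *B* · **z**2<sup>T</sup>) / (**z**1 · *A* · **z**2<sup>T</sup>) is illustrated.

```python
import numpy as np

def freq_response(B, A, w1, w2):
    """Evaluate H(z1, z2) = (z1 x B x z2^T) / (z1 x A x z2^T) on the
    unit bicircle, z_k = exp(j*w_k), with z-vectors [1, z, z^2, ...]."""
    z1 = np.exp(1j * w1) ** np.arange(B.shape[0])
    z2 = np.exp(1j * w2) ** np.arange(B.shape[1])
    return (z1 @ B @ z2) / (z1 @ A @ z2)

# Placeholder complex-coefficient 3x3 templates (not actual design values).
rng = np.random.default_rng(1)
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# At (w1, w2) = (0, 0) the z-vectors are all ones, so H must equal
# the ratio of the template element sums -- a quick consistency check.
H00 = freq_response(B, A, 0.0, 0.0)
print(np.isclose(H00, B.sum() / A.sum()))  # True
```

Sweeping `w1`, `w2` over a grid in [−π, π] × [−π, π] produces the magnitude plots shown in the figures.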

### **6. Directional IIR filters designed in polar coordinates**

We approach here a particular class of 2D filters, namely filters whose frequency response is symmetric about the origin and at the same time has an angular periodicity. The contour plots of their frequency response, obtained as sections with planes parallel to the frequency plane, are closed curves which can be described in terms of a variable radius that is a periodic function of the current angle formed with one of the axes.

It can be described in polar coordinates by *ρ* = *ρ*(*φ*), where *φ* is the angle formed by the radius *OP* with the *ω*1-axis, as shown in Fig.8(a) for a four-lobe filter. Therefore *ρ*(*φ*) is a periodic function of the angle *φ* in the range *φ* ∈ [0, 2*π*].

#### **6.1. Spectral transformation for filters designed in polar coordinates**

The main issue approached here is to find the transfer function of the desired 2D filter *H*2*D*(*z*1, *z*2) using appropriate frequency transformations of the form *ω* →*F* (*ω*1, *ω*2). The elementary transfer functions (14) and (15) have the complex frequency responses:

$$H\_1(j\omega) = \frac{b\_0 + b\_1 \cos\omega + jb\_1 \sin\omega}{a\_0 + \cos\omega + j\sin\omega}\tag{45}$$

$$H\_2(j\omega) = \frac{b\_1 + (b\_2 + b\_0)\cos\omega + j(b\_2 - b\_0)\sin\omega}{a\_1 + (1 + a\_0)\cos\omega + j(1 - a\_0)\sin\omega} = \frac{P(\omega)}{Q(\omega)}\tag{46}$$

The proposed design method for these 2D filters is based on the frequency transformation:

$$F: \mathbb{R} \to \mathbb{C}^2, \; \omega^2 \to F(z\_1, z\_2) = B\_f(z\_1, z\_2) / A\_f(z\_1, z\_2) \tag{47}$$

which maps the real frequency axis *ω* onto the complex plane (*z*1, *z*2), defined by the real frequency mapping:


$$F\_1: \mathbb{R} \to \mathbb{R}^2, \; \omega^2 \to F\_1(\omega\_1, \omega\_2) = \left(\omega\_1^2 + \omega\_2^2\right) \Big/ \rho(\omega\_1, \omega\_2) \tag{48}$$

In (48) *ρ*(*ω*1, *ω*2) is initially determined in the angle variable *φ* as *ρ*(*φ*) and can be referred to as a *radial compressing function*. In the frequency plane (*ω*1, *ω*2) we have:

$$\cos^2\varphi = \omega\_1^2 \Big/ \left(\omega\_1^2 + \omega\_2^2\right) \tag{49}$$

**Figure 11.** (a) contour plot of a four-lobe filter; (b) periodic function ρ(φ); (c) LP prototype

If the radial function *ρ*(*φ*) can be expressed in the variable cos*φ*, using (49) we obtain by substitution the function *ρ*(*ω*1, *ω*2). The function *ρ*(*φ*) will result as a polynomial or a ratio of polynomials in cos*φ*. For instance, the four-lobe filter with the contour plot given in Fig.11 (a) corresponds to a function:

$$\rho(\varphi) = a + b\cos 4\varphi = a + b - 8b\cos^2\varphi + 8b\cos^4\varphi \tag{50}$$

plotted in Fig.11 (b) on the range *φ* ∈ 0, 2*π* . More generally, the 2D filter can be rotated in the frequency plane with a specified angle *φ*0 about one of the frequency axes, e.g. *O* −*ω*2. For instance, in a four-lobe filter, two opposite lobes are oriented along a direction at an angle *φ*0, and the other two at *φ*<sup>0</sup> + *π* / 2, as shown in Fig.12 (b). It can be shown that the cosine of the current angle *φ* with initial phase *φ*0 can be expressed as:

$$\cos^2(\varphi + \varphi\_0) = \left(\cos^2\varphi\_0 \cdot o\_1^2 + \sin^2\varphi\_0 \cdot o\_2^2 + 0.5\sin2\varphi\_0 \cdot o\_1o\_2\right) \Big/ \left(o\_1^2 + o\_2^2\right) \tag{51}$$
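The identities used here are easy to verify numerically. The sketch below checks the cos 4*φ* expansion behind (50) and the substitution cos²*φ* = *ω*1² / (*ω*1² + *ω*2²) on a circle in the frequency plane; the lobe parameters *a*, *b* are illustrative values, not taken from the text.

```python
import numpy as np

phi = np.linspace(0.0, 2.0 * np.pi, 721)

# Identity behind (50): cos(4*phi) = 1 - 8*cos(phi)^2 + 8*cos(phi)^4
lhs = np.cos(4.0 * phi)
rhs = 1.0 - 8.0 * np.cos(phi) ** 2 + 8.0 * np.cos(phi) ** 4
print(np.allclose(lhs, rhs))  # True

# Substitution into the frequency plane: on a circle of radius rho0,
# cos(phi)^2 = w1^2 / (w1^2 + w2^2), so rho(phi) and rho(w1, w2) agree.
a, b = 1.0, 0.5          # illustrative lobe parameters (not from the text)
rho0 = 1.3               # arbitrary radius
w1, w2 = rho0 * np.cos(phi), rho0 * np.sin(phi)
c2 = w1 ** 2 / (w1 ** 2 + w2 ** 2)
rho_angle = a + b * np.cos(4.0 * phi)
rho_cart = a + b - 8.0 * b * c2 + 8.0 * b * c2 ** 2
print(np.allclose(rho_angle, rho_cart))  # True
```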

For filters with an even number of lobes, the radial function *ρ*(*φ*) is expressed in even powers of cos*φ* or cos(*φ* + *φ*0). The frequency transformation (48) can be also expressed as:

$$\omega \to \sqrt{\left(\omega\_1^2 + \omega\_2^2\right) \Big/ \rho(\omega\_1, \omega\_2)} = \sqrt{F\_1(\omega\_1, \omega\_2)}\tag{52}$$


In order to obtain a rational expression for the frequency response of the 2D filter from an elementary 1D prototype of the form (45) or (46) by applying the frequency mapping (52), we need to derive rational expressions for the functions cos *ω* and sin *ω*. Using the Chebyshev-Padé method in a symbolic computation software, the following second-order rational approximations were found:

$$\cos\sqrt{\omega} \cong \left( 1.0559 - 0.086514 \cdot \omega - 0.1304 \cdot \omega^2 \right) \Big/ \left( 1 + 0.75 \cdot \omega - 0.110583 \cdot \omega^2 \right) = C\_S(\omega) / A\_S(\omega) \tag{53}$$

$$\sin\sqrt{\omega} \cong \left( 0.167 + 1.46287 \cdot \omega - 0.259815 \cdot \omega^2 \right) \Big/ \left( 1 + 0.75 \cdot \omega - 0.110583 \cdot \omega^2 \right) = S\_S(\omega) / A\_S(\omega) \tag{54}$$

which are sufficiently accurate on the range *ω* ∈ [0, *π*]. Since these functions are developed on the range [0, *π*], their approximations are neither odd nor even. However, using the above approximations will lead to relatively complex 2D filters, described by templates of size at least 9×9. For the type of filters approached, namely selective two-directional (four-lobe) filters, the approximations for cos *ω* and sin *ω* need not hold throughout the range [0, *π*], but only on a smaller range near the origin, corresponding to the filter pass-band. Using now the Padé method we get the first-order approximations:

$$\sin\sqrt{\omega} \cong (s\_0 + s\_1\omega) / (1 + r\omega) \qquad \cos\sqrt{\omega} \cong (c\_0 + c\_1\omega) / (1 + r\omega) \tag{55}$$

with *s*0 = 0.0928, *s*1 = 2.5218, *c*0 = 1.0104, *c*1 = 1.2193, *r* = 1.979, which hold only on a narrower range around zero of the interval *ω* ∈ [0, *π*]. Using (55) instead of (53), (54) will result in much more efficient 2D filters, which fully satisfy the imposed specifications.
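A quick numerical check of (55) with these constants confirms that the first-order approximations are usable only near the origin; the range [0, 0.5] and the 0.15 error bound below are illustrative choices, not specifications from the text.

```python
import numpy as np

# First-order Pade-style approximations (55) with the chapter's constants.
s0, s1, c0, c1, r = 0.0928, 2.5218, 1.0104, 1.2193, 1.979

w = np.linspace(0.0, 0.5, 501)   # narrow range near the origin
sin_approx = (s0 + s1 * w) / (1 + r * w)
cos_approx = (c0 + c1 * w) / (1 + r * w)

# Maximum deviations from the exact sin(sqrt(w)), cos(sqrt(w))
err_sin = np.max(np.abs(sin_approx - np.sin(np.sqrt(w))))
err_cos = np.max(np.abs(cos_approx - np.cos(np.sqrt(w))))
print(err_sin < 0.15 and err_cos < 0.15)  # True
```

Repeating the check on the full range [0, π] shows much larger errors, consistent with the remark that (55) holds only near zero.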

We will use here a Chebyshev low-pass second-order filter of the general form (15). For this type of filter we have the coefficient symmetry *b*<sup>2</sup> =*b*0. According to (46) we can write:

$$H\_2\left(j\sqrt{\omega}\right) = \frac{b\_1 + 2b\_0 \cos\sqrt{\omega}}{a\_1 + (1 + a\_0)\cos\sqrt{\omega} + j(1 - a\_0)\sin\sqrt{\omega}} = \frac{P(\sqrt{\omega})}{Q(\sqrt{\omega})}\tag{56}$$

The numerator is real because the imaginary part cancels. Substituting the expressions (55) into this complex frequency response we get the rational approximation:

$$H\_2\left(j\sqrt{\omega}\right) \cong \frac{b\_1(1+r\omega) + 2b\_0(c\_0 + c\_1\omega)}{a\_1(1+r\omega) + (1+a\_0)(c\_0 + c\_1\omega) + j \cdot (1-a\_0)(s\_0 + s\_1\omega)}\tag{57}$$

which can be also written as:


$$H\_2\left(j\omega\right) \cong \frac{b\_1(1+r\omega^2) + 2b\_0(c\_0 + c\_1\omega^2)}{a\_1(1+r\omega^2) + (1+a\_0)(c\_0 + c\_1\omega^2) + j \cdot (1-a\_0)(s\_0 + s\_1\omega^2)}\tag{58}$$

The function (58) has even parity, since it is expressed as a rational function in *ω*<sup>2</sup>.

#### **6.2. Two-directional filter design**

We approach now the design of a particular filter type designed in polar coordinates, namely two-directional (selective four-lobe) filters along the two plane axes or with a specified orientation angle. Let us consider the radial function given by:

$$H\_r(\varphi) = 1 \Big/ \left( p + 1 - p \cdot \tilde{B}(\varphi) \right) \tag{59}$$

where *B̃*(*φ*) is a periodic function; let *B̃*(*φ*) = cos(4*φ*). We use it to design a 2D filter with four narrow lobes in the plane (*ω*1, *ω*2). Using trigonometric identities, (59) becomes:

$$H_r(\varphi) = 1 \Big/ \left(1 + 8p \cdot (\cos\varphi)^2 - 8p \cdot (\cos\varphi)^4\right)\tag{60}$$

and is plotted for *φ* ∈ [−*π*, *π*] in Fig.12(a). This is a periodic function with period *Φ* = *π* / 2 and has the shape of a multi-band ("comb") filter. In order to control the amplitude of this function, we introduce another parameter *k*, such that the radial function *ρ*(*φ*) takes the form *ρ*(*φ*) = *k* ⋅ *H<sub>r</sub>*(*φ*). Using (49), we get:

$$\omega^2 \to F(\omega_1, \omega_2) = \left(\omega_1^4 + (2+8p)\,\omega_1^2\omega_2^2 + \omega_2^4\right) \Big/ \left(k\left(\omega_1^2 + \omega_2^2\right)\right)\tag{61}$$

and the function *F*2(*s*1, *s*2) of the form:

$$F\_2(s\_1, s\_2) = -\left(s\_1^4 + (2 + 8p)s\_1^2 s\_2^2 + s\_2^4\right) \Big/ \left(k(s\_1^2 + s\_2^2)\right) \tag{62}$$
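The comb behaviour of the radial function (60) can be checked numerically. A brief sketch (Python/NumPy; *p* takes the value chosen later in the design example):

```python
import numpy as np

p = 30.0                                   # selectivity parameter
phi = np.linspace(-np.pi, np.pi, 721)      # angular step pi/360
Hr = 1.0 / (1.0 + 8*p*np.cos(phi)**2 - 8*p*np.cos(phi)**4)

i_axis = np.argmin(np.abs(phi))            # phi = 0 (axis direction)
i_diag = np.argmin(np.abs(phi - np.pi/4))  # phi = pi/4 (diagonal)
print(Hr[i_axis])   # ~1: pass-band lobe on the axis
print(Hr[i_diag])   # ~1/(1+2p): deep stop-band on the diagonal
```

The four unit-height lobes fall on the two axes, and the function repeats every *π*/2 in *φ*.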

Finally we derive a transfer function of the 2D filter *H*(*z*<sub>1</sub>, *z*<sub>2</sub>) in the complex plane (*z*<sub>1</sub>, *z*<sub>2</sub>). This can be achieved if we find a discrete counterpart of the function *ρ*(*ω*<sub>1</sub>, *ω*<sub>2</sub>), denoted *R*(*z*<sub>1</sub>, *z*<sub>2</sub>). A possible method is to express the function *ρ*(*ω*<sub>1</sub>, *ω*<sub>2</sub>) in the complex plane (*s*<sub>1</sub>, *s*<sub>2</sub>) and then find the appropriate mapping to (*z*<sub>1</sub>, *z*<sub>2</sub>), using the bilinear transform or the Euler approximation. Even if the bilinear transform is generally the more accurate mapping, especially if frequency pre-warping is applied to compensate for distortions, for the particular type of 2D filter approached here the Euler formula leads to more efficient filters with better characteristics at the same filter order. We will use the backward Euler method, which approximates the spatial derivative ∂*X* / ∂*x* by *X*[*n*] − *X*[*n* − 1], replacing *s* by *s* = 1 − *z*<sup>−1</sup>. On the two directions of the plane we have *s*<sub>1</sub> = 1 − *z*<sub>1</sub><sup>−1</sup> and *s*<sub>2</sub> = 1 − *z*<sub>2</sub><sup>−1</sup>. The operators *s*<sub>1</sub><sup>2</sup>, *s*<sub>2</sub><sup>2</sup> and *s*<sub>1</sub>*s*<sub>2</sub> correspond to second-order partial derivatives: ∂<sup>2</sup> / ∂*x*<sup>2</sup> ↔ *s*<sub>1</sub><sup>2</sup> = *z*<sub>1</sub> + *z*<sub>1</sub><sup>−1</sup> − 2, ∂<sup>2</sup> / ∂*y*<sup>2</sup> ↔ *s*<sub>2</sub><sup>2</sup> = *z*<sub>2</sub> + *z*<sub>2</sub><sup>−1</sup> − 2, ∂<sup>2</sup> / ∂*x*∂*y* ↔ *s*<sub>1</sub>*s*<sub>2</sub>.
For the mixed operator *s*<sub>1</sub>*s*<sub>2</sub>, applying the Euler formula repeatedly, we get the expression (Matei, 2011 a): 2*s*<sub>1</sub>*s*<sub>2</sub> = *z*<sub>1</sub> + *z*<sub>1</sub><sup>−1</sup> + *z*<sub>2</sub> + *z*<sub>2</sub><sup>−1</sup> − 2 − *z*<sub>1</sub>*z*<sub>2</sub><sup>−1</sup> − *z*<sub>1</sub><sup>−1</sup>*z*<sub>2</sub>. Substituting the above relations into (62) we obtain a frequency mapping similar to (47), with the templates:

$$\mathbf{B}\_{f} = \begin{bmatrix} 0 & 0 & 1 & 0 & 0 \\ 0 & 2+8p & -8-16p & 2+8p & 0 \\ 1 & -8-16p & 20+32p & -8-16p & 1 \\ 0 & 2+8p & -8-16p & 2+8p & 0 \\ 0 & 0 & 1 & 0 & 0 \end{bmatrix} \mathbf{A}\_{f} = k \cdot \begin{bmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{bmatrix} \* \begin{bmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{bmatrix} = k \cdot \mathbf{A}\_{1} \* \mathbf{A}\_{1} \tag{63}$$
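The template entries in (63) can be reproduced as 2D convolutions of the elementary operator templates. A sketch using NumPy/SciPy (*p* and *k* take the values chosen later in the example):

```python
import numpy as np
from scipy.signal import convolve2d

p, k = 30, 10
A1 = np.array([[0, 1, 0],
               [1, -4, 1],
               [0, 1, 0]])                   # Laplacian template
S = np.outer([1, -2, 1], [1, -2, 1])         # mixed s1^2 * s2^2 term

Bf = convolve2d(A1, A1) + 8*p*np.pad(S, 1)   # 5x5 numerator template
Af = k * convolve2d(A1, A1)                  # 5x5 denominator template
print(Bf)
```

The central entry of `Bf` comes out as 20 + 32*p*, and the inner cross entries as 2 + 8*p* and −8 − 16*p*, matching (63).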

The template *Af* results as a convolution of two 3×3 matrices. The last step in the design of this 2D filter is to apply the frequency transformation (61) to the frequency response (58) and we find the filter templates B and A as linear combinations of *Bf* and *A<sup>f</sup>* :

$$\mathbf{B} = (b\_1 + 2b\_0c\_0) \cdot \mathbf{A}\_f + (rb\_1 + 2b\_0c\_1) \cdot \mathbf{B}\_f \tag{64}$$

Analytical Design of Two-Dimensional Filters and Applications in Biomedical Image Processing
http://dx.doi.org/10.5772/52195


$$\mathbf{A} = \left(a\_1 + (1 + a\_0)c\_0 + j(1 - a\_0)s\_0\right) \cdot \mathbf{A}\_f + \left(a\_1r + (1 + a\_0)c\_1 + j(1 - a\_0)s\_1\right) \cdot \mathbf{B}\_f \tag{65}$$

where *b*2, *b*1, *b*0, *a*1, *a*0 are the coefficients of prototype (15). Finally the 2D filter transfer function in *z*1 and *z*2 has the following expression, with *z*1 and *z*2 given by (27):

$$H\_{2D}(\mathbf{z}\_1, \mathbf{z}\_2) = \mathbf{B}(\mathbf{z}\_1, \mathbf{z}\_2) / A(\mathbf{z}\_1, \mathbf{z}\_2) = \left(\mathbf{z}\_1 \times \mathbf{B} \times \mathbf{z}\_2^T\right) / \left(\mathbf{z}\_1 \times \mathbf{A} \times \mathbf{z}\_2^T\right) \tag{66}$$
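Relation (66) can be evaluated pointwise on a frequency grid. A minimal sketch (the helper is illustrative, not from the chapter); for an odd template size, the monomial vectors run over exponents *m*, …, 0, …, −*m*:

```python
import numpy as np

def freq_response(B, A, w1, w2):
    """Evaluate H(e^jw1, e^jw2) = (z1 B z2^T) / (z1 A z2^T) for square
    templates of odd size, centered on the zero-order coefficient."""
    m = (B.shape[0] - 1) // 2
    k = np.arange(m, -m - 1, -1)        # exponents m, ..., 0, ..., -m
    z1 = np.exp(1j * w1 * k)
    z2 = np.exp(1j * w2 * k)
    return (z1 @ B @ z2) / (z1 @ A @ z2)

# Sanity check: identical templates give a ratio of exactly 1
T = np.array([[0., 1., 0.], [1., 4., 1.], [0., 1., 0.]])
h = freq_response(T, T, 0.3, -1.1)
print(h)
```

Sweeping `w1`, `w2` over a grid in [−*π*, *π*] reproduces magnitude plots such as those in Fig.12.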

Let us design a two-directional filter following this procedure. As 1D prototype let us consider a type-2 low-pass Chebyshev digital filter with the parameter values: order *N* = 2, stop-band attenuation *Rs* = 40 dB and pass-band edge frequency *ω<sub>p</sub>* = 0.5 (1.0 is half the sampling frequency). The transfer function in *z* is:

$$H_p(z) = \left(0.012277 \cdot z^2 - 0.012525 \cdot z + 0.012277\right) \Big/ \left(z^2 - 1.850147 \cdot z + 0.862316\right)\tag{67}$$
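A comparable prototype can be generated with standard tools. A sketch using SciPy (note that SciPy's `cheby2` takes the stop-band edge as its critical frequency, so the coefficients may differ slightly from (67)):

```python
import numpy as np
from scipy.signal import cheby2, freqz

# Order-2 type-2 (inverse) Chebyshev low-pass, 40 dB stop-band
# attenuation, normalized edge 0.5 (1.0 = half the sampling frequency)
b, a = cheby2(2, 40, 0.5)
w, h = freqz(b, a, worN=512)

print(np.round(b, 6), np.round(a, 6))
print(abs(h[0]))    # DC gain, close to 1
print(abs(h[-1]))   # near Nyquist: about the 40 dB stop-band level
```

Whatever tool is used, the resulting `b`, `a` play the roles of *b*<sub>2</sub>, *b*<sub>1</sub>, *b*<sub>0</sub>, *a*<sub>1</sub>, *a*<sub>0</sub> in the template combinations (64) and (65).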

For a good directional selectivity we also choose *p* =30 and *k* =10. The 2D filter frequency response magnitude is displayed in Fig.12 (c) and shows a very good linearity along the two directions and practically no distortions in the stop band. The constant level contour in the plane (*ω*1, *ω*2) is given in Fig.12 (d). Calculating the singular values of filter templates for the above parameters we find the vectors *SA*, *SB* for the templates A, B:

$$\mathbf{S\_A} = \begin{bmatrix} 2.09347 & 0.00225 & 0.0005 & 0 & 0 \end{bmatrix} \quad \mathbf{S\_B} = \begin{bmatrix} 2.14297 & 0.01939 & 0.00387 & 0 & 0 \end{bmatrix} \tag{68}$$

Taking into account the fact that the first singular value of the templates A and B is much larger than the other four, the filter designed above can be approximated by a separable filter.

The singular value decomposition of a matrix M is written as *M* = *U* × *S* × *V*<sup>*T*</sup>, where U and V are unitary matrices and S is a diagonal matrix containing the singular values. Thus we can write for the filter templates A and B:


$$\mathbf{A} = \mathbf{U}_A \times \mathbf{S}_A \times \mathbf{V}_A^T \qquad \mathbf{B} = \mathbf{U}_B \times \mathbf{S}_B \times \mathbf{V}_B^T \tag{69}$$

If *U<sub>A1</sub>* and *V<sub>A1</sub>* are the first columns of the matrices *U<sub>A</sub>*, *V<sub>A</sub>*, then A can be approximated by the matrix *A*<sub>1</sub> = *s<sub>A1</sub>* ⋅ *U<sub>A1</sub>* ⊗ *V<sub>A1</sub>*<sup>*T*</sup>, where *s<sub>A1</sub>* is the largest singular value of A, *U<sub>A1</sub>* and *V<sub>A1</sub>* are the corresponding columns of *U<sub>A</sub>* and *V<sub>A</sub>*, ⊗ stands for outer product and *T* for transposition. Similarly for B we find *B*<sub>1</sub> = *s<sub>B1</sub>* ⋅ *U<sub>B1</sub>* ⊗ *V<sub>B1</sub>*<sup>*T*</sup>. For the specified filter parameters we obtain *s<sub>B1</sub>* = 2.14297, and for template B the column vectors *U<sub>B1</sub>* and *V<sub>B1</sub>* result identical: *U<sub>B1</sub>* = *V<sub>B1</sub>* = [−0.00424 0.3971 −0.82743 0.3971 −0.00424]<sup>*T*</sup>.
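The rank-1 step can be sketched with NumPy (the nearly separable test matrix is illustrative, not the chapter's actual template; note that `numpy.linalg.svd` returns V already transposed, as `Vh`):

```python
import numpy as np

rng = np.random.default_rng(0)
u = np.array([-0.004, 0.397, -0.827, 0.397, -0.004])
A = 2.1 * np.outer(u, u) + 1e-3 * rng.standard_normal((5, 5))

U, s, Vh = np.linalg.svd(A)
A1 = s[0] * np.outer(U[:, 0], Vh[0, :])   # s_A1 * (U_A1 outer V_A1^T)

rel_err = np.linalg.norm(A - A1) / np.linalg.norm(A)
print(s[:2], rel_err)   # the first singular value dominates
```

When the first singular value dominates, the rank-1 matrix `A1` is a separable template: the 2D filtering reduces to a row pass followed by a column pass.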

For the template A we get *s<sub>A1</sub>* = 2.09347, and the vectors *U<sub>A1</sub>*, *V<sub>A1</sub>* have complex elements. The frequency response of the resulting filter is given in Fig.12 (e). As can be noticed, the effect of the above approximation is an "overshoot" at zero frequency. This should not affect the filter functionality in detecting lines parallel with the two axes. Moreover, since the marginal elements of the 5×1 vectors *U<sub>A1</sub>*, *V<sub>A1</sub>*, *U<sub>B1</sub>*, *V<sub>B1</sub>* have negligible values, by discarding them we obtain the 3×1 vectors *U<sub>B2</sub>* = *V<sub>B2</sub>* = [0.3971 −0.8274 0.3971]<sup>*T*</sup> and:

$$\mathbf{U}\_{A2} = \begin{bmatrix} \text{-0.315} & \text{0.6334} & \text{-0.315} \end{bmatrix}^T + j \begin{bmatrix} \text{0.2573} & \text{-0.5175} & \text{0.2573} \end{bmatrix}^T$$

$$\mathbf{V}\_{A2} = \begin{bmatrix} 0.4067 & -0.818 & 0.4067 \end{bmatrix}^T - j \cdot \begin{bmatrix} 0.0024 & -0.0048 & 0.0024 \end{bmatrix}^T$$

We finally obtain a very selective two-directional 2D filter implemented with two minimum size (3×3) templates. The template B is real while A is complex. The frequency response magnitude of this filter is shown in Fig.12 (f) and is practically similar to the one in Fig.12 (e). Similarly we can design a two-directional (four-lobe) filter with a specified orientation angle. Using the previously described method and based on the Euler approximation, the expression (10) of cos<sup>2</sup> (*φ* + *φ*0) corresponds to a frequency transformation in the complex variables *z*1 and *z*2, written in matrix form as:


**Figure 12.** (a) Periodic radial function; (b) contour plot of a two-directional filter with orientation φ<sup>0</sup> =π / 12; (c) fre‐ quency response and (d) contour plot of the two-directional filter with 5 × 5 templates; frequency response of the filter with separable 5 × 5 (e) and 3 × 3 templates (f)

$$\cos^2(\varphi + \varphi\_0) \to F(z\_1, z\_2) = B\_{\varphi 0}(z\_1, z\_2) / A\_{\varphi 0}(z\_1, z\_2) = \left(\mathbf{Z}\_1 \times \mathbf{B}\_{\varphi 0} \times \mathbf{Z}\_2^T\right) \Big/ \left(\mathbf{Z}\_1 \times \mathbf{A}\_{\varphi 0} \times \mathbf{Z}\_2^T\right) \tag{70}$$

$$\mathbf{B}\_{\varphi0} = \cos^2 \varphi\_0 \begin{bmatrix} 0 & 1 & 0 \\ 0 & -2 & 0 \\ 0 & 1 & 0 \end{bmatrix} + \sin^2 \varphi\_0 \begin{bmatrix} 0 & 0 & 0 \\ 1 & -2 & 1 \\ 0 & 0 & 0 \end{bmatrix} + 0.25 \sin(2\varphi\_0) \begin{bmatrix} 0 & 1 & -1 \\ 1 & -1 & 1 \\ -1 & 1 & 0 \end{bmatrix} \tag{71}$$

and *Aφ*0 is identical to *A*1 from (63). The 5×5 templates of the mapping (47) are given by:

$$\mathbf{B}_f = \mathbf{A}_{\varphi 0} * \mathbf{A}_{\varphi 0} + 8p \cdot \mathbf{B}_{\varphi 0} * \mathbf{A}_{\varphi 0} - 8p \cdot \mathbf{B}_{\varphi 0} * \mathbf{B}_{\varphi 0} \qquad \mathbf{A}_f = k \cdot \mathbf{A}_{\varphi 0} * \mathbf{A}_{\varphi 0}\tag{72}$$

The final filter templates result according to relations (64) and (65).

Regarding the proposed method, the frequency responses of this class of 2D filters can be viewed as derived through a radial distortion from a generic maximally-flat circular filter. Indeed, referring to (48), the circular filter is the trivial case for which *ρ*(*ω*<sub>1</sub>, *ω*<sub>2</sub>) = 1 and the mapping (48) reduces to *ω*<sup>2</sup> → *ω*<sub>1</sub><sup>2</sup> + *ω*<sub>2</sub><sup>2</sup>, as expected. This method allows one to design 2D filters with a non-convex shape in the frequency plane. The proposed design method does not involve global numerical optimization techniques, but only a few numerical approximations. The method is more general and can be applied as well to design fan filters, diamond filters, multi-directional filters etc. (Matei, 2011 a).

#### **7. Zero-phase FIR circular filters**


Filters with circular symmetry are very useful in image processing. We propose an efficient design technique for 2D circularly-symmetric filters, based on the previous 1D filters, considered as prototypes. Given a 1D prototype *H<sub>P</sub>*(*ω*), the corresponding 2D circular filter function *H<sub>C</sub>*(*ω*<sub>1</sub>, *ω*<sub>2</sub>) results using the frequency mapping *ω* → √(*ω*<sub>1</sub><sup>2</sup> + *ω*<sub>2</sub><sup>2</sup>):

$$H_C(\omega_1, \omega_2) = H_P\left(\sqrt{\omega_1^2 + \omega_2^2}\right)\tag{73}$$

The commonly used approximation of the circular cosine function cos√(*ω*<sub>1</sub><sup>2</sup> + *ω*<sub>2</sub><sup>2</sup>) is given by:

$$\cos\sqrt{\omega_1^2 + \omega_2^2} \cong C(\omega_1, \omega_2) = -0.5 + 0.5\left(\cos\omega_1 + \cos\omega_2\right) + 0.5\cos\omega_1 \cdot \cos\omega_2\tag{74}$$

which corresponds to the 3×3 array:

$$\mathbf{C} = \begin{bmatrix} 0.125 & 0.25 & 0.125 \\ 0.25 & -0.5 & 0.25 \\ 0.125 & 0.25 & 0.125 \end{bmatrix} \tag{75}$$
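The accuracy of (74), and its equivalence with the array (75), can be checked numerically. A sketch (the template indices run over −1, 0, 1 in both directions):

```python
import numpy as np

C = np.array([[0.125, 0.25, 0.125],
              [0.25, -0.5, 0.25],
              [0.125, 0.25, 0.125]])

def template_response(T, w1, w2):
    m = np.arange(-1, 2)
    e1 = np.exp(-1j * w1 * m)
    e2 = np.exp(-1j * w2 * m)
    return (e1 @ T @ e2).real      # zero-phase: the response is real

w = np.linspace(-np.pi, np.pi, 41)
err = max(abs(template_response(C, a, b) - np.cos(np.hypot(a, b)))
          for a in w for b in w if np.hypot(a, b) <= np.pi)
print(template_response(C, 0, 0))  # exact at the origin
print(err)                         # modest error inside the radius-pi disk
```

The template response expands to exactly the right-hand side of (74); the approximation is tight near the origin and on the axes, and worst along the diagonals.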

Let us consider as prototype a LP analog elliptic filter of order *N* =4, pass-band peak-to-peak ripple *RP* =0.04 dB, stop-band attenuation *RS* =40 dB and passband-edge frequency *Ω<sup>P</sup>* =*π* / 2. Its transfer function in variable *s* is:

$$H_P(s) = 0.1037 \cdot \left(s^4 + 19.864 \cdot s^2 + 84.041\right) \Big/ \left(s^4 + 3.2041 \cdot s^3 + 8.4315 \cdot s^2 + 13.126 \cdot s + 14.082\right)\tag{76}$$

Using MAPLE or another symbolic computation program and following the design steps described in section 2, we obtain a polynomial approximation of the magnitude | *HP*( *jω*)| through Chebyshev expansion, which has the following factorized form, with *x* =cos*ω*:

$$\begin{aligned} \left| H\_{P}(\boldsymbol{\alpha}) \right| &\equiv 48.6 \cdot (\mathbf{x} + 0.8491) (\mathbf{x} + 0.7717) (\mathbf{x} - 1.087) (\mathbf{x}^2 + 1.9934 \mathbf{x} + 0.994) \\ (\mathbf{x}^2 + 1.0797 \mathbf{x} + 0.318) (\mathbf{x}^2 - 0.3849 \mathbf{x} + 0.1766) (\mathbf{x}^2 - 1.2882 \mathbf{x} + 0.5314) (\mathbf{x}^2 - 1.9338 \mathbf{x} + 0.9726) \end{aligned} \tag{77}$$

In order to obtain a filter with circular symmetry from the factorized 1D prototype function, we replace in (12) cos*ω* with the circular cosine function (74). For instance, corresponding to (12), the filter template A results in general as the discrete convolution:

$$\mathbf{A} = k \cdot \mathbf{A}\_{11} \ast \mathbf{A}\_{12} \ast \dots \ast \mathbf{A}\_{1n} \ast \mathbf{A}\_{21} \ast \mathbf{A}\_{22} \ast \dots \ast \mathbf{A}\_{2m} \tag{78}$$


where *A*1*<sup>i</sup>* (*i* =1…*n*) are 3×3 templates and *A*<sup>2</sup> *<sup>j</sup>* ( *j* =1…*m*) are 5×5 templates, given by: *A*1*<sup>i</sup>* =*C* + *ai* ⋅ *A*01 and *A*<sup>2</sup> *<sup>j</sup>* =*C* ∗*C* + *a*<sup>1</sup> *<sup>j</sup>* ⋅*C*<sup>0</sup> + *a*<sup>2</sup> *<sup>j</sup>* ⋅ *A*02, where *A*01 is a 3×3 zero template and *A*02 a 5×5 zero template with the central element equal to one; *C*0 is a 5×5 template

obtained by bordering C with zeros. The above expressions correspond to the factors in (12).

The frequency response *H<sub>C</sub>*(*ω*<sub>1</sub>, *ω*<sub>2</sub>) of the 2D circular filter results in a factorized form by substituting *x* = *C*(*ω*<sub>1</sub>, *ω*<sub>2</sub>) in (77). Even if the resulting filter has a high order, with very large templates, we show next that using the Singular Value Decomposition (SVD) the 2D filter can be approximated with a negligible error. For the filter template B we can write *B* = *U<sub>B</sub>* × *S<sub>B</sub>* × *V<sub>B</sub>*<sup>*T*</sup>. The vector of singular values *S<sub>B</sub>* of size 1×27 has 14 non-zero elements:

*S<sub>B1</sub>* = [0.50536 0.086111 0.032794 0.013627 0.00521 0.002937 0.001935 0.001061 0.000639 0.000451 0.000418 0.0000385 0.0000196 0.00000144]

**Figure 13.** Frequency response magnitude (a) and contour plot (b) of a circular FIR filter

Let us denote the vector above as *SB*<sup>1</sup> = *sk* , with *k* =1...14 in our case. The exact filter matrix B can be written as: *B***=***UB*1**×***SB*1**×***VB*1, where *UB*1 and *VB*<sup>1</sup> are made up of the first 14 columns of the unitary matrices *UB* and *VB*. If we consider the first largest M values of the vector *SB*<sup>1</sup> = *sk* , the matrix B can be approximated as:


**Figure 14.** Frequency response magnitudes and contour plots for the circular filter resulted by taking into account the first largest: (a), (b) 8 singular values; (c), (d) 5 singular values

$$\mathbf{B} \equiv \mathbf{B}\_M = \sum\_{k=1}^{M} s\_k \cdot \mathbf{U}\_{\mathbf{B}k} \otimes \mathbf{V}\_{\mathbf{B}k}^T \tag{79}$$

Here *B<sup>M</sup>* is the approximation of matrix B taking into account the first *M* singular values (in our case *M* ≤14), while *UB<sup>k</sup>* , *VBk* are the *k*-th columns of the matrices *UB*1 and *VB*1; ⊗ stands for outer product and the superscript *T* for transposition.

Fig.14 shows the frequency response magnitudes of the designed circular filter approximated by taking into account the first largest 8 singular values and 5 singular values. It can be noticed that even retaining only the first 5 singular values, the 2D filter preserves its circular shape without large distortions. In this case the filter template B is approximated by *BM* from (79), for *M* =5. Therefore, the template B can be written as a sum of only 5 *separable* matrices according to (79). This is an important aspect in the filter implementation.
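The truncation (79) can be sketched as follows (a synthetic 27×27 test matrix with rapidly decaying singular values stands in for the actual template):

```python
import numpy as np

rng = np.random.default_rng(1)
Q1, _ = np.linalg.qr(rng.standard_normal((27, 27)))
Q2, _ = np.linalg.qr(rng.standard_normal((27, 27)))
s_true = 0.5 ** np.arange(27)              # decaying spectrum
B = Q1 @ np.diag(s_true) @ Q2.T

U, s, Vh = np.linalg.svd(B)
M = 5
BM = sum(s[k] * np.outer(U[:, k], Vh[k, :]) for k in range(M))

rel_err = np.linalg.norm(B - BM) / np.linalg.norm(B)
print(rel_err)   # on the order of the first discarded singular value
```

Each retained term is a separable (rank-1) template, so the truncated filter runs as `M` row/column passes instead of one full 2D convolution.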

#### **8. Applications and simulation results**


An example of image filtering with a two-directional filter is given. We use the filter shown in Fig.3(e), (f). This type of filter can be used in simultaneously detecting perpendicular lines from an image. The binary test image in Fig.15 (a) contains straight lines with different orientations and lengths, and a few curves. It is known that the spectrum of a straight line is oriented in the plane (*ω*1, *ω*2) at an angle of *π* / 2 with respect to the line direction. Depending on filter selectivity, only the lines with the spectrum oriented more or less along the filter pass-bands will remain in the filtered image. In the output image in Fig.15 (b), the lines roughly oriented horizontally and vertically are preserved, while the others are filtered out or appear very blurred, due to directional low-pass filtering. The joints of detected lines appear as darker pixels and can be detected, if after filtering a proper threshold is applied.

**Figure 15.** (a) Test image; (b) filtered image

Let us apply the designed fan-type filters, which can be regarded as components of a DFB, to the filtering of a typical retinal vascular image. Clinicians usually search angiograms for relevant features such as the number and position of vessels (arteries, capillaries). An angular-oriented filter bank may be used in analyzing angiography images by detecting vessels with a given orientation. Let us consider the retinal fluorescein angiogram in Fig.16(a), featuring some pathological elements which indicate a diabetic retinopathy. This image is filtered using five oriented wedge filters with narrow aperture (*θ* = *π* / 24), designed using the method described in section 4. Fig.16(b)-(f) show the directionally filtered angiography images.

**Figure 16.** (a) Retinal fluorescein angiogram; (b)-(f) images resulting as outputs of the five component filters of the fan-type filter bank

As can be easily noticed, the vessels whose frequency spectrum overlaps more or less with the filter characteristic remain visible, while the others are blurred, an effect of the directional low-pass filtering (Matei & Matei, 2012). The directional resolution depends on the filter angular selectivity given by *θ*.

Analytical Design of Two-Dimensional Filters and Applications in Biomedical Image Processing

http://dx.doi.org/10.5772/52195

**Figure 17.** (a) Retinal fluorescein angiogram; (b), (c) results of image filtering with a FIR LP circular filter with cutoff frequency Ω*C*<sup>1</sup> = 0.16π and Ω*C*<sup>2</sup> = 0.08π, respectively

The designed zero-phase circularly-symmetric FIR filters may be useful as well in preprocessing tasks on biomedical images, having a blurring effect on the image which depends on its selectivity given by the circular filter bandwidth. The effect is somewhat similar to the Gaussian smoothing, which is used as a pre-processing stage in computer vision tasks to enhance image structures at different scales. Applying the presented design procedure, a circularly-symmetric filter bank can be derived, with components having desired bandwidths. Let us consider another retinal fluorescein angiogram, displayed in Fig.17(a). In the simulation result shown in Fig.17 (b) and (c), the two circular filters introduce gradual blurring which is visible on the fine image details, like small vessels and capillaries. In the image in Fig.17 (c) all the finer details have been almost completely smoothed out.
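A minimal sketch of this kind of circularly-symmetric low-pass filtering, using an ideal frequency-domain mask with the cutoff values quoted for Fig.17 (the chapter's filters are FIR approximations, not ideal masks, and a random image stands in for the angiogram):

```python
import numpy as np

# Zero-phase circular low-pass filtering: a real, circularly-symmetric
# frequency mask introduces blurring without any phase distortion.
rng = np.random.default_rng(0)
N = 64
img = rng.standard_normal((N, N))   # stand-in image with detail at all scales

w1 = 2 * np.pi * np.fft.fftfreq(N)[:, None]
w2 = 2 * np.pi * np.fft.fftfreq(N)[None, :]
radius = np.sqrt(w1**2 + w2**2)     # radial frequency

def circular_lp(image, cutoff):
    """Keep only spectral components inside a circle of the given radius."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) * (radius <= cutoff)))

mild = circular_lp(img, 0.16 * np.pi)     # wider pass-band: light blurring
strong = circular_lp(img, 0.08 * np.pi)   # narrower pass-band: fine detail gone
# Narrowing the circular pass-band removes progressively more detail energy.
print(np.var(img), np.var(mild), np.var(strong))
```

The gradual loss of variance with shrinking cutoff corresponds to the increasingly smoothed fine details seen from Fig.17(b) to Fig.17(c).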

## **9. Conclusion**


The design methods presented in this chapter combine the analytical approach, based on 1D prototype filters and frequency transformations, with numerical optimization techniques. For the classes of 2D filters designed here we have mainly used analog filters as prototypes, which turns out to simplify the expressions of the derived frequency mappings; therefore the complexity of the designed 2D filters is lower in the analyzed cases. The prototypes used here were either maximally-flat or very selective, and either low-pass or band-pass. For each type of 2D filter, a particular spectral transformation is derived. An important advantage is that these spectral transformations include parameters which depend on the 2D filter specifications, such as bandwidth, orientation and aperture. Once the specific frequency mapping is found, the 2D filter results from its factorized prototype function by a simple substitution in each factor. The designed filters are versatile in the sense that the prototype parameters (bandwidth, selectivity) can be adjusted and the 2D filter will inherit these properties.

An advantage of the analytical approach over completely numerical optimization techniques is the possibility to control the 2D filter parameters by adjusting the prototype. Another novelty is the proposed analytical design method in polar coordinates, which can yield selective two-directional and even multi-directional filters, as well as fan and diamond filters. In polar coordinates, more general filters with a specified rotation angle can be synthesized.
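As a generic illustration of such a 1D-to-2D frequency mapping, the classical McClellan transformation (a standard technique, used here as a stand-in for the chapter's own mappings) maps a zero-phase 1D prototype onto a nearly circularly-symmetric 2D response:

```python
import numpy as np

# Zero-phase 1D prototype H(w) = sum_k a_k cos(k w), mapped to 2D by the
# substitution cos(w) -> F(w1, w2) (McClellan transformation). The
# coefficients below are illustrative, not a prototype from this chapter.
a = np.array([0.4, 0.45, 0.15])

def proto_1d(w):
    return sum(ak * np.cos(k * w) for k, ak in enumerate(a))

def filter_2d(w1, w2):
    # Standard McClellan mapping for near-circular symmetry.
    F = -0.5 + 0.5*np.cos(w1) + 0.5*np.cos(w2) + 0.5*np.cos(w1)*np.cos(w2)
    # cos(k w) = T_k(cos w): evaluate Chebyshev polynomials of F.
    T = [np.ones_like(F), F]
    for k in range(2, len(a)):
        T.append(2 * F * T[-1] - T[-2])
    return sum(ak * T[k] for k, ak in enumerate(a))

# Along the w2 = 0 axis, F(w1, 0) = cos(w1), so the 2D filter inherits
# the prototype's 1D response exactly.
w = np.linspace(0, np.pi, 50)
print(np.max(np.abs(filter_2d(w, 0.0) - proto_1d(w))))
```

Adjusting the prototype coefficients directly adjusts the 2D response, which is exactly the kind of versatility the analytical approach provides.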

The design methods approached here are rather simple, efficient and flexible: starting from different specifications, the matrices of a new 2D filter result directly by applying the determined frequency mapping, and there is no need to resume the whole design procedure every time.

Stability of the designed filters is also an important problem and will be studied in detail in future work on this topic. In principle, the spectral transformations used preserve the stability of the 1D prototype. The derived 2D filter could become unstable only if the numerical approximations introduce large errors. In this case the precision of the approximation has to be increased by considering higher-order terms, which would in turn increase the filter complexity; however, this is the price paid for obtaining efficient and stable 2D filters. Further research will focus on an efficient implementation of the designed filters and also on their applications in real-life image processing.

## **Author details**

Radu Matei<sup>1</sup> and Daniela Matei<sup>2</sup>

1 "Gh. Asachi" Technical University of Iasi, Romania

2 "Gr. T. Popa" University of Medicine and Pharmacy of Iasi, Romania

## **References**

[1] Ansari, R. (1987). Efficient IIR and FIR fan filters, *IEEE Transactions on Circuits and Systems*, Aug. 1987, 34, 941-945.

[2] Austvoll, I. (2000). Directional filters and a new structure for estimation of optical flow. *Proc. of Int. Conf. on Image Processing ICIP 2000*, Vancouver, Canada, 2, 574-577.

[3] Bamberger, R., & Smith, M. (1992). A filter bank for the directional decomposition of images: theory and design, *IEEE Trans. Signal Processing*, Apr. 1992, 40, 882-893.

[4] Berry, E. (2007). A Practical Approach to Medical Image Processing. Taylor & Francis, 2007.

[5] Chakrabarti, S., & Mitra, S. K. (1977). Design of two-dimensional digital filters via spectral transformations, *Proceedings of the IEEE*, June 1977, 65, 905-914.

[6] Danielsson, P. E. (1980). Rotation-invariant linear operators with directional response. *Proc. of 5th International Conf. on Pattern Recognition*, Miami, USA, Dec. 1980.

[7] Dougherty, G. (Ed.) (2011). Medical Image Processing: Techniques and Applications. Springer, 2011.

[8] Frangi, A. F., et al. (1998). Multiscale vessel enhancement filtering, *Intl. Conf. on Medical Image Computing and Computer-Assisted Intervention*, Berlin, 1998, 1496, 130-137.

[9] Freeman, W. T., & Adelson, E. H. (1991). The design and use of steerable filters. *IEEE Trans. on Pattern Analysis and Machine Intelligence*, Sept. 1991, 13, 891-906.

[10] Harn, L., & Shenoi, B. (1986). Design of stable two-dimensional IIR filters using digital spectral transformations. *IEEE Transactions on Circuits and Systems*, May 1986, 33, 483-490.

[11] Hirano, K., & Aggarwal, J. K. (1978). Design of two-dimensional recursive digital filters. *IEEE Trans. on Circuits and Systems*, Dec. 1978, 25, 1066-1076.

[12] Ito, N. (2010). Efficient design of two-dimensional diamond-shaped filters. *Proceedings of Int. Symposium ISPACS 2010*, Chengdu, China, 6-8 Dec. 2010, 1-4.

[13] Jury, E. I., Kolavennu, V. R., & Anderson, B. D. (1977). Stabilization of certain two-dimensional recursive digital filters. *Proceedings of the IEEE*, 65(6), 887-892.

[14] Kayran, A., & King, R. (1983). Design of recursive and nonrecursive fan filters with complex transformations, *IEEE Trans. on Circuits and Systems*, CAS-30(12), 1983, 849-857.

[15] Khan, M. A. U., et al. (2004). Coronary angiogram image enhancement using decimation-free directional filter banks, *IEEE ICASSP*, Montreal, May 17-21, 2004, 5, 441-444.

[16] Lim, J. S. (1990). Two-Dimensional Signal and Image Processing. Prentice-Hall, 1990.

[17] Lim, Y. C., & Low, S. H. (1997). The synthesis of sharp diamond-shaped filters using the frequency response masking approach. *Proc. of IEEE Int. Conf. on Acoustics, Speech & Signal Processing ICASSP-97*, Munich, Germany, Apr. 21-24, 1997, 2181-2184.

[18] Low, S. H., & Lim, Y. C. (1998). A new approach to design sharp diamond-shaped filters. *Signal Processing*, May 1998, 67, 35-48.

[19] Lu, W. S., & Antoniou, A. (1992). Two-Dimensional Digital Filters, CRC Press, 1992.

[20] Mastorakis, N. E. (2000). New necessary stability conditions for 2D systems, *IEEE Trans. on Circuits and Systems*, Part I, July 2000, 47, 1103-1105.

[21] Matei, R. (2010). A new design method for IIR diamond-shaped filters, *Proc. of the 18th European Signal Processing Conference EUSIPCO 2010*, Aalborg, Denmark, 65-69.

[22] Matei, R. (2011a). New Design Methods for Two-Dimensional Filters Based on 1D Prototypes and Spectral Transformations, In: "Digital Filters", Fausto Pedro García Márquez (Ed.), IN-TECH Open Access Publisher, Vienna, 2011, 91-121.

[23] Matei, R. (2011b). A class of 2D recursive filters with two-directional selectivity, *Proc. of the WSEAS International Conference on Applied, Numerical and Computational Mathematics (ICANCM'11)*, Barcelona, Spain, Sept. 15-17, 2011, 212-217.

[24] Matei, R., & Matei, D. (2012). Vascular image processing using recursive directional filters, *World Congress on Medical Physics and Biomedical Engineering*, Beijing, China, May 26-31, 2012, IFMBE Proceedings, 39, 947-950.

[25] Mollova, G. S. (1997). Analytical least squares design of 2-D fan type FIR filter, *International Conference on Digital Signal Processing DSP'97*, July 1997, 2, 625-628.

[26] Nie, X., & Unbehauen, R. (1989). 2-D IIR filter design using the extended McClellan transformation, *Proc. of International Conference ICASSP'89*, 23-26 May 1989, 3, 1572-1574.

[27] Nikolova, Z., et al. (2011). Complex coefficient IIR digital filters, In: "Digital Filters", InTech, April 2011, 209-239.

[28] O'Connor, B. T., & Huang, T. S. (1978). Stability of general two-dimensional recursive digital filters, *IEEE Trans. Acoustics, Speech & Signal Processing*, 26, 550-560.

[29] Paplinski, A. P. (1998). Directional filtering in edge detection. *IEEE Transactions on Image Processing*, Apr. 1998, 7, 611-615.

[30] Psarakis, E. Z., et al. (1990). Design of two-dimensional zero phase FIR fan filters via the McClellan transform. *IEEE Trans. Circuits & Systems*, Jan. 1990, 37, 10-16.

[31] Qunshan, G., & Swamy, M. N. S. (1994). On the design of a broad class of 2D recursive digital filters with fan, diamond and elliptically-symmetric responses, *IEEE Trans. on Circuits and Systems II*, Sep. 1994, 41, 603-614.

[32] Rydell, J., et al. (2008). Bilateral filtering of fMRI data. *IEEE Journal of Selected Topics in Signal Processing*, Dec. 2008, 2, 891-896.

[33] Semmlow, J. L. (2004). Biosignal and Biomedical Image Processing. Marcel Dekker, 2004.

[34] Simoncelli, E. P., & Farid, H. (1996). Steerable wedge filters for local orientation analysis. *IEEE Transactions on Image Processing*, Sep. 1996, 5, 1377-1382.

[35] Tosic, D. V., et al. (1997). Symbolic approach to 2D biorthogonal diamond-shaped filter design, *21st Int. Conf. on Microelectronics*, 1997, Nis, Yugoslavia, 2, 709-712.

[36] Truc, P., et al. (2007). A new approach to vessel enhancement in angiography images, *Int. Conf. on Complex Medical Engineering CME 2007*, Beijing, 23-27 May 2007, 878-884.

[37] Wong, W. C. K., et al. (2004). Trilateral filtering for biomedical images. *IEEE Int. Symposium on Biomedical Imaging*, 15-18 Apr. 2004, 1, 820-823.

[38] Wu, D., et al. (2006). On the adaptive detection of blood vessels in retinal images, *IEEE Trans. Biomedical Engineering*, Feb. 2006, 53, 341-343.

[39] Zhu, W. P., & He, Z. (1990). A design method for complementary recursive fan filters, *IEEE ISCAS 1990*, 1-3 May 1990, 3, 2153-2156.

[40] Zhu, W. P., & Nakamura, S. (1996). An efficient approach for the synthesis of 2-D recursive fan filters using 1-D prototypes, *IEEE Transactions on Signal Processing*, Apr. 1996, 44, 979-983.

[41] Zhu, W. P., et al. (1999). A least-square design approach for 2D FIR filters with arbitrary frequency response. *IEEE Transactions on Circuits and Systems II*, Aug. 1999, 46, 1027-1034.

[42] Zhu, W. P., et al. (2006). Realization of 2D FIR filters using generalized polyphase structure combined with singular-value decomposition, *Proc. of ISCAS 2006*, Kos, Greece.


## *Edited by Fausto Pedro García Márquez and Noor Zaman*

Digital filters, together with signal processing, are employed in new technologies and information systems, and are implemented in many different areas and applications. Digital filters and signal processing can be applied at low cost and adapted to different cases with great flexibility and reliability.

This book presents advanced developments in digital filters and signal processing methods, covering different case studies. The chapters convey the essence of the subject, presenting the principal approaches alongside the most recent mathematical models employed worldwide.

Digital Filters and Signal Processing


Photo by donfiore / iStock