Laboratory test results for broken rotor bars, short-circuits, eccentricities and bearing faults are shown. The method was validated for the failures studied. The results show that this method can detect the failures at an incipient stage. Finally, the results of industrial tests show the accuracy of the methodology.


## **Non-Parametric Estimation of the Periodic Signal Parameters in the Frequency Domain**

Dušan Agrež
*University of Ljubljana, Faculty of Electrical Engineering, Slovenia* 

### **1. Introduction**

Parameter estimation of periodic signals $g(t)$, where the frequency of the investigated component is the key parameter, plays a fundamental role in a variety of applications: impedance measurement, power quality estimation, radar, A/D testing, etc. The problem of evaluating the spectral performance of a given periodic signal reduces to the parameter estimation of each spectral component (frequency, amplitude, and phase) in the presence of noise. Estimation methods can be classified as parametric (D'Antona & Ferrero, 2006) and nonparametric (Agrež, 2002). Parametric methods are model-based and have very good selectivity and statistical efficiency, but require computationally intensive algorithms and very good 'model agreement' with a real multi-component signal. For this reason, such methods are unsuitable for many estimation problems. A better approach is to use nonparametric methods, which estimate the spectral parameters of interest by first evaluating the discrete Fourier transform (DFT) of the signal and then the parameters of the particular component. As we are dealing with periodic signals, the integral frequency transformation with the kernel $e^{-j2\pi ft}$ is, in principle, the best approximation to the periodicity of the signal. Analysis of the frequency spectrum provides the opportunity to see systematic periodicities in the presence of random noise, which is reduced by the integration. Many of these estimations are based on coherent sampling; that is, on accurate synchronization of the signal and the sampling rate, and on the collection of a number of samples belonging to an integer number of the signal periods. However, the normal situation for signal parameter estimation is non-coherent, or quasi-coherent, sampling, and the sampled signal can also contain spurious components. 
When failing to observe an integer number of periods of even a single tone, the tone energy is spread over the whole frequency axis, and the leakage from neighboring components can significantly bias estimations of the component parameters.
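This leakage is easy to demonstrate numerically. The following sketch (using NumPy; the record length and the 6.3-period tone are illustrative choices) compares a coherently sampled tone with a non-coherently sampled one:

```python
import numpy as np

N = 1024
k = np.arange(N)

# Coherent record: exactly 6 signal periods -> all energy in bin 6.
g_coh = np.sin(2 * np.pi * 6.0 * k / N)
# Non-coherent record: 6.3 periods -> the tone energy leaks everywhere.
g_leak = np.sin(2 * np.pi * 6.3 * k / N)

G_coh = np.abs(np.fft.rfft(g_coh)) / N
G_leak = np.abs(np.fft.rfft(g_leak)) / N

def out_of_band(G, lo=5, hi=8):
    """Sum of amplitude DFT coefficients outside the bins nearest the tone."""
    mask = np.ones_like(G, dtype=bool)
    mask[lo:hi + 1] = False
    return G[mask].sum()

print(out_of_band(G_coh))   # ~0 (numerical rounding only)
print(out_of_band(G_leak))  # a substantial fraction of the tone energy
```

With coherent sampling virtually all of the energy sits in bin 6; with 6.3 periods per record the distant bins carry energy that biases the estimation of neighboring components.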

There are two fundamental principles that restrict estimations: the time-frequency uncertainty principle $\pi\,\Delta T\,\Delta W \ge 1$ (Gabor, 1946), and the principle of the limited changes of signals. The first, where $\Delta T$ and $\Delta W$ are the effective widths of lobes in the time and frequency domains (1), is a generalization of the Heisenberg uncertainty principle.


$$\left(\Delta T\right)^2 = 4\left[\frac{\int_{-\infty}^{\infty} t^2 g^2(t)\,\mathrm{d}t}{\int_{-\infty}^{\infty} g^2(t)\,\mathrm{d}t} - \left(\frac{\int_{-\infty}^{\infty} t\, g^2(t)\,\mathrm{d}t}{\int_{-\infty}^{\infty} g^2(t)\,\mathrm{d}t}\right)^2\right] \tag{1a}$$

The bracketed term in (1a) is the normalized moment of inertia about the center of gravity of the signal distribution; the inner ratio is the normalized first moment, the center of gravity of the signal distribution.

$$\left(\Delta W\right)^2 = 4\,\frac{\int_{-\infty}^{\infty} f^2 \left|G(f)\right|^2 \mathrm{d}f}{\int_{-\infty}^{\infty} \left|G(f)\right|^2 \mathrm{d}f} \tag{1b}$$

In words, $\Delta T$ and $\Delta W$ cannot simultaneously be arbitrarily small. In relation to measurements, this is interpreted to imply that the uncertainty in the determination of a frequency is of the order of magnitude of the reciprocal of the time taken to measure it.
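The bound is attained with equality by a Gaussian pulse, which can be checked numerically from the definitions (1a) and (1b) (a sketch; the grid parameters are illustrative choices):

```python
import numpy as np

# Densely sampled Gaussian pulse g(t) = exp(-t^2 / 2).
dt = 0.01
t = np.arange(-40.0, 40.0, dt)
g = np.exp(-t**2 / 2)

# Effective time width (1a): twice the std. dev. of the g^2(t) distribution.
p_t = g**2 / np.sum(g**2)
mean_t = np.sum(t * p_t)
dT = 2.0 * np.sqrt(np.sum(t**2 * p_t) - mean_t**2)

# Effective frequency width (1b): twice the rms frequency of |G(f)|^2.
G2 = np.abs(np.fft.fft(g))**2
f = np.fft.fftfreq(len(g), d=dt)
dW = 2.0 * np.sqrt(np.sum(f**2 * G2) / np.sum(G2))

print(np.pi * dT * dW)  # -> 1.0, the Gabor bound met with equality
```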

The second principle, that of the limited changes of signals, limits the design of the signal shape. The more smoothly and slowly a function changes, the more rapidly its transform falls off, and vice versa (Seibert, 1986). In practice this means that the spectrum of the signal should essentially vanish for frequencies greater than some frequency $f_{\max}$, and that the tails of the pulse in the time domain must die out sufficiently rapidly that the tail of a large pulse does not seriously distort another, smaller pulse at an adjacent time instant. The quantitative expression of this principle can be derived from Parseval's theorem and the differentiation property of the Fourier transformation, $\mathrm{d}g(t)/\mathrm{d}t \Leftrightarrow j2\pi f\,G(f)$:

$$\int_{-\infty}^{\infty}\left|\frac{\mathrm{d}^n g(t)}{\mathrm{d}t^n}\right|^2\mathrm{d}t = \left(2\pi\right)^{2n}\int_{-\infty}^{\infty} f^{2n}\left|G(f)\right|^2\mathrm{d}f \tag{2}$$

Thus, if all the derivatives of the signal $g(t)$ through the $(n-1)$-st are square-integrable, but the $n$-th is not, we may in general conclude that its spectrum $G(f)$ vanishes faster than $f^{-n+1/2}$, but not faster than $f^{-n-1/2}$.

Both fundamental principles limit the accuracy of parameter estimations and depend upon the measurement time. Here this is taken to mean the relative time to measure a periodic signal, or the number of repetitions of the periodicity in it:

$$\theta = \frac{T\_{\text{meas}}}{T\_{\text{period}}} = T\_{\text{meas}} \cdot f = \frac{f}{\Delta f} \tag{3}$$

where the measurement time determines the basic frequency resolution in the frequency domain, $\Delta f = 1/T_{\text{meas}}$.

A finite time of measurement is a source of dynamic errors, which appear as leakage parts of the measurement window spectrum convolved with the spectrum of the measured, sampled signal (Fig. 1). The sampled analogue multi-frequency signal $g(t)$, with sampling frequency $f_{\text{sampling}} = 1/\Delta t$, can be written as follows:


$$g(k\,\Delta t)\big|_N = w(k)\cdot\sum_{m=0}^{M} A_m \sin\left(2\pi f_m k\,\Delta t + \varphi_m\right), \qquad k = 0, 1, \ldots, N-1 \tag{4}$$

where $f_m$, $A_m$, and $\varphi_m$ are the frequency, amplitude, and phase of one component with index $m$ among $M+1$, respectively. Index $k$ is the current time index of the successive samples $\Delta t$ apart. Tones of the sampled signal do not generally coincide with the basic set of the periodic components of the DFT, which is the most well-known non-parametric method for frequency decomposition of signals (Harris, 1978). Using $N$ samples of the signal (4), the DFT at the spectral line $i$ is given by:

$$\mathcal{G}\left(\dot{i}\right) = -\frac{\dot{j}}{2} \sum\_{m=0}^{M} A\_m \left[ \mathcal{W}\left(\dot{i} - \theta\_m\right) e^{j\varphi\_m} - \mathcal{W}\left(\dot{i} + \theta\_m\right) e^{-j\varphi\_m} \right] \tag{5}$$

where $W(*)$ is the spectrum of the used window $w(k)$, and $\theta_m$ is the signal component frequency divided by the frequency resolution $\Delta f = 1/(N\,\Delta t)$; it can be written in two parts:

$$
\theta_m = \frac{f_m}{\Delta f} = i_m + \delta_m \qquad\qquad -0.5 < \delta_m \le 0.5 \tag{6}
$$

where $i_m$ is an integer value and the displacement term $\delta_m$ is caused by the non-coherent sampling. The DFT coefficients surrounding one signal component are due both to the short-range and the long-range leakage contributions from the second term of the investigated component, and to both terms of the other components (7) (Fig. 1).

$$\left|G(i_m)\right| = \frac{A_m}{2}\left|W(\delta_m)\,e^{j\varphi_m} - W(2i_m+\delta_m)\,e^{-j\varphi_m}\right| + \sum_{k=0,\,k\neq m}^{M}\left|\Delta(i_k)\right| \tag{7}$$

Fig. 1. The short-range leakage influences (a) and long-range leakage influences (b) on the amplitude DFT coefficients (rectangular window; $\theta_m = 6.3$)
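A quick numerical illustration of (4)–(7), with an assumed single tone at $\theta_m = 6.3$ (so $i_m = 6$, $\delta_m = 0.3$) and a rectangular window:

```python
import numpy as np

N, theta_m = 1024, 6.3          # single-tone instance of the signal model (4)
k = np.arange(N)
g = np.sin(2 * np.pi * theta_m * k / N + 0.7)   # M = 0, rectangular window

G = np.abs(np.fft.rfft(g)) / N   # amplitude DFT coefficients |G(i)|

i_m = int(np.argmax(G))          # the largest local coefficient
print(i_m)                       # -> 6
# Short-range leakage: the neighbor on the side of the displacement is larger.
print(G[7] > G[5])               # -> True
# Long-range leakage: the tails decrease with distance from the component.
print(G[20] > G[200])            # -> True
```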


The long-range leakage contributions can be reduced in several ways: by increasing the measurement time, by using windows whose side lobes fall off faster than those of the rectangular window (the Hann window, Rife-Vincent windows, Dolph-Chebychev windows, etc., Fig. 2), or by using multi-point interpolated DFT algorithms together with a window whose spectrum has a known behavior (Agrež, 2002). For the sake of analytical simplicity, cosine-class windows are frequently used (Belega & Dallet, 2009; Novotný & Sedláček, 2010). Three basic classes of cosine windows have been defined: RV1, RV2, and RV3. For the analyses here, the first two classes are of interest.

Windows of the class RV1 (Fig. 2: curves a, b, and c) are designed to maximize the window-spectrum side-lobe fall-off $\theta^{-b}$, which is set by the number of zeroes of the time-domain window derivatives at the window ends (Novotný & Sedláček, 2010):

$$w(k) = \sum\_{l=0}^{b-1} (-1)^l a\_{l,1} \cdot \cos\left(l \frac{2\pi}{N} \cdot k\right), \ k = 0, \dots, N-1 \tag{8}$$

When the order $b$ is 1 (RV1-1), the coefficient $a_{0,1}$ is 1 and equation (8) gives the rectangular shape. If $b$ is 2 (RV1-2: $a_{0,1} = 1/2$, $a_{1,1} = 1/2$) we get the Hann window. Higher values of $b$ (RV1-4: $a_{0,1} = 10/32$, $a_{1,1} = 15/32$, $a_{2,1} = 6/32$, $a_{3,1} = 1/32$) expand the window-transform main lobe and reduce the spectral leakage.
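The construction (8) can be sketched directly from the cosine coefficients quoted above; setting $b = 2$ reproduces the usual periodic Hann window:

```python
import numpy as np

def rv1_window(N, a):
    """Rife-Vincent class I window (8): alternating-sign cosine series."""
    k = np.arange(N)
    w = np.zeros(N)
    for l, a_l in enumerate(a):
        w += (-1) ** l * a_l * np.cos(l * 2 * np.pi * k / N)
    return w

N = 16
rect = rv1_window(N, [1.0])                        # b = 1 (RV1-1)
hann = rv1_window(N, [0.5, 0.5])                   # b = 2 (RV1-2)
rv14 = rv1_window(N, [10/32, 15/32, 6/32, 1/32])   # b = 4 (RV1-4)

# RV1-2 coincides with the periodic Hann window.
print(np.allclose(hann, 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(N) / N)))  # -> True
# Higher-order RV1 windows go to zero at the window ends.
print(rv14[0])  # -> 0.0
```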

Windows of the class RV2 (Fig. 2: curves d and e) are designed to minimize the window-spectrum main-lobe width for a given maximum relative side-lobe magnitude $A_{\text{side-lobe}\,\max}/A(0) = 10^{-d}$, where $d$ is the exponent of the damping. They are the Taylor approximation to the Dolph-Chebychev windows and give good results when spectral components are very close (Andria et al., 1989).

Fig. 2. Spectra shapes of the windows: a – the rectangular window (RV1-1), b – the Hann window (RV1-2), c – the RV1-4 window, d – the RV2 window with *d* = 4 (RV2-4), e – the RV2 window with *d* = 6 (RV2-6)

### **2. Parameters estimations**


How the chosen window behaves in the estimation is important, especially in the first step of an iterative estimation procedure. The first estimation step is non-parametric in nature, since no information about the signal is available at the beginning.

### **2.1 Frequency estimation**

Parameters of the measured component can be estimated non-parametrically by means of interpolation. From the comparative study (Schoukens et al., 1992) it can be concluded that the key to estimating the three basic parameters is determining the position of the measured component, $\delta_m = \theta_m - i_m$, between the DFT coefficients $|G(i_m)|$ and $|G(i_m+1)|$ surrounding the component $m$ (Fig. 1). The estimation can be done by multi-point interpolation (Agrež, 2002) using windows with known spectra, like the Hann window:

$$\left|W_{\mathrm{H}}(\theta)\right|_{N\gg 1} = \frac{\left|\sin(\pi\theta)\right|}{2\pi\theta\left(1-\theta^{2}\right)} \qquad\text{in}\qquad \left|G(i_m)\right| \doteq \frac{A_m}{2}\left|W(\delta_m)\right| \tag{9a}$$

$$\text{for 2-points estimation: } \, \_2\delta\_m \doteq s \frac{2\left|\mathbf{G}\left(i\_m + s\right)\right| - \left|\mathbf{G}\left(i\_m\right)\right|}{\left|\mathbf{G}\left(i\_m\right)\right| + \left|\mathbf{G}\left(i\_m + s\right)\right|}\tag{9b}$$

where $s = \operatorname{sign}(\delta_m)$ is the sign of the displacement; it can be estimated from the difference of the phase DFT coefficients: $s = \operatorname{sign}\left(\arg\left[G(i_m)\right] - \arg\left[G(i_m+1)\right] - \pi/2\right)$.

The three largest local DFT coefficients can be used in a three-point interpolation, and in this case the long-range leakage contributions can be taken into account. The portions $\Delta(i)$ (7) of the long-range leakage tails have the following properties: they decrease with increasing frequency, and they change sign at successive coefficients $G(i)$ if the window spectrum has a sine function in the kernel ($\sin\left(\pi(\delta_m+i)\right) = -\sin\left(\pi(\delta_m+i\pm1)\right)$). The rectangular window, the Hann window, and the Rife-Vincent class I windows, for example, satisfy this condition. For $\theta_m$ large enough, the long-range leakage influence can be approximated as $\Delta(i_m-1) \approx \Delta(i_m) \approx \Delta(i_m+1)$, so that the ratio of coefficients can be expressed as:

$$\begin{aligned} {}_3\alpha_m &= \frac{\left|G(i_m)\right| + \left|G(i_m-1)\right|}{\left|G(i_m)\right| + \left|G(i_m+1)\right|} = \\ &= \frac{\left|W(\delta_m)\right| + \left|\Delta(i_m)\right| + \left|W(1+\delta_m)\right| - \left|\Delta(i_m-1)\right|}{\left|W(\delta_m)\right| + \left|\Delta(i_m)\right| + \left|W(1-\delta_m)\right| - \left|\Delta(i_m+1)\right|} \approx \frac{\left|W(\delta_m)\right| + \left|W(1+\delta_m)\right|}{\left|W(\delta_m)\right| + \left|W(1-\delta_m)\right|} \end{aligned} \tag{10}$$

and the displacement term as:

$${}_3\delta_m \cong 2\,\frac{1 - {}_3\alpha_m}{1 + {}_3\alpha_m} = 2\,\frac{\left|G(i_m+1)\right| - \left|G(i_m-1)\right|}{\left|G(i_m-1)\right| + 2\left|G(i_m)\right| + \left|G(i_m+1)\right|} \tag{11}$$
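A minimal numerical sketch of the two-point (9b) and three-point (11) estimators on a synthetic tone (the signal parameters are illustrative; for simplicity the sign $s$ is taken here from the magnitudes of the neighboring coefficients rather than from the phase rule above):

```python
import numpy as np

N, theta_true = 1024, 6.3
k = np.arange(N)
g = np.sin(2 * np.pi * theta_true * k / N + 1.1)

w = 0.5 - 0.5 * np.cos(2 * np.pi * k / N)       # Hann window (RV1-2)
A = np.abs(np.fft.rfft(g * w))                   # amplitude DFT coefficients

i_m = int(np.argmax(A[1:])) + 1                  # largest local coefficient (skip DC)
s = 1 if A[i_m + 1] > A[i_m - 1] else -1         # sign of the displacement

# Two-point estimate (9b) and three-point estimate (11).
delta_2 = s * (2 * A[i_m + s] - A[i_m]) / (A[i_m] + A[i_m + s])
delta_3 = 2 * (A[i_m + 1] - A[i_m - 1]) / (A[i_m - 1] + 2 * A[i_m] + A[i_m + 1])

print(i_m + delta_2, i_m + delta_3)   # both close to theta_true = 6.3
```

Both estimates recover the relative frequency to within the small residual long-range leakage bias.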

The numerators and the denominators in (9) and (11) have a form in which the amplitude DFT coefficients are added with suitable weights. The form of the denominator in (11), $\left|G_{\mathrm{H}}(i_m-1)\right| + 2\left|G_{\mathrm{H}}(i_m)\right| + \left|G_{\mathrm{H}}(i_m+1)\right|$, is very characteristic, and looks like the form in the

Non-Parametric Estimation of the Periodic Signal Parameters in the Frequency Domain 191

( ) ( ) ( ) ( ) ( ) ( )

⎛ ⎞ + <sup>≅</sup> <sup>⋅</sup> ⎜ ⎟ ⎝ ⎠ 1 2

where *l* ( *l r* = ≤ 1, 2, ... ) is the current index. The first term of the numerator in (17) is a difference of the side coefficients around the largest one, and the remaining terms are sums of the symmetrical pairs of coefficients ( *Gi l* ( *<sup>m</sup>* ± ) ). All terms [∗] are weighted by

and, the differences of the symmetrically located coefficients are weighted by

The results of the simulations, where the relative frequency was changed, show that the systematic contribution of the error – the estimation bias - decreases (Fig. 3.), while the

θ

relative frequency) for one sine component in the signal have been checked with a double scan

iterations) at the given relative frequency were compared for the multi-point DFT

<sup>4</sup> <sup>10</sup> <sup>−</sup> <sup>a</sup>

1 2 3 4 5

Fig. 3. Maximal errors of frequency estimation with the multi-point interpolations of the DFT for the Hann window (a: two-point interpolation; b: three-point interpolation; c: five-

 = + − δ θ ( <sup>∗</sup> θ

= 180 ). The absolute maximum values of the errors (from 181

*K Gi K Gi Gi sK G i l G i l* <sup>+</sup> ⎡ ⎤⎡ ⎤ + − − + +− ++ − ⎣ ⎦⎣ ⎦ ≅+⋅

> η

*m*

δ

η

( ) . The signs of weights also alter successively ( )*<sup>l</sup>* <sup>−</sup>1 .

θ *m* σ

varying both frequency and phase ( *Am* = 1 , *N* = 1024 , 1 6 ≤

The errors of the frequency estimations *E i* ( ) ( ) <sup>∗</sup>

1 1 ... 1

*r m l*

1 1 2

*r*

( )

( ) *<sup>l</sup> <sup>r</sup> Kd r l* <sup>+</sup> <sup>−</sup> <sup>=</sup><sup>1</sup> 2

(Fig. 4.).

− ≤≤ π

 2 2 ϕ π,

( ) ( ) ( )

*<sup>l</sup>* ( ) *r r <sup>n</sup> rl rl <sup>K</sup>* − − − −− = − 21 21

( )

denominator is weighted by ( ) *<sup>r</sup> Kd r* <sup>=</sup><sup>1</sup>

**2.1.1 Reduction of systematic error** 

influence of noise on estimations (

Δϕ π

<sup>2</sup> 10<sup>−</sup>

<sup>6</sup> 10<sup>−</sup>

point interpolation; d: seven-point interpolation)

1

interpolations using the Hann window (Fig. 3).

( ) max *E* θ

1

2 1

δ

( ) ( ) ( ) ( ) ( ) ( )

<sup>2</sup> . The signs of weights alter successively. The largest coefficient in the

*nm m nm m*

*K Gi Gi sK G i l G i l*

*dm d m m dm m*

*d n*

η

η

1 1 ... 1

*l*

+ + + − + +− <sup>⎡</sup> ⎤⎡ ⎤ +− − <sup>⎣</sup> ⎦⎣ ⎦

*l*

<sup>2</sup> , the sum of the first side coefficients by ( ) *<sup>r</sup> Kd <sup>r</sup>*<sup>−</sup> <sup>=</sup> <sup>2</sup>

) increases with the number of interpolation points

θ≤ ,

6

θ

b c d

*l*

(17)

2 ( 1)

is the true value of the

= 0.001 and

Δθ

construction of the Hann window spectrum with the Dirac delta function *D*(\* , and the ) spectrum of the rectangular window (Harris, 1978):

$$\mathcal{W}\_{\text{Harm.}}\left(\theta\right) = \left(\frac{1}{2}D(i) - \frac{1}{4}D(i-1) - \frac{1}{4}D(i+1)\right) \otimes \mathcal{W}\_{\text{rect.}}\left(\theta\right) \tag{12}$$

but instead of the rectangular window, the Hann window can be used:

$$\mathcal{W}(\theta) = \left( D(i-1) + 2D(i) + D(i+1) \right) \otimes \mathcal{W}\_{\text{Ham.}}(\theta) \tag{13}$$

From the point of view of leakage, the denominator is a sum of the weighted leakages. We can get the weights with a triple subtraction of the long-range leakage tails:

$$\begin{aligned} \left| \Delta(i\_m - \mathbf{1}, i\_m) \right| = \left| \Delta(i\_m - \mathbf{1}) \right| - \left| \Delta(i\_m) \right|; \quad \left| \Delta(i\_m, i\_m + \mathbf{1}) \right| = \left| \Delta(i\_m) \right| - \left| \Delta(i\_m + \mathbf{1}) \right| \\\\ \left| \Delta(i\_m - \mathbf{1}, i\_m, i\_m + \mathbf{1}) \right| = \left| \Delta\left(i\_m - \mathbf{1}, i\_m \right) \right| - \left| \Delta\left(i\_m, i\_m + \mathbf{1} \right) \right| \\ = \left| \Delta\left(i\_m - \mathbf{1} \right) \right| - 2 \left| \Delta(i\_m) \right| + \left| \Delta\left(i\_m + \mathbf{1} \right) \right| << \left| \Delta\left(i\_m \right) \right| \end{aligned} \tag{14}$$

The numerator is a subtraction of the sum of the first two from the sum of the last two DFT coefficients: $\left(\left|G(i\_m)\right| + \left|G(i\_m + 1)\right|\right) - \left(\left|G(i\_m - 1)\right| + \left|G(i\_m)\right|\right) = \left|G(i\_m + 1)\right| - \left|G(i\_m - 1)\right|$. In this case the long-range leakage tails are also reduced.

$$\left( \left| \Delta(i\_m) \right| - \left| \Delta(i\_m + 1) \right| \right) - \left( -\left| \Delta(i\_m - 1) \right| + \left| \Delta(i\_m) \right| \right) = \left| \Delta(i\_m - 1) \right| - \left| \Delta(i\_m + 1) \right| \tag{15}$$

It is appropriate to form multi-point interpolations on an odd number of coefficients, in order to have symmetry around the largest local coefficient $\left|G(i\_m)\right|$. In a five-point interpolation with the Hann window, similar averages are used as in the three-point interpolation. The quotient is used to eliminate the amplitude influence of the investigated component.

$${}\_{5}\delta\_{m} = 3 \cdot \frac{2\left[\left|G\left(i\_{m} + 1\right)\right| - \left|G\left(i\_{m} - 1\right)\right|\right] + s\left[\left|G\left(i\_{m} + 2\right)\right| + \left|G\left(i\_{m} - 2\right)\right|\right]}{6\left|G\left(i\_{m}\right)\right| + 4\left[\left|G\left(i\_{m} + 1\right)\right| + \left|G\left(i\_{m} - 1\right)\right|\right] + s\left[\left|G\left(i\_{m} + 2\right)\right| - \left|G\left(i\_{m} - 2\right)\right|\right]} \tag{16}$$

An estimation of the periodic parameter by the interpolation of the DFT gives the same effect as the reduction of spectrum tails. The meaning of the interpolation is the weighted summation of the amplitude coefficients, or better, the symmetrical subtraction of the successive adjacent leakage parts of the window spectrum (14). The idea of long-range leakage reduction by summation of the adjacent weighted DFT coefficients is at the core of the construction of the cosine-class windows. Weights for forming the Hann window and the Rife-Vincent Class I windows from the rectangular window are obtained by repeated convolution of the two-point weight pairs (1, 1), that is, by repeated subtraction of the neighboring pairs of the spectrum leakage tails. The binomial weights $\binom{2r}{j}$ ($r = 1, 2, \dots$ is the number of coefficients of one half; $j = 0, 1, \dots, 2r$) can be obtained from a Pascal triangle.
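The convolution construction of the binomial weights is easy to reproduce numerically; the following sketch (the function name `binomial_weights` is illustrative, not from the chapter) builds the Pascal-triangle rows by repeated convolution of the (1, 1) pair:

```python
import numpy as np

def binomial_weights(r):
    """Convolve the two-point pair (1, 1) with itself 2r times,
    giving the 2r-th row of the Pascal triangle."""
    w = np.array([1.0])
    for _ in range(2 * r):
        w = np.convolve(w, [1.0, 1.0])
    return w

print(binomial_weights(1))   # -> [1. 2. 1.]
print(binomial_weights(2))   # -> [1. 4. 6. 4. 1.]
```

For r = 1 this yields the (1, 2, 1) weights of (13); for r = 2 it yields (1, 4, 6, 4, 1), whose central part (6, 4, 1) appears in the denominator of (16).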

The displacement estimations with the multi-point ($\eta = 2r + 1 = 3, 5, 7, \dots$) interpolations of the DFT using the Hann window can be written as:

$${}\_{2r+1}\delta\_{m} \cong (r+1) \cdot \frac{K\_{n\_1}\left[\left|G\left(i\_m + 1\right)\right| - \left|G\left(i\_m - 1\right)\right|\right] + \dots + (-1)^{l} s K\_{n\_l}\left[\left|G\left(i\_m + l\right)\right| + \left|G\left(i\_m - l\right)\right|\right]}{K\_{d\_1}\left|G\left(i\_m\right)\right| + K\_{d\_2}\left[\left|G\left(i\_m + 1\right)\right| + \left|G\left(i\_m - 1\right)\right|\right] + \dots + (-1)^{l} s K\_{d\_{l+1}}\left[\left|G\left(i\_m + l\right)\right| - \left|G\left(i\_m - l\right)\right|\right]} = \left(\frac{\eta + 1}{2}\right) \cdot \frac{n\_{\eta}}{d\_{\eta}} \tag{17}$$

where $l$ ($l = 1, 2, \dots, r$) is the current index, and $n\_{\eta}$ and $d\_{\eta}$ denote the numerator and the denominator of the fraction. The first term of the numerator in (17) is a difference of the side coefficients around the largest one, and the remaining terms are sums of the symmetrical pairs of coefficients $\left|G(i\_m \pm l)\right|$. All terms $[\ast]$ are weighted by $K\_{n\_l} = \binom{2(r-1)}{r-l} - \binom{2(r-1)}{r-l-2}$, and the signs of the weights alternate successively. The largest coefficient in the denominator is weighted by $K\_{d\_1} = \binom{2r}{r}$, the sum of the first side coefficients by $K\_{d\_2} = \binom{2r}{r-1}$, and the differences of the symmetrically located coefficients by $K\_{d\_{l+1}} = \binom{2r}{r-l}$. The signs of these weights also alternate as $(-1)^{l}$.
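As a concrete instance, the $\eta = 3$ case of (17) reduces to $\delta\_m = 2\left(\left|G(i\_m+1)\right| - \left|G(i\_m-1)\right|\right)/\left(\left|G(i\_m-1)\right| + 2\left|G(i\_m)\right| + \left|G(i\_m+1)\right|\right)$. A minimal Python sketch of this three-point Hann interpolation on a synthetic one-component signal (the signal parameters are arbitrary test values):

```python
import numpy as np

def three_point_hann_delta(G_mag, i_m):
    """Displacement delta_m from the three largest Hann-windowed
    amplitude DFT coefficients (the eta = 3 case of the interpolation)."""
    gm1, g0, gp1 = G_mag[i_m - 1], G_mag[i_m], G_mag[i_m + 1]
    return 2.0 * (gp1 - gm1) / (gm1 + 2.0 * g0 + gp1)

N = 1024
theta = 20.3                                 # true relative frequency (cycles per record)
k = np.arange(N)
g = np.sin(2 * np.pi * theta * k / N + 0.4)
w = 0.5 * (1 - np.cos(2 * np.pi * k / N))    # Hann window
G_mag = np.abs(np.fft.rfft(w * g)) / N       # amplitude DFT coefficients
i_m = int(np.argmax(G_mag[1:]) + 1)          # largest local coefficient (skip DC)
theta_est = i_m + three_point_hann_delta(G_mag, i_m)
```

For N = 1024 and θ = 20.3 the estimate agrees with the true relative frequency to within the residual long-range leakage (well below 0.01 bins here).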

#### **2.1.1 Reduction of systematic error**


The results of the simulations, where the relative frequency was changed, show that the systematic contribution of the error (the estimation bias) decreases (Fig. 3), while the influence of noise on the estimations ($\sigma\_{\theta\_m}$) increases with the number of interpolation points (Fig. 4).

The errors of the frequency estimations $E(\theta) = \left(i\_m + \delta\_m\right) - \theta^{\ast}$ ($\theta^{\ast}$ is the true value of the relative frequency) for one sine component in the signal have been checked with a double scan varying both frequency and phase ($A\_m = 1$, $N = 1024$, $1 \le \theta \le 6$, $\Delta\theta = 0.001$ and $-\pi/2 \le \varphi \le \pi/2$, $\Delta\varphi = \pi/180$). The absolute maximum values of the errors (from 181 iterations) at the given relative frequency were compared for the multi-point DFT interpolations using the Hann window (Fig. 3).

Fig. 3. Maximal errors of frequency estimation with the multi-point interpolations of the DFT for the Hann window (a: two-point interpolation; b: three-point interpolation; c: five-point interpolation; d: seven-point interpolation)


#### **2.1.2 Uncertainty of the frequency estimations**

Distributions of the errors ($E = \left|G(i)\right|\_{\text{noise}} - \left|G(i)\right|\_{\text{noiseless}}$) of the largest amplitude DFT coefficients have very similar (Gaussian) shapes with almost equal standard deviations $\sigma\_{|\mathrm{DFT}|}(i\_m) \approx \sigma\_{|\mathrm{DFT}|}(i\_m \pm l) = \sigma\_{|\mathrm{DFT}|}$ ($l = 1, 2, \dots$), if the time-domain noise with standard deviation $\sigma\_t$ and a mean value $\overline{u(t)} \to 0$ is statistically independent of the signal and sampling process (18). In other words, if in the first approximation only the quantization noise is considered, the sampling frequency should be suitably larger than the frequency of the highest frequency component ($f\_M$), $f\_{\mathrm{S}} \gg 2 f\_M$, and the effective number of bits ($ENOB = b$) of the A/D conversion suitably high, $b \ge 3$, or $SNR \ge 20\,\mathrm{dB}$ (Widrow & Kollar, 2008). The signal-to-noise ratio in the time domain is defined as $SNR = A\_m^2/\left(2\sigma\_t^2\right)$ ($SNR\_{\mathrm{dB}} = 10\log SNR$), or expressed with the effective number of bits as $SNR = (3/2) \cdot 2^{2b}$, where the rectangular distribution of the quantization errors is taken into account ($\sigma\_t = A\_m \cdot 2^{-b}/\sqrt{3}$).

$$\sigma\_{|\mathrm{DFT}|} = \sigma\_{t} \frac{1}{N\sqrt{2}} \sqrt{\sum\_{k=0}^{N-1} w^2(k)} \tag{18}$$

The "absolute" form of the standard deviation (18) is usually related to the values of the DFT coefficients of interest. In coherent sampling the largest local amplitude DFT coefficient is equal to $\left|G(i\_m)\right| = \left(A\_m/2\right) \cdot \sum w(k)/N$, where $\sum w(k)/N \le 1$ represents the normalized peak signal gain of the window $w(k)$ (Solomon, 1992). The relative form of the standard deviation can be written as:

$$\sigma\_{|\mathrm{DFT}|}^{\*} = \frac{\sigma\_{|\mathrm{DFT}|}}{\left|G(i\_m)\right|} = \frac{\sigma\_{t}}{A\_m} \sqrt{2}\, \frac{\sqrt{\sum\_{k=0}^{N-1} w^2(k)}}{\sum\_{k=0}^{N-1} w(k)} = \frac{\sigma\_{t}}{A\_m} \frac{\sqrt{2}}{\sqrt{N}} \sqrt{ENBW} \tag{19}$$

The root of the equivalent noise bandwidth $ENBW$ (Harris, 1978) is a factor determining the size of the standard deviation when using different windows: $ENBW > ENBW\_{\mathrm{rect}} = 1$.
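Both (18) and the ENBW factor can be checked numerically; a small Monte-Carlo sketch under the stated assumptions ($N$, $\sigma\_t$ and the number of trials are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
k = np.arange(N)
w = 0.5 * (1 - np.cos(2 * np.pi * k / N))        # Hann window

# Equivalent noise bandwidth: N * sum(w^2) / (sum(w))^2 = 1.5 for Hann
enbw = N * np.sum(w**2) / np.sum(w)**2

# Standard deviation of a DFT coefficient of pure noise, eq. (18)
sigma_t = 0.1
sigma_dft_pred = sigma_t * np.sqrt(np.sum(w**2)) / (N * np.sqrt(2))
trials = np.array([np.fft.rfft(w * rng.normal(0, sigma_t, N)) / N
                   for _ in range(4000)])
sigma_dft_mc = np.std(trials[:, 10].real)        # any bin away from DC
```

For the Hann window the computed ENBW is exactly 1.5, and the empirical standard deviation of a noise-only bin matches (18) to within the Monte-Carlo scatter.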

Distortions of the DFT coefficients and their number in an interpolation have a significant influence on the uncertainty of the displacement estimation. The standard deviation of $\delta\_m$, as a dependent quantity, can generally be expressed as (Joint Committee for Guides in Metrology [JCGM], 2008):

$$\sigma\_{\delta\_m}^2 = \sum\_{p=i\_m-r}^{i\_m+r} \left( c\_{\delta}(p) \cdot \sigma\_{|G(p)|} \right)^2 + 2 \sum\_{p=i\_m-r}^{i\_m+r-1} \sum\_{v=p+1}^{i\_m+r} \left( r\_{\mathrm{C}}\left(|G(p)|, |G(v)|\right) \cdot c\_{\delta}(p) \cdot c\_{\delta}(v) \cdot \sigma\_{|G(p)|} \cdot \sigma\_{|G(v)|} \right) \tag{20}$$

where $c\_{\delta}(p) = \partial \delta\_m/\partial \left|G(p)\right|$ ($p = i\_m - r, \dots, i\_m + r$) is the sensitivity coefficient associated with the amplitude coefficient $\left|G(p)\right|$, and $r\_{\mathrm{C}}\left(|G(p)|, |G(v)|\right)$ ($p \ne v$) is the correlation coefficient. In the case of the Hann window, two successive amplitude coefficients have the correlation factor $r\_{\mathrm{C}}\left(|G(p)|, |G(p+1)|\right) = 2/3$, and amplitude coefficients with a current index two apart have $r\_{\mathrm{C}}\left(|G(p)|, |G(p+2)|\right) = 1/6$. All other correlation coefficients are zero.
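These correlation factors follow from the DFT of the squared window: for windowed white noise the correlation of two bins `lag` apart is $\left|\sum\_k w^2(k)\, e^{-\mathrm{j}2\pi\,\mathrm{lag}\,k/N}\right| / \sum\_k w^2(k)$ (a standard result; the chapter's 2/3 and 1/6 are the magnitudes). A quick numerical confirmation:

```python
import numpy as np

N = 512
k = np.arange(N)
w = 0.5 * (1 - np.cos(2 * np.pi * k / N))        # Hann window

def bin_correlation(lag):
    """Magnitude of the noise correlation between two DFT bins
    `lag` apart, from the DFT of the squared window."""
    num = np.sum(w**2 * np.exp(-2j * np.pi * lag * k / N))
    return np.abs(num) / np.sum(w**2)

r1 = bin_correlation(1)   # adjacent bins      -> 2/3
r2 = bin_correlation(2)   # two bins apart     -> 1/6
r3 = bin_correlation(3)   # three bins apart   -> 0
```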


As the standard deviations of the amplitude coefficients are almost equal, $\sigma\_{|G(p)|} \cong \sigma\_{|G(v)|} = \sigma\_{|\mathrm{DFT}|}$, it is possible to formulate the expression for the standard deviation of the displacement. For the three-point interpolation using the Hann window (11), it can be expressed as:

$$\begin{split} {}\_{3}\sigma\_{\delta\_m}^{2} = \sigma\_{|\mathrm{DFT}|}^{2} \Big( & c\_{\delta}^{2}\left(i\_m - 1\right) + c\_{\delta}^{2}\left(i\_m\right) + c\_{\delta}^{2}\left(i\_m + 1\right) \\ & + \frac{4}{3}\left(c\_{\delta}\left(i\_m - 1\right) \cdot c\_{\delta}\left(i\_m\right) + c\_{\delta}\left(i\_m\right) \cdot c\_{\delta}\left(i\_m + 1\right)\right) \\ & + \frac{2}{6}\left(c\_{\delta}\left(i\_m - 1\right) \cdot c\_{\delta}\left(i\_m + 1\right)\right) \Big) \end{split} \tag{21}$$

The same mathematical procedure can be used for other higher multi-point interpolations (17).

$$\left({}\_{\eta}\sigma\_{\delta\_m}\big/\sigma\_{|\mathrm{DFT}|}\right)^{2} = \sum\_{p=i\_m-r}^{i\_m+r} c\_{\delta}^{2}\left(p\right) + \frac{4}{3} \cdot \sum\_{p=i\_m-r}^{i\_m+r-1} c\_{\delta}\left(p\right) \cdot c\_{\delta}\left(p+1\right) + \frac{1}{3} \cdot \sum\_{p=i\_m-r}^{i\_m+r-2} c\_{\delta}\left(p\right) \cdot c\_{\delta}\left(p+2\right) \tag{22}$$

Sensitivity coefficients $c\_{\delta}(p)$ have forms such as $c\_{\delta}(p) = \left(d\_{\eta} \cdot n'\_{\eta} - n\_{\eta} \cdot d'\_{\eta}\right)\big/ d\_{\eta}^{2}$, where $d\_{\eta}$ and $n\_{\eta}$ are the denominator and the numerator of fraction (17), respectively, and the primes denote derivatives with respect to $\left|G(p)\right|$. Both are sums of the weighted amplitude DFT coefficients, which change with the relative displacement (the short-range leakage influences). For this reason, the standard deviations of the displacements change their values periodically over $-0.5 < \delta\_m \le 0.5$ (Fig. 4).

Fig. 4. Standard deviations of the displacement estimation related to the CRB standard deviation for the frequency estimation (a: two-point interpolation; b: three-point interpolation; c: five-point interpolation; d: seven-point interpolation)

The standard deviation of the displacement estimation is related to the absolute form of the standard deviation of the amplitude DFT coefficient with a suitable factor


$\sigma\_{\delta\_m} = \sigma\_{|\mathrm{DFT}|} \cdot {}\_{\eta}R\_{\delta}$ according to (22). If one wants to compare it with the unbiased Cramér-Rao lower bound (CRB) for the estimation of the frequency (Petri, 2002), the relationship has to be re-expressed:

$$\sigma\_{\delta\_m} = \frac{\sigma\_{|\mathrm{DFT}|}}{\left|G(i\_m)\right|} \cdot \left(\left|G(i\_m)\right| \cdot {}\_{\eta}R\_{\delta}\right) = \frac{1}{\sqrt{SNR}} \frac{\sqrt{ENBW}}{\sqrt{N}} \cdot \left(\left|G(i\_m)\right| \cdot {}\_{\eta}R\_{\delta}\right) \ge \frac{1}{\sqrt{SNR}} \frac{1}{\sqrt{N}} \frac{\sqrt{3}}{\pi} = \sigma\_{\mathrm{CRB},\delta} \tag{23}$$

This form is larger than $\sigma\_{\mathrm{CRB},f}$ for the frequency estimation taking into account all the measurement information (Fig. 4).

Errors in relative frequency estimations with different numbers of interpolation points have normal distributions. The standard deviation of the three-point frequency estimation, which has the lowest standard deviation in the vicinity of the integer values of the relative frequency, is about 2.2 times higher than $\sigma\_{\mathrm{CRB},f}$ (Fig. 4: curve b). The lowest value (${}\_{3}\sigma\_{\delta\_m} \approx 1.9\,\sigma\_{\mathrm{CRB},f}$) is attained at $\delta\_m \approx 0$, and the highest ratio (${}\_{3}\sigma\_{\delta\_m} \approx 2.55\,\sigma\_{\mathrm{CRB},f}$) at the worst cases of the non-coherent sampling ($\delta\_m \approx 0.5$). A two-point interpolation is worse around $\delta\_m \approx 0$ (${}\_{2}\sigma\_{\delta\_m} \approx 2.26\,\sigma\_{\mathrm{CRB},f}$), but it is superior in the interval $0.15 \le \delta\_m \le 0.5$ (${}\_{2}\sigma\_{\delta\_m} \ge 1.59\,\sigma\_{\mathrm{CRB},f}$).

#### **2.1.3 A trade-off between bias and uncertainty**

If we reduce the leakage tails, or systematic errors, by the interpolation, we apparently widen the estimation main lobe $\Delta W\_{\mathrm{est}}$ ($ENBW > 1$), and the noise in the estimation increases in comparison to the CRB. For example, the noise of the cosine windows increases with the order ($ENBW(\cos^2(X)) \approx 1.5$, $ENBW(\cos^4(X)) \approx 1.94$), while the side-lobe fall-off improves ($SL\,fall(\cos^2(X)) \approx -18\,\mathrm{dB}$, $SL\,fall(\cos^4(X)) \approx -30\,\mathrm{dB}$) (Harris, 1978). At the same time, the maximal systematic errors ($E(\theta)\_{\max}$) decrease with an increasing number of points. Increasing the number of the used DFT coefficients is reasonable until the systematic error drops under the noise error. After this point, by increasing the relative frequency $\theta\_m$, or with the spacing between two frequency components ($\propto 2\theta\_m$), it is logical to decrease the number of interpolation points.

The criterion for selecting one of the algorithms could be the minimum common uncertainty of the estimation considering both contributions:

$$\sigma\_{\Sigma} = \sqrt{\left(\frac{\left|E(\theta\_m)\right|\_{\max}}{\sqrt{2}}\right)^{2} + \sigma\_{\theta\_m}^{2}} \;\rightarrow\; \min \tag{24}$$

The effective value of the systematic contribution is obtained by dividing the maximal error by the square root of two, since the systematic errors are phase dependent with a sine-like shape.

The borders of relative frequency where one interpolation can pass over another depend upon the number $b$ of bits of the A/D converter. With a 10-bit A/D converter ($SNR \approx 62\,\mathrm{dB}$), which is frequently used in industrial environments, it is convenient to use the three-point DFT interpolation with the Hann window in the interval $1 < \theta < 2.2$ and from $\theta > 6.7$ onward (Fig. 5). Between the values 2.2 and 6.7 of the relative frequency, it is better to use the five-point interpolation (or even the seven-point interpolation in the intervals [3.5, 3.8] and [4.3, 4.5]).

Multi-point interpolations present worse results at lower $\theta$ owing to the systematic error ("window width"), and at higher values of $\theta$ owing to the larger noise sensitivity. The solid line at the top of Fig. 5 shows where different multi-point interpolations can be used to achieve the best results of the one-component frequency estimation.

Fig. 5. The use of multi-point DFT interpolations for a 10-bit A/D converter (a: three-point interpolation; b: five-point interpolation; c: seven-point interpolation)

### **2.2 Amplitude estimation**


From the behavior of the systematic error of the frequency estimation (Fig. 3), it can be concluded that it is better to use the Hann, or some higher-order cosine window, for the estimation if the window spectrum is analytically known. When the displacement $\delta\_m$ for the specific component is determined, it is easy to get the amplitude using the Hann window (9a) and neglecting the long-range contribution $\Delta(i\_m)$ in (7):

$$A\_m = 2 \left| \frac{2\pi \delta\_m \left(1 - \delta\_m^2\right)}{\sin\left(\pi \delta\_m\right)} \right| \left| G(i\_m) \right| \tag{25}$$
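A short numerical sketch of (25), reusing the three-point displacement estimate for $\delta\_m$ (the amplitude, frequency and phase are arbitrary test values):

```python
import numpy as np

N = 1024
A_true, theta = 1.7, 20.3
k = np.arange(N)
g = A_true * np.sin(2 * np.pi * theta * k / N + 0.4)
w = 0.5 * (1 - np.cos(2 * np.pi * k / N))    # Hann window
G_mag = np.abs(np.fft.rfft(w * g)) / N

i_m = int(np.argmax(G_mag[1:]) + 1)          # largest local coefficient
d = 2.0 * (G_mag[i_m + 1] - G_mag[i_m - 1]) / (
    G_mag[i_m - 1] + 2.0 * G_mag[i_m] + G_mag[i_m + 1])

# Amplitude from the largest coefficient and delta_m, eq. (25)
A_est = 2.0 * abs(2 * np.pi * d * (1 - d**2) / np.sin(np.pi * d)) * G_mag[i_m]
```

The estimate recovers $A\_m$ to well within one percent here; the residual error comes from the neglected long-range contribution $\Delta(i\_m)$.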

As in the case of the frequency estimation, with the summation of the DFT coefficients, we subtract the long-range leakage tails and reduce their influences. We get the weights for the three-point summation with the triple subtraction of the long-range leakage tails (14).

$$\left|\Delta(i\_m - 1)\right| - 2\left|\Delta(i\_m)\right| + \left|\Delta(i\_m + 1)\right| \ll \left|\Delta(i\_m)\right| \tag{26}$$

In this manner, we can get the amplitude of the signal by summing the largest three local DFT coefficients around the signal component following the result of (26):


The relative error $e(A) = \left(A - A^{\ast}\right)/A^{\ast}$ ($A^{\ast} = 1$ is the true value of the amplitude) drops with increasing relative frequency and with the number of the interpolation points (Fig. 6: $e\_0$: the amplitude is estimated only with the largest coefficient; $e\_1$: estimation with (25); $e\_3$: estimation with (30); etc.; the same testing conditions as for Fig. 3). Comparing Figs. 6 and 7 shows the importance of the frequency estimation accuracy. If we know the value of the

Fig. 6. Maximal relative values of errors of the amplitude estimation with the multi-point DFT interpolations with the Hann window ($\delta\_m$ is obtained with the three-point int. (11))

Fig. 7. Maximal relative values of errors of the amplitude estimation with the multi-point DFT interpolations with the Hann window ($\delta\_m$ is known)

$$\begin{aligned} \left[\left|G(i\_m - 1)\right| + 2\left|G(i\_m)\right| + \left|G(i\_m + 1)\right|\right] = \frac{A\_m}{2}\Big[ &\left|\mathcal{W}\left(1 + \delta\_m\right)\right| \pm \left|\Delta(i\_m - 1)\right| \\ &+ 2\left|\mathcal{W}\left(\delta\_m\right)\right| \mp 2\left|\Delta(i\_m)\right| + \left|\mathcal{W}\left(1 - \delta\_m\right)\right| \pm \left|\Delta(i\_m + 1)\right| \Big] \end{aligned}$$

$$A\_m \cong 2 \cdot \frac{\left[\left|G(i\_m - 1)\right| + 2\left|G(i\_m)\right| + \left|G(i\_m + 1)\right|\right]}{\left|\mathcal{W}\left(1 + \delta\_m\right)\right| + 2\left|\mathcal{W}\left(\delta\_m\right)\right| + \left|\mathcal{W}\left(1 - \delta\_m\right)\right|} \tag{27}$$

Using the Hann window:

$$\left|\mathcal{W}\_{\rm H}\left(\delta\_m\right)\right| = \frac{\sin\left(\pi\delta\_m\right)}{2\pi\delta\_m\left(1-\delta\_m^2\right)} \quad\text{and}\quad \left|\mathcal{W}\_{\rm H}\left(1+s\delta\_m\right)\right| = \frac{\sin\left(\pi\delta\_m\right)}{2\pi\delta\_m\left(1+s\delta\_m\right)\left(2+s\delta\_m\right)} \tag{28}$$

$$\left|\mathcal{W}\left(1+\delta\_{m}\right)\right| + 2\left|\mathcal{W}\left(\delta\_{m}\right)\right| + \left|\mathcal{W}\left(1-\delta\_{m}\right)\right| = \frac{\sin\left(\pi\delta\_{m}\right)}{2\pi\delta\_{m}}\frac{12}{\left(1-\delta\_{m}^{2}\right)\left(4-\delta\_{m}^{2}\right)},\tag{29}$$

the amplitude estimation with the three-point interpolation ( <sub>3</sub>*A*<sub>*m*H</sub> ) can be expressed as follows:

$${}\_3A\_{m{\rm H}} = \frac{\pi\delta\_m}{\sin\left(\pi\delta\_m\right)}\,\frac{\left(1-\delta\_m^2\right)\left(4-\delta\_m^2\right)}{3}\cdot\left[\left|G(i\_m-1)\right| + 2\left|G(i\_m)\right| + \left|G(i\_m+1)\right|\right] \tag{30}$$

We can use the same procedure for the five-point interpolation with ten subtractions of the tails. In the first step of the procedure we perform four subtractions of the adjacent tails Δ(*i<sub>m</sub>*−2, *i<sub>m</sub>*−1), …, Δ(*i<sub>m</sub>*+1, *i<sub>m</sub>*+2) as in (14), or summations of the DFT coefficients, then three subtractions of the obtained and reduced tails as in (26), and so on. After rearrangement the amplitude can be expressed with the weighted five largest coefficients:

$${}\_5A\_{m{\rm H}} = \frac{\pi\delta\_m}{\sin\left(\pi\delta\_m\right)}\,\frac{\left(1-\delta\_m^2\right)\left(4-\delta\_m^2\right)\left(9-\delta\_m^2\right)}{90}\cdot\left[6\left|G(i\_m)\right| + 4\left(\left|G(i\_m+1)\right|+\left|G(i\_m-1)\right|\right) + \Big|\left|G(i\_m+2)\right|-\left|G(i\_m-2)\right|\Big|\right] \tag{31}$$

In the last term of equation (31), the absolute value of the difference is used, since one of the coefficients *G*(*i<sub>m</sub>*−2) or *G*(*i<sub>m</sub>*+2) drops out of the spectrum main lobe (4Δ*f* wide in the Hann case) and gets a negative sign.

The same procedure can be used for the seven-point interpolation:

$${}\_7A\_{m{\rm H}} = \frac{\pi\delta\_m}{\sin\left(\pi\delta\_m\right)}\,\frac{\left(1-\delta\_m^2\right)\left(4-\delta\_m^2\right)\left(9-\delta\_m^2\right)\left(16-\delta\_m^2\right)}{5040}\cdot\left[20\left|G(i\_m)\right| + 15\left(\left|G(i\_m+1)\right|+\left|G(i\_m-1)\right|\right) + 6\Big|\left|G(i\_m+2)\right|-\left|G(i\_m-2)\right|\Big| - \Big|\left|G(i\_m+3)\right|-\left|G(i\_m-3)\right|\Big|\right] \tag{32}$$
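The three-point estimate (30) is easy to check numerically. The sketch below assumes the scaling *G*(*i*) = (1/*N*)·Σ *w*[*n*]*x*[*n*]e<sup>−j2π*in*/*N*</sup>; since the three-point frequency interpolation (11) is not reproduced in this excerpt, the displacement is taken from the standard Hann-window form δ*<sub>m</sub>* = 2(|*G*(*i<sub>m</sub>*+1)| − |*G*(*i<sub>m</sub>*−1)|)/(|*G*(*i<sub>m</sub>*−1)| + 2|*G*(*i<sub>m</sub>*)| + |*G*(*i<sub>m</sub>*+1)|), which follows directly from (28)–(29). The test values (*N* = 1024, θ = 7.3, *A* = 1) are arbitrary choices.

```python
import numpy as np

def hann_dft(x):
    """G(i) = (1/N) * sum_n w[n] x[n] e^(-j 2 pi i n / N) with the Hann window w[n]."""
    N = len(x)
    w = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(N) / N)
    return np.fft.fft(w * x) / N

def amplitude_3pt_hann(G, i_m):
    """Three-point amplitude estimate (30) around the largest coefficient i_m."""
    g_minus, g0, g_plus = np.abs(G[i_m - 1]), np.abs(G[i_m]), np.abs(G[i_m + 1])
    s3 = g_minus + 2 * g0 + g_plus
    delta = 2 * (g_plus - g_minus) / s3      # Hann three-point displacement estimate
    # pi*delta/sin(pi*delta) equals 1/sinc(delta) with numpy's normalized sinc
    return (1.0 / np.sinc(delta)) * (1 - delta**2) * (4 - delta**2) / 3 * s3, delta

N = 1024
theta = 7.3                                        # 7.3 signal periods in the window
n = np.arange(N)
x = 1.0 * np.cos(2 * np.pi * theta * n / N + 0.4)  # A = 1
G = hann_dft(x)
i_m = int(np.argmax(np.abs(G[:N // 2])))           # index of the largest coefficient
A_est, delta = amplitude_3pt_hann(G, i_m)
```

With these settings the estimate lands within about 10<sup>−3</sup> of the true amplitude; the error shrinks further with growing θ, in line with Fig. 6.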


The relative error *e*(*A*) = *A*/*A*<sup>∗</sup> − 1 (where *A*<sup>∗</sup> = 1 is the true value of the amplitude) drops with increasing relative frequency and with the number of interpolation points (Fig. 6: *e*<sub>0</sub> – the amplitude is estimated only with the largest coefficient, *e*<sub>1</sub> – estimation with (25), *e*<sub>3</sub> – estimation with (30), etc.; the same testing conditions as for Fig. 3). Comparing Figs. 6 and 7 shows the importance of the frequency estimation accuracy.

Fig. 6. Maximal relative values of errors of the amplitude estimation with the multi-point DFT interpolations with the Hann window (θ is known)

Fig. 7. Maximal relative values of errors of the amplitude estimation with the multi-point DFT interpolations with the Hann window (θ is obtained with the three-point int. (11))

If we know the value of the frequency at the accuracy level of the three-point interpolation, then the amplitude estimation with the three-point interpolation is reasonable. The accuracy of the amplitude estimation can be improved if the frequency is estimated better (e.g., by the multi-point interpolations).

### **2.2.1 Influence of noise on the amplitude estimation**

Uncertainty of the component amplitude estimation mainly depends on the uncertainties of the amplitude DFT coefficients. Equation (19) is valid for all amplitude coefficients of the DFT that are large enough and sufficiently (by half of the main-lobe width) moved away from the margins of the spectral field (θ = 0, *N*/2).

The price for the effective leakage reduction is an increase of the estimation uncertainties relative to the unbiased Cramér–Rao bound (CRB), which is fixed by the signal-to-noise ratio of the particular component. In Fig. 8, the standard uncertainties of the amplitude estimation are related to the CRB (33) (Petri, 2002) for the three-point estimation.

$$
\sigma\_{A\_m} \ge \frac{1}{\sqrt{\rm SNR}}\frac{1}{\sqrt{N}} = \sigma\_{{\rm CRB},A} \tag{33}
$$

Fig. 8. Standard uncertainty of the amplitude three-point estimation using the Hann window related to the CRB (33) (a: θ is estimated, b: θ is known)

The distortions of the DFT coefficients and the number of points used in the interpolation have a significant influence on the uncertainty of the displacement δ*<sub>m</sub>* and, consequently, on the amplitude *A<sub>m</sub>*. As with the frequency estimation, the systematic errors decrease when the number of points is increased. Increasing the number of DFT coefficients used in the interpolation is reasonable until the systematic error drops below the noise error (Fig. 9). Beyond this point, by increasing the relative frequency θ*<sub>m</sub>* (the number of periods of the measured signal in the measurement interval), or by increasing the spacing between two frequency components (∝ 2·θ*<sub>m</sub>*), the number of interpolation points can be decreased. A smaller number of DFT coefficients in the calculation produces lower noise distortion, which becomes dominant in the final result.

Fig. 9. The influence of the quantization noise on the amplitude estimation with 10-bit A/D converter (a) and 16-bit A/D converter (b)
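The relation (33) can be probed with a small Monte-Carlo sketch: the three-point Hann amplitude estimate (30) applied to a noisy sinusoid should show a standard deviation a small factor above σ<sub>CRB,*A*</sub>, never below it. The helper names and test values (*N* = 1024, θ = 9.3, σ<sub>n</sub> = 0.02, 300 trials) are illustrative assumptions, not from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)
N, A, sigma_n = 1024, 1.0, 0.02
theta = 9.3
n = np.arange(N)
w = 0.5 - 0.5 * np.cos(2 * np.pi * n / N)          # Hann window

def amp_3pt(x):
    """Three-point Hann amplitude estimate (30); delta from the three largest |G| values."""
    G = np.abs(np.fft.fft(w * x) / N)
    i = int(np.argmax(G[:N // 2]))
    s3 = G[i - 1] + 2 * G[i] + G[i + 1]
    d = 2 * (G[i + 1] - G[i - 1]) / s3
    return (1.0 / np.sinc(d)) * (1 - d**2) * (4 - d**2) / 3 * s3

est = [amp_3pt(A * np.cos(2 * np.pi * theta * n / N + 0.3) + rng.normal(0.0, sigma_n, N))
       for _ in range(300)]
# eq. (33): sigma_CRB = 1/sqrt(SNR * N), with SNR = A^2 / (2 sigma_n^2) for a sinusoid
sigma_crb = sigma_n * np.sqrt(2.0) / (A * np.sqrt(N))
ratio = float(np.std(est) / sigma_crb)
```

Here `ratio` comes out a factor of roughly two above one: the leakage reduction of the Hann window is paid for by a noise gain above the CRB, as Fig. 8 illustrates.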

#### **2.3 Phase estimation**


The second parameter of the signal component, besides the amplitude of the frequency main lobe, is the phase, i.e. the time position of the signal structure. As with the previous estimations, the function *W*(θ) has to be analytically known. For the rectangular window with a large number of points (*N* >> 1), the following equation is valid, where the Dirichlet kernel is used (Harris, 1978):


$$\mathcal{W}\_{\text{rect}}\left(\theta\right) = \frac{\sin\left(\pi\theta\right)}{N\sin\left(\pi\theta/N\right)} \cdot \mathbf{e}^{-j\pi\left(\frac{N-1}{N}\right)\theta} \tag{34}$$

The largest DFT coefficient, which is mostly composed of the short-range leakage contribution of the investigated component *m*, can be deduced from (5) and (34) using *a* = π(*N*−1)/*N* and e<sup>−jπ/2</sup> = −j:

$$G(i\_m) = \frac{A\_m}{2}\left[\frac{\sin\left(\pi\left(i\_m-\theta\_m\right)\right)}{N\sin\left(\pi\left(i\_m-\theta\_m\right)/N\right)}\,{\rm e}^{{\rm j}\left(a\left(\theta\_m-i\_m\right)+\varphi\_m-\frac{\pi}{2}\right)} - \frac{\sin\left(\pi\left(i\_m+\theta\_m\right)\right)}{N\sin\left(\pi\left(i\_m+\theta\_m\right)/N\right)}\,{\rm e}^{-{\rm j}\left(a\left(\theta\_m+i\_m\right)+\varphi\_m+\frac{\pi}{2}\right)}\right] \tag{35}$$

The component phase ϕ*<sub>m</sub>* is referred to the start point of the window (not to the middle point, as is the case with the frequency and amplitude).

As *N* is usually large *N* >> 1 , and considering (6), equation (35) can be rewritten:

$$G(i\_m) = \frac{A\_m}{2}\left(\frac{\sin\left(\pi\delta\_m\right)}{\pi\delta\_m}\,{\rm e}^{{\rm j}\left(a\delta\_m+\varphi\_m-\frac{\pi}{2}\right)} - \frac{\sin\left(\pi\delta\_m\right)}{\pi\left(2i\_m+\delta\_m\right)}\,{\rm e}^{-{\rm j}\left(a\left(2i\_m+\delta\_m\right)+\varphi\_m+\frac{\pi}{2}\right)}\right) \tag{36}$$

Both the amplitude and the phase have additional disturbing components from the second part in (36):

$$G(i\_m) = \left|G(i\_m)\right|{\rm e}^{{\rm j}\left(a\delta\_m+\varphi\_m-\frac{\pi}{2}\right)} \pm \Delta(i\_m)\,,\quad \arg\left(G(i\_m)\right) = \varphi\_m + a\delta\_m - \frac{\pi}{2} \pm \Delta\varphi(i\_m) \tag{37}$$

If the displacement term is positive (0.5 > δ*<sub>m</sub>* ≥ 0), then the second largest DFT coefficient is *G*(*i<sub>m</sub>*+1), and if the displacement term is negative (0 > δ*<sub>m</sub>* ≥ −0.5), then the second largest DFT coefficient is *G*(*i<sub>m</sub>*−1) (Fig. 10). The largest side coefficient may commonly be expressed as:

$$G(i\_m+s) = \frac{A\_m}{2}\left(\frac{\sin\left(\pi\left(s-\delta\_m\right)\right)}{\pi\left(s-\delta\_m\right)}\,{\rm e}^{{\rm j}\left(a\left(\delta\_m-s\right)+\varphi\_m-\frac{\pi}{2}\right)} - \frac{\sin\left(\pi\left(2i\_m+s+\delta\_m\right)\right)}{\pi\left(2i\_m+s+\delta\_m\right)}\,{\rm e}^{-{\rm j}\left(a\left(2i\_m+s+\delta\_m\right)+\varphi\_m+\frac{\pi}{2}\right)}\right) \tag{38}$$

$$G(i\_m+s) = \left|G(i\_m+s)\right|{\rm e}^{{\rm j}\left(a\left(\delta\_m-s\right)+\varphi\_m-\frac{\pi}{2}\right)} \mp \Delta(i\_m+s)\,,\quad \arg\left(G(i\_m+s)\right) = \varphi\_m + a\left(\delta\_m-s\right) - \frac{\pi}{2} \mp \Delta\varphi(i\_m+s) \tag{39}$$

Fig. 10, with large values of the displacements, shows the amplitude and phase differences between the phasors of the short-range contributions (dotted lines) and the complete phasors *G*(*i<sub>m</sub>*) and *G*(*i<sub>m</sub>*+*s*). It should be noted that the differences have opposite signs at the two largest DFT coefficients surrounding the investigated component.

As we know, the spectrum with the Hann window can be obtained by shifting and weighting summations of the rectangular window spectrum (12). Using (34) in (12), it can be written:

$$\mathcal{W}\_{\rm H}(\theta) = \frac{\sin(\pi\theta)}{2}\cdot{\rm e}^{-{\rm j}a\theta}\cdot\left(\frac{1}{N\sin\left(\pi\theta/N\right)} - \frac{1}{2}\left[\frac{-1}{N\sin\left(\pi(\theta+1)/N\right)}\cdot{\rm e}^{-{\rm j}a} + \frac{-1}{N\sin\left(\pi(\theta-1)/N\right)}\cdot{\rm e}^{{\rm j}a}\right]\right)$$


Fig. 10. Phasor diagrams for a single component *A<sub>m</sub>* = 1, ϕ*<sub>m</sub>* = π/3, and the rectangular window: a) θ*<sub>m</sub>* = 2.4, *i<sub>m</sub>* = 2, δ*<sub>m</sub>* = 0.4; b) θ*<sub>m</sub>* = 1.6, *i<sub>m</sub>* = 2, δ*<sub>m</sub>* = −0.4

If we have a lot of points in the measurement set (*N* >> 1), the sine function can be approximated by sin(πθ/*N*) ≈ πθ/*N*. Considering also e<sup>−ja</sup> ≈ −1 and e<sup>ja</sup> ≈ −1, the expression in brackets can be simplified,

$$\frac{1}{\pi\theta} - \frac{1}{2}\left(\frac{1}{\pi(\theta+1)} + \frac{1}{\pi(\theta-1)}\right) = \frac{1}{\pi\theta\left(1-\theta^2\right)}\,,$$

and we finally get:

$$\mathcal{W}\_{\rm H}\left(\theta\right) = \frac{1}{2}\frac{\sin(\pi\theta)}{\pi\theta\left(1-\theta^2\right)}\cdot{\rm e}^{-{\rm j}a\theta} \tag{40}$$

The largest DFT coefficient can be deduced from (5) and (40) considering *i<sub>m</sub>* − θ*<sub>m</sub>* = −δ*<sub>m</sub>* and *i<sub>m</sub>* + θ*<sub>m</sub>* = 2*i<sub>m</sub>* + δ*<sub>m</sub>*:

$$G\_{\rm H}(i\_m) = \frac{A\_m}{4}\left(\frac{\sin\left(\pi\delta\_m\right)}{\pi\delta\_m\left(1-\delta\_m^2\right)}\,{\rm e}^{{\rm j}\left(a\delta\_m+\varphi\_m-\frac{\pi}{2}\right)} - \frac{\sin\left(\pi\delta\_m\right)}{\pi\left(2i\_m+\delta\_m\right)\left(1-\left(2i\_m+\delta\_m\right)^2\right)}\,{\rm e}^{-{\rm j}\left(a\left(2i\_m+\delta\_m\right)+\varphi\_m+\frac{\pi}{2}\right)}\right) \tag{41}$$

As with the rectangular window, the second part in (41) causes additional disturbing components:

$$G\_{\rm H}(i\_m) = \left|G\_{\rm H}(i\_m)\right|{\rm e}^{{\rm j}\left(a\delta\_m+\varphi\_m-\frac{\pi}{2}\right)} \pm \Delta(i\_m)\,,\quad \left|G\_{\rm H}(i\_m)\right| = \frac{A\_m}{4}\frac{\sin\left(\pi\delta\_m\right)}{\pi\delta\_m\left(1-\delta\_m^2\right)}$$

$$\arg\left(G\_{\rm H}(i\_m)\right) = \varphi\_{i\_m} = \varphi\_m + a\delta\_m - \frac{\pi}{2} \pm \Delta\varphi(i\_m) \tag{42}$$
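A quick numerical check of (42), under the chapter's sine convention *x*[*n*] = *A* sin(2πθ*<sub>m</sub>n*/*N* + ϕ*<sub>m</sub>*) with the phase referred to the window start point and *a* = π(*N*−1)/*N* (the test values *N* = 2048, θ = 6.35, ϕ = −0.9 are arbitrary): the argument of the largest Hann-windowed coefficient should match ϕ*<sub>m</sub>* + *a*δ*<sub>m</sub>* − π/2 up to the small Δϕ term.

```python
import numpy as np

N = 2048
theta, phi_true = 6.35, -0.9             # arbitrary test values
n = np.arange(N)
w = 0.5 - 0.5 * np.cos(2 * np.pi * n / N)          # Hann window starting at n = 0
x = np.sin(2 * np.pi * theta * n / N + phi_true)   # phase referred to window start
G = np.fft.fft(w * x) / N
i_m = int(np.argmax(np.abs(G[:N // 2])))           # largest coefficient index
delta = theta - i_m                                # known here; estimated in practice
a = np.pi * (N - 1) / N
phi_pred = phi_true + a * delta - np.pi / 2        # eq. (42) without the disturbance term
err = abs(np.angle(G[i_m]) - phi_pred)
```

The residual `err` is only the long-range disturbance Δϕ(*i<sub>m</sub>*), well below 10<sup>−2</sup> rad here because the Hann window suppresses the long-range tail strongly.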

The largest side coefficient can be expressed in short form as:

$$G\_{\rm H}(i\_m+s) = \left|G\_{\rm H}(i\_m+s)\right|{\rm e}^{{\rm j}\left(a\left(\delta\_m-s\right)+\varphi\_m-\frac{\pi}{2}\right)} \mp \Delta(i\_m+s)\,,\quad \left|G\_{\rm H}(i\_m+s)\right| = \frac{A\_m}{4}\frac{\sin\left(\pi\left(s-\delta\_m\right)\right)}{\pi\left(s-\delta\_m\right)\left(1-\left(s-\delta\_m\right)^2\right)}$$

$$\arg\left(G\_{\rm H}(i\_m+s)\right) = \varphi\_{i\_m+s} = \varphi\_m + a\left(\delta\_m-s\right) - \frac{\pi}{2} \mp \Delta\varphi(i\_m+s) \tag{43}$$

#### **2.3.1 Reduction of the systematic error**

In the first approximation using the rectangular window, the second term in (36) and (38) can be neglected and the phase of the component can be estimated by:

$$\varphi\_{m,{\rm R}}^{\rm I} = \arg\left[G(i\_m)\right] - a\delta\_m + \frac{\pi}{2} \tag{44}$$

$$\varphi\_{m,{\rm R}}^{\rm II} = \arg\left[G(i\_m+s)\right] + a\left(s-\delta\_m\right) + \frac{\pi}{2} \tag{45}$$

Another possibility is to estimate the component phase only by the phase of the DFT coefficient itself, where ϕ*<sub>m</sub>* is referred to the middle point of the measurement window. However, this method has the same weak point as (44) and (45), since it does not consider the long-range contributions of the window (Fig. 14d).

We can improve the estimation by considering the long-range contributions. Because the disturbing angle components Δϕ(∗) in (37) and (39) are small, they can be replaced by their sine functions and approximated by quotients:

$$\frac{\Delta\varphi(i\_m)}{\Delta\varphi(i\_m+s)} \cong \frac{\sin\left[\Delta\varphi(i\_m)\right]}{\sin\left[\Delta\varphi(i\_m+s)\right]} = \frac{\left|\Delta(i\_m)\right|}{\left|G(i\_m)\right|}\,\frac{\left|G(i\_m+s)\right|}{\left|\Delta(i\_m+s)\right|} \tag{46}$$

Here, the maximal amplitude values of the disturbing terms Δ(∗) from (36) and (38) are taken in the approximation:

$$\frac{\left|\Delta(i\_m)\right|}{\left|\Delta(i\_m+s)\right|} \cdot \frac{\left|G\left(i\_m+s\right)\right|}{\left|G\left(i\_m\right)\right|} = \frac{2i\_m+s+\delta\_m}{2i\_m+\delta\_m} \cdot \frac{\left|G\left(i\_m+s\right)\right|}{\left|G\left(i\_m\right)\right|}\tag{47}$$

If we consider only one component with a DFT coefficient index that is large enough (*i<sub>m</sub>* >> 1), or we want to symmetrically 'equalize' the long-range leakage contributions coming from both sides of the frequency axis in a multi-component signal (the non-parametric approach), the leakages can be equalized, Δ(*i<sub>m</sub>*) ≈ Δ(*i<sub>m</sub>*+*s*), and (46) can be rewritten as:

$$\frac{\Delta\varphi(i\_m)}{\Delta\varphi(i\_m+s)} \cong \frac{\left|G(i\_m+s)\right|}{\left|G(i\_m)\right|} = \frac{\delta\_m}{s-\delta\_m}\,;\qquad \left(s-\delta\_m\right)\Delta\varphi(i\_m) \cong \delta\_m\,\Delta\varphi(i\_m+s) \tag{48}$$

The multiplication of (37) and (39) by the correction (48) and their subsequent summation gives an estimation of the phase as an averaging of the two arguments ϕ<sub>*i<sub>m</sub>*</sub> = arg[*G*(*i<sub>m</sub>*)] and ϕ<sub>*i<sub>m</sub>*+*s*</sub> = arg[*G*(*i<sub>m</sub>*+*s*)] surrounding the component (Fig. 12b):


$$\varphi\_{m,{\rm R}}^{\rm III} = \left(1-\left|\delta\_m\right|\right)\cdot\varphi\_{i\_m} + \left|\delta\_m\right|\cdot\varphi\_{i\_m+s} + \frac{\pi}{2} \tag{49}$$

Fig. 11. Phase dependency errors at θ = 2.2, −π/2 ≤ ϕ ≤ π/2; Estimations: a – by (44), b – by (49), c – by (51), d – by (45)

Fig. 12. Maximal systematic errors of the phase estimations with the rectangular window: a – by (44), b – by (49), c – by (51); θ is known
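The estimations (44) and (49) can be sketched for the rectangular window, assuming the sine convention *x*[*n*] = sin(2πθ*n*/*N* + ϕ) with the phase referred to the window start point; the displacement is taken from the known θ here (in practice it comes from the frequency interpolation), and the test values (*N* = 1024, θ = 5.4, ϕ = 0.7) are arbitrary.

```python
import numpy as np

N = 1024
theta, phi_true = 5.4, 0.7                # i_m = 5, delta_m = 0.4 (arbitrary test values)
n = np.arange(N)
x = np.sin(2 * np.pi * theta * n / N + phi_true)
G = np.fft.fft(x) / N                     # rectangular window
i_m = int(np.argmax(np.abs(G[:N // 2])))
delta = theta - i_m                       # known here; normally estimated
s = 1 if delta >= 0 else -1
a = np.pi * (N - 1) / N
phi_im = np.angle(G[i_m])
phi_ims = np.angle(G[i_m + s])
phi_I = phi_im - a * delta + np.pi / 2                                   # eq. (44)
phi_III = (1 - abs(delta)) * phi_im + abs(delta) * phi_ims + np.pi / 2   # eq. (49)
```

The single-coefficient estimate (44) carries the full Δϕ(*i<sub>m</sub>*) disturbance (a few 10<sup>−2</sup> rad at *i<sub>m</sub>* = 5), while the weighted two-point average (49) cancels most of it, in line with curve b of Fig. 12.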

A better estimation can be obtained by also considering the long-range contributions in (47) when only one component is dominant in the signal – a more parametric approach (Fig. 12c).


$$\frac{\Delta\varphi\left(i_m\right)}{\Delta\varphi\left(i_m+s\right)} \cong \frac{2i_m+s+\delta_m}{2i_m+\delta_m}\cdot\frac{\left|G\left(i_m+s\right)\right|}{\left|G\left(i_m\right)\right|} = b\cdot\frac{\left|G\left(i_m+s\right)\right|}{\left|G\left(i_m\right)\right|} \tag{50}$$

$$\varphi_{m,\mathrm{R}}^{\mathrm{IV}} = \frac{\left|G\left(i_m\right)\right|\varphi_{i_m} + b\left|G\left(i_m+s\right)\right|\varphi_{i_m+s}}{\left|G\left(i_m\right)\right| + b\left|G\left(i_m+s\right)\right|} + s\,a\left(\frac{b\left|G\left(i_m+s\right)\right|}{\left|G\left(i_m\right)\right| + b\left|G\left(i_m+s\right)\right|} - \left|\delta_m\right|\right) + \frac{\pi}{2} \tag{51}$$

The systematic errors of the phase estimations E = φ_m − φ_0 (φ_0 is the true value of the phase) are phase dependent (Fig. 11; the error curves are very close to sine-like functions). In simulations, the absolute maximum values of the errors at a given relative frequency were searched while the phase was varied in the interval −π/2 ≤ φ ≤ π/2 (Fig. 12). The estimation errors drop with increasing relative frequency.

Fig. 13 shows the importance of the accuracy of the frequency estimation. If the frequency is estimated by the known two-point estimation (9), the overall errors increase (Fig. 13: E_c* ≈ 200 E_c).

Fig. 13. Maximal systematic errors of the phase estimations with the rectangular window in the interval 1.98 ≤ θ ≤ 2.02: a – by (44), b – by (49), c – by (51); a*, b*, c* – θ is estimated by (9)

Using the Hann window, the expressions for phase have the same forms as for the rectangular window when the second term in (41) and (43) is neglected:

$$
\varphi_{m,\mathrm{H}}^{\mathrm{I}} = \varphi_{i_m} - a\delta_m + \frac{\pi}{2} \tag{52}
$$

$$
\varphi_{m,\mathrm{H}}^{\mathrm{II}} = \varphi_{i_m+s} + a\left(s-\delta_m\right) + \frac{\pi}{2} \tag{53}
$$

We can again improve the estimation by considering the long-range contributions, which have the following properties:


$$\frac{\Delta\varphi\left(i_m\right)}{\Delta\varphi\left(i_m+s\right)} \cong \frac{\sin\left(\Delta\varphi\left(i_m\right)\right)}{\sin\left(\Delta\varphi\left(i_m+s\right)\right)} = \frac{\left|\Delta\left(i_m\right)\right|}{\left|G_\mathrm{H}\left(i_m\right)\right|}\cdot\frac{\left|G_\mathrm{H}\left(i_m+s\right)\right|}{\left|\Delta\left(i_m+s\right)\right|} = \frac{\left(2i_m+s+\delta_m\right)\left(1-\left(2i_m+s+\delta_m\right)^2\right)}{\left(2i_m+\delta_m\right)\left(1-\left(2i_m+\delta_m\right)^2\right)}\cdot\frac{\left|G_\mathrm{H}\left(i_m+s\right)\right|}{\left|G_\mathrm{H}\left(i_m\right)\right|} \tag{54}$$

In the non-parametric approach, one can also equalize Δ(i_m) ≈ Δ(i_m+s), and (54) can be rewritten as:

$$\frac{\Delta\varphi\left(i_m\right)}{\Delta\varphi\left(i_m+s\right)} \cong \frac{\left|G_\mathrm{H}\left(i_m+s\right)\right|}{\left|G_\mathrm{H}\left(i_m\right)\right|} = \frac{\delta_m\left(1-\delta_m^2\right)}{\left(s-\delta_m\right)\left(1-\left(s-\delta_m\right)^2\right)} = \frac{1+s\delta_m}{2-s\delta_m} \tag{55}$$
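Both the algebraic simplification in (55) and the amplitude ratio of the two Hann-window spectral lines it describes can be checked numerically. This is an illustrative sketch; the signal, window definition (periodic Hann), and parameter values are assumptions made for the demonstration:

```python
import numpy as np

N, theta, s = 128, 7.3, 1
n = np.arange(N)
w = 0.5 - 0.5 * np.cos(2 * np.pi * n / N)   # periodic Hann window
x = np.sin(2 * np.pi * theta * n / N + 0.4)

G = np.fft.fft(x * w) / N
im = int(np.argmax(np.abs(G[:N // 2])))     # peak bin i_m
delta = theta - im                          # = 0.3 here

# Algebraic identity inside (55)
long_form = delta * (1 - delta**2) / ((s - delta) * (1 - (s - delta)**2))
short_form = (1 + s * delta) / (2 - s * delta)

# Measured amplitude ratio of the two spectral lines
ratio_fft = np.abs(G[im + s]) / np.abs(G[im])
print(long_form, short_form, ratio_fft)     # all three agree closely
```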

If we equalize (2 − sδ_m)Δφ(i_m) ≅ (1 + sδ_m)Δφ(i_m+s), the equations (42) and (43) can be multiplied by the corrections and added:

$$
\begin{aligned}
\varphi_{m,\mathrm{H}}^{\mathrm{I}} &= \varphi_{i_m} - a\delta_m + \frac{\pi}{2} \pm \Delta\varphi\left(i_m\right) &&\Big|\cdot\left(2 - s\delta_m\right)\\
\varphi_{m,\mathrm{H}}^{\mathrm{II}} &= \varphi_{i_m+s} + a\left(s-\delta_m\right) + \frac{\pi}{2} \mp \Delta\varphi\left(i_m+s\right) &&\Big|\cdot\left(1 + s\delta_m\right)\\
\varphi_{m,\mathrm{H}}^{\mathrm{III}} &= \frac{\left(2-s\delta_m\right)\varphi_{i_m} + \left(1+s\delta_m\right)\varphi_{i_m+s}}{3} + \frac{a}{3}\left(s - 2\delta_m\right) + \frac{\pi}{2}
\end{aligned} \tag{56}
$$

When the sign of the displacement is positive, s = 1, the phase estimation (56) is made with the arguments φ(i_m) and φ(i_m+1):

$$\varphi_{m,\mathrm{H}}^{\mathrm{III}}\left(\varphi_{i_m},\varphi_{i_m+1}\right) = \frac{\left(2-\delta_m\right)\varphi_{i_m}+\left(1+\delta_m\right)\varphi_{i_m+1}}{3} + \frac{a}{3}\left(1-2\delta_m\right) + \frac{\pi}{2} \tag{57}$$

and when the sign is negative, s = −1, the estimation is done with φ(i_m) and φ(i_m−1):

$$\varphi_{m,\mathrm{H}}^{\mathrm{III}}\left(\varphi_{i_m},\varphi_{i_m-1}\right) = \frac{\left(2+\delta_m\right)\varphi_{i_m}+\left(1-\delta_m\right)\varphi_{i_m-1}}{3} - \frac{a}{3}\left(1+2\delta_m\right) + \frac{\pi}{2} \tag{58}$$

The phase estimation can be improved further by averaging the estimations (57) and (58). With this averaging we get the three-point estimation (Fig. 14c):

$$\varphi_{m,\mathrm{H}}^{\mathrm{IV}}\left(\varphi_{i_m},\varphi_{i_m+1},\varphi_{i_m-1}\right) = \frac{\varphi_{m,\mathrm{H}}^{\mathrm{III}}\left(\varphi_{i_m},\varphi_{i_m+1}\right) + \varphi_{m,\mathrm{H}}^{\mathrm{III}}\left(\varphi_{i_m},\varphi_{i_m-1}\right)}{2} \tag{59}$$

$$\varphi_{m,\mathrm{H}}^{\mathrm{IV}} = \frac{\left(1-\delta_m\right)\varphi_{i_m-1} + 4\varphi_{i_m} + \left(1+\delta_m\right)\varphi_{i_m+1}}{6} - \frac{2a\delta_m}{3} + \frac{\pi}{2} \tag{60}$$
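The three-point estimation (60) can be sketched numerically on a synthetic sine. The parameter values below are illustrative assumptions, the displacement δ_m is taken as known, and a ≈ π is used as the phase slope per bin. One practical detail not spelled out in the text: the neighbouring arguments must be unwrapped towards the centre-bin argument before averaging, since adjacent Hann-window lines differ in phase by approximately a:

```python
import numpy as np

N, theta, phi0 = 128, 7.3, 0.4
n = np.arange(N)
w = 0.5 - 0.5 * np.cos(2 * np.pi * n / N)     # periodic Hann window
x = np.sin(2 * np.pi * theta * n / N + phi0)

G = np.fft.fft(x * w) / N
im = int(np.argmax(np.abs(G[:N // 2])))
delta = theta - im                             # displacement, assumed known
a = np.pi                                      # phase slope per bin

def unwrap_near(ref, ang):
    """Shift ang by a multiple of 2*pi so that it lies nearest to ref."""
    return ang + 2 * np.pi * np.round((ref - ang) / (2 * np.pi))

phi_m = np.angle(G[im])
phi_p = unwrap_near(phi_m - a, np.angle(G[im + 1]))   # expected near phi_m - a
phi_n = unwrap_near(phi_m + a, np.angle(G[im - 1]))   # expected near phi_m + a

# Three-point Hann-window phase estimation (60)
phase = (((1 - delta) * phi_n + 4 * phi_m + (1 + delta) * phi_p) / 6
         - 2 * a * delta / 3 + np.pi / 2)
print(phase)   # close to phi0
```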


The best results are obtained from the three-point estimation using the Hann window when the frequency is known (Fig. 14c: max|E(φ_m)| ≤ 1.7·10⁻⁵ rad ≈ 1 m° for θ > 5.5). If the frequency has to be estimated, the overall error increases, but it is still under the error level of the one-point estimation (Fig. 14: b* < a for θ > 5.5). In Fig. 14, for the error curve b*, the frequency is estimated by the three-point interpolation (11).

Fig. 14. Maximal systematic errors of the phase estimations with the Hann window: a – one-point by (52), b – two-point by (56), c – three-point by (60), θ is known; b* – θ is estimated by (11); and estimations using only the phase DFT coefficient: d – the rectangular window, a – the Hann window

#### **2.3.2 Uncertainty of the phase estimation**

The uncertainty propagation through the DFT procedure is well known, σ_R = σ_I = σ_DFT = (σ_t/N)·√(Σ_{k=0}^{N−1} w²(k)/2) (Agrež, 2007), where we use R(i) = Re[G(i)] and I(i) = Im[G(i)] for the real and imaginary parts of the DFT, respectively, |G(i)| = √(R²(i) + I²(i)) for the amplitude, and φ(i) = arg[G(i)] = tan⁻¹(I(i)/R(i)) for the phase. The phase uncertainty is equal to the uncertainty of the DFT procedure scaled by the amplitude coefficient, σ_φ(i) = σ_DFT/|G(i)|:

$$c_R\left(i\right) = \frac{\partial\varphi\left(i\right)}{\partial R\left(i\right)} = -\frac{I\left(i\right)}{\left|G\left(i\right)\right|^2}; \quad c_I\left(i\right) = \frac{\partial\varphi\left(i\right)}{\partial I\left(i\right)} = \frac{R\left(i\right)}{\left|G\left(i\right)\right|^2} \tag{61}$$


$$
\sigma\_{\phi(i)}^2 = \left(c\_R \sigma\_R\right)^2 + \left(c\_I \sigma\_I\right)^2 = \sigma\_{\text{DFT}}^2 \frac{I^2 + R^2}{\left|\mathbf{G}(i)\right|^4} = \frac{\sigma\_{\text{DFT}}^2}{\left|\mathbf{G}(i)\right|^2} \tag{62}
$$

It is evident that the standard uncertainty of the phase depends on the amplitude of the component. Moreover, in non-coherent sampling it changes with displacement δ as we see in the following examples.

First, we consider that the frequency is known (σ_δ = 0). For ease of understanding, one can omit the index m in (44), φ_a = φ(i) − aδ + π/2, and from (62) we get (Fig. 15a):

$$\frac{\sigma\_{\varphi\_{\text{a}}}}{\sigma\_{\text{DFT}}} = \frac{1}{\left| G(i) \right|} \tag{63}$$
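The relation above is easy to check by Monte Carlo simulation. A sketch under the following assumptions, none of which come from the text: coherent sampling (δ = 0, so |G(i)| = A/2), a unit rectangular window, a normalized DFT G = FFT(x)/N for which σ_DFT = σ_t/√(2N), and illustrative numeric values:

```python
import numpy as np

rng = np.random.default_rng(0)
N, A, phi0, sig_t = 256, 1.0, 1.0, 0.05
i0 = 16                                        # integer number of cycles (coherent)
n = np.arange(N)
clean = A * np.sin(2 * np.pi * i0 * n / N + phi0)

# Monte Carlo: standard deviation of the DFT argument at the signal bin
phases = []
for _ in range(2000):
    x = clean + rng.normal(0.0, sig_t, N)
    G = np.fft.fft(x) / N
    phases.append(np.angle(G[i0]))
measured = np.std(phases)

sigma_dft = sig_t / np.sqrt(2 * N)             # rectangular window, normalized DFT
predicted = sigma_dft / (A / 2)                # (63): sigma_phi = sigma_DFT / |G(i)|
print(measured, predicted)                     # the two agree closely
```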

For the second estimation, φ_b = (1 − sδ)φ(i) + sδ·φ(i+s) + π/2 (49), one needs the sensitivity coefficients associated with the real and imaginary coefficients for the two spectral lines i and i+s:

$$\begin{aligned} c_{R_\mathrm{b},i} &= \frac{\partial\varphi_\mathrm{b}}{\partial R\left(i\right)} = \left(1-s\delta\right)c_R\left(i\right); & c_{I_\mathrm{b},i} &= \frac{\partial\varphi_\mathrm{b}}{\partial I\left(i\right)} = \left(1-s\delta\right)c_I\left(i\right) \\ c_{R_\mathrm{b},i+s} &= \frac{\partial\varphi_\mathrm{b}}{\partial R\left(i+s\right)} = s\delta\,c_R\left(i+s\right); & c_{I_\mathrm{b},i+s} &= \frac{\partial\varphi_\mathrm{b}}{\partial I\left(i+s\right)} = s\delta\,c_I\left(i+s\right) \end{aligned} \tag{64}$$

As the correlation coefficients for the rectangular window r(R(i), R(i+s)) and r(I(i), I(i+s)) are zero, and the standard uncertainties are equal, σ_{R,∗} = σ_{I,∗} = σ_DFT (∗ = i, i+s), we can write according to (JCGM, 2008) (Fig. 15b):

$$\left(\frac{\sigma_{\varphi_\mathrm{b}}}{\sigma_\mathrm{DFT}}\right)^2 = c_{R_\mathrm{b},i}^2 + c_{I_\mathrm{b},i}^2 + c_{R_\mathrm{b},i+s}^2 + c_{I_\mathrm{b},i+s}^2; \quad \frac{\sigma_{\varphi_\mathrm{b}}}{\sigma_\mathrm{DFT}} = \sqrt{\left(\frac{1-s\delta}{\left|G\left(i\right)\right|}\right)^2 + \left(\frac{s\delta}{\left|G\left(i+s\right)\right|}\right)^2} \tag{65}$$

In calculations of the four sensitivity coefficients for the third estimation (51), one needs the partial sensitivity coefficients for the amplitude, ∂|G(∗)|/∂R(∗) = R(∗)/|G(∗)| and ∂|G(∗)|/∂I(∗) = I(∗)/|G(∗)|, and for the phase the coefficients ∂φ(∗)/∂R(∗) and ∂φ(∗)/∂I(∗) from (61), since all of them contribute in the estimation:

$$\varphi_\mathrm{c} = \frac{\left|G\left(i\right)\right|\varphi\left(i\right) + b\left|G\left(i+s\right)\right|\varphi\left(i+s\right)}{\left|G\left(i\right)\right| + b\left|G\left(i+s\right)\right|} + s\,a\left(\frac{b\left|G\left(i+s\right)\right|}{\left|G\left(i\right)\right| + b\left|G\left(i+s\right)\right|} - s\delta\right) + \frac{\pi}{2} \tag{66}$$

$$c_{R_\mathrm{c},*} = \frac{\partial\varphi_\mathrm{c}\left(*\right)}{\partial R\left(*\right)}; \qquad c_{I_\mathrm{c},*} = \frac{\partial\varphi_\mathrm{c}\left(*\right)}{\partial I\left(*\right)} \tag{67}$$

In this case the uncertainty is close to the uncertainty of the estimation by (49) (Fig. 15c).

Non-Parametric Estimation of the Periodic Signal Parameters in the Frequency Domain 209

The price for the effective leakage reduction is an increase of the estimation uncertainties related to the unbiased CRB. The ratios of the uncertainties of the phase estimations related to the CRB (68) lie between 1 and 1.7, depending upon the term δ, at higher values of the relative frequency θ (Fig. 15).

$$
\sigma_{\varphi_m} \geq 2\,\frac{1}{\sqrt{SNR}}\,\frac{1}{\sqrt{N}} = \sigma_{\mathrm{CRB},\varphi} \tag{68}
$$

Fig. 15. Ratios of the uncertainties of the phase estimations with the rectangular window related to the CRB (68). Estimations: a – by (44), b – by (49), c – by (51), θ is known; a*, b*, c* – θ is estimated by (9b)

The uncertainties of the estimations increase if one also needs to estimate the displacement term δ(|G(i_m)|, |G(i_m+s)|). In the estimation algorithms (44), (49), and (51), one then also needs the partial sensitivity coefficients for the displacement term, ∂δ/∂G(i) and ∂δ/∂G(i+s). Fig. 15 shows that the uncertainties of the estimations increase by a factor of 2 to 2.3 if the frequency is estimated by (9b). The uncertainties of the estimations where the frequency has to be estimated first are very close to each other (Fig. 15: a* ≈ b* ≈ c*) at higher values of θ. For the Hann window, the uncertainty levels increase (62), as the main amplitude coefficient decreases by a factor of 2 with respect to the rectangular window.

#### **3. Conclusion**

In this chapter, the non-parametric interpolated DFT algorithms for emulating coherent sampling are described. The advantages of the DFT interpolations for the frequency, amplitude, and phase of the signal component are identified. Interpolations where the long-range leakage is considered illustrate a decrease in systematic effects. The algorithms retain all benefits of the DFT approach and improve the estimation accuracy adaptively for a particular component as a function of its frequency position. The weights emphasize the DFT coefficients of the spectrum peak related to the investigated component. The spectrum of the window used must be formally well known, like that of the Hann window, for a better analytical expression. Interpolation with a larger number of the DFT coefficients decreases the systematic errors. It can be concluded that if we selectively use a different number of the DFT coefficients in the interpolation algorithms for a particular component of the signal, we adapt the apparent window shape for that component. A trade-off between a reduction in the systematic error of the parameter estimation and the uncertainty of the estimated results is highlighted. The use of a suitable interpolation algorithm depends on the level of the noise floor of the acquisition channel, or, better, the *SNR*, and on the position of the frequency component along the frequency axis.

### **4. References**

Agrež, D. (2002). Weighted Multi-Point Interpolated DFT to Improve Amplitude Estimation of Multi-Frequency Signal. *IEEE Transactions on Instrumentation and Measurement*, Vol. 51, No. 2, April 2002, pp. 287-292, ISSN 0018-9456

Agrež, D. (2007). Dynamics of frequency estimation in the frequency domain. *IEEE Transactions on Instrumentation and Measurement*, Vol. 56, No. 6, December 2007, pp. 2111-2118, ISSN 0018-9456

Belega, D. & Dallet, D. (2009). Multifrequency signal analysis by interpolated DFT method with maximum sidelobe decay windows. *Measurement*, Vol. 42, No. 3, April 2009, pp. 420-426, ISSN 0263-2241

D'Antona, G. & Ferrero, A. (2006). *Digital Signal Processing for Measurement Systems. Theory and Applications*, Springer Science 2006, ISBN-10: 0-387-24966-4

Gabor, D. (1946). Theory of communication. *Journal of the IEE*, Vol. 93, pp. 429-457

Harris, F. J. (1978). On the use of windows for harmonic analysis with the discrete Fourier transform. *Proceedings of the IEEE*, Vol. 66, No. 1, January 1978, pp. 51-83, ISSN 0018-9219

JCGM (2008). *Evaluation of measurement data — Guide to the expression of uncertainty in measurement*. Joint Committee for Guides in Metrology [JCGM] 100:2008, First edition, September 2008

Novotný, M. & Sedláček, M. (2010). The influence of window sidelobes on DFT-based multifrequency signal measurement. *Computer Standards & Interfaces*, Vol. 32, No. 3, March 2010, pp. 110-118, ISSN 0920-5489

Petri, D. (2002). Frequency-domain testing of waveform digitizers. *IEEE Transactions on Instrumentation and Measurement*, Vol. 51, No. 3, 2002, pp. 445-453, ISSN 0018-9456

Schoukens, J., Pintelon, R. & Van hamme, H. (1992). The interpolated fast Fourier transform: A comparative study. *IEEE Transactions on Instrumentation and Measurement*, Vol. 41, No. 2, April 1992, pp. 226-232, ISSN 0018-9456

Siebert, W. McC. (1986). *Circuits, Signals and Systems*, The MIT Press, ISBN 0-262-19229-2, McGraw-Hill, Cambridge, New York, pp. 497-502

Solomon, O. M. (1992). The effects of windowing and quantization error on the amplitude of frequency-domain functions. *IEEE Transactions on Instrumentation and Measurement*, Vol. 41, No. 6, December 1992, pp. 932-937, ISSN 0018-9456

Widrow, B. & Kollar, I. (2008). *Quantization Noise*, Cambridge Univ. Press 2008, ISBN 978-0-521-88671-0, Cambridge, New York




## **A Reformulative Retouch on the Fourier Transform – "Unprincipled" Uncertainty Principle**

Csaba Szántay, Jr. *Gedeon Richter Plc. Hungary* 

### **1. Introduction**



According to a widespread notion, a monochromatic harmonic temporal wave's frequency ω becomes inherently "uncertain" to a degree Δω if the wave lasts for only a limited time Δ*t* owing to the "Uncertainty Principle" which states that constant Δ*t* ⋅ Δω ≥ (i.e. ωΔ cannot be zero if Δ*t* is finite). This frequency uncertainty is commonly thought of as also being reflected in the fact that the ("sinc"-shaped) frequency spectrum obtained by Fouriertransforming a finite-duration monochromatic wave spreads over a nonzero frequency range. Since, as often argued, the Fourier spectrum represents the "frequency components" of a temporal function (or signal), the nonzero width of the spectrum tells us that the "nominally" monochromatic ω frequency of a time-limited sinusoid is "effectively" polychromatic, with frequencies continuously distributed over a Δω range in accord with the Uncertainty Principle.

A typical area where this argument shows up is Pulsed Fourier-Transform (PFT) nuclear magnetic resonance (NMR) spectroscopy. In NMR a molecular ensemble is placed in a strong homogeneous static magnetic field, whereupon the ensemble's NMR-active nuclei (those that act like spinning atomic magnets, say protons) will precess about the direction of the magnetic field. Those "spins" in the ensemble which precess (Larmor precession) with identical ω<sup>L</sup> frequencies form a net macroscopic magnetization which is normally aligned along the external field, but which will also (Larmor-)precess with a frequency ω<sup>L</sup> following a transient perturbation of the magnetization from its equilibrium state. Typically there are several such bundles of homogeneous spins in the analyzed sample, covering a range of Larmor frequencies Δω<sup>L</sup>. PFT-NMR works on the principle that the sample's bulk magnetizations of interest can be excited into precession mode (made to "resonate") simultaneously in a wide Δω<sup>L</sup> ("off-resonance") frequency range when perturbed by a "driving" electromagnetic wave of a single well-defined monochromatic radio frequency (RF) ω<sup>D</sup> if the wave is applied as a "pulse" within a sufficiently short time Δ*t*. Because this phenomenon may appear counter-intuitive, the fundamental NMR literature abounds with attempts to offer a preliminary explanation for why pulsed excitation works, typically invoking the "Uncertainty Principle" and the "sinc"-shaped FT-spectrum of the pulse according to the notions outlined above. Some representative examples from authoritative

sources are as follows: "…although the applied excitation may be precisely centred at a frequency ω<sup>D</sup>, […], our act of turning the excitation power on at time zero and off at time Δ*t* effectively broadens the spectral range of the excitation (to a bandwidth of ~1/Δ*t*)." (Marshall and Verdun, 1990); "A pulse of monochromatic RF with a rectangular envelope can be described in the frequency domain as a band of frequencies centred at the RF frequency. The Heisenberg principle states that there is a minimum uncertainty in the simultaneous specification of […] the frequency of a system and the duration of the measurement. […] this means that the […] irradiation is spread over a wide frequency band. […] the 'sinc' Fourier spectrum of a rectangular RF pulse shows that a shorter pulse gives a wider 'sinc' band and a longer pulse gives a narrower 'sinc' band." (King and Williams, 1989); "…if the pulse is made shorter, we will no longer have a truly monochromatic Fourier spectrum even though the source is still monochromatic. This is because many different frequencies have to be combined in order to form the rising and falling edges of the rectangular pulse." (Fukushima and Roeder, 1981); "… as the Uncertainty Principle indicates, a pulse [of carrier frequency ω<sup>D</sup>] will contain, in effect, a range of frequencies centred on ω<sup>D</sup>. […] the distribution of RF magnetic field amplitudes takes the 'sinc' form […] which is the frequency-domain equivalent of a short pulse in the time domain. The two domains are connected by the Fourier transform." (Harris, 1983); "… the RF source […] is monochromatic, so we have to work out a way of using a single frequency to excite multiple frequencies. To see how this can be done we take our cue from the Uncertainty Principle. If the irradiation is applied for a time Δ*t*, then […] the nominally monochromatic irradiation is uncertain in frequency by about 1/Δ*t*." (Derome, 1987)

One should quickly note that at a macroscopic level the basic NMR phenomenon is a purely classical effect (Hoult, 1989 and Hanson, 2008) involving interactions between the oscillating magnetic component of the RF pulse and the sample's bulk magnetizations. Indeed, the fact that a monochromatic pulse can excite spins in a wide frequency range can be well explained classically using the laws of driven mechanical harmonic oscillators and magnetism (Bloch, 1946) without resorting to the vague idea that the ω<sup>D</sup> RF pulse frequency should be "uncertain" or "effectively" polychromatic according to its Fourier spectrum. Furthermore, except for one case the above arguments are unclear on whether by "Uncertainty Principle" they mean the Heisenberg Uncertainty Principle (HUP), which is a probabilistic, quantum-mechanical statement, or the Fourier Uncertainty Principle (FUP), which is a deterministic theorem in the discipline of time-frequency analysis.

In a series of recent papers (Szántay, 2007, 2008a, 2008b, 2008c) the author of the present work explored the above problem in detail, showing that a) the idea that the frequency of a time-limited sinusoid would be uncertain in the Heisenberg sense is flawed; b) it is the FUP (and not the HUP) that is of relevance to the problem; c) the notion that a monochromatic pulse "effectively" comprises a range of physically existing frequency components is erroneous. During that discourse the need had come up to contemplate the essence of the Fourier Transform (FT) itself, which may appear somewhat odd considering the fact that the FT is ubiquitously applied in a broad range of sciences and technology, and works explaining its mathematics and its manifold physical applications are available in almost countless abundance. However, it turns out that the FT is so commonplace, it has become so much consolidated into many fields of science and engineering, its basic principles and formulas are so commonly taken for granted, that, paradoxically, this can easily stop people from thinking more deeply about how and why the FT works as a mathematical tool and what exactly it means when evoked to interpret physical phenomena. One of the central features of the present author's approach in discussing the FT was the use of a "touched up" mathematical formalism to provoke a new way of looking at some key ideas of the FT and to avoid some hidden ambiguities in meaning that are otherwise inherent to the conventional formalism and can give rise to the misconceptions mentioned above. Besides coming out with a disambiguative but symbolically elaborate formalism for the FT involving the depiction of the domain intervals on which functions "exist" and introducing the concept of thinking about the FT in either "trigonometric-frequency-space" or "phasor-frequency-space", those articles also expounded, from a new perspective, on several apparently simple but in practice elusive related concepts such as "periodicity", "harmonic oscillation", "phasor", "negative frequency", "uncertainty", etc.; much emphasis was placed also on a comparison of Fourier analysis and the Fourier transform, on the pros and cons of metaphoric thinking and the dangers of unrecognized linguistic ambiguities (e.g., words and phrases like "*uncertainty*", "*nominally* monochromatic" and "*effectively* polychromatic" deceptively appear to be meaningful, but on deeper examination their meaning is ambiguous and notoriously misunderstood).

Owing to the volume and thematic complexity of those papers the need has repeatedly arisen to present their main thrusts in a more compact and symbolically simplified form. The present work therefore attempts to condense the original discussion into a shorter communication by focusing only on two themes, namely the reformulation of the primary FT equations (other closely related topics such as Fourier analysis will be left out) and the main conclusions on the problem of the "Uncertainty Principle". These thoughts are hoped to provide some fresh insights and a deeper understanding of the essence of the FT and the Uncertainty Principles. In that regard it should be emphasized that the applied mathematical formalism is not a l'art pour l'art design: it serves to facilitate a more conscientious way of thinking about the pertinent topics. For the full technical and philosophical details behind the ideas presented here the reader is encouraged to consult the original articles.

### **2. Nomenclature**

### **2.1 General notation**

• The term *Fourier analysis* will be used to refer to the Fourier series expansion of periodic functions and the term *Fourier transform* (FT) will refer to the generalization of Fourier analysis including nonperiodic functions. The FT will be discussed only in the context of the Fourier-pair variables time (*t*) and frequency (ω). The word "spectrum" will in general mean the ω-dimension (ω-D) representation of a time-dimension (*t*-D) function, more specifically as it has been obtained by the FT.

• Sets will be represented by bold, italicized letters while vectors will be denoted by bold, non-italicized letters. If in a set **x** its members *x* can assume values on a continuous scale, I will use the symbol "@" as a primary generic index to tag an arbitrary value as "*x*<sub>@</sub>" so as to emphasize that we are focusing on that particular value in the set ("@" can be interpreted as: "*at* a given value"). Members *x* of the sets **x**, <sup>+</sup>**x** and <sup>−</sup>**x** can in general take on negative as well as positive values, only nonnegative values, and only nonpositive values, respectively. If a set **x** represents an interval for the variable *x* ∈ **x**, we will consider five main interval types labeled as follows: <sup>⇔</sup>**x** = (*x* = −∞, *x* = ∞) is open and unbounded on both sides; <sup>•−•</sup>**x** = [*x*<sub>@</sub>, *x*<sub>@′</sub>] is closed and bounded (compact) at arbitrary *x*<sub>@</sub> and *x*<sub>@′</sub> values on both sides; <sup>⇒</sup>**x** = [*x* = 0, *x* = ∞) is closed and bounded at zero on the left and open and unbounded on the right, and <sup>⇐</sup>**x** = (*x* = −∞, *x* = 0] is closed and bounded at zero on the right and open and unbounded on the left; <sup>•</sup>**x**<sub>@</sub> = {*x* : *x*<sub>@</sub> ≤ *x* ≤ *x*<sub>@′</sub>, *x*<sub>@′</sub> → *x*<sub>@</sub>} will be associated with a function which has a finite function value at *x* = *x*<sub>@</sub> but is zero if *x* ≠ *x*<sub>@</sub>; <sup>↑</sup>**x**<sub>@</sub> = {*x* : *x*<sub>@</sub> ≤ *x* ≤ *x*<sub>@′</sub>, *x*<sub>@′</sub> → *x*<sub>@</sub>} will be used in connection with a Dirac delta whose function value is infinite at *x* = *x*<sub>@</sub> but zero if *x* ≠ *x*<sub>@</sub>.

• Underlined symbols are used to denote complex quantities having a real (ℜ) and an imaginary (ℑ) component, so a complex constant <u>*A*</u> = (<sup>ℜ</sup>*A*, <sup>ℑ</sup>*A*) is defined and symbolized as <u>*A*</u> = <sup>ℜ</sup>*A* + *i*⋅<sup>ℑ</sup>*A* = <sup>ℜ</sup>*A* + <sup>ℑ</sup>*A*′ where <sup>ℑ</sup>*A*′ = *i*⋅<sup>ℑ</sup>*A*. Hence <u>*A*</u> has the absolute value |<u>*A*</u>| = *A* = √[(<sup>ℜ</sup>*A*)² + (<sup>ℑ</sup>*A*)²] = √[(<sup>ℜ</sup>*A*)² − (<sup>ℑ</sup>*A*′)²]. A complex number <u>*A*</u> expressed in the form of a complex exponential <u>*A*</u> = *A*⋅exp(*i*⋅ϕ) will be referred to as a *phasor*, where the phase ϕ is measured from the positive real axis (+ℜ) with a counterclockwise angle being positive and a clockwise angle being negative. A non-underlined character, if lacking the tags ℜ or ℑ, represents a real-valued quantity by default. Likewise, a function in general is represented as *f*(*x*) if it is real-valued and as <u>*f*</u>(*x*) if it is complex-valued.

• The action of a mathematical formula (rule, operator, etc.) upon a set will be denoted as Formula⟨set⟩. Thus, a rule acting on a variable set **x** to give a function *f*(*x*) is written as *f*(*x*) = Rule⟨**x**⟩, and an operator that converts a function *f*(*x*) into *F*(*y*) is depicted as *F*(*y*) = Operator⟨*f*(*x*)⟩.

• The natural (largest possible) domain of a function *f*(*x*) = Rule⟨**x**⟩ is the set of all allowable inputs that the function's argument *x* may assume. However, *f*(*x*) can of course be restricted to, or can simply be of interest to us over, a subset of the natural domain, and herein that subset will be called the *operative interval* of *f*(*x*); a function's operative interval will be denoted as a subscript following a bar placed after the function. A function should be interpreted either as having no values or zero function values for any *x* outside its operative interval. For example, for a single rectangular "bump" along *x* we have <sup>•−•</sup>**x** corresponding to the rectangle's width, while for a function that starts with a finite value at *x* = 0 but decays exponentially to zero at *x* = ∞ we have <sup>⇒</sup>**x** = **x** ranging from zero to positive infinity. Care must be taken not to confuse the operative interval (which is always indicated as a subscript following a bar placed after the function's name or rule of association) with the function's natural domain. For example, for a function *f*(*x*)|<sub>•−•**x**</sub> the natural domain may be <sup>⇔</sup>**x** but the operative interval is <sup>•−•</sup>**x** ∈ <sup>⇔</sup>**x**; for *f*(*x*)|<sub>⇔**x**</sub> the operative interval is <sup>⇔</sup>**x**, implying that the natural domain must also be <sup>⇔</sup>**x**. A proper understanding of the FT requires that one should always be conscious of what domains and intervals are relevant to the time scales and frequency ranges in the pertinent mathematical expressions. These aspects are often far from being self-evident from the formulas themselves in their conventional notation, and are usually present as hidden assumptions or unstated premises. To overcome this problem herein the pertinent interval types will be represented as *f*(*x*)|<sub>⇔**x**</sub> = Rule⟨**x**⟩|<sub>⇔**x**</sub>, *f*(*x*)|<sub>•−•**x**</sub> = Rule⟨**x**⟩|<sub>•−•**x**</sub>, *f*(*x*)|<sub>⇒**x**</sub> = Rule⟨**x**⟩|<sub>⇒**x**</sub>, *f*(*x*)|<sub>⇐**x**</sub> = Rule⟨**x**⟩|<sub>⇐**x**</sub>, *f*(*x*)|<sub>↑**x**@</sub> = *A*<sub>@</sub>⋅δ(*x* − *x*<sub>@</sub>) = <sup>∞</sup>*A*<sub>@</sub> if *x* = *x*<sub>@</sub> and otherwise zero, and *f*(*x*)|<sub>•**x**@</sub> = *A*<sub>@</sub> if *x* = *x*<sub>@</sub> and otherwise zero. If the properties of *f*(*x*) are considered on different intervals simultaneously, the pertinent interval symbols will be separated by semicolons, such as *f*(*x*)|<sub>⇔**x**;•−•**x**</sub> = Rule⟨**x**⟩|<sub>⇔**x**;•−•**x**</sub>.

#### **2.2 Trigonometric and phasor frequencies**

The symbol ω will be used as a general representation of (angular) frequency such that the oscillation frequencies associated with a temporal function have constant values (ω(*t*) = constant) over the operative interval **t** of the function. In the customary notation a sinusoid (trigonometric) harmonic wave with a complex amplitude (i.e., a 1D oscillation in the complex plane that gives a 2D wave as a function of time) has the general form *A*(ω)⋅cos[ω*t* + ϕ<sub>D</sub>(ω)] and a *harmonic phasor* (i.e., a vector **A** rotating uniformly either clockwise or anti-clockwise in the complex plane, and describing a 3D helix as a function of time) has the general form *A*(ω)⋅exp[*i*⋅ϕ<sub>D</sub>(ω)]⋅exp(*i*⋅ω*t*) = <u>*A*</u>(ω)⋅exp(*i*⋅ω*t*), where ϕ<sub>D</sub> is the wave's initial phase at *t* = 0. For the harmonic phasor the phase angle ϕ(*t*) = ω*t* + ϕ<sub>D</sub> is measured from the positive real axis (+ℜ) with a counterclockwise angle being positive and a clockwise angle being negative. For simplicity both a sinusoid wave and a harmonic phasor will be referred to as a "harmonic wave".

The sinusoid and phasor forms of a harmonic wave are connected by Euler's famous identity, written in its conventional form as exp(*i*⋅ω*t*) = cos(ω*t*) + *i*⋅sin(ω*t*), from which the trigonometric terms can be expressed by rearrangement as cos(ω*t*) = 0.5⋅exp(*i*⋅ω*t*) + 0.5⋅exp(−*i*⋅ω*t*), and sin(ω*t*) = −0.5⋅*i*⋅exp(*i*⋅ω*t*) + 0.5⋅*i*⋅exp(−*i*⋅ω*t*). Using this conventional notation the concept of the sign of ω, in particular the idea of a negative frequency, can be a confusing issue. Thus, based on making a distinction between "trigonometric frequency" and "phasor frequency", the following notation is introduced: a *trigonometric frequency* (the frequency of a sinusoid oscillation) will be labeled as ϖ ∈ **ϖ**; a *phasor frequency* (the frequency of a harmonic phasor) will be labeled as ω̸ ∈ **ω̸**. Accordingly, a sinusoid harmonic wave *h*<sub>ϖ</sub>(*t*) and a harmonic phasor *h*<sub>ω̸</sub>(*t*) take the following general forms:

$$
\underline{h}_{\varpi}(t) = \underline{A}(\varpi)\cdot\cos[\varpi t + \varphi_{\mathrm{D}}(\varpi)]\,, \tag{1}
$$

$$
\underline{h}_{\not\omega}(t) = A(\not\omega)\cdot e^{i\cdot\varphi_{\mathrm{D}}(\not\omega)}\cdot e^{i\cdot\not\omega t} = \underline{A}(\not\omega)\cdot e^{i\cdot\not\omega t} = {}^{\Re}A(\not\omega)\cdot e^{i\cdot\not\omega t} + i\cdot{}^{\Im}A(\not\omega)\cdot e^{i\cdot\not\omega t}\,. \tag{2}
$$

Note from (2) that the phasor's complex amplitude <u>*A*</u> always corresponds to *t* = 0 since <u>*A*</u> is defined by its magnitude (absolute value) and the phasor's initial phase, which are both time-independent quantities. From (2) we see that the phasor <u>*A*</u>(ω̸)⋅exp(*i*⋅ω̸*t*) can be equivalently viewed also as a sum of two phasors of the same frequency, one with a real amplitude <sup>ℜ</sup>*A*, and the other with a pure imaginary amplitude *i*⋅<sup>ℑ</sup>*A*.

For the trigonometric wave *h*<sub>ϖ</sub>(*t*) the meaning of ϖ seems straightforward: it expresses the number of full periods (multiplied by 2π) completed by the wave in unit time, whereby ϖ is defined as having positive numerical values only, since it would not make sense to speak of a negative number of oscillations per unit time. The algebraically positive nature of ϖ merits emphasis because in the context of the FT trigonometric frequencies are sometimes (mistakenly) given negative values in the literature [e.g., the fact that neither the cosine transform nor the sine transform alone can distinguish between the two possible rotational directions of a harmonic phasor is often explained by the formal truths that cos[(−ω)*t*] = cos(ω*t*) and sin[(−ω)*t*] = −sin(ω*t*)]. However, one should appreciate that the concept of cos[(−ω)*t*] (or cos[(−ϖ)*t*]) or sin[(−ω)*t*] (or sin[(−ϖ)*t*]) does not make any physical sense. This issue is partly rooted in the fact that the literature makes no symbolic distinction between "ϖ" and "ω̸", hence the intended meaning (phasor vs. trigonometric) of "frequency", labeled as "ω", is typically an implicit consequence of the context in which it appears but is not implied or conveyed by the symbol "ω" itself. Thus the formulas cos[(−ω)*t*] or sin[(−ω)*t*] may easily escape mental objection when speaking about positive and negative phasor frequencies when the latter are also labeled as +ω and −ω. There is in fact no need to allow sinusoid frequencies to be negative (see below) and we can adhere to the physically appropriate definition that trigonometric frequencies are always positive.

A harmonic phasor *h*<sub>ω̸</sub>(*t*) has either a positive or a negative frequency, +ω̸ or −ω̸, depending on the direction of "skew" (or "helicity") of the 3D helix described by the phasor as a function of time. I employ the convention according to which for positive ω̸ the phasor *h*<sub>+ω̸</sub>(*t*) = <u>*A*</u>⋅exp[*i*⋅(+ω̸)⋅*t*] corresponds to the vector **A** rotating in the complex plane in the +ℜ → +ℑ → −ℜ → −ℑ direction as time progresses, while for negative ω̸ the phasor *h*<sub>−ω̸</sub>(*t*) = <u>*A*</u>⋅exp[*i*⋅(−ω̸)⋅*t*] = <u>*A*</u>⋅exp(−*i*⋅ω̸*t*) represents rotation in the opposite sense.

Note that according to the above considerations there is an intrinsic difference between the concepts of the "sign" of the frequency for a harmonic sinusoid wave and a harmonic phasor. If, in an (ℜ, ℑ, *t*) space, we associate a positive skew with +ω̸ and a negative skew with −ω̸ (and this is what we mean by the positive and negative sign of ω̸), then a trigonometric frequency ϖ, which is "skew-less", should also be "sign-less". Thus our definition of ϖ being always positive carries the implicit understanding that a numerically positive frequency value requires a different physical interpretation for ϖ and <sup>+</sup>ω̸.

All subsequent equations will be formulated in line with the above definitions on the concept of frequency. Forcing oneself to accordingly rethink the common text-book equations related to the FT is an instructive endeavour yielding added levels of insight.

Furthermore, the concept of a "frequency sweep" is introduced as follows: the symbols ω̸<sup>G</sup>, ω̸<sup>H</sup> and ω̸<sup>I</sup> emphasize the fact that we take into account all values of the variable ω̸ in the continuous sets <sup>⇔</sup>**ω̸**, <sup>⇒</sup>**ω̸** and <sup>⇐</sup>**ω̸**, respectively. Thus we may think of ω̸<sup>G</sup>, ω̸<sup>H</sup> and ω̸<sup>I</sup> as representing a "sweep" of the frequency value from −∞ to +∞, from 0 to +∞, and from 0 to −∞, respectively, in phasor-frequency space. Likewise, the symbol ϖ<sup>G</sup> designates a trigonometric-frequency "sweep" of ϖ from 0 to +∞ in the set <sup>⇒</sup>**ϖ**. Frequency-sweep notation will be used, when needed for emphasis, in expressions describing temporal harmonic waves as well as in the function names of FT-spectra. For example, when stressing that we are considering the set of all sinusoid harmonic waves *h*<sub>ϖ</sub>(*t*) covering the entire trigonometric frequency range <sup>⇒</sup>**ϖ**, where each wave lasts from −∞ to +∞, we write *h*<sub>ϖG</sub>(*t*)|<sub>⇔**t**</sub> = <u>*A*</u>(ϖ<sup>G</sup>)⋅cos[ϖ<sup>G</sup>*t* + ϕ<sub>D</sub>(ϖ<sup>G</sup>)]. Similarly, the set of all perpetual harmonic phasors in the infinite phasor-frequency range <sup>⇔</sup>**ω̸** can be written as *h*<sub>ω̸G</sub>(*t*)|<sub>⇔**t**</sub> = <u>*A*</u>(ω̸<sup>G</sup>)⋅exp(*i*⋅ω̸<sup>G</sup>*t*). If however we want to focus on a harmonic wave at a given frequency value we write *h*<sub>ϖ@</sub>(*t*) = <u>*A*</u><sub>@</sub>⋅cos(ϖ<sub>@</sub>*t* + ϕ<sub>D@</sub>) or *h*<sub>ω̸@</sub>(*t*) = <u>*A*</u><sub>@</sub>⋅exp(*i*⋅ω̸<sub>@</sub>*t*).

#### **2.3 FT-contextualized function notation in the time and frequency dimensions**

*In the time dimension* we generally consider complex-valued temporal functions <u>*f*</u>(*t*)|<sub>•−•**t**</sub> on a real timeline, with the operative duration <sup>•−•</sup>**t** corresponding to the time frame over which the (forward) FT is performed. This means that <u>*f*</u>(*t*) = "*something*" for *t* ∈ <sup>•−•</sup>**t** (that "something" may contain values of <u>*f*</u>(*t*) = zero besides nonzero values) but <u>*f*</u>(*t*) = "*nothing*" (i.e., <u>*f*</u>(*t*) is either zero or nonexistent) for all *t* ∉ <sup>•−•</sup>**t**. All <u>*f*</u>(*t*) functions that are of practical relevance to us can be Fourier-transformed, i.e., their spectrum exists and <u>*f*</u>(*t*) can be recovered exactly from it.

The ways in which the forward and inverse FT operators <sup>→</sup>FT and <sup>←</sup>FT connect a *t*-D function <u>*f*</u>(*t*) and its ω-D counterpart <u>*F*</u>(ω) are denoted as <u>*F*</u>(ω) = <sup>→</sup>FT⟨<u>*f*</u>(*t*)⟩ and <u>*f*</u>(*t*) = <sup>←</sup>FT⟨<u>*F*</u>(ω)⟩, respectively. The so-called Fourier Inversion Theorem (FIT) (see below) states that the FT is an invertible transform: there is a one-to-one correspondence between all temporal and spectral function pairs that we consider here; to reflect this concept the forward and backward FT operator symbols are combined into the symbol <sup>↔</sup>FT, so the link between the two dimensions can be depicted as: <u>*f*</u>(*t*) ⟵<sup>↔FT</sup>⟶ <u>*F*</u>(ω).

For the trigonometric wave *t(h* ) <sup>ϖ</sup> the meaning of ϖ seems straightforward: it expresses the number of full periods (multiplied by 2π) completed by the wave in unit time, whereby ϖ is defined as having positive numerical values only since it would not make sense to speak of a negative number of oscillations per unit time. The algebraically positive nature of ϖ merits emphasis because in the context of the FT tigonometric frequencies are sometimes (mistakenly) given negative values in the literature [e.g. the fact that neither the cosine transform, nor the sine transform alone can distinguish between the two possible rotational directions of a harmonic phasor is often explained by the formal truths that ω− *t*])cos[( = ω)cos( *t* and −ω*t*])sin[( = − ω)sin( *t* ]. However, one should appreciate that the concept of −ω*t*])([cos (or −ϖ *t*])([cos ) or −ω*t*])sin[( (or −ϖ *t*])sin[( ) does not make any physical sense. This issue is partly rooted in the fact that the literature makes no symbolic distinction between " ϖ " and " ω/ ", hence the intended meaning (phasor vs. trigonometric) of "frequency", labeled as " ω ", is typically an implicit consequence of the context in which it appears but is not implied or conveyed by the symbol " ω " itself. Thus the formulas ω− *t*])cos[( or −ω*t*])sin[( may easily escape mental objection when speaking about positive and negative phasor frequencies when the latter are also labeled as +ω and −ω. There is in fact no need to allow sinusoid frequencies to be negative (see below) and we can adhere to the physically appropriate definition that trigonometric frequencies are always positive. A harmonic phasor ) <sup>ω</sup>(*th* / has either a positive or a negative frequency +ω/ and −ω/ depending on the direction of "skew" (or "helicity") of the 3D helix described by the phasor as a function of time. I employ the convention according to which for positive ω/ the phasor

<sup>+</sup> ⋅⋅= / corresponds to the vector **A** rotating in the complex plane in the −ℑ→−ℜ→+ℑ→+ℜ direction as time progresses, while for negative ω/ the phasor

Note that according to the above considerations there is an intrinsic difference between the concepts of the "sign" of the frequency for a harmonic sinusoid wave and a harmonic phasor. If, in a <sup>ℜ</sup> <sup>ℑ</sup> *<sup>t</sup>*),,( space, we associate a positive skew with +ω/ and a negative skew with −ω/ (and this is what we mean by the positive and negative sign of ω/ ), then a trigonometric frequency ϖ , which is "skew-less", should also be "sign-less". Thus our definition of ϖ being always positive carries the implicit understanding that a numerically

All subsequent equations will be formulated in line with the above definitions on the concept of frequency. Forcing oneself to accordingly rethink the common text-book equations related to the FT is an instructive endeavour yielding added levels of insight.

<sup>−</sup> ⋅⋅= / ⋅−⋅= / represents rotation in the opposite sense.

positive frequency value requires a different physical interpretation for ϖ and <sup>+</sup>

amplitude *A*<sup>ℜ</sup> , and the other with a pure imaginary amplitude *Ai*ℑ⋅ .

exp()( <sup>ω</sup> ) <sup>ω</sup> *tiAth* <sup>+</sup>

exp()( <sup>ω</sup> exp() <sup>ω</sup> ) <sup>ω</sup> *tiAtiAth* <sup>−</sup> <sup>+</sup>

/

/

*ht A t* ( ) ( ) cos[( ) ( )] <sup>ϖ</sup> = ϖ ⋅ ϖ +ϕ ϖ<sup>D</sup> , (1)

. (2)

ω/ .

( ) ωω ω ω
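The decomposition in (2) and the helicity convention above can be cross-checked numerically. The following NumPy sketch uses illustrative values for $|A|$, the initial phase and $\not\omega$ (these numbers are my assumptions, not the chapter's); it verifies that a phasor with a complex amplitude splits into a real-amplitude plus an imaginary-amplitude phasor of the same frequency, and that negating the frequency reverses the sense of rotation.

```python
import numpy as np

# Illustrative parameters (assumptions for this sketch, not from the text).
A_mag, phase, w = 2.0, 0.7, 3.0          # |A|, initial phase, phasor frequency
t = np.linspace(0.0, 1.0, 1000)

A = A_mag * np.exp(1j * phase)           # complex amplitude, i.e. the t = 0 vector
h_pos = A * np.exp(1j * w * t)           # positive-frequency phasor
h_neg = A * np.exp(-1j * w * t)          # negative-frequency phasor

# Eq. (2): the phasor equals a real-amplitude plus an imaginary-amplitude phasor.
split = A.real * np.exp(1j * w * t) + 1j * A.imag * np.exp(1j * w * t)
assert np.allclose(h_pos, split)

# Opposite helicity: the unwrapped rotation angle advances in opposite directions.
ang_pos = np.unwrap(np.angle(h_pos))
ang_neg = np.unwrap(np.angle(h_neg))
assert np.all(np.diff(ang_pos) > 0) and np.all(np.diff(ang_neg) < 0)
```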

Furthermore, the concept of a "frequency sweep" is introduced as follows: the symbols $\overleftrightarrow{\not\omega}$, $\overrightarrow{\not\omega}$ and $\overleftarrow{\not\omega}$ emphasize the fact that we take into account all values of the variable $\not\omega$ in the continuous sets ${}^{\Leftrightarrow}\not\omega$, ${}^{\Rightarrow}\not\omega$ and ${}^{\Leftarrow}\not\omega$, respectively. Thus we may think of $\overleftrightarrow{\not\omega}$, $\overrightarrow{\not\omega}$ and $\overleftarrow{\not\omega}$ as representing a "sweep" of the frequency value from $-\infty$ to $+\infty$, from $0$ to $+\infty$, and from $0$ to $-\infty$, respectively, in phasor-frequency space. Likewise, the symbol $\overrightarrow{\varpi}$ designates a trigonometric-frequency "sweep" of $\varpi$ from $0$ to $+\infty$ in the set ${}^{\Rightarrow}\varpi$. Frequency-sweep notation will be used, when needed for emphasis, in expressions describing temporal harmonic waves as well as in the function names of FT-spectra. For example, when stressing that we are considering the set of all sinusoid harmonic waves $h_{\varpi}(t)$ covering the entire trigonometric frequency range ${}^{\Rightarrow}\varpi$, where each wave lasts from $-\infty$ to $+\infty$, we write $h_{\overrightarrow{\varpi}}(t)\big|_{{}^{\Leftrightarrow}t} = A(\overrightarrow{\varpi})\cdot\cos[\overrightarrow{\varpi}\,t + \varphi(\overrightarrow{\varpi})]\big|_{{}^{\Leftrightarrow}t}$. Similarly, the set of all perpetual harmonic phasors in the infinite phasor-frequency range ${}^{\Leftrightarrow}\not\omega$ can be written as $h_{\overleftrightarrow{\not\omega}}(t)\big|_{{}^{\Leftrightarrow}t} = \underline{A}(\overleftrightarrow{\not\omega})\cdot\exp(i\cdot\overleftrightarrow{\not\omega}\,t)\big|_{{}^{\Leftrightarrow}t}$. If however we want to focus on a harmonic wave at a given frequency value we write $h_{\varpi_{@}}(t) = A_{@}\cdot\cos(\varpi_{@}t + \varphi_{@})$ or $h_{\not\omega_{@}}(t) = \underline{A}_{@}\cdot\exp(i\cdot\not\omega_{@}t)$.

#### **2.3 FT-contextualized function notation in the time and frequency dimensions**

The ways in which the forward and inverse FT operators ${}^{\rightarrow}\mathrm{FT}$ and ${}^{\leftarrow}\mathrm{FT}$ connect a *t*-D function $f(t)$ and its ω-D counterpart $F(\omega)$ are denoted as $F(\omega) = {}^{\rightarrow}\mathrm{FT}\,f(t)$ and $f(t) = {}^{\leftarrow}\mathrm{FT}\,F(\omega)$, respectively. The so-called Fourier Inversion Theorem (FIT) (see below) states that the FT is an invertible transform: there is a one-to-one correspondence between all temporal and spectral function pairs that we consider here; to reflect this concept the forward and backward FT operator symbols are combined into the symbol ${}^{\leftrightarrow}\mathrm{FT}$, so the link between the two dimensions can be depicted as $f(t) \overset{{}^{\leftrightarrow}\mathrm{FT}}{\longleftrightarrow} F(\omega)$.

*In the time dimension* we generally consider complex-valued temporal functions $f(t)\big|_{\mathfrak{t}}$ on a real timeline, with the operative duration $\mathfrak{t}$ corresponding to the time frame over which the (forward) FT is performed. This means that $f(t)\big|_{\mathfrak{t}} =$ "*something*" for $t \in \mathfrak{t}$ (that "something" may contain values of $f(t) =$ zero besides nonzero values), but $f(t)\big|_{\mathfrak{t}} =$ "*nothing*" (i.e., $f(t)$ is either zero or nonexistent) for all $t \notin \mathfrak{t}$. All $f(t)\big|_{\mathfrak{t}}$ functions that are of practical relevance to us can be Fourier-transformed, i.e., their spectrum exists and $f(t)\big|_{\mathfrak{t}}$ can be recovered exactly

A Reformulative Retouch on the Fourier Transform – "Unprincipled" Uncertainty Principle 219


(for all practical purposes) from its spectrum, and all nonperiodic temporal functions labeled here as $f(t)\big|_{\mathfrak{t}}$ have the extra feature that their function values are zero in the infinite past and in the infinite future - a property needed so that they can be absolutely integrable (the integral of $f(t)\big|_{\mathfrak{t}}$ over all $t$ exists and is finite), which in turn is required for the spectrum of $f(t)$ to exist in a form that contains only finite function values.

*In the frequency dimension*, when thinking of the spectrum $F(\omega)$ as a mathematical object that has been obtained by the forward FT of $f(t)\big|_{\mathfrak{t}}$, it will prove useful to extend the name $F(\omega)$ such that it can remind us of what specific operative temporal interval $f(t)$ existed on. To that end, the temporal function's operative interval $\mathfrak{t}$ will be indicated in the spectrum's name as $F(\omega)\big|_{\mathfrak{t}}$. Although $f(t)\big|_{\mathfrak{t}}$ and $F(\omega)\big|_{\mathfrak{t}}$ are equivalent representations of the same mathematical object, we may regard $f(t)\big|_{\mathfrak{t}}$ as the primary mathematical entity, since the physical events that we are interested in, and which are modelled by $f(t)\big|_{\mathfrak{t}}$, actually "happen" in time. A function $f(t)\big|_{\mathfrak{t}}$ describing an "event" of duration $\mathfrak{t}$ is defined as zero outside of $\mathfrak{t}$, and in that sense we can ignore all $t \notin \mathfrak{t}$ without actually having to "sweep through" the natural domain from $t = -\infty$ to $t = +\infty$. However, most $F(\omega)\big|_{\mathfrak{t}}$ spectra that we will encounter have an infinite operative interval ${}^{\Leftrightarrow}\not\omega$ or ${}^{\Rightarrow}\varpi$, such that the spectrum is continuous and has nonzero function values (allowing for zero values at points where $F({}^{\Leftrightarrow}\not\omega;{}^{\Rightarrow}\varpi)\big|_{\mathfrak{t}}$ may intercept or touch the $\omega$ axis) over its entire domain. Thus, for the mathematical equivalence between $f(t)\big|_{\mathfrak{t}}$ and $F({}^{\Leftrightarrow}\not\omega;{}^{\Rightarrow}\varpi)\big|_{\mathfrak{t}}$ to hold, the forward FT must be performed by accounting for all possible frequencies in the infinite natural domains ${}^{\Leftrightarrow}\not\omega$ or ${}^{\Rightarrow}\varpi$, depending on whether we want to express the spectrum in $\not\omega$-space or $\varpi$-space. To reflect this concept, the notation for the pertinent frequency sweep (as described in the previous section) will be indicated in the spectrum's name. Thus, spectra with an infinite support are written as $F(\overleftrightarrow{\not\omega})\big|_{\mathfrak{t}\,{}^{\Leftrightarrow}\not\omega}$ or $F(\overrightarrow{\varpi})\big|_{\mathfrak{t}\,{}^{\Rightarrow}\varpi}$.
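The claim that a time-limited "event" has a spectrum with infinite frequency support can be illustrated with a rectangular pulse, whose continuous-frequency spectrum is the familiar sinc function. This is only a sketch; the pulse duration, the sampled frequencies and the tolerances below are illustrative assumptions, not values from the text.

```python
import numpy as np

# A hypothetical rectangular "event" of operative duration T (zero outside it),
# sampled finely; all numbers here are illustrative assumptions.
N, T = 4096, 1.0
dt = T / N
t = np.arange(N) * dt
f_t = np.ones(N)                          # f(t) = 1 on the operative interval

# Riemann-sum approximation of F(w) = integral of f(t)·e^{-i·w·t} dt.
def spectrum(w):
    return np.sum(f_t * np.exp(-1j * w * t)) * dt

ws = np.array([0.5, 3.0, 10.0, 40.0])
F = np.array([spectrum(w) for w in ws])

# Analytic spectrum of the pulse: a (phase-shifted) sinc of infinite support.
# np.sinc(x) = sin(pi·x)/(pi·x), so T·sinc(wT/(2·pi)) = 2·sin(wT/2)/w.
F_exact = np.array([T * np.sinc(w * T / (2 * np.pi)) * np.exp(-1j * w * T / 2)
                    for w in ws])
assert np.allclose(F, F_exact, atol=1e-3)

# The spectrum stays nonzero even at high frequencies (infinite support),
# vanishing only at the isolated zeros w = 2·pi·k/T.
assert np.min(np.abs(F)) > 0.01
```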

An $\not\omega$-space spectrum can always be equated with a suitable complex combination of two $\varpi$-space spectra, one obtained through the "cosine transform" and one through the "sine transform" of $f(t)\big|_{\mathfrak{t}}$. Phasor representation usually being more convenient to work with, herein preference will generally be given to the $\not\omega$-space formalism (however, the trigonometric form will also be discussed to some extent, since it gives some interesting added insight into the nature of the FT). In all, the FT relationship will most generally be thought of as:

$$f(t)\Big|_{\mathfrak{t}} \;\overset{{}^{\leftrightarrow}\mathrm{FT}}{\longleftrightarrow}\; F(\overleftrightarrow{\not\omega})\Big|_{\mathfrak{t}\,{}^{\Leftrightarrow}\not\omega} \tag{3}$$

Finally, we will find that some spectra of the type $F(\overleftrightarrow{\not\omega})\big|_{\mathfrak{t}\,{}^{\Leftrightarrow}\not\omega}$ are centred about a specific frequency $\not\omega_{@}$. Although in such cases the value of $\not\omega_{@}$ is contained in the relevant rule, $\mathrm{Rule}_{\not\omega-\not\omega_{@}}$, it will often prove informative to indicate $\not\omega_{@}$ in the function's name as well, which will be done by writing $F(\overleftrightarrow{\not\omega};\not\omega=\not\omega_{@})\big|_{\mathfrak{t}\,{}^{\Leftrightarrow}\not\omega} = \mathrm{Rule}_{\not\omega-\not\omega_{@}}$.
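The one-to-one $f(t) \leftrightarrow F$ correspondence asserted by the FIT can be illustrated on a finite grid, where the DFT stands in for the FT. This is a sketch under stated assumptions: the signal length and the random test signal are arbitrary choices of mine, not the chapter's.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical complex-valued temporal function on a finite operative interval
# (the length 256 and the random values are illustrative assumptions).
f_t = rng.standard_normal(256) + 1j * rng.standard_normal(256)

# Forward transform (here: DFT) and exact recovery by the inverse transform.
F_w = np.fft.fft(f_t)
f_back = np.fft.ifft(F_w)
assert np.allclose(f_t, f_back)

# The correspondence is one-to-one: a perturbed signal gives a different spectrum.
g_t = f_t.copy()
g_t[10] += 1.0
assert not np.allclose(np.fft.fft(g_t), F_w)
```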

### **3. "Enhanced" Fourier-Transform equations**

#### **3.1 Euler's identities**


With the above considerations in mind we can reformulate Euler's identities as follows. For a given frequency we have

$$\underline{A}_{@}\cdot e^{i\cdot\not\omega_{@}^{\pm}t}\Big|_{\mathfrak{t}} \;=\; {}^{\Re}\!A_{@}\cdot e^{i\cdot\not\omega_{@}^{\pm}t}\Big|_{\mathfrak{t}} \;+\; i\cdot{}^{\Im}\!A_{@}\cdot e^{i\cdot\not\omega_{@}^{\pm}t}\Big|_{\mathfrak{t}} \;=\; \underline{A}_{@}\cdot\cos(\varpi_{@}t)\Big|_{\mathfrak{t}} \;\pm\; i\cdot\underline{A}_{@}\cdot\sin(\varpi_{@}t)\Big|_{\mathfrak{t}} \tag{4}$$

$$\underline{A}_{@}\cdot\cos(\varpi_{@}t)\Big|_{\mathfrak{t}} \;=\; 0.5\,\underline{A}_{@}\cdot e^{i\cdot\not\omega_{@}^{+}t}\Big|_{\mathfrak{t}} \;+\; 0.5\,\underline{A}_{@}\cdot e^{-i\cdot\not\omega_{@}^{+}t}\Big|_{\mathfrak{t}} \tag{5}$$

$$\underline{A}_{@}\cdot\sin(\varpi_{@}t)\Big|_{\mathfrak{t}} \;=\; -i\cdot 0.5\,\underline{A}_{@}\cdot e^{i\cdot\not\omega_{@}^{+}t}\Big|_{\mathfrak{t}} \;+\; i\cdot 0.5\,\underline{A}_{@}\cdot e^{-i\cdot\not\omega_{@}^{+}t}\Big|_{\mathfrak{t}} \tag{6}$$
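Identities (4)–(6) are straightforward to verify numerically for a concrete complex amplitude and frequency; the values of $\underline{A}$, $\varpi$ and the time grid below are illustrative assumptions of mine.

```python
import numpy as np

# Illustrative values (assumptions, not from the chapter): a complex
# amplitude A, a trigonometric frequency w, and a time grid t.
A = 1.5 - 0.5j
w = 2.0
t = np.linspace(-1.0, 1.0, 501)

# Eq. (4): A·e^{±i·w·t} = A·cos(w·t) ± i·A·sin(w·t)
for s in (+1, -1):
    assert np.allclose(A * np.exp(s * 1j * w * t),
                       A * np.cos(w * t) + s * 1j * A * np.sin(w * t))

# Eq. (5): A·cos(w·t) = 0.5·A·e^{+i·w·t} + 0.5·A·e^{-i·w·t}
cos_lhs = A * np.cos(w * t)
assert np.allclose(cos_lhs,
                   0.5 * A * np.exp(1j * w * t) + 0.5 * A * np.exp(-1j * w * t))

# Eq. (6): A·sin(w·t) = -i·0.5·A·e^{+i·w·t} + i·0.5·A·e^{-i·w·t}
sin_lhs = A * np.sin(w * t)
assert np.allclose(sin_lhs,
                   -0.5j * A * np.exp(1j * w * t) + 0.5j * A * np.exp(-1j * w * t))
```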

On the other hand, the entire positive, the entire negative, or the complete positive and negative phasor frequency range can be embraced via positive trigonometric frequency notation, which may be represented by writing Equations (4)–(6) as

$$\underline{A}(\overleftrightarrow{\not\omega})\cdot e^{i\cdot\overleftrightarrow{\not\omega}\,t}\Big|_{\mathfrak{t}} \;=\; {}^{\Re}\!A(\overleftrightarrow{\not\omega})\cdot e^{i\cdot\overleftrightarrow{\not\omega}\,t}\Big|_{\mathfrak{t}} \;+\; i\cdot{}^{\Im}\!A(\overleftrightarrow{\not\omega})\cdot e^{i\cdot\overleftrightarrow{\not\omega}\,t}\Big|_{\mathfrak{t}} \;=\; \underline{A}(\overrightarrow{\varpi})\cdot\cos(\overrightarrow{\varpi}\,t)\Big|_{\mathfrak{t}} \;\pm\; i\cdot\underline{A}(\overrightarrow{\varpi})\cdot\sin(\overrightarrow{\varpi}\,t)\Big|_{\mathfrak{t}} \tag{7}$$

$$\underline{A}(\overrightarrow{\varpi})\cdot\cos(\overrightarrow{\varpi}\,t)\Big|_{\mathfrak{t}} \;=\; 0.5\,\underline{A}(\overrightarrow{\not\omega})\cdot e^{i\cdot\overrightarrow{\not\omega}\,t}\Big|_{\mathfrak{t}} \;+\; 0.5\,\underline{A}(\overleftarrow{\not\omega})\cdot e^{i\cdot\overleftarrow{\not\omega}\,t}\Big|_{\mathfrak{t}} \tag{8}$$

$$\underline{A}(\overrightarrow{\varpi})\cdot\sin(\overrightarrow{\varpi}\,t)\Big|_{\mathfrak{t}} \;=\; -i\cdot 0.5\,\underline{A}(\overrightarrow{\not\omega})\cdot e^{i\cdot\overrightarrow{\not\omega}\,t}\Big|_{\mathfrak{t}} \;+\; i\cdot 0.5\,\underline{A}(\overleftarrow{\not\omega})\cdot e^{i\cdot\overleftarrow{\not\omega}\,t}\Big|_{\mathfrak{t}} \tag{9}$$

These equations reflect the concept that we can describe a harmonic phasor's frequency and sign of rotation in both phasor notation and complex trigonometric form, so long as in the latter case we use both the (real) cosine and the (imaginary) sine waves - both of which have algebraically positive $\varpi$ frequencies. There is no contradiction in the fact that we can obtain a negative phasor frequency $-\not\omega$ by combining a real and an imaginary sinusoid, both of which have positive $\varpi$ trigonometric frequencies. Equations (5,6) and (8,9) show that while a given sinusoid has no frequency sign ambiguity (it is always positive) in $\varpi$-space, it exhibits a sign duality when expressed in $\not\omega$-space, i.e., a sinusoid is always equated with the sum of two phasors, one with a positive and one with a negative frequency.

#### **3.2 Interrelationships in temporal and spectral representations**

In our formalism a key stepping stone to the FT involves the consideration of Euler's identities from both a *t*-D and ω-D perspective. Let us take a harmonic phasor $\underline{A}_{@}\cdot\exp(i\cdot\not\omega_{@}^{\pm}t)\big|_{\mathfrak{t}}$, called herein the *principle phasor*, which can be viewed as a vector rotating in time, as shown in temporal representation for a positive phasor $\underline{A}_{@}\cdot\exp(i\cdot\not\omega_{@}^{+}t)\big|_{\mathfrak{t}}$ in Figure 1(a). We can also find it informative to represent our phasor as an ω-D entity $\underline{A}_{@}\big|_{\bullet\not\omega_{@}^{\pm}}$, as shown for $\underline{A}_{@}\cdot\exp(i\cdot\not\omega_{@}^{+}t)\big|_{\mathfrak{t}}$ in Figure 2(A). The temporal and spectral representations emphasize different aspects of the phasor: the temporal form shows its progression in time, from which the numerical value of $\not\omega_{@}^{\pm}$ is not directly accessible, while the spectral representation displays the $\not\omega_{@}^{\pm}$ value in $\not\omega$-space, but offers no direct sense of the phasor's temporal behavior or of its operative interval $\mathfrak{t}$. Considering the principle phasor $\underline{A}_{@}\cdot\exp(i\cdot\not\omega_{@}^{\pm}t)\big|_{\mathfrak{t}}$ in the light of (4)–(6), we can formulate the following scheme:

$$\text{Temporal:}\qquad \underline{A}_{@}\cdot e^{i\cdot\not\omega_{@}^{\pm}t}\Big|_{\mathfrak{t}} \;=\; {}^{\Re}\!A_{@}\cdot e^{i\cdot\not\omega_{@}^{\pm}t}\Big|_{\mathfrak{t}} \;+\; i\cdot{}^{\Im}\!A_{@}\cdot e^{i\cdot\not\omega_{@}^{\pm}t}\Big|_{\mathfrak{t}} \tag{10t}$$

$$\underline{A}_{@}\cdot e^{i\cdot\not\omega_{@}^{\pm}t}\Big|_{\mathfrak{t}} \;=\; \underline{A}_{@}\cdot\cos(\varpi_{@}t)\Big|_{\mathfrak{t}} \;\pm\; i\cdot\underline{A}_{@}\cdot\sin(\varpi_{@}t)\Big|_{\mathfrak{t}} \tag{11t}$$

$$\underline{A}_{@}\cdot e^{i\cdot\not\omega_{@}^{\pm}t}\Big|_{\mathfrak{t}} \;=\; \Big(0.5\,\underline{A}_{@}\cdot e^{i\cdot\not\omega_{@}^{+}t} + 0.5\,\underline{A}_{@}\cdot e^{i\cdot\not\omega_{@}^{-}t}\Big)\Big|_{\mathfrak{t}} \;+\; \Big({\pm}\,0.5\,\underline{A}_{@}\cdot e^{i\cdot\not\omega_{@}^{+}t} \mp 0.5\,\underline{A}_{@}\cdot e^{i\cdot\not\omega_{@}^{-}t}\Big)\Big|_{\mathfrak{t}} \tag{12t}$$

$$\text{Spectral:}\qquad \underline{A}_{@}\Big|_{\bullet\not\omega_{@}^{\pm}} \;=\; {}^{\Re}\!A_{@}\Big|_{\bullet\not\omega_{@}^{\pm}} \;+\; i\cdot{}^{\Im}\!A_{@}\Big|_{\bullet\not\omega_{@}^{\pm}} \tag{10s}$$

$$\underline{A}_{@}\Big|_{\bullet\not\omega_{@}^{\pm}} \;\leftrightarrow\; \underline{A}_{@}\Big|_{\bullet\varpi_{@}} \;\;\&\;\; {\pm}\,i\cdot\underline{A}_{@}\Big|_{\bullet\varpi_{@}} \tag{11s}$$

$$\underline{A}_{@}\Big|_{\bullet\not\omega_{@}^{\pm}} \;=\; \Big(0.5\,\underline{A}_{@}\Big|_{\bullet\not\omega_{@}^{+}} + 0.5\,\underline{A}_{@}\Big|_{\bullet\not\omega_{@}^{-}}\Big) \;+\; \Big({\pm}\,0.5\,\underline{A}_{@}\Big|_{\bullet\not\omega_{@}^{+}} \mp 0.5\,\underline{A}_{@}\Big|_{\bullet\not\omega_{@}^{-}}\Big) \tag{12s}$$

Equation (10t) tells us that we can decompose the principle phasor into a sum of two component phasors, ${}^{\Re}\!A_{@}\cdot\exp(i\cdot\not\omega_{@}^{\pm}t)\big|_{\mathfrak{t}}$ and $i\cdot{}^{\Im}\!A_{@}\cdot\exp(i\cdot\not\omega_{@}^{\pm}t)\big|_{\mathfrak{t}}$, having at $t = 0$ a real and a pure imaginary amplitude, respectively, as shown for a positive phasor in Figure 1(b,c). In spectral form Eq. (10t) becomes (10s), with the components ${}^{\Re}\!A_{@}\big|_{\bullet\not\omega_{@}^{\pm}}$ and $i\cdot{}^{\Im}\!A_{@}\big|_{\bullet\not\omega_{@}^{\pm}}$ [Figure 2(B,C)]. This tells us that if we take the real and imaginary parts of the principle phasor's complex spectrum $\underline{A}_{@}\big|_{\bullet\not\omega_{@}^{\pm}}$ [Figure 2(A)], then the real spectrum ${}^{\Re}\!A_{@}\big|_{\bullet\not\omega_{@}^{\pm}}$ [Figure 2(B)] corresponds to the real-amplitude component ${}^{\Re}\!A_{@}\cdot\exp(i\cdot\not\omega_{@}^{\pm}t)\big|_{\mathfrak{t}}$ [Figure 1(b)], and the imaginary part $i\cdot{}^{\Im}\!A_{@}\big|_{\bullet\not\omega_{@}^{\pm}}$ [Figure 2(C)] corresponds to the pure imaginary-amplitude component $i\cdot{}^{\Im}\!A_{@}\cdot\exp(i\cdot\not\omega_{@}^{\pm}t)\big|_{\mathfrak{t}}$ [Figure 1(c)] of the principle phasor.

Neither the real, nor the imaginary spectrum is normally represented in a 3D complex-amplitude vs. frequency graph as in Figure 2(B,C), but as 2D real-amplitude vs. frequency or imaginary-amplitude vs. frequency graphs [the analogous graphical schemes for a negative principle phasor can be seen in the original work (Szantay, 2007)]. Although in this representation the concepts of "real" and "imaginary" spectrum may seem almost self-evident, one should note that the ordinate value at a given frequency of the real and imaginary spectrum is often erroneously understood as meaning the amplitude of a cosine and sine trigonometric harmonic wave, respectively.

Consider the decomposition of $\underline{A}_{@}\cdot\exp(i\cdot\not\omega_{@}^{\pm}t)\big|_{\mathfrak{t}}$ into a cosine and a sine wave according to Eq. (11t). Because their amplitudes are related as $\underline{A}_{@}$ and $i\cdot\underline{A}_{@}$, the planes of oscillation of the cosine and sine wave are perpendicular to each other in the complex plane, as shown in Figure 1(d,e). In spectral form [Expression (11s)] the respective complex amplitudes are depicted in $\varpi$-space according to Figure 2(D,E). Note that when we translate (11t) into spectral form, (11s) ceases to be an equality. This is because: a) the spectral amplitudes $\underline{A}_{@}\big|_{\bullet\varpi_{@}}$ for the cosine and sine components [Figure 2(D,E)] belong to two different mathematical "species", and therefore they cannot simply be added; b) the spectrum of the principle phasor [Figure 2(A)] is a function in $\not\omega$-space, while the spectra of its component sinusoids [Figure 2(D,E)] are functions in $\varpi$-space. (Again, this is a fundamental difference which can easily escape attention in the absence of a symbolism that differentiates between phasor and trigonometric frequencies.) Note also that we have no analogous problem with the relevant temporal forms of Figure 1(a,d,e), since they have a common argument: time. Thus the amplitudes depicted in Figure 2(D,E) do not add up to those shown in Figure 2(A).
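The sign duality of a sinusoid in $\not\omega$-space - one positive- and one negative-frequency phasor, as in Equations (5) and (8) - shows up directly in a DFT, where the negative-frequency bin appears as bin $N-k$. The grid size and bin index below are illustrative assumptions of mine, not values from the chapter.

```python
import numpy as np

# Illustrative DFT demo (N and k are assumptions): a complex phasor occupies
# one signed-frequency bin, while a cosine splits into two.
N, k = 64, 5
n = np.arange(N)
phasor = np.exp(2j * np.pi * k * n / N)
cosine = np.cos(2 * np.pi * k * n / N)

P = np.fft.fft(phasor)
C = np.fft.fft(cosine)

# The phasor's spectrum has a single peak at bin +k ...
assert np.isclose(P[k], N) and np.isclose(np.abs(P).sum(), N)

# ... while the cosine splits into two half-amplitude peaks at bins +k and
# N-k (the DFT alias of -k), i.e., the sum of two counter-rotating phasors.
assert np.isclose(C[k], N / 2) and np.isclose(C[N - k], N / 2)
assert np.isclose(np.abs(C).sum(), N)
```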


*Temporal:* 
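The decompositions (10t)-(12t) are easy to verify numerically. Below is a minimal sketch (not from the chapter; the names `A`, `w`, `t` are illustrative) checking that a positive principle phasor equals the sum of its real- and imaginary-amplitude components, of its cosine and sine waves, and of its four half-amplitude counter-rotating phasors:

```python
import numpy as np

# Numerical check of decompositions (10t)-(12t) for a positive principle
# phasor A*exp(i*w*t). The names A, w, t are illustrative, not the
# chapter's notation.
A = 2.0 + 1.5j                      # complex amplitude
w = 2 * np.pi * 3.0                 # positive phasor frequency (rad/s)
t = np.linspace(0.0, 1.0, 1000)
phasor = A * np.exp(1j * w * t)

# (10t): real-amplitude plus pure-imaginary-amplitude component phasors
assert np.allclose(phasor, A.real * np.exp(1j * w * t)
                         + 1j * A.imag * np.exp(1j * w * t))

# (11t): a cosine wave of amplitude A plus a sine wave of amplitude i*A
assert np.allclose(phasor, A * np.cos(w * t) + 1j * A * np.sin(w * t))

# (12t): four half-amplitude phasors; the two negative-frequency ones are
# in opposite phase and cancel, the two positive-frequency ones add up
cos_pair = 0.5 * A * np.exp(1j * w * t) + 0.5 * A * np.exp(-1j * w * t)
sin_pair = 0.5 * A * np.exp(1j * w * t) - 0.5 * A * np.exp(-1j * w * t)
assert np.allclose(phasor, cos_pair + sin_pair)
```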

Neither the real nor the imaginary spectrum is normally represented in a 3D complex-amplitude vs. frequency graph as in Figure 2(B,C), but as a 2D real-amplitude vs. frequency or imaginary-amplitude vs. frequency graph [the analogous graphical schemes for a negative principle phasor can be seen in the original work (Szántay, 2007)]. Although in this representation the concepts of "real" and "imaginary" spectrum may seem almost self-evident, one should note that the ordinate value at a given frequency of the real and imaginary spectrum is often erroneously understood as meaning the amplitude of a cosine and sine trigonometric harmonic wave, respectively.

Consider the decomposition of $\underline{A}\cdot\exp(i\breve{\omega}t)\big|_{\pm}$ into a cosine and a sine wave according to Eq. (11t). Because their amplitudes are related as $\underline{A}$ and $i\cdot\underline{A}$, the planes of oscillation of the cosine and sine wave are perpendicular to each other in the complex plane, as shown in Figure 1(d,e). In spectral form [Expression (11s)] the respective complex amplitudes are depicted in ϖ-space according to Figure 2(D,E). Note that when we translate (11t) into spectral form, (11s) ceases to be an equality. This is because: a) the spectral amplitudes $\underline{A}(\varpi)$ for the cosine and sine components [Figure 2(D,E)] belong to two different mathematical "species", and therefore cannot simply be added; b) the spectrum of the principle phasor [Figure 2(A)] is a function in $\breve{\omega}$-space, while the spectra of its component sinusoids [Figures 2(D,E)] are functions in ϖ-space. (Again, this is a fundamental difference which can easily escape attention in the absence of a symbolism that differentiates between phasor and trigonometric frequencies.) Note also that we have no analogous problem with the relevant temporal forms of Figures 1(a,d,e), since they have a common argument: time. Thus the amplitudes depicted in Figure 2(D,E) do not add up to those shown in Figure 2(A).

A Reformulative Retouch on the Fourier Transform – "Unprincipled" Uncertainty Principle 223

**Temporal representation**

Fig. 1. Graphical representation of Equations (10t)-(12t) for a positive principle phasor. For added clarity the side-view projections of the principle phasor onto the real and imaginary axis are shown in (a')

**Spectral representation**

Fig. 2. The spectral representational analogue of Figure 1 according to Equations (10s)-(12s)



In order to bring $\underline{A}(\breve{\omega})\big|_{\pm}$, $\underline{A}(\varpi)$, and $i\cdot\underline{A}(\varpi)$ onto a "common ground" we need to translate the ϖ-space spectra into $\breve{\omega}$-space. In the temporal domain this means that the principle phasor's cosine component [Figure 1(d)] and sine component [Figure 1(e)] are decomposed into a sum of oppositely rotating phasors [Figure 1(f,h)] according to (12t). The decomposition of the cosine term reflects Equations (5), while that of the sine term corresponds to (6) multiplied by "*i*". If we now add up the four phasors in (12t) we regain our principle phasor $\underline{A}\cdot\exp(i\breve{\omega}t)\big|_{\pm}$ [Figure 1(a)]: the two negative phasor-frequency components have identical amplitudes but are in opposite phase [Figure 1(h,i)] and hence cancel each other at every point in time, while the two positive phasor-frequency components, both having an amplitude $0.5\,\underline{A}$, are in phase [Figure 1(f,g)] and thus reinforce each other to give the original principle phasor.

In spectral notation the relationships of (12t) are expressed according to (12s), and the pertinent $\breve{\omega}$-space spectra are shown in Figure 2(F,G,H,I). In analogy to the correlation between the left and right sides of (11s), there is a correlation, but not an equality, between the pertinent terms on the right side of (11s) and (12s).

By adding up the four amplitudes in (12s) we obtain the principle phasor's spectrum $\underline{A}(\breve{\omega})\big|_{\pm}$ [Figure 2(A)]. We may think of these "secondary-phasor" spectra [Figure 2(F,G,H,I)] as "interfaces" that allow us to connect the principle phasor's $\breve{\omega}$-space spectrum [Figure 2(A)] with its ϖ-space spectra [Figure 2(D,E)].

From a broader viewpoint, the whole basis of the FT rests on the idea that a temporal function can be constructed from suitable harmonic "principle phasors". Any given point on the complex FT-spectrum will represent a constituent harmonic principle phasor. It is according to the above scheme that we can conceptually envisage the way the spectrum of a temporal function (or signal) is obtained.

### **3.3 The Fourier Transform**

The FT is based on the theorem that $\underline{f}(t)$ can be uniquely expressed in terms of its projections onto an infinite frequency set of basis phasors $\underline{h}_{\breve{\omega}}(t) = \underline{A}(\breve{\omega})\cdot\exp(i\breve{\omega}t)$. In general, the infinite set of unit phasors $\exp(i\breve{\omega}t)$ constitutes a unitary orthogonal basis set for all Fourier-transformable $\underline{f}(t)$ functions, which can therefore be represented as a weighted sum of the unit harmonic waves. The spectrum $\underline{F}(\breve{\omega})$ specifies the weighting coefficient for the unit basis phasors at all $\breve{\omega}$ frequencies. Using both trigonometric and phasor notation [cf. Equations (7)–(9)], the essence of the FT can conceptually be written as:



$$\underline{f}(t) = \int_{0}^{\infty} \underline{h}_{\varpi}(t)\cdot d\varpi = \int_{-\infty}^{\infty} \underline{h}_{\breve{\omega}}(t)\cdot d\breve{\omega} \tag{13}$$

Employing the nomenclature and concepts outlined above, the Fourier integrals can be expressed as follows:

$$f(t) = \frac{1}{\pi}\int_{0}^{\infty}\underline{F}^{\cos}(\varpi)\cdot\cos(\varpi t)\cdot d\varpi + \frac{1}{\pi}\int_{0}^{\infty}\underline{F}^{\sin}(\varpi)\cdot\sin(\varpi t)\cdot d\varpi \tag{14a}$$

$$\underline{f}(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\underline{F}(\breve{\omega})\cdot e^{i\breve{\omega}t}\cdot d\breve{\omega} \tag{14b}$$

$$\underline{F}^{\cos}(\varpi) = \int_{-\infty}^{\infty}\underline{f}(t)\cdot\cos(\varpi t)\cdot dt = \overrightarrow{\mathrm{FT}}^{(\cos)}\big\{\underline{f}(t)\big\} \tag{15a-cos}$$

$$\underline{F}^{\sin}(\varpi) = \int_{-\infty}^{\infty}\underline{f}(t)\cdot\sin(\varpi t)\cdot dt = \overrightarrow{\mathrm{FT}}^{(\sin)}\big\{\underline{f}(t)\big\} \tag{15a-sin}$$

$$\underline{F}(\breve{\omega}) = \int_{-\infty}^{\infty}\underline{f}(t)\cdot e^{-i\breve{\omega}t}\cdot dt = \overrightarrow{\mathrm{FT}}\big\{\underline{f}(t)\big\} \tag{15b}$$

Equations (15) and (14) express the forward and inverse FT, respectively. In the spectra $\underline{F}^{\cos}(\varpi)$, $\underline{F}^{\sin}(\varpi)$ and $\underline{F}(\breve{\omega})$ each $\breve{\omega}$ or ϖ value corresponds to a pure harmonic wave, and the unit basis functions $\exp(i\breve{\omega}t)$ are declared on an infinite time scale, which means that irrespective of what operative temporal interval $\underline{f}(t)$ "exists" upon, it can be decomposed into an infinite set of eternal harmonic waves. The eternity of the unitary orthogonal basis set $\exp(i\breve{\omega}t)$ is a generally not fully appreciated, but fundamentally important aspect of the FT, easily escaping attention in the standard mathematical symbolism. For example, in the conventional notation Equation (14b) is simply written as $f(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}F(\omega)\cdot\exp(i\omega t)\cdot d\omega$, and this expression offers little in the way

of showing us that even if $f(t)$ is finite in time, the basis phasors $e^{i\omega t}$ are perpetual. Further insight may be gained into the FT by noting that the right-hand side of Equation (14b) represents an infinite collection of eternal "principle phasors" on the phasor frequency scale. Thus, in analogy to Equations (10)-(12) we can formulate the following scheme:
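The forward/inverse pair (15b)/(14b) has a familiar discrete analogue. A short sketch (using numpy's FFT conventions rather than the chapter's notation and normalization) shows the spectrum as a set of projections onto harmonic basis phasors, from which the signal is exactly re-synthesized:

```python
import numpy as np

# Discrete analogue of the forward/inverse pair (15b)/(14b): the DFT takes
# projections of a finite record onto harmonic basis phasors, and the
# inverse DFT re-synthesizes the record as their weighted sum.
# (numpy's normalization, not the 1/(2*pi) of Eq. (14b).)
rng = np.random.default_rng(0)
N = 256
f_t = rng.standard_normal(N) + 1j * rng.standard_normal(N)

F_w = np.fft.fft(f_t)           # forward: one projection per basis phasor
f_back = np.fft.ifft(F_w)       # inverse: weighted sum of basis phasors
assert np.allclose(f_t, f_back)

# A single spectral value is literally the inner product of the signal
# with one (discretized) phasor exp(-i*w_k*t)
k, n = 7, np.arange(N)
assert np.allclose(F_w[k], np.sum(f_t * np.exp(-2j * np.pi * k * n / N)))
```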



*Temporal:*

$$\underline{f}(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\underline{F}(\breve{\omega})\cdot e^{i\breve{\omega}t}\cdot d\breve{\omega} = \left(\frac{1}{2\pi}\int_{-\infty}^{\infty}\Re\underline{F}(\breve{\omega})\cdot e^{i\breve{\omega}t}\cdot d\breve{\omega}\right) + i\cdot\left(\frac{1}{2\pi}\int_{-\infty}^{\infty}\Im\underline{F}(\breve{\omega})\cdot e^{i\breve{\omega}t}\cdot d\breve{\omega}\right) \tag{16t}$$

$$\underline{f}(t) = \frac{1}{\pi}\int_{0}^{\infty}\underline{F}^{\cos}(\varpi)\cdot\cos(\varpi t)\cdot d\varpi + \frac{1}{\pi}\int_{0}^{\infty}\underline{F}^{\sin}(\varpi)\cdot\sin(\varpi t)\cdot d\varpi \tag{17t}$$

$$\underline{f}(t) = \left\{\frac{1}{2\pi}\int_{0}^{\infty}\underline{F}^{\cos}(\breve{\omega})\cdot e^{i\breve{\omega}t}\cdot d\breve{\omega} + \frac{1}{2\pi}\int_{-\infty}^{0}\underline{F}^{\cos}(\breve{\omega})\cdot e^{i\breve{\omega}t}\cdot d\breve{\omega}\right\} + \left\{\frac{1}{2\pi}\int_{0}^{\infty}\underline{F}^{\sin}(\breve{\omega})\cdot e^{i\breve{\omega}t}\cdot d\breve{\omega} + \frac{1}{2\pi}\int_{-\infty}^{0}\underline{F}^{\sin}(\breve{\omega})\cdot e^{i\breve{\omega}t}\cdot d\breve{\omega}\right\} \tag{18t}$$

*Spectral:*

$$\underline{F}(\breve{\omega}) = \Re\underline{F}(\breve{\omega}) + i\cdot\Im\underline{F}(\breve{\omega}) \tag{16s}$$

$$\underline{F}(\breve{\omega}) \;\leftrightarrow\; \underline{F}^{\cos}(\varpi)\;\&\;\underline{F}^{\sin}(\varpi) \tag{17s}$$

$$\underline{F}(\breve{\omega}) = \left\{\underline{F}^{\cos}(\breve{\omega})\Big|_{+} + \underline{F}^{\cos}(\breve{\omega})\Big|_{-}\right\} + \left\{\underline{F}^{\sin}(\breve{\omega})\Big|_{+} + \underline{F}^{\sin}(\breve{\omega})\Big|_{-}\right\} \tag{18s}$$

From Equations (4) and (5) we can express the forward FT for the phasor-space spectral form also as:

$$\underline{F}(\breve{\omega}) = \int_{-\infty}^{\infty}\underline{f}(t)\cdot e^{-i\breve{\omega}t}\cdot dt = \left(\int_{-\infty}^{\infty}\Re\underline{f}(t)\cdot e^{-i\breve{\omega}t}\cdot dt\right) + \left(\int_{-\infty}^{\infty}i\cdot\Im\underline{f}(t)\cdot e^{-i\breve{\omega}t}\cdot dt\right) \tag{19s}$$



$$\underline{F}(\breve{\omega}) = \underbrace{\int_{-\infty}^{\infty}\underline{f}(t)\cdot\cos(\varpi t)\cdot dt}_{\underline{F}^{\cos}(\varpi)} \;\mp\; i\cdot\underbrace{\int_{-\infty}^{\infty}\underline{f}(t)\cdot\sin(\varpi t)\cdot dt}_{\underline{F}^{\sin}(\varpi)} \tag{20s}$$

$$\underline{F}(\breve{\omega}) = \left\{0.5\,\underline{F}^{\cos}(\varpi)\Big|_{+} + 0.5\,\underline{F}^{\cos}(\varpi)\Big|_{-}\right\} + \left\{\mp 0.5\,i\cdot\underline{F}^{\sin}(\varpi)\Big|_{+} \pm 0.5\,i\cdot\underline{F}^{\sin}(\varpi)\Big|_{-}\right\} \tag{21s}$$

In Equations (16)–(21) the relationships between the various ϖ-space and $\breve{\omega}$-space spectral forms are as follows:

$$\left.\begin{aligned}
\underline{F}(\breve{\omega}) &= \Re\underline{F}(\breve{\omega}) + i\cdot\Im\underline{F}(\breve{\omega})\,;\qquad \underline{F}(\breve{\omega}) = \underline{F}^{\cos}(\breve{\omega}) + \underline{F}^{\sin}(\breve{\omega})\\
\underline{F}^{\cos}(\varpi) &= 0.5\,\underline{F}(\breve{\omega})\Big|_{+} + 0.5\,\underline{F}(\breve{\omega})\Big|_{-}\\
i\cdot\underline{F}^{\sin}(\varpi) &= 0.5\,\underline{F}(\breve{\omega})\Big|_{-} - 0.5\,\underline{F}(\breve{\omega})\Big|_{+}
\end{aligned}\right\} \tag{22}$$

From this formalism one can draw a number of subtle insights into the FT, as was detailed in the original papers, which also included a discussion on a geometrical interpretation of the FT and on its capability to unravel the oscillating frequency components that are otherwise obscured in a complicated time signal (Szántay, 2007, 2008a, 2008b, 2008c). These aspects will not be elaborated here. Suffice it to note only that if a signal, represented by the function $\underline{f}(t)$, comprises harmonic phasor components (such is the case with the NMR "free induction decay", except that it contains exponentially damped harmonic phasors), then the signal must be detected from two orthogonal directions simultaneously to be able to tell the sense of rotation of the pertinent phasors. Letting these directions correspond to the $+\Re$ and $+\Im$ axes, we obtain the functions $f^{\Re}(t)$ and $i\cdot f^{\Im}(t)$ [cf. Equation (20t)] as the "side projections" of $\underline{f}(t)$, which are analogous to the principle phasor's real and imaginary "side-view" components shown in Figure 1(a'). Accordingly, in general $f^{\Re}(t)$ and $i\cdot f^{\Im}(t)$ both comprise a mixture of cosine and sine waves; thus when we speak of the "complex forward" FT of $\underline{f}(t)$, this is actually accomplished through the FT of $f^{\Re}(t)$ combined with the FT of $i\cdot f^{\Im}(t)$. The relationship between the right-hand sides of (20s) and (21s), or (17s) and (18s), reflects the way in which the ϖ-space outputs of the cosine and sine transforms $\overrightarrow{\mathrm{FT}}^{(\cos)}$ and $\overrightarrow{\mathrm{FT}}^{(\sin)}$ assume a sign duality in $\breve{\omega}$-space. The addition of the "secondary" $\breve{\omega}$-space spectra $\underline{F}^{\cos}(\breve{\omega})\big|_{+}$, $\underline{F}^{\cos}(\breve{\omega})\big|_{-}$, $\underline{F}^{\sin}(\breve{\omega})\big|_{+}$ and $\underline{F}^{\sin}(\breve{\omega})\big|_{-}$ gives the final FT-spectrum $\underline{F}(\breve{\omega})$, i.e., only the $\overrightarrow{\mathrm{FT}}^{(\cos)}$ and $\overrightarrow{\mathrm{FT}}^{(\sin)}$ transforms together can properly identify the sign of the phasor frequency components within $\underline{f}(t)$.

Fig. 3. Some special FT relationships
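The need for quadrature (two-channel) detection can be illustrated numerically: a single real "side projection" yields a spectrum with indistinguishable mirror peaks at ±f, while the full complex record pins down the sense of rotation. A sketch with illustrative parameters, not from the chapter:

```python
import numpy as np

# Why quadrature detection is needed to tell the sense of rotation: a
# complex record shows a single line at +f0, while its real "side
# projection" alone gives equal mirror lines at +f0 and -f0.
fs, N = 100.0, 500
t = np.arange(N) / fs
f0 = 10.0                                   # exactly on a DFT bin
phasor = np.exp(2j * np.pi * f0 * t)        # positive sense of rotation

freqs = np.fft.fftfreq(N, 1 / fs)

spec_complex = np.abs(np.fft.fft(phasor))   # two-channel (quadrature) data
assert np.isclose(freqs[np.argmax(spec_complex)], f0)

spec_real = np.abs(np.fft.fft(phasor.real)) # one-channel data only
i_pos = np.argmin(np.abs(freqs - f0))
i_neg = np.argmin(np.abs(freqs + f0))
assert np.isclose(spec_real[i_pos], spec_real[i_neg])
```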

A few basic FT relationships, represented graphically as well as in terms of function rules written in the style discussed above are shown in Figure 3 (all of these formulas are of course available in conventional notation from standard textbooks).
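One such relationship can be checked in discrete form: a cosine of amplitude $a$ appears in phasor space as two half-amplitude lines at $\pm f_0$, echoing the $0.5\,\underline{A}$ terms of (12s). The parameters below are illustrative and chosen so that $f_0$ falls exactly on a DFT bin:

```python
import numpy as np

# A cosine of amplitude a maps to two half-amplitude spectral lines at
# +f0 and -f0 in phasor space (cf. the 0.5*A components of Eq. (12s)).
N, fs = 400, 200.0
t = np.arange(N) / fs
a, f0 = 3.0, 25.0                       # 25 Hz is exactly bin 50
x = a * np.cos(2 * np.pi * f0 * t)

X = np.fft.fft(x) / N                   # amplitude-normalized spectrum
freqs = np.fft.fftfreq(N, 1 / fs)
k_pos = np.argmin(np.abs(freqs - f0))
k_neg = np.argmin(np.abs(freqs + f0))

assert np.isclose(X[k_pos].real, 0.5 * a)   # half amplitude at +f0
assert np.isclose(X[k_neg].real, 0.5 * a)   # half amplitude at -f0
```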

### **4. Uncertainty principle(s)**

### **4.1 The Heisenberg and Fourier Uncertainty Principles**

With reference to the issue of whether or not the frequency of a time-limited sinusoidal wave becomes inherently "fuzzy" due to the Uncertainty Principle (as was discussed in the Introduction), let us now take a closer look at the meaning of the term "Uncertainty Principle". One mental trap here is that for most people the phrase "Uncertainty Principle" invokes a single famous theorem, while there are actually several uncertainty principles that have come to constitute independent branches of mathematics, physics and signal analysis theories. In the context of our present discussion we can speak of two main types of uncertainty principle: Heisenberg's Uncertainty Principle (HUP), which is a quantum-mechanical concept grounded in the physical world, and the Fourier Uncertainty Principle (FUP), which is a classical concept and stems from a mathematical necessity inherent to the way the FT works.

Within our framework of discussion the HUP can be written for the noncommuting variables time and frequency through the expression:

$$\Delta t \cdot \Delta_{\mathrm{sd}}\breve{\omega} \ge \mathrm{constant} \tag{23}$$

where the symbol $\Delta_{\mathrm{sd}}$ denotes the standard deviation of frequency.

Fig. 3. Some special FT relationships

A Reformulative Retouch on the Fourier Transform – "Unprincipled" Uncertainty Principle 231

In the classical realm of time-frequency signal analysis there exists another fundamental inequality, commonly referred to as the Fourier Uncertainty Principle (FUP). In qualitative terms the FUP states that the "broader" *f*(*t*) is in time, the "narrower" *F*<sup>I</sup>(ω) is in frequency, and vice versa. Since there is no universally applicable mathematical or semantic definition that would adequately quantify or delineate the concept of a function's "breadth" (Cohen, 1995 and Claerbout, 2004), we adopt from Claerbout the generic symbol "∧", herein called "*characteristic span*", connoting the "compressedness" or "elongatedness" of a function along its domain, the idea being that "∧" symbolizes any mathematical definition, including Δsd, that may be suitably applied to the specific function at hand. Using the symbol "¬" to denote "of", so that "∧x¬*f*(*x*)" means the characteristic span of *f*(*x*), the FUP can thus be expressed as

$$
\wedge t \neg f(t) \cdot \wedge \omega \neg F^{\mathrm{I}}(\omega) = \mathrm{TBP} \ge \Xi \tag{24a}
$$

$$
\wedge t \cdot \wedge \omega = \mathrm{TBP} \ge \Xi \tag{24b}
$$

where TBP is the so-called "time-bandwidth product", Ξ is the minimum value of the TBP, and (24b) is just a short-hand for (24a). The actual values of TBP and Ξ are specific to the exact way in which ∧t¬*f*(*t*) and ∧ω¬*F*<sup>I</sup>(ω) have been defined for the particular *f*(*t*) and *F*<sup>I</sup>(ω) functions at hand. The FUP is an equality in the sense that for any given absolutely integrable function *f*(*t*) the TBP is constant, therefore increasing ∧t¬*f*(*t*) entails a concurrent decrease in ∧ω¬*F*<sup>I</sup>(ω) and vice versa; the FUP is however an inequality in the sense that the TBP must be greater than the value of Ξ. The FUP asserts nothing more than: a) a *t*-D function can only be "squeezed" at the expense of expanding its ω-D counterpart and vice versa; b) *f*(*t*) and *F*<sup>I</sup>(ω) cannot both be made arbitrarily "narrow".
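The behaviour of the TBP can be illustrated numerically with Gaussian pulses, which attain the minimum Ξ = 1/2 when the characteristic spans are taken as standard deviations of the energy densities |*f*|² and |*F*|². The grid sizes and σ values below are illustrative assumptions, not values from the text:

```python
import numpy as np

def span_sd(x, density):
    """Standard-deviation 'characteristic span' of a sampled density."""
    p = density / density.sum()
    mean = (x * p).sum()
    return np.sqrt(((x - mean) ** 2 * p).sum())

def gaussian_tbp(sigma):
    """TBP = span(t, |f|^2) * span(w, |F|^2) for a Gaussian pulse."""
    t = np.linspace(-50.0, 50.0, 2 ** 14)
    dt = t[1] - t[0]
    f = np.exp(-t ** 2 / (2.0 * sigma ** 2))
    w = 2.0 * np.pi * np.fft.fftfreq(t.size, d=dt)
    return span_sd(t, f ** 2) * span_sd(w, np.abs(np.fft.fft(f)) ** 2)

# Squeezing the pulse in time (smaller sigma) broadens its spectrum by
# exactly the compensating factor: the TBP stays at the Gaussian minimum
# of 1/2 under the standard-deviation definition of the spans.
for sigma in (0.5, 1.0, 2.0):
    assert abs(gaussian_tbp(sigma) - 0.5) < 1e-3
```

This is a sketch of one particular choice of "∧"; other definitions of the span give a different constant Ξ, as the text notes.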

The FUP and HUP share a common mathematical form and terminology, which is why, as was pointed out by Cohen (Cohen, 1995), they are easily mixed up, especially since both are often written simply in the form "Δ*t* ⋅ Δω ≥ constant". However, the physical interpretations of (23) and (24) are different: in the HUP the word "uncertainty" reflects probabilistic uncertainties in the measurement of quantum-mechanical physical observables, while the FUP is a deterministic theorem resulting from the innate nature of the FT, as was illustrated geometrically earlier (Szántay, 2008c).

The notion that by time-limiting a monochromatic sine wave we impose some "uncertainty" on its frequency stems either from confusing the HUP and the FUP, or from misunderstanding the FUP itself. With regard to a deterministic sine wave (such as the NMR pulse), it is obviously the FUP that is of relevance and not the HUP. As for the FUP, one reason it is often misinterpreted is that "uncertainty" is a dubious word with several possible interpretations in various situations, as was discussed in detail before (Szántay, 2008c). The concept of the characteristic span ∧x¬*f*(*x*) simply reflects "broadness", which is not a probabilistic quantity per se; just because a function has a large characteristic span, it is not "uncertain". In that sense, referring to (24) as an "Uncertainty Principle" is a misnomer, as was also emphasized by Cohen (Cohen, 1995).

### **4.2 The law of "Conjugate Physical Equivalence"**



The Fourier Inversion Theorem (FIT) symbolized by (3) tells us that *f*(*t*) and *F*<sup>I</sup>(ω), as functions of the conjugate variables time and frequency, represent the same mathematical object expressed in two different "languages", temporal and spectral. The mathematical truth of the FIT implies that if *f*(*t*) is a model of some physical process, then whatever physical meaning we assign to *f*(*t*), we must assign the same physical meaning and information content to *F*<sup>I</sup>(ω). For the sake of emphasis I will herein refer to this concept as the "Law of Conjugate Physical Equivalence" (LCPE).

Although, when phrased in this way, the LCPE may seem rather self-evident, it is actually all too often overlooked when it comes to the physical interpretation of the mathematics of the FIT. The reason for this stems from the innate "linguistic" difference between *f*(*t*) and *F*<sup>I</sup>(ω). The temporal form is intuitively more familiar since it reflects, and is more congruent with, our natural perception of physical events in the way they actually "happen" in time. The spectral "language" however is an artificial construct which is non-intuitive in the sense that when *f*(*t*) is transposed according to (13) onto frequency space, we lose direct information on its temporal progression; furthermore, the spectral form is highly abstract in that its "alphabet" consists of an infinite number of eternally sounding "vowels" in the form of the scaled perpetual basis harmonic waves *F*<sup>I</sup>(ω)·exp(*i*ω*t*). It is the entirety of these spectral "letters" that describes *f*(*t*) in the frequency dimension (in other words the concerted sound, i.e. the superposition of the vowels, returns *f*(*t*) in its exact shape and at its precise time location; however, the vowels cancel each other exactly to produce utter "silence" for all *t* outside the signal's operative interval). This kind of abstraction being inherent to *F*<sup>I</sup>(ω) is something that is often not fully appreciated, and the fact that the FT involves transposition between the physical variables time and frequency invites a strong reflexive (and often unjustified) association of its mathematics with physical properties. In particular, since the "letters" of the spectrum are formed of harmonic waves, one may be inclined to think (in violation of the LCPE) that each of the (abstract) letters exp(*i*ω*t*) represents a physically existing oscillation perceived as being somehow inherent in the temporal function but "revealed" by the FT.
However, the spectral letters happen to be harmonic waves simply as a consequence of the mathematics of the FT. We can envisage the basis set exp(*i*ω*t*) on which the forward FT transposes *f*(*t*) as an abstract "ocean" of harmonic waves stretching infinitely in both the frequency and time dimensions. The forward FT projects the properties of *f*(*t*) onto this "ocean" by adjusting the amplitudes and phases of the waves according to the "coding instructions" of the FT, and the backward FT "decodes" this information to reconstruct *f*(*t*). However, exp(*i*ω*t*) is just (a rather convenient and useful) mathematical vehicle whose waves act as carriers of the FT codes but do not *ab ovo* represent physical oscillations (in principle it would be possible to "translate" *f*(*t*) into an infinite number of other mathematical "languages" whose letters would be based on functional forms other than exp(*i*ω*t*), in which case the letters would less readily suggest some association with a physical meaning).

Fig. 4. Simple time-windowed monochromatic temporal waves and their ω-space spectra (for simplicity, only the positive phasor-frequency side of the spectra is shown). Each wave's frequency has the same value, ϖ@, but their duration, time location, and initial phase are different. The value of ϖ@ "codes into" the frequency location of the center of the sinc-shaped spectra, therefore this location is the same in each spectrum. Differences in the spectral profiles reflect differences in the waves' other features, because those are coded exactly by the FT into the (complex) FT-spectrum. Spectral broadening is due to the fact that the FT-spectrum carries all that coding information, and not to any "uncertainty" in ϖ@.

To better understand this problem it can be useful to think of *f*(*t*) simply as a curve, detached initially from any physical meaning, which is "morphed" by the FT into the "oceanic profile" of *F*<sup>I</sup>(ω). There are certain features of the temporal curve that we, for convenience, can call by specific names (such as 'oscillation frequency', 'phase', 'amplitude' in the case of, say, a sinusoid). If we render specific physical properties through these features to our temporal signal, then we must assign the same physical properties to the spectral representation according to the LCPE, since the FT does not "know" anything about the physics represented by those curves (thus it cannot "create" or "reveal" e.g. any frequency "uncertainty" in the frequency domain that was not there in the time domain). Whatever meaning we therefore attribute to *F*<sup>I</sup>(ω), that meaning emerges through a process of its "a posteriori" physical interpretation, and this must be in keeping with our physical interpretation of the relevant temporal "curve".

Consider e.g. the simple monochromatic real-valued eternal cosine wave *h*(*t*; ϖ@) = cos(ϖ@*t*) and its spectrum *H*<sup>I</sup>(ω; ϖ@) [Figure 3(d)]. In the eternal wave the frequency ϖ@ is known exactly by definition, and *H*<sup>I</sup>(ω; ϖ@) is localized to ±ϖ@, representing the limit where the temporal span ∧t¬*h* = ∞ and so the spectral span ∧ω¬*H*<sup>I</sup> = 0 in (24). Take, next, the time-windowed harmonic waves *h*(*t*; ϖ@) restricted to a finite operative interval, and their spectra *H*<sup>I</sup>(ω; ϖ@) (Figure 4).
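Figure 4's point can be reproduced numerically: windowing a harmonic phasor to ever shorter durations broadens its sinc-shaped magnitude spectrum, yet the spectral center stays exactly at the wave's frequency. The sampling rate, frequency and durations below are illustrative choices, not values from the text:

```python
import numpy as np

fs = 1000.0                # sampling rate, Hz (illustrative)
f0 = 50.0                  # the wave's exact monochromatic frequency, Hz
nfft = 2 ** 16             # common zero-padded frequency grid

def spectrum(duration):
    """Magnitude spectrum of a harmonic phasor windowed to `duration` s."""
    t = np.arange(int(duration * fs)) / fs
    wave = np.exp(2j * np.pi * f0 * t)       # time-windowed phasor
    freqs = np.fft.fftfreq(nfft, d=1 / fs)
    return freqs, np.abs(np.fft.fft(wave, n=nfft))

for duration in (0.05, 0.2, 1.0):
    freqs, mag = spectrum(duration)
    # The sinc-shaped profile broadens as the window shrinks, but its
    # CENTER stays at the exact value f0 throughout.
    assert abs(freqs[np.argmax(mag)] - f0) < 0.01

# Width at half maximum grows as the window shortens: the "window
# aspect" is coded into the profile, not into the peak position.
widths = [np.count_nonzero(mag > mag.max() / 2)
          for _, mag in (spectrum(d) for d in (0.05, 0.2, 1.0))]
assert widths[0] > widths[1] > widths[2]
```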

Again, in ϖ •−• t )( @ *th* the frequency ϖ*@* has an exactly known value. Since now ¬∧ <sup>ϖ</sup> •−• t )( @ *tht* is finite, the FUP requires that (ω ) <sup>ω</sup>@ <sup>⇔</sup> •−• / / <sup>∧</sup> / ¬ω / t ω ; <sup>I</sup> *<sup>H</sup>* must also broaden to some finite value. However, just because we have time-windowed ϖ <sup>⇔</sup> t )( @ *th* its frequency ϖ*@* has not become "uncertain". The fact that (ω ) <sup>ω</sup>@ <sup>⇔</sup> •−• / / <sup>∧</sup> / ¬ω / t ω ; <sup>I</sup> *<sup>H</sup>* > 0 is not a reflection of any "uncertainty" in ϖ*@* but a mathematical necessity ensuing from the fact that all information that specifies the temporal function or signal must be uniquely coded into the spectrum. One way of thinking about this is that while for <sup>ϖ</sup> <sup>⇔</sup> t)( @ *th* there was no need for the spectrum to code for the 232 Fourier Transform – Signal Processing

onto this "ocean" by adjusting the amplitudes and phases of the waves according to the "coding instructions" of the FT, and the backward FT "decodes" this information to

mathematical vehicle whose waves act as carriers of the FT codes but do not *ab ovo* represent

number of other mathematical "languages" whose letters would be based on functional

detached initially from any physical meaning which is "morphed" by the FT into the

convenience, can call by specific names (such as 'oscillation frequency', 'phase', 'amplitude', in the case of, say, a sinusoid). If we render specific physical properties through these features to our temporal signal, then we must assign the same physical properties to the spectral representation according to the LCPE since the FT does not "know" anything about the physics represented by those curves (thus it cannot "create" or "reveal" e.g. any frequency "uncertainty" in the frequency domain that was not there in the time domain).

> t

process of its "a posteriori" physical interpretation, and this must be in keeping with our

t

Consider e.g. the simple monochromatic real-valued eternal cosine wave ϖ <sup>⇔</sup>

/ / /

/ / <sup>∧</sup> / ¬ω / t

t

;

ω(<sup>ω</sup> ) ;ω@

/ / <sup>∧</sup> / ¬ω / t

;

ω

ϖ*@* but a mathematical necessity ensuing from the fact that all information that specifies the temporal function or signal must be uniquely coded into the spectrum. One way of thinking

<sup>I</sup> *<sup>H</sup>* (Figure 4).

ω/ ω/)(

<sup>I</sup> *<sup>H</sup>* is localized to ±ω/ *@* , representing the limit where ¬∧ <sup>ϖ</sup> <sup>⇔</sup>

<sup>I</sup> *<sup>H</sup>* = 0 in (24). Take, next, the time-windowed harmonic waves

t

)( @ *th* the frequency ϖ*@* has an exactly known value. Since now ¬∧ <sup>ϖ</sup> •−•

ω

*ti* )exp( , in which case the letters would less readily suggest some

<sup>I</sup> *<sup>F</sup>* . There are certain features of the temporal curve that we, for

*ti* )exp( <sup>I</sup> is just (a rather convenient and useful)

t

t

<sup>I</sup> *<sup>F</sup>* , that meaning emerges through a

)( @ *th* the wave's frequency ϖ*@* is known exactly

<sup>I</sup> *<sup>H</sup>* must also broaden to some finite value.

<sup>I</sup> *<sup>H</sup>* > 0 is not a reflection of any "uncertainty" in

)( @ *th* there was no need for the spectrum to code for the

)( @ *th* its frequency ϖ*@* has not become

*tf* )( into an infinite

*tf* )( simply as a curve

t

)( @ *th* and its

t

t

)( @ *tht* =

)( @ *tht*

t

reconstruct

t

forms other than ⋅ω/ <sup>⇔</sup>

"oceanic profile" of

spectrum

∞ and so

ϖ •−• t

by definition and

Again, in ϖ •−•

association with a physical meaning).

t

Whatever meaning we therefore attribute to

@ <sup>⇔</sup> )( <sup>↑</sup> / ω/ t

)( @ *th* and their spectra <sup>⇔</sup> •−•

t

about this is that while for <sup>ϖ</sup> <sup>⇔</sup>

ω

physical interpretation of the relevant temporal "curve".

<sup>I</sup> *<sup>H</sup>* [Figure 3(d)]. In <sup>ϖ</sup> <sup>⇔</sup>

@ <sup>⇔</sup> )( <sup>↑</sup> / ω/ t

> t

ω

@ <sup>⇔</sup> )( <sup>↑</sup> / <sup>∧</sup> / ¬ω ω/ t

is finite, the FUP requires that (ω ) <sup>ω</sup>@ <sup>⇔</sup> •−•

"uncertain". The fact that (ω ) <sup>ω</sup>@ <sup>⇔</sup> •−•

However, just because we have time-windowed ϖ <sup>⇔</sup>

ω

ω/ ω/)(

*tf* )( . However, ⋅ω/ <sup>⇔</sup>

t

physical oscillations (in principle it would be possible to "translate"

To better understand this problem it can be useful to think of

Fig. 4. Simple time-windowed monochromatic temporal waves and their ω/ -space spectra (for simplicity, only the positive phasor-frequency side of the spectra is shown). Each wave's frequency is the same value, ϖ@ , but their duration, time location, and initial phase are different. The value of ϖ@ "codes into" the frequency location of the center of the sincshaped spectra, therefore this location is the same in each spectrum. Differences in the spectral profiles reflect differences in the waves' other features, because those are coded exactly by the FT into the (complex) FT-spectrum. Spectral broadening is due to that fact that the FT-spectrum carries all that coding information, and not to any "uncertainty" in ϖ@

duration and time location of the wave (which is why the spectrum is a Dirac-delta @ <sup>⇔</sup> )( <sup>↑</sup> / ω/ t ω <sup>I</sup> *<sup>H</sup>* ), for <sup>ϖ</sup> •−• t )( @ *th* a Dirac-delta would not provide enough degrees of freedom in its spectral features to be able to code for the wave's duration and time location. It is, then, this encoding of the "window" into the FT-spectrum that manifests itself in a spectral profile that covers the infinite frequency set <sup>⇔</sup> ω/ , and not some kind of "uncertainty" in ϖ*@* . Thinking in terms of "curves", we can render to the curve of ϖ •−• t )( @ *th* [Figure 4(a)] some distinct physical "aspects", such as a "frequency aspect" defined through the term ϖ@*t*)(cos , and a "window aspect" defined through the location and duration of the operative interval •−• t . All of these features are coded exactly into the shape of the spectrum <sup>⇔</sup> •−• / / / t ω (<sup>ω</sup> ) ;ω@ <sup>I</sup> *<sup>H</sup>* [Figure 4(a)]. The "frequency aspect" of ϖ •−• t)( @ *th* translates into the (exact) central

A Reformulative Retouch on the Fourier Transform – "Unprincipled" Uncertainty Principle 235

resonance driving force. Nota bene: a) the common notion that "a shorter pulse excites a wider frequency band" is misleading in that it is valid if by 'excites' one specifically means 'gives absorption-shaped measured signals', but is not valid, and thus does not conflict with the above statements, if by 'excites' one just means 'perturbs' in general (Szántay, 2008c); b) the response of a linear system to an input signal can in fact be calculated from the spectrum of the input signal by way of the so-called Superposition Principle (the NMR system is nonlinear in general, and the linearity approximation is valid only for special limiting

The Fourier Transform (FT) is often thought of as a mathematical device capable of "revealing" the "frequency components" of a temporal signal. Although this notion is correct with regard to any sinusoid oscillatory feature that may be present in the signal, it is deceptive in general, easily leading to the belief that the overall spectral profile represents all the physical "frequency components" that are present in the signal. This idea can give rise to false interpretations such as the concept that the frequency of a time-limited monochromatic sinusoid is "uncertain" or "effectively polychromatic", in line with the "Uncertainty Principle". In fact the Heisenberg Uncertainty Principle (HUP) has nothing to do with this problem; it is the Fourier Uncertainty Principle (FUP) that is of relevance, but the word "uncertainty" is also misleading in that regard: the FUP does not imply that timewindowing a deterministic wave with a monochromatic frequency makes its frequency "uncertain" or that the forward FT "reveals" such uncertainty just because the spectrum has a profile that covers a broad range of frequencies. Whatever physical attributes we associate with a temporal signal (such as that it has an exact monochromatic frequency) must be carried over exactly into the frequency dimension according to the "Law of Conjugate

The author is indebted to Dr. Zsuzsanna Sánta for her critical and constructive mathematical

Claerbout, J. F. (2004). *Earth Sounding Analysis: Processing versus Inversion*. pp. 256-259.

Cohen, L. (1995). *Time-frequency analysis*. Prentice Hall, New Jersey, pp. 44-46 and pp. 195-

Derome, A. E. (1987). *Modern NMR Techniques for Chemistry Research*. Pergamon, New York,

Fukushima E., & Roeder, S. B. W. (1981). *Experimental Pulse NMR, A Nuts and Bolts Approach*.

Hanson, L. G. (2008). Is Quantum Mechanics Necessary for Understanding Magnetic

Resonance? *Concepts in Magnetic Resonance*, Vol.32, pp. 329-340.

Bloch, F. (1946). Nuclear Induction, *Physical Reviews*, Vol. 70, pp. 460-474.

Available at: http://sep.stanford.edu/sep/prof/pvi.pdf

conditions), but that concept should not be confused with the ideas discussed above.

**5. Conclusion** 

Physical Equivalence" (LCPE).

**6. Acknowledgement** 

insights.

**7. References** 

197.

p. 12.

Addison-Wesley, London, p. 51.

frequency value ±ω/ *@* of <sup>⇔</sup> •−• / / / t ω (<sup>ω</sup> ) ;ω@ <sup>I</sup> *<sup>H</sup>* , while the "window aspect" translates into the broadened complex profile of the spectrum. One must therefore understand that the scaled basis elements ⋅ω/ <sup>⇔</sup> t *ti* )exp( in <sup>⇔</sup> •−• / / / t ω (<sup>ω</sup> ) ;ω@ <sup>I</sup> *<sup>H</sup>* do not necessarily represent physical vibrations that are actually or "effectively" present in ϖ •−• t )( @ *th* . Rather, each eternal basis harmonic wave's (complex) amplitude carries a single piece of the entire set of codes that is needed to reconstruct the original temporal function ϖ •−• t )( @ *th* . The physical interpretation that we should, then, correctly assign to <sup>⇔</sup> •−• / / / t ω (<sup>ω</sup> ) ;ω@ <sup>I</sup> *<sup>H</sup>* is simply that the frequency of the central basis element of the spectrum corresponds to the monochromatic frequency of ϖ •−• t )( @ *th* , and the rest of the basis elements code for all of its other features such as its "window aspect".

The popular notion that the forward FT disseminates a temporal function (signal) into its "frequency components" is incorrect if by "frequency component" one understands a physical vibration contained in, and lasting for the duration of, the signal. Rather, the spectrum represents a transposition of all properties of the temporal signal into frequency space, and some of those properties may happen to be periodic events whose frequency will show up as peaks in the spectrum, but other features of the signal also become scrambled into the spectrum. Claiming therefore that any given point of the FT-spectrum represents a "frequency component" of the temporal signal is misleading and violates the LCPE. Indeed, Figures 4(b-e) illustrate this point by showing that if we keep the monochromatic "frequency aspect" of ϖ •−• t )( @ *th* constant but vary its other "aspects", the spectral profiles will change so as to code for every alteration in the temporal curve, but the (unchanged) value of ϖ*@* remains "coded into" the same frequency position for each spectrum.

### **4.3 Back to NMR**

If one reconsiders the statements on PFT-NMR quoted in the Introduction, the flaws in those arguments should now be evident. An RF pulse can be treated as a deterministic, time-windowed harmonic phasor in that it acts as a classical magnetic field vector rotating with a specific monochromatic frequency. The assertion that "a nominally monochromatic short pulse effectively has a (Heisenberg) uncertainty-broadened frequency band" simply reflects a misuse of the HUP and/or a physical misinterpretation of the FUP. The FUP does not warrant the causal inference that "wide frequency band" should mean a wide range of physically existing driving RF oscillations, as is often deemed necessary for explaining the phenomenon of off-resonance excitation. Actually, according to the LCPE the sinc-shaped spectral profile of the pulse is synonymous in physical meaning with its temporal form, therefore the statement that a "short pulse has a wide frequency band" does not tell us any more "physics" about the pulse than saying that a "short pulse is a short pulse". In fact there is nothing paradoxical about the fact that a monochromatic pulse can excite off-resonant spins: the phenomenon is solely due to the pulse's large amplitude, akin to the way a classical mechanical resonator also gives a nonzero response to a sufficiently large off-resonance driving force.
Nota bene: a) the common notion that "a shorter pulse excites a wider frequency band" is misleading in that it is valid if by 'excites' one specifically means 'gives absorption-shaped measured signals', but is not valid, and thus does not conflict with the above statements, if by 'excites' one just means 'perturbs' in general (Szántay, 2008c); b) the response of a linear system to an input signal can in fact be calculated from the spectrum of the input signal by way of the so-called Superposition Principle (the NMR system is nonlinear in general, and the linearity approximation is valid only for special limiting conditions), but that concept should not be confused with the ideas discussed above.
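The classical-resonator analogy can be sketched with the textbook steady-state amplitude of a driven damped oscillator; the parameter values below are arbitrary illustrations, not NMR quantities. Being a linear system, its off-resonance response is nonzero and scales directly with the driving amplitude — no "frequency spread" of the drive is needed.

```python
import math

def steady_state_amp(F, w_drive, w0=1.0, gamma=0.05):
    """Steady-state amplitude of x'' + gamma*x' + w0**2 * x = F*cos(w_drive*t)."""
    return F / math.sqrt((w0**2 - w_drive**2)**2 + (gamma * w_drive)**2)

w_off = 1.3                              # driving 30 % above the resonance frequency
weak = steady_state_amp(1.0, w_off)      # small drive: small but nonzero response
strong = steady_state_amp(10.0, w_off)   # large drive: proportionally larger response
```

The off-resonance response is never zero, and a tenfold driving amplitude yields exactly a tenfold response.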

### **5. Conclusion**

The Fourier Transform (FT) is often thought of as a mathematical device capable of "revealing" the "frequency components" of a temporal signal. Although this notion is correct with regard to any sinusoidal oscillatory feature that may be present in the signal, it is deceptive in general, easily leading to the belief that the overall spectral profile represents all the physical "frequency components" that are present in the signal. This idea can give rise to false interpretations such as the concept that the frequency of a time-limited monochromatic sinusoid is "uncertain" or "effectively polychromatic", in line with the "Uncertainty Principle". In fact the Heisenberg Uncertainty Principle (HUP) has nothing to do with this problem; it is the Fourier Uncertainty Principle (FUP) that is of relevance, but the word "uncertainty" is also misleading in that regard: the FUP does not imply that time-windowing a deterministic wave with a monochromatic frequency makes its frequency "uncertain" or that the forward FT "reveals" such uncertainty just because the spectrum has a profile that covers a broad range of frequencies. Whatever physical attributes we associate with a temporal signal (such as that it has an exact monochromatic frequency) must be carried over exactly into the frequency dimension according to the "Law of Conjugate Physical Equivalence" (LCPE).

### **6. Acknowledgement**

The author is indebted to Dr. Zsuzsanna Sánta for her critical and constructive mathematical insights.

### **7. References**

Bloch, F. (1946). Nuclear Induction, *Physical Reviews*, Vol. 70, pp. 460-474.

Harris, R. K. (1983). *Nuclear Magnetic Resonance Spectroscopy, a Physicochemical View*. Pitman, London, p. 74.

Hoult, D. I. (1989). The Magnetic Resonance Myth of Radio Waves. *Concepts in Magnetic Resonance*, Vol.1, pp. 1-5.

King, R. W. & Williams, K. R. (1989). The Fourier Transform in Chemistry. *Journal of Chemical Education*, Vol.66, pp. A213-A219.

Marshall, A. G. & Verdun, F. R. (1990). *Fourier Transforms in NMR, Optical, and Mass Spectrometry*, Elsevier, New York, p. 31.

Szántay, C., Jr. (2007). NMR and the Uncertainty Principle: How to and How not to Interpret Homogeneous Line Broadening and Pulse Nonselectivity. Part I. The Fundamentals. *Concepts in Magnetic Resonance*, Vol.30A, pp. 309-348.

Szántay, C., Jr. (2008a). NMR and the Uncertainty Principle: How to and How not to Interpret Homogeneous Line Broadening and Pulse Nonselectivity. Part II. The Fourier Connection. *Concepts in Magnetic Resonance*, Vol.32A, pp. 1-33.

Szántay, C., Jr. (2008b). NMR and the Uncertainty Principle: How to and How not to Interpret Homogeneous Line Broadening and Pulse Nonselectivity. Part III. Uncertainty? *Concepts in Magnetic Resonance*, Vol.32A, pp. 302-325.

Szántay, C., Jr. (2008c). NMR and the Uncertainty Principle: How to and How not to Interpret Homogeneous Line Broadening and Pulse Nonselectivity. Part IV. Un(?)certainty. *Concepts in Magnetic Resonance*, Vol.32A, pp. 373-404.

**10**

## **Aspects of Using Chirp Excitation for Estimation of Bioimpedance Spectrum**

Toivo Paavle, Mart Min and Toomas Parve *Th. J. Seebeck Dept. of Electronics Tallinn University of Technology, Estonia*

### **1. Introduction**

Short frequency-swept signals, known as chirps, are widely used as excitation or stimulus signals in various areas of engineering, such as radar and sonar techniques, acoustics and ultrasonics, and optical and seismological studies, but also in biomedical investigations, including bioimpedance measurement and impedance spectroscopy (Müller & Massarani, 2001; Misaridis & Jensen, 2005; Barsoukov & Macdonald, 2005; Nahvi & Hoyle, 2009).

Signal processing in chirp-based applications is often combined with pulse compression via a cross-correlation procedure and Fourier analysis. In this chapter, a similar approach is proposed for estimation of the frequency response (the impedance spectrum) of electrical bioimpedance. An advantage of the chirp-based method is that the characteristics of a biological object can be obtained in a wide frequency range during a very short measurement cycle, which nearly eliminates the influence of low-frequency biological processes (heart beating, breathing, pulsation of blood) on the result of measurement.

The changes of a spectrum monitored at sequential time intervals by means of the Fourier Transform (known as a spectrogram) form an informative basis for interpreting the processes in biological objects. Furthermore, signal treatment by cross-correlation yields better noise immunity for the measurement system (Barsoukov & Macdonald, 2005), and adds some alternatives for estimation of the bioimpedance properties and behavior.

### **2. Basics of bioimpedance measurement**

### **2.1 Object of measurement**

The impedance of living tissues or, in general, of arbitrary biological matter (electrical bioimpedance, EBI) can be characterized by its electrical equivalent, which, in turn, can be represented as the frequency-dependent complex vector *Ż*(jω) = Re(*Ż*(jω)) + jIm(*Ż*(jω)) = *Z*(ω) exp(jΦz(ω)), where ω = 2π*f*, *Z*(ω) = (Re(*Ż*(jω))<sup>2</sup> + Im(*Ż*(jω))<sup>2</sup>)<sup>½</sup>, and Φz(ω) = arctg(Im(*Ż*(jω))/Re(*Ż*(jω))).
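As a quick arithmetic illustration of these definitions (the impedance value is hypothetical), the polar form reconstructs the real and imaginary parts exactly; `atan2` is used here instead of a plain arctg so the phase is quadrant-safe.

```python
import math

Z_c = complex(80.0, -30.0)                      # hypothetical Re(Z) + jIm(Z) at one frequency, ohms

Z_mag = math.sqrt(Z_c.real**2 + Z_c.imag**2)    # Z(w) = (Re^2 + Im^2)^(1/2)
phi_z = math.atan2(Z_c.imag, Z_c.real)          # arctg(Im/Re), quadrant-safe

re_back = Z_mag * math.cos(phi_z)               # recover Re from the polar form
im_back = Z_mag * math.sin(phi_z)               # recover Im from the polar form
```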

On the other hand, the bioimpedance is not a constant value but a function that changes in time due to numerous biological processes in the living tissue and, as a matter of fact, is a function of frequency and time altogether: *Ż* = *Ż*(jω, *t*).


Injection of the excitation current (stimulus) *I*exc(*t*) with known parameters into the biological object causes the response voltage *V*z(*t*), the analysis of which enables estimation of the impedance spectrum *Ż*(ω) of the object under investigation.

Fig. 1 illustrates the path of excitation current through the cells of a tissue, where *r*ext corresponds to the extracellular resistance, and the resistive components *r*int together with the intercellular capacitances *C*c constitute the intracellular impedance (Grimnes & Martinsen, 2008; Min & Parve, 2007).

Fig. 1. Formation of the electrical bioimpedance of tissue

Very often, the response signal is analyzed using the Fourier Transform *F*(*V*z(*t*)) to get information about the frequency-dependent state and changes of the biological matter. Here, the impedance spectrum of EBI manifests as *Ż*(jω) = *F*(*V*z(*t*))/*F*(*I*exc(*t*)), i.e., the Fourier Transform of the response voltage determines the impedance spectrum of the object one-to-one thanks to the predetermined parameters of the excitation signal.
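A minimal numerical sketch of this relation, assuming a purely resistive object of hypothetical value R = 100 Ω so that the expected spectrum is known in advance; the excitation is placed exactly on an FFT bin to avoid leakage.

```python
import numpy as np

fs, n, R = 1_000_000.0, 4096, 100.0     # sample rate, record length, true resistance [ohm]
t = np.arange(n) / fs
f1 = 40 * fs / n                        # put the excitation exactly on FFT bin 40
i_exc = np.sin(2 * np.pi * f1 * t)      # known excitation current
v_z = R * i_exc                         # response of the purely resistive object

V_f = np.fft.rfft(v_z)
I_f = np.fft.rfft(i_exc)
Z_est = V_f[40] / I_f[40]               # Z(jw) = F(Vz)/F(Iexc) at the excited bin
```

At the excited bin the ratio returns the true impedance (here simply R with zero phase).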

Usually, in theoretical considerations and simulations, the bioimpedance is substituted by a certain RC-circuit. Naturally, the accuracy of such approximation depends on the number and configuration of components. In Fig. 2a, a 5-element circuit is shown, where *R*<sup>0</sup> corresponds to the extracellular resistance *r*ext of a tissue, while *R*1, *C*1, *R*2 and *C*2 stand for the intracellular parameters. The respective Bode diagram has two real poles *f*p1, *f*p2, and two zeros *f*z1, *f*z2 (Fig. 2b), spread typically over the frequency range from some kHz up to several MHz (Nebuya *et al* 1999; Pliquett *et al*, 2000; Grimnes & Martinsen, 2008).

The Laplace transform of the 5-element EBI (Fig. 2a) can be expressed as (Paavle *et al* 2008)

$$Z(s) = \frac{R_0\left(1 + sC_1R_1\right)\left(1 + sC_2R_2\right)}{\left(1 + sC_1R_1\right)\left(1 + sC_2R_2\right) + sR_0\left[C_1\left(1 + sC_2R_2\right) + C_2\left(1 + sC_1R_1\right)\right]}\tag{1}$$

Fig. 2. (a) 5-element model and (b) Bode diagram of the bioimpedance

In the simplest case, where *C*2→0 (basic 3-element EBI):

$$Z(s) = \frac{R\_0 \left(1 + sC\_1 R\_1\right)}{1 + sC\_1 \left(R\_0 + R\_1\right)}\tag{2}$$
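Evaluating (2) along s = j2πf reproduces the expected limiting behavior: at low frequencies the capacitor blocks the intracellular branch and |Z| → R0, while at high frequencies it shorts, leaving R0 in parallel with R1. The component values below are arbitrary illustrative choices, not tissue data.

```python
import math

def Z3(f, R0=1000.0, R1=500.0, C1=10e-9):
    """Eq. (2): Z(s) = R0*(1 + s*C1*R1) / (1 + s*C1*(R0 + R1)), with s = j*2*pi*f."""
    s = 2j * math.pi * f
    return R0 * (1 + s * C1 * R1) / (1 + s * C1 * (R0 + R1))

z_low = abs(Z3(1.0))       # well below the dispersion: extracellular path only, ~R0
z_high = abs(Z3(100e6))    # well above it: ~R0*R1/(R0 + R1)
```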

For adequate estimation of the bioimpedance, the spectrum of the excitation signal should cover the frequency range of the object *Ż* as much as possible. For that reason, several types of broadband excitation signals (e.g., Maximal Length Sequence (Gawad *et al*, 2007), multisine (Sanchez *et al* 2011)) are preferable. However, in this chapter, we will focus on the use of chirp excitation due to several of its advantages: a wide and flat amplitude spectrum (Nahvi & Hoyle, 2009) together with independent scalability in the time and frequency domains.

### **2.2 Essentials of chirp excitation**

### **2.2.1 Variety of chirps**


A sine-wave based chirp with current phase θ(*t*) can be described mathematically as

$$V\_{\rm ch}(t) = A \sin \left( \theta(t) \right) = A \sin \left( 2 \pi \int f(t) \,\mathrm{d}t \right) \tag{3}$$

where *A* is the amplitude, and *f*(*t*) = (dθ(*t*)/d*t*)/2π is the instantaneous frequency of the chirp signal.

A definite class of chirps has an instantaneous frequency *f*(*t*) that changes according to a power function of the *n*th order (power chirps). Their specific quantity is the chirping rate β = (*f*fin–*f*st)/*T*ch*<sup>n</sup>*, where *f*st and *f*fin are the initial and final frequencies, respectively, and *T*ch is the duration of the pulse. Chirps of this type can be expressed as

$$V\_{\rm ch}(t) = A \sin \left( 2\pi \left( f\_{\rm st} t + \beta t^{n+1} \;/\left( n+1 \right) \right) \right) \tag{4}$$

The instantaneous frequency of a chirp can be an increasing (up-chirps, β>0) or decreasing (down-chirps, β<0) quantity. Besides, it is sometimes practical to generate chirps with a symmetrical bidirectional frequency change (bidirectional or double chirps). In this case, the actual duration of the pulse is 2*T*ch, and the sign of the chirping rate alters at *t* = *T*ch, causing a mirrored waveform of the pulse about that moment.

Fig. 3 sketches waveforms of different chirps with equal pulse duration and almost equal frequency range. A very basic linear (*n*=1) chirp with *f*st=0 can be described in accordance with expression (4) as *V*ch(*t*) = *A*sin(2π*f*fin*t*<sup>2</sup>/2*T*ch). It is depicted in Fig. 3a.
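For this *f*st = 0 linear chirp, the phase at *t* = *T*ch fixes the number of swept cycles at *L* = *f*fin·*T*ch/2, which a short computation confirms (the 100 kHz / 40 µs figures are arbitrary example values):

```python
import math

f_fin, T_ch = 100e3, 40e-6                  # hypothetical final frequency and pulse duration

def theta(t):
    """Phase of the fst = 0 linear chirp: V_ch = A*sin(2*pi*f_fin*t**2 / (2*T_ch))."""
    return 2.0 * math.pi * f_fin * t**2 / (2.0 * T_ch)

cycles = theta(T_ch) / (2.0 * math.pi)      # should equal L = f_fin * T_ch / 2
```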

Fig. 3b shows the waveform of a quadratic (*n*=2) down-chirp. However, the rule of frequency change can be arbitrary. For example, in some specific measurements, excitation with the exponential chirping rate β = (*f*fin/*f*st)<sup>*t*/*T*ch</sup> is appropriate (Darowicki & Slepski, 2004). The waveform of sinusoidal exponential chirps (see Fig. 3c) is described as

$$V\_{\rm ch}(t) = \sin\left(2\pi f\_{\rm st}T\_{\rm ch}\left(\beta - 1\right)/\ln(\beta)\right)\tag{5}$$
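One way to sanity-check (5) is to differentiate its phase numerically: the instantaneous frequency should sweep from *f*st to *f*fin over the pulse. In the sketch below, k = *f*fin/*f*st denotes the overall frequency ratio, and all parameter values are arbitrary illustrations.

```python
import math

f_st, f_fin, T_ch = 10e3, 100e3, 50e-6      # hypothetical exponential chirp parameters
k = f_fin / f_st                            # overall frequency ratio swept over the pulse

def theta(t):
    """Phase of the exponential chirp; d(theta)/dt / (2*pi) = f_st * k**(t/T_ch)."""
    return 2.0 * math.pi * f_st * T_ch * (k ** (t / T_ch) - 1.0) / math.log(k)

dt = 1e-12                                  # small step for a numerical derivative
f_start = (theta(dt) - theta(0.0)) / (2.0 * math.pi * dt)
f_end = (theta(T_ch) - theta(T_ch - dt)) / (2.0 * math.pi * dt)
```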

Generation of a perfect sine-wave chirp requires quite complicated hardware, which can cause problems, especially in on-chip solutions. That is why so-called signum-chirps (known also as pseudo-, binary- or Non-Return-to-Zero (NRZ) chirps) are often implemented (Figs. 3d and 3e; the latter one depicts a binary chirp with bidirectional change


of frequency). This kind of chirp can be defined by the signum-function of the respective sine-wave chirp as *V*ch(*t*) = *V*sgn(*t*) = sign(*V*sin(*t*)), which has binary values +*A* and –*A* only.

Fig. 3. Examples of chirp waveforms with the equal maximal frequency and duration, *V*ch(*t*) vs. time: (a) linear sine-wave chirp; (b) quadratic down-chirp; (c) exponential chirp; (d) signum-chirp; (e) signum-chirp with bidirectional (down-up) run of frequency; (f) ternary chirp with shortened duty cycle by 30°

Rectangular waveforms simplify signal processing: both generation of the excitation and processing of the response become substantially simpler. Especially simple is the calculation of the correlation function or deconvolution in the time domain – shifting and multiplication with a reference signal having only {+1, -1} or {+1, 0, -1} values (Rufer *et al*, 2005).

An additional advantage of signum-chirps is their unity crest factor (i.e., the ratio of the peak amplitude to the root-mean-square (RMS) value of the signal) and their higher energy compared with sine-wave chirps of the same length. Unfortunately, the use of rectangular signals worsens the purity of the spectrum due to the accompanying higher harmonic components.
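The crest-factor claim is easy to verify numerically; the chirp parameters below are arbitrary. The signum-chirp's crest factor is 1 by construction, while the sine-wave chirp's is close to √2.

```python
import numpy as np

fs, f_fin, T_ch = 10e6, 100e3, 1e-3                       # hypothetical linear chirp parameters
t = np.arange(0.0, T_ch, 1.0 / fs)
v_sin = np.sin(2 * np.pi * f_fin * t**2 / (2 * T_ch))     # sine-wave chirp, A = 1
v_sgn = np.sign(v_sin)                                    # corresponding signum-chirp

def crest(v):
    """Crest factor: peak amplitude over RMS value."""
    return np.max(np.abs(v)) / np.sqrt(np.mean(v**2))

c_sin, c_sgn = crest(v_sin), crest(v_sgn)
```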

Suppression of a particular harmonic component in the spectrum of rectangular signals can be achieved by shortening the duty cycle of the signal by a certain degree αd per quarter-period (Parve & Land, 2004). Signum-chirps modified in this way are called Return-to-Zero (RZ) or ternary chirps. It means that *V*sgn(*t*) returns to zero if the value of the current phase falls into the intervals 2*n*π – αr < θ(*t*) < 2*n*π + αr or (2*n*+1)π – αr < θ(*t*) < (2*n*+1)π + αr, where αr = παd/180 in radians, and *n* = 0, 1, 2, … (Fig. 3f). For explanation, let us recall that the spectrum of a rectangular signal can be expressed as the Fourier series of odd harmonics:

$$F(\omega t) = \frac{4A}{\pi} \left[ \frac{\cos\alpha\_r}{1} \sin \omega t + \frac{\cos 3\alpha\_r}{3} \sin 3\omega t + \dots \right] = \frac{4A}{\pi} \left[ \sum\_{i=1}^{\infty} \frac{\cos(2i - 1)\alpha\_r}{2i - 1} \sin(2i - 1)\omega t \right] \tag{6}$$

It follows from (6) that the *k*th harmonic (*k*=2*i*−1, and *i*=1, 2, 3, …) is absent from the series *F*(ω*t*), when *k*αr=±(2*n*+1)π/2, and *n*=0, 1, 2, … , because of cos(*k*αr)=0. Consequently, in the case of αr = π/6, the (3+6*n*)th harmonics are removed, and for αr = π /10, the (5+10*n*)th harmonics are removed, etc. (Paavle et al, 2007).
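A numerical check of this suppression rule, sampling one period of a ternary waveform with αr = π/6 (so the 3rd harmonic should vanish while the 1st and 5th survive):

```python
import numpy as np

n = 4096
theta = 2.0 * np.pi * np.arange(n) / n     # one period of the current phase
alpha_r = np.pi / 6.0                      # 30 degree shortening -> kills the 3rd harmonic

v = np.sign(np.sin(theta)).astype(float)   # plain signum square wave, A = 1
frac = theta % np.pi                       # position within each half-period
v[(frac < alpha_r) | (frac > np.pi - alpha_r)] = 0.0   # Return-to-Zero blanking zones

amps = np.abs(np.fft.rfft(v)) / (n / 2.0)  # amplitudes of harmonics 0, 1, 2, ...
h1, h3, h5 = amps[1], amps[3], amps[5]
```

The fundamental stays near (4/π)cos(αr), the 3rd harmonic collapses to (numerically) zero, and the 5th remains.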

#### **2.2.2 Short-time chirps (titlets)**

Commonly, referring to chirps, signals of many cycles (rotations by 2π of the chirp-generating vector) are considered (multi-cycle chirps). It does not need to be so, and the


chirps with a single cycle or even less (θ(*T*ch) ≤ 2π) can be generated and used, too. For such ultra-short chirps, the neologism "titlets" or the term Minimal Length Chirps (MLC) has been used (Min et al, 2011a).

Titlets can be very effective excitation signals in applications where broadband excitation is necessary but minimal power consumption or extremely fast measurement is required at the same time. Because of the prospective use of titlets, special attention will be paid to their properties below. The diagram in Fig. 4 explains the forming of a single-cycle linear sine-wave chirp.

Fig. 4. Genesis of the linear single-cycle sine-wave chirp

Fig. 5. Examples of titlet waveforms with *f*fin=100 kHz (*f*st=0 for power titlets)

Considering (4), we can state that the equality of phases θ(*T*ch) = 2π(*f*st*T*ch + β*T*ch<sup>*n*+1</sup>/(*n*+1)) = 2π*L* is valid at *t* = *T*ch. Thereby, the number of cycles *L* can be an integer or a fractional quantity. This allows us to derive the relationship between the length of the chirp and its parameters. For the *n*th order power chirps this relationship is expressed as (Min *et al*, 2011a)

$$T\_{\rm ch} = L \left( n + 1 \right) / \left( n f\_{\rm st} + f\_{\rm fin} \right) \tag{7}$$

Thus, for the single-cycle linear chirp with *f*st = 0, the pulse duration is *T*ch = 2/*f*fin. It follows for this case that the change of polarity (θ(*t*) = π) occurs at *t* = √2/*f*fin, while the instantaneous frequency at that instant is *f* = (√2/2)*f*fin (see Fig. 4).
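As a quick numerical cross-check of (7), the sketch below (the helper name `t_ch_power` is ours, not from the chapter) reproduces the single-cycle values discussed above:

```python
import math

def t_ch_power(L, n, f_st, f_fin):
    # Pulse length of an n-th order power chirp with L cycles, eq. (7):
    # T_ch = L * (n + 1) / (n * f_st + f_fin)
    return L * (n + 1) / (n * f_st + f_fin)

f_fin = 100e3                                        # final frequency, Hz
T_ch = t_ch_power(L=1, n=1, f_st=0.0, f_fin=f_fin)   # linear single-cycle: 2/f_fin = 20 us
t_pol = math.sqrt(2) / f_fin                         # polarity change, theta(t) = pi
```

For a quadratic chirp (*n* = 2) with the same settings, the function gives *T*ch = 3/*f*fin = 30 μs.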

Aspects of Using Chirp Excitation for Estimation of Bioimpedance Spectrum 243


Similarly, considering (5), one can show that for exponential chirps the duration of a pulse is

$$T\_{\rm ch} = L \cdot \ln \left( f\_{\rm fin} \, / \, f\_{\rm st} \right) / \left( f\_{\rm fin} - f\_{\rm st} \right) \tag{8}$$

Simulated waveforms of some typical titlets with *L* ≤ 1 are shown in Fig. 5.

#### **2.3 Signal analysis and bioimpedance measurement by using Fourier Transform**

Spectral analysis is an indispensable method, both for studying the behavior of excitation signals in the frequency domain and for processing the response signal in bioimpedance measurements.

It is important to remember that in the modeling and processing of chirp signals the selected sampling rate *f*s must satisfy the Nyquist criterion over the whole frequency range, i.e., the condition *f*s ≥ 2*f*fin must be fulfilled. The sampling rate determines the total number of samples *N*ch during a chirped pulse as *N*ch = *T*ch*f*s. Yet, the simulation or real processing time *T*tot can be longer than *T*ch; in this case, zero padding is used at *t* > *T*ch. The length of the input array for the Fast Fourier Transform (FFT) is *N* = *T*tot*f*s ≥ *N*ch, and it determines the frequency resolution Δ*f* (the difference between two successive frequency bins) of the FFT processing as Δ*f* = *f*s/*N*, in accordance with the uncertainty principle (Vaseghi, 2006). The acquired positive frequency range of the FFT processing is 0, …, *N*Δ*f*/2 in steps of Δ*f*.
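A minimal numeric sketch of this bookkeeping (the values are illustrative, not from the chapter):

```python
f_fin = 100e3            # final chirp frequency, Hz
f_s   = 1e6              # sampling rate; satisfies the Nyquist condition f_s >= 2*f_fin
T_ch  = 20e-6            # chirp pulse length, s
T_tot = 100e-6           # total processing time; zero padding is used for t > T_ch

N_ch = int(round(T_ch * f_s))    # samples per chirp pulse: 20
N    = int(round(T_tot * f_s))   # FFT input length: 100
df   = f_s / N                   # frequency resolution: 10 kHz
f_hi = N * df / 2                # top of the acquired positive range: 500 kHz
```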

#### **2.3.1 Principles of bioimpedance measurements**

There are several methods to calculate the frequency spectrum of the bioimpedance response. A typical way is the implementation of two FFT channels for the response and excitation signals separately (Min *et al*, 2011a). Nevertheless, in the following discussion we will focus mainly on the structure and modeling of a specific bioimpedance measurement system, which incorporates the cross-correlation procedure together with subsequent FFT-processing (Paavle *et al*, 2008; Min *et al*, 2009; see Fig. 6).

Fig. 6. Basic structure of the measurement system

In such a system, the cross-correlation function (CCF) is calculated: (a) between the response *V*z(*t*) and the excitation *V*ch(*t*) at unity gain in the reference channel (*V*ref(*t*) = *V*ch(*t*), *Ż*ref = 1), or (b) between the response and a predefined reference signal *V*ref(*t*) (the reference channel includes the known impedance *Ż*ref ≠ 1). In the latter case, the system works as a matched filter, enabling more precise detection of mismatches between the impedance vectors *Ż* and *Ż*ref, together with somewhat better noise reduction. The source noise *n*s(*t*) and object noise *n*z(*t*) can be taken into account in simulations as shown in Fig. 6.

In general, calculation of the CCF proceeds as


$$r\_{\rm xy}(\tau) = \operatorname{Corr}\left(V\_{\rm z}(t), V\_{\rm ref}(t)\right) = \overline{V\_{\rm z}(t)V\_{\rm ref}(t+\tau)},\tag{9}$$

where τ is a variable delay (lag) and the overline denotes averaging.

Using broadband chirp excitation, the waveform *r*xy(τ) of the CCF is similar to a sin(*x*)/*x*-type sinc function, shaped by the nature of *V*z(*t*), and is strongly compressed around τ = 0.

The CCF includes both the information about the amplitude level and phase shift of the bioimpedance vector. Thanks to this fact, only a single FFT-block is necessary to obtain the complete information about the object. Otherwise, when using the direct Fourier Transform of the response, another FFT channel is required to establish the basis for phase evaluations.

In accordance with the cross-correlation theorem (generalized Wiener-Khinchin theorem), the correlation function is equivalent to the inverse Fourier Transform of the complex cross-power spectral density *P*xy(*f*), with the magnitude and phase spectral components |*P*xy(*f*)| = √((Re *P*xy(*f*))² + (Im *P*xy(*f*))²) and Φxy(*f*) = arctan(Im *P*xy(*f*)/Re *P*xy(*f*)), respectively (Vaseghi, 2006). Thereby, the phase component Φxy(*f*) specifies the phase difference between the vectors *V*z and *V*ref, and the magnitude presents the geometric mean of the power spectral densities of the signals *V*z(*t*) and *V*ref(*t*) (McGhee *et al*, 2001).

#### **2.3.2 Modeling of the measurement system**

According to the main architecture of the measurement system (Fig. 6) and the model of EBI (Fig. 2a), a special PC-model (Fig. 7) was developed to verify the theoretical conceptions (Paavle *et al*, 2008).

Fig. 7. Model of the measurement system

The cross-correlation function is calculated by the cumulative adder for every *m*th lag τm as

$$r\_{\rm xy}\left[m\right] = \frac{1}{N\_{\rm ch}} \sum\_{k=0}^{N\_{\rm ch}-1} V\_{\rm z}\left[k\right] \cdot V\_{\rm ref}\left[k + \tau\_{\rm m}\right] \tag{10}$$

where *k* = 0 … *N*ch–1, and *N*ch is the number of samples per length of the chirp pulse (McGhee *et al*, 2001).
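Eq. (10) translates directly into code; in this sketch (`ccf` is our own name) the reference record is zero-padded so that every requested lag is defined, and the autocorrelation of a linear chirp indeed peaks at zero lag:

```python
import numpy as np

def ccf(v_z, v_ref, lags):
    """r_xy[m] = (1/N_ch) * sum_{k=0}^{N_ch-1} v_z[k] * v_ref[k + tau_m], eq. (10)."""
    n_ch = len(v_z)
    pad = np.concatenate([v_ref, np.zeros(max(lags) + n_ch)])  # guard padding
    return np.array([np.dot(v_z, pad[m:m + n_ch]) / n_ch for m in lags])

# Autocorrelation of a linear chirp is strongly compressed around zero lag:
t = np.arange(2000) / 1e6                              # 2 ms record at f_s = 1 MHz
v = np.sin(2 * np.pi * (100e3 / (2 * 2e-3)) * t**2)    # 0...100 kHz linear chirp
r = ccf(v, v, range(0, 200))
```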


If the uniform delay step is Δτ, then for the overall delay interval (observation time) *t*obs = τmax − τmin (τmin ≤ τm ≤ τmax) the number of necessary computing cycles is *M* + 1 with *M* = *t*obs/Δτ. As a result, we acquire an array of correlation values *r*xy[*m*] with *m* = 0 … *M*. Supposing that the selected Δτ and τmin are integer multiples of the sampling interval 1/*f*s, the *m*th delay, expressed in samples, is τm = *f*s(τmin + *m*Δτ).

In the following Fourier Transform, the array *r*xy of length *M*+1 represents the input stream for the FFT-block, while in accordance with the uncertainty principle (see above), the frequency resolution of spectra becomes Δ*f* =1/*t*obs over the frequency range *f* = 0 … 1/(2Δτ).
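With illustrative numbers, the lag grid and the resulting spectral coverage of the correlation channel work out as follows (a sketch under our own parameter choices):

```python
import numpy as np

f_s     = 1e6        # sampling rate, Hz
dtau    = 1e-6       # uniform delay step (an integer multiple of 1/f_s)
tau_min = 0.0
tau_max = 100e-6
t_obs = tau_max - tau_min                # observation time
M = int(round(t_obs / dtau))             # -> M + 1 = 101 computing cycles
m = np.arange(M + 1)
tau_m = (f_s * (tau_min + m * dtau)).round().astype(int)   # lags in samples
df    = 1.0 / t_obs                      # frequency resolution of the CCF spectrum: 10 kHz
f_top = 1.0 / (2 * dtau)                 # covered frequency range: 0 ... 500 kHz
```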

### **3. Spectral features and energy of chirps**

Next, let us take a look at the results of the direct Fourier Transform of distinct chirp pulses. The direct FFT of different excitation signals enables comparison of their amplitude and power spectra, together with estimation of their energy properties, keeping in view the requirements of bioimpedance measurement.

Important and desired spectral properties of excitation signals to improve the quality of wideband measurement are:

• a flat amplitude spectrum with minimal fluctuation (ripple) and no overshoots inside the generated (excitation) bandwidth *B*exc = *f*fin − *f*st;
• a steep drop-down of the amplitude spectrum outside the bandwidth *B*exc;
• maximal energy-efficiency, i.e., the ratio between the energy lying within the generated (excitation) bandwidth *B*exc and the total energy of the signal.

For some specific applications, these properties must coincide with minor power consumption and with shortness of the signal to ensure quick measurement.

Fig. 8. Voltage spectral density of linear chirps 0…100 kHz (Δ*f*=50 Hz): (a) linear scaling at the unity amplitude of chirp pulses; (b) normalized logarithmic scaling

Fig. 8 depicts the amplitude spectra of linear sine-wave chirps of different lengths on linear and normalized logarithmic scales. The base of normalization is the amplitude |*V*(*f*0)|, i.e., the magnitude at the lowest frequency bin *f*0 of the Fourier Transform. It is obvious that long chirps assure a better-fitting spectral shape, apart from a certain rippling caused by the Gibbs effect. The latter can be suppressed by windowing in the time domain (often a boxcar-type window function is used for this purpose), but in this case the total energy of the signal decreases.

Fig. 9 shows the frequency run and the respective amplitude spectra of different titlets. It appears that the desired shape of the spectrum, including flatness, satisfying drop-down outside the bandwidth (slopes from −20 dB/dec to −80 dB/dec were observed) and admissible overshoots, can be achieved by a proper selection of the type and length of the titlet. Moreover, additional correction and shaping of spectra can be attained by windowing of titlets in the time domain (see below in Sect. 3.2). In Fig. 10, waveforms and amplitude spectra of some rectangular chirps (see Sect. 2.2.1) with *L*=10 are shown. Naturally, for this kind of chirps the rippling of spectra is noticeable.

Fig. 9. (a) Frequency run and (b) amplitude spectra of different titlets with *B*exc=0…100 kHz

Fig. 10. (a) Waveforms of binary and ternary chirps with αd=30°; (b) amplitude spectra of binary and ternary chirps with αd=30° and αd=18° (Δ*f*=100 Hz)

### **3.1 Energy of chirps**

### **3.1.1 Sine-wave chirps**

In principle, the total energy of a chirp signal at a unity load (*R*load = 1 Ω) is expressed in the time domain as

$$E\_{\rm tot} = \int\_0^{T\_{\rm ch}} V\_{\rm ch}(t)^2 \, dt \,\,\tag{11}$$

which leads to *E*tot = (*A*²/2)*T*ch for long chirps of sinusoidal waveform.
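A numerical evaluation of (11) for a reasonably long linear sine-wave chirp confirms *E*tot ≈ (*A*²/2)*T*ch (a sketch with illustrative parameters):

```python
import numpy as np

A, f_fin, L = 1.0, 100e3, 100              # unit amplitude, 100-cycle linear chirp
T_ch = 2 * L / f_fin                        # eq. (7) with n = 1, f_st = 0: 2 ms
t = np.linspace(0.0, T_ch, 200_000, endpoint=False)
v = A * np.sin(2 * np.pi * (f_fin / (2 * T_ch)) * t**2)   # linear chirp, 0...f_fin
E_tot = float(np.sum(v**2) * (t[1] - t[0]))               # eq. (11), R_load = 1 Ohm
# E_tot comes out close to (A**2 / 2) * T_ch for this long chirp
```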

According to Parseval's theorem, the total energy in the frequency domain is the same as the energy in the time domain (Vaseghi, 2006). For chirps of finite length, a certain part of the signal energy falls outside the useful (generated) bandwidth because of the higher frequency components. For that reason, it is necessary to distinguish the total energy of a generated chirp pulse, which varies proportionally with *T*ch, and the useful energy *E*exc falling inside the chirp bandwidth *B*exc. Typically, *E*exc < *E*tot, while the absolute values of both quantities depend on the chirp length, waveform, and spectral nature. To characterize the percentage of the useful energy, we employ the term energy-efficiency, δE = *E*exc/*E*tot. For sine-wave chirps with *L* → ∞, the ratio δE → 1.

Fig. 11. (a) Total energy and (b) energy-efficiency of linear sine-wave chirps

| Type of chirp (titlet) | *L* | *f*st, Hz | *T*ch, μs | *P*avg, mW | *E*tot, nJ | δE, % |
|---|---|---|---|---|---|---|
| linear | ¼ | 0 | 5.0 | 0.32 | 1.6 | 55.0 |
| linear | ½ | 0 | 10.0 | 0.38 | 3.8 | 90.6 |
| linear | 1 | 0 | 20.0 | 0.41 | 8.3 | 93.5 |
| linear | 10 | 0 | 200.0 | 0.47 | 94.4 | 97.8 |
| linear | 100 | 0 | 2000 | 0.49 | 982 | 99.3 |
| quadratic | ½ | 0 | 15.0 | 0.29 | 4.4 | 93.9 |
| quadratic | 1 | 0 | 30.0 | 0.33 | 10.0 | 95.7 |
| quadratic | 100 | 0 | 3000 | 0.464 | 1393 | 99.5 |
| cubic | 1 | 0 | 40.0 | 0.28 | 11.1 | 96.4 |
| exponential | 1 | 1 | 115.1 | 0.135 | 15.6 | 97.8 |
| exponential | 10 | 1 | 1151 | 0.235 | 271 | 99.5 |
| double-quadratic | 2×¼ | 0 | 2×7.5 | 0.23 | 3.4 | 93.1 |
| NRZ signum-chirp | 1 | 0 | 20.0 | 1.0 | 20.0 | 84.1 |
| NRZ signum-chirp | 1000 | 0 | 20e3 | 1.0 | 20e3 | 85.1 |
| RZ chirp (18° short.) | 1000 | 0 | 20e3 | 0.8 | 16.0e3 | 93.1 |
| RZ chirp (30° short.) | 1000 | 0 | 20e3 | 0.67 | 13.3e3 | 92.1 |

Table 1. Energy and average power of different chirp pulses with *f*st≈0 and *f*fin=100 kHz

Considering expressions (11) and (4) or (5), the analytical description of the chirp energy becomes very complicated, so it is reasonable to calculate the respective quantities numerically. Nevertheless, a good approximation of the total energy and energy-efficiency of an arbitrary chirp signal can be obtained from the results of the Fourier Transform as follows (Paavle *et al*, 2010):

$$\delta\_{\rm E} = \sum\_{i=N\_{\rm st}}^{N\_{\rm fin}} \left| V\_{\rm ch}(f\_i) \right|^2 \ / \ \sum\_{i=0}^{N\_{\rm max}-1} \left| V\_{\rm ch}(f\_i) \right|^2 \tag{12}$$

where |*V*(*f*i)| is the value of the amplitude spectrum at the *i*th frequency bin, *N*st and *N*fin are the numbers of the frequency bins corresponding to *f*st and *f*fin, respectively, and *N*max is the total number of frequency bins; the divisor in (12) corresponds to the total energy *E*tot. Of course, this method enables calculation of the partial energy for any frequency interval inside the range of 0 to *N*maxΔ*f*.

The curves in Fig. 11 present graphically the dependence of the energy and the energy-efficiency of linear sine-wave chirps on the number of chirp cycles. A selection of power and energy parameters for different chirps and titlets, obtained from the FFT-processing using expression (12), is gathered in Table 1, where the average power *P*avg = (1/*T*ch)∫*V*ch²(*t*)d*t* = *E*tot/*T*ch presumes a 1 kΩ load.

### **3.1.2 Binary signum-chirps**

A specific feature of signum-chirps is the gradually decreasing amplitude spectrum by step width of 2*B*exc as shown in Fig. 12 (Min *et al*, 2009). Let us analyze this phenomenon.

The fundamental harmonic of a regular rectangular signal with amplitude *A*1 = (4/π)*A* has the root-mean-square (RMS) value *A*1/√2 = 4*A*/(π√2), energy *E*1 = (*A*1²/2)*T*ch = (8/π²)*A*²*T*ch, and power *W*1 = *E*1/*T*ch = (8/π²)*A*², which creates a constant power spectral density (PSD) *w*1 = *W*1/*B*exc, V²/Hz, within the bandwidth *B*exc of the fundamental harmonic:

$$w\_1 = \left(8/\pi^2\right) A^2 / B\_{\rm exc} \tag{13}$$

The amplitudes of the *k*th higher harmonics are *A*k = *A*1/*k*. Therefore, the power of every *k*th higher odd harmonic (*k* = 3, 5, 7, …) is equal to *W*1/*k*², being spread over the frequency range *B*k = *kf*fin − *kf*st = *kB*exc. Because the higher harmonics have *k* times wider bandwidth than the fundamental (first) harmonic, their PSD can be expressed through (13) as *w*k = (*w*1/*k*²)/*k* = *w*1/*k*³ (Min *et al*, 2009).

The total power of the generated signal *V*ch(*t*) is gradually distributed over its whole frequency range, theoretically from *f*st to ∞, see Fig. 12. The PSD *p*h of every gradual *h*th level (*h* = 1, 2, 3, 4, …) of the spectrum, beginning from the first one *p*1, is the sum of the power spectral densities of the fundamental and higher harmonics, *w*1 and *w*k, which contribute to the given level *h*. To the PSD *p*1 of the first level all the signal components contribute, *p*1 = *w*1 + *w*3 + *w*5 + *w*7 + …; to the second level only the higher harmonics, *p*2 = *w*3 + *w*5 + *w*7 + … (*k* = 3, 5, 7, …); to the third level only the harmonics beginning from *k* = 5, *p*3 = *w*5 + *w*7 + …, etc.

Generally,

$$p\_{\mathbf{h}} \approx w\_1 \sum\_{i=h}^{h+m} \left(2i - 1\right)^{-3} \tag{14}$$

in which *i* is an integer beginning from *h*, that is: *i* = *h*, (*h*+1), (*h*+2), (*h*+3), …, (*h*+m), where *m* is the number of higher odd harmonics taken into account.
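The level sums in (14) are easy to evaluate numerically (a sketch; the truncation limit `m` is arbitrary). Note that *p*2/*w*1 equals *p*1/*w*1 − 1, since the fundamental contributes exactly 1 to the first level:

```python
def p_level(h, m=100_000):
    # PSD of the h-th spectral level relative to w1, eq. (14):
    # p_h / w1 = sum_{i=h}^{h+m} (2*i - 1)**-3
    return sum((2 * i - 1) ** -3 for i in range(h, h + m + 1))

p1 = p_level(1)   # all components contribute: ~1.052
p2 = p_level(2)   # higher harmonics only: ~p1 - 1
```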


The partial power *P*1= *p*1*B*exc= *P*exc of the first level (*h*=1) is the useful excitation power

$$P\_{\text{exc}} \approx W\_1 \sum\_{i=1}^{1+m} \left(2i - 1\right)^{-3} \tag{15}$$

The power *P*out falling outside *B*exc is lost. The lost power can be found as the summed power of the higher harmonics within the next levels of the spectrum (*h* = 2, 3, 4, etc., see Fig. 12):

$$P\_{\rm out} \approx W\_1 \sum\_{i=1}^{1+m} 2\left(i-1\right) / \left(2i-1\right)^3\tag{16}$$

where *W*1 = *w*1*B*exc in (15) and (16) is the power of the fundamental harmonic. As the minimal frequency of every level is *kf*st, equations (14)-(16) are approximations, which become exact only for *f*st = 0. In practice, when the excitation bandwidth *B*exc significantly exceeds one decade, e.g., *f*fin/*f*st > 30, these equations are accurate enough for engineering calculations.

The total chirp energy is *E*tot = *E*exc + *E*out = (*P*exc + *P*out)*T*ch, and the useful excitation energy in it is *E*exc = *P*exc*T*ch. From (15) and (16) follows the ratio of useful energy δE = *E*exc/*E*tot. Considering *f*st = 0, the energy-efficiency ratio is expressed as

$$\delta\_{\rm E} = \sum\_{i=1}^{h\_{\max}}\left(2i-1\right)^{-3} \Bigg/ \sum\_{i=1}^{h\_{\max}}\left(2i-1\right)^{-2} \tag{17}$$

Fig. 12. Amplitude spectra of 1000-cycles sine-wave chirps and signum-chirps with *f*st=0 and *f*fin=100 kHz (*A*=1)

Taking into account all the higher harmonics (*m* → ∞ and *h*max → ∞), the sums in (17) reach limit values, which can be found via the Riemann zeta function (Dwight, 1961):

$$\zeta(x) = \sum\_{n=1}^{\infty} n^{-x} \tag{18}$$


Riemann's mathematics gives us the following limit values for the sums in (17):

$$\sum\_{i=1}^{\infty} (2i - 1)^{-2} = \pi^2 / 8 \tag{19}$$

$$\sum\_{i=1}^{\infty} (2i - 1)^{-3} = 7\zeta(3) / 8 \tag{20}$$

where ζ(3) ≈ 1.202 is Apéry's constant (Dwight, 1961). Now, using equations (19) and (20), we can express the share of useful energy in the linear NRZ signum-chirp pulse as

$$\delta\_{\rm E} = 7\zeta(3) / \pi^2 \tag{21}$$

Fig. 13. (a) Amplitude spectrum of signum-chirp with *f*st=50 kHz and *f*fin=100 kHz (*A*=1; *L*=1000); (b) explanatory diagram of forming the amplitude spectrum at *f*st≠0

When *f*fin / *f*st >> 30 and *m* >> 1, the value of δE = 0.852 and the spectrum has a practically uniform energy distribution over *B*exc. The relative decrement of every *h*th step of the averaged amplitude can be calculated from (14) as δh = (*E*exc /*E*h)1/2 = *A*1/*A*h. The values for some lower-order levels are δ2 ≈ 4.51, δ3 ≈ 8.44, and δ4 ≈ 12.47 – see the approximations given in Fig. 12. For the next levels, the rough approximation δh ≈ 4(*h*−1) + 1 is adequate for engineering calculations (Min *et al*, 2009).
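These figures can be cross-checked numerically; a minimal sketch that evaluates the truncated sums of (17) against the closed form (21) (the truncation limit N is an arbitrary stand-in for the infinite series):

```python
import math

# Energy efficiency of the linear NRZ signum-chirp, Eq. (17):
# delta_E = sum (2i-1)^-3 / sum (2i-1)^-2, taken over the odd harmonics.
N = 100_000  # truncation of the infinite sums (19), (20)
s3 = sum((2 * i - 1) ** -3 for i in range(1, N + 1))
s2 = sum((2 * i - 1) ** -2 for i in range(1, N + 1))
delta_E = s3 / s2

# Closed form, Eq. (21), with Apery's constant zeta(3) ~= 1.2020569
closed = 7 * 1.2020569 / math.pi ** 2
print(round(delta_E, 3), round(closed, 3))  # 0.853 0.853
```

With all levels included, this reproduces the δE = 0.852… quoted above.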

Aspects of Using Chirp Excitation for Estimation of Bioimpedance Spectrum 251


The shape of the amplitude spectrum becomes more complicated if the initial frequency *f*st ≠ 0. Though the spectrum retains its stepped character, the levels do not attenuate monotonically. An example of this kind of spectrum is shown in Fig. 13a, where *f*st = 50 kHz and *f*fin = 100 kHz. In general, the height and location of the spectral levels vary, depending on the ratio between *f*st and *f*fin.

The forming of the spectrum as the sum of its harmonic components is explained by the draft diagram in Fig. 13b. Fulfillment of the condition 3*f*st > *f*fin produces a frequency area *B*zero = 3*f*st − *f*fin next to the excitation bandwidth, where the spectrum has almost zero amplitude. In this particular case, the spectral density within the excitation bandwidth is determined only by the fundamental harmonics from *f*st to *f*fin. It can be shown, considering Eqs. (13)-(16), that here the energy-efficiency is δE = 8/π².

### **3.1.3 Ternary chirps**

Caused by the absence of particular harmonics (see Sect. 2.2.1), the position and span of the amplitude levels of ternary chirps (RZ chirps) differ from those of the respective binary chirp. Fig. 14a compares the amplitude spectra of several rectangular chirps with *L* = 1000 at various durations of the zero-level state αd (shortening).

Let us pay attention to the changed stretch of the amplitude levels. For example, in the case of αd = 30º, the average amplitude of the 2nd level (*h* = 2) stretches uniformly from *f*fin up to 5*f*fin. As the 3rd harmonics are absent, this 2nd level is formed by the amplitudes of the 5th and higher harmonics up to the frequency 5*f*fin.

Fig. 14. (a) Amplitude spectra of ternary chirps with *f*fin=100 kHz (32-tap moving averaging has been applied for smoothing); (b) energy-efficiency of ternary chirps vs. zero-level state (shortening)

The average power of ternary chirps is less than that of the respective binary chirps, but the percentage of their useful energy is surprisingly high – mostly over 90% (see also Table 1). Actually, it is possible to generate ternary chirps with any value of the zero-state, but only a few values remove particular harmonics (see Eq. 6 in Sect. 2.2.1). Nevertheless, we can analyze the energy properties of an arbitrary ternary chirp. Employing equation (12), the maximal energy-efficiency δE ≈ 93.4% was observed at αd ≈ 22.5º, which does not cause the disappearance of any odd harmonic component. The dependence of the energy-efficiency on the shortening αd is plotted in Fig. 14b.
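The harmonic-removal and efficiency behaviour described above can be reproduced in a short simulation. The sketch below assumes a simple model of RZ shortening – the signum-chirp is forced to zero whenever the instantaneous phase lies within αd of a zero crossing – and estimates δE as the share of FFT energy inside *B*exc; the sample rate and duration are illustrative choices, not the chapter's exact setup:

```python
import numpy as np

# Ternary (RZ) linear chirp: sgn(sin(phi)) forced to zero near the zero
# crossings; alpha_d is the shortening (zero-level state) in degrees.
fs, f_fin, T_ch = 10e6, 100e3, 10e-3
t = np.arange(0, T_ch, 1 / fs)
phi = np.pi * (f_fin / T_ch) * t ** 2        # linear chirp phase, f_st = 0
alpha_d = np.deg2rad(30.0)                   # 30 deg suppresses the 3rd harmonics
s = np.sign(np.sin(phi))
s[np.abs(np.sin(phi)) < np.sin(alpha_d)] = 0.0

# Energy efficiency: share of spectral energy inside B_exc = [0, f_fin]
S = np.abs(np.fft.rfft(s)) ** 2
f = np.fft.rfftfreq(len(s), 1 / fs)
delta_E = S[f <= f_fin].sum() / S.sum()
print(round(delta_E, 2))                     # > 0.9, i.e., over 90% as stated
```

Setting αd to 18º in this sketch suppresses the 5th harmonics instead, and sweeping αd reproduces the trend of Fig. 14b.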

### **3.2 Windowing of chirps**


In the use of some short-time chirps, the problem of the flatness of the amplitude spectrum arises (see Fig. 9b). For the single-cycle sine-wave chirp, the maximum overshoot of the normalized voltage spectral density is about +7.4 dB inside the chirp bandwidth. To improve the flatness of the spectrum, some additional windowing of the chirp pulses should be used (in fact, every finite chirp pulse can be regarded as one inside a rectangular window; however, we consider this case as the unwindowed one). As a rule, windowing is accompanied by some loss of total signal energy and power, but δE can still be rather high. On the other hand, the spectral density frequently attains a steeper drop-off outside the chirp bandwidth due to windowing. Hence, the optimal choice of windows always presumes a certain trade-off.

In this work, several typical (Hanning, Hamming, Nuttall, etc.) and some specific window functions *F*win(*t*) were studied (Barsoukov & Macdonald, 2005). For example, a convenient shaping of the amplitude spectra was achieved by implementing the window function *F*win(*t*) = sin²(π*t*/*T*ch), which can be regarded as a particular case of the Tukey window with squared-sine lobes and tapering time *T*ch/2. Almost perfect shaping was attained using non-symmetrical windowing of the form *F*win(*t*) = (*t*/*T*ch)*a* with a selectable exponent *a* (usually *a* = 2…8) (Paavle *et al*, 2010). Some windowing results in both the time and frequency domains are shown in Fig. 15.

Fig. 15. Effect of windowing on the single-cycle sine-wave chirp, *f*st=0, *f*fin=100 kHz, *T*ch=20 μs: (a) windowed waveforms; (b) normalized spectra of the windowed waveforms

Usually, a deviation of about ±3 dB of the amplitude spectrum inside *B*exc is considered satisfactory for spectral flatness. There are several ways to achieve this requirement. Non-symmetric windowing makes a deviation of even less than ±1 dB accessible (see Fig. 15b), but the quantity of useful energy reduces noticeably due to the gentle slope of the spectrum at *f* > *f*fin: for the case shown in Fig. 15, an energy-efficiency of δE = 73.1% was observed in simulations. A substantially higher δE was achieved by using the Nuttall (δE = 92.3%) and Hanning (δE = 95.3%) windows, but the deviation of the amplitude spectra inside *B*exc is then from 0 to −6 dB and from +0.2 to −6 dB, respectively (Paavle *et al*, 2010). The impact of the squared-sine windowing was almost the same as that of the Hanning window.
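The trade-off can be illustrated with a rough simulation. The sketch below applies the two window shapes named above to a single-cycle chirp and compares the share of spectral energy falling inside *B*exc (a simplified stand-in for the chapter's δE bookkeeping; all numeric choices are illustrative):

```python
import numpy as np

# Single-cycle sine-wave chirp: f_st = 0, f_fin = 100 kHz, T_ch = 20 us
fs, f_fin, T_ch, a = 10e6, 100e3, 20e-6, 5.2
t = np.arange(0, T_ch, 1 / fs)
chirp = np.sin(np.pi * (f_fin / T_ch) * t ** 2)

windows = {
    "rectangular": np.ones_like(t),                 # unwindowed case
    "squared-sine": np.sin(np.pi * t / T_ch) ** 2,  # F_win = sin^2(pi*t/T_ch)
    "non-symmetric": (t / T_ch) ** a,               # F_win = (t/T_ch)^a
}

def inband_fraction(x, nfft=1 << 18):
    """Share of spectral energy inside B_exc = [0, f_fin]."""
    S = np.abs(np.fft.rfft(x, nfft)) ** 2
    f = np.fft.rfftfreq(nfft, 1 / fs)
    return S[f <= f_fin].sum() / S.sum()

frac = {name: inband_fraction(chirp * w) for name, w in windows.items()}
```

The smooth squared-sine taper keeps more of the energy inside *B*exc than the non-symmetric window, whose steep end at *t* = *T*ch leaks energy above *f*fin – qualitatively matching the 95.3% vs. 73.1% comparison quoted above.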

As a rule, windowing of the generated signal demands some additional power consumption and reduces the total energy-efficiency. However, in the case of very short signals this drawback can be overcome using a look-up table, which stores the externally calculated values of the windowed titlets and which is loaded into the FPGA (Min *et al*, 2011a).

### **4. Simulation examples**

The following examples refer to linear and quadratic sine-wave chirps and are the results of simulation using the modeling structure in Fig. 7. In all cases, a chirp excitation from *f* = 0 to 100 kHz with amplitude *A* = 1 V was implemented. The basic model of the EBI was a 5-element impedance *Ż*1, consisting of *R*0 = 1 kΩ, *R*1 = 200 Ω, *R*2 = 100 Ω, *C*1 = 30 nF, *C*2 = 20 nF, and having the following corner frequencies in the Bode diagram (see Fig. 2b): *f*p1 = 2.9, *f*z1 = 26.5, *f*p2 = 45.2, and *f*z2 = 79.6 kHz.
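Two of these corner frequencies correspond directly to the branch time constants *R*1*C*1 and *R*2*C*2; a quick check (the full 5-element topology of Fig. 2 is not reproduced in this excerpt, so only those two corners are verified):

```python
import math

# Corner frequencies f = 1 / (2*pi*R*C) of the two RC branches of Z1
R1, C1 = 200.0, 30e-9
R2, C2 = 100.0, 20e-9
f_z1 = 1 / (2 * math.pi * R1 * C1)
f_z2 = 1 / (2 * math.pi * R2 * C2)
print(round(f_z1 / 1e3, 1), round(f_z2 / 1e3, 1))  # 26.5 79.6 (kHz)
```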

Fig. 16. Examples of the time-frequency analysis of the EBI (sine-wave chirp excitation with *L*=1000): (a) waveforms of excitation and response; (b) normalized correlation functions; (c, d) cross-power spectral density and phase spectra at variations of the EBI components

Fig. 16 demonstrates simulation results for multi-cycle chirp excitation with a pulse duration of *T*ch = 20 ms (1000 cycles). Fig. 16a shows the response voltage *V*z(*t*) against the background of the non-windowed excitation signal. Fig. 16b shows the cross-correlation functions on a stretched time scale at *Ż* = *Ż*1 and at the unity-value impedance *Ż* = *Ż*0 = 1 (in fact, the latter case corresponds to the autocorrelation function of the excitation signal). Results of the Fourier transform of the CCF are shown in Figs. 16c and 16d, where one can observe the swing of the spectral curves when the capacitive components of the EBI change by +50%. In addition, Fig. 16c demonstrates how boxcar-type windowing (here a squared-sine Tukey window with a tapering time of 0.05*T*ch) reduces the influence of the Gibbs effect.

Intensive fluctuation of the spectra, caused by the concurrent higher harmonics, can be more disturbing for signum-chirp excitation (see Figs. 10b, 12, and 13a). This problem can be diminished by a proper selection of the correlation parameters; e.g., a shorter *t*obs with rougher frequency resolution smooths the spectrograms essentially (Paavle *et al*, 2008).
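The processing chain behind Fig. 16 – chirp excitation, cross-correlation, then a Fourier transform of the correlation function – can be sketched in a few lines. A one-pole low-pass filter stands in for the measured object, and the cross-power spectrum is normalized by the auto-spectrum so that the result estimates the object's frequency response (the corner frequency *f*c and all signal parameters are illustrative assumptions):

```python
import numpy as np

# Linear sine-wave chirp excitation from 0 to f_fin over T_ch
fs, f_fin, T_ch = 2e6, 100e3, 20e-3
t = np.arange(0, T_ch, 1 / fs)
x = np.sin(np.pi * (f_fin / T_ch) * t ** 2)

# Stand-in "object": one-pole low-pass filter with corner frequency fc
fc = 10e3
alpha = np.exp(-2 * np.pi * fc / fs)
y = np.zeros_like(x)
for n in range(1, len(x)):
    y[n] = alpha * y[n - 1] + (1 - alpha) * x[n]

# FFT of the cross-correlation is the cross-power spectrum Y*conj(X);
# dividing by the auto-spectrum |X|^2 yields a frequency-response estimate
X, Y = np.fft.rfft(x), np.fft.rfft(y)
f = np.fft.rfftfreq(len(x), 1 / fs)
H = Y * np.conj(X) / (np.abs(X) ** 2 + 1e-12)

k = np.argmin(np.abs(f - fc))
print(round(abs(H[k]), 2))   # close to 1/sqrt(2): the -3 dB corner is recovered
```

In the chapter's scheme the same division happens implicitly through the reference channel; here it simply makes the recovered magnitude and phase directly comparable with the stand-in filter.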


Fig. 17. The use of double quarter-cycle quadratic chirp excitation (*L*=¼, *n*=2, *T*ch=2×7.5 μs, *f*st =0, *f*fin=100 kHz): (a) signal waveform; (b) variation of instantaneous frequency; (c) amplitude spectrum of the chirp pulse; (d) phase spectra at matched filtering (*Ż*ref = *Ż*1) and at a +5% change of the EBI components

For interpreting the variations of the frequency responses, the phase spectra should be preferred. First, they are less affected by the spectral fluctuations. Secondly, the phase spectra make it possible to distinguish changes of the object parameters somewhat more clearly than the amplitude spectra – compare the respective curves in Figs. 16c and 16d. Nevertheless, a substantially better resolution of the EBI for diagnostic purposes can be achieved by using matched filtering, where instead of *Ż*ref = 1 in the reference channel, a predetermined *Ż*ref ≈ *Ż*x is used, approximately equal to the impedance *Ż*x under study (see Fig. 6). Combining matched filtering with the analysis of the phase spectra allows a full display scale of only a few degrees to be applied and yields very good sensitivity for detecting small changes of the object parameters.

Simulation results for implementing the bidirectional quarter-cycle quadratic chirp excitation for the detection of tiny deviations of the EBI vector are presented in Fig. 17. The waveform of the excitation pulse and the corresponding frequency run are shown in Figs. 17a and 17b, respectively. Fig. 17c depicts the normalized amplitude spectrum of the excitation signal, and Fig. 17d shows the deviation of the phase spectra when the value of a single component increases by +5% over its initial value.
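The order of magnitude of such phase deviations is easy to illustrate. The sketch below replaces the 5-element *Ż*1 with a plain parallel RC (hypothetical values) and evaluates the phase change produced by a +5% increase of the capacitance:

```python
import numpy as np

# Phase deviation of a parallel RC impedance when C grows by +5 %
R, C = 1e3, 30e-9                        # hypothetical element values
f = np.linspace(1e3, 100e3, 2000)
w = 2 * np.pi * f
Z_ref = R / (1 + 1j * w * R * C)         # reference impedance
Z_mod = R / (1 + 1j * w * R * 1.05 * C)  # same circuit, C increased by 5 %
dphi = np.degrees(np.angle(Z_mod) - np.angle(Z_ref))
print(round(np.abs(dphi).max(), 2))      # peak deviation of only ~1.4 degrees
```

Such a swing of barely a degree or two is invisible on a full amplitude plot but easily resolved when the phase display spans only a few degrees, which is the point of the matched-filtering arrangement.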

Aspects of Using Chirp Excitation for Estimation of Bioimpedance Spectrum 255

The described measurement method, based on the sequentially performed cross-correlation and Fourier Transform, enables the joint time-frequency spectral analysis of bioimpedance. As the time domain cross-correlation function includes the full information about the complex vector Ż(ω), only a single FFT-block is necessary for the Fourier analysis and the complete frequency domain determination of the object. Moreover, the cross-correlation

Applying the modification of system architecture with the cross-correlation based matched filtering enables to avoid the impact of noise in much higher degree then a simple crosscorrelation. In addition, such the matched filtering permits to increase the sensitivity of measurement, especially through the analysis of phase spectra. Identification and interpretation of relative deviations of the object parameters through the detected tiny phase shifts is of great importance, which in turn can give valuable information about the state and

This work was supported by the European Union through the European Regional Development Fund, Estonian target-financed project SF0142737s06 and by Enterprise

Barsoukov, E.; Macdonald, J. R. (Eds.) (2005). *Impedance Spectroscopy. Theory, Experiment, and Applications* (2nd ed.). John Wiley & Sons Ltd, Hoboken, New Jersey. Darowicki, K.; Slepski, P. (2004). Determination of electrode impedance by means of

Dwight, H. B. (1961). *Tables of Integrals and Other Mathematical Data*, The McMillan Company,

Gawad, S.; Sun, T.; Green, N. G.; Morgan, H. (2007). Impedance Spectroscopy Using

Grimnes, S.; Martinsen, Ø. G. (2008), Bioimpedance and Bioelectricity Basics (2nd ed.),

McGhee, J.; Kulesza, W.; Henderson, I. A.; Korczynski, M. J. (2001). *Measurement Data Handling. Theoretical Technique.* Vol. 1, The Technical University of Lodz, Poland. Min, M.; Parve, T. (2007). Improvement of Lock-in Electrical Bio-Impedance Analyzer for

Min, M.; Paavle, T.; Annus, P.; Land, R. (2009). Rectangular Wave Excitation in Wideband

*and Applications (MeMeA2009)*, Cetraro, Italy, May 29-30, 2009, pp.268-271. Min, M.; Land, R.; Paavle, T.; Annus, P., Parve, T.; Trebbels, D. (2011). Broadband

*Scientific Instruments*, Vol. 78, No.5 (May, 2007).

*Measurement*, Vol. 32, No. 7 (July 2011), pp.945-958.

exponential chirp signal. *Electrochemistry Communications*, No. 6, (June, 2004),

Maximum Length Sequences: Application to Single Cell Analysis. *Review of* 

Implantable Medical Devices. *IEEE Trans. Instrumentation and Measurement*, Vol. 56,

Bioimpedance Spectroscopy, *Proc. IEEE 4th Int. Workshop on Medical Measurements* 

Spectroscopy of Dynamic Impedances with Short Chirp Pulses, *Physiological* 

procedure assures substantial suppression of noise introduced by the object.

processes in the biological objects.

Estonia through the ELIKO Competence Center.

Elsevier-Academic Press, 2008.

No. 3 (June 2007), pp.968-974.

**6. Acknowledgment** 

pp.898-902.

New York, 1961.

**7. References** 

Another advantage of the matched filtering is reducing the impact of additive noise, especially the impact of source noise *n*s(*t*). Fig. 18a depicts the noisy input signal, where the generated chirp is affected by the Gaussian noise with the zero mean and variance σN2 – see Fig. 6. The source noise causes some error in the phase spectra at the higher frequencies (Fig. 18b). This error depends on the deviation of component parameters of the equivalent circuit (here *C*1 was changed), and on the signal-to-noise ratio (SNR=*P*avg/σN2), but as a rule, the error remains small even at considerably high level of the source noise.

Fig. 18. Linear single-cycle chirp (*T*ch=20 μs) at the presence of additive source noise: (a) noisy input signal; (b) spectra of phase differences at 1.05*C*1 and at different input signalto-noise ratio (SNR) levels

As the noise from object (*n*z(*t*) in Fig. 6) affects the cross-correlation procedure nonsymmetrically, then suppression of noise effect is of importance and serves special attention even at the higher SNR values (Min *et al*, 2011b).

### **5. Conclusion**

The advantages of using chirps as excitation signals in bioimpedance measurement are their wide and almost flat amplitude spectrum together with the independent scalability both in the time and frequency domain – we can choose the frequency range and duration of the excitation pulse almost independently from each other. These features enable to accommodate the generation of excitation signals with the expected properties of the object to be estimated comparatively simply.

It was shown that the shortening of chirp pulses retains the general benefits of chirps – their flat amplitude spectrum within the predetermined frequency range, which permits their implementation in energy-efficient measuring instruments. More than 90% of the generated excitation energy falls into the desired bandwidth even in the case of very short excitation pulses required for providing ultra quick measurement and analysis of dynamic objects. These requirements are obligatory for investigation of objects with rapidly changing parameters (e.g., for identification of fast moving bioparticles such as cells and droplets in high-throughput micro-fluidic systems), and in the devices, in which the low power consumption is important (e.g., wearable units and medical implants). However, shortening of chirp excitation and measurement time should not be excessive. It must be as short as possible to avoid significant impedance changes during the Fourier analysis, but as long as possible to enlarge the excitation energy and to obtain a better signal-to-noise ratio.

The described measurement method, based on the sequentially performed cross-correlation and Fourier Transform, enables the joint time-frequency spectral analysis of bioimpedance. As the time domain cross-correlation function includes the full information about the complex vector Ż(ω), only a single FFT-block is necessary for the Fourier analysis and the complete frequency domain determination of the object. Moreover, the cross-correlation procedure assures substantial suppression of noise introduced by the object.

Applying the modification of system architecture with the cross-correlation based matched filtering enables to avoid the impact of noise in much higher degree then a simple crosscorrelation. In addition, such the matched filtering permits to increase the sensitivity of measurement, especially through the analysis of phase spectra. Identification and interpretation of relative deviations of the object parameters through the detected tiny phase shifts is of great importance, which in turn can give valuable information about the state and processes in the biological objects.

### **6. Acknowledgment**

This work was supported by the European Union through the European Regional Development Fund, Estonian target-financed project SF0142737s06 and by Enterprise Estonia through the ELIKO Competence Center.

### **7. References**

254 Fourier Transform – Signal Processing

Another advantage of the matched filtering is reducing the impact of additive noise, especially the impact of source noise *n*s(*t*). Fig. 18a depicts the noisy input signal, where the generated chirp is affected by the Gaussian noise with the zero mean and variance σN2 – see Fig. 6. The source noise causes some error in the phase spectra at the higher frequencies (Fig. 18b). This error depends on the deviation of component parameters of the equivalent circuit (here *C*1 was changed), and on the signal-to-noise ratio (SNR=*P*avg/σN2), but as a rule, the

error remains small even at considerably high level of the source noise.

(a) (b)

to-noise ratio (SNR) levels

**5. Conclusion** 

Amplitude, V

even at the higher SNR values (Min *et al*, 2011b).

SNR≈0 dB pure signal

Time, μs

to be estimated comparatively simply.

Fig. 18. Linear single-cycle chirp (*T*ch=20 μs) at the presence of additive source noise: (a) noisy input signal; (b) spectra of phase differences at 1.05*C*1 and at different input signal-

As the noise from object (*n*z(*t*) in Fig. 6) affects the cross-correlation procedure nonsymmetrically, then suppression of noise effect is of importance and serves special attention

Phase deviation, deg

Frequency, kHz

SNR≈0 dB

SNR≈6 dB

pure signal

The advantages of using chirps as excitation signals in bioimpedance measurement are their wide and almost flat amplitude spectrum together with the independent scalability both in the time and frequency domain – we can choose the frequency range and duration of the excitation pulse almost independently from each other. These features enable to accommodate the generation of excitation signals with the expected properties of the object

It was shown that the shortening of chirp pulses retains the general benefits of chirps – their flat amplitude spectrum within the predetermined frequency range – which permits their implementation in energy-efficient measuring instruments. More than 90% of the generated excitation energy falls into the desired bandwidth even in the case of the very short excitation pulses required for ultra-quick measurement and analysis of dynamic objects. Such requirements are obligatory for the investigation of objects with rapidly changing parameters (e.g., for the identification of fast-moving bioparticles such as cells and droplets in high-throughput micro-fluidic systems) and in devices in which low power consumption is important (e.g., wearable units and medical implants). However, the shortening of the chirp excitation and measurement time should not be excessive: it must be short enough to avoid significant impedance changes during the Fourier analysis, but as long as possible to enlarge the excitation energy and to obtain a better signal-to-noise ratio.
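As a rough numerical check of the in-band energy claim, the sketch below generates a linear chirp and measures the fraction of its spectral energy falling inside the swept band. All parameter values (a 10–100 kHz sweep over 1 ms, sampled at 1 MHz) are illustrative assumptions, not taken from the text:

```python
import numpy as np

# Assumed (illustrative) chirp parameters: a 10-100 kHz linear sweep over
# Tch = 1 ms, sampled at 1 MHz.
fs, N = 1_000_000, 1000
Tch = N / fs
f0, f1 = 10e3, 100e3
t = np.arange(N) / fs

# Linear chirp: instantaneous frequency rises linearly from f0 to f1.
phase = 2 * np.pi * (f0 * t + (f1 - f0) / (2 * Tch) * t**2)
chirp = np.sin(phase)

# Fraction of the excitation energy that falls inside the swept band.
energy = np.abs(np.fft.rfft(chirp)) ** 2
freqs = np.fft.rfftfreq(N, 1 / fs)
in_band = energy[(freqs >= f0) & (freqs <= f1)].sum() / energy.sum()
```

Even with the relatively small time-bandwidth product used here, the bulk of the energy lands between `f0` and `f1`, in line with the >90% figure quoted above for longer sweeps.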


Min, M.; Paavle, T.; Ojarand, J. (2011). Time-Frequency Analysis of Biological Matter Using Short-Time Chirp Excitation. *Proc. of European Conference on Circuit Theory and Design (ECCTD2011)*, Linköping, Sweden, Aug. 29-31, 2011, pp. 585-588.

Misaridis, T. X.; Jensen, J. A. (2005). Use of Modulated Excitation Signals in Medical Ultrasound. Part II: Design and Performance for Medical Imaging Applications, *IEEE Trans. on Ultrasonics, Ferroelectrics, and Frequency Control*, Vol. 52, No. 2 (Feb. 2005), pp. 192-207.

Müller, S.; Massarani, P. (2001). Transfer-Function Measurement with Sweeps, *J. Audio Eng. Soc.*, Vol. 49, No. 6 (June 2001), pp. 443-471.

Nahvi, M.; Hoyle, B. S. (2009). Electrical Impedance Spectroscopy Sensing for Industrial Processes, *IEEE Sensors Journal*, Vol. 9, No. 12 (Dec. 2009), pp. 1808-1816.

Nebuya, S.; Brown, B. H.; Smallwood, R. H.; Milnes, P.; Waterworth, A. R.; Noshiro, M. (1999). Measurement of High Frequency Electrical Transfer Impedances from Biological Tissues. *Electronics Letters Online*, Vol. 35, No. 23 (1999), pp. 1985-1987.

Paavle, T.; Annus, P.; Kuusik, A.; Land, R.; Min, M. (2007). Bioimpedance Monitoring with Improved Accuracy Using Three-Level Stimulus. *Proc. of European Conference on Circuit Theory and Design (ECCTD2007)*, Seville, Spain, Aug. 26-30, 2007, pp. 412-415.

Paavle, T.; Min, M.; Parve, T. (2008). Using of Chirp Excitation for Bioimpedance Estimation: Theoretical Aspects and Modeling. *Proc. of 11th Baltic Electronics Conf. (BEC2008)*, Tallinn, Estonia, Oct. 6-8, 2008, pp. 325-328.

Paavle, T.; Min, M.; Ojarand, J.; Parve, T. (2010). Short-Time Chirp Excitations for Use in Wideband Characterization of Objects: an Overview. *Proc. of 12th Baltic Electronics Conf. (BEC2010)*, Tallinn, Estonia, Oct. 4-6, 2010, pp. 253-256.

Parve, T.; Land, R. (2004). Improvement of Lock-in Signal Processing for Applications in Measurement of Electrical Bioimpedance. *Proc. of Estonian Acad. Sci. Eng.*, Vol. 10, No. 3 (Sept. 2004), pp. 185-197.

Pliquett, U.; Gershing, E.; Pliquett, F. (2000). Evaluation of Fast Time-Domain Based Impedance Measurements on Biological Tissue. *Biomed. Technik*, Vol. 45 (Jan.-Feb. 2000), pp. 6-13.

Rufer, L.; Mir, S.; Simeu, E.; Domingues, C. (2005). On Chip Pseudorandom MEMS Testing. *Journal of Electronic Testing: Theory and Applications*, Vol. 21, No. 3, pp. 233-241.

Sanchez, B.; Vandersteen, G.; Bragos, R.; Schoukens, J. (2011). Optimal Multisine Excitation Design for Broadband Electrical Impedance Spectroscopy. *Measurement Science and Technology*, IOP Publishing, 2011, Vol. 22, No. 11, 11 p.

Vaseghi, S. V. (2006). *Advanced Digital Signal Processing and Noise Reduction*, (3rd ed.), John Wiley & Sons Ltd, Chichester, England, 2006.

**11** 

## **Simple Signals for System Identification**

Paul Annus, Raul Land, Mart Min and Jaan Ojarand

*Tallinn University of Technology / Eliko* 
*Estonia* 

### **1. Introduction**

Any event accessible to observation can be investigated by examining the timeline of certain measurable values. This process is usually referred to as time-domain analysis of signals. Such a signal can be of many different origins: it could comprise values of electrical current or voltage, mechanical displacement or force, the value of a stock or the popularity of politicians, the reaction of a patient to medication – essentially anything which is observable, measurable and quantifiable with reasonable accuracy. An observer will quite intuitively search for certain patterns and periods in the signal if the phenomenon changes along the timeline. Some of them are easily detectable, such as the periodic variation of the sun's activity; some might appear as random fluctuations at first glance, such as parameters of seismic waves in the Earth's crust. Of course, sources of noise and other disturbances are usually omnipresent and make matters much more difficult to observe and explain.

It has been well known since the early days of modern science that signals can be analyzed in the frequency domain as well as, or instead of, the time domain. It turns out that for observable events the two domains can easily be interchanged – in other words, transformed one into the other without any loss of information – and even joint time-frequency analysis can be performed.

The first known usage of what is essentially known today as the fast Fourier transform (FFT) is attributed to Johann Carl Friedrich Gauss (Goldstine, 1977, as cited in Heideman et al., 1984). In 1805 Gauss described his computationally efficient method for interpolation of the orbits of celestial bodies. Intriguingly, it was almost forgotten. A few years later, in 1807, Jean Baptiste Joseph Fourier, while interested in heat propagation, claimed during his presentation to the Academy of Sciences in Paris that any continuous periodic signal could be represented as the sum of properly chosen sinusoids. That claim ignited a dispute between the reviewers of his paper, Pierre-Simon Laplace and Joseph-Louis Lagrange, and delayed the publication until 1822 (Heideman et al., 1984). Joseph-Louis Lagrange's protests were based on the fact that such an approach could not be used to represent signals with sharp corners – in other words, with discontinuous slopes – such as square waves. The dispute lasted almost a hundred years, until Maxime Bôcher gave a detailed mathematical analysis of the phenomenon and named it after Josiah Willard Gibbs (Bôcher, 1906). In essence both Lagrange and Fourier were right. While it is not possible to construct signals with sharp corners from sinusoids, it is possible to get so close that the difference in energy between these signals is zero. If real signals from nature are concerned instead of exact and purely mathematical curiosities, the problem is even smaller. So for all practical signal-processing tasks it is indeed possible to state that any real signal can be constructed from sinusoids.

While these transforms, or more precisely their modern counterparts, work very well for signals emanating from natural phenomena, matter can also be investigated by deliberately exciting it with known signals and analyzing its response. This process may be called system identification, synchronous measurement, lock-in measurement, or in some application areas something entirely different. The task may even be reversed, in the sense that a system can be designed or modified by knowing what output is desirable for a certain excitation. Examples include filter design in electronics or the eigensystem realization algorithm (ERA) in civil engineering, to name a few.

The topic in general is too broad to discuss in one book or even a series of books, and many good papers and books have already been written on it (Godfrey, 1993; Pintelon & Schoukens, 2001; etc., to name a few). However, when low complexity, limited energy consumption and highly optimized measurement systems are targeted, new solutions are often warranted, and some of them are briefly discussed in the following pages. The emphasis is therefore on the properties and practical design of custom excitation signals.

### **2. General considerations**

Any system has a large set of different parameters, and many subsets of them to be characterized according to the requirements imposed by the task at hand. Sometimes they can be measured separately and sequentially, one at a time, but quite often not. The bulk of system identification theory is based on the assumption that systems are linear and time invariant (LTI), which they are not. Generally, a certain set of measurements has to be conducted within a timeframe short enough for the system to remain reasonably motionless, and with signals which only very moderately drive the system under test (SUT) into the non-linear region of operation. The limited magnitudes of the excitation signals and the need to consider the frequency spectrum of these signals very carefully are both among the implications arising from the last requirement. Unfortunately there is also a third factor to consider. Disturbances from the surrounding world, and the impact of noise, can effectively render useless any and all of the measurement results if not dealt with carefully. The signal-to-noise ratio (SNR) is often used for numeric quantification of the problem. The similarly defined signal to noise and distortion ratio (SINAD) covers the problem area a little better, as it also considers the likely non-linear behavior of the SUT. Shorter excitation time and limited excitation signal energy are always paired with lower SNR.

It is therefore clear that whenever real systems are characterized, the choice of excitation signals is always a subject of optimization and compromise. The effectiveness of said optimization depends on the level of prior knowledge. Things to consider include the measurement conditions, the SUT itself and the cost of the measurement. Successive approximation and adaptation can be considered when prior knowledge is limited, unless the measurement concerns a one-of-a-kind or very rare event. Fortunately these very rare cases are indeed rare, and furthermore there is usually at least some amount of general prior knowledge available.

Real objects and systems are seldom fully homogeneous and isotropic; therefore several measurements from different locations might be warranted for sufficient characterization of the SUT. The possibility of sequential-in-time measurements from different locations depends largely on the time variance of the parameters of the system. Such a variance could be caused by slow ageing, rapid decay, or fast spatial movement of the SUT relative to the measurement system, as well as by modulation of some of the parameters of the SUT by outside signals. Ideally, variance should be excluded by taking readings from different locations simultaneously, or rapidly enough. One way to achieve this is to conduct measurements at several slightly differing frequencies. In this case the system properties between different points can be separated, and the values will not vary much due to the almost identical frequencies:

Fig. 1. System identification in the case of two simultaneous excitation signals with slightly differing frequencies f1 and f2, injected at different points
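The separation sketched in Fig. 1 can be illustrated numerically. All values below (1000 Hz and 1010 Hz tones, a 0.1 s window) are illustrative assumptions, chosen so that the window holds an integer number of periods of both tones:

```python
import numpy as np

# Illustrative values (not from the text): responses "seen" at two injection
# points carried by two nearly identical frequencies, 1000 Hz and 1010 Hz.
fs, N = 100_000, 10_000            # 0.1 s window = 100 and 101 full periods
t = np.arange(N) / fs
a1, a2 = 1.0, 0.5
y = a1 * np.sin(2 * np.pi * 1000 * t) + a2 * np.sin(2 * np.pi * 1010 * t)

# With an integer number of periods of both tones in the window, each tone
# occupies its own DFT bin, so the two responses separate exactly.
Y = np.fft.rfft(y)
est1 = 2 * np.abs(Y[100]) / N      # bin 100 <-> 1000 Hz, recovers a1
est2 = 2 * np.abs(Y[101]) / N      # bin 101 <-> 1010 Hz, recovers a2
```

The key design choice is the window length: 0.1 s is a common multiple of both tone periods, so no spectral leakage mixes the two measurement channels.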

A similar but different task arises when system properties at vastly different frequencies are of interest. Again, sequential-in-time measurements can be considered, but in the case of fast variations in system properties a simultaneous multifrequency measurement is essential:

Fig. 2. System identification in the case of two excitation waveforms with highly differing frequencies f1 and f2, injected at the same point

In even more complex cases, multisite and multifrequency measurements are needed simultaneously. The question arises whether it is possible to optimize such a complex measurement by applying different signals and processing methods. The following material discusses some suitable signals of increasing complexity. For comparison, both the waveform and the frequency content of these signals are given.

In real-world measurement situations, the frequency response of a system is the comparison of the magnitudes and phases of the output signal spectral components with those of the input signal components. It is best viewed as a complex function of frequency. Instead of magnitudes and phases it is often useful to represent the result in Cartesian coordinates. The link is straightforward: a response point with magnitude *A* and phase *φ* corresponds to

$$Ae^{j\varphi} = A\cos(\varphi) + jA\sin(\varphi)$$
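In code the conversion is a pair of one-liners in each direction; the magnitude and phase values below are illustrative:

```python
import numpy as np

# A response point in polar form (illustrative values): magnitude A = 2,
# phase phi = 60 degrees, and its Cartesian (real/imaginary) counterpart.
A, phi = 2.0, np.pi / 3
H = A * np.exp(1j * phi)        # polar -> complex value
re, im = H.real, H.imag         # Cartesian coordinates: A*cos(phi), A*sin(phi)
A_back = np.hypot(re, im)       # magnitude recovered from Re/Im
phi_back = np.arctan2(im, re)   # phase recovered from Re/Im
```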

Fig. 3. Complex number with unity magnitude in polar and Cartesian coordinates

If a sinusoidal signal with frequency *ω*, magnitude *A* and relative phase *φ* compared to the reference signal is multiplied by an orthogonal set of reference sinusoids, i.e. sin(*ωt*) and cos(*ωt*), then using the simple trigonometric product-to-sum identities it is possible to write:

$$A\sin(\omega t + \varphi) \cdot \sin(\omega t) = \frac{A}{2}\cos(\varphi) - \frac{A}{2}\cos(2\omega t + \varphi) \tag{1}$$

And:

$$A\sin(\omega t + \varphi) \cdot \cos(\omega t) = \frac{A}{2}\sin(\varphi) + \frac{A}{2}\sin(2\omega t + \varphi) \tag{2}$$

From these equations it is clear that if the double-frequency component and the factor of two are disregarded, then the first equation can be viewed as giving the real part of the complex response, and the second equation the imaginary part. When the response at only a few frequencies is needed, such a multiplication is often the preferred signal-processing method in both the analog and digital domains. Of course, it should be complemented with low-pass filtering in order to remove the *2ω* component. If other than sinusoidal signals are used for excitation and reference, then more frequency components should be considered when calculating the response. As long as the reference signal is kept sinusoidal, the result of the multiplication is still a faithful representation of the complex response of the system at the frequency of the reference signal. Some scaling is needed, though, because the magnitude of the fundamental frequency component of a non-sinusoidal signal differs from the magnitude of the signal itself. Unfortunately, quite often the reference waveform is far from sinusoidal. A square wave is preferred simply because the multiplication of two functions can be replaced by simple signed summing. In the analog signal-processing domain this makes it possible to use simple switches instead of sophisticated, error-prone, and power-hungry multipliers. The same is true in the digital domain, where the multiply-accumulate operation can be replaced with simpler accumulation. In this case, too, the system response is correct if the excitation signal is kept sinusoidal and, in addition, the system is truly linear. The last part requires very careful analysis, since higher harmonics that appear might coincide with reference-signal harmonics and after multiplication become indistinguishable from the true response. Still, the SINAD of the response will degrade even if the system can be considered sufficiently linear: if nothing else, noise will leak into the result at frequencies which coincide with the higher harmonics of the reference signal. Therefore methods which enable the reduction of higher harmonics in the reference signal, while keeping it reasonably simple for signal processing, are of utmost importance.
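The multiplication scheme of eqs. (1) and (2) can be sketched in a few lines. The signal parameters below are assumed for illustration, and averaging over an integer number of periods stands in for the low-pass filter:

```python
import numpy as np

# Assumed test signal: amplitude A = 0.8, phase 30 degrees at f = 1 kHz,
# observed for exactly 1000 periods (one second at the chosen rate).
fs, f, N = 100_000, 1000, 100_000
t = np.arange(N) / fs
A, phi = 0.8, np.deg2rad(30)
y = A * np.sin(2 * np.pi * f * t + phi)

# Multiply by the orthogonal references, eqs. (1) and (2); averaging over
# whole periods plays the role of the low-pass filter removing 2*omega.
re = 2 * np.mean(y * np.sin(2 * np.pi * f * t))   # A*cos(phi)
im = 2 * np.mean(y * np.cos(2 * np.pi * f * t))   # A*sin(phi)
A_est, phi_est = np.hypot(re, im), np.arctan2(im, re)
```

Because the averaging window is an exact multiple of the period, the *2ω* terms cancel completely and the amplitude and phase are recovered without residual ripple.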


### **3. Square waves for measurement at single frequency**

First, the simplest and spectrally worst signal is examined. It is the square wave signal, which can be described as the sign of sin(*ωt*):

Fig. 4. An odd square wave, with frequency *f* = *ω*/2π = 1 Hz

The Fourier series of this square wave function contains only sine members, since the function is odd, i.e. *f*(−*x*) = −*f*(*x*):

$$f(t) = \frac{4}{\pi} \sum\_{n=1}^{\infty} \frac{\sin((2n-1)\omega t)}{2n-1} = \frac{4}{\pi}\left(\sin(\omega t) + \frac{1}{3}\sin(3\omega t) + \frac{1}{5}\sin(5\omega t) + \ldots\right) \tag{3}$$

It can be viewed as a signal with fundamental frequency *ω* having higher harmonics at odd multiples of *ω*. If multiplication of the system response signal with orthogonal references is chosen as the signal-processing means, and these reference signals are not sinusoids, then all the members of the Fourier series of the response signal are multiplied with all the members of the Fourier series of the reference signals, i.e. not only the fundamental components of the signals, but also all higher harmonics. If those members of the Fourier series coincide in frequency, the results of the multiplication are added together and become indistinguishable. The measurement is then no longer conducted at a single well-defined frequency, but instead produces results at all higher harmonics as well. As discussed above, this could be largely ignored if the signal-processing multiplication were conducted with sinusoidal signals; unfortunately, it is often accomplished with the same rectangular signal instead, and the energy from the higher harmonics is summed together. Also, the spectral impact due to non-linearity of the object (or apparatus) can no longer be separated from the desired response signal. The worst-case impact of the coinciding spectral components, eventually all summed together, can be seen in Fig. 5 (dotted line).
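The pile-up of coinciding harmonics can be checked numerically. The sketch below (sampling values are assumptions) demodulates a square wave once with a sine reference and once with the same square wave as reference; with the square reference every odd harmonic pair contributes, and the averaged product exceeds the fundamental-only prediction of eq. (1) by π²/8 − 1, roughly 23%:

```python
import numpy as np

# Assumed sampling grid; the half-sample offset keeps sign() away from zero.
fs, f, N = 1_000_000, 1000, 1_000_000
t = (np.arange(N) + 0.5) / fs
sq = np.sign(np.sin(2 * np.pi * f * t))

dc_square_ref = np.mean(sq * sq)                 # all odd harmonic pairs add up
dc_sine_ref = 2 * np.mean(sq * np.sin(2 * np.pi * f * t))  # fundamental: 4/pi
fundamental_only = (4 / np.pi) ** 2 / 2          # what eq. (1) alone would give
excess = dc_square_ref / fundamental_only - 1    # pi^2/8 - 1, about 23 %
```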

There is another way of looking at how the errors appear (Kuhlberg, Land, Min, & Parve, 2003): by considering the phase-sensitivity characteristics of the synchronous demodulator (SD). They are easy to draw by varying the phase shift *φ* between the two signals. For simple square waves such a characteristic is presented in Fig. 6. For comparison, it is drawn together with the ideal circle which appears when two sinusoids are multiplied.
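The phase-sensitivity characteristic itself is easy to reproduce numerically. In the sketch below (sampling values assumed), the in-phase and quadrature products of two square waves are piecewise linear in the phase shift, so the I/Q locus is a square rather than the ideal circle; at *φ* = 45° its radius has dropped from 1 to √0.5 ≈ 0.71:

```python
import numpy as np

# One period of the square-wave references (assumed grid; the half-sample
# offset avoids hitting the zero crossings exactly).
fs, f = 1_000_000, 1000
t = (np.arange(fs // f) + 0.5) / fs
ref_i = np.sign(np.sin(2 * np.pi * f * t))
ref_q = np.sign(np.cos(2 * np.pi * f * t))

phi = np.deg2rad(45.0)                         # one probe point of the sweep
sig = np.sign(np.sin(2 * np.pi * f * t + phi))
I = np.mean(sig * ref_i)                       # triangular in phi: 1 - 2*phi/pi
Q = np.mean(sig * ref_q)
radius = np.hypot(I, Q)                        # two sinusoids would give 1 here
```

Sweeping `phi` from 0 to 90 degrees traces one quarter of the characteristic of Fig. 6 and reproduces the magnitude-error curve of Fig. 7 as `1 - radius`.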

Simple Signals for System Identification 263

Fig. 5. Worst case relative impact of the higher harmonics to the multiplication result compared to the level of first harmonic in dB. Case of ordinary square wave (dotted line), simple shortened square wave (white boxes), and multilevel shortened square wave (black boxes) (Annus, Min, & Ojarand, 2008)

Fig. 6. Quality of synchronous demodulation in case of square wave signals (bold line). For clarity only one quarter is shown

From Fig. 6 the magnitude error can be easily computed as the difference between the two lines, and Fig. 7 shows this relative magnitude error when square waves are used instead of sinusoids.

Relative magnitude errors of the size shown in Fig. 7 are generally unacceptable, not to mention the large (several degrees) phase errors that come in addition to the magnitude error. Only in very specific cases is measurement with pure square waves useful, such as measurement of electrical bioimpedance in implanted pacemakers, where energy constraints are severe.
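The size of this overshoot can be illustrated numerically. The sketch below is our own construction (not from the chapter): it performs I/Q synchronous demodulation by averaging products over one period. At zero phase shift the square-times-square product averages to 1, while the product of the fundamentals alone gives 8/π², an error of π²/8 − 1 ≈ 23 %.

```python
import numpy as np

# One signal period, sampled finely; square() is the sign of a sine.
t = np.linspace(0, 2 * np.pi, 200_000, endpoint=False)

def square(x):
    return np.sign(np.sin(x))

def sine(x):
    # Sinusoid with the amplitude of the square wave's fundamental.
    return (4 / np.pi) * np.sin(x)

def demod_magnitude(excitation, reference, phi):
    """I/Q synchronous demodulation: average the product of the
    excitation with the reference and its 90-degree shifted copy."""
    i = np.mean(excitation(t) * reference(t - phi))
    q = np.mean(excitation(t) * reference(t - phi - np.pi / 2))
    return np.hypot(i, q)

mag_square = demod_magnitude(square, square, 0.0)  # averages s^2 = 1
mag_sine = demod_magnitude(sine, sine, 0.0)        # gives 8/pi^2
print(mag_square / mag_sine)  # ~ pi^2/8 = 1.2337, a ~23 % overshoot
```

The sine-on-sine characteristic stays on a circle (constant magnitude for any phase shift), which is why it serves as the reference in Fig. 6.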

Fig. 7. Relative magnitude error when square wave signals are used during synchronous demodulation

Fortunately there is a very simple method for reducing the errors introduced by higher harmonics. Let us consider the sum of two square waves with the same frequency and amplitude, one of them shifted in phase by +*β* degrees and the other by -*β* degrees. Such a double shift is preferable, since the resulting function is again odd. In signal processing odd functions are more natural, because negative time is usually meaningless and signals start at *t=0*. Care must be taken, as many mathematical textbooks, and more importantly many programs, consider even functions instead. Should the summary phase shift *2β* be equal to half a period, or to an odd multiple of half periods, of any of the higher harmonics, then that harmonic will be eliminated from the signal, since the sum of two equal sinusoids with a 180 degree shift is zero. Such an operation is essentially comb filtering. The main difference of the resulting signal compared with a simple square wave is the appearance of a third level with zero value, so it is reasonable to call these signals shortened square waves. More generally, the spectrum of these signals can be derived from the Fourier series:

$$f(t) = \frac{4}{\pi} \sum\_{n=1}^{\infty} \frac{\cos((2n-1)\beta)\sin((2n-1)\omega t)}{2n-1} = \frac{4}{\pi} (\cos \beta \sin(\omega t) + \frac{\cos 3\beta}{3} \sin(3\omega t) + ...) \tag{4}$$
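Equation (4) can be checked numerically. In the sketch below (our own, not from the chapter), `harmonic_amplitude` evaluates the coefficient of the k-th harmonic for a given shortening angle:

```python
import math

def harmonic_amplitude(k, beta):
    """Amplitude of the k-th harmonic (k odd) of a square wave
    shortened by angle beta, per the coefficients of equation (4)."""
    return (4 / math.pi) * math.cos(k * beta) / k

for beta_deg in (18, 30):
    beta = math.radians(beta_deg)
    amps = {k: round(harmonic_amplitude(k, beta), 4) for k in (1, 3, 5, 7, 9)}
    print(beta_deg, amps)
# 18 degrees zeroes the 5th harmonic (cos(5*18 deg) = 0);
# 30 degrees zeroes the 3rd and 9th (cos(3*30 deg) = cos(9*30 deg) = 0).
```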

Two of these shortened square waves are of special interest. In order to remove the 3rd and 5th harmonics from the signal (as they cause the most significant errors), 18 degree and 30 degree shifts are useful. The first of them is void of the 5th, 15th, etc. harmonics, and the second of the 3rd, 9th, 15th, etc. harmonics, so the impact of these harmonics is drastically reduced. Both of these three-level signals with amplitude *A* are shown in Fig. 8. The third level does not introduce

Fig. 8. 18, and 30 degree shortened signals with amplitude A (Min, Kink, Land, & Parve, 2006)


much added complexity from the signal generation or processing point of view. Both generation with digital logic and synchronous rectification with CMOS switches are straightforward (Min, Kink, Land, & Parve, 2006). If one of the two waveforms is used as the excitation signal and the other as the rectifying reference, the result will be much cleaner spectrally compared to simple square waves (Fig. 5, white rectangles). These two waveforms were chosen because complete elimination of certain harmonics was desired. Due to that, the quality of synchronous demodulation is drastically improved, as can be seen in Figs. 9 and 10. Compared to Fig. 7 the improvement is considerable. A drawback, though not a substantial one, is the need to generate different signals for excitation and synchronous demodulation. What would happen if both signals were shortened by the same amount, and is there an optimum?

Fig. 9. Quality of synchronous demodulation in case of shortened square wave signals (bold line), compared with sinusoidal signals (dashed line)

Fig. 10. Relative magnitude error when 18 and 30 degrees shortened square wave signals are used during system identification


If complete elimination of some of the harmonics is not pursued, then an arbitrary shift between the component square waves is allowed. One possible optimization approach would be to find the shortened square wave where minimal energy is leaked into the higher harmonics. The relative dependence of the energy of the higher harmonics on the shortening angle can be seen in Fig. 11:

Fig. 11. Relative dependence of the energy of the higher harmonics on the shortening angle
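The curve of Fig. 11 can be reproduced from equation (4) via Parseval's relation: the energy leaked into the higher harmonics, relative to the fundamental, is the ratio of the squared coefficients. A short sketch (our own construction, not from the chapter):

```python
import math

def relative_leak(beta, n_terms=2000):
    """Energy leaked into harmonics above the fundamental, relative to
    the fundamental energy, for a square wave shortened by angle beta;
    harmonic amplitudes are cos((2n-1)*beta)/(2n-1) per equation (4)."""
    fundamental = math.cos(beta) ** 2
    leak = sum((math.cos((2 * n - 1) * beta) / (2 * n - 1)) ** 2
               for n in range(2, n_terms))
    return leak / fundamental

# The curve is flat around its minimum: angles between 22 and 24
# degrees give nearly identical leakage, far below the plain square.
for deg in (0, 18, 22.5, 30):
    print(deg, round(relative_leak(math.radians(deg)), 4))
```

For the plain square wave (β = 0) the relative leak is π²/8 − 1 ≈ 0.234; the shortened waveforms reduce it several-fold.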

Fig. 12. Quality of synchronous demodulation in case of shortened square wave signals when both are shortened by 22,5 degrees (bold line), compared with sinusoidal signals (dashed line)

Fig. 13. Relative magnitude error when 22,5 degrees shortened square wave signals are used during system identification


Such a signal could be used for excitation as well as for the synchronous demodulator. Since the curve is relatively flat around the minimum, shortening angles between 22 and 24 degrees produce almost equally good results. The obvious choice for a real system identification task would be 22,5 degrees. Compared with the previously discussed pair of shortened square waves, generation of such a signal requires a much lower clock frequency. When 18 and 30 degree shortening is required, the clock frequency must be at least 30 times higher than the frequency of the resulting signal. With the 22,5 degree shortened signal only an 8 times higher clock is required. This allows either better energy efficiency by lowering the system clock, or alternatively the use of higher frequency signals at the same clock rate. An added benefit is that noise leakage into the result is lower than in the case of the 18 and 30 degree shortened pair.

While performing considerably better than simple square waves, these shortened signals are still far from an ideal sinusoid. Could the same summing procedure produce further improvement without much added complexity, if more square waves are added together? The answer is yes. It is enough to add a third member to the palette already consisting of the square waves shortened by 18 and 30 degrees, namely a square wave shortened by 42 degrees. By combining these three signals promising results can be achieved. Three interesting and still simple signals are considered as combinations of the previously mentioned summed signals. The first and perhaps most obvious is a sum of the 18, 30, and 42 degree shortened square wave signals with signs 1, -1, and 1. The resulting waveform is shown in Fig. 14, and its spectrum in Fig. 15 (Annus, Min, & Ojarand, 2008). It is much cleaner compared to an ordinary square wave.

Fig. 14. Resulting waveform from summing of three shortened signals with weights 1, -1, and 1 (Annus, Min, & Ojarand, 2008)

If on the other hand the excitation is also a shortened square wave, then the following pair of signals is suggested (Annus, Min, & Ojarand, 2008). The first of them is the sum of all three components with coefficients 1, 1, and 1 (Fig. 16). The spectrum of this summed signal is shown in Fig. 17.


Fig. 15. Spectrum of the signal on Fig. 14 (Annus, Min, & Ojarand, 2008)

Fig. 16. Sum of three shortened waveforms with coefficients 1, 1, and 1 (Annus, Min, & Ojarand, 2008)

Fig. 17. Spectra of the signal on Fig. 16 (Annus, Min, & Ojarand, 2008)

Fig. 18. Sum of three shortened waveforms with coefficients 2, -1, and 1 (Annus, Min, & Ojarand, 2008)

The suitable counterpart, summed with coefficients 2, -1, and 1, is shown in Fig. 18, and its spectrum in Fig. 19:

Fig. 19. Spectrum of the signal on Fig. 18 (Annus, Min, & Ojarand, 2008)

Comparison of the worst case multiplication results (Fig. 5) shows significant improvement over the previous result. The same improvement can be seen in Fig. 20 as well, where the result of synchronous demodulation is shown. The relative magnitude error compared to a sinusoid is given in Fig. 21. Nevertheless the same clock speed penalty still applies as with the simpler solution.

Fig. 20. Quality of synchronous demodulation in case of described multilevel shortened square wave signals

Fig. 21. Dependence of the relative magnitude error on the phase difference

A different way of spectrally cleaning the square wave is described in (Min, Parve, & Ronk, 1992). A simple piecewise constant approximation of the sine wave values (constant over a number of system clock periods) is used. Waveforms with a relatively small number of different levels (3, 4, 5) are used, and as with the already described shortened square wave method, different waveforms are suggested for multiplication, resulting in a cleaner multiplication product. The values of the separate discrete levels are determined according to:

$$a\_q = \sin(\frac{\pi}{4m}(2q - 1))\tag{5}$$

Where *m* is the total number of approximation levels, and *q = 0, 1, 2, …, m* is the approximation level number. Spectral composition of these approximated harmonic functions can be found according to the following equation:

$$k\_h = 4mi \pm 1\tag{6}$$

Where *kh* is the number of a higher harmonic which exists in the spectrum, and *i = 1, 2, 3, …* If two such signals with numbers of levels *m1* and *m2* are multiplied, then the coinciding harmonics can be found according to:

$$k\_c = 4m\_1m\_2j \pm 1\tag{7}$$

Where *j = 1, 2, 3, …* If two waveforms with *m = 3* and *m = 4* are considered, then the first coinciding harmonics are the 47th, 49th, 95th, 97th, 143rd, 145th, etc. As with shortened square waves the clock frequency should be relatively high; furthermore these waveforms are relatively sensitive to level errors, which prohibits the usage of higher *m* values and manifests itself in reappearing higher harmonics.
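Equations (6) and (7) are easy to verify numerically; the sketch below (our own) lists the harmonics of the *m = 3* and *m = 4* waveforms and intersects the two sets:

```python
def harmonics(m, i_max=40):
    """Harmonic numbers present in the spectrum of an m-level
    approximated sine wave, per equation (6): k = 4*m*i +/- 1."""
    ks = set()
    for i in range(1, i_max + 1):
        ks.update({4 * m * i - 1, 4 * m * i + 1})
    return ks

# Equation (7) predicts the coinciding harmonics of the m=3 and m=4
# waveforms at k = 4*3*4*j +/- 1 = 48j +/- 1.
common = sorted(harmonics(3) & harmonics(4))
print(common[:6])  # [47, 49, 95, 97, 143, 145]
```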

### **4. Square waves for multifrequency measurement**

If system parameters vary with frequency, as they usually do, a single frequency measurement is not enough to fully describe the object under test. In complex cases, like measuring the electrical properties of biological specimens, sweeping over a wide frequency band may be warranted. Sweeping, on the other hand, is slow and prohibits examination of faster changes in the object under investigation, since the transfer function of a dynamic system is time dependent. Measurements must be as short as possible to avoid significant changes during the analysis and, at the same time, as long as possible for enlarging the excitation energy and improving the signal to noise ratio.

Chirp signals can be considered to remedy those shortcomings. They will be mentioned here only briefly, since a more comprehensive overview is given in a different chapter. Chirp signals, i.e. multi-cycle sine wave based signals in which the frequency increases ('up-chirp') or decreases ('down-chirp') continuously as a function of time, are widely used in radar and sonar applications as well as in acoustic, ultrasonic, optical and seismological studies. The main advantages of chirp signals are their well-defined frequency range, predetermined power spectral density, and good crest factor. Rectangular chirps can be used to further simplify signal generation and processing. Moreover, the rectangular waveform has the minimal possible crest factor (ratio of a peak value to a root-mean-square level) of 1. Two signals are viewed here briefly: the binary chirp and the ternary chirp (Min et al., 2012). The binary or signum chirp is defined as the signum of its sinusoidal counterpart. For linear chirps:

$$\operatorname{sign}(\operatorname{ch}(t)) = \operatorname{sign}(\sin\left[2\pi\frac{B}{T}\frac{t^2}{2}\right])\tag{8}$$

Where *T* is the duration of the chirp signal, 0 ≤ *t* ≤ *T*, and *B* is the bandwidth of the signal. It has a crest factor of 1, which means that energetically the signum chirp is two times more powerful than the sinusoidal chirp. The waveform of the binary chirp can be seen in Fig. 22. A third level can be introduced by comparing the sinusoidal chirp with two levels instead of one, as is done in the case of the binary chirp (Fig. 23).

Fig. 22. NRZ or non-return-to-zero binary rectangular chirp pulse (Min et al., 2012)

Fig. 23. Return to zero (RZ) ternary rectangular chirp pulse (Min et al., 2012)
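Both rectangular chirps can be sketched directly from equation (8). This is our own construction: the comparison level used for the ternary chirp is an assumption for illustration, since the chapter does not specify the levels.

```python
import numpy as np

T, B = 1.0, 100.0                # duration (s) and bandwidth (Hz)
t = np.linspace(0, T, 100_000, endpoint=False)
sine_chirp = np.sin(2 * np.pi * (B / T) * t**2 / 2)

# Binary (signum) chirp, equation (8): keep only the sign.
binary_chirp = np.where(sine_chirp >= 0, 1.0, -1.0)

# Ternary (RZ) chirp: compare against two symmetric levels instead of
# one; samples between the levels map to a third, zero state.
level = np.sin(np.pi / 4)        # assumed comparison level
ternary_chirp = np.where(sine_chirp > level, 1.0,
                         np.where(sine_chirp < -level, -1.0, 0.0))

crest = np.max(np.abs(binary_chirp)) / np.sqrt(np.mean(binary_chirp**2))
print(crest)  # 1.0 -- the minimal possible crest factor
```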

Instead of chirp signals other waveforms can be considered as well. A widely used method is to generate a pseudo-random maximum length sequence (MLS). The spectrum of the MLS signal follows a *(sin(x)/x)²* law. Signal processing is usually accomplished by taking the circular cross-correlation of the output signal with the excitation MLS. MLS and chirp signals have, however, one serious disadvantage: their energy is distributed equally, or almost equally, over the whole frequency band of interest. Therefore, the power spectral density (*A²/Hz*) is comparatively low.


In practice, there is seldom a need to measure at all of the frequencies within the bandwidth simultaneously, except perhaps when the system under test is highly resonant. Usually it is enough to know the parameters at several arbitrarily spaced frequencies separately. Therefore, it is reasonable to concentrate the energy of the excitation signals at the frequencies of interest instead of using a uniform energy distribution over the full measurement bandwidth. That can be achieved by summing up several sinusoids:

$$\mathbf{x}(t) = \sum\_{i=1}^{n} A\_i \cdot \sin(2\pi f\_i t + \varphi\_i) \tag{9}$$

Unfortunately, while a single sinusoid is a technically feasible signal and can be reproduced quite accurately, it becomes increasingly costly to use many sinusoidal signals simultaneously. There is another drawback associated with the simultaneous use of multiple sinusoidal signals: the crest factor. With a single sinusoid the crest factor is √2 ≈ 1,414. By summing two or more sinusoidal signals together the crest factor can take many different values, generally larger than 1,414. Why is the crest factor so important? Two reasons are worth considering: nonlinear behavior of the object under investigation, and the dynamic range of the measurement apparatus itself. Real objects can rarely be described as linear, meaning that different excitation levels do not produce linearly dependent responses. For practical purposes measurement signals are usually kept within a narrow range of amplitudes where the object behaves approximately linearly; such an approximation is clearly better the narrower the range is kept. In the worst case high energy pulses can even permanently alter or destroy the object under investigation, which is certainly not acceptable when performing measurements, for example, on living human tissue. The dynamic range of the apparatus is also limited. From the lower side the limit is set by the omnipresent noise; if the measurement signal is completely buried in the noise and cannot be restored any more, the

Fig. 24. Multisinusoidal excitation signal (a), containing eleven equal amplitude components, with peak value of 1. For simplicity the starting phases of all components are zero. Frequencies in Hz are: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024. The same signal with some optimization of phases (b); the achieved crest factor is 2,5. Please note that the peak values of these two signals are equal


measurement is void. Upper limit is ultimately determined by supply voltage. Form that it is clear that large peaks in excitation signal should be avoided, as well as very low level components which get lost in noise. Fig. 24 is to give an impression of what happens when just eleven sinusoidal signals, one octave apart from each other in frequency, each with equal amplitude, and with random initial phase are summed together. The crest factor of this noise like signal is 3,54, and it is clearly worse than crest factor of a single sinusoid.
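As a numerical illustration of Eq. (9) and the crest-factor problem, the following sketch (NumPy; the sampling rate, seed, and function names are illustrative choices, not taken from the chapter) builds an eleven-component multisine and compares its crest factor with that of a single sinusoid:

```python
import numpy as np

def multisine(t, freqs, amps, phases):
    # Eq. (9): x(t) = sum_i A_i * sin(2*pi*f_i*t + phi_i)
    return sum(A * np.sin(2 * np.pi * f * t + p)
               for f, A, p in zip(freqs, amps, phases))

def crest_factor(x):
    # Peak value divided by RMS value.
    return np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2))

fs = 8192                                   # samples per second (assumed)
t = np.arange(fs) / fs                      # one second, one full period of the 1 Hz tone
freqs = [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]   # octave spacing, as in Fig. 24
rng = np.random.default_rng(0)
x = multisine(t, freqs, np.ones(11), rng.uniform(0, 2 * np.pi, 11))

print(crest_factor(np.sin(2 * np.pi * t)))  # single sinusoid: about 1.414
print(crest_factor(x))                      # random-phase multisine: noticeably worse
```

Phase optimization (as in Fig. 24b) amounts to searching over the `phases` argument for the set that minimizes `crest_factor(x)` while leaving the magnitude spectrum untouched.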

Interestingly, almost all of the parameters of the multisinusoidal signal can be improved considerably by simplifying it drastically (Annus et al., 2011). Such a signal is derived from the original multisinusoidal signal by detecting its zero crossings:

$$\text{sign}(x(t)) = \text{sign}\sum\_{i=1}^{n} A\_i \cdot \sin(2\pi f\_i t + \varphi\_i) \tag{10}$$

In the case of *n* = 11 the resulting waveform is shown in Fig. 25.

Fig. 25. Typical binary multifrequency signal with eleven components and random phases
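Eq. (10) amounts to hard-limiting the multisine. A minimal sketch (NumPy; the sampling rate and seed are assumptions of mine) shows that the resulting binary signal has a crest factor of exactly 1, and an FFT reveals how much of its energy actually lands on the wanted bins:

```python
import numpy as np

fs = 8192
t = np.arange(fs) / fs                      # one second of signal, 1 Hz bin resolution
freqs = [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
rng = np.random.default_rng(42)
phases = rng.uniform(0, 2 * np.pi, len(freqs))

x = sum(np.sin(2 * np.pi * f * t + p) for f, p in zip(freqs, phases))
b = np.sign(x)                              # Eq. (10): binary multifrequency signal

# A +/-1 signal has peak equal to RMS, i.e. crest factor 1.
cf = np.max(np.abs(b)) / np.sqrt(np.mean(b ** 2))

# Energy fraction on the wanted bins; with 1 Hz resolution, bin index == frequency in Hz.
power = np.abs(np.fft.rfft(b)) ** 2
useful = power[freqs].sum() / power.sum()
print(cf, useful)                           # useful is well below 1: the "snow" lines
```

The remainder `1 - useful` is the energy spread over the unwanted "snow" components discussed below.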

Clearly, the multifrequency binary signal is easier to generate than the multisinusoidal signal, and has a far superior crest factor of 1. A minor drawback can be seen in Fig. 26, where the spectrum of such a signal is shown: so-called "snow" lines appear between the wanted frequency components, and roughly 30% of the total energy is lost for the measurement.

Fig. 26. Spectrum of binary multifrequency signal with ten components and random phases

The truth is revealed when the spectrum of the optimized multisine is drawn together with the spectrum of its binary counterpart:

Simple Signals for System Identification 275


Fig. 27. Magnitudes of spectral lines in binary multifrequency signal (grey squares) together with spectrum of multisinusoidal signal (black squares)

Useful spectral components in the binary multifrequency signal contain 1.34927 times more energy than in the case of the optimized multisine, or 4.32933 times more than the non-optimized multisine shown above. At first glance there is another drawback, since the magnitudes of the useful components are no longer exactly equal. Fortunately, these magnitudes can easily be equalized by iteratively manipulating the magnitudes of the components of the original multisinusoidal signal; the residual error is generally well below one percent. Furthermore, almost arbitrary magnitudes of the spectral components can be achieved (Fig. 28).

Fig. 28. Magnitude control example
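The iterative equalization mentioned above can be sketched as follows (NumPy; the damping exponent, iteration count, and seed are my own illustrative choices, not from the chapter). Each pass compares the bin magnitudes of the emitted binary signal against the target shape and pre-distorts the amplitudes of the underlying multisine accordingly:

```python
import numpy as np

fs = 8192
t = np.arange(fs) / fs
freqs = np.array([1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024])
rng = np.random.default_rng(7)
phases = rng.uniform(0, 2 * np.pi, freqs.size)

amps = np.ones(freqs.size)          # amplitudes of the underlying multisine
target = np.ones(freqs.size)        # desired relative bin magnitudes of the binary signal
ripple = []

for _ in range(60):
    x = sum(A * np.sin(2 * np.pi * f * t + p)
            for A, f, p in zip(amps, freqs, phases))
    b = np.sign(x)                  # the binary signal actually emitted
    mag = np.abs(np.fft.rfft(b))[freqs]
    mag *= target.mean() / mag.mean()    # compare shapes, not absolute levels
    ripple.append(np.max(np.abs(mag - target)))
    amps *= (target / mag) ** 0.5        # damped correction: boost weak bins, trim strong

print(ripple[0], ripple[-1])        # the magnitude ripple shrinks over the iterations
```

Setting `target` to a non-flat profile gives the almost arbitrary magnitude control illustrated in Fig. 28.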

When a two-level comparison is introduced instead, the energy leakage into unwanted components can be reduced. Such a ternary multifrequency signal can be seen in Fig. 29.

Fig. 29. Ternary multifrequency signal. The comparison levels are set at ±0.23 V
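A hedged sketch of the two-level comparison (NumPy; the ±0.23 threshold is taken from Fig. 29, while applying it to a peak-normalized signal is my assumption): samples between the two levels are mapped to zero, the rest to ±1, yielding a three-level waveform:

```python
import numpy as np

def ternary(x, level=0.23):
    # Three-level comparison: +1 above +level, -1 below -level, 0 in between.
    return np.where(x > level, 1.0, np.where(x < -level, -1.0, 0.0))

fs = 8192
t = np.arange(fs) / fs
freqs = [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
rng = np.random.default_rng(3)
x = sum(np.sin(2 * np.pi * f * t + p)
        for f, p in zip(freqs, rng.uniform(0, 2 * np.pi, len(freqs))))
x /= np.max(np.abs(x))     # normalize to peak 1 so the 0.23 threshold is meaningful

s3 = ternary(x)
print(sorted(set(s3)))     # the signal takes only the values -1, 0, +1
```

Compared with the binary signal of Eq. (10), the zero intervals remove part of the out-of-band "snow", which is the chapter's motivation for the ternary waveform.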

A comparison of the spectrum of the original multisinusoid with that of the ternary signal from Fig. 29 can be seen in Fig. 30.

Fig. 30. Magnitude spectrum of the ternary multifrequency signal (grey rings) compared with the magnitudes of the multisinusoid components (grey lines) and the binary multifrequency signal (crosses)

### **5. Conclusion**


With some minor drawbacks, it is possible to construct relatively simple square-wave signals to replace more sophisticated sinusoidal or arbitrary waveforms when system identification is warranted. The simplest square wave nevertheless might not be a sufficiently good measurement signal; by adding a few more levels, the situation can be improved considerably. Such signals can be used to replace single sinusoids, chirps, and arbitrary sums of sinusoids. Generally it is a good idea to choose the best signal for the given identification task. Some choices have been shown, and the reasoning behind them given.

### **6. Acknowledgment**

This research was supported by the European Union through the European Regional Development Fund and through the projects of Eliko Competence Centre, and Centre for Research Excellence CEBE.

### **7. References**
Annus, P.; Min, M. & Ojarand, J. (2008). Shortened square wave waveforms in synchronous signal processing, 2008 IEEE International Instrumentation and Measurement Technology Conference (I2MTC 2008), 12-15 May 2008, Victoria, British Columbia, Canada, pp. 1259-1262

Bocher, M. (1906). Introduction to the Theory of Fourier's Series, The Annals of Mathematics, Second Series, Vol. 7, No. 3, pp. 81-152

Godfrey, K. (1993). Perturbation Signals for System Identification, Prentice Hall International Series in Acoustics, Speech, and Signal Processing

Heideman, M.; Johnson, D. & Burrus, C. (1984). Gauss and the history of the fast Fourier transform, IEEE ASSP Magazine, Vol. 1, No. 4, pp. 14-21

Min, M.; Parve, T. & Ronk, A. (1992). Design Concepts of Instruments for Vector Parameter Identification, IEEE Transactions on Instrumentation and Measurement, Vol. 41, No. 1, pp. 50-53

Pintelon, R. & Schoukens, J. (2001). System Identification: A Frequency Domain Approach, Wiley-IEEE Press, 1st edition


**Part 3** 

**Image Processing** 

