280 Applications of Digital Signal Processing

of interval-based computations; (ii) an analysis of the application of interval-based computations to measure and compare the sensitivity of the signals in the frequency domain; and (iii) an analysis of the application of interval-based techniques to the Monte-Carlo method. Finally, Section 4 concludes this work.

**2. General overview of interval-based computations**

Since its formalization in 1962 by R. Moore (Moore, 1962), Interval Arithmetic (IA) has been widely used to bound uncertainties in complex systems (Moore, 1966). The main advantage of traditional IA is that it is able to obtain the range of all the possible results of a given function. On the other hand, it suffers from three different types of problems (Neumaier, 2002): the dependency problem, the cancellation problem, and the wrapping effect.

**2.1 Interval arithmetic**

The dependency problem expresses that IA computations overestimate the output range of a given function whenever it depends on one or more of its variables through two or more different paths. The cancellation problem occurs when the width of the intervals is not canceled in the inverse functions. In particular, this situation occurs in the subtraction operations (i.e., given the non-empty interval *I1*, *I1* – *I1* ≠ *0*), which can be seen as a particular case of the dependency problem, but its effect is clearly identified. The wrapping effect occurs because the intervals are not able to accurately represent regions of space whose boundaries are not parallel to the coordinate axes.

These overestimations are propagated in the computations and make the results inaccurate, and even useless in some cases. For this reason, the Overestimation Factor (*OF*) (Makino & Berz, 2003; Neumaier, 2002) has been defined as

$$OF = \frac{\text{Estimated Range} - \text{Exact Range}}{\text{Exact Range}}\,, \tag{1}$$

to quantify the accuracy of the results. Another interesting definition used to evaluate the performance of these methods is the Approximation Order (Makino & Berz, 2003; Neumaier, 2002), defined as the minimum order of the monomial *C*·*ε*<sup>*S*</sup> (where *C* is constant and *ε* ∈ [0, 1]) that contains the difference between the bounds of the interval function and the target function in the range of interest.

**2.2 Extensions of interval arithmetic**

The different extensions of IA try to improve the accuracy of the computed results at the expense of more complex representations. A classification of the main variants of IA is given in Figure 1.

Fig. 1. Classification of interval-based computation methods.

According to the representation of the uncertainties, the extensions of IA can be classified into three different types: Extended IA (EIA), Parameterized IA, and Centered Forms (CFs). These are further divided as follows: the first group comprises Directed Intervals (DIs) and Modal Intervals (MIs); the second group, Generalized IA (GIA); and the third group, Mean Value Forms (MVFs), slopes, Taylor Models (TMs) and Affine Arithmetic (AA). A brief description of each formulation is given below.

DIs (Kreinovich, 2004) include the direction or sign of each interval to avoid the cancellation problem in the subtraction operations (*I1*<sup>+</sup> – *I1*<sup>+</sup> = 0), which is the most important source of overestimation (Kaucher, 1980; Ortolf, 1969).
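The cancellation problem that motivates these extensions can be reproduced with plain IA in a few lines. The following is a minimal sketch (the `Interval` class and the `overestimation_factor` helper are illustrative names, not taken from any particular library):

```python
# Minimal interval arithmetic sketch (illustrative, not a full IA library).
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __sub__(self, other):
        # IA subtraction: [a, b] - [c, d] = [a - d, b - c]
        return Interval(self.lo - other.hi, self.hi - other.lo)

x = Interval(-1.0, 1.0)
z = x - x                  # the exact range of x - x is [0, 0] ...
print(z.lo, z.hi)          # → -2.0 2.0 : the cancellation problem

# Overestimation Factor of Eq. (1), with range widths as the measures;
# e.g. an estimated width of 4 against an exact width of 2 gives OF = 1.
def overestimation_factor(estimated_width, exact_width):
    return (estimated_width - exact_width) / exact_width
```

Because IA treats the two occurrences of `x` as independent, the result is `[-2, 2]` instead of `[0, 0]`, which is precisely the effect the directed and modal interval formulations try to remove.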

In MIs (Gardenes, 1985; Gardenes & Trepat, 1980; SIGLA/X, 1999a, 1999b), each element is composed of one interval and a parameter called "modality" that indicates whether the equation of the MIs holds for a single value of the interval or for all its values. These two descriptions are used to generate equations that bound the target function. If both descriptions exist and are equal, the result is exact. Among the publications on MIs, the underlying theoretical formulation and the justifications are given in (SIGLA/X, 1999a), and the applications, particularly for control systems, are given in (Armengol, et al., DX-2001; SIGLA/X, 1999b; Vehí, 1998).

GIA (Hansen, 1975; Tupper, 1996) is based on limiting the regions of the represented domain using intervals with parameterizable endpoints, such as [1 – 2x, 3 + 4x] with x ∈ [0, 1]. The authors define different types of parameterized intervals (constant, linear, quadratic, multi-dimensional, functional and symbolic), but their analysis has focused on evaluating whether the target function is increasing or decreasing, concave or convex, in the region of interest using constant, linear and polynomial parameters. In the experiments, they have obtained the areas where the existence of the function is impossible, but they conclude that this type of analysis is too complex for parameterizations beyond the linear case.
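The idea of parameterizable endpoints can be sketched directly: a GIA-style interval is a pair of endpoint functions of the parameter, and sampling the parameter enumerates the family of boxes it encodes (a minimal sketch using the [1 – 2x, 3 + 4x] example from the text):

```python
# GIA-style parameterized interval [1 - 2t, 3 + 4t], t in [0, 1].
# Each value of t selects one concrete interval from the family.
lo = lambda t: 1 - 2 * t
hi = lambda t: 3 + 4 * t
boxes = [(lo(t), hi(t)) for t in (0.0, 0.5, 1.0)]
print(boxes)   # → [(1.0, 3.0), (0.0, 5.0), (-1.0, 7.0)]
```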

In their different representations, CFs are based on representing a function as a Taylor Series expansion with one or more intervals that incorporate the uncertainties. Therefore, all these techniques are composed of one independent value (the central point of the function) and a set of summands that incorporate the intervals in the representation.

MVFs (Alefeld, 1984; Coconut\_Group, 2002; Moore, 1966; Neumaier, 1990; Schichl & Neumaier, 2002) are based on developing an expression of a first-order Taylor Series that bounds the region of interest. The general expression is as follows:

$$f(\mathbf{x}) = f(\mathbf{x}_0) + f'(\mathbf{x})(\mathbf{x} - \mathbf{x}_0) \quad \in \quad f_{\text{MVF}}(I_x) = f(\mathbf{x}_0) + f'(I_x)(I_x - \mathbf{x}_0) \tag{2}$$

where *x* is the point or region where *f*(*x*) must be evaluated, *x*0 is the central point of the Taylor Series, and *Ix* is the interval that bounds the uncertainty range. The computation of the derivative is not complex when the function is polynomial, as is usually the case in function approximation methods. Since the approximation error is quadratic, this method does not provide good results when the input intervals are large. However, if the input intervals are small, it provides better results than traditional IA.
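As a concrete sketch of Eq. (2), the MVF enclosure of f(x) = x² over a small box can be computed with elementary interval helpers (the function names `imul`, `iadd` and `mvf_square` are illustrative):

```python
# Mean Value Form sketch (Eq. 2) for f(x) = x^2, intervals as (lo, hi) tuples.
def imul(a, b):
    ps = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(ps), max(ps))

def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def mvf_square(ix, x0):
    # f_MVF(Ix) = f(x0) + f'(Ix) * (Ix - x0), with f(x) = x^2, f'(x) = 2x
    fx0 = (x0 * x0, x0 * x0)
    d = imul((2 * ix[0], 2 * ix[1]), (ix[0] - x0, ix[1] - x0))
    return iadd(fx0, d)

print(mvf_square((0.4, 0.6), 0.5))   # close to the true range [0.16, 0.36]
```

For the box [0.4, 0.6] centered at 0.5 the enclosure is [0.13, 0.37], slightly wider than the exact range [0.16, 0.36], illustrating the quadratic approximation error on a small box.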


The slopes (Moore, 1966; Neumaier, 1990; Schichl & Neumaier, 2002) also use a first-order Taylor Series expansion, but they apply Newton's method to recursively compute the values of the derivatives. The general expression is as follows:

$$f(\mathbf{x}) = f(\mathbf{x}_0) + f'(\mathbf{x})(\mathbf{x} - \mathbf{x}_0) \quad \in \quad f_S(I_S, I_x) = f(\mathbf{x}_0) + I_S(I_x - \mathbf{x}_0) \tag{3}$$

where *IS* is determined according to the expression (Garloff, 1999):

$$I_S = \begin{cases} \dfrac{f(\mathbf{x}) - f(\mathbf{x}_0)}{\mathbf{x} - \mathbf{x}_0} & \text{if } \mathbf{x} \neq \mathbf{x}_0 \\[2ex] f'(\mathbf{x}_0) & \text{if } \mathbf{x} = \mathbf{x}_0 \end{cases} \tag{4}$$

It is worth mentioning that slopes typically provide estimates better than MVFs by a factor of 2, and that the results can be further improved by combining their computation with IA (Schichl & Neumaier, 2002).
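The improvement over MVFs can be sketched for the same f(x) = x² example: the exact slope (f(x) − f(x₀))/(x − x₀) simplifies to x + x₀, so I_S = I_x + x₀ (an assumption valid for this particular f; the helper names are illustrative):

```python
# Slope form sketch (Eqs. 3-4) for f(x) = x^2, intervals as (lo, hi) tuples.
def imul(a, b):
    ps = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(ps), max(ps))

def slope_square(ix, x0):
    i_s = (ix[0] + x0, ix[1] + x0)          # I_S = Ix + x0 for f(x) = x^2
    d = imul(i_s, (ix[0] - x0, ix[1] - x0))
    return (x0 * x0 + d[0], x0 * x0 + d[1])

print(slope_square((0.4, 0.6), 0.5))   # tighter than the MVF enclosure
```

On [0.4, 0.6] this yields [0.14, 0.36], a tighter enclosure of the exact range [0.16, 0.36] than the corresponding mean value form gives on the same box.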

TMs (Berz, 1997, 1999; Makino & Berz, 1999) combine an *N*-order Taylor Series expansion with an interval that incorporates the uncertainty in the function under analysis. The mathematical expression is as follows:

$$f\_{\rm TM} \left( \mathbf{x}, I\_n \right) = a\_n \mathbf{x}^n + a\_{n-1} \mathbf{x}^{n-1} + \dots + a\_1 \mathbf{x} + a\_0 + I\_n \tag{5}$$

where *ai* is the *i*-th coefficient of the interpolation polynomial of order *n*, and *In* is the uncertainty interval for this polynomial. The approximation error has now order *N*+1, rather than quadratic as in the previous cases. In addition, TMs improve the representation of the domain regions, which reduces the wrapping effect. The applications of TMs have been extensively studied thanks to the development of the tool COSY INFINITY (Berz, 1991, 1999; Berz, et al., 1996; Berz & Makino, 1998, 2004; Hoefkens, 2001; Hoefkens, et al., 2001, 2003; Makino, 1998, 1999). The main features of this tool include the resolution of Ordinary Differential Equations (ODEs), higher order ODEs and systems, multivariable integration, and techniques for relieving the wrapping effect, the dimensionality curse, and the cluster effect (Hoefkens, 2001; Makino & Berz, 2003; Neumaier, 2002). Another relevant contributor in the development of the TMs is the GlobSol project (Corliss, 2004; GlobSol\_Group, 2004; Kearfott, 2004; Schulte, 2004; Walster, 2004), focused on the application of interval computations to different domains, including systems modeling, computer graphics, gene prediction, missile design tips, portfolio management, foreign exchange market, parameter optimization in medical measures, software development of Taylor operators, interval support for the GNU Fortran compiler, improved methods of automatic differentiation, resolution of chemical models, etc. (GlobSol\_Group, 2004).
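Equation (5) can be sketched numerically: a low-order polynomial plus a rigorous remainder interval encloses a smooth function on a small box. The sketch below uses exp(x) with a coarse Lagrange remainder bound as the *In* term; it is only an illustration of the principle, not the algorithm used by COSY INFINITY:

```python
import math

# Taylor Model sketch (Eq. 5): enclose exp(x) on [-0.1, 0.1] with the
# degree-2 Taylor polynomial around 0 plus a remainder interval I_n.
lo, hi = -0.1, 0.1
# Lagrange bound: |R2(x)| <= e^0.1 * |x|^3 / 6 on the box
rem = math.exp(hi) * max(abs(lo), abs(hi)) ** 3 / 6

def p(x):                      # 1 + x + x^2/2, monotone increasing on the box
    return 1 + x + 0.5 * x * x

enclosure = (p(lo) - rem, p(hi) + rem)
print(enclosure)               # contains [exp(-0.1), exp(0.1)]
```

The enclosure is only marginally wider than the exact range, which is the point of the (*N*+1)-order error of TMs on small boxes.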

There is an ongoing discussion about the capability of TMs to solve the different theoretical and applied problems. In this sense, it is worth mentioning that "the TMs only reduce the problem of bounding a factorable function to bounding the range of a polynomial in a small box centered at 0. However, they are good or bad depending on how they are applied to solve each problem" (Neumaier, 2002). This statement is also applicable to the other uncertainty computation methods.

In AA (Comba & Stolfi, 1993; Figuereido & Stolfi, 2002; Stolfi & Figuereido, 1997), each element or affine form consists of a central value plus a set of noise terms (NTs). Each NT is composed of one uncertainty source identifier, called Noise Symbol (NS), and a constant coefficient associated to it. The mathematical expression is:


$$f_{AA}(\varepsilon_i) = \mathbf{x}' = \mathbf{x}_c + \mathbf{x}_0\varepsilon_0 + \mathbf{x}_1\varepsilon_1 + \mathbf{x}_2\varepsilon_2 + \dots + \mathbf{x}_n\varepsilon_n \tag{6}$$

where *x'* represents the affine form, *xc* is the central point, and each *εi* and *xi* are the NS and its associated coefficient. In AA, the operations are classified into two types: affine and non-affine operations. Affine operations (addition and constant multiplication) are computed without error, but non-affine operations need to include additional NTs to provide the bounds of the results. The main advantage of AA is that it keeps track of the different noise symbols and cancels all the first-order uncertainties, so it is capable of providing accurate results in linear sequences of operations. In nonlinear systems, AA obtains quadratic convergence, but the increase in the number of NTs in the nonlinear operations makes the computations less accurate and more time-consuming. A detailed analysis of the implementation of AA and a description of the most relevant computation algorithms are given in (Stolfi & Figuereido, 1997).
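The exact first-order cancellation can be sketched with a toy affine form stored as a center plus a dictionary of noise-symbol coefficients (a simplified model of Eq. (6), not a full AA library; `aa_sub` is an illustrative name):

```python
# Toy affine form: (center, {noise_symbol: coefficient}).
# Linear operations combine coefficients symbol by symbol, so shared
# first-order uncertainties cancel exactly.
def aa_sub(a, b):
    coeffs = dict(a[1])
    for sym, v in b[1].items():
        coeffs[sym] = coeffs.get(sym, 0.0) - v
    return (a[0] - b[0], {k: v for k, v in coeffs.items() if v != 0.0})

x = (0.0, {"e1": 1.0})    # represents the interval [-1, 1] via symbol e1
print(aa_sub(x, x))       # → (0.0, {}) : x - x is exactly zero in AA
```

Compare this with plain IA, where the same subtraction widens to [−2, 2]; tracking the identity of each noise symbol is what removes the cancellation problem.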

Among other applications, AA has been successfully used to evaluate the tolerance of circuit components (Femia & Spagnuolo, 2000), the sizing of analog circuits (Lemke, et al., 2002), the evolution of deformable models (Goldenstein, et al., 2001), the evaluation of polynomials (Shou, et al., 2002), and the analysis of the Round-Off Noise (RON) in Digital Signal Processing (DSP) systems (Fang, 2003; López, 2004; López et al., 2007, 2008).

Modified AA (MAA) (Shou, et al., 2003) has been proposed to accurately compute the evolution of the uncertainties in nonlinear descriptions. Its mathematical expression is as follows:

$$f_{\rm MAA}(e_i^k) = \mathbf{x}' = \mathbf{x}_c + \mathbf{x}_0 e_0 + \mathbf{x}_1 e_1 + \mathbf{x}_2 e_0^2 + \mathbf{x}_3 e_0 e_1 + \mathbf{x}_4 e_1^2 + \dots + \mathbf{x}_n \prod_{i,k} e_i^k \tag{7}$$

It is easy to see that MAA is an extension of AA that includes the polynomial NTs in the description. Thus, it is capable of computing the evolution of the higher-order uncertainties that appear in polynomial descriptions (of a given smooth system), but the number of terms of the representation grows exponentially with the number of uncertainties and the order of the polynomial description. Hence, in this case it is particularly important to keep the number of NTs of the representation under a reasonable limit.
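The growth in the number of terms can be made concrete: a polynomial form in *n* noise symbols with degree up to *d* carries on the order of binomial(*n*+*d*, *d*) coefficients (the standard count of monomials), which grows very quickly. A quick sketch:

```python
from math import comb

# Number of monomials of degree <= d in n noise symbols: comb(n + d, d).
# This is the worst-case number of coefficients an MAA-style form must carry.
for n in (2, 4, 8):
    counts = [comb(n + d, d) for d in (1, 2, 3)]
    print(n, counts)
```

Already for 8 noise symbols and degree 3 the form needs 165 coefficients, which is why the text recommends keeping the number of NTs under a reasonable limit.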

Obviously, the higher-order NTs are not required when computing the evolution of the uncertainties in LTI systems, so MAA is less convenient than AA in this case.

#### **3. Interval-based analysis of DSP systems**

This Section examines the variations of the properties of the signals that occur in the evaluation of DSP systems when Monte-Carlo Simulations (MCS) are performed using Extensions of IA (EIA) instead of the traditional numerical simulations. The simulations based on IA and EIA can handle the uncertainties and nonlinearities associated, for example, with the quantization operations of fixed-point digital filters, and with other types of systems in the general case.

The most relevant advantages of using EIA to evaluate DSP systems can be summarized in the following points:

1. It is capable of managing the uncertainties associated with the quantization of coefficients, signals, complex computations and nonlinearities.
2. It avoids the cancellation problem of IA.
3. It provides faster results than the traditional numerical simulations.



The intuitive reason behind the benefits of EIA is simple. Since EIA is capable of processing large sets of data in a single interval-based simulation, the results are obtained faster than in the separate computation of the numerical samples. Although the use of intervals imposes a limitation of connectivity on the computation of the results, both the speed and the accuracy are improved with respect to the numerical processing of the same number of samples.

Section 3.1 discusses the cancellation problem in the analysis of digital filter structures using IA, and justifies the selection of AA for such analysis, indicating the cases in which it can be used, and under what types of restrictions. Section 3.2 examines how the Fourier Transform is affected when uncertainties are included in one or all of the samples. Section 3.3 evaluates the changes that occur in the parameters of the random signals (mean, variance and Probability Density Function (PDF)) when a specific width is introduced in the samples, and how these changes affect the computed estimates using the Monte-Carlo method. Finally, Section 3.4 provides a brief discussion to highlight the capabilities of interval-based simulations.

#### **3.1 Analysis of digital filter structures using IA and AA**

The main problem that arises when performing interval-based analyses of DSP systems using IA is that the addition and subtraction operations always increase the interval widths. If there are variables that depend on other variables through two or more different paths, such as in *z*(*k*) = *x*(*k*) - *x*(*k*), the ranges provided by IA are oversized. This problem, called the cancellation problem, is particularly severe when there are feedback loops in the realizations, a characteristic which is common in most DSP systems.

Fig. 2. Interval oversizing due to the cancellation effect of IA: (a) Signal names and initial (interval) values. (b) Computed intervals until the oversizing in the variable *tsum* is detected. In each small figure, the abscissa axis represents the sampled time, and the ordinate axis represents the interval values. A dot in a given position represents the interval [0,0].

Figure 2.a shows a second-order Infinite Impulse Response (IIR) filter realized in direct form, whose transfer function is

$$H(z) = \frac{1}{1 + a\_1 z^{-1} + a\_2 z^{-2}} = \frac{1}{1 + z^{-1} + 0.75z^{-2}} \,\, . \tag{8}$$

It is initially assumed that the filter is implemented using infinite precision, which implies that the quantization effects are negligible and that all signals are generated as linear combinations of the input and the state variables. This assumption makes it possible (i) to perform a separate analysis of the mean and the width of the intervals; and (ii) to generalize the results obtained in the simulation of a normalized interval to larger or smaller ones.

Figure 2.b shows the oversizing that occurs in the IA simulation. The input is set to the normalized interval [-1, 1], and the state variables are initially set to zero. Here, the representations are based on oriented intervals to keep track of the position of the samples in each interval, and to detect the overestimations. The initial values and the evolution of the intervals are:

$$x = [-1, 1] \;\Rightarrow\; y = [-1, 1] \;\Rightarrow\; \begin{cases} t_{a1} = [1, -1] \;\Rightarrow\; t_{sum} = [1, -1] \;\Rightarrow\; sv_1 = [1, -1] \\ sv_2 = [0.75, -0.75] \end{cases} \tag{9}$$

and in the next sampled time the values are:


$$\begin{cases} sv_1 = [1, -1] \;\Rightarrow\; y = [1, -1] \;\Rightarrow\; t_{a1} = [-1, 1] \\ sv_2 = [0.75, -0.75] \end{cases} \;\Rightarrow\; t_{sum} = [-1.75, 1.75] \tag{10}$$

instead of *tsum* = [–0.25, 0.25], which is the correct value. Figure 2.b also shows that this oversizing occurs because signal *tsum* depends on the input signal through two different paths.

Since AA includes a separate signed identifier per uncertainty source, it avoids such overestimations and provides the smallest intervals. In this case, the initial values and the evolution of the affine forms are:

$$x = 2\varepsilon \;\Rightarrow\; y = 2\varepsilon \;\Rightarrow\; \begin{cases} t_{a1} = 2\varepsilon \;\Rightarrow\; t_{sum} = 2\varepsilon \;\Rightarrow\; sv_1 = 2\varepsilon \\ sv_2 = -1.5\varepsilon \end{cases} \tag{11}$$

and in the next sampled time

$$\begin{cases} sv_1 = 2\varepsilon \;\Rightarrow\; y = 2\varepsilon \;\Rightarrow\; t_{a1} = 2\varepsilon \\ sv_2 = -1.5\varepsilon \end{cases} \;\Rightarrow\; t_{sum} = 0.5\varepsilon \tag{12}$$

which corresponds to the most accurate interval [-0.25, 0.25].

This simple example confirms the selection of AA instead of IA, particularly in structures with feedback loops. Although the cancellation effect is not necessarily present in all the structures, it commonly appears in most DSP realizations. For this reason, it is highly recommended to use this arithmetic when performing interval-based analysis of DSP systems.
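The behaviour of Fig. 2 can be sketched with a simplified recursion y(k) = x − y(k−1) − 0.75·y(k−2), which collapses the Fig. 2 signals into a single line; this is only an approximation of the structure, and the helper names are illustrative:

```python
# IA vs AA on a feedback recursion y(k) = x - y(k-1) - 0.75*y(k-2),
# with a single shared input uncertainty x in [-1, 1].
def ia_sub(a, b):          # [a0, a1] - [b0, b1]
    return (a[0] - b[1], a[1] - b[0])

def ia_scale(c, a):        # c * [a0, a1], for c > 0
    return (c * a[0], c * a[1])

x = (-1.0, 1.0)
y1, y2 = (0.0, 0.0), (0.0, 0.0)
for _ in range(3):         # IA: widths can only grow in the loop
    y = ia_sub(ia_sub(x, y1), ia_scale(0.75, y2))
    y1, y2 = y, y1
print(y[1] - y[0])         # → 7.5 : IA width blows up

c1, c2 = 0.0, 0.0
for _ in range(3):         # AA: track the coefficient of the shared symbol
    c = 1.0 - c1 - 0.75 * c2
    c1, c2 = c, c1
print(2 * abs(c))          # → 0.5 : AA width stays exact
```

After three steps IA reports a width of 7.5, while the affine computation cancels the shared uncertainty and yields a width of 0.5, i.e. the interval [−0.25, 0.25] that also appears as the exact value of *tsum* in the example.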

When there are multiple simultaneous uncertainty sources, it is necessary to use an oriented identifier for each source, in addition to the average value of the signals, which are the elements offered by AA to perform the computations. Moreover, the objective of AA is to accurately determine the results of the linear operations (additions, subtractions, constant multiplications and delays), and the purpose of the filters is to perform a given linear transformation of the input signal. Consequently, the features offered by AA match perfectly with the requirements of the interval-based simulations of the unquantized digital filter structures.

When the quantization operations are included in this type of analysis, the affine forms must be adjusted to include all the possible values of the results. Since AA keeps track of the effects of the uncertainty sources (the noise terms can be seen as the first-order relationship between each uncertainty source and the signals), the affine forms are easily modified to simulate the effects of the quantization operations in the structures containing feedback loops.

In summary, one of the most important problems of IA when performing accurate interval-based simulations of DSP realizations is the cancellation problem. The use of AA, in combination with the modification of the affine forms in the quantization operations, solves this problem and allows performing accurate analyses of the linear structures, even when they contain feedback loops.

Figure 3.c shows another cosine signal of the same amplitude and width, length 256 and period 8. Figures 3.b and 3.d show the computed FFTs for each case, where each black line represents a data interval.

Fig. 3. Examples of FFTs of deterministic interval signals: (a) First 200 samples of a cosine signal of length 1024, period 32, and interval widths 1/8 in all the samples. (b) FFT of the previous signal. (c) First 75 samples of a cosine signal of length 256, period 8, and interval widths 1/8 in all the samples. (d) FFT of the previous signal.

As expected, these figures clearly show that the output intervals in the transformed domain have the form of the numerical transform, plus a given level of uncertainty in all the samples. In addition, Figures 3.b and 3.d also provide: (i) the values of the deviations in the transformed domain in each sample with respect to the numerical case; and (ii) the maximum levels of uncertainty associated with the uncertainties of the inputs.

The second part of this experiment evaluates how each uncertainty separately affects the FFT samples. As mentioned above, by performing a separate analysis of how each uncertainty affects the input samples, we are characterizing the quantization effects of the FFT. In this case, step 3 is replaced by the following statement:

3. Include one uncertainty in the specified sample of the input signals.

which is performed by generating a delta interval in the specified position, and adding it to the input signal.
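The linearity argument behind these FFT experiments can be sketched directly: transform the interval centers with an ordinary DFT and bound the radii separately, since each sample carries an independent uncertainty. This is a simplified sketch with illustrative names and an 8-point cosine instead of the signals of Fig. 3:

```python
import cmath
import math

# Plain DFT of the interval centers.
def dft(xs):
    n = len(xs)
    return [sum(x * cmath.exp(-2j * math.pi * k * m / n)
                for m, x in enumerate(xs)) for k in range(n)]

centers = [math.cos(2 * math.pi * m / 8) for m in range(8)]  # cosine, period 8
radii = [1 / 16] * 8              # interval width 1/8 in every sample
X = dft(centers)
# Every twiddle factor has magnitude 1, so the worst-case radius of each
# output bin is simply sum(radii).
print(round(max(abs(v) for v in X), 6), sum(radii))   # → 4.0 0.5
```

The transformed centers reproduce the numerical FFT (peaks of magnitude 4 at the cosine bins), while the uniform input widths add the same worst-case uncertainty level of 0.5 to every output sample, matching the behaviour described for Figures 3.b and 3.d.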
