The affine operations accurately determine the results of the linear computations (additions, subtractions, constant multiplications and delays), and the purpose of the filters is to perform a given linear transformation of the input signal. Consequently, the features offered by AA match perfectly with the requirements of the interval-based simulations of the unquantised digital filter structures.

When the quantization operations are included in this type of analysis, the affine forms must be adjusted to include all the possible values of the results. Since AA keeps track of the effects of the uncertainty sources (the noise terms can be seen as the first-order relationship between each uncertainty source and the signals), the affine forms are easily modified to simulate the effects of the quantization operations in structures containing feedback loops.

In summary, one of the most important problems of IA in performing accurate interval-based simulations of DSP realizations is the cancellation problem. The use of AA, in combination with the modification of the affine forms in the quantization operations, solves this problem and allows accurate analysis of the linear structures, even when they contain feedback loops.

#### **3.2 Computation of the Fourier transform of deterministic interval-based signals**

The analysis of deterministic signals in DSP systems is of great importance, since most systems use or modify their properties in the frequency domain to send the information. In this sense, the decomposition of signals into finite or infinite sums of sinusoids by means of the Fourier transform allows us to evaluate these properties. Conversely, it is also widely known that a sufficient condition to characterize linear systems is to determine the variations that they produce in the properties of sinusoids of different frequencies.

The following experiment shows the variations of the properties of deterministic signals when intervals of a given width are included in one or all of their samples. These widths represent the possible uncertainties in these signals, and the experiment shows their effect on the associated signals in the transformed domain.

First, we evaluate the effects of including uncertainties of the same width in all the samples of the sequence. The steps required to perform this example are as follows:

1. Generate the Fast Fourier Transform (FFT) program file, specifying the number of points.
2. Generate the sampled sinusoidal signals to be used as inputs.
3. Include the uncertainty specifications in the input signals.
4. Compute the Fourier Transform (run the interval-based simulation).
5. Repeat steps 1-4, modifying the widths of the intervals of step 3.
6. Repeat the previous steps, modifying the periods of the sinusoids of step 2.

Steps 1 to 4 generate the FFT of the interval-based sinusoidal signals. Step 5 has been included to investigate the effects of incorporating uncertainties of a given width into all input samples of the FFT. By superposition, this should be equal to the numerical FFT of the mean values of the original signal, plus another FFT in which all the input intervals are centered at zero and have the same width. Finally, step 6 allows us to investigate the variations of the computed results according to the periods of the sinusoids.

Figure 3 shows two examples of cosine signals with equal-width intervals in all the samples and their respective computed FFTs. Figure 3.a corresponds to a cosine signal of amplitude 1, length 1024, period 32, and width 1/8 in all the samples, and Figure 3.c shows another cosine signal of the same amplitude and width, length 256 and period 8. Figures 3.b and 3.d show the computed FFTs for each case, where each black line represents a data interval.

As expected, these figures clearly show that the output intervals in the transformed domain have the form of the numerical transform, plus a given level of uncertainty in all the samples. In addition, Figures 3.b and 3.d also provide: (i) the values of the deviations in the transformed domain in each sample with respect to the numerical case, and (ii) the maximum levels of uncertainty associated with the uncertainties of the inputs.
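The superposition argument above can be sketched numerically. The following is a minimal NumPy sketch (not the chapter's simulation tool, and the variable names are ours): by linearity of the DFT, the interval FFT decomposes into the numerical FFT of the midpoint signal plus a worst-case radius contributed by the zero-centered intervals; since every DFT twiddle factor has unit magnitude, each output radius is bounded by the sum of the input radii.

```python
import numpy as np

N, period, width = 1024, 32, 1 / 8
n = np.arange(N)
mid = np.cos(2 * np.pi * n / period)   # midpoints of the interval samples
rad = np.full(N, width / 2)            # radius of every input interval

# (1) the numerical FFT of the midpoint signal, and
X_mid = np.fft.fft(mid)
# (2) a worst-case radius per output bin: |e^{-j*2*pi*k*n/N}| = 1, so the
#     radius of each DFT output is bounded by the sum of the input radii.
X_rad = rad.sum()                      # the same bound applies to every bin

print(np.argmax(np.abs(X_mid[:N // 2])))  # -> 32 (the sinusoid's frequency bin)
print(X_rad)                              # -> 64.0
```

This matches the behaviour of Figure 3: the output has the shape of the numerical transform plus a uniform level of uncertainty in all the samples.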

The second part of this experiment evaluates how each uncertainty separately affects the FFT samples. As mentioned above, by performing a separate analysis of how each uncertainty affects the input samples, we are characterizing the quantization effects of the FFT. In this case, step 3 is replaced by the following statement:

3. Include one uncertainty in the specified sample of the input signals.

which is performed by generating a delta interval in the specified position, and adding it to the input signal.
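As a sketch of the contrast with the first experiment (again plain NumPy, with the sample position and width taken from Figure 4): a single zero-centered delta interval at one sample contributes at most half its width to every output bin, so its effect is far smaller than that of equal-width intervals in all the samples.

```python
import numpy as np

N, period, p, w = 1024, 32, 27, 1 / 5
n = np.arange(N)
x_mid = np.cos(2 * np.pi * n / period)    # midpoint signal
r = np.zeros(N)
r[p] = w / 2                              # delta interval at sample 27

X_mid = np.fft.fft(x_mid)                 # numerical FFT of the midpoints
r_out = r.sum()                           # per-bin radius bound: |twiddle| = 1

print(r_out)   # -> 0.1, vs. 64.0 when all 1024 samples have width 1/8
```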


Figure 4.a shows a cosine signal of length 1024 and period 32, in which a single interval of width 1/5 has been included in sample 27, and Figure 4.b shows the computed FFT of this interval trace. In this case, two small intervals appear in the sampled frequencies 32 and 968, as well as in the values near 0 in the other frequencies. Unlike the results shown in Figure 3, the uncertainties associated with the input interval are very small in this case.

Fig. 4. Example of an FFT of a deterministic signal with a single interval: (a) First 200 samples of a cosine signal of length 1024, period 32 and interval width 1/5 in the sample 27. (b) FFT of the previous signal, with two small uncertainties in the sampled frequencies 32 and 968.

Figure 5 shows the details of the ripples generated by the uncertainties according to their positions in each trace. In the first case (Figure 5.a), the interval has been included in sample 16, which is a factor of the number of FFT points. In this case, there is no ripple. In the other three cases (Figures 5.b-d), the interval has been included in three different positions (17, 20 and 27, respectively), and there is a small ripple in the transformed domain, different in each case. Since the FFTs are linear systems, the large ripples that appear in Figures 3.b and 3.d are the sum of all the possible equal-width ripples in the frequency domain.

In summary, the inclusion of intervals in sinusoidal signals and the computation of the FFTs show the maximum and minimum deviations in the frequency domain due to the different uncertainties. It has been found that the uncertainties do not affect all the frequencies of the FFT in the same way, and that their effects depend on their positions in the trace. Although the intervals represent the maximum values of the uncertainties, and noise is commonly associated with second-order statistics, the variations in the computed interval widths imply that the noise generated by the FFT is not white, but follows a deterministic pattern.
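One way to see this deterministic pattern (a NumPy sketch under the interval-arithmetic propagation rule, using the signal of Figure 4 as the example): the real part of X[k] contains the term x[p]·cos(2πkp/N), so an input radius w/2 at sample p scales to (w/2)·|cos(2πkp/N)|, which varies with the frequency index k instead of being flat.

```python
import numpy as np

N, p, w = 1024, 27, 1 / 5
k = np.arange(N)
# Radius of the real part of X[k] induced by one input interval at sample p.
# The cosine factor makes the width frequency-dependent: the "noise" is not flat.
r_real = (w / 2) * np.abs(np.cos(2 * np.pi * k * p / N))
print(r_real.max() > r_real.min())   # -> True: the widths ripple across the bins
```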

#### **3.3 Analysis of the statistical parameters of random signals using interval-based simulations**

The following experiments show the variations of the statistical parameters of random signals (mean, variance and PDF) when random sequences are generated using the Monte-Carlo method, using intervals of a specified width instead of the traditional numerical simulations.



Fig. 5. Details of the ripples that occur in the transformed domain due to the presence of uncertainty intervals in the deterministic signals: (a) in a position which is a factor of the number of FFT points (16). (b) - (d) in other non-factor positions (17, 20 and 27, respectively). The vertical lines above the figures indicate the positions of the deltas, whose heights exceed the representable values in the graph.

The first part of this section analyzes the changes in the PDFs. To do this, data sequences following a particular PDF are generated, and they are later reconstructed and compared with the original results. The steps used to perform the experiments are as follows:

1. Generate the traces of the random samples following the specified PDF, and assign the width of the intervals.
2. Obtain the histogram of the trace, group the samples and plot the computed PDF.
3. Repeat steps 1 and 2 to reduce the variance of the parameters (*M* times).
4. Average the histograms obtained in step 3.
5. Repeat the previous steps assigning other interval widths.
Step 1 generates the sequences of samples that follow the specified PDF, and in step 2 the PDFs are recomputed from these samples. In this experiment, three types of PDFs have been used: (i) a uniform PDF in [-1, 1], (ii) a standard normal PDF (mean 0 and variance 1), and (iii) a bimodal PDF composed of two normal PDFs with means -3 and 3 and variance 1. Steps 3 and 4 have been included to reduce the variance of the results. Finally, step 5 allows selecting other interval widths.
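Steps 1-4 can be sketched as follows (a plain NumPy approximation, not the chapter's tool: we assume here that each interval sample spreads its unit mass uniformly over the histogram bins it overlaps, which reproduces the smoothing of the PDFs discussed in this section).

```python
import numpy as np

rng = np.random.default_rng(0)
M, L, w = 20, 5000, 1.0                    # averages, trace length, interval width
edges = np.linspace(-5.0, 5.0, 101)

def interval_hist(x, w, edges):
    """Histogram of interval samples [x - w/2, x + w/2]: each interval
    spreads its unit mass uniformly over the bins it overlaps."""
    lo, hi = x - w / 2, x + w / 2
    h = np.empty(len(edges) - 1)
    for i in range(len(h)):
        overlap = np.minimum(hi, edges[i + 1]) - np.maximum(lo, edges[i])
        h[i] = (np.clip(overlap, 0.0, None) / w).sum()
    return h

h = np.zeros(len(edges) - 1)
for _ in range(M):                          # steps 3-4: repeat and average
    h += interval_hist(rng.normal(size=L), w, edges)
pdf = h / (M * L * np.diff(edges))          # normalize to a density estimate
```

The resulting estimate is a boxcar-smoothed version of the numerical histogram, which is why the interval-based PDFs come out smoother than their numerical counterparts.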

Figure 6 presents the results of the three histograms using the Monte-Carlo method with: (i) numerical samples, (ii) intervals whose width is set to 1/8 of the variance, and (iii) intervals whose width is set to the variance of the distribution. All the histograms have been computed using 20 averages of 5000 data items each. It can be seen that the areas near the edges of the uniform distribution are modified, but the remaining parts of the distribution are also computed taking into account a larger number of points. It is also noticeable that the new PDFs are smoother than the ones computed using the numerical traces, which can be explained by the Central Limit Theorem.

Fig. 6. Distributions generated using traces of numbers, traces of intervals whose widths are set to 1/8 of the variance, and traces of intervals whose widths are set to the variance of the distribution. These traces are applied using the Monte-Carlo method to: (a) - (c) a uniform distribution in [-1, 1]; (d) - (f) a normal distribution with mean 0 and variance 1; (g) - (i) a bimodal distribution with modes 3 and -3 and variance 1.

Figure 7 details the central part and the tails of a normal distribution generated using traces of 100000 numbers and 5000 intervals. It can be observed that the transitions of the histograms are much smoother in the distribution generated using intervals. Although there are slight deviations from the theoretical values, these deviations (approximately 5% in the central part and 15% in the tails) are comparable to the deviations obtained by the numerical trace using 100000 numbers.

Fig. 7. Details of the normal distribution generated with numerical and interval traces: (a) and (b) Central part of the distribution; (c) and (d) Tail of the distribution.

Therefore, this experiment has shown that signals with normal distributions maintain their shape and statistical parameters in the interval-based simulations, but they require fewer computations to obtain similar degrees of accuracy.

The second part of this section evaluates the variations of the statistical estimators when interval samples of a specific width are used to compute the mean and variance of the random signals in the simulations. Now, the sequence of steps is as follows:

1. Generate the traces of the random samples following the specified PDF, and assign the width of the intervals.
2. Compute the mean and the variance of the trace.
3. Repeat steps 1 and 2 to reduce the variance of the parameters (*M* times).
4. Group the means and variances of the computed traces, and obtain the estimation and the variations of the statistical parameters.
5. Repeat the previous steps assigning other interval widths.
These steps allow the computation of the means and variances of the estimators, instead of averaging the computed histograms. Step 2 computes the mean and variance of the signals specified in step 1, and step 4 averages the results of the mean and variance of the estimators (in this experiment *M* is high, to ensure the reliability of estimator statistics).
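A sketch of these steps for a single trace (plain NumPy; the w²/12 variance correction is our assumption, modeling each interval as a uniform spread of mass, and is not necessarily the chapter's adjustment): the mean is a linear operation, so its interval-arithmetic value is exact and centered on the numerical mean, while a variance estimate that accounts for the spread of each interval grows with the interval width.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=5000)                 # step 1: numerical trace (midpoints)
w = 0.25                                  # interval width (1/4 of the variance)
lo, hi = x - w / 2, x + w / 2             # interval samples [x - w/2, x + w/2]

# Step 2 (mean): the mean is linear, so interval arithmetic is exact here
# and the midpoint of the interval mean equals the numerical mean.
mean_lo, mean_hi = lo.mean(), hi.mean()

# Step 2 (variance), modeling each interval as a uniform density of width w:
# the estimate grows by w**2 / 12, which becomes noticeable once the width
# is comparable to the variance of the distribution.
var_iv = x.var(ddof=1) + w ** 2 / 12
```

This mirrors the behaviour reported below: the interval-based mean estimator tracks the numerical one, while the mean of the variance estimator inflates for wide intervals.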

Figure 8 shows the evolution of the estimators of the mean and the variance as a function of the lengths of the traces (500, 1000 and 5000 samples) and the widths of the intervals (between 0 and 1). Figures 8.a-c show the averaged mean values computed by the estimator for the three lengths. It can be observed that the interval-based estimators tend to obtain slightly better results than those of the numerical simulation, although they are roughly of the same order of magnitude. Figures 8.d-f show the variances of these computations. In this case, all the results are approximately equal, and the values decrease (i.e. they become more precise) with longer simulations. Figures 8.g-i show the mean of the variance estimator of the interval-based simulations. It can be observed that when the intervals have small widths, the ideal values are obtained, but when the interval widths are comparable to the variance of the distribution (approximately from 1/4 of its value) the computed values significantly increase the variance of the estimator. Figures 8.j-l show the evolution of the variance of the variance estimator. The results are approximately equal in all cases, and decrease with longer simulations.

Fig. 8. Analysis of the values provided by the mean and variance interval-based estimators depending on the lengths of the traces: (a) - (c) average of the mean estimator, (d) - (f) variance of the mean, (g) - (i) mean of the variance of the estimator, (j) - (l) variance of the variance. In the four cases, the first column represents the average of 1000 simulations using traces of 500 samples; the second column, of 1000 samples; and the third column, of 5000 samples. The values of the abscissa (1 to 8) respectively represent the interval widths: 0, 1/64, 1/32, 1/16, 1/8, 1/4, 1/2 and 1.

Therefore, interval-based simulations tend to reduce the edges of the PDFs and to equalize the other parts of the distribution according to the interval widths. If no additional operation is performed, the edges of the PDFs may change significantly, particularly in uniform distributions. However, since these effects are known, they can possibly be compensated. When using normal signals, the mean and variance of the MC method are similar to the ones obtained in numerical simulations, but the mean of the variance tends to grow for widths above 1/8 of the variance. However, since the improvement in the computed accuracy is small, it does not seem to compensate for the increased complexity of the process.

#### **3.4 Discussion on interval-based simulations**


Section 3.1 has revealed the importance of using EIA in the interval-based simulation of DSP systems, particularly when they contain feedback loops. It has also shown that traditional IA provides overestimated results due to the cancellation problem. Although the analysis has been performed through a simple example, it can be shown that this problem occurs in most IIR realizations of order equal to or greater than two. If there are no dependencies, IA provides the same results as AA, but AA is recommended in the general case. In interval-based simulations of quantized systems, the affine forms must be modified to include all the possible values of the quantization operations without increasing the number of noise terms. The proposed approach solves the overestimation problem, and allows accurate analysis of linear systems with feedback loops.

Another important conclusion is that, since the propagation of uncertainties in AA is accurate for linear computations, the features of AA perfectly match with the requirements of the interval-based simulations of digital filters and transforms.
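The cancellation argument can be illustrated with a toy affine form (a minimal sketch of AA written for this discussion, not the EIA/AA library used in the experiments): because each noise term is tracked by name, linear operations combine coefficients, and the dependent expression x - x collapses to exactly zero, whereas plain IA doubles the radius.

```python
from dataclasses import dataclass, field

@dataclass
class Affine:
    x0: float                                   # central value
    terms: dict = field(default_factory=dict)   # noise symbol -> coefficient

    def __add__(self, o):
        t = dict(self.terms)
        for k, v in o.terms.items():
            t[k] = t.get(k, 0.0) + v            # shared symbols combine linearly
        return Affine(self.x0 + o.x0, t)

    def __neg__(self):
        return Affine(-self.x0, {k: -v for k, v in self.terms.items()})

    def __sub__(self, o):
        return self + (-o)

    def radius(self):
        return sum(abs(v) for v in self.terms.values())

x = Affine(1.0, {'e1': 0.5})   # the interval [0.5, 1.5]
print((x - x).radius())        # -> 0.0 (AA tracks the dependency)
# Plain IA: [0.5, 1.5] - [0.5, 1.5] = [-1.0, 1.0], radius 1.0 (cancellation problem)
```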

Section 3.2 has evaluated the effects of including one or more uncertainties in a deterministic signal. In addition to determining the maximum and minimum bounds of the variations of the signals in the frequency domain, the analyses have shown the position of the largest uncertainties. Since these amplitudes are not equal, the noise at the output of the FFT does not seem to be white. Moreover, its effect seems to be dependent on the position of the uncertainties in the time domain. The analyses based on interval computations have detected this effect, but they must be combined with statistical techniques to verify the results. A more precise understanding of these effects would help to recover weak signals in environments with low signal-to-noise ratios.

In Section 3.3, the effects of using intervals or extended intervals of a given width in the Monte-Carlo method instead of the traditional numerical simulations have been analyzed. In the first part, the results show that this type of processing softens the edges and the peaks of the PDFs, although these effects can be reduced by selecting smaller intervals or by preprocessing the probability function. In particular, normal distributions are better defined (due to the Central Limit Theorem) and, if the widths of the intervals are significantly smaller than the variance of the distribution, the differences with respect to the theoretical PDFs are smaller than with numerical simulations using the same number of samples. In the second part, the evolution of the mean and the variance of the mean and variance estimators has been studied for a normal PDF using the Monte-Carlo method for different interval widths. These estimators behave similarly to their numerical counterparts (slightly better in most cases), but the mean of the variance increases when the interval widths are greater than 1/8 of the variance of the distribution. Moreover, the increased complexity associated with the interval-based computations does not seem to compensate for the small improvement in the accuracy of the statistical estimators in the general case.

In summary, interval-based simulations are preferred when the PDFs are being evaluated, but the improvements are not significant when only the statistical parameters are computed. If the distributions contain edges (for example, in uniform or histogram-based distributions), a pre-processing or post-processing stage can be included to cancel the smoothing performed by the interval sets. Otherwise (as in normally distributed signals), this step can be avoided.
