**2. Deconvolution procedure**

#### **2.1 Background**


Essah, 1988; Brianzi, 1994; Stone, 1974; Connolly et al, 1998; Berger et al, 1999; Thompson et al, 1991; Fischer et al, 1998). To this end, the solution is subjected to certain limitations by introducing into a goal function some additional limitative operators, whose shape is chosen depending on the formalism used for the solution of the ill-posed problem; usually the goal function is the mismatch between the convoluted solution and the initial data (Makarov, 1999). Indeed, different forms of limitative operator have been used. For example, Collins and Dowsett (Collins et al, 1992) and Allen and Dowsett (Allen et al, 1993) have used the entropy function as a limitative operator. Based on the Tikhonov-Miller regularization, Gautier et al (Gautier et al, 1998) have used a limitative operator defined as the smoothness of the solution. Mancina et al (Mancina et al, 2000) have introduced *a priori* a pre-deconvoluted signal as a model of the solution in an iterative regularized method. Nevertheless, the results of most of these approaches contain artifacts with negative concentrations, which are not physically acceptable. The origin of these artifacts is related to the presence of strong local high-frequency components in the signal which form part of the noise. To remove the negative components from the deconvoluted profile, some algorithms with non-negativity constraints have been proposed (Makarov, 1999; Gautier et al, 1998; Prost et al, 1984). These methods, which constrain the data to be positive everywhere, are sensitive to noise, i.e., a small perturbation in the data can lead to a great difference in the deconvoluted solution. A truncation of the negative data is an arbitrary operation and is not acceptable, since it results in an artificially steep slope and can lead to the adoption of subjective criteria for profile assessment (Herzel et al, 1995). Moreover, noise in the data increases the distance between the real signal and its estimate; therefore, an a priori constraint is not enough, and an oscillation-free deconvolution is necessary.

To overcome these limits, it is important to adopt a powerful deconvolution method that leads to a smoothed and stable solution without the application of any kind of constraint. In this context, multiscale deconvolution (MD), which has never been used to recover SIMS profiles, may be the most appropriate technique.

The MD provides a local smoothness property, with a high smoothness level in unstructured regions of the spectrum where only background occurs and a low smoothness level where structures arise (Fischer et al, 1998). Based on the wavelet transform, the MD seems to be a good solution that can yield information about the location of certain frequencies in the profile on different frequency scales. Therefore, the high frequencies, which are related to noise, can be localized and controlled at different levels of the wavelet decomposition. The multiscale description of signals has facilitated the development of wavelet theory and its application to numerous fields (Averbuch & Zheludev, 2009; Charles et al, 2004; Fan & Koo, 2002; Neelamani et al, 2004; Zheludev, 1999; Rashed et al, 2007; Garcia-Talavera et al, 2003; Starck et al, 2003; Jammal et al, 2004; Rucka et al, 2006). This chapter is intended to explore the capabilities of wavelets for the deconvolution framework. The proposed idea is to introduce a wavelet-based methodology into the Tikhonov-Miller regularization scheme: shrinking the wavelet coefficients of the blurred signal and of the estimated solution at each resolution level allows a local adaptation of the limitative operator in the quadratic Tikhonov-Miller regularization. This compensates for the high frequencies, which are related to noise. As a result, the oscillations which appear in classical regularization methods can be removed, leading to a smoothed and stable solution.

Depth profiling in SIMS analysis is mathematically described by the convolution integral which is governed by the depth resolution function (DRF), h(z). If the integral over h(z) is normalized to unity, then the measured (convolved) signal is given by the well-known convolution integral

$$y(z) = \int_{-\infty}^{+\infty} h(z - z')\, x(z')\, dz' + b(z), \tag{1}$$

where x(z) is the compositional depth distribution function and b(z) is the additive noise.

This work deals with the deconvolution of depth profiling SIMS data. Therefore, it is important for further consideration to know the shape of the DRF that is typical of SIMS profiles. We have chosen to describe the DRF analytically in a form initially proposed by Dowsett et al (Dowsett et al, 1994), which consists of a double exponential function convolved with a Gaussian function. This DRF is described by three parameters: λu (the rising exponential decay), σg (the standard deviation of the Gaussian function), and λd (the falling exponential decay). The latter characterizes the residual mixing effect, which is considered to be the main mechanism responsible for the degradation of the depth resolution (Boulakroune et al, 2007; Yang et al, 2006). For any possible values of these parameters, the DRF is normalized to unity. The consequences of the fact that the resolution function can be represented in the form of a convolution have been described elsewhere (Gautier et al, 1998; Dowsett et al, 1994; Collins et al, 1992; Allen et al, 1993).
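As an illustration, such a DRF can be constructed numerically as the convolution of a double exponential with a Gaussian and then normalized. The following Python sketch uses arbitrary parameter values, not the measured DRF of this chapter:

```python
import numpy as np

def dowsett_drf(z, lam_u=1.0, lam_d=3.0, sigma_g=1.5):
    """Numerical sketch of a Dowsett-type DRF: a double exponential
    (rising decay lam_u, falling decay lam_d) convolved with a Gaussian
    of standard deviation sigma_g; the area is normalized to unity.
    All parameter values are illustrative, in the same units as z."""
    dz = z[1] - z[0]
    # Double exponential: rising edge for z < 0, falling edge for z >= 0.
    dexp = np.where(z < 0.0, np.exp(z / lam_u), np.exp(-z / lam_d))
    gauss = np.exp(-0.5 * (z / sigma_g) ** 2)
    h = np.convolve(dexp, gauss, mode="same")
    return h / (h.sum() * dz)  # normalize the integral over z to unity

z = np.arange(-20.0, 20.0, 0.1)                    # depth axis (e.g., nm)
h = dowsett_drf(z)
print(abs(h.sum() * (z[1] - z[0]) - 1.0) < 1e-9)   # True: unit area
```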

For a discrete system, eq. (1) can be written as

$$y_k = \sum_{i=0}^{N-1} x_i\, h_{k-i} + b_k, \qquad k = 0, \dots, 2N-2, \tag{2}$$

where N is the number of samples of the vectors h and x. Equation (2) can be rewritten as

$$\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{b}, \tag{3}$$

where H is a matrix built from h(z). In the case of a linear and shift-invariant system, H is a convolution operator (circular Toeplitz matrix). This means that the multiplication of H with the vector x leads numerically to the same operation as the analytical convolution of h(z) with x(z).
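This equivalence is easy to check numerically. A minimal sketch, assuming circular (periodic) boundary conditions and toy data:

```python
import numpy as np
from scipy.linalg import circulant

rng = np.random.default_rng(0)
N = 64
h = np.exp(-np.arange(N) / 4.0)
h /= h.sum()                      # toy response, normalized to unity
x = rng.random(N)                 # toy compositional profile

# Column k of the circulant matrix H is h cyclically shifted by k samples,
# so H @ x is exactly the circular convolution of h with x.
H = circulant(h)
y_matrix = H @ x
y_circular = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)))
print(np.allclose(y_matrix, y_circular))   # True
```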



The problem of the recovery of the actual function x from eq. (3) is an ill-posed problem in the sense of Hadamard (Varah, 1983). Therefore, it is affected by numerical instability, since y contains experimental noise. The term incorrectly posed or ill-posed problem means that the solution x of eq. (3) may not be unique, may not exist, or may not depend continuously on the data. In other words, H is an ill-conditioned matrix, and/or small variations in the data due to noise result in an unbounded perturbation in the solution.

It is well-known that the function H(υ) (the spectrum of the DRF) is a low-pass filter (Allen, 1993; Barakat, 1997; Berger, 1999; Gautier, 1998). Its components are thus equal or very close to zero for frequencies above a certain cut-off frequency υc. Components close to υc are strongly attenuated by the convolution. Furthermore, in the presence of an ill-posed problem, some components below υc can be very small, almost null (see Fig. 1). In this case, the inversion of the convolution equation fails for these components, or leads to a very unstable solution.

Fig. 1. DFT of the depth resolution function; DRF measured at 8.5 keV/O2+.

To solve an ill-posed problem, it is mandatory to find a solution so that the small components of H(υ) do not hinder the deconvolution process, i.e., to stabilize the solution. Moreover, the resolution of an ill-posed problem in the presence of noise leads to an infinite number of solutions, among which it is necessary to choose the unique solution that fits the problem we are trying to solve.
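A small numerical experiment makes the instability tangible. With a toy Gaussian response standing in for the DRF (an assumption, not the measured function), naive inverse filtering divides the noise by the near-null components of H(υ), and the estimate is destroyed:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256
z = np.arange(N)
# Toy Gaussian response (strongly low-pass), normalized to unit sum.
h = np.exp(-0.5 * ((z - N // 2) / 2.0) ** 2)
h /= h.sum()
x = (np.abs(z - N // 2) < 10).astype(float)    # toy rectangular profile

Hf = np.fft.fft(np.fft.ifftshift(h))           # zero-phase transfer function
y = np.real(np.fft.ifft(Hf * np.fft.fft(x)))   # blurred profile
y += 1e-6 * rng.standard_normal(N)             # tiny additive noise b

# Naive inversion: divide by H(nu), including its almost-null components.
x_naive = np.real(np.fft.ifft(np.fft.fft(y) / Hf))

print(f"min |H(nu)|: {np.abs(Hf).min():.1e}")   # ~1e-9: almost null
print(f"max error |x_naive - x|: {np.abs(x_naive - x).max():.1e}")
```

Even with noise six orders of magnitude below the signal, the reconstruction error far exceeds the unit-height profile.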

Therefore, in order to solve this problem, a regularization method must be included. This means that the original problem is replaced by an approximate one whose solutions are significantly less sensitive to errors in the data, y. Several regularization methods have been discussed in refs. (Iqbal, 2003; Varah, 1983; Essah, 1988; Brianzi, 1994; Stone, 1974; Connolly et al, 1998; Berger et al, 1999; Thompson et al, 1991). All of these methods are based on the incorporation of a priori knowledge into the restoration process to achieve stability of the solution.


The basic underlying idea in the regularization approaches is formulated as an optimization problem whose general expression is

$$L\{\tilde{x}(\alpha, y)\} = L_1(x, \tilde{x}_0) + \alpha\, L_2(x, \tilde{x}_{\infty}), \qquad x \in X, \tag{4}$$

where L1 is a quadratic distance, which provides maximum fidelity to the data; $\tilde{x}_0$ is the least squares solution, consistent with the data; L2 is a stabilizing function, which measures the distance between x and an extreme solution $\tilde{x}_{\infty}$ corresponding to an a priori ultrasmooth solution. The restoration methods cited in references (Iqbal, 2003; Varah, 1983; Essah, 1988; Brianzi, 1994; Stone, 1974; Connolly et al, 1998; Berger et al, 1999; Thompson et al, 1991) differ from each other in the choice of the distance L2. The choice leads either to a deterministic or a stochastic regularization. α is the regularization parameter, which controls the trade-off between stability (fidelity to the a priori) and accuracy of the solution (fidelity to the data). X represents the set of possible solutions.

#### **2.2 Tikhonov-Miller regularization**

As shown in the previous section, the regularization is achieved through a compromise between choosing one solution in the set of solutions that lead to a reconstructed signal close to the measured data (fidelity to the data), and in the set of stable solutions that conform to some prior knowledge of the original signal (fidelity to the a priori). This means that the solution is considered to be close to the data if the reconstructed signal Hx is close to the measured one y, i.e., if $\|y - Hx\|^2$ is reasonably small. Thus, the first task of the deconvolution procedure is to minimize the quadratic distance between y and Hx. Unfortunately, solutions that lead to very small values of $\|y - Hx\|^2$ oscillate and are therefore unacceptable. In order to get a stable solution, one must choose another criterion that checks whether the solution is consistent with the physics of the deconvolution problem: it must be physically acceptable, i.e., a smoothed solution. The smoothness of the solution can be described by its regularity $r^2$, defined as

$$\left\| D\mathbf{x} \right\|^2 \le \mathbf{r}^2,\tag{5}$$

where D is a stabilizing operator. The choice of D is based on the processing context and some a priori knowledge about the original signal. D is usually designed to smooth the estimated signal, so a gradient or a discrete Laplacian is conventionally chosen. In this work, the filter used is the discrete Laplacian [1, -2, 1], whose spectrum is that of a high-pass filter (Gautier et al, 1998; Mancina et al, 2000; Burdeau et al, 2000). This results in the minimization of the quadratic functional proposed by Tikhonov

$$\tilde{\mathbf{x}} = \operatorname*{argmin}_{x} \left[ \left\| y - Hx \right\|^2 + \alpha \left( \left\| Dx \right\|^2 - r^2 \right) \right], \tag{6}$$

where "argmin" denotes the argument that minimizes the expression between the brackets. Perfect fidelity to the data is achieved for α = 0, whereas perfect matching with a priori knowledge is achieved for α = ∞. Therefore, it is necessary to find the optimum α and, hence, a smoothing factor at which the solution of eq. (6) is well-stabilized and still close to a real distribution. This regularization parameter α can be estimated by a variety of techniques (Iqbal, 2003; Varah, 1983; Essah, 1988; Brianzi, 1994; Stone, 1974; Connolly et al, 1998; Berger et al, 1999; Thompson et al, 1991). In a simulation where the regularity of the solution is known, α = b2/r2, where b2 is an upper bound for the total power of the noise. Unfortunately, in the real case, there is no knowledge of the regularity of the real profile, but it can be estimated by means of the generalized cross-validation (Thompson et al, 1991) which applies well to Gaussian white noise. The regularized solution takes the following form:

$$\tilde{\mathbf{x}} = \left(\mathbf{H}^T \mathbf{H} + \alpha\, \mathbf{D}^T \mathbf{D}\right)^{-1} \mathbf{H}^T \mathbf{y} = \left(\mathbf{H}^{+}\right)^{-1} \mathbf{H}^T \mathbf{y}, \tag{7}$$


where $\mathbf{H}^{+} = \mathbf{H}^T \mathbf{H} + \alpha\, \mathbf{D}^T \mathbf{D}$.
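For illustration, eq. (7) can be evaluated directly with circulant matrices. This is a sketch under the same toy assumptions as above (circular boundaries; h and α are placeholders, whereas the chapter's own α is estimated by generalized cross-validation):

```python
import numpy as np
from scipy.linalg import circulant, solve

def tikhonov_miller(y, h, alpha):
    """Closed-form Tikhonov-Miller solution of eq. (7):
    x = (H^T H + alpha D^T D)^{-1} H^T y,
    with D the circular discrete Laplacian [1, -2, 1]."""
    N = len(y)
    H = circulant(h)
    d = np.zeros(N)
    d[0], d[1], d[-1] = -2.0, 1.0, 1.0   # first column of the circulant D
    D = circulant(d)
    A = H.T @ H + alpha * (D.T @ D)      # the generalized matrix H+
    return solve(A, H.T @ y, assume_a="pos")

# Example call on the toy blurred profile y and response h defined earlier:
# x_reg = tikhonov_miller(y, np.fft.ifftshift(h), alpha=1e-3)
```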

The matrix H characterizing the deconvolution process before regularization is replaced by the generalized matrix $H^{+} = H^T H + \alpha D^T D$, which is better conditioned. This step amounts to a modification of the eigenvalues of H, and the system thus becomes more stable. Figure 2 shows the spectra of the DRF (H), the filter D, and the generalized matrix $H^{+}$.

The choice of the regularization operator D should not constitute a difficulty, as long as the rule on the modification of the eigenvalues is respected. The choice most critical to the reconstruction quality is that of the regularization parameter α. Indeed, a poor estimation of this parameter leads to worse conditioning of the matrix H and, as a consequence, a degenerated solution.

Fig. 2. Spectra of DRF (*H*), filter *D*, and the generalized matrix *H*+. Here the regularization parameter α is overestimated, which leads to a well-conditioned *H*+.
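The conditioning behaviour illustrated in Figs. 2 and 3 can be reproduced numerically: for circulant H and D, the generalized matrix $H^{+}$ is diagonalized by the DFT, so its spectrum is simply $|H(\nu)|^2 + \alpha |D(\nu)|^2$. The sketch below uses a toy Gaussian DRF together with the value αc quoted later in the text; both stand in for the measured quantities:

```python
import numpy as np

N = 256
z = np.arange(N)
h = np.exp(-0.5 * ((z - N // 2) / 4.0) ** 2)
h /= h.sum()                                   # toy low-pass DRF
Hf = np.abs(np.fft.fft(np.fft.ifftshift(h)))   # |H(nu)|: low-pass

d = np.zeros(N)
d[0], d[1], d[-1] = -2.0, 1.0, 1.0             # circular Laplacian [1, -2, 1]
Df = np.abs(np.fft.fft(d))                     # |D(nu)|: high-pass

alpha_c = 5.9290e-4                            # classical value from the text
for K in (1, 10, 100, 1000):
    spectrum = Hf**2 + K * alpha_c * Df**2     # spectrum of H+ for K*alpha_c
    print(f"K = {K:4d}: condition number ~ {spectrum.max() / spectrum.min():.1e}")
```

The printed condition number drops by orders of magnitude as K grows, mirroring the increasingly regularized $H^{+}$ of the figures.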

Actually, the regularization can guarantee unicity and stability of the solution but cannot lead to a very satisfactory result. The quantity of information brought is not sufficient to obtain a solution close to the ideal solution, because this regularization provides global properties of the signal. Barakat et al (Barakat et al, 1997) have proposed a method based on Tikhonov regularization combined with an a priori model of the solution. The idea of such a model is to introduce local characteristics of the signal. This model may contain discontinuities whose locations and amplitudes are imposed. The new functional to be minimized with respect to x is defined as follows:

$$L = \left\| y - H\tilde{x} \right\|^2 + \alpha \left\| D\left(\tilde{x} - x_{\text{mod}}\right) \right\|^2, \tag{8}$$

where $x_{\text{mod}}$ is an a priori model of the solution. The solution of eq. (8) is given by:

$$\tilde{x} = \left(H^T H + \alpha\, D^T D\right)^{-1}\left(H^T y + \alpha\, D^T D\, x_{\text{mod}}\right). \tag{9}$$

The strategy developed in Barakat's algorithm is useful only if the a priori information is quite precise; the quality of the solution depends on the accuracy of that information.
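A minimal sketch of eq. (9), reusing the circulant construction above; the model x_mod and all parameter values are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import circulant, solve

def barakat(y, h, alpha, x_mod):
    """Eq. (9): Tikhonov-Miller solution biased toward an a priori model
    x_mod. Setting x_mod = 0 recovers the ordinary solution of eq. (7)."""
    N = len(y)
    H = circulant(h)
    d = np.zeros(N)
    d[0], d[1], d[-1] = -2.0, 1.0, 1.0          # circular Laplacian [1, -2, 1]
    D = circulant(d)
    DtD = D.T @ D
    return solve(H.T @ H + alpha * DtD,
                 H.T @ y + alpha * (DtD @ x_mod), assume_a="pos")
```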

Mancina et al (Mancina et al, 2000) proposed to iterate the algorithm of Barakat (Barakat et al, 1997), using a pre-deconvoluted signal as a model of the solution (an intermediate solution between the ideal solution, i.e., the input signal, and the measured one) with sufficient regularization. The mathematical formulation of the Mancina approach in Fourier space is as follows:

$$\begin{cases} \hat{X}_{n+1} = \dfrac{H^{*}Y + \alpha \left| D \right|^2 X_{\text{mod},n}}{\left| H \right|^2 + \alpha \left| D \right|^2} \\[2mm] X_{\text{mod},n} = TF\left[\, C\hat{x}_n \,\right] \\[1mm] \hat{x}_n = TF^{-1}\left[\, \hat{X}_n \,\right] \\[1mm] X_{\text{mod},0} = 0 \end{cases} \tag{10}$$

where H\* is the complex conjugate of H, TF denotes the Fourier transform (TF⁻¹ its inverse), and C represents the constraint operator, which must be applied in the time domain after an inverse Fourier transformation.
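A compact sketch of the iteration of eq. (10) in Fourier space. The constraint operator C is taken here to be positivity clipping, an assumption consistent with the non-negativity discussion above (the chapter does not fix C explicitly):

```python
import numpy as np

def mancina_deconvolve(y, h, alpha, n_iter=20):
    """Iterative scheme of eq. (10): each pass re-solves the regularized
    problem using the constrained previous estimate as the model X_mod.
    The first iteration (X_mod = 0) reduces to ordinary Tikhonov-Miller."""
    N = len(y)
    Hf = np.fft.fft(h)
    d = np.zeros(N)
    d[0], d[1], d[-1] = -2.0, 1.0, 1.0            # circular Laplacian [1, -2, 1]
    Df2 = np.abs(np.fft.fft(d)) ** 2
    Yf = np.fft.fft(y)
    denom = np.abs(Hf) ** 2 + alpha * Df2
    Xmod = np.zeros(N, dtype=complex)             # X_mod,0 = 0
    x = np.zeros(N)
    for _ in range(n_iter):
        Xf = (np.conj(Hf) * Yf + alpha * Df2 * Xmod) / denom
        x = np.real(np.fft.ifft(Xf))              # x_n = TF^{-1}[X_n]
        Xmod = np.fft.fft(np.clip(x, 0.0, None))  # X_mod,n = TF[C x_n]
    return x
```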

Actually, in most of the classical monoresolution deconvolution methods, the results obtained are oscillatory (Makarov, 1999; Gautier et al, 1998; Fares et al, 2006; Dowsett et al, 1994; Mancina et al, 2000; Yang et al, 2006; Shao et al, 2004). The generated artifacts are mainly due to the strong presence of high-frequency components, which are not compensated by the regularization parameter α associated with the regularization operator D, since, in these methods, this parameter is applied in a global manner to all the frequency bands of the signal. As a result, the low frequencies, which contain the useful signal, are treated in the same way as the high frequencies, which are mainly noise. Thus, at α = 0, eq. (7) corresponds to the minimum of the functional in eq. (6) without smoothing of x. The corresponding solution is applicable only in the perfect case, i.e., if there are no errors or noise in the experimental distribution y. Real y always contains errors, and minimization of eq. (6) using α = 0 produces strong fluctuations of the solution (a parameter α that is too weak leads to a solution dominated by the noise within the observed data). With an increase of α, the role of the second term in eq. (6) increases, and the solution stabilizes and becomes increasingly smooth. However, if α is too large, surplus smoothing may noticeably broaden the solution and conceal its important features (a parameter α that is too high leads to a solution that is not very sensitive to noise, but it is very far from the measured data). It is therefore necessary to find the optimum α and, hence, a smoothing factor at which the solution to problem (6) is well-stabilized but still close to the real distribution. Moreover, in the iterative algorithms, if the regularization parameter is not well calculated, the oscillations created at iteration n are amplified at iteration n + 1, which degenerates the final solution. The value of α, the regularization parameter associated with D, is very important in the regularization process, and its value determines the quality of the final solution. This can easily be understood if one analyzes the generalized matrix $H^{+}$. As α increases, the matrix $(H^{+})^{-1}$ tends toward a diagonal matrix, while the vector $H^T y$, which is broadened in comparison to the initial data vector y due to the multiplication by the transposed distortion matrix, remains unchanged. As a result, with an increase of α, the shape of the solution tends to $H^T y$, i.e., to the considerably broadened initial data. Figure 3 shows an example of the evolution of the spectrum of the matrix $H^{+}$ for various values of the regularization parameter α.

According to Fig. 3, the determination of the classical regularization parameter αc for the SIMS profile led to a value of 5.9290×10⁻⁴. For this value, the spectrum of the generalized matrix $H^{+}$ is oscillatory (the matrix is not well-conditioned), leading to an unstable solution. To stabilize the system further, Mancina et al (Mancina et al, 2000) proposed multiplying the regularization parameter by a positive factor. The multiplication of this parameter by a factor K (K = 10, 10², 10³) leads to an increasingly regularized matrix $H^{+}$ (Fig. 2). Nevertheless, this multiplication is arbitrary and is not based on any physical support.

Fig. 3. Evolution of the spectrum of the matrix *H*+ according to Kαc (K = 1, 10, 10², 10³).

Since the real distribution that is to be deconvoluted is unknown other than in some special cases, the choice of the optimum α requires the use of indirect and sometimes non-strict and ambiguous criteria. This causes a clear indeterminacy in the choice of α. One should note that this indeterminacy in ill-posed problems is inherent to any data deconvolution method.

Conventionally, the Tikhonov-Miller approach of searching for the optimum α uses additional information on the level of noise in the initial data. This is often inconvenient, for example, if the data vary over a wide range and the statistical noise level changes considerably from point to point depending on the signal level.

Generally, the choice of optimum smoothing for deconvolution of an arbitrary set of data requires a special study, and this work only outlines the principle for solving this problem. The example in Fig. 3 shows that the regularization parameter must be accurately determined and locally adapted in the differently treated frequency bands in order to ensure a non-aberrant result. This allows the deconvolution of signals previously decomposed by projection onto a wavelet basis.

**3. Discrete wavelet transform**

#### **3.1 Background**

Wavelet theory is widely used in many engineering disciplines (Rashed et al, 2007; Garcia-Talavera et al, 2003; Starck et al, 2003; Jammal et al, 2004; Rucka et al, 2006), and it provides a rich source of useful tools for applications in time-scale types of problems. Interest in wavelets grew considerably when Mallat (Mallat, 1989) established a connection between wavelets and signal processing. The discrete wavelet transform (DWT) is an extremely fast algorithm that transforms data into wavelet coefficients at discrete intervals of time and scale, instead of at all scales. It is based on dyadic scaling and translating, and it is possible if the scale parameter varies only along the dyadic sequence (dyadic scales and positions). It is basically a filtering procedure that separates the high- and low-frequency components of signals with high-pass and low-pass filters by a multiresolution decomposition algorithm (Mallat, 1989). Hence, the DWT is represented by the following equation:

$$W(j,k) = \sum_{n} y(n)\, 2^{-j/2}\, \psi\left(2^{-j}n - k\right), \tag{11}$$

where y is the discretized heights of the original profile measurements, ψ is the discrete wavelet, and n is the sample number. The translation parameter k determines the location of the wavelet in the time domain, while the dilatation parameter j determines the location in the frequency domain as well as the scale or the extent of the space-frequency localization.

The DWT analysis can be performed using a fast, pyramidal algorithm by iteratively applying low-pass and high-pass filters, with subsequent down-sampling by 2 (Mallat, 1989). Each level of the decomposition algorithm then yields the low-frequency components of the signal (the approximation) and its high-frequency components (the details).
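For illustration, Mallat's pyramidal decomposition is readily available in the PyWavelets package (our choice of implementation, not one named by the chapter); the wavelet "db4" and the decomposition depth of 4 are arbitrary:

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(2)
n = np.arange(256)
y = np.exp(-0.5 * ((n - 128) / 12.0) ** 2)     # toy SIMS-like peak
y += 0.01 * rng.standard_normal(n.size)        # additive measurement noise

# Pyramidal algorithm: each level splits the current approximation into
# a low-frequency approximation and high-frequency detail coefficients.
coeffs = pywt.wavedec(y, wavelet="db4", level=4)
approx, details = coeffs[0], coeffs[1:]        # [cA4, cD4, cD3, cD2, cD1]
for name, c in zip(["cD4", "cD3", "cD2", "cD1"], details):
    print(f"{name}: {c.size} detail coefficients")

# The transform is invertible: the signal is recovered from its coefficients.
y_rec = pywt.waverec(coeffs, wavelet="db4")
print(np.allclose(y, y_rec[: y.size]))         # True
```

Because noise concentrates in the fine-scale detail subbands, this decomposition is what later allows the regularization to be adapted locally in each frequency band.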
