**2. Theoretical background**

**2.1. The Moore-Penrose inverse**

We shall denote by $\mathbf{R}^{r \times m}$ the algebra of all $r \times m$ real matrices. For $T \in \mathbf{R}^{r \times m}$, $R(T)$ will denote the range of $T$ and $N(T)$ the kernel of $T$. The generalized inverse $T^\dagger$ is the unique matrix that satisfies the following four Penrose equations:

$$TT^\dagger = (TT^\dagger)^*, \qquad T^\dagger T = (T^\dagger T)^*, \qquad TT^\dagger T = T, \qquad T^\dagger TT^\dagger = T^\dagger,$$

where $T^*$ denotes the transpose matrix of $T$.

Let us consider the equation $Tx = b$, $T \in \mathbf{R}^{r \times m}$, $b \in \mathbf{R}^r$, where $T$ is singular. If $T$ is an arbitrary matrix, there may be no solution, one solution or infinitely many solutions, depending on whether $b \in R(T)$ or not, and on the rank of $T$. If $b \notin R(T)$, the equation has no solution. Therefore, another point of view of this problem is the following: instead of trying to solve the equation $\|Tx - b\| = 0$, we look for a vector $u$ of minimal norm that minimizes the norm $\|Tu - b\|$. Note that this vector $u$ is unique. So, in this case we consider the equation $Tx = P_{R(T)}b$, where $P_{R(T)}$ is the orthogonal projection onto $R(T)$. Since we are interested in the distance between $Tx$ and $b$, it is natural to make use of the $\|\cdot\|_2$ norm.

The following two propositions can be found in [12].

**Proposition 0.1.** *Let $T \in \mathbf{R}^{r \times m}$ and $b \in \mathbf{R}^r$, $b \notin R(T)$. Then, for $u \in \mathbf{R}^m$, the following are equivalent:*

*(i) $Tu = P_{R(T)}b$*
*(ii) $\|Tu - b\| \le \|Tx - b\|, \ \forall x \in \mathbf{R}^m$*
*(iii) $T^*Tu = T^*b$*

Let $\mathbf{B} = \{u \in \mathbf{R}^m \mid T^*Tu = T^*b\}$. This set of solutions is closed and convex; it therefore has a unique vector $u_0$ of minimal norm. In fact, $\mathbf{B}$ is an affine manifold of the form $u_0 + N(T)$. In the literature (cf. [12]), $\mathbf{B}$ is known as the set of the least squares solutions.

**Proposition 0.2.** *Let $T \in \mathbf{R}^{r \times m}$ and $b \in \mathbf{R}^r$, $b \notin R(T)$, and consider the equation $Tx = b$. Then, if $T^\dagger$ is the generalized inverse of $T$, we have that $T^\dagger b = u$, where $u$ is the minimal norm solution defined above.*

We shall make use of this property for the construction of an alternative method in image processing inverse problems.
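Propositions 0.1 and 0.2 can be checked numerically. The following is a minimal NumPy sketch (the matrix $T$ and vector $b$ are illustrative; `numpy.linalg.pinv` computes $T^\dagger$):

```python
import numpy as np

T = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 0.0]])           # rank 1, so T is singular
b = np.array([1.0, 0.0, 1.0])        # b does not lie in R(T)

u = np.linalg.pinv(T) @ b            # u = T† b

# u satisfies the normal equations T* T u = T* b (Proposition 0.1 (iii))
assert np.allclose(T.T @ T @ u, T.T @ b)

# Among all least squares solutions u + N(T), u has minimal norm:
# N(T) is spanned by (2, -1); moving along it leaves the residual
# unchanged but strictly increases the norm.
v = u + 0.1 * np.array([2.0, -1.0])
assert np.allclose(T @ u, T @ v)                # same residual
assert np.linalg.norm(u) < np.linalg.norm(v)    # but u is shorter
```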

#### **2.2. Image restoration problems**


The Moore-Penrose pseudoinverse is a useful concept in dealing with optimization problems, such as the determination of a "least squares" solution of linear systems. A typical application of the Moore-Penrose inverse is its use in image and signal processing and image restoration. The field of image restoration has seen tremendous growth in interest over the last two decades, see [1]; [5]; [6]; [14]; [28]; [29]. The recovery of an original image from degraded observations is of crucial importance and finds application in several scientific areas including medical imaging and diagnosis, military surveillance, satellite and astronomical imaging, and remote sensing. A number of algorithms have been proposed and intensively studied for achieving fast recovery and high resolution reconstruction of images, see [10]; [15]; [22]. The method presented in this article is based on the use of the Moore-Penrose generalized inverse of a matrix and provides a fast computational algorithm for fast and accurate digital image restoration. This article is an extension of the work presented in [7]; [8].


The general pointwise definition of the transform *τ*(*u*, *v*) that is used in order to convert an *r* × *r* pixel image *s*(*x*, *y*) from the spatial domain to some other domain in which the image exhibits more readily reducible features is given in the following equation:

$$\tau(u, v) = \frac{1}{r} \sum_{x=1}^{r} \sum_{y=1}^{r} s(x, y)\, g(x, y, u, v) \tag{1}$$

where *u* and *v* are the coordinates in the transform domain and *g*(*x*, *y*, *u*, *v*) denote the general basis function used by the transform. Similarly, the inverse transform is given as:

$$s(x, y) = \frac{1}{r} \sum\_{u=1}^{r} \sum\_{v=1}^{r} \tau(u, v) h(x, y, u, v) \tag{2}$$

where *h*(*x*, *y*, *u*, *v*) represents the inverse of the basis function *g*(*x*, *y*, *u*, *v*).

The two dimensional version of the function $g(x, y, u, v)$ in Equation (1) can typically be derived as a series of one dimensional functions. Such functions are referred to as 'separable', and we can derive the separable two dimensional functions as follows. The transform is first performed across $x$:

$$\tau'(u, y) = \frac{1}{\sqrt{r}} \sum_{x=1}^{r} s(x, y)\, g(x, u) \tag{3}$$

Moreover we transform across *y*:

$$\tau(u, v) = \frac{1}{\sqrt{r}} \sum_{y=1}^{r} \tau'(u, y)\, g(y, v) \tag{4}$$

and using Equation (3) we have

$$\tau(u, v) = \frac{1}{r} \sum_{x=1}^{r} \sum_{y=1}^{r} s(x, y)\, g(x, u)\, g(y, v) \tag{5}$$

We can use an identical approach to write Equation (1) and its inverse (Equation 2) in matrix form, using the standard orthonormal basis:

$$T = G S G^T, \qquad S = H T H^T \tag{6}$$

in which $T$, $S$, $G$ and $H$ are the matrix equivalents of $\tau$, $s$, $g$ and $h$ respectively. Because we use orthogonal basis functions, the inverse of the basis matrix is simply its transpose. Therefore, the complete process of performing the transform and then inverting it is:

$$S = HGSG^T H^T \tag{7}$$

In order for the transform to be reversible we need $H$ to be the inverse of $G$ and $H^T$ to be the inverse of $G^T$, i.e., $HG = G^T H^T = I$.

Given that $G$ is orthogonal, it is trivial to show that this is satisfied when $H = G^T$. Since $H$ is merely the transpose of $G$, the inverse function $h(x, y, u, v)$ of $g(x, y, u, v)$ is also separable.
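The matrix forms (6) and (7) can be verified with a small NumPy sketch; here $G$ is an arbitrary orthogonal matrix obtained from a QR factorisation purely for illustration, not any particular transform basis:

```python
import numpy as np

rng = np.random.default_rng(0)
r = 8
S = rng.random((r, r))                      # an r x r "image" in the spatial domain

# Any orthogonal matrix G can serve as a separable transform basis (Eq. 6);
# QR of a random matrix is used here only to obtain such a G.
G, _ = np.linalg.qr(rng.random((r, r)))
assert np.allclose(G.T @ G, np.eye(r))      # G is orthogonal

T = G @ S @ G.T                             # forward transform, Eq. (6)
S_back = G.T @ T @ G                        # inverse with H = G^T, Eq. (7)
assert np.allclose(S_back, S)               # H G S G^T H^T = S
```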

In the scientific area of image processing the analytical form of a linearly degraded image is given by the following integral equation:

$$x_{out}(i, j) = \int\!\!\int_D x_{in}(u, v)\, h(i, j; u, v)\, du\, dv$$

where $x_{in}(u, v)$ is the original image, $x_{out}(i, j)$ represents the measured data from which the original image will be reconstructed, and $h(i, j; u, v)$ is a known Point Spread Function (PSF). The PSF depends on the measurement imaging system and is defined as the output of the system for an input point source.

A digital image described in a two dimensional discrete space is derived from an analogue image *xin*(*u*, *v*) in a two dimensional continuous space through a sampling process that is frequently referred to as digitization. The two dimensional continuous image is divided into *r* rows and *m* columns. The intersection of a row and a column is termed a pixel. The discrete model for the above linear degradation of an image can be formed by the following summation

$$x_{out}(i, j) = \sum_{u=1}^{r} \sum_{v=1}^{m} x_{in}(u, v)\, h(i, j; u, v) \tag{8}$$

where *i* = 1, 2, . . . ,*r* and *j* = 1, 2, . . . , *m*.

In this work we adopt a shift invariant model for the blurring process, as in [11]. Therefore, the analytical expression for the degraded system is given by a two dimensional (horizontal and vertical) convolution, i.e.,

$$x_{out}(i, j) = \sum_{u=1}^{r} \sum_{v=1}^{m} x_{in}(u, v)\, h(i - u, j - v) = x_{in}(i, j) \ast\ast\, h(i, j) \tag{9}$$

where ∗∗ indicates two dimensional convolution.

In the formulation of equation (8) the noise can also be simulated by rewriting the equation as

$$x_{out}(i, j) = \sum_{u=1}^{r} \sum_{v=1}^{m} x_{in}(u, v)\, h(i, j; u, v) + n(i, j) = x_{in}(i, j) \ast\ast\, h(i, j) + n(i, j) \tag{10}$$

where *n*(*i*, *j*) is an additive noise introduced by the system.

However, in this work the noise is image related, meaning that the noise has been added directly to the image.
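The discrete degradation model of Equations (9) and (10) can be sketched directly in NumPy; the image, the uniform $3 \times 3$ PSF and the noise level below are all illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
x_in = rng.random((16, 16))                 # original image (illustrative)
h = np.ones((3, 3)) / 9.0                   # a uniform 3x3 PSF (illustrative)

# Shift-invariant degradation, Eq. (9): x_out = x_in ** h (full 2-D convolution)
r, m = x_in.shape
p, q = h.shape
x_out = np.zeros((r + p - 1, m + q - 1))
for i in range(p):
    for j in range(q):
        x_out[i:i + r, j:j + m] += h[i, j] * x_in

# Eq. (10): additive noise n(i, j) introduced by the system
noise = 0.01 * rng.standard_normal(x_out.shape)
x_out_noisy = x_out + noise

# The PSF sums to one, so the blur preserves the total intensity
assert np.isclose(x_out.sum(), x_in.sum())
```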

## **2.3. The Fourier Transform, the Haar basis and the moments in image reconstruction problems**

Moments are particularly popular due to their compact description, their capability to select differing levels of detail and their known performance attributes (see [3]; [9]; [17]; [18]; [19]; [20]; [26]; [27]; [28]). It is a well-recognised property of moments that they can be used to reconstruct the original function, i.e., none of the original image information is lost in the projection of the image onto the moment basis functions, assuming an infinite number of moments are calculated. Another relevant property for the reconstruction of a band-limited image from its moments is that, while derivatives give information on the high frequencies of a signal, moments provide information on its low frequencies. It is known that the higher order moments capture increasingly higher frequencies within a function, and in the case of an image the higher frequencies represent the detail of the image. This is also consistent with work on other types of reconstruction, such as eigenanalysis, where it has been found that increasing numbers of eigenvectors are required to capture image detail ([23]) and again exceed the number required for recognition. Describing images with moments instead of other more commonly used image features means that global properties of the image are used rather than local properties. Applying the Fourier coefficients, a low pass approximation of the original image is obtained. It is well known that any image can be reconstructed from its moments in the least squares sense. Discrete orthogonal moments provide a more accurate description of image features by evaluating the moment components directly in the image coordinate space.

The reconstruction of an image from its moments is not necessarily unique. Thus, all possible methods must impose extra constraints so that the moments uniquely determine the solution of the reconstruction problem.

The most common method for reconstructing an image from some of its moments is based on the least squares approximation of the image using orthogonal polynomials ([19]; [21]). In this paper the constraint introduced is related to the bandwidth of the image and provides a more general reconstruction method. We must keep in mind that this constraint is a global one; for a local one, a joint bilinear distribution such as the Wigner distribution or wavelets must be used.

#### *2.3.1. The Fourier Basis*


In view of the importance of the frequency domain, the Fourier Transform (FT) has become one of the most widely used signal analysis tool across many disciplines of science and engineering. The FT is generated by projecting the signal on to a set of basis functions, each of which is a sinusoid with a unique frequency. The FT of a time signal *s*(*t*) is given by

$$\tilde{s}(\omega) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{+\infty} s(t) \exp(-i\omega t)\, dt$$

where *ω* = 2*π f* is the angular frequency. Since the set of exponentials forms an orthogonal basis the signal can be reconstructed from the projection values

$$s(t) = \frac{1}{\sqrt{2\pi}} \int\_{-\infty}^{+\infty} \tilde{s}(\omega) \exp(i\omega t) d\omega$$

Following the property of the FT that convolution in the spatial domain is translated into a simple algebraic product in the spectral domain, Equation (9) can be written in the form

$$\tilde{x}_{out} = \tilde{x}_{in}\, \tilde{H} \tag{11}$$
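This product form can be checked numerically for the discrete case; the sketch below assumes circular convolution, which is what the DFT diagonalizes (signal and kernel are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64
x = rng.random(N)                  # signal
h = rng.random(N)                  # blur kernel

# Circular convolution in the spatial domain ...
y = np.array([sum(x[(n - k) % N] * h[k] for k in range(N)) for n in range(N)])

# ... equals a pointwise product in the Fourier domain (cf. Eq. 11)
assert np.allclose(np.fft.fft(y), np.fft.fft(x) * np.fft.fft(h))
```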

In a discrete Fourier domain the two-dimensional Fourier coefficients are defined as

$$F(m, n) = \frac{1}{\sqrt{XY}} \sum_{x=1}^{X} \sum_{y=1}^{Y} S_{XY} \exp\left(-2\pi i\left(\frac{(x-1)(m-1)}{X} + \frac{(y-1)(n-1)}{Y}\right)\right) \tag{12}$$

rearranging the above equation leads to

$$F(m, n) = \frac{1}{\sqrt{XY}} \sum_{x=1}^{X} \exp\left(-2\pi i \frac{(x-1)(m-1)}{X}\right) \sum_{y=1}^{Y} S_{XY} \exp\left(-2\pi i \frac{(y-1)(n-1)}{Y}\right)$$

thus, *F*(*m*, *n*) can be written in matrix form as:

$$F(m, n) = K_S(x, m)\, S_{XY}\, K_S(y, n)^*$$

where $K_S(y, n)^*$ denotes the conjugate transpose of the forward kernel $K_S(y, n)$.

Using the same principles but writing Equation (12) in a form where the increasing indexes correspond to higher frequency coefficients we obtain

$$F(m, n) = \frac{1}{\sqrt{XY}} \sum_{x=1}^{X} \sum_{y=1}^{Y} S_{XY} \cdot \exp\left[-2\pi i\left(\frac{(x-1)\left(m - \frac{k-1}{2} - 1\right)}{X} + \frac{\left(y - \frac{l-1}{2} - 1\right)(n-1)}{Y}\right)\right]$$

The Fourier coefficients can be seen as the projection coefficients of the image *SXY* onto a set of complex exponential basis functions that lead to the basis matrix:

$$B_{kl}(m, n) = \frac{1}{\sqrt{k}} \exp\left[-2\pi i \frac{(m-1)\left(n - \frac{l-1}{2} - 1\right)}{k}\right]$$

The approximation of an image $S_{XY}$ in the least squares sense can be expressed in terms of the projection matrix $P_{kl} = (B_{Xk})^T S_{XY} B_{Yl}$ as

$$\begin{aligned} S'_{XY} &= B_{Xk} (B_{Xk}^T B_{Xk})^{-1} P_{kl} (B_{Yl}^T B_{Yl})^{-1} B_{Yl}^T \\ &= (B_{Xk})^{-} P_{kl} (B_{Yl})^{\dagger} \end{aligned}$$

where $()^T$ and $()^{-1}$ denote the transpose and the inverse of the given matrix. The operations $()^-$ and $()^\dagger$ stand for the left and right inverses; both are equal to the Moore-Penrose inverse, and are unique. Among the multiple inverse solutions, it chooses the one with minimum norm. When considering image reconstruction from moments, the number of moments required for accurate reconstruction is related to the frequencies present within the original image. For a given image size there should be a finite limit to the frequencies that are present in the image, and for a binary image that limiting frequency will be relatively low. As the higher order moments approach this frequency the reconstruction becomes more accurate.
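The least squares approximation from projection coefficients can be sketched with NumPy; the basis matrices below are random full-column-rank placeholders rather than the Fourier basis, using the identity $B(B^T B)^{-1} = (\text{pinv}(B))^T$ for full column rank $B$:

```python
import numpy as np

rng = np.random.default_rng(3)
X, Y, k, l = 16, 16, 8, 8
S = rng.random((X, Y))                       # original image (illustrative)

B_Xk = rng.random((X, k))                    # illustrative full-column-rank bases
B_Yl = rng.random((Y, l))

P_kl = B_Xk.T @ S @ B_Yl                     # projection coefficients

# S' = B_Xk (B_Xk^T B_Xk)^-1 P_kl (B_Yl^T B_Yl)^-1 B_Yl^T, written with
# Moore-Penrose inverses: S' is the orthogonal projection of S onto the
# chosen subspaces, hence the least squares approximation of S.
S_approx = np.linalg.pinv(B_Xk).T @ P_kl @ np.linalg.pinv(B_Yl)

# With a full (square, invertible) basis the reconstruction is exact.
B_full = rng.random((X, X))
P_full = B_full.T @ S @ B_full
S_exact = np.linalg.pinv(B_full).T @ P_full @ np.linalg.pinv(B_full)
assert np.allclose(S_exact, S)
```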

#### *2.3.2. The Haar basis*


The reconstruction of an image from its moments is not necessarily unique, so extra constraints must be imposed for the moments to determine the reconstruction uniquely. In this method the constraint introduced is related to the number of coefficients and the spatial resolution of the image. The Haar basis is unique among the functions we have examined, as it actually defines what is referred to as a 'wavelet'. Wavelet functions are a class of functions in which a 'mother' function is translated and scaled to produce the full set of values required for the full basis set. Limiting the resolution of an image means eliminating those regions of smaller size than a given one. The Haar coefficients are obtained from the projection of the image onto the discrete Haar functions $B_{k,l}(m)$, for $k$ a power of 2, and are defined as

$$B_{k,l}(m) = \frac{1}{\sqrt{k}},$$

in the case *l* = 1, and for *l* > 1

$$B_{k,l}(m) = \begin{cases} +\sqrt{\frac{q}{k}}, & p \le m < p + \frac{k}{2q} \\ -\sqrt{\frac{q}{k}}, & p + \frac{k}{2q} \le m < p + \frac{k}{q} \\ 0, & \text{otherwise} \end{cases}$$

with $q = 2^{[\log_2(l-1)]}$ and $p = \frac{k(l-1-q)}{q} + 1$, where $[\cdot]$ stands for the function fix$(x)$, which rounds the elements of $x$ to the nearest integer towards zero.
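The definition above can be implemented directly; a minimal sketch (the function name is illustrative) that builds the discrete Haar functions as rows of a $k \times k$ matrix and checks that they form an orthonormal basis:

```python
import numpy as np

def haar_basis(k):
    """Discrete Haar functions B_{k,l}(m), for k a power of 2, as rows of a
    k x k matrix. Indices m and l run from 1 to k, as in the text."""
    B = np.zeros((k, k))
    B[0, :] = 1.0 / np.sqrt(k)                 # l = 1: constant function
    for l in range(2, k + 1):
        q = 2 ** int(np.log2(l - 1))           # q = 2^[log2(l-1)]
        p = k * (l - 1 - q) // q + 1           # p = k(l-1-q)/q + 1
        for m in range(1, k + 1):
            if p <= m < p + k / (2 * q):
                B[l - 1, m - 1] = +np.sqrt(q / k)
            elif p + k / (2 * q) <= m < p + k / q:
                B[l - 1, m - 1] = -np.sqrt(q / k)
    return B

B = haar_basis(8)
assert np.allclose(B @ B.T, np.eye(8))         # rows are orthonormal
```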

#### **3. Restoration of a blurry image in the spatial domain**

This work introduces a new technique for the removal of blur in an image caused by uniform linear motion. The method assumes that the linear motion corresponds to a discrete number of pixels and is aligned with the horizontal or vertical sampling.

Given $x_{out}$, then $x_{in}$ is the deterministic original image that has to be recovered. The relation between these two components in matrix form is the following:

$$Hx_{in} = x_{out} \tag{13}$$

where $H$ represents a two dimensional ($r \times m$) a priori known matrix, or it can be estimated from the degraded X-ray image using its Fourier spectrum ([24]). The vector $x_{out}$ has $r$ entries, while the vector $x_{in}$ has $m (= r + n - 1)$ entries, where $m > r$ and $n$ is the length of the blurring process in pixels. The problem consists of solving the underdetermined system of equations (Eq. 13).

However, since there is an infinite number of exact solutions for $x_{in}$ that satisfy the equation $Hx_{in} = x_{out}$, an additional criterion that finds a sharp restored vector is required. Our work provides a new criterion for the restoration of a blurred image, including a fast computational method for calculating the Moore-Penrose inverse of full rank $r \times m$ matrices. The method yields a restored signal whose norm is smaller than that of any other solution. The computational load of the method is compared with the already known methods.

The criterion for restoration of a blurred image that we are using is the minimum distance of the measured data, i.e.,

$$\min\left(\|x^*_{in} - x_{out}\|\right),$$

where $x^*_{in}$ denotes the first $r$ elements of the unknown image $x_{in}$ that has to be recovered, subject to the constraint $\|Hx_{in} - x_{out}\| = 0$. In fact, zero is not always attained, but following Proposition 0.1(ii) the norm is minimized.

In general, the PSF varies independently with respect to both (horizontal and vertical) directions, because the degradation of a PSF may depend on its location in the image. An example of this kind of behavior is an optical system that suffers strong geometric aberrations. However, in most of the studies, the PSF is accurately written as a function of the horizontal and vertical displacements independently of the location within the field of view.

#### **3.1. The generalized inverse approach**

A blurred image that has been degraded by uniform linear motion in the horizontal direction, usually the result of camera panning or fast object motion, can be expressed as described in Eq. (13):

$$
\begin{bmatrix}
k_1 & \dots & k_n & 0 & \dots & 0 & 0\\
0 & k_1 & \dots & k_n & \dots & 0 & 0\\
\vdots & & \ddots & & \ddots & & \vdots\\
0 & 0 & \dots & k_1 & \dots & k_n & 0\\
0 & 0 & 0 & \dots & k_1 & \dots & k_n
\end{bmatrix}
\cdot
\begin{bmatrix}
x_{in_1}\\ x_{in_2}\\ x_{in_3}\\ \vdots\\ x_{in_m}
\end{bmatrix}
=
\begin{bmatrix}
x_{out_1}\\ x_{out_2}\\ x_{out_3}\\ \vdots\\ x_{out_r}
\end{bmatrix} \tag{14}
$$

where the index $n$ indicates the linear motion blur in pixels. The elements $k_1, \dots, k_n$ of the matrix are defined as $k_l = 1/n$ $(1 \le l \le n)$.

Equation (13) can also be written in the pointwise form, for $i = 1, \dots, r$,

$$x_{out}(i) = \frac{1}{n} \sum_{h=0}^{n-1} x_{in}(i+h)$$

that describes an underdetermined system of $r$ simultaneous equations and $m = r + n - 1$ unknowns. The objective is to recover the original data of the image column by column.

For this reason, given each column $[x_{out_1}, x_{out_2}, x_{out_3}, \dots, x_{out_r}]^T$ of a degraded blurred image $x_{out}$, Eq. (13) yields the corresponding column $[x_{in_1}, x_{in_2}, x_{in_3}, \dots, x_{in_m}]^T$ of the original image.
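The matrix $H$ of Eq. (14) is straightforward to construct; a small sketch (the function name is illustrative) with $k_l = 1/n$:

```python
import numpy as np

def motion_blur_matrix(r, n):
    """r x m blur matrix H of Eq. (14) for horizontal motion of n pixels,
    with m = r + n - 1 and entries k_l = 1/n along each row's band."""
    m = r + n - 1
    H = np.zeros((r, m))
    for i in range(r):
        H[i, i:i + n] = 1.0 / n
    return H

H = motion_blur_matrix(5, 3)        # r = 5, n = 3, so m = 7
x_in = np.arange(1.0, 8.0)          # a 7-pixel column of the original image
x_out = H @ x_in                    # each x_out(i) averages n consecutive pixels
assert np.allclose(x_out[0], (1 + 2 + 3) / 3)
```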

As we have seen, the matrix $H$ is an $r \times m$ matrix with $r < m$, so its rank is at most $r$. Therefore, the linear system of equations is underdetermined. The proper generalized inverse for this case is a minimum-norm inverse, also called a {1, 2, 4} inverse, in the sense that it needs to satisfy only three of the four Penrose equations. Such an inverse gives the minimum norm solution of this underdetermined linear system for every $x_{out} \in R(H)$. The Moore-Penrose inverse is clearly suitable for our case, since it provides a minimum norm solution for every $x_{out} \in R(H)$, and a minimal norm least squares solution for every $x_{out} \notin R(H)$.

The proposed algorithm has been tested on a simulated blurred image produced by applying the matrix *H* on the original image. This can be represented as

$$x_{out}(i, j) = \frac{1}{n} \sum_{h=0}^{n-1} x_{in}(i, j+h),$$

where $i = 1, \dots, r$ and $j = 1, \dots, m$, for $m = r + n - 1$, and $n$ is the linear motion blur in pixels.

Following the above, and the analysis given in Section 3, there is an infinite number of exact solutions for $x_{in}$ that satisfy the equation $Hx_{in} = x_{out}$; but, from Proposition 0.2, only one of them has minimal norm.

We shall denote this unique vector by $\hat{x}_{in}$. So, $\hat{x}_{in}$ can easily be found from the equation:

$$\hat{x}_{in} = H^{\dagger} x_{out}$$
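This recovery step can be sketched numerically with the uniform blur matrix of Eq. (14) (sizes and test data are illustrative):

```python
import numpy as np

r, n = 64, 8
m = r + n - 1
H = np.zeros((r, m))                          # Eq. (14): uniform motion blur
for i in range(r):
    H[i, i:i + n] = 1.0 / n

rng = np.random.default_rng(4)
x_in = rng.random(m)                          # one column of the original image
x_out = H @ x_in                              # blurred column

x_hat = np.linalg.pinv(H) @ x_out             # x̂_in = H† x_out

# x̂_in solves H x = x_out exactly (H has full row rank here), and it is
# the solution of minimal norm, so it is no longer than the true x_in.
assert np.allclose(H @ x_hat, x_out)
assert np.linalg.norm(x_hat) <= np.linalg.norm(x_in) + 1e-9
```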

The following section presents results that highlight the performance of the generalized inverse.
