**8.4 Gauss-Markov-Potts models**

To introduce the Gauss-Markov-Potts model, let us have a look at the images in **Figure 11**.

The question we want to answer is: which prior model is most appropriate for these images? One way to answer this question is to look at the histogram of the pixels, or of the pixels of the gradient image. Then, if we take one typical line of the image itself, or of its gradient, and draw it as a 1D signal, we obtain the cases in **Figure 12**.

From these two figures, we see that in some cases a Gaussian or generalized Gaussian may be a very good model. In other cases, if we want to explicitly account for the presence of contours, we can introduce a binary hidden variable to represent them. Finally, for the last example image in **Figure 11** and its corresponding typical line in **Figure 12**, we need to introduce a hidden variable *z* which encodes the following fact:

**Figure 11.** *Different images with different characteristics in different imaging systems.*

**Figure 12.** *Different possible prior modeling in relation to the different images of Figure 11.*

In NDT applications of CT, the objects are, in general, composed of a finite number of materials, and the voxels corresponding to each material are grouped in compact regions.

How to model this prior information?

To answer this question, first consider such an image *f*(*r*) with its segmentation *z*(*r*) and contours *q*(*r*), as shown in **Figure 13**.

As can be seen, we introduced two hidden variables *z*(*r*) and *q*(*r*): the first represents the segmentation and the second the contours of the image. *z*(*r*) takes integer values *k* ∈ {1, ⋯, *K*}, each represented by a different color, and *q*(*r*) a binary

**Figure 13.** *An image of an object composed of homogeneous compact regions, its segmentation, and the contours of those regions.*

value in {0, 1}. The second can easily be obtained from the first, so from now on we consider only *z*(*r*).

As each value of *z* represents a homogeneous material, we can translate this by:

$$p(f(\mathbf{r})|z(\mathbf{r})=k, m_k, v_k) = \mathcal{N}(f(\mathbf{r})|m_k, v_k) \tag{72}$$

encoding the fact that inside each homogeneous material, i.e., for all pixels with *z*(*r*) = *k*, the values *f*(*r*) follow a Gaussian distribution characterized by the two parameters (*m_k*, *v_k*). This results in:

$$p(f(\mathbf{r})) = \sum_{k} P(z(\mathbf{r}) = k)\, \mathcal{N}(f(\mathbf{r})|m_k, v_k) \quad \text{(mixture of Gaussians)} \tag{73}$$

which is a Gaussian mixture model for the pixel values; see also **Figure 14**.
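As an illustrative check of Eqs. (72) and (73), the following sketch draws pixel values from a three-class mixture and compares their histogram with the analytic mixture density. The proportions, means, and variances below are invented for illustration, not taken from the chapter.

```python
# Sketch: simulating the Gaussian mixture of Eq. (73) for K = 3 classes.
# All numeric values here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

K = 3
alpha = np.array([0.5, 0.4, 0.1])   # P(z(r) = k), must sum to 1
m = np.array([0.0, 1.0, 0.5])       # class means m_k
v = np.array([0.01, 0.01, 0.04])    # class variances v_k

# Draw pixel values: first the hidden label z, then f | z ~ N(m_z, v_z)
n = 100_000
z = rng.choice(K, size=n, p=alpha)
f = rng.normal(m[z], np.sqrt(v[z]))

def mixture_pdf(x):
    # Marginal density of f: sum_k alpha_k N(x | m_k, v_k)
    return sum(a / np.sqrt(2 * np.pi * s) * np.exp(-(x - mu) ** 2 / (2 * s))
               for a, mu, s in zip(alpha, m, v))

# The empirical histogram should match the analytic mixture density
hist, edges = np.histogram(f, bins=200, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
max_err = np.max(np.abs(hist - mixture_pdf(centers)))
print(f"max deviation histogram vs mixture pdf: {max_err:.3f}")
```

The histogram of the simulated pixels reproduces the multi-modal shape sketched in **Figure 14**, one mode per material class.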

The next step is to propose a probability distribution for *z*. As we want compact regions, a Markov model is appropriate:

$$p(z(\mathbf{r})|z(\mathbf{r}'), \mathbf{r}'\in\mathcal{V}(\mathbf{r})) \propto \exp\left[\gamma\sum_{\mathbf{r}'\in\mathcal{V}(\mathbf{r})}\delta(z(\mathbf{r})-z(\mathbf{r}'))\right] \tag{74}$$

A Potts Markov model, which defines the joint distribution of all the labels, is even more appropriate:

$$p(z(\mathbf{r}), \mathbf{r} \in \Omega) \propto \exp\left[ \gamma \sum_{\mathbf{r} \in \Omega} \sum_{\mathbf{r}' \in \mathcal{V}(\mathbf{r})} \delta(z(\mathbf{r}) - z(\mathbf{r}')) \right] \tag{75}$$

**Figure 14.** *A metallic object with a defect area inside it: black pixels represent air, white pixels metal, and gray pixels the defect area. In the left image these are coded by colors (z = 1 represents air, z = 2 represents metal, and z = 3 represents the defect area).*

*Bayesian Inference for Inverse Problems DOI: http://dx.doi.org/10.5772/intechopen.104467*

**Figure 15.** *Two proposed Gauss-Markov-Potts models used in many NDT applications.*

where Ω represents all pixels of the image.
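A minimal sketch of how such a Potts field can be simulated by single-site Gibbs sampling, using the local conditional of Eq. (74) with a 4-pixel neighborhood. The grid size, number of classes *K*, and the value of γ are illustrative choices; γ > 0 favors equal neighboring labels, hence compact regions.

```python
# Sketch: single-site Gibbs sampling of a Potts Markov field, Eq. (75).
# Grid size, K and gamma are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

K, gamma, size, n_sweeps = 3, 1.2, 32, 30
z = rng.integers(K, size=(size, size))

def neighbors(i, j):
    # 4-connectivity neighborhood V(r), clipped at the image border
    return [(i2, j2) for i2, j2 in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
            if 0 <= i2 < size and 0 <= j2 < size]

for _ in range(n_sweeps):
    for i in range(size):
        for j in range(size):
            # Local conditional: p(z(r)=k | neighbors) prop. to
            # exp(gamma * number of neighbors with label k)
            counts = np.zeros(K)
            for i2, j2 in neighbors(i, j):
                counts[z[i2, j2]] += 1
            p = np.exp(gamma * counts)
            z[i, j] = rng.choice(K, p=p / p.sum())

# The fraction of equal-label neighbor pairs grows well above the 1/K
# baseline of an i.i.d. labeling, indicating compact regions
agree = np.mean([z[i, j] == z[i2, j2]
                 for i in range(size) for j in range(size)
                 for i2, j2 in neighbors(i, j)])
print(f"equal-neighbor fraction: {agree:.2f} (i.i.d. baseline {1/K:.2f})")
```

Starting from an i.i.d. random labeling, a few sweeps suffice for visibly compact regions to appear, which is exactly the prior behavior wanted for NDT images.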

Thus, to each pixel of the image are associated two variables *f*(*r*) and *z*(*r*), with the following possible properties:


From these four different cases, we consider the two which are illustrated in **Figure 15**. Using the notations of this figure, and denoting by *f* all the pixels of the image, by *z* all the pixels of the segmented image, and by *θ* all the parameters {*v_ε*, (*α_k*, *m_k*, *v_k*), *k* = 1, ⋯, *K*}, we can write:

$$p(\mathbf{f}, \mathbf{z}, \theta | \mathbf{g}) \propto p(\mathbf{g} | \mathbf{f}, v_{\epsilon})\, p(\mathbf{f} | \mathbf{z}, \mathbf{m}, \mathbf{v})\, p(\mathbf{z} | \gamma, \alpha)\, p(\theta) \tag{76}$$

where

$$\mathbf{m} = \{m_k, k = 1, \cdots, K\}, \quad \mathbf{v} = \{v_k, k = 1, \cdots, K\}, \quad \alpha = \{\alpha_k, k = 1, \cdots, K\}, \quad \theta = \{v_{\epsilon}, \mathbf{m}, \mathbf{v}, \alpha\}$$

The expressions of *p*(*g*|*f*, *v_ε*), *p*(*f*|*z*, *m*, *v*), and *p*(*z*|*γ*, *α*) have been given before. We still need to define *p*(*θ*), which can be chosen as the conjugate priors: Dirichlet for *α*, Gaussian for *m*, and Inverse-Gamma for all the variances.
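The conjugate updates mentioned above can be sketched for a single class *k*: a Normal prior on *m_k* and an Inverse-Gamma prior on *v_k* both yield closed-form posteriors. The hyperparameters and simulated data below are illustrative assumptions, and the Dirichlet update of *α* is omitted for brevity.

```python
# Sketch of the conjugate updates in p(theta) for one class k.
# All hyperparameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

# Pixels currently assigned to class k (simulated: true m_k=1.0, v_k=0.04)
f_k = rng.normal(1.0, 0.2, size=500)
n = f_k.size

m0, tau0 = 0.0, 10.0   # Normal prior: m_k ~ N(m0, tau0)
a0, b0 = 2.0, 0.1      # Inverse-Gamma prior: v_k ~ IG(a0, b0)

v_hat = f_k.var()
# Posterior of m_k given v_k ~= v_hat is Normal (conjugacy):
tau_n = 1.0 / (1.0 / tau0 + n / v_hat)
m_n = tau_n * (m0 / tau0 + f_k.sum() / v_hat)

# Posterior of v_k given m_k ~= m_n is Inverse-Gamma (conjugacy):
a_n = a0 + n / 2.0
b_n = b0 + 0.5 * np.sum((f_k - m_n) ** 2)
v_mean = b_n / (a_n - 1.0)   # posterior mean of IG(a_n, b_n)

print(f"posterior mean of m_k: {m_n:.3f}, of v_k: {v_mean:.3f}")
```

With 500 pixels in the class, the data dominate the priors and the posterior means land close to the simulated values, which is why these conjugate choices make the sampling step of the next section tractable.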

Direct computation and use of *p*(*f*, *z*, *θ*|*g*; M) is too complex, because we do not have an analytical expression for the proportionality term of the joint probability law:

$$p(\mathbf{f}, \mathbf{z}, \theta | \mathbf{g}) \propto p(\mathbf{g} | \mathbf{f}, \mathbf{z}, \theta) p(\mathbf{f} | \mathbf{z}, \theta) p(\mathbf{z}) p(\theta) \tag{77}$$

As we have three sets of variables *f*, *z* and *θ*, we can use different schemes, for example a Gibbs sampling scheme:

$$\hat{\mathbf{f}} \sim p\left(\mathbf{f}|\hat{\mathbf{z}}, \hat{\theta}, \mathbf{g}\right) \to \hat{\mathbf{z}} \sim p\left(\mathbf{z}|\hat{\mathbf{f}}, \hat{\theta}, \mathbf{g}\right) \to \hat{\theta} \sim p\left(\theta|\hat{\mathbf{f}}, \hat{\mathbf{z}}, \mathbf{g}\right) \tag{78}$$

with:

• Sample $\mathbf{f}$ from $p(\mathbf{f}|\hat{\mathbf{z}}, \hat{\theta}, \mathbf{g}) \propto p(\mathbf{g}|\mathbf{f}, \hat{\theta})\, p(\mathbf{f}|\hat{\mathbf{z}}, \hat{\theta})$

Needs optimisation of a quadratic criterion.

• Sample $\mathbf{z}$ from $p(\mathbf{z}|\hat{\mathbf{f}}, \hat{\theta}, \mathbf{g}) \propto p(\mathbf{g}|\hat{\mathbf{f}}, \hat{\mathbf{z}}, \hat{\theta})\, p(\mathbf{z})$

Needs sampling of a Potts Markov field.

• Sample $\theta$ from $p(\theta|\hat{\mathbf{f}}, \hat{\mathbf{z}}, \mathbf{g}) \propto p(\mathbf{g}|\hat{\mathbf{f}}, \sigma_\epsilon^2 \mathbf{I})\, p(\hat{\mathbf{f}}|\hat{\mathbf{z}}, (m_k, v_k))\, p(\theta)$
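The three steps above can be sketched on a toy 1D problem where *g* = *f* + noise (i.e., the forward operator is the identity) and *K* = 2 classes. All numeric choices are illustrative, and the *θ*-step is simplified to an empirical update of the class means and variances rather than a full conjugate draw.

```python
# Toy 1D illustration of the Gibbs scheme of Eq. (78): g = f + noise.
# All numeric choices are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)

# Simulate a piecewise-constant two-class signal plus Gaussian noise
n, K, gamma = 200, 2, 1.0
m_true = np.array([0.0, 1.0])
z_true = (np.arange(n) // 50) % 2          # alternating blocks of 50
f_true = m_true[z_true]
v_eps = 0.05
g = f_true + rng.normal(0, np.sqrt(v_eps), n)

# Initialization
z = rng.integers(K, size=n)
m = np.array([g.min(), g.max()])
v = np.full(K, 0.1)

for it in range(50):
    # 1) f | z, theta, g: product of two Gaussians per pixel (H = I)
    post_v = 1.0 / (1.0 / v_eps + 1.0 / v[z])
    post_m = post_v * (g / v_eps + m[z] / v[z])
    f = rng.normal(post_m, np.sqrt(post_v))

    # 2) z | f, theta: local Potts conditional with 1D neighbors
    for i in range(n):
        logp = -(f[i] - m) ** 2 / (2 * v) - 0.5 * np.log(v)
        for j in (i - 1, i + 1):
            if 0 <= j < n:
                logp += gamma * (np.arange(K) == z[j])
        p = np.exp(logp - logp.max())
        z[i] = rng.choice(K, p=p / p.sum())

    # 3) theta | f, z: empirical class means/variances (a simplification
    # of the conjugate sampling step described above)
    for k in range(K):
        idx = z == k
        if idx.sum() > 1:
            m[k] = f[idx].mean()
            v[k] = max(f[idx].var(), 1e-4)

err = np.mean((np.sort(m) - m_true) ** 2)
print(f"recovered class means: {np.sort(m)}, MSE vs true: {err:.4f}")
```

After a few dozen sweeps the sampler jointly recovers the denoised signal, its segmentation, and the class parameters, which is the essence of the unsupervised scheme.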

More details and other schemes, such as JMAP and VBA, can be found in Ref. [26].

To illustrate an application, we consider an NDT problem where a metallic object is inspected to detect a defect inside it. As the problem is not only to detect the defect but also to characterize its shape and size, an X-ray computed tomography (CT) with only two projections is proposed and used. This problem is illustrated in **Figure 16**.

The mathematical part of this very ill-posed inverse problem is the following: given the two functions *g*1(*x*) and *g*2(*y*), find the image *f*(*x*, *y*).

This problem also arises in probability theory and statistics, where *f*(*x*, *y*) is a joint distribution and *g*1(*x*) and *g*2(*y*) are its two marginals. We know that this problem has an infinite number of solutions: *f*(*x*, *y*) = *g*1(*x*) *g*2(*y*) Ω(*x*, *y*), where Ω(*x*, *y*) is called a copula:

$$\int \Omega(x, y)\, \mathrm{d}x = 1 \quad \text{and} \quad \int \Omega(x, y)\, \mathrm{d}y = 1$$
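A discrete analogue makes the non-uniqueness concrete: two different joint tables can have exactly the same row and column marginals. The numbers below are arbitrary illustrative values.

```python
# Discrete analogue of the non-uniqueness of f(x, y) given its marginals:
# two different joints with identical marginals. Values are illustrative.
import numpy as np

g1 = np.array([0.2, 0.5, 0.3])   # marginal along x
g2 = np.array([0.4, 0.6])        # marginal along y

# "Independent" solution, i.e. Omega(x, y) = 1: the outer product
f1 = np.outer(g1, g2)

# A second solution: add a perturbation whose rows and columns sum to zero
d = np.array([[ 0.05, -0.05],
              [-0.05,  0.05],
              [ 0.00,  0.00]])
f2 = f1 + d

# Both joints reproduce the same marginals exactly
for f in (f1, f2):
    assert np.allclose(f.sum(axis=1), g1)
    assert np.allclose(f.sum(axis=0), g2)
print("two distinct joints with the same marginals:\n", f1, "\n", f2)
```

Any zero-margin perturbation that keeps the entries nonnegative yields yet another valid joint, which is exactly why prior information is indispensable here.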

**Figure 16.**

*A non-destructive testing (NDT) application where f(x, y) has to be reconstructed from its marginals g1(x) and g2(y).*


So, any arbitrary copula function defines a solution. The problem is ill-posed, and we need to use any available prior information to try to obtain a unique or acceptable solution. The probabilistic solution we proposed is illustrated in **Figure 17**.

Unsupervised Bayesian estimation:

$$p(\mathbf{f}, \mathbf{z}, \theta | \mathbf{g}) \propto p(\mathbf{g} | \mathbf{f}, \mathbf{z}, \theta)\, p(\mathbf{f} | \mathbf{z}, \theta)\, p(\mathbf{z})\, p(\theta)$$

A summary of the results is given in **Figure 18**, where the result of the proposed method is compared with those of classical methods.

**Figure 17.** *Probabilistic Bayesian method for the NDT image reconstruction problem.*

**Figure 18.** *Probabilistic Bayesian method for the NDT image reconstruction problem: a) shows the original image f, b) the result of back-projection, c) the result of filtered back-projection, d) and e) the results of a Markov model with hidden line process, and f), g), and h) the results of the Gauss-Markov-Potts method.*
