**8.3 A four level hierarchical model**

To account separately for the measurement and forward modeling errors, a more detailed, four-level hierarchical model has been proposed:

$$\begin{cases} \mathbf{g} = \mathbf{g}_0 + \boldsymbol{\epsilon}, & \text{measurement error} \\ \mathbf{g}_0 = \mathbf{H}\mathbf{f} + \boldsymbol{\xi}, & \text{modeling error} \\ \mathbf{f} = \mathbf{D}\mathbf{z} + \boldsymbol{\zeta}, & \text{prior model} \end{cases} \tag{69}$$

This model accounts for two error terms (variable splitting) and enforces sparsity through a transform-domain prior: $\mathbf{f} = \mathbf{D}\mathbf{z} + \boldsymbol{\zeta}$ with $\mathbf{z}$ sparse, itself modeled by a Normal-Inverse Gamma (Normal-IG) distribution. This model is presented graphically in the accompanying figure.
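The sparsity-enforcing effect of the Normal-IG prior can be illustrated numerically: integrating an Inverse-Gamma variance out of a zero-mean Gaussian yields a heavy-tailed (Student-t) marginal, so most draws are near zero while a few are large. A minimal numpy sketch; the hyperparameter values are illustrative assumptions, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 1.5, 1.0   # illustrative IG hyperparameters (assumption)
n = 200_000

# Hierarchical draw: v_j ~ IG(alpha, beta), then z_j ~ N(0, v_j).
v = 1.0 / rng.gamma(alpha, 1.0 / beta, size=n)
z = rng.normal(0.0, np.sqrt(v))

# Marginally z is Student-t with 2*alpha degrees of freedom: compare its
# tail mass beyond 4 standard deviations with that of a Gaussian sample.
gauss = rng.normal(0.0, z.std(), size=n)
tail_z = np.mean(np.abs(z) > 4 * z.std())
tail_gauss = np.mean(np.abs(gauss) > 4 * gauss.std())
```

The tail fraction of the Normal-IG draws is far larger than the Gaussian one, which is exactly the behavior wanted for a sparse $\mathbf{z}$.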

*Bayesian Inference for Inverse Problems DOI: http://dx.doi.org/10.5772/intechopen.104467*

In this model, there are three error terms: $\boldsymbol{\epsilon}$, the observation error; $\boldsymbol{\xi}$, the forward modeling error; and $\boldsymbol{\zeta}$, the transform-domain modeling error. These are detailed in the following:

• $\mathbf{g} = \mathbf{g}_0 + \boldsymbol{\epsilon}$: $\boldsymbol{\epsilon}$ is assumed to be Gaussian:

$$\begin{cases} p(\mathbf{g}|\mathbf{g}_0, v_\epsilon) = \mathcal{N}(\mathbf{g}|\mathbf{g}_0, v_\epsilon \mathbf{I}), \\ p(v_\epsilon) = \mathcal{IG}(v_\epsilon|\alpha_{\epsilon_0}, \beta_{\epsilon_0}), \end{cases}$$

• $\mathbf{g}_0 = \mathbf{H}\mathbf{f} + \boldsymbol{\xi}$: $\boldsymbol{\xi}$ is assumed to be Student-t:

$$\begin{cases} p(\mathbf{g}_0|\mathbf{f}, \boldsymbol{v}_\xi) = \mathcal{N}(\mathbf{g}_0|\mathbf{H}\mathbf{f}, \mathbf{V}_\xi), \quad \mathbf{V}_\xi = \operatorname{diag}[\boldsymbol{v}_\xi], \\ p(\boldsymbol{v}_\xi) = \prod_{i=1}^{M} p(v_{\xi_i}) = \prod_{i=1}^{M} \mathcal{IG}(v_{\xi_i}|\alpha_{\xi_0}, \beta_{\xi_0}), \end{cases}$$

• $\mathbf{f} = \mathbf{D}\mathbf{z} + \boldsymbol{\zeta}$: $\boldsymbol{\zeta}$ is assumed to be Gaussian:

$$\begin{cases} p(\mathbf{f}|\mathbf{z}, v_\zeta) = \mathcal{N}(\mathbf{f}|\mathbf{D}\mathbf{z}, v_\zeta \mathbf{I}), \\ p(v_\zeta) = \mathcal{IG}(v_\zeta|\alpha_{\zeta_0}, \beta_{\zeta_0}), \end{cases}$$

• $\mathbf{z}$ is assumed to be sparse and is thus modeled via a Normal-IG prior:

$$\begin{cases} p(\mathbf{z}|\boldsymbol{v}_z) = \mathcal{N}(\mathbf{z}|\mathbf{0}, \mathbf{V}_z), \quad \mathbf{V}_z = \operatorname{diag}[\boldsymbol{v}_z], \\ p(\boldsymbol{v}_z) = \prod_{j=1}^{N} p(v_{z_j}) = \prod_{j=1}^{N} \mathcal{IG}(v_{z_j}|\alpha_{z_0}, \beta_{z_0}), \end{cases}$$
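Putting the four levels together, the generative direction of the model can be simulated directly. A minimal numpy sketch; the dimensions, the operators `H` and `D`, and the hyperparameter values are illustrative assumptions, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 20, 30                  # illustrative sizes (assumptions)
alpha, beta = 3.0, 1.0         # shared illustrative IG hyperparameters

H = rng.standard_normal((M, N))   # forward operator (assumption)
D = rng.standard_normal((N, N))   # transform/dictionary (assumption)

# Level 4: sparse coefficients z via the Normal-IG hierarchy.
v_z = 1.0 / rng.gamma(alpha, 1.0 / beta, size=N)   # v_z_j ~ IG
z = rng.normal(0.0, np.sqrt(v_z))                  # z_j ~ N(0, v_z_j)

# Level 3: prior model f = D z + zeta, zeta ~ N(0, v_zeta I).
v_zeta = 1.0 / rng.gamma(alpha, 1.0 / beta)
f = D @ z + rng.normal(0.0, np.sqrt(v_zeta), size=N)

# Level 2: modeling error g0 = H f + xi, xi Student-t via per-sample variances.
v_xi = 1.0 / rng.gamma(alpha, 1.0 / beta, size=M)
g0 = H @ f + rng.normal(0.0, np.sqrt(v_xi))

# Level 1: measurement error g = g0 + eps, eps ~ N(0, v_eps I).
v_eps = 1.0 / rng.gamma(alpha, 1.0 / beta)
g = g0 + rng.normal(0.0, np.sqrt(v_eps), size=M)
```

The inference problem addressed below is the reverse: recover $\mathbf{f}$, $\mathbf{g}_0$, $\mathbf{z}$ and all variances from the observed $\mathbf{g}$ alone.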

which results in:

$$p(\mathbf{f}, \mathbf{g}_0, \mathbf{z}, v_\epsilon, \boldsymbol{v}_\xi, v_\zeta, \boldsymbol{v}_z|\mathbf{g}) \propto \exp\left[-J(\mathbf{f}, \mathbf{g}_0, \mathbf{z}, v_\epsilon, \boldsymbol{v}_\xi, v_\zeta, \boldsymbol{v}_z)\right] \tag{70}$$

with

$$\begin{split} J(\mathbf{f}, \mathbf{g}_0, \mathbf{z}, v_\epsilon, \boldsymbol{v}_\xi, v_\zeta, \boldsymbol{v}_z) &= \frac{1}{2 v_\epsilon} \left\| \mathbf{g} - \mathbf{g}_0 \right\|_2^2 + (\alpha_{\epsilon_0} + 1) \ln v_\epsilon + \frac{\beta_{\epsilon_0}}{v_\epsilon} \\ &\quad + \frac{1}{2} \left\| \mathbf{V}_\xi^{-1/2} (\mathbf{g}_0 - \mathbf{H}\mathbf{f}) \right\|_2^2 + \sum_{i=1}^{M} \left[ (\alpha_{\xi_0} + 1) \ln v_{\xi_i} + \frac{\beta_{\xi_0}}{v_{\xi_i}} \right] \\ &\quad + \frac{1}{2 v_\zeta} \left\| \mathbf{f} - \mathbf{D}\mathbf{z} \right\|_2^2 + (\alpha_{\zeta_0} + 1) \ln v_\zeta + \frac{\beta_{\zeta_0}}{v_\zeta} \\ &\quad + \frac{1}{2} \left\| \mathbf{V}_z^{-1/2} \mathbf{z} \right\|_2^2 + \sum_{j=1}^{N} \left[ (\alpha_{z_0} + 1) \ln v_{z_j} + \frac{\beta_{z_0}}{v_{z_j}} \right] \end{split} \tag{71}$$
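The criterion (71) can be transcribed directly into code, which is useful for checking that alternating updates actually decrease it. A minimal sketch assuming scalar variances $v_\epsilon$, $v_\zeta$ and per-component variance vectors $\boldsymbol{v}_\xi$, $\boldsymbol{v}_z$; function and argument names are illustrative:

```python
import numpy as np

def criterion_J(f, g0, z, v_eps, v_xi, v_zeta, v_z, g, H, D,
                a_eps, b_eps, a_xi, b_xi, a_zeta, b_zeta, a_z, b_z):
    """Negative log-posterior of Eq. (71), up to an additive constant.

    v_eps, v_zeta: scalar variances; v_xi (length M), v_z (length N):
    per-component variance vectors; a_*, b_*: IG hyperparameters."""
    # Measurement-error term plus its IG penalty.
    J = (np.sum((g - g0) ** 2) / (2 * v_eps)
         + (a_eps + 1) * np.log(v_eps) + b_eps / v_eps)
    # Forward-modeling-error term plus its IG penalties.
    r = g0 - H @ f
    J += (0.5 * np.sum(r ** 2 / v_xi)
          + np.sum((a_xi + 1) * np.log(v_xi) + b_xi / v_xi))
    # Transform-domain-error term plus its IG penalty.
    J += (np.sum((f - D @ z) ** 2) / (2 * v_zeta)
          + (a_zeta + 1) * np.log(v_zeta) + b_zeta / v_zeta)
    # Sparse-coefficient term plus its IG penalties.
    J += (0.5 * np.sum(z ** 2 / v_z)
          + np.sum((a_z + 1) * np.log(v_z) + b_z / v_z))
    return J
```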

The JMAP approach with an alternating optimization strategy then requires the following optimization steps:

• with respect to $\mathbf{f}$:

$$J(\mathbf{f}) = \frac{1}{2} \left\| \mathbf{V}_\xi^{-1/2} (\mathbf{g}_0 - \mathbf{H}\mathbf{f}) \right\|_2^2 + \frac{1}{2 v_\zeta} \left\| \mathbf{f} - \mathbf{D}\mathbf{z} \right\|_2^2$$

• with respect to $\mathbf{g}_0$:

$$J(\mathbf{g}_0) = \frac{1}{2 v_\epsilon} \left\| \mathbf{g} - \mathbf{g}_0 \right\|_2^2 + \frac{1}{2} \left\| \mathbf{V}_\xi^{-1/2} (\mathbf{g}_0 - \mathbf{H}\mathbf{f}) \right\|_2^2$$

• with respect to $\mathbf{z}$:

$$J(\mathbf{z}) = \frac{1}{2 v_\zeta} \left\| \mathbf{f} - \mathbf{D}\mathbf{z} \right\|_2^2 + \frac{1}{2} \left\| \mathbf{V}_z^{-1/2} \mathbf{z} \right\|_2^2$$
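Each of the three criteria above is quadratic in its own variable, so every sub-step has a closed-form minimizer. A hedged numpy sketch of one alternating sweep, with variances held fixed and all names illustrative:

```python
import numpy as np

def jmap_sweep(g, H, D, g0, z, v_eps, v_xi, v_zeta, v_z):
    """One alternating JMAP sweep over f, g0, z with fixed variances.

    v_eps, v_zeta: scalar variances; v_xi (length M), v_z (length N):
    per-component variance vectors."""
    N = H.shape[1]
    w = 1.0 / v_xi                     # diagonal of V_xi^{-1}

    # f-step: (H^T V_xi^{-1} H + I / v_zeta) f = H^T V_xi^{-1} g0 + D z / v_zeta
    A = H.T @ (w[:, None] * H) + np.eye(N) / v_zeta
    f = np.linalg.solve(A, H.T @ (w * g0) + (D @ z) / v_zeta)

    # g0-step: both terms are diagonal in g0, so the update is elementwise.
    g0 = (g / v_eps + w * (H @ f)) / (1.0 / v_eps + w)

    # z-step: (D^T D / v_zeta + V_z^{-1}) z = D^T f / v_zeta
    Az = D.T @ D / v_zeta + np.diag(1.0 / v_z)
    z = np.linalg.solve(Az, D.T @ f / v_zeta)
    return f, g0, z
```

Iterating this sweep, together with the variance updates obtained by minimizing (71) with respect to each variance in turn, gives the full alternating JMAP algorithm.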


This approach has the following main advantages and limitations. Advantages:


Limitations:

