*9.2.2 Sampling-based methods*

A second solution is to generate samples from the posterior law and use them to compute the variances and covariances. The problem, then, is how to generate samples from the posterior law

$$\begin{cases} p(\mathbf{f}|\mathbf{g}) = \mathcal{N}\left(\mathbf{f}|\hat{\mathbf{f}}, \hat{\Sigma}\right) \quad \text{with:} \\ \hat{\mathbf{f}} = \mathbf{f}_0 + \left[\mathbf{H}'\mathbf{H} + \lambda\mathbf{I}\right]^{-1}\mathbf{H}'(\mathbf{g} - \mathbf{H}\mathbf{f}_0) \\ \hat{\Sigma} = \nu_e\left[\mathbf{H}'\mathbf{H} + \lambda\mathbf{I}\right]^{-1}, \quad \lambda = \frac{\nu_e}{\nu_f} \end{cases} \tag{88}$$

One solution is to compute the Cholesky decomposition of the covariance matrix $\hat{\Sigma} = \mathbf{A}\mathbf{A}'$, generate a vector $\mathbf{u} \sim \mathcal{N}(\mathbf{u}|\mathbf{0}, \mathbf{I})$, and then generate a sample $\mathbf{f} = \mathbf{A}\mathbf{u} + \hat{\mathbf{f}}$ [27]. We can compute $\hat{\mathbf{f}}$ by optimizing

$$J(\mathbf{f}) = \frac{1}{2}\|\mathbf{g} - \mathbf{H}\mathbf{f}\|^2 + \frac{\lambda}{2}\|\mathbf{f} - \mathbf{f}_0\|_2^2, \quad \lambda = \frac{\nu_e}{\nu_f}, \tag{89}$$

whose minimizer, obtained by setting $\nabla J(\mathbf{f}) = \mathbf{0}$, is exactly $\hat{\mathbf{f}}$ of Eq. (88); the main computational cost, however, is the Cholesky factorization of $\hat{\Sigma}$.
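The following is a minimal, self-contained sketch of this Cholesky-based sampler. The forward operator `H`, the hyperparameters `nu_e` and `nu_f`, and the problem sizes are illustrative assumptions, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes, operator and hyperparameters (assumptions, not from the text)
m, n = 20, 10
H = rng.standard_normal((m, n))            # forward operator
f0 = np.zeros(n)                           # prior mean
nu_e, nu_f = 0.1, 1.0                      # noise and prior variances
lam = nu_e / nu_f
g = H @ rng.standard_normal(n) + np.sqrt(nu_e) * rng.standard_normal(m)

# Posterior mean and covariance, Eq. (88); the explicit inverse is fine at this size
Sigma_hat = nu_e * np.linalg.inv(H.T @ H + lam * np.eye(n))
f_hat = f0 + (Sigma_hat / nu_e) @ H.T @ (g - H @ f0)

# Cholesky factorization Sigma_hat = A A'
A = np.linalg.cholesky(Sigma_hat)

# Draw u ~ N(0, I) and map it to a posterior sample f = A u + f_hat
u = rng.standard_normal(n)
f_sample = A @ u + f_hat
```

Each additional sample only costs a fresh draw of $\mathbf{u}$ and a matrix-vector product, since $\mathbf{A}$ is factorized once.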

Another approach, called Perturbation-Optimization [28, 29], is based on the following property:

If we perturb the data, $\tilde{\mathbf{g}} = \mathbf{g} + \tilde{\boldsymbol{\epsilon}}$ with $\tilde{\boldsymbol{\epsilon}} \sim \mathcal{N}(\boldsymbol{\epsilon}|\mathbf{0}, \nu_e\mathbf{I})$, and the prior mean, $\tilde{\mathbf{f}} = \mathbf{f}_0 + \tilde{\boldsymbol{\delta}}$ with $\tilde{\boldsymbol{\delta}} \sim \mathcal{N}(\boldsymbol{\delta}|\mathbf{0}, \nu_f\mathbf{I})$, and note $\mathbf{x} = \tilde{\mathbf{f}} + \left[\mathbf{H}'\mathbf{H} + \lambda\mathbf{I}\right]^{-1}\mathbf{H}'(\tilde{\mathbf{g}} - \mathbf{H}\tilde{\mathbf{f}})$, then, looking at its expected value and covariance matrix, it can be shown that:

$$\begin{cases} \mathrm{E}\{\mathbf{x}\} = \hat{\mathbf{f}} \\ \mathrm{Cov}[\mathbf{x}] = \hat{\Sigma} \end{cases} \tag{90}$$
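For completeness, here is a brief verification of this property (a sketch; the derivation is left implicit in the text): writing $\mathbf{B} = \left[\mathbf{H}'\mathbf{H} + \lambda\mathbf{I}\right]^{-1}$, we have $\mathbf{x} = (\mathbf{I} - \mathbf{B}\mathbf{H}'\mathbf{H})\tilde{\mathbf{f}} + \mathbf{B}\mathbf{H}'\tilde{\mathbf{g}}$ with $\mathbf{I} - \mathbf{B}\mathbf{H}'\mathbf{H} = \lambda\mathbf{B}$, so that

$$\mathrm{E}\{\mathbf{x}\} = \mathbf{f}_0 + \mathbf{B}\mathbf{H}'(\mathbf{g} - \mathbf{H}\mathbf{f}_0) = \hat{\mathbf{f}}, \qquad \mathrm{Cov}[\mathbf{x}] = \lambda^2\nu_f\,\mathbf{B}^2 + \nu_e\,\mathbf{B}\mathbf{H}'\mathbf{H}\mathbf{B} = \nu_e\,\mathbf{B}\left(\lambda\mathbf{I} + \mathbf{H}'\mathbf{H}\right)\mathbf{B} = \nu_e\,\mathbf{B} = \hat{\Sigma},$$

where we used $\lambda^2\nu_f = \lambda\nu_e$ and the independence of $\tilde{\boldsymbol{\epsilon}}$ and $\tilde{\boldsymbol{\delta}}$.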

So, to generate a sample from the posterior law, we can do the following:

• Perturb the data: $\tilde{\mathbf{g}} = \mathbf{g} + \tilde{\boldsymbol{\epsilon}}$ with $\tilde{\boldsymbol{\epsilon}} \sim \mathcal{N}(\boldsymbol{\epsilon}|\mathbf{0}, \nu_e\mathbf{I})$;

• Perturb the prior mean: $\tilde{\mathbf{f}} = \mathbf{f}_0 + \tilde{\boldsymbol{\delta}}$ with $\tilde{\boldsymbol{\delta}} \sim \mathcal{N}(\boldsymbol{\delta}|\mathbf{0}, \nu_f\mathbf{I})$;

• Optimize the criterion

$$J(\mathbf{f}) = \frac{1}{2}\|\tilde{\mathbf{g}} - \mathbf{H}\mathbf{f}\|^2 + \frac{\lambda}{2}\left\|\mathbf{f} - \tilde{\mathbf{f}}\right\|_2^2 \tag{91}$$


• The obtained solution $\mathbf{f}^{(n)} = \arg\min_{\mathbf{f}} \left\{ J(\mathbf{f}) \right\}$ is a sample from the desired posterior law.

By repeating this process a large number of times, we obtain samples $\mathbf{f}^{(n)}$, $n = 1, \ldots, N$, from which good approximations of the posterior mean $\hat{\mathbf{f}}$ and of the posterior covariance $\hat{\Sigma}$ are obtained by computing their empirical counterparts. We need, however, fast and accurate optimization algorithms.
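To make the procedure concrete, here is a minimal sketch of the Perturbation-Optimization sampler together with the empirical estimates. The operator `H`, the hyperparameters `nu_e` and `nu_f`, the sizes, and the number of samples `N` are all illustrative assumptions; since the toy problem is small, the minimizer of Eq. (91) is obtained by an exact linear solve rather than an iterative optimizer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy problem (all values are assumptions, not from the text)
m, n = 20, 10
H = rng.standard_normal((m, n))            # forward operator
f0 = np.zeros(n)                           # prior mean
nu_e, nu_f = 0.1, 1.0                      # noise and prior variances
lam = nu_e / nu_f
g = H @ rng.standard_normal(n) + np.sqrt(nu_e) * rng.standard_normal(m)

M = H.T @ H + lam * np.eye(n)              # Hessian of J in Eq. (91)
N = 5000                                   # number of posterior samples
samples = np.empty((N, n))
for k in range(N):
    g_t = g + np.sqrt(nu_e) * rng.standard_normal(m)   # perturbed data
    f_t = f0 + np.sqrt(nu_f) * rng.standard_normal(n)  # perturbed prior mean
    # Exact minimizer of Eq. (91): (H'H + lam I) f = H' g_t + lam f_t
    samples[k] = np.linalg.solve(M, H.T @ g_t + lam * f_t)

# Empirical posterior mean and covariance from the samples
f_hat_emp = samples.mean(axis=0)
Sigma_emp = np.cov(samples, rowvar=False)

# Closed-form values of Eq. (88), for comparison
f_hat = f0 + np.linalg.solve(M, H.T @ (g - H @ f0))
Sigma_hat = nu_e * np.linalg.inv(M)
print(np.abs(f_hat_emp - f_hat).max(), np.abs(Sigma_emp - Sigma_hat).max())
```

The empirical mean and covariance converge to $\hat{\mathbf{f}}$ and $\hat{\Sigma}$ as $N$ grows; since each sample costs one optimization of Eq. (91), in large-scale problems the exact solve above would be replaced by a fast iterative method such as conjugate gradient.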
