**9. Bayesian computation**

As we have seen, we can very often find the expression of the posterior law *p*(**f**|**g**) — sometimes exactly, as in the case of the linear models with Gaussian priors in the previous section, but often only up to the normalization constant (the evidence term) *p*(**g**) in:

$$p(\mathbf{f}|\mathbf{g}) = \frac{1}{p(\mathbf{g})} p(\mathbf{g}|\mathbf{f}) p(\mathbf{f}) = \frac{1}{p(\mathbf{g})} p(\mathbf{g}, \mathbf{f}).\tag{79}$$

This term is not needed for the Maximum A Posteriori (MAP) estimate, but it is required for the Expected A Posteriori (EAP) estimate and for computing any other posterior expectation.
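This distinction can be illustrated with a small numerical sketch. The example below is hypothetical (a 1-D Gaussian likelihood with a Laplace prior, evaluated on a grid; the value of *g* and all grid settings are illustrative assumptions, not taken from the text): the argmax of the unnormalized posterior already gives the MAP estimate, while the posterior mean (EAP) needs the evidence as a normalizing factor.

```python
import numpy as np

# Hypothetical 1-D example: Gaussian likelihood p(g|f) and Laplace prior p(f),
# evaluated on a grid. All numerical values are illustrative assumptions.
f = np.linspace(-5.0, 5.0, 2001)                 # grid over the unknown f
df = f[1] - f[0]                                 # grid step
g = 1.2                                          # observed datum (assumed)
log_unnorm = -0.5 * (g - f) ** 2 - np.abs(f)     # log p(g|f) + log p(f)

# MAP: the argmax is unchanged by the unknown constant p(g),
# so no normalization is needed.
f_map = f[np.argmax(log_unnorm)]

# EAP (posterior mean): here the evidence p(g) is required
# to turn the unnormalized posterior into weights that sum to one.
w = np.exp(log_unnorm - log_unnorm.max())        # stabilized unnormalized posterior
evidence = w.sum() * df                          # proportional to p(g)
f_eap = (f * w).sum() * df / evidence            # E[f | g]

print(f_map, f_eap)
```

On this example the MAP estimate lands at g − 1 (the soft-thresholded value induced by the Laplace prior), while the EAP estimate differs slightly because the posterior is asymmetric around its mode.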

This is the case in almost all *Non-Gaussian prior models*, *Non-Gaussian noise models*, and *Non-Linear forward models*. In this chapter, a few such cases are considered in more detail. Even in the Gaussian and linear case, which is the simplest one and for which we have analytical expressions for almost everything, the computational cost for large-scale problems leads us to search for approximate but fast solutions.
