**5. Hyperparameter estimation**

When applying Bayes' rule, the two main terms, the likelihood and the prior, depend on parameters which cannot be fixed in practical situations. We may thus want to estimate them from the data. In the Bayesian approach, this can be done easily by writing the joint posterior:

$$p(\mathbf{f}, \theta\_1, \theta\_2 | \mathbf{g}) = \frac{p(\mathbf{g} | \mathbf{f}, \theta\_1)\, p(\mathbf{f} | \theta\_2)\, p(\theta\_1)\, p(\theta\_2)}{p(\mathbf{g})} \tag{22}$$

where $p(\theta\_1)$ and $p(\theta\_2)$ are the prior probability laws assigned to $\theta\_1$ and $\theta\_2$, and often $p(\theta) = p(\theta\_1)\, p(\theta\_2)$.

**Figure 7.** *Illustration of the Bayesian approach for inverse problems with unknown hyperparameters.*

With $\theta = (\theta\_1, \theta\_2)$, we can then write more succinctly:

$$p(\mathbf{f}, \theta | \mathbf{g}) = \frac{p(\mathbf{g} | \mathbf{f}, \theta\_1)\, p(\mathbf{f} | \theta\_2)\, p(\theta)}{p(\mathbf{g})} \tag{23}$$
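As a minimal sketch of Eq. (23), the unnormalized log-posterior $\log p(\mathbf{f}, \theta | \mathbf{g})$ can be evaluated as the sum of the log-likelihood, the log-prior on $\mathbf{f}$, and the log-hyperpriors. The concrete choices below are illustrative assumptions, not part of the text: a linear Gaussian model $\mathbf{g} = \mathbf{H}\mathbf{f} + \text{noise}$ with $\theta\_1$ the noise variance, $\theta\_2$ the prior variance of $\mathbf{f}$, and exponential hyperpriors on both.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup (assumption): g = H f + noise, with a Gaussian
# likelihood and a zero-mean Gaussian prior on f.
n, m = 8, 10                      # unknowns, observations
H = rng.standard_normal((m, n))   # known forward operator
f_true = rng.standard_normal(n)
g = H @ f_true + 0.1 * rng.standard_normal(m)

def log_joint(f, theta1, theta2):
    """Unnormalized log p(f, theta1, theta2 | g), i.e. the sum of the
    terms in the numerator of Eq. (22):
    log p(g|f,theta1) + log p(f|theta2) + log p(theta1) + log p(theta2)."""
    if theta1 <= 0 or theta2 <= 0:
        return -np.inf  # variances must be positive
    r = g - H @ f
    log_lik = -0.5 * (r @ r) / theta1 - 0.5 * m * np.log(2 * np.pi * theta1)
    log_prior_f = -0.5 * (f @ f) / theta2 - 0.5 * n * np.log(2 * np.pi * theta2)
    # Exponential(1) hyperpriors on the variances (an arbitrary choice here).
    log_prior_theta = -theta1 - theta2
    return log_lik + log_prior_f + log_prior_theta
```

Any estimation scheme for $(\mathbf{f}, \theta)$, whether joint optimization or sampling, only needs this unnormalized quantity, since the evidence $p(\mathbf{g})$ does not depend on $(\mathbf{f}, \theta)$.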

This situation is illustrated in **Figure 7**. From here, several directions for estimation are possible:
