*5.3.1 EM and GEM algorithms*

To summarize these methods, we use the vocabulary of the main authors of the EM method, where *f* is considered the *hidden variable*, *g* the *incomplete data*, (*g*, *f*) the *complete data*, ln *p*(*g*|*θ*) the *incomplete-data log-likelihood*, and ln *p*(*g*, *f*|*θ*) the *complete-data log-likelihood*. The EM and GEM algorithms are then described by the following iterations:

• EM Iterative algorithm:

$$\begin{cases} \text{E-step}: & \mathcal{Q}\left(\boldsymbol{\theta}, \hat{\boldsymbol{\theta}}^{(k-1)}\right) = \text{E}\_{p\left(f|\mathbf{g}, \hat{\boldsymbol{\theta}}^{(k-1)}\right)}\left\{\ln p\left(\mathbf{g}, f|\boldsymbol{\theta}\right)\right\} \\\\ \text{M-step}: & \hat{\boldsymbol{\theta}}^{(k)} = \text{arg}\max\_{\boldsymbol{\theta}} \left\{\mathcal{Q}\left(\boldsymbol{\theta}, \hat{\boldsymbol{\theta}}^{(k-1)}\right)\right\} \end{cases} \tag{31}$$

• GEM (Bayesian) algorithm:

$$\begin{cases} \text{E-step}: & \mathcal{Q}\left(\boldsymbol{\theta}, \hat{\boldsymbol{\theta}}^{(k-1)}\right) = \text{E}\_{p\left(f|\mathbf{g}, \hat{\boldsymbol{\theta}}^{(k-1)}\right)}\left\{\ln p(\mathbf{g}, f|\boldsymbol{\theta}) + \ln p(\boldsymbol{\theta})\right\} \\\\ \text{M-step}: & \hat{\boldsymbol{\theta}}^{(k)} = \text{arg}\max\_{\boldsymbol{\theta}} \left\{\mathcal{Q}\left(\boldsymbol{\theta}, \hat{\boldsymbol{\theta}}^{(k-1)}\right)\right\} \end{cases} \tag{32}$$

These methods can be summarized in the following scheme:

$$
\boxed{p(f,\theta|\mathbf{g})} \to \boxed{\mathbf{EM},\mathbf{GEM}} \to \hat{\theta} \to \boxed{p\left(f|\hat{\theta},\mathbf{g}\right)} \to \hat{f}
$$
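To make the E-step/M-step alternation concrete, the following is a minimal sketch (not from the source) of the EM iteration (31) applied to a toy problem chosen for illustration: a two-component 1-D Gaussian mixture with unit variances, where the hidden variable *f* is the component label of each sample and *θ* collects the mixture weights and means. For the GEM variant (32), one would add the log-prior ln *p*(*θ*) to the quantity maximized in the M-step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Incomplete data g: samples from a two-component 1-D Gaussian mixture.
# The hidden variable f (component label of each sample) is not observed.
g = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(3.0, 1.0, 300)])

# Parameters theta = (weights w, means mu); variances fixed to 1 for brevity.
w = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])

def gaussian(x, m):
    """Unit-variance Gaussian density evaluated at x with mean m."""
    return np.exp(-0.5 * (x - m) ** 2) / np.sqrt(2.0 * np.pi)

for k in range(50):
    # E-step: posterior p(f | g, theta^(k-1)) gives responsibilities r;
    # Q(theta, theta^(k-1)) is the expected complete-data log-likelihood.
    r = w * gaussian(g[:, None], mu)        # shape (n_samples, 2)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: theta^(k) maximizes Q in closed form for this model.
    # (For GEM, a log-prior ln p(theta) would be added before maximizing.)
    w = r.mean(axis=0)
    mu = (r * g[:, None]).sum(axis=0) / r.sum(axis=0)

print(w, mu)  # weights approach (0.4, 0.6), means approach (-2, 3)
```

Once the iteration has converged to the estimate θ̂, the posterior `r` computed in the final E-step is exactly *p*(*f*|θ̂, *g*) in the scheme above, from which the estimate f̂ (here, the most probable component labels) is obtained.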
