**3. Affinity fusion deep multimodal subspace clustering**

For completeness, we provide a brief overview of the Deep Multimodal Subspace Clustering algorithm proposed in [4]. As noted earlier for DRoGSuRe, and similarly for Affinity Fusion Deep Multimodal Subspace Clustering (AFDMSC), the network is composed of three main parts: a multimodal encoder, a self-expressive layer, and a multimodal decoder. The output of the encoder contributes to a common latent space for all modalities. The self-expressiveness property, applied through a fully connected layer between the encoder and the decoder, results in one common set of weights for all the data sensing modalities. This marks a divergence from DRoGSuRe in defining the latent space: our proposed approach safeguards the private information $\mathbf{X}_p(t),\ t = 1, \ldots, T$ of each sensor individually, i.e., it dedicates more degrees of freedom to each sensor, in contrast to AFDMSC. The reconstruction of the input data by the decoder yields the following loss function to secure the proper training of the self-expressive network,
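The three-part structure described above (per-modality encoders, one fully connected self-expressive layer shared across modalities, per-modality decoders) can be sketched with simple linear maps. The function name, layer sizes, and linear stand-ins below are illustrative assumptions, not the architecture of [4]:

```python
import numpy as np

rng = np.random.default_rng(0)

def afdmsc_forward(X, enc, dec, W):
    """Hypothetical forward pass through the AFDMSC structure.

    X   : list of (d_t, N) data matrices, one per modality t
    enc : list of (k, d_t) encoder matrices (linear stand-ins for the encoders)
    dec : list of (d_t, k) decoder matrices (linear stand-ins for the decoders)
    W   : (N, N) self-expressive coefficients, shared by ALL modalities
    """
    # Per-modality latent codes L(t); all of them feed one common latent space.
    L = [E @ Xt for E, Xt in zip(enc, X)]
    # Shared self-expressive layer: each code is re-expressed as L(t) @ W.
    S = [Lt @ W for Lt in L]
    # Per-modality reconstructions X_r(t) from the self-expressed codes.
    X_r = [D @ St for D, St in zip(dec, S)]
    return L, S, X_r
```

Note that a single `W` multiplies every modality's latent code, which is precisely the shared-weight choice that distinguishes AFDMSC from the per-sensor representations kept by DRoGSuRe.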

**Figure 2.** *Deep multimodal subspace clustering diagram.*

*Scaling Subspace-Driven Approaches Using Information Fusion DOI: http://dx.doi.org/10.5772/intechopen.109946*

$$\min_{\mathbf{W}:\,\operatorname{diag}(\mathbf{W})=\mathbf{0}}\left\lVert\mathbf{W}\right\rVert_{2} + \frac{\gamma}{2}\sum_{t=1}^{T} \left\lVert\mathbf{X}(t) - \mathbf{X}_{r}(t)\right\rVert_{F}^{2} + \frac{\mu}{2}\sum_{t=1}^{T} \left\lVert\mathbf{L}(t) - \mathbf{L}(t)\mathbf{W}\right\rVert_{F}^{2},\tag{17}$$

where $\mathbf{W}$ represents the parameters of the self-expressive layer, $\mathbf{X}(t)$ is the input to the encoder, $\mathbf{X}_r(t)$ denotes the output of the decoder, and $\mathbf{L}(t)$ denotes the output of the encoder. $\mu$ and $\gamma$ are regularization parameters. An overview of the DMSC approach is illustrated in **Figure 2**.
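The loss in Eq. (17) can be evaluated directly from these quantities. The sketch below is a minimal numpy rendering under stated assumptions: the function name and the regularization values are illustrative, and $\lVert\mathbf{W}\rVert_2$ is read here as the entrywise $\ell_2$ (Frobenius) norm of $\mathbf{W}$:

```python
import numpy as np

def afdmsc_loss(W, X, X_r, L, gamma=1.0, mu=1.0):
    """Illustrative evaluation of the AFDMSC objective in Eq. (17).

    W      : (N, N) self-expressive coefficients, diag(W) = 0
    X, X_r : lists of (d_t, N) inputs / decoder reconstructions, one per modality
    L      : list of (k, N) encoder outputs, one per modality
    gamma, mu : regularization parameters (values here are placeholders)
    """
    assert np.allclose(np.diag(W), 0.0), "self-expressive layer has no self-connections"
    # ||W||_2, interpreted as the entrywise l2 (Frobenius) norm of W.
    reg = np.linalg.norm(W)
    # Reconstruction term: sum over modalities of ||X(t) - X_r(t)||_F^2.
    recon = sum(np.linalg.norm(Xt - Xrt, "fro") ** 2 for Xt, Xrt in zip(X, X_r))
    # Self-expressiveness term: sum over modalities of ||L(t) - L(t) W||_F^2.
    selfexp = sum(np.linalg.norm(Lt - Lt @ W, "fro") ** 2 for Lt in L)
    return reg + 0.5 * gamma * recon + 0.5 * mu * selfexp
```

With perfect reconstruction and $\mathbf{W} = \mathbf{0}$, only the self-expressiveness term survives, which makes the role of each term easy to check in isolation.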
