#### 5.2. Incomplete views

When constructing different views, we may find that for some views the information is incomplete: even if we know how to construct the views appropriately, we do not have enough data to do so. This situation is common in practice, since real-world data can rarely be guaranteed to be complete. The resulting imbalance between complete and incomplete views can cause serious problems, and incomplete views may even degrade the views whose information is complete. One possible remedy is to reconstruct the missing information from the other views.
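As a minimal sketch of reconstructing missing information from another view, assume a hypothetical setting where two views describe the same samples and the second view is missing for some of them; a least-squares map fitted on the fully observed samples can impute the missing rows:

```python
import numpy as np

# Hypothetical data: two views of the same 6 samples; view 2 is
# missing for the last 2 samples. We reconstruct the missing rows
# of view 2 by least-squares regression from view 1, fitted only on
# the samples where both views are observed.
rng = np.random.default_rng(0)
view1 = rng.normal(size=(6, 4))                       # complete view
W_true = rng.normal(size=(4, 3))
view2 = view1 @ W_true + 0.001 * rng.normal(size=(6, 3))

observed = np.array([True, True, True, True, False, False])

# Fit a linear map view1 -> view2 on the fully observed samples.
W, *_ = np.linalg.lstsq(view1[observed], view2[observed], rcond=None)

# Fill in the missing rows of view 2 from the corresponding rows of view 1.
view2_filled = view2.copy()
view2_filled[~observed] = view1[~observed] @ W
```

A linear map is of course the simplest choice; in practice the cross-view relationship may be nonlinear, in which case a more flexible regressor would play the same role.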

#### 5.3. Single-view to multi-view

In multi-view learning, researchers sometimes convert single-view data into multiple views and then apply multi-view algorithms to them. In practice this may yield good performance, but there is little theoretical work establishing its reliability. Since the original data has only a single view, an important question is whether it is necessary to complicate a simple task: we should weigh not only the final performance but also the trade-off between cost and benefit.
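One common heuristic for manufacturing views from single-view data is to partition the feature set into disjoint subsets and treat each subset as a view; the sketch below (a hypothetical illustration, not a method from the text) shows a random feature split. Whether such a split yields genuinely complementary views is exactly the open question raised above:

```python
import numpy as np

def split_into_views(X, n_views, seed=0):
    """Randomly partition the columns of X into n_views disjoint 'views'."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(X.shape[1])        # shuffle the feature indices
    chunks = np.array_split(perm, n_views)    # disjoint index subsets
    return [X[:, idx] for idx in chunks]

X = np.arange(20).reshape(4, 5)               # 4 samples, 5 features
views = split_into_views(X, n_views=2)
# Together the two views cover every original feature exactly once.
```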

#### 5.4. Deep learning in multi-view

Deep learning has shown remarkable performance in many fields. A common way to handle data composed of different types of sources is to combine them and feed the result into a single deep learning model, which often works well. Although multi-view learning seems a more principled way to handle such data, there is no evidence that it has a clear advantage over deep learning. Another issue is that applying deep learning within multi-view learning typically requires training a separate neural network for each view. This approach has two drawbacks. First, the number of networks grows with the number of views, so the computation becomes expensive when there are many views. Second, the different views are not unified during training.
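The "one network per view" pattern discussed above can be sketched with toy single-layer encoders (all names and dimensions here are illustrative assumptions): each view gets its own encoder, the encoded views are concatenated into a joint representation, and the parameter count grows linearly with the number of views, which is the scaling drawback mentioned in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_encoder(d_in, d_out):
    # One tiny per-view encoder: a random linear map plus tanh.
    W = rng.normal(size=(d_in, d_out)) * 0.1
    return lambda X: np.tanh(X @ W)

view_dims = [8, 5, 12]                        # three hypothetical views
encoders = [make_encoder(d, 4) for d in view_dims]   # one network per view

n = 10
views = [rng.normal(size=(n, d)) for d in view_dims]

# Fuse by concatenating the per-view codes: shape (10, 3 views * 4 dims).
joint = np.concatenate([enc(v) for enc, v in zip(encoders, views)], axis=1)
```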

New Approaches in Multi-View Clustering http://dx.doi.org/10.5772/intechopen.75598 215