**8. Conclusion**

In this chapter, we first introduced the background and explained why transfer learning matters for real-world learning tasks. We then gave a formal definition of transfer learning and delimited its scope. In particular, we focused on deep domain adaptation, a subset of transfer learning that addresses the situation where we have different but related datasets for a common learning task. Next, we categorized deep domain adaptation along three aspects: the specific implementation approaches, the learning methods, and the data space. In general, deep domain adaptation uses deep neural networks to reduce the shift between the source and target data distributions, so that knowledge obtained from the source domain can improve performance on the target task. Specifically, we discussed recent methods for domain adaptation from the deep learning community, including fine-tuning networks, adversarial domain adaptation, and data-reconstruction approaches. Finally, we introduced and summarized typical real-world applications in computer vision from recently published articles, from which we can see that unsupervised approaches based on GANs receive the most attention. In addition, we discussed several applications beyond image processing, and we noted that many deep domain adaptation methods initially proposed for images are also suitable for a variety of tasks in natural language processing, speech recognition, and time-series data processing.
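The fine-tuning strategy summarized above can be sketched in a few lines: freeze the pretrained feature extractor and train only a new classification head on labeled target-domain data. The snippet below is a minimal illustrative sketch in plain NumPy, not code from the chapter; the fixed random projection merely stands in for pretrained convolutional layers (e.g. from VGG or ResNet), and all names and values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a pretrained feature extractor: a fixed (frozen) random
# projection followed by a ReLU. In a real system these would be
# pretrained convolutional layers with their weights kept frozen.
W_frozen = rng.normal(size=(2, 16))

def features(X):
    return np.maximum(X @ W_frozen, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Small labeled target-domain set (toy 2-class problem).
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Fine-tuning step: only the new classification head (w, b) is updated.
w, b = np.zeros(16), 0.0
F = features(X)  # extracted once, since the backbone is frozen
for _ in range(1000):
    p = sigmoid(F @ w + b)
    grad_w = F.T @ (p - y) / len(y)  # gradient of the logistic loss
    grad_b = np.mean(p - y)
    w -= 0.1 * grad_w
    b -= 0.1 * grad_b

acc = np.mean((sigmoid(F @ w + b) > 0.5) == (y > 0.5))
print(f"target-domain training accuracy: {acc:.2f}")
```

Because the backbone is frozen, its features can be precomputed once, which is why fine-tuning a head is much cheaper than training the full network on the target domain.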


*Transfer Learning and Deep Domain Adaptation DOI: http://dx.doi.org/10.5772/intechopen.94072*




Although deep domain adaptation has been successfully used to solve various types of tasks, we should be careful when conducting transfer learning, as brute-force transfer may hurt the performance of the model. The applications above mainly concern homogeneous domain adaptation, which assumes that the source-domain and target-domain data are related and that deep neural networks can find a shared representation for the two domains. However, data collected from the real world do not always meet this requirement, so an important future challenge is how to apply heterogeneous domain adaptation methods effectively. From the above analyses, we also notice that transfer learning has so far been applied to a limited range of applications; further challenges therefore remain to be addressed, such as logical inference and tasks based on graph neural networks.
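One common way to check whether two domains are close enough for such a shared representation, and to quantify the domain shift that discrepancy-based adaptation methods try to reduce, is the kernel maximum mean discrepancy (MMD). The snippet below is an illustrative NumPy sketch, not code from the chapter; the Gaussian distributions, sample sizes, and `gamma` value are toy assumptions chosen only to show that MMD is near zero for matched domains and clearly positive for shifted ones.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel between rows of X and rows of Y.
    d2 = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=1.0):
    # Biased estimate of squared MMD between samples X (source) and Y (target).
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(200, 2))       # source domain
tgt_near = rng.normal(0.0, 1.0, size=(200, 2))  # same distribution
tgt_far = rng.normal(3.0, 1.0, size=(200, 2))   # shifted distribution

print(mmd2(src, tgt_near))  # small: domains match
print(mmd2(src, tgt_far))   # large: clear domain shift
```

A large MMD between source and target features is a warning sign that brute-force transfer may cause the negative transfer discussed above, while discrepancy-based adaptation networks minimize exactly this quantity on their hidden representations.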
