**6. Transfer learning and data augmentation**

As neural networks gain more layers and more connections between them, their complexity increases. Networks with many layers have demonstrated strong performance on several tasks, in many cases surpassing human performance [24]. However, training these complex networks requires a large amount of data to avoid overfitting and to improve their generalization power.
Research on networks with few layers trained on small datasets has likewise shown a tendency toward overfitting or underfitting [25]. In the medical AI field it is very difficult to obtain very large datasets, because labeling demands substantial specialized manpower: experts such as doctors, biologists, and geneticists must analyze the data and assign the labels used to train the algorithms [25, 26].

Two solutions have been adopted to deal with small datasets. The first is data augmentation, a group of techniques that creates virtual images from the original images in the dataset, for example by shifting the images, rotating them about their axis, or changing their contrast and brightness, to mention a few transformations [25, 27]; a sketch is given below. The second is transfer learning. Here a complex network is first trained on a massive dataset of common images (typically ImageNet: dogs, cats, etc.) and then fine-tuned, i.e., trained further on the specific medical dataset while updating only the weights of the last layers of the network. This usually gives better results than training the network from scratch on the specific dataset [25, 28]; a sketch follows the augmentation example.
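
To make data augmentation concrete, the following is a minimal sketch using PyTorch's torchvision transforms. The particular transforms, their parameter values, and the `train_dir` folder of labeled images are illustrative assumptions, not details taken from the text.

```python
import torchvision.transforms as T
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader

# Each epoch draws a different random variant of every original image,
# so a small dataset is effectively enlarged with "virtual" images.
train_transforms = T.Compose([
    T.RandomResizedCrop(224, scale=(0.8, 1.0)),   # random shift/zoom of position
    T.RandomRotation(degrees=15),                 # rotate about the image axis
    T.RandomHorizontalFlip(p=0.5),                # mirror the image
    T.ColorJitter(brightness=0.2, contrast=0.2),  # change brightness and contrast
    T.ToTensor(),
])

# "train_dir" is a hypothetical folder of labeled medical images.
train_set = ImageFolder("train_dir", transform=train_transforms)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
```

The transforms are applied on the fly at loading time, so no augmented copies need to be stored on disk.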

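The transfer-learning recipe can be sketched in the same framework. Here a ResNet-18 pretrained on ImageNet is taken as an example backbone, all pretrained layers are frozen, and only a new final layer is trained on the medical data; the architecture, the two-class head, and the hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a network pretrained on the massive ImageNet dataset of common images.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the pretrained weights so fine-tuning leaves them untouched...
for param in model.parameters():
    param.requires_grad = False

# ...and replace the final layer with a new head for the medical task
# (2 classes here, an illustrative assumption). Only this layer's weights
# are updated during fine-tuning.
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One fine-tuning step on a batch from the specific medical dataset
# (e.g., from the train_loader of the augmentation sketch).
def fine_tune_step(images, labels):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Unfreezing a few more of the last layers, rather than only the new head, is a common variant of the same idea.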