#### *4.1.2 Prototyping examples in Artificial Neural Networks*

To introduce a different perspective on the interpretability of Deep Learning models, Zintgraf and co-authors [50] conducted a study to simplify the black-box structure of Artificial Neural Networks (ANNs). They used the prototype-examples method, which provides tools for diagnosing trained ANNs. In general, ANNs model discrete decision-making processes and achieve high-performance prediction results.

The prototype examples may be computationally intractable, so a pre-determined normal distribution is imposed as a prior to prevent the proliferation of unreasonable prototype cases. The authors demonstrated their diagnostic tools on ANNs trained on two datasets. Moreover, it can often be a losing battle to describe precisely how ANNs operate mathematically; a more comprehensive preprocessing methodology could therefore be adopted from related work (e.g., the generative adversarial network proposed by Goodfellow et al. in [49]) to learn a richer prior. Furthermore, the experimental results and hypotheses were tested on only two datasets, so a more detailed analysis is needed before relying on the empirical results; this might be achieved by including richer data with class-imbalance issues and different types of features. Selection bias was another potential concern, because the data could contain measurement errors; the study could be extended with a larger set of data with more varied features. Finally, conclusions and interpretations were drawn through an inevitably subjective mechanism on the investigator's part, since the investigator had to judge whether the produced case studies satisfied their own expectations about the phenomena being modelled (e.g., decisions that could only be made as they arose). More objective standards for collecting and analysing such prototypes would make the process less biased. This would also enable investigators and analysts to understand the implications and weaknesses of using ANNs for discrete decision-making processes, which might enhance the rigour of the approach. Many healthcare tasks still require conventional prediction methods to be reconstructed (e.g., the identification of cancers), but different ideas for interpreting previous clinical records have already been proposed.
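To make the prototype-examples idea concrete, the sketch below shows one common way such prototypes can be generated: activation maximisation under a Gaussian prior, as the normal-distribution constraint above suggests. This is a minimal illustration, not the exact procedure of [50]; the model, input shape, step count, and the prior weight are all assumptions.

```python
import torch
import torch.nn as nn

def prototype_for_class(model, target_class, shape=(1, 3, 64, 64),
                        steps=200, lr=0.05, prior_weight=1e-4):
    """Generate a prototype input by activation maximisation: ascend the
    gradient of the target-class logit while a zero-mean Gaussian prior
    penalty discourages implausible, unreasonable prototypes."""
    x = torch.randn(shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logit = model(x)[0, target_class]
        prior = 0.5 * (x ** 2).sum()          # negative log-density of N(0, I), up to a constant
        loss = -logit + prior_weight * prior  # maximise the logit, stay near the prior
        loss.backward()
        opt.step()
    return x.detach()

# Toy usage with a stand-in classifier (hypothetical, for illustration only)
toy = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 10))
proto = prototype_for_class(toy, target_class=3)
```

The resulting input can then be inspected by the investigator as a "prototypic example" of what the trained network considers typical for that class.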

#### *4.1.3 Visualisation in deep learning*

For the time being, the prospect of an AI physician delivering a new prognosis without direct human intervention remains a significant distance away, more plausibly decades rather than a few years. Recent developments in several Deep Learning technologies have been powered by the steadily declining cost of computing and storage. That being said, realistic applications, including those embedded in smartphones and other electronic devices, have intensified the explainability issues of Deep Learning black boxes in resource-limited environments. Liu et al. in [51] introduced a leading solution to address the degradation of Binary Convolutional Networks caused by binarising the filters. They proposed Circulant Filters (CiFs) and a Circulant Binary Convolution (CBConv), trained through their proposed Circulant Backpropagation (CBP), to strengthen efficiency and tackle the limitations of binary convolutional functionality. CiFs can be effortlessly integrated into existing deep convolutional neural networks (DCNNs). Extensive experiments indicated that the performance gap between one-bit and full-precision DCNNs could be reduced by enlarging the variety and distribution of the filters. Zintgraf et al. in [52] identified several tools to test a model and understand how DCNNs arrive at a reliable outcome by using a visualisation method.
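As a loose illustration of the circulant-filter idea (not Liu et al.'s exact construction in [51]), the sketch below emulates a four-variant filter set by rotating a learned base filter and binarising each variant with `sign()` before convolving; the filter shapes and the rotation scheme are assumptions made for clarity.

```python
import torch
import torch.nn.functional as F

def circulant_binary_conv(x, base_filter):
    """Emulate a 4-variant circulant filter set (CiF-like) by rotating the
    base filter by 0/90/180/270 degrees, binarising each variant with
    sign(), and concatenating the resulting feature maps. Forward pass
    only; real binary networks train the binarisation with a
    straight-through estimator, omitted here."""
    outs = []
    for k in range(4):
        w = torch.rot90(base_filter, k, dims=(-2, -1))
        w_bin = torch.sign(w)  # 1-bit weights in {-1, +1} (0 only at exact zeros)
        outs.append(F.conv2d(x, w_bin, padding=1))
    return torch.cat(outs, dim=1)

# Toy usage: an 8-filter 3x3 bank over a 3-channel image
x = torch.randn(1, 3, 32, 32)
w = torch.randn(8, 3, 3, 3)
y = circulant_binary_conv(x, w)
print(y.shape)  # torch.Size([1, 32, 32, 32])
```

Generating several variants from one stored filter is what enlarges the variety of the filter bank without increasing the (already tiny) 1-bit storage cost.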

Overall, the existing explanatory Deep Learning approaches would need to be adapted to a more sophisticated longitudinal modelling strategy (rather than relying on a single multivariate distribution). This could produce better outcomes, for example when pixel values are estimated from their surrounding context, which remains reliable even when the distribution is heavily skewed. Given sufficient data, black-box machine learning models can appear overconfident, as though completely new health knowledge could be generated without user intervention. The black-box models of Deep Learning can be simplified in several respects. For example, when an object is detected, an image-detection model can be decomposed into specific attributes, including the shape, colour and texture of the image; the predictions can then be reduced to a mathematical procedure by checking the classification error and propagating it back to improve the model. In particular, in a world where decision making can be fully delegated to computer systems, confidence in AI systems will be hard to achieve. In future work, one approach that can be applied to the small-sized T2DM dataset is the use of Bayesian Neural Networks, which deal with uncertainty in the data and in the model structure by exploiting the advantages of both Neural Networks and Bayesian modelling. To conclude, AI can improve current methods of medical diagnosis in terms of interpretability, but the technology will need further evaluation before it is trusted by both patients and practitioners.
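As a hedged sketch of that Bayesian direction, Monte Carlo dropout is one inexpensive approximation to a Bayesian Neural Network that suits small tabular datasets such as the T2DM data discussed here; the architecture, feature count and sample count below are illustrative assumptions, not the chapter's implementation.

```python
import torch
import torch.nn as nn

class MCDropoutNet(nn.Module):
    """A small classifier whose dropout layers stay active at test time,
    giving a cheap approximation to a Bayesian Neural Network."""
    def __init__(self, n_features=10, p=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(), nn.Dropout(p),
            nn.Linear(32, 1),
        )

    def forward(self, x):
        return self.net(x)

def predict_with_uncertainty(model, x, n_samples=100):
    """Run repeated stochastic forward passes; the mean is the prediction
    and the standard deviation estimates the model's uncertainty."""
    model.train()  # keep dropout switched on at prediction time
    with torch.no_grad():
        preds = torch.stack([torch.sigmoid(model(x)) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)

# Toy usage on stand-in tabular data (10 hypothetical patient features)
net = MCDropoutNet(n_features=10)
mean, std = predict_with_uncertainty(net, torch.randn(5, 10))
```

A high standard deviation flags patients for whom the model is unsure, which is exactly the kind of uncertainty estimate a small clinical dataset demands before a prediction is trusted.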

In black-box models, it can be challenging to determine what is driving the visible patterns. Such models are problematic not only for their lack of transparency but also for possible biases inherited by the algorithms from clinicians' mistakes [53]. This issue stems from human error and biased sampling of the training data, as well as from underestimating the impact of the risk factors underlying a behaviour or pattern. In general, as observed in prior studies, it is difficult to obtain a performance enhancement while simultaneously explaining hidden factors. Lakkaraju in [54] suggested that there is a trade-off between patient personalisation (in a descriptive analysis) and prediction performance (in a predictive analysis). Generally speaking, an improvement in explainability is often only possible through a less accurate model, i.e., at the cost of the predictive accuracy of a black-box model [6]. There are a few research studies on predicting T2DM complications and on T2DM black-box models. However, studies that explain an unknown risk factor/latent phenotype by using a hybrid data-mining methodology (combining descriptive and predictive techniques) are rare in the literature. Therefore, this study attempts to open the AI black-box model by using both predictive and descriptive strategies.
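A minimal sketch of such a hybrid descriptive-plus-predictive strategy is shown below, using synthetic stand-in data (the study's actual features, cluster count and classifier are not specified here): an unsupervised step proposes latent phenotypes, and a supervised step then uses them to predict a complication.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in patient data: 8 hypothetical features, binary complication label
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Descriptive step: cluster patients into candidate latent phenotypes
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_tr)

# Predictive step: append the phenotype label as an extra feature
X_tr_h = np.column_stack([X_tr, km.predict(X_tr)])
X_te_h = np.column_stack([X_te, km.predict(X_te)])
clf = LogisticRegression(max_iter=1000).fit(X_tr_h, y_tr)
print("accuracy:", clf.score(X_te_h, y_te))

# Interpretation: per-cluster feature means describe each phenotype
print(km.cluster_centers_.round(2))
```

The descriptive half keeps the result inspectable (each phenotype is a readable feature profile), while the predictive half retains measurable accuracy, which is precisely the trade-off the hybrid strategy tries to balance.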
