The structure-learning algorithm employed depends on REVEAL, which identifies the optimal set of parents for every node of a network independently, based on the information-theoretic concept of mutual information. However, the two-slice temporal Bayesian network (2TBN) cannot be fully recovered by the use of REVEAL. Rijmen in [45] exploited an HMM to study the temporal pattern of symptom burden in brain tumour patients. He showed that the discovery of symptom experience over time is necessary for the treatment and follow-up of patients with symptom-specific interventions. In general, Bayesian learning methods can determine the network structure, how the network's variables should be represented, and the causal links among them. Moreover, they address the difficulty of quantifying causal relationships in terms of Conditional Probability Tables (CPTs). Witting focused on using hidden variables in a known structure [40], as knowledge of the latent variables in predictive modelling is important for an understanding of complex AI models. Discovering latent variables can potentially capture unmeasured effects from clinical data, simplifying complex networks of interactions and giving us a better understanding of disease processes. In addition, it can improve classification accuracy and boost user confidence in the classification models [46]. Elidan and co-authors in [47] emphasised the importance of the presence of hidden variables: they determined a hidden variable that interacted with observed variables and located it within the Bayesian Network structure. They also showed that networks without hidden variables are clearly less useful because of the increased number of edges needed to model all interactions, which causes overfitting. Despite the effectiveness of exploring trees of hidden variables that render all observable variables conditionally independent [48], these hidden variables were suboptimal when independencies existed among the observable variables. Overall, previous work on learning DBNs has addressed learning both network structures and parameters from clinical data sets, as well as learning parameters for a fixed network from incomplete data, in the presence of missing data and latent variables [20]. Much of the current literature on disease prediction argues that a complex AI model with many unexplainable hidden variables also has several serious drawbacks. Therefore, this chapter has chosen an AI DBN model to learn parameters and latent variables in order to predict complications. The next section emphasises the explainability of the proposed methodology in order to uncover the meaning behind the latent AI model.
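To make the REVEAL step concrete, the sketch below scores candidate parent sets for one node of a discretised time series by the mutual information between the parents' states at time t-1 and the node's state at time t. This is a minimal illustration rather than the chapter's implementation: the function names and the cap on parent-set size are assumptions, and REVEAL's original stopping rule (accepting a parent set once its mutual information reaches the child's entropy) is reduced here to simply picking the highest-scoring set.

```python
import numpy as np
from itertools import combinations

def mutual_information(x, y):
    """Empirical mutual information (in bits) between two discrete sequences."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            p_xy = np.mean((x == a) & (y == b))
            if p_xy > 0:
                p_x, p_y = np.mean(x == a), np.mean(y == b)
                mi += p_xy * np.log2(p_xy / (p_x * p_y))
    return mi

def reveal_parents(data, node, max_parents=2):
    """Score candidate parent sets at time t-1 against `node` at time t.
    `data` is a (timesteps, variables) array of small non-negative integers
    (discretised observations)."""
    child = data[1:, node]                     # the node's trajectory at time t
    best_set, best_score = (), -np.inf
    for k in range(1, max_parents + 1):
        for parents in combinations(range(data.shape[1]), k):
            # Encode each joint parent configuration as one discrete symbol.
            joint = np.ravel_multi_index(
                tuple(data[:-1, p] for p in parents),
                dims=tuple(int(data[:, p].max()) + 1 for p in parents))
            score = mutual_information(joint, child)
            if score > best_score:
                best_set, best_score = parents, score
    return best_set, best_score
```

For example, `reveal_parents(data, node=0)` would return the set of lagged variables most informative about variable 0's next state; because each node is scored independently, richer 2TBN structure is not recovered, which is the limitation noted above.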

**4. Black box models and AI in medicine**

Investigating unmeasured risk factors can improve the modelling of disease progression and thus enable clinicians to focus on early diagnosis and treatment of unexpected conditions. However, the overuse of hidden variables and a lack of explainability can lead to complex models that are not well understood (being black box in nature). Models need to be understood by clinicians to facilitate transparency and trust.

**4.1 Explainability in deep learning**

This stage outlines and discusses the limitations of the Deep Learning approaches that have been proposed, so far, to gain deeper insight into black box AI models. Medical AI systems such as Deep Learning have become ubiquitous because they provide high-performance prediction. Nevertheless, understanding their mechanisms has become a significant concern worldwide, the goal being to gain clinicians' and patients' trust. Several approaches have been proposed to address this concern.

*4.1.1 Google's novel approach*

Most medical algorithms, such as the AI Doctor proposed in [49], are designed to reproduce current problem-solving methods (e.g., the detection of cancers).

Google's AI Doctor demonstrates how such classifiers can be made to explain the predictions they generate, moving from conventional image-classification networks to a focused clinical application. The concept-attribution approach in AI Doctor offers several promising avenues for future work. In addition, concept assignment can help people to strengthen their skills and talents alongside a computer system that showcases superhuman effectiveness and efficiency. Google outlines the explanatory power of the approach under three principal assumptions/limitations. Firstly, it assumes that the hidden layers and artificial neurons offer something comprehensible, since most of the information in a deep neural network resides in its hidden layers. Secondly, it recommends that acknowledging the numerous hidden layers and understanding their design at a meta-level would lead to more in-depth modelling. Finally, to capture how nodes become active, it considers groups of interconnected neurons that trigger at the same time and place. These principles are adopted instead of explaining the structural nature of each neuron in each network, because stratifying a network into categories of interconnected neurons makes its configurations more abstractable. This remains the main weakness of black box models.

One of the most highlighted approaches is Google's attempt to resolve these explainability issues by enabling a human-like description of the internal state of a deep network through Concept Activation Vectors (CAVs). While medical systems are mostly designed to reproduce current decision-making methods, such as the classifiers used in the detection of cancers, Google has claimed that this novel strategy can interpret existing clinical data. Although Google has claimed that CAVs relate directly to human-anticipated concepts, drawing conclusions about the decision-making process still needs to consider the human need for a higher level of understandability.
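As a rough illustration of the CAV idea (following the publicly described TCAV method), the sketch below fits a linear classifier separating the hidden-layer activations of concept examples from those of random examples; the unit normal to the decision boundary is the CAV, and the fraction of inputs whose class-score gradient points along it gives a concept-sensitivity score. This is a minimal sketch rather than Google's implementation, and it assumes the activation and gradient arrays have already been extracted from the network.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def compute_cav(concept_acts, random_acts):
    """Fit a linear boundary between a concept's hidden-layer activations and
    random activations; the CAV is the unit normal to that boundary."""
    X = np.vstack([concept_acts, random_acts])
    y = np.r_[np.ones(len(concept_acts)), np.zeros(len(random_acts))]
    cav = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]
    return cav / np.linalg.norm(cav)

def tcav_score(logit_grads, cav):
    """Fraction of inputs whose class-logit gradient (taken w.r.t. the same
    hidden layer) has a positive directional derivative along the CAV."""
    return float(np.mean(logit_grads @ cav > 0))
```

A score near 1 would suggest the class prediction is consistently sensitive to the human-defined concept, which is precisely the "human-like description" of the network's internal state that CAVs aim to provide.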

Nevertheless, cardiac specialists have been critical of the conclusions derived by Google in the clinical domain. The optimistic view is that, given the proper information, AI might create innovative, unique healthcare insights without human intervention. Unfortunately, this new approach can only be established on extensive and adequate datasets, which is presumably part of why Google's benchmark research proposal involves capturing the detailed history of a population of 100,000 patients across four years. The investigation carried out by Google did not necessarily indicate that the suggestion was entirely far-fetched; image classifiers, for example, could be applied to low-level structures. The central assumption is to treat a neural network as additional assistance, which raises issues related to its internal representation. As a result, clinicians commented on these deep explanatory networks and questioned the hypotheses, stating that although AI algorithms and Deep Learning could improve current prediction methods in the clinical domain, the research would not be trustworthy unless it were assessed with caution and a broader range of diseases were explored. Difficulties arose when an attempt was made to implement these principles and assumptions, and it became evident that the approach was overconfident and yet to be trusted.

Other work has probed DCNNs by extending the variety and distribution of their learned filters. Zintgraf et al. in [52] identified numerous tools to test the model and understand how DCNNs could produce a given prediction.
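One family of such tools measures how a prediction changes when parts of the input are removed; Zintgraf et al.'s prediction difference analysis is a principled version of this idea. The sketch below shows only the simpler occlusion variant as an illustration, with `model_fn` standing in for any callable that returns class probabilities (a hypothetical name, not from the chapter).

```python
import numpy as np

def occlusion_map(model_fn, image, target_class, patch=8):
    """Crude relevance map: grey out one patch at a time and record how much
    the predicted probability of `target_class` drops."""
    h, w = image.shape[:2]
    base = model_fn(image[None])[0, target_class]   # unperturbed prediction
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - h % patch, patch):
        for j in range(0, w - w % patch, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = image.mean()
            heat[i // patch, j // patch] = base - model_fn(occluded[None])[0, target_class]
    return heat  # large values mark regions the classifier relied on
```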

Overall, the existing explanatory Deep Learning approaches would need to be adapted to a more sophisticated longitudinal modelling strategy (rather than a single multivariate distribution) to obtain better outcomes, for example where pixel values can be estimated reliably from their surrounding context. Given sufficient data for the black box models, machine learning has seemed overconfident that completely new health knowledge could be generated without user intervention. The black box models of Deep Learning can be unpacked in several respects: for example, when an object is detected, an image-detection system can trace the prediction back to specific attributes of the image, including shape, colour and texture, and then reduce the prediction to a mathematical procedure by checking the classification error and propagating it backwards to improve performance. In particular, in a world where it is possible to fully delegate decision making to computer systems, confidence in AI systems will be hard to achieve. In future work, one approach that can be applied to the small-sized T2DM dataset is the use of Bayesian Neural Networks, which deal with uncertainty in the data and the model structure by exploiting the advantages of both Neural Networks and Bayesian modelling. To conclude, AI can improve current methods of medical diagnosis in terms of interpretability, but the technology would need further evaluation to be trusted by both patients and practitioners.
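As a pointer to what such a Bayesian treatment might look like, the sketch below uses Monte Carlo dropout (after Gal and Ghahramani) as a cheap approximation to a Bayesian Neural Network: dropout is kept active at prediction time, and the spread of repeated stochastic forward passes is read as model uncertainty. This is an assumed stand-in rather than the chapter's design; the network shape and feature count are placeholders.

```python
import torch
import torch.nn as nn

class MCDropoutNet(nn.Module):
    """Small binary classifier whose dropout layer stays active at test time,
    giving a cheap approximation to a Bayesian Neural Network."""
    def __init__(self, n_features, n_hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, n_hidden), nn.ReLU(), nn.Dropout(p=0.5),
            nn.Linear(n_hidden, 1))

    def forward(self, x):
        return torch.sigmoid(self.net(x))

def predict_with_uncertainty(model, x, n_samples=100):
    """Average many stochastic forward passes with dropout left on; the spread
    of the samples is read as model uncertainty, useful on small datasets."""
    model.train()  # keeps dropout active during prediction
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)
```

On a small T2DM cohort, a wide standard deviation for a given patient would flag a prediction the clinician should not take at face value, which is exactly the kind of calibrated caution argued for above.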

In black box models, it can be challenging to determine what is coordinating the visible patterns. Such models are problematic not only for their lack of transparency but also for possible biases inherited by the algorithms from clinicians' mistakes [53]. This issue stems from human error and biased sampling of the training data, as well as from underestimating the impact of the risk factors underlying a behaviour or pattern. In general, as observed from prior studies, it is difficult to obtain a performance enhancement while simultaneously trying to explain hidden factors.

Lakkaraju in [54] suggested that there is a trade-off between patient personalisation (in descriptive analysis) and prediction performance (in predictive analysis). Generally speaking, an improvement in explainability is often achieved through a less accurate model, or at the cost of the predictive accuracy of a black box model [6]. There are quite a few research studies on predicting T2DM complications and on T2DM black box models; however, studies explaining an unknown risk factor/latent phenotype by using a hybrid data mining methodology (combining descriptive and predictive analyses) are rare in the literature. Therefore, this study attempts to open the AI black box model by using both predictive and descriptive strategies.

**5. Patient personalisation and explanation**

Most previously published studies in diabetes prediction have tended to treat all patients as one integrated database rather than separating patients [16]. It can be challenging to stratify patients based on their longitudinal data in order to determine what is triggering the visible patterns that may be specific to one cohort of patients. There is some research, such as [55], that assesses disease prediction performance under different IDA techniques; for example, the onset of the disease is modelled in [56], while other studies focus on patient modelling [57]. The approach described in this chapter aims to personalise patients by using unsupervised methodologies to group time-series patient data.
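The chapter does not spell out which unsupervised method performs this grouping, so the sketch below stands in with k-means over per-patient summary features of the longitudinal measurements; the array layout, the choice of features, and the number of groups are all assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def stratify_patients(trajectories, n_groups=3):
    """Group patients by summary features of their longitudinal measurements.
    `trajectories` is a (patients, visits, variables) array, e.g. HbA1c, BMI
    and blood pressure recorded at successive check-ups."""
    means = trajectories.mean(axis=1)                        # average level
    trends = trajectories[:, -1, :] - trajectories[:, 0, :]  # overall change
    features = StandardScaler().fit_transform(np.hstack([means, trends]))
    return KMeans(n_clusters=n_groups, n_init=10).fit_predict(features)
```

The resulting group labels can then be inspected per cohort, so that a pattern visible in the whole population can be traced back to the subgroup of patients actually driving it.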

The proposed descriptive strategy in this chapter has been regarded as providing a useful, reliable outcome through the use of a visualisation method.
