**2. Literature review: intelligent data analysis in complex disease progression modelling**

This article reviews the current literature on some of the most common AI methodologies for this task, including probabilistic modelling, association rule mining, and latent variable discovery. Intelligent Data Analysis (IDA) is a subfield of AI focused on data analysis and modelling. IDA methods are known to combine the advantages of modern data analytics, classical statistics, and the expertise of scientists and domain experts [1–3], and they have already proved successful in clinical modelling [4]. A large and growing body of literature has investigated IDA approaches, which have shown excellent results in classifying cross-sectional clinical data, and there has also been substantial work modelling longitudinal data with IDA techniques. Many studies have attempted to find automated ways of helping clinicians predict disease progression [5]. However, there is still an urgent need to improve these models so that they account for person-to-person variability in disease progression and explicitly model the time-varying nature of the disease.
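As a minimal illustration of one of these methodologies, the sketch below mines simple one-to-one association rules from transaction-style patient records using the standard support and confidence measures. The records, condition names, and thresholds are invented for illustration and are not drawn from any study cited here.

```python
from itertools import combinations

# Hypothetical records: the set of comorbidities observed per patient.
# All condition names and values are illustrative only.
records = [
    {"hypertension", "obesity", "retinopathy"},
    {"hypertension", "obesity"},
    {"obesity", "retinopathy"},
    {"hypertension", "obesity", "neuropathy"},
    {"hypertension", "retinopathy"},
]

def support(itemset, records):
    """Fraction of records containing every item in the itemset."""
    return sum(itemset <= r for r in records) / len(records)

def confidence(lhs, rhs, records):
    """P(rhs present | lhs present): the usual rule-confidence measure."""
    return support(lhs | rhs, records) / support(lhs, records)

# Enumerate one-to-one rules exceeding minimum support and confidence.
items = set().union(*records)
rules = []
for a, b in combinations(sorted(items), 2):
    for lhs, rhs in (({a}, {b}), ({b}, {a})):
        if support(lhs | rhs, records) >= 0.4 and confidence(lhs, rhs, records) >= 0.7:
            rules.append((lhs, rhs, confidence(lhs, rhs, records)))
```

On this toy cohort the procedure keeps only the two symmetric rules linking hypertension and obesity; real rule miners such as Apriori prune the search space rather than enumerating pairs exhaustively.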

*Predicting Type 2 Diabetes Complications and Personalising Patient Using Artificial… DOI: http://dx.doi.org/10.5772/intechopen.94228*

For many clinical problems, the underlying structure of unmeasured variables may play an essential role in the progress of the disease, yet this remains a relatively unexplored area. Identifying these unmeasured variables as hidden, or latent, variables is key. What is more, understanding the semantics behind these unmeasured risk factors can improve the understanding of disease mechanisms and thus support better clinical decision making. Interpreting these latent variables is complicated, however, as they may represent many different types of unmeasured information, such as social deprivation, missing clinical data, environmental factors, time-based information, or some combination of these. To gain trust in any AI model, it is mandatory to understand and explain the influencing factors of disease that guide predictions or decisions, because clinicians expect to understand AI diagnoses before acting on them. There is a great deal of debate over the importance of explanation in AI models inferred from health data. In particular, a balance must be struck between the accuracy of complex deep models such as convolutional neural networks (the predictive strategy) and the transparency of models that aim to represent data in a more human-interpretable way, such as expert systems (the descriptive strategy).
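To make the idea of latent variable discovery concrete, the sketch below simulates several observed clinical measurements driven by a single hidden per-patient factor (standing in for something unrecorded, such as a deprivation score), then recovers an estimate of that factor as the first principal component of the data. The setup, loadings, and noise level are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: one unmeasured factor drives four observed measurements.
n_patients = 500
latent = rng.normal(size=n_patients)             # hidden per-patient factor
loadings = np.array([0.9, 0.8, 0.7, 0.6])        # strength of each measurement's link to it
noise = rng.normal(scale=0.3, size=(n_patients, 4))
observed = latent[:, None] * loadings + noise    # the data a model would actually see

# Recover a one-dimensional latent estimate as the first principal component.
centred = observed - observed.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
estimate = centred @ vt[0]

# The estimate should correlate strongly (up to sign) with the true factor.
corr = abs(np.corrcoef(estimate, latent)[0, 1])
```

Recovering the factor is the easy part; as the text notes, the harder problem is interpreting what such a component semantically represents.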

A combination of explainable and deep strategies, rather than either one alone, is likely to have better prognostic value. Furthermore, to obtain a more accurate and explainable prediction of progression, predictive models need to be personalised, matching an individual patient to historical data by identifying patient subgroups.
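One simple way to sketch this subgroup-based personalisation is to cluster historical patients and let a new patient inherit the typical outcome of the subgroup they most resemble. The cohort below is synthetic, the single feature and outcome values are invented, and the one-dimensional k-means is a deliberately minimal stand-in for the richer subgrouping methods discussed in the literature.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical cohort: two subgroups with different progression rates.
# The feature could stand for, say, a baseline lab value; all numbers are synthetic.
fast = rng.normal(8.5, 0.3, size=50)   # subgroup with faster progression
slow = rng.normal(6.5, 0.3, size=50)   # subgroup with slower progression
features = np.concatenate([fast, slow])
progression = np.concatenate([np.full(50, 1.0), np.full(50, 0.2)])

# Minimal 1-D k-means (k=2): assign each patient to the nearest centroid,
# then move each centroid to its cluster mean, repeated a fixed number of times.
centroids = np.array([features.min(), features.max()])
for _ in range(20):
    labels = np.abs(features[:, None] - centroids).argmin(axis=1)
    centroids = np.array([features[labels == k].mean() for k in range(2)])

def predict(x):
    """Personalised prediction: mean outcome of the nearest historical subgroup."""
    k = np.abs(x - centroids).argmin()
    return progression[labels == k].mean()
```

A patient resembling the first subgroup receives its higher predicted progression, and vice versa; the subgroup assignment itself also gives the clinician an explainable handle on why the prediction differs between patients.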
