**5. The challenge of bias and fairness in AI-based services: the cases of financial and health sectors**

As big data involves vast amounts of data reflecting society, AI-driven models can perpetuate biases that already exist in society and are encoded in such datasets. Bias can lead to unfair outcomes for certain groups of people, such as women and minorities.

To be effective and avoid ethical pitfalls, companies need to ensure that AI is not built on biases that could lead to ethically charged decision-making or cause the AI to malfunction in some way. In a report by NTT Data that surveyed eight business sectors in the United States [20], about one-fifth of respondents whose companies used AI models reported that those models had produced suggestions reflecting bias against a particular vulnerable group. Organizations cannot afford to waste resources on technology investments gone wrong; they must therefore pivot to focus on ethics and other pressing issues.

*Human Factor on Artificial Intelligence: The Way to Ethical and Responsible Economic Growth DOI: http://dx.doi.org/10.5772/intechopen.111915*

The independent high-level expert group on artificial intelligence [9] defines bias as "*an inclination of prejudice toward or against a person, object, or position.*" Bias drives the predictive value of most risk prediction models (wanted bias), but it can also be detrimental to them. In certain cases, bias can result in discriminatory and/or unfair outcomes, labeled in this document as unfair bias.

Bias in AI models can arise from all the steps of the machine learning algorithm pipeline. These include bias in training data, algorithmic bias, bias in logic-based AI, bias arising from self-learning and adaptation, or bias arising from personalization. Bias can be caused by several factors such as underrepresented populations, erroneous data, outlier data, and biased human decision-making in data collection or labeling. Bias can also arise from limited contexts in which a system is used, resulting in a lack of opportunity to generalize it to other contexts.
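One of the simplest checks against underrepresented populations, mentioned above as a source of bias, is to compare group shares in the training sample against a reference population. The following is a minimal sketch; the function name, sample, and the assumed 50/50 reference split are hypothetical:

```python
from collections import Counter

def representation_gap(sample_groups, population_shares):
    """Difference between each group's share in the training sample
    and its share in a reference population; large negative values
    flag underrepresentation."""
    n = len(sample_groups)
    counts = Counter(sample_groups)
    return {g: counts.get(g, 0) / n - share
            for g, share in population_shares.items()}

# Hypothetical training sample vs. an assumed 50/50 population split
sample = ["male"] * 70 + ["female"] * 30
gaps = representation_gap(sample, {"male": 0.5, "female": 0.5})
print(gaps)  # the female share is 20 points below the reference
```

A gap of this size would prompt resampling, reweighting, or targeted data collection before training.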

Addressing bias in AI services should result in fairness in their implementation. This means ensuring that the effect a model has on individuals and groups is free of unfair bias, discrimination, and stigmatization. Popular notions of fairness include demographic parity (also called statistical parity; e.g., women and men have the same chance of getting a loan), equalized odds (e.g., among creditworthy applicants, women and men are approved at the same rate, and likewise among non-creditworthy applicants), and calibration (e.g., among applicants assigned the same risk score, women and men repay at the same rate).
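The fairness notions above can be made concrete by computing per-group rates from a model's decisions and observed outcomes. The sketch below uses hypothetical toy data and function names of our own; it computes the selection rate (demographic parity), the true-positive rate (one side of equalized odds), and precision (a simple calibration proxy):

```python
def group_rates(decisions, outcomes, groups):
    """Per-group selection rate (demographic parity), true-positive rate
    (one side of equalized odds), and precision (a calibration proxy)."""
    stats = {}
    for g in set(groups):
        idx = [i for i, gr in enumerate(groups) if gr == g]
        sel = [decisions[i] for i in idx]
        # decisions for members whose true outcome was positive
        tp_pool = [decisions[i] for i in idx if outcomes[i] == 1]
        # true outcomes among those the model approved
        approved = [outcomes[i] for i in idx if decisions[i] == 1]
        stats[g] = {
            "selection_rate": sum(sel) / len(sel),
            "tpr": sum(tp_pool) / len(tp_pool) if tp_pool else None,
            "precision": sum(approved) / len(approved) if approved else None,
        }
    return stats

# Hypothetical decisions (1 = loan granted) and outcomes (1 = would repay)
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
outcomes  = [1, 0, 0, 1, 1, 0, 1, 1]
groups    = ["men"] * 4 + ["women"] * 4
print(group_rates(decisions, outcomes, groups))
```

In this toy data, men are selected at three times the rate of women, so the model would fail a demographic-parity check even before the other criteria are examined. Which of the three criteria to enforce is a policy choice; they cannot in general all be satisfied at once.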

Sources of bias are likely to be present in the data used to train predictive models in financial and healthcare services. It is therefore crucial to identify these sources of bias and implement effective measures to mitigate their impact. To this end, we present a guide of best practices aimed at minimizing undesired bias and ensuring the reliability and validity of the models. To frame this in context, we describe two domains, each concerning the population as a whole, that are highly useful for illustrative purposes.

Firstly, AI is becoming an essential tool for financial services such as fraud detection, risk prevention, credit scoring, loan approval, and insurance underwriting. Moreover, given the nature of the data held by banks, AI plays a significant role in processing data to predict the future of the economy and the banking industry.

Secondly, regarding healthcare, AI systems are used for population and individual segmentation, personalized screening, diagnosis, massive data treatment, and personalized interventions. Technology derived from wearable devices can be applied to disease management and monitoring. In addition, artificial intelligence has the potential to revolutionize biomedical research and drug development, including immunological therapies for rare diseases and less frequent types of cancer. Furthermore, AI is effectively applied in clinical management, such as prediction of demand and intelligent use of healthcare resources, optimization of operating rooms, or intelligent scheduling. The adoption of AI in the healthcare field presents a series of HCAI particularities. On the one hand, it demands a level of data privacy and algorithmic robustness that exceeds those of other domains. On the other hand, it entails the need to establish clear accountability while not discouraging medical professionals from utilizing these tools [19].

#### **5.1 Illustrative case in financial sector**

Thinking from the big picture of ethical AI in financial services, a model for automating credit decisions, the results of which affect human lives and are publicly visible, should be free of unwanted bias and meet requirements for model transparency. In some legislations, credit customers even have the explicit right to request an explanation of the reasons behind credit decisions pertaining to themselves, whether the actual decision was positive or negative [21].

A typical example: suppose that the training data for a credit pricing model show that men have higher salaries on average than women, which is in fact a societal reality. Any bank should be aware that this gender bias can arise in models even though gender itself is not an explanatory variable in the model. A higher loan rejection rate for women might be statistically justifiable from the training data (and might even comply with an equalized odds definition), but a bank should reject such a model for ethical (or reputational) reasons. Another known example of bias has been racial discrimination in mortgage approvals [22]. Minority applicants were found to have a significantly lower chance of receiving algorithmic approval for a mortgage from race-blind government automated underwriting systems than Caucasians.
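The salary example can be illustrated with a small simulation. The numbers, threshold, and distributions below are purely hypothetical; the point is that a decision rule that never consults gender still reproduces the disparity through a correlated proxy (salary):

```python
import random

random.seed(0)

# Assumed synthetic salaries: men drawn from a higher-mean distribution,
# mirroring the societal gap described above (illustrative figures only)
men   = [random.gauss(52000, 8000) for _ in range(1000)]
women = [random.gauss(46000, 8000) for _ in range(1000)]

THRESHOLD = 48000  # hypothetical approval cutoff; gender is never an input

approve = lambda salary: salary >= THRESHOLD
men_rate   = sum(map(approve, men)) / len(men)
women_rate = sum(map(approve, women)) / len(women)

print(f"approval rate, men:   {men_rate:.2f}")
print(f"approval rate, women: {women_rate:.2f}")
# The gap appears even though the rule itself is "gender-blind"
```

This is why removing the protected attribute from the feature set is not, by itself, a sufficient bias mitigation: disparity checks must be run on outcomes broken down by the protected attribute, even when it is excluded from the model.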

#### **5.2 Illustrative case in healthcare sector**

Regarding unwanted bias and fairness, while AI is improving diagnosis and treatments and lowering the cost of discovering and developing drugs, it has also introduced biases detrimental to demographic minorities in automated decision-making. These biases are partly due to the disproportionate overrepresentation of Caucasian and higher-income patients in electronic health record datasets [22]. If training data consist predominantly of medical records from white males, an AI clinical decision support system may perform poorly or be less accurate when making diagnoses or treatment recommendations for women or people from racial minorities. This is because the AI model has not been trained on sufficiently heterogeneous data to account for the diverse ways diseases present in different demographic groups. Such bias can have profound consequences for patient outcomes and exacerbate existing healthcare disparities.
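A basic safeguard against the subgroup underperformance described above is to report model accuracy separately for each demographic group rather than as a single aggregate figure. A minimal sketch, with hypothetical audit data and a function name of our own:

```python
def accuracy_by_group(y_true, y_pred, groups):
    """Model accuracy computed separately per demographic group;
    a large gap between groups signals subgroup underperformance
    that a single overall accuracy figure would hide."""
    out = {}
    for g in set(groups):
        pairs = [(t, p) for t, p, gr in zip(y_true, y_pred, groups) if gr == g]
        out[g] = sum(t == p for t, p in pairs) / len(pairs)
    return out

# Hypothetical diagnoses (1 = disease present) for two demographic groups
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 1, 0, 1, 0]
grp    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(accuracy_by_group(y_true, y_pred, grp))  # group B underperforms sharply
```

Here the overall accuracy (5/8) masks the fact that the model is perfect for group A and near-random for group B, which is exactly the pattern produced by training on data dominated by one group.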

In view of the risk of undesired bias in AI-based products for the financial and health sectors, we propose a good-practice guide on bias and fairness in AI. These principles are defined so that they can be applied across all business sectors, as a one-size-fits-all guide.
