#### **6.2 Include testing and mitigating bias as a routine in the development and deployment of algorithms**

To ensure that an algorithm is unbiased and fair, it is important to thoroughly evaluate its inputs and outputs against known biases and check for fairness in decision-making. This can be achieved by analyzing the data and examining the algorithm's decision-making process to identify any potential biases. Once identified, strategies can be implemented to mitigate these biases. These strategies may include adjusting the algorithm parameters, adding more data, or changing the data collection process.

Open-source tools such as IBM AI Fairness 360 [26] or the Holistic AI library can be useful in this process. These tools include comprehensive sets of metrics for assessing bias in both datasets and models, together with explanations of those metrics and algorithms for mitigating the bias they detect. Using these tools and techniques, data scientists can help ensure that their algorithms are unbiased and fair, promoting greater equity and inclusion in decision-making.
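As a brief illustration, the sketch below uses the AI Fairness 360 toolkit to compute one bias metric (disparate impact) on a toy dataset and to mitigate it with the Reweighing pre-processing algorithm. The column names, group definitions, and data are hypothetical placeholders, not a real use case.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Hypothetical tabular data: 'sex' is the protected attribute, 'approved' the label.
df = pd.DataFrame({
    "sex":      [0, 0, 1, 1, 0, 1, 1, 0],
    "income":   [30, 45, 50, 60, 35, 55, 40, 48],
    "approved": [0, 1, 1, 1, 0, 1, 0, 1],
})
dataset = BinaryLabelDataset(
    df=df, label_names=["approved"], protected_attribute_names=["sex"]
)
privileged, unprivileged = [{"sex": 1}], [{"sex": 0}]

# Measure bias in the raw data (a disparate impact of 1.0 means parity).
metric = BinaryLabelDatasetMetric(
    dataset, privileged_groups=privileged, unprivileged_groups=unprivileged
)
print("Disparate impact before mitigation:", metric.disparate_impact())

# Mitigate by reweighing the examples before model training.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
reweighed = rw.fit_transform(dataset)  # instance weights now balance the groups
```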

#### **6.3 Build multidisciplinary and collaborative teams in charge of the AI models**

A multidisciplinary team, in which IT staff cooperate closely with experts from relevant sectors such as finance or health, could be one way to balance the tradeoff between a model's predictive performance and its explainability, and to respond to legal and regulatory requirements for auditability and transparency. There may be a need to build bridges between disciplines that currently work in silos, such as deep learning and symbolic approaches, the latter involving rules created through human intervention [27].

In the literature, this approach is known as collaborative machine learning: the development of machine learning models by multiple stakeholders working together to create, test, and deploy them. The stakeholders can include data scientists, domain experts, and/or end users. A team composed of individuals with diverse backgrounds brings varied perspectives to the analysis of the data, reducing the likelihood that biases in the datasets, or in the methods used to develop the predictive models, are overlooked. This underscores the relevance of diversity in mitigating potential biases across the whole pipeline of a predictive model. Similarly, if end users are surveyed for feedback on the model, they can surface issues or biases that were not apparent to the developers and help inform decisions about the model.

#### **6.4 Monitor and review**

Continuously monitor and review the algorithm's performance to ensure it remains unbiased over time, and update it as needed. The goal is to identify potential issues or biases as they arise and correct them before they cause harm or inaccuracies. Continuous testing of AI models is indispensable for identifying and correcting model drift. Model drift occurs when a model's performance deteriorates over time because of changes in the data it is processing or other external factors. Capturing and correcting drift early helps maintain the algorithm's accuracy and avoid unintended consequences.
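One simple way to operationalize such monitoring is to compare the distribution of a feature (or of the model's scores) in recent production data against a training-time reference. The sketch below is a minimal illustration using a two-sample Kolmogorov-Smirnov test from SciPy on synthetic data; the significance threshold and the data are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference: np.ndarray, live: np.ndarray, alpha: float = 0.05) -> bool:
    """Flag drift when the live distribution differs from the reference.

    Uses a two-sample Kolmogorov-Smirnov test; alpha is the significance level.
    """
    _statistic, p_value = ks_2samp(reference, live)
    return p_value < alpha

# Synthetic example: the production window has a shifted mean, simulating drift.
rng = np.random.default_rng(seed=0)
training_scores = rng.normal(loc=0.0, scale=1.0, size=5_000)
production_scores = rng.normal(loc=0.4, scale=1.0, size=5_000)

if detect_drift(training_scores, production_scores):
    print("Drift detected: schedule a review or retrain the model.")
```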

The frequency of review and validation may need to be defined depending on the complexity of the model, the pace of new data generation, and the stakes of the decisions the model informs. For instance, an algorithm used to make high-stakes decisions about people may require more frequent reviews than one used for less critical tasks.

#### **6.5 Ensure AI transparency and governance**

Be transparent about the algorithm and its limitations. Explain how it works and provide clear explanations of how decisions are made. Transparency in AI requires the availability of model and system documentation that is understandable and trustworthy, allowing a consumer of the model to determine whether it is appropriate for their situation. AI governance allows companies to specify and enforce policies describing how an AI model or service should be constructed and deployed. This can prevent undesirable situations such as a model being trained on unapproved datasets, models exhibiting biases, or models showing unexpected performance variations. Several methodologies have been developed to ensure accountability and transparency in the development of AI models and systems; among them, IBM FactSheets [28] and the model card framework [29] are widely known. These methodologies share a common approach of using document templates that mimic 'nutrition labels,' containing basic information on the purpose of the model, data selection and preparation, algorithm selection and adjustment, and testing for accuracy, bias, or privacy risks. Templates can be customized to suit a diverse range of stakeholders, including risk officers, end users, affected subjects, or bank officers, among others. Additionally, instructional materials, guidelines, and case studies are available for financial AI products and medical decision systems.
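To make the 'nutrition label' idea concrete, the sketch below defines a minimal, hypothetical model-card-style record. Real frameworks such as IBM FactSheets [28] and model cards [29] define richer, standardized schemas; the fields and values here are illustrative only.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """A minimal 'nutrition label' for a model, following the themes above."""
    model_name: str
    intended_use: str
    training_data: str              # provenance and selection criteria of the data
    evaluation: dict                # accuracy, bias, or privacy test results
    known_limitations: list = field(default_factory=list)

# Hypothetical example; every value below is a placeholder.
card = ModelCard(
    model_name="credit-risk-scorer-v2",
    intended_use="Pre-screening of loan applications; not for final decisions.",
    training_data="Internal loan book 2015-2021; withdrawn applications excluded.",
    evaluation={"accuracy": 0.87, "disparate_impact": 0.93},
    known_limitations=["Not validated for applicants under 21 years of age."],
)
print(card)
```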

#### **6.6 Ensure traceability of our models**

Traceability and quality management are important aspects of business performance in industry. Requirements for businesses to document in writing the operational details and design characteristics of the models they use were already in place before the advent of AI. Some regulators already rely on documentation of the logic behind an algorithm, to the extent feasible, to ensure that the outcomes produced by a model are explainable, traceable, and repeatable [27, 28].

Traceability in AI is considered a key requirement for trustworthy AI outputs, related to the need to maintain a complete account of the provenance of the data, processes, and artifacts involved in producing an AI model. A comprehensive approach to traceability requires, on the one hand, repeatable execution of the computational steps and, on the other, capturing as metadata aspects that may not be explicit or evident in the digital artifacts. To ensure traceability, a documentation mechanism must be incorporated to the best possible standard. A review of existing methods and tools for this documentation can be found in Mora-Cantallops et al. [30].
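As one possible sketch, assuming a single-file dataset and a JSON record, the code below captures a minimal provenance trail for a training run: a hash that makes the input data verifiable, plus the parameters and environment details that make the run repeatable in principle. The field names and file paths are hypothetical.

```python
import hashlib
import json
import platform
from datetime import datetime, timezone
from pathlib import Path

def record_provenance(data_path: str, params: dict,
                      out_path: str = "provenance.json") -> dict:
    """Write a minimal provenance record for one training run."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # Hash of the exact dataset file used, so the input is verifiable later.
        "data_sha256": hashlib.sha256(Path(data_path).read_bytes()).hexdigest(),
        "hyperparameters": params,
        "python_version": platform.python_version(),
    }
    Path(out_path).write_text(json.dumps(record, indent=2))
    return record

# Hypothetical usage: the file name and parameters are placeholders.
record_provenance("training_data.csv", {"model": "xgboost", "max_depth": 6})
```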


#### **6.7 Implement practices of algorithmic auditing by internal or external parties**

The area of 'algorithmic auditing' is emerging and becoming an important aspect of the adoption of AI products in companies from all sectors, as it institutionalizes accountability and robust due diligence in technology. Companies may incorporate formal ethics reviews and model validation exercises, in addition to internal and external algorithmic auditing, to ensure that the adoption of AI is transparent and has gone through screening and formal validation processes. The broader outcome of an auditing process is to improve confidence in, or ensure trust of, the underlying system, and then to capture that assurance in some certification process. After analyzing the system and implementing mitigation strategies, the auditing process assesses whether the system conforms to regulatory, governance, and ethical standards. Providing assurance needs to be understood along different dimensions, and steps need to be taken so that the algorithm can be shown to be trustworthy [29, 31].
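Parts of such an audit can be automated as conformance checks in a deployment pipeline. The sketch below is a hypothetical 'audit gate' that fails when reported metrics breach agreed thresholds; the thresholds and metric names are illustrative assumptions, and real audits combine such checks with human review [29, 31].

```python
# Illustrative thresholds only; the 0.8 value echoes the common 'four-fifths'
# rule of thumb for disparate impact, not a universal regulatory standard.
AUDIT_THRESHOLDS = {"disparate_impact_min": 0.8, "accuracy_min": 0.85}

def audit_gate(metrics: dict) -> list:
    """Return the list of violations; an empty list means the gate passes."""
    violations = []
    if metrics.get("disparate_impact", 0.0) < AUDIT_THRESHOLDS["disparate_impact_min"]:
        violations.append("disparate impact below the four-fifths threshold")
    if metrics.get("accuracy", 0.0) < AUDIT_THRESHOLDS["accuracy_min"]:
        violations.append("accuracy below the agreed minimum")
    return violations

print(audit_gate({"disparate_impact": 0.75, "accuracy": 0.90}))
# -> ['disparate impact below the four-fifths threshold']
```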
