**6.8 Introduce mechanisms to ensure that humans verify the final decision of the model**

AI applications are designed and used by humans, and humans decide the degree of autonomy assigned to an AI application, whether human-controlled, semi-autonomous, or fully autonomous. Human overseers are expected to increase the accuracy and safety of AI systems, uphold human values in automated decision-making, and build trust in the technology. Delegating autonomy therefore comes with great responsibility, and organizations must remember that technologies such as AI are not a complete substitute for humans [32].

Particular emphasis should be placed on human oversight in decision-making for high-stakes use cases (e.g., lending decisions) that significantly affect the population [27]. The final decisions, namely which model to deploy, which models need review, which should be discontinued, and, importantly, what action to take on the algorithm's output (e.g., approving a mortgage or confirming a medical diagnosis), should always be made by a human being. AI models should therefore be used as decision-support tools rather than being left to act on their own. This ensures that responsibility resides with the respective human decision-maker, and it also provides an important control for drift in self-learning models [21]. An overview of the 8 items presented in this section is shown in **Table 1**.

#### **Table 1.**

*Good practice guide for addressing bias in AI-based solutions. Source: Prepared by CTIC.*
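The decision-support pattern described above can be sketched in code as a minimal human-in-the-loop gate. This is an illustrative sketch, not an implementation from the source: all names (`Recommendation`, `decide`, `cautious_reviewer`) and the confidence threshold are assumptions introduced here for clarity.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Recommendation:
    """Model output treated as advice, never as a final decision."""
    applicant_id: str
    suggested_action: str  # e.g. "approve" or "deny"
    confidence: float

def decide(rec: Recommendation,
           human_review: Callable[[Recommendation], str]) -> str:
    """Route every model recommendation through a human reviewer.

    The model never acts on its own: the value returned by the human
    reviewer is the final, auditable decision.
    """
    final = human_review(rec)
    # Log overrides: sustained divergence between model and reviewer
    # is a useful signal of drift in self-learning models.
    if final != rec.suggested_action:
        print(f"override: model={rec.suggested_action} "
              f"human={final} ({rec.applicant_id})")
    return final

# Hypothetical reviewer policy: override low-confidence approvals.
def cautious_reviewer(rec: Recommendation) -> str:
    return rec.suggested_action if rec.confidence >= 0.9 else "deny"

decision = decide(Recommendation("app-001", "approve", 0.72),
                  cautious_reviewer)
# The human reviewer, not the model, made the final call ("deny").
```

The design point is that the model's output is only ever an input to `decide`; the action executed downstream is whatever the human returns, which keeps accountability with the reviewer and yields an override log that can flag model drift.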
