**2. The gap between the technical and ethical adoption of AI in business**

Despite these advances, most companies are still in the initial stages of digitalization and AI adoption. This position gives them the potential to create enormous value for consumers, businesses, and society, but it also implies profound challenges and risks [4].

Important AI-related risks include cybersecurity, regulatory compliance, personal privacy, explainability of AI models, organizational reputation, and equity and fairness [2, 5]. An area of consistent concern is the extent to which organizations actively engage in risk mitigation to enhance digital trust. While AI adoption and investment have increased, reported mitigation of AI-related risks has not substantially increased since 2019 (**Figure 1**), the first year of McKinsey's survey [2]. This situation is worrying because research investment in AI technologies is also substantial. For instance, the EU invested €10 billion in AI through its framework programs between 2014 and 2020, representing 13.4% of all available funding; however, only 30.3% of AI-related funding calls mention trustworthiness, privacy, or ethics [6].

This stagnation in corporate risk mitigation strategies contrasts with the increasing number and severity of ethical misuses of AI tools, and is compounded by the growing trend of layoffs of AI ethics and security researchers at the tech giants [7].

According to the most recent report of Stanford University's AI Index [3], the number of AI-related controversies has increased up to 26-fold in 10 years. Similarly, academic developments and scientific publications on the ethical aspects of AI tools, and on methods to mitigate these risks, have grown exponentially since 2012. The most researched topics have been strategies for better management of privacy concerns, explainability, equity, and regulatory processes [3, 8]. This evidences a gap between the scientific development of AI ethics and the practical adoption of these developments by companies.

#### **Figure 1.**

*Survey of risk mitigation strategies in AI technologies in United States companies. Source: McKinsey report [2].*

These issues are exacerbated by the lack of up-to-date regulation, despite recent progress at the EU level. Several initiatives are being developed to enhance the adoption of the ethical aspects of AI. In the European Union, the European Commission's High-Level Expert Group on AI (HLEG) defined the Ethics Guidelines for Trustworthy Artificial Intelligence in 2019 [9], putting forward seven key requirements that AI systems should meet to be deemed trustworthy.

Additionally, the rapid expansion of AI is already outpacing the development and deployment of legal and regulatory frameworks. In this sense, in May 2021 the EU Commission became the first governmental body worldwide to present the so-called "first legal framework on AI," aimed at regulating the use of AI (AI Act, European Commission) [10]. Since data is the essential foundation of AI, other regulations have progressively joined in, such as the Data Act, the Data Governance Act, and the Digital Services Act. On top of this, there are recent national proposals to regulate AI (e.g., the UK's pro-innovation approach to AI regulation) [11].

The challenge now is for industry to harness this technology to face current challenges and create sustainable and efficient solutions. As companies adopt and deploy AI tools and technologies more routinely, the complicated ethical challenges mentioned above are expected to continue to rise and to negatively impact companies and consumers.

The research community suggests that technology companies, admissions officers, hiring managers, banking executives, and other decision-makers adopt a human-centered approach to AI products rather than a purely technological one. In fact, several firms that famously adopted purely technological processes have found it necessary to reintroduce humans to provide control over AI products [5]. This implies that (1) more agile strategies for translating scientific development into operational lines of business are needed to offer more ethical and sustainable AI-based products, and (2) as AI becomes more prevalent in productive processes and across labor market demands, countries will need to make extra efforts to provide effective training opportunities so that individuals can benefit from the advantages this innovative technology can offer [12].

This book chapter thus presents the essential knowledge businesses need in order to behave more ethically in AI development and deployment. We begin by introducing the concept and principles of human-centered AI (HCAI). We then examine the possible challenges that AI poses for economic growth when HCAI principles are not sufficiently considered in business strategy, illustrated by two case studies in the financial and healthcare sectors. Third, we discuss how to increase user trust in and usage of AI systems by proposing a good-practice guide built around the principles of HCAI to address bias.
