**4. Possible consequences that resulted from the failure to include HCAI principles: the case of smart speakers**

By the end of 2022, numerous articles had been published about the challenges that Amazon has faced with its Alexa voice assistant in the business market. According to several articles published in specialized media such as Business Insider, Ars Technica, and The Guardian [13–15], Amazon has reportedly lost around \$10 billion in its efforts to gain a foothold in the enterprise market, which has been largely dominated by competitors such as Microsoft and Google. Amazon has faced challenges in convincing businesses and prospective home users to adopt Alexa for workplace and online shopping tasks and has struggled with reliability and security issues.

*Human Factor on Artificial Intelligence: The Way to Ethical and Responsible Economic Growth DOI: http://dx.doi.org/10.5772/intechopen.111915*

The first controversy faced by Amazon and Google and their voice assistants concerns documented violations of privacy policies. A study carried out by researchers from Clemson University [16] showed that Amazon and Google voice assistants had serious privacy issues, such as broken or incorrect privacy policy URLs, duplicate privacy policy links, missing privacy policies in skills that required them, and inconsistencies and errors in the content of the policies. The results of this study reached the community and sparked discussions about the security of smart speakers, spreading a lack of confidence in the product among end users and damaging the companies' reputations.
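The kinds of defects the Clemson study reports lend themselves to automated checking. The following is a minimal sketch of such an audit; the `skills` record layout and its field names are assumptions for illustration, not any real skill-store schema:

```python
# Hypothetical sketch: automated checks for the privacy-policy defects the
# Clemson study describes (broken links, duplicates, missing policies).
from collections import Counter

def audit_privacy_policies(skills):
    """Flag skills whose privacy-policy metadata shows common defects.

    `skills` is assumed to be a list of dicts with keys:
      - "name": skill name
      - "collects_data": whether the skill declares data collection
      - "policy_url": the linked privacy policy, or None
    """
    issues = []
    # Count how often each URL is reused, to spot duplicate policy links.
    url_counts = Counter(s["policy_url"] for s in skills if s["policy_url"])
    for s in skills:
        url = s["policy_url"]
        if s["collects_data"] and not url:
            issues.append((s["name"], "missing policy despite data collection"))
        elif url and not url.startswith(("http://", "https://")):
            issues.append((s["name"], "malformed policy URL"))
        elif url and url_counts[url] > 1:
            issues.append((s["name"], "policy URL duplicated across skills"))
    return issues
```

A real audit would also fetch each URL to detect broken links and compare policy text against the skill's declared capabilities, which is where most of the inconsistencies reported in the study were found.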

However, one of the greatest challenges Amazon has faced in relation to its voice assistant concerns the expected profitability of the product. The primary objective of this US-based multinational corporation was not merely to generate revenue from the sale of the devices but also to capitalize on their usage by customers, for example through shopping on Amazon. The aim was to establish these devices as a novel interface for consumers, comparable to the adoption of smartphones for online purchases. However, the actual usage of Echo speakers and the Alexa assistant has not conformed to this profile in most cases. Although smart speakers have exceeded projections made a few years ago and achieved widespread adoption in United States households, most end users primarily employ these devices for routine activities such as information retrieval and music playback [13–15]. Although these services are valuable, they hinder the company's original objective of monetization.

In sum, Amazon developed an AI product aimed at creating a new, potentially profitable human need (i.e., online shopping using smart voice assistants), pursued a specific scaling strategy (for instance, selling Echo speakers at nearly their cost of production), and achieved great success in selling the devices; yet end users are not using Alexa for the purpose the company originally intended. The overall financial losses incurred by the company in its efforts to break into the business market have been significant.

We do not intend to oversimplify a complex case such as Amazon's with its voice assistant, as the market scenario involves a multifaceted interplay of factors beyond the scope of this chapter. However, from a perspective rooted in HCAI, we posit that a more conscientious integration of HCAI principles in the design of voice assistants could lead to greater market success for companies interested in producing smart speakers.

One possibility for dealing with privacy concerns under the principles of HCAI, as proposed by Liao et al. [16], would be for companies to inform users about the data collection capabilities of a voice app. The authors propose a built-in intent that scans for data collection capabilities and notifies users accordingly. The intent could be invoked when the app is enabled, providing a brief privacy notice and advising users to consult the detailed policy provided by the developers. The authors also propose extending this approach to automatically generate privacy policies for voice apps in the future.
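The proposal above can be sketched as a small function invoked at enable time. This is a hypothetical illustration of the idea, not an actual Alexa or Google Assistant API; the function name, capability labels, and notice wording are all assumptions:

```python
# Hypothetical sketch of Liao et al.'s built-in privacy-notice intent:
# when a voice app is enabled, summarize its declared data-collection
# capabilities aloud and point the user to the developer's full policy.

def privacy_notice_intent(app_name, capabilities, policy_url=None):
    """Build the spoken privacy notice for a newly enabled voice app."""
    if not capabilities:
        return f"{app_name} does not declare any data collection."
    # List capabilities in a stable order so the notice is reproducible.
    notice = (f"{app_name} can collect the following data: "
              + ", ".join(sorted(capabilities)) + ".")
    if policy_url:
        notice += f" For details, see its privacy policy at {policy_url}."
    return notice

# Example: spoken once, immediately after the user enables the app.
print(privacy_notice_intent("WeatherPal", {"location", "device address"},
                            "https://example.com/privacy"))
```

In this sketch the capability list would come from the same automated scan of the voice app that the authors describe, which is also what would feed their proposed automatic generation of privacy policies.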

Regarding the profitability of smart speakers, one plausible scenario is that companies could formulate a more effective strategy by involving relevant stakeholders, including prospective end users in domestic settings, and incorporating their feedback. By conducting thorough and extensive research on the requirements, feasibility, and acceptability of smart speakers among the target audience, companies could deliver a product properly aligned with user expectations and make a more beneficial investment.

This case exemplifies that safeguarding economic growth in AI-based products inherently requires including the principles of HCAI in the design of the devices. AI devices should be designed and used with ethics, transparency, and trust built into their pipeline to ensure that the intended end users adopt them. This idea is also supported by several studies showing that the adoption of AI services is positively associated with customers' ability to understand the product, its perceived usefulness, knowledge or awareness of AI technology, a positive attitude, and trust in AI [17, 18].

The case of smart speakers illustrates the potential negative consequences that can arise from a failure to apply the principles of HCAI in companies' AI products. Such consequences may include loss of profitability, reputation damage, and negative social impact. It is important to note that these risks are not limited to large companies or specific sectors but are increasingly relevant in other domains where the adoption of AI is rapidly expanding, such as finance or healthcare technology. Therefore, it is essential to prioritize the principles of HCAI to avoid such negative consequences.

With regard to general ethical principles and the application of HCAI in the fields of finance and health, some useful guidelines exist. For finance, the "Code of Conduct for the Ethical Use of AI in Canadian Financial Services" is a valued soft-law source in Canada. The document is a set of principles developed in consultation with various Canadian financial service organizations. Its objective is to promote the ethical use of AI in financial institutions by offering practical guidance for addressing the ethical implications of the daily use of AI. This code represents a milestone toward practical, industry-specific ethical principles. For healthcare companies, an interesting starting point could be the World Health Organization [19] guide "*Ethics and governance of artificial intelligence for health.*" The report identifies the ethical challenges and risks associated with the use of artificial intelligence in healthcare and presents six consensus principles that should be followed to ensure that AI works for the public benefit worldwide.

Next, with the aim of providing a more operational and concise insight into HCAI, we focus on the principle of ensuring equity and fairness in AI systems because of the potential impact that both issues can have on social justice. To this end, the next section addresses the challenge of detecting biases in AI-based financial and healthcare services and provides a set of best practices aimed at promoting and achieving equity and fairness in AI.
