**2. Defining ethics**

What is ethics?

According to the Oxford dictionary, ethics means "the moral principles that govern a person's behaviour or the conducting of an activity".

Ethics is based on well-founded standards of right and wrong that prescribe what humans ought to do, usually in terms of rights, obligations, benefits to society, fairness, or specific virtues [10].

An ethical virtual assistant should be designed in line with the ethical standards of the society it affects. These standards extend to the creators of virtual assistants, who should design, build and maintain them so that their interactions with consumers foster honesty and loyalty, refrain from harm and fraud, and respect the right to privacy.

In the subsequent section, we discuss in detail the ethical principles of virtual assistants.

#### **3. Ethical principles for virtual assistants**

The ability of AI-based virtual assistants to act intelligently has long been evaluated by the Turing Test [11] and the Loebner Prize [12]. The focus of these evaluations is the system's intelligence in responding to human questions. Looking through the lens of ethical principles, other questions arise beyond "What can the virtual assistant answer?" For example, "Does the answer promote the consumer's interests or the business's interests, such as recommending the most profitable product rather than the one best suited to the consumer?"

In a recent paper on the global landscape of AI ethics guidelines [13], Jobin, Ienca and Vayena identified five emerging ethical principles that are deemed important globally: transparency, justice and fairness, non-maleficence, responsibility and privacy. In this section we interpret these principles with a view to virtual assistants and the considerations that designers, developers and consumers should understand when developing and interacting with them.

#### **3.1 Transparency**

AI transparency refers to the explainability [14], interpretability and disclosure [15] of algorithmic models, including their training data, accuracy, performance, bias and other metrics.

When dealing with virtual assistants, transparency [16] often refers to informing consumers who they are chatting with, i.e. a virtual assistant rather than an actual human, sharing details of what information the consumer can search, and explaining how their data will be used, stored and analyzed to improve the experience.

Brands build trust with consumers by being transparent and honest in their communication. Virtual assistants are an extension of a brand's consumer experience. If a virtual assistant impersonates a human, it can lead to a poor experience and a loss of trust in the brand. This can be especially harmful when interacting with consumers in sensitive areas such as healthcare or banking.

Designers and developers of virtual assistants should be transparent in disclosing to consumers what they can search and how their data will be shared and analyzed. When consumers know what they can search, they can ask questions on the topics the virtual assistant has been trained on and receive useful answers, creating a delightful experience. Further, consumers should have the choice to opt in to having their interaction data used for other purposes, such as developing AI models or serving advertisements. This will help gain consumer confidence in virtual assistants and increase adoption. Lastly, consumers should also have the option to connect to a real person, request a callback or send an email if they are uncomfortable interacting with a virtual assistant, as sketched below.
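
As a minimal sketch of these disclosure practices, the example below shows a hypothetical opening exchange in which the assistant identifies itself as a bot, lists the topics it is trained on, offers a human handoff, and records an explicit opt-in before interaction data is reused. All names (`ConsentRecord`, `SUPPORTED_TOPICS`, etc.) are illustrative assumptions, not part of any particular framework.

```python
from dataclasses import dataclass

# Hypothetical consent record; both flags default to off (opt-in, not opt-out).
@dataclass
class ConsentRecord:
    data_for_model_training: bool = False
    data_for_advertising: bool = False

# Assumed scope of the assistant, disclosed up front so consumers know what to ask.
SUPPORTED_TOPICS = ["order status", "returns", "store hours"]

def opening_message() -> str:
    """Disclose that the consumer is talking to a bot, what it can do, and how to reach a human."""
    return (
        "Hi, I'm a virtual assistant (not a human). "
        f"I can help with: {', '.join(SUPPORTED_TOPICS)}. "
        "Reply 'agent' at any time to reach a real person, "
        "request a callback, or send us an email."
    )

def record_opt_in(answer: str) -> ConsentRecord:
    """Interpret a yes/no answer to 'May we use this chat to improve our AI?'"""
    agreed = answer.strip().lower() in {"yes", "y", "i agree"}
    return ConsentRecord(data_for_model_training=agreed)

if __name__ == "__main__":
    print(opening_message())
    print(record_opt_in("yes"))
```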

#### **3.2 Justice, fairness and equity**

Justice means that AI algorithms are fair and do not discriminate against particular groups, intentionally or unintentionally [17]. There have been numerous publications on fairness and on how to identify and mitigate bias in algorithms [18–20]. In the case of virtual assistants, justice, fairness and equity refer primarily to prioritizing the consumer's interests and providing impartial recommendations [21].

Recommendation models generally use collaborative filtering, i.e. filtering for a consumer's preferences based on information gathered from many similar consumers. The models constantly learn from consumer feedback, i.e. likes and dislikes, and adjust accordingly.
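
As a rough sketch of how such a recommender works, the example below implements user-based collaborative filtering over a toy like/dislike matrix: it scores an unseen product for a consumer by weighting other consumers' feedback by their similarity to that consumer. The data and function names are invented for illustration only.

```python
import numpy as np

# Toy feedback matrix: rows = consumers, columns = products.
# 1 = liked, -1 = disliked, 0 = no feedback yet. Values are invented.
feedback = np.array([
    [ 1,  1,  0, -1],   # consumer 0
    [ 1,  0,  1, -1],   # consumer 1
    [-1,  1,  1,  0],   # consumer 2
])

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def predicted_score(consumer: int, product: int) -> float:
    """Weight other consumers' feedback on `product` by their similarity to `consumer`."""
    scores, weights = 0.0, 0.0
    for other in range(feedback.shape[0]):
        if other == consumer or feedback[other, product] == 0:
            continue
        sim = cosine_similarity(feedback[consumer], feedback[other])
        scores += sim * feedback[other, product]
        weights += abs(sim)
    return scores / weights if weights else 0.0

if __name__ == "__main__":
    # Predict how consumer 0 would react to product 2, which they have not yet rated.
    print(round(predicted_score(0, 2), 3))
```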

However, these models can be biased by the consumer training data or by overarching business rules such as "recommend the most profitable product". For example, will the virtual assistant recommend the meat that is most expensive and near its expiry date, or the meat that is cheaper and fresher?

A virtual assistant being seen to favor certain recommendations raises questions of fairness, especially for consumers. When virtual assistants are used within an organization, recommendations may sometimes be rule driven, in line with employee policy.

Designers and developers should regularly test virtual assistants against fairness metrics, publish the results to consumers and give consumers the option to provide feedback on recommendations (see the sketch below). The more a virtual assistant adapts to consumers' interests and provides fair recommendations, the more popular it will become with consumers.
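
One simple example of such a fairness check is sketched below: it compares how often each consumer group receives a given recommendation and flags the assistant when the rates diverge beyond a tolerance, in the style of a demographic parity test. The log, group labels and threshold are assumptions for illustration, not a prescribed metric.

```python
from collections import defaultdict

# Invented log of (consumer_group, was_premium_product_recommended) pairs.
recommendation_log = [
    ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

def recommendation_rates(log):
    """Fraction of interactions per group that received the premium recommendation."""
    counts, hits = defaultdict(int), defaultdict(int)
    for group, recommended in log:
        counts[group] += 1
        hits[group] += int(recommended)
    return {g: hits[g] / counts[g] for g in counts}

def parity_gap(rates) -> float:
    """Largest difference in recommendation rate between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

if __name__ == "__main__":
    rates = recommendation_rates(recommendation_log)
    gap = parity_gap(rates)
    print(rates, "gap:", round(gap, 2))
    TOLERANCE = 0.2  # assumed acceptable disparity; tune per context
    if gap > TOLERANCE:
        print("Fairness check failed: review the recommendation policy.")
```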

#### **3.3 Non-maleficence**

This term is used to describe consumer safety and security and the commitment that an AI model will not cause harm, for example through spamming, hacking, discrimination, violation of privacy or abuse.

In the case of virtual assistants, we focus on abuse and sexual harassment for this principle. Abuse refers both to receiving abuse from consumers and to giving abuse back to consumers.

Virtual assistants often sit at the beginning of a consumer's journey, and if the responses are not helpful this leads to frustration and abuse from consumers. Although virtual assistants are AI models and do not have feelings like humans, as consumers we should refrain from abusing them, since it affects the way we behave in society and can carry over into similar behavior towards our fellow humans.

Designers and developers need to design the conversation experience with the consideration that virtual assistants will receive abuse. They should design the conversation flow empathetically so that consumers are given a positive response and transferred to a more helpful channel, such as voice or email, on request [22].

Another consideration is gender stereotyping, i.e. the gender of the virtual assistant. In many cases, virtual assistants have a default female voice or persona. Designers and developers can give consumers options to select the virtual assistant's persona and adjust the language, voice and tone of responses to suit the chosen persona.

A related study on sexual harassment of virtual assistants, "#MeToo: How Conversational Systems Respond to Sexual Harassment" [23], points out different behaviors in commercial, supervised learning based and unsupervised learning based virtual assistants. Unsupervised learning based assistants have more freedom to learn from user conversation and respond in kind. In these cases, language correction models should also be deployed to protect users from chatbot abuse. For example, Microsoft's Tay chatbot was corrupted in less than 24 hours by self-learning from user conversations [24].
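
A very simplified illustration of such a guardrail is sketched below: before a self-learning assistant sends a candidate reply, the reply is checked for abusive language and, if it fails, replaced with a neutral fallback. The block list and function names are placeholders; a production system would use a trained toxicity classifier rather than fixed keywords.

```python
import re

# Placeholder block list; real systems would rely on a learned toxicity model.
BLOCKED_TERMS = {"idiot", "stupid", "shut up"}

FALLBACK_REPLY = "I'm sorry, I can't help with that. Would you like to talk to a person?"

def is_abusive(text: str) -> bool:
    """Crude check: does the text contain any blocked term as a whole word or phrase?"""
    lowered = text.lower()
    return any(re.search(rf"\b{re.escape(term)}\b", lowered) for term in BLOCKED_TERMS)

def moderate_reply(candidate_reply: str) -> str:
    """Return the candidate reply only if it passes the abuse check."""
    return FALLBACK_REPLY if is_abusive(candidate_reply) else candidate_reply

if __name__ == "__main__":
    print(moderate_reply("Happy to help with your order."))
    print(moderate_reply("Don't be an idiot."))
```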

#### **3.4 Responsibility and accountability**

Responsibility and accountability refer to the AI acting with integrity and to clarifying the attribution of responsibility and data ownership. In the case of virtual assistants, this means being transparent and fair and disclosing information on responsibility, legal liability and data ownership to consumers.

There has been much debate about who is ultimately responsible: the AI-based virtual assistant or the humans who built it. Generally, the terms of service agreement that consumers must accept before using a virtual assistant defines the limitations on responsibilities and liabilities in line with regulations.

Data ownership requires special mention here. Questions typically arise about who owns the data captured and generated during a conversation with a virtual assistant. For example, new data is generated when a virtual assistant interacts with consumers using voice. Over time it builds up data related to the consumer's preferences (such as taste in music), personality [25] (words and tone of language), family (the number of different voices in the household or the type of requests made, such as nursery rhymes) and more. Sometimes an organization may have built its business model on leveraging this derived data for profit. For example, a virtual assistant infers the age of your children and serves you advertisements for children's toothbrushes.

Designers and developers should be transparent about data ownership and provide an opt-in feature so that consumers can choose whether to share this newly generated data or keep it private. If the business model of the virtual assistant is based on offering free services and leveraging consumer data for advertisements, that should also be transparent to the consumer.

#### **3.5 Privacy**

Privacy means that personal information is kept confidential and shared only with consent. Many countries have passed laws and regulations, such as the General Data Protection Regulation [26], to protect the privacy of their citizens. In relation to virtual assistants, privacy is most often discussed in terms of data protection and security. Deeper questions on privacy for virtual assistants arise from

