**2.3 Trustworthy artificial intelligence**

Several organizations worldwide have devoted efforts in recent years to reflecting on the ethical impact of artificial intelligence (AI) systems. The main goal of initiatives on AI and ethics is to raise awareness of the ethical considerations related to these systems, deepen our understanding of them and minimize the potential risks of AI while maximizing its benefits.

For example, the High-Level Expert Group (HLEG) of the European Commission developed the Ethics Guidelines for Trustworthy AI [50] with respect for fundamental rights in mind across the various contexts where AI systems are used. These guidelines put forward a set of seven key requirements that AI systems should meet in order to be deemed trustworthy:

1. Human agency and oversight
2. Technical robustness and safety
3. Privacy and data governance
4. Transparency
5. Diversity, non-discrimination, and fairness
6. Societal and environmental well-being
7. Accountability
These ethical guidelines are complemented by the Assessment List for Trustworthy AI (ALTAI) [51], a practical tool designed to help organizations self-assess the trustworthiness of their AI systems. ALTAI consists of 69 self-evaluation questions grouped into the aforementioned seven requirements. Although the guidelines are designed for the general population, they refer to children as a relevant vulnerable group and state the need to pay particular attention to them.

With a focus on children, UNICEF developed a policy guidance [52] that aims to raise awareness of children's rights in the context of AI systems. The guidance is based on nine requirements, including: (1) supporting children's development and well-being, (2) promoting inclusiveness for children, (3) prioritizing fairness and avoiding discrimination for children, (4) protecting children's data and privacy, (5) ensuring safety for children, (6) providing transparency, explainability and accountability for children, (7) empowering knowledge of AI and children's rights, (8) preparing children for present and future AI developments and (9) creating an enabling environment.

Previous research [13] has shown that most of the requirements in UNICEF's policy guidance for AI and children align with the HLEG ALTAI (**Table 1**). The exception is requirement 8, which focuses on educational policies, a topic of special relevance for children that the HLEG addresses only broadly in the context of jobs and skills. Despite this alignment, the two guidelines differ in their focus: while UNICEF's guidance includes policy considerations, the HLEG places an emphasis on the development and evaluation of AI systems.

#### **Table 1.**

*Mapping between HLEG ALTAI and UNICEF AI for children requirements.*

*This table shows the correspondence between the seven HLEG ALTAI requirements: (1) Human agency and oversight, (2) Technical robustness and safety, (3) Privacy and data governance, (4) Transparency, (5) Diversity, non-discrimination, and fairness, (6) Societal and environmental well-being, and (7) Accountability, and the corresponding requirements in UNICEF's AI for children guidelines (rows). Cells are marked with x, xx, or xxx to indicate low, mid-level, or high correspondence between the related requirements.*

A recent report by the Joint Research Centre of the European Commission [28] recognized the need to connect existing research on AI and children's rights with current policy initiatives and needs, proposing an integrated agenda for research and policy. To this end, the authors conducted a series of workshops with policymakers, researchers and children to gain insight into the interplay between stakeholders, to connect scientific evidence with policymaking and to shift the focus from the identification of ethical guidelines towards the definition of methods for practical future AI implementations.

The report highlights the need for strategic and systemic choices in developing AI-based services, limiting the use of AI to tasks that serve a valuable purpose. It emphasizes the importance of minimizing the environmental impact of AI technology, particularly by reducing CO2 emissions from data centers. Developers must ensure that AI technology is child-friendly and free of discriminatory biases, while children must retain control over their personal data. Transparency, explainability and accountability are critical to empowering young users of AI technology. The report also stresses the need for further research into how children develop agency when interacting with AI-based systems.

### **3. A scenario from the field of child-robot interaction**

In this section, we present current scientific evidence regarding the impact of conversational agents on children. The objective is to emphasize the importance of ethical guidelines, as outlined in Section 2.3, and to highlight the need for their adaptation in the context of CAs and children.

### **3.1 Motivation and rationale**

As discussed in previous sections, conversational agents are present in various contexts and embodiments. For instance, a written interaction with an open-domain chatbot on a computer differs significantly from a voice interaction with a driving assistant in a car. We acknowledge that the embodiment and context of the conversational agent impact the user's perception and behavior, beyond the dialog. However, given the diverse range of conversational agents available and the findings of our literature review in Section 2.2.1, which showed the popularity of child-robot interaction studies, we have chosen to present a use case of a social robot in a controlled educational context.

To illustrate the potential impact of CAs on children, this use case reflects current research on social robots and builds upon previous research with an educational focus [53–55]. This aligns with the procedures suggested by UNICEF and provides a relevant scientific context for the application of CAs.

Specifically, we discuss a large-scale experimental study in hybrid small-group settings, in which we investigated the effects of certain robot behaviors on children's problem-solving processes, social dynamics and perceptions of the robot. This study serves as a starting point for identifying emerging issues that could apply to other conversational agents. In the following subsections, we provide a brief overview of its methodology and results; for a more detailed description, we direct the reader to refs. [56, 57].
