**7. Conclusions**

The full impact that AI technology may have on business and society is yet to be determined, but several technical, strategic, and stakeholder-cooperation questions need to be addressed. In this sense, hot topics for the upcoming years include developing new concepts for testing and validation, defining dataset requirements and ensuring dataset quality for AI, and embedding ethics guidelines to guarantee trustworthy AI.

In summary, to achieve equity and fairness in AI, developers must ensure that their algorithms are trained on unbiased data and that they are transparent and explainable. They must also continuously monitor and audit their AI systems to identify and address any potential sources of bias or discrimination. Additionally, it is important to involve diverse stakeholders in the development and deployment of AI systems so that a variety of perspectives and needs are considered.
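As a minimal sketch of the kind of audit check mentioned above, the following function computes the demographic parity gap, the difference in positive-prediction rates across groups defined by a protected attribute. The function name, inputs, and the idea of flagging a gap against a threshold are illustrative assumptions, not a prescription from this guide; real audits typically combine several fairness metrics.

```python
def demographic_parity_gap(predictions, groups):
    """Return the max difference in positive-prediction rates across groups.

    predictions: iterable of 0/1 model outputs (illustrative binary setting).
    groups: iterable of protected-attribute values, aligned with predictions.
    A gap of 0.0 means every group receives positive predictions at the
    same rate; larger gaps indicate potential demographic disparity.
    """
    per_group = {}
    for pred, grp in zip(predictions, groups):
        per_group.setdefault(grp, []).append(pred)
    rates = [sum(v) / len(v) for v in per_group.values()]
    return max(rates) - min(rates)


# Hypothetical audit: predictions for four individuals in two groups.
gap = demographic_parity_gap([1, 1, 1, 0], ["a", "a", "b", "b"])
print(gap)  # group "a" rate 1.0, group "b" rate 0.5 -> gap 0.5
```

In a continuous monitoring setting, such a metric would be recomputed on fresh production data at regular intervals and alerted on when it exceeds an agreed threshold.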

This best-practices guide is proposed to highlight and mitigate biases and to ensure fairness in AI-based products, but many of its principles extend to, and positively impact, other dimensions of HCAI such as transparency, accountability, and explainability.

Improving people's understanding of how AI models operate, together with a clear HCAI strategy that gauges the potential negative biases of AI systems, will increase user trust in and adoption of AI devices, thus fostering full acceptance of AI in society and the economy.
