### **4. Pragmatic solutions**

All tech solutionism aside, there is a place for human interventions, organisational approaches and socio-technical tools to develop and govern AI. There is no one-size-fits-all approach, and no single tool can provide a silver bullet. What is required is a holistic approach.

Understanding the purpose and the outcomes to be achieved is a necessary first step. Many governments around the world are looking to algorithmic transparency to find ways of explaining automated decision making to their citizens. On the one hand, this shows government to be open and accountable; on the other hand, it can serve as a ruse to publicly legitimise their actions or inactions. Is not government responsible for the outcomes it creates in the public interest, whilst also under a duty of care to ensure the safety of the wider public? If the public does not legitimise certain AI or ADM uses by government, what does that say to government about how it does or does not exercise its duty of care? How can we expect the government to fulfil its duty to the masses without leaving the less represented and marginalised groups in society exactly that: marginalised?

Transparency in all its forms is a key step, but it must be accompanied by meaningful stakeholder engagement. Transparency is the gateway to many of the other ethical principles, but for transparency to do its work, it must be explainable and understood in context, in a way which is relevant to the recipients of the information – the message received is, after all, the message given.

Tools such as AI registers and risk analytics platforms are needed to accompany governance, but more needs to be done. For there to be a holistic and pragmatic approach, AI governance needs to take into account human intervention and organisational processes as well as technological tools, especially those that increase our understanding and provide meaning and interpretation of what exactly goes on in that opaque box. In this way, ethics can be turned into something operational. It also offers an opportunity to legitimise governmental use of AI and to reaffirm government's societal mandate to act in the public interest.

### **5. Current trends and way forward**

The European Commission has made a brave and bold move in seeking to regulate in the area of AI. In an effort to build an ecosystem of excellence and trust, it seeks to preserve European values and protect the fundamental rights of European citizens. Its human-centred approach to AI is to be applauded, especially as it seeks to provide a governance structure for AI, with scope for risk and impact assessment, adherence to standards and other voluntary codes of conduct, and conformity assessment (akin to product liability legislation) for those AI deployments which are deemed "high risk".

Whilst this piece of legislation seeks to have extra-territorial effect like GDPR [X], it is not the GDPR of AI. Furthermore, it is a risk-based, not a principles-based, piece of legislation like GDPR, but it does share something in common with GDPR: it is making the world's ears prick up. We may indeed see that all-important "Brussels Effect" for AI governance crossing jurisdictional, geographical, and cultural divides, decolonising AI and AI ethics.

Barriers to global roll-out and widespread adoption of a regulatory approach such as this will be economic (determined by views of regulation stifling innovation), political (in the AI race), and will concern ethical disparities (public good versus equity and justice for the individual).

From a broader ethical perspective, three key areas of concern in the development and deployment of ADM/ALS relate to Accountability, Transparency and freedom from unacceptable Algorithmic Bias. To this end, the IEEE Standards Association (IEEE-SA) has developed a suite of detailed criteria for the evaluation, assessment and certification of these properties of ADM/ALS products and services under the "Ethics Certification Programme for Autonomous and Intelligent Systems" (ECPAIS). This programme [11] is a key facet of the IEEE-SA's Global Initiative and Ethically Aligned Design portfolio.

The three classes of ethical dysfunctions that may emerge in the embedding of ADM/ALS in products, systems and services require a systematic and credible independent evaluation and assurance to allay the public and private sectors' concerns and foster acceptance and deployment. To this end, IEEE-SA's suite of pragmatic and holistic certification criteria are now ready for deployment and tailoring for specific sectors and applications.

#### *Introductory Chapter: AI's Very Unlevel Playing Field DOI: http://dx.doi.org/10.5772/intechopen.99857*

The high-level principles (Evaluation and Certification Factors) for each of the currently three ECPAIS suites are broadly defined as a hierarchy of more detailed factors and criteria (typically 10–20 for each of the depicted high-level factors) which are S.M.A.R.T., i.e. specific, measurable, achievable, realistic and timely at the pertinent system or component level.

*Transparency* relates to the criteria and values embedded in a system design and the openness and disclosure of choices and decisions made for development and operation. This applies to the entire ADM/ALS context of application for the product or service under consideration, including data sets, and is not restricted to technical and algorithmic aspects alone.

*Accountability* considerations concern the commitment by individuals and institutions involved in the design, development or deployment of ADM/ALS to remain responsible for the behaviour of the system as long as its integrity is respected. This is predicated on the recognition that the system/service autonomy and learning capacities are the result of algorithms and computational processes designed by humans and those humans should remain responsible for their outcomes. A key driver in accountability is explicit, sufficient and proper documentation and traceability for system design, development and deployment.

*Algorithmic Bias* relates to systematic errors and repeatable undesirable behaviours in an ADM/ALS that create unfair outcomes, such as granting privileges to one group of users over others where the system is expected to be neutral and unbiased. This can emerge from many factors: the design of the algorithm as influenced by pre-existing cultural or institutional practices; decisions about the way data is classified, collected, selected or used to train the algorithm; the unanticipated context of application; and even presentational aspects emerging from search engines and social media.

The ECPAIS suites of ethics certification criteria are currently being extended to include ethical Privacy and tailored suites for high social impact domains including a bespoke suite for ethical assurance of COVID-19 pandemic related Contact Tracing Technologies [12]. This trend will continue to ensure ECPAIS embodies a broader and more comprehensive range of concerns in technology ethics.
