**7. AI and legal issues**

A detailed discussion of the various legal issues relevant to AI and healthcare is beyond the scope of the current note, and the reader is directed elsewhere. A brief account of the legal issues in relation to healthcare is presented here for the awareness of the doctor as a user and stakeholder, especially where untoward reactions and damages occur in the course of one's actions using AI. As per common law, a person of unsound mind is not responsible for his actions; only a person of sound mind is. Common law also speaks of the subjective element of criminal intent. Can a computer, which lacks a human mind and intent, be held responsible for its actions? AI is considered a technological tool with the ability to simulate the human brain and to perform some of the duties that require human intelligence [29–33]. In case of wrong decisions, adverse reactions, and untoward outcomes resulting from the use of AI, what is the liability of the AI and of the healthcare team?

The machine does not have an identity of its own, but multiple persons are ultimately involved in AI in healthcare: the vendor, the owner of the company, the designer, the hardware or software developer, the person who evaluates and tests the tool, the person who supplies the data or the database itself, and the doctor who uses the AI platform on a patient. Who is responsible or accountable for the damages caused in using AI? The legal issues that arise, in addition to the adverse reaction or outcome, are foreseeable damage, human rights violations, violation of privacy, criminal intent, cybercrime, and the risk of a hacker gaining access to the data. While the machine is not responsible in itself, can the person behind the machine be held responsible? To what extent is the doctor accountable as a user? Are the people who built the system and those who use it responsible? This lack of accountability raises concerns about the possible safety consequences of using unverified or unvalidated AI in clinical settings.

It is the doctor's responsibility to be aware of the potential of AI, to use AI and the insights it provides responsibly, to understand the potential harms, and to take informed consent from the patient when using AI-interpreted results [34–38]. The instructive case of Google vs. the Information Commissioner (UK) shows the issues that need consideration. Issues in handling sensitive data, such as privacy and transparency, need attention, and a code of ethics has to be developed [39–41].

Explainability is an issue that needs consideration [42–44]. How do AI-driven algorithms arrive at a prediction or a conclusion in a given situation? Should the process of AI be a part of the informed consent? Is it necessary to explain the process? Ethically, the doctor is accountable for his or her actions, and the informed consent one obtains must be truly informed. When using AI-driven clinical decision support systems, the doctor has to be aware of the reasoning behind the decisions. Four principles are considered when speaking of the explainability of AI: the algorithms explained in a language the user understands, the evidence and reasoning behind the conclusions drawn, the reliability of the processes used, and proof for the outcomes or insights. The doctor–patient relationship involves mutual trust; the doctor has to explain his or her actions and decisions, and these must be transparent, patient-centred, and holistic. Are the processes and algorithms involved in reaching the conclusions of AI systems explainable to a doctor? A minimal illustration of what such an explanation might look like is sketched below.
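To make the question concrete, the following is a minimal sketch in Python, assuming the scikit-learn library is available; the clinical feature names and the data are hypothetical and serve only as illustration. For a simple linear model such as logistic regression, the contribution of each input to a single prediction can be read off directly from the coefficients, which is the kind of evidence and reasoning the four principles above call for.

```python
# A minimal sketch of model explainability, assuming scikit-learn.
# The feature names and data below are hypothetical, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["age", "systolic_bp", "hba1c", "bmi"]  # hypothetical inputs

# Synthetic "patient" data: 200 records, 4 features, binary outcome.
X = rng.normal(size=(200, 4))
y = (X @ np.array([0.8, 0.5, 1.2, 0.3]) + rng.normal(size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# For a linear model, each feature's contribution to the log-odds of a
# single prediction is simply coefficient * feature value: a directly
# inspectable form of "evidence and reasoning behind the conclusion".
patient = X[0]
contributions = model.coef_[0] * patient
for name, value in sorted(zip(feature_names, contributions),
                          key=lambda pair: -abs(pair[1])):
    print(f"{name:12s} contribution to log-odds: {value:+.3f}")
print(f"intercept    contribution to log-odds: {model.intercept_[0]:+.3f}")
print(f"predicted probability of the outcome: "
      f"{model.predict_proba(patient.reshape(1, -1))[0, 1]:.2f}")
```

The same transparency is far harder to obtain from complex models such as deep neural networks, which is precisely why the explainability of clinical decision support systems remains a live concern.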
