**8. Privacy and security of data**

AI in healthcare deals with a significant amount of sensitive personally identifiable information (PII), comprising patients' demographic and health data. Apart from doctors, nurses, pharmacists, diagnostic laboratory personnel, and other therapists, statutory bodies such as regulatory authorities and the patients themselves access the digital healthcare platform at various stages. The data generated are accessed at the point of care, during analysis and deep mining, and when searching for insights. The data must be transparent and portable, and are stored in the cloud. This scenario is a hacker's haven. Whose responsibility is its security? When, in a specific case, the AI fails or is misused, the ethical principles of privacy, autonomy, and justice may be violated. Data theft and misuse are common threats in any computer program, and it is the responsibility of the user to protect the privacy and security of the owner of the data. All stakeholders with access to the data must therefore handle it with care. The AI developer and user have to watch for the impact of misuse or discrimination. What processes should we implement to monitor that impact, and how do we overcome unintended clinical outcomes? What skills does a developer or user have to acquire to perform these tasks? A dialogue among all stakeholders is necessary on these issues to protect the rights of those involved against direct or indirect coercion. Should the doctor, as a user of AI systems and as a person involved in the management of the patient, the ultimate beneficiary, be involved in the various processes of AI [45–47]?
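The obligation to protect the owner of the data, discussed above, is often operationalised in practice by pseudonymising direct identifiers before records leave the point of care for analysis or data mining. The sketch below is one minimal illustration in Python; the field names, the keyed-hash approach, and the key handling are assumptions for illustration only, not a description of any particular system cited here.

```python
import hashlib
import hmac

# Hypothetical key: in a real deployment this would be held only by the
# data custodian and managed through a secure key-management service.
SECRET_KEY = b"replace-with-a-securely-managed-key"

def pseudonymise(record: dict, pii_fields: set) -> dict:
    """Return a copy of `record` with PII fields replaced by keyed hashes.

    A keyed hash (HMAC) lets the same patient be linked across datasets
    without exposing the identifier itself, provided the key stays secret.
    """
    out = {}
    for field, value in record.items():
        if field in pii_fields:
            digest = hmac.new(SECRET_KEY, str(value).encode(), hashlib.sha256)
            out[field] = digest.hexdigest()[:16]  # truncated opaque token
        else:
            out[field] = value  # clinical values pass through unchanged
    return out

# Illustrative record with invented field names
patient = {"name": "A. Example", "dob": "1970-01-01", "hba1c": 6.9}
safe = pseudonymise(patient, pii_fields={"name", "dob"})
```

Because the hash is keyed and deterministic, the same patient yields the same token across datasets, supporting the portability the text calls for, while the clinical fields needed for insight remain usable.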
