**6. How to improve cybersecurity for AI**

The development of AI and machine learning technologies will affect cybersecurity in several ways. Cyber attackers can target any networked system from anywhere in the world, at any time, and cybersecurity applications have seen substantial technological advancement over the last few years. There are many ways to improve cybersecurity with AI: machine learning can sharpen cyber threat detection, AI plays an important role in mitigating phishing attacks, and it enables automated network security and robust behavioral analytics. AI and machine learning make smarter cybersecurity possible, and these emerging technologies have broad potential applications in healthcare, finance, retail, and other sectors. A closely related question is how secure AI systems themselves are when they are used to augment the security of collected healthcare data and computer networks. The application of AI security solutions to respond to quickly evolving threats makes the need to secure AI itself even more pressing: if we rely on machine learning algorithms to detect and protect against cyberattacks, it is all the more important that those algorithms be protected from interference, compromise, or misuse. Increasing dependence on AI for critical functions and services will not only create greater incentives for attackers to target those algorithms, but also raise the potential for each successful attack to have more severe consequences.

## *Smart Health and Cybersecurity in the Era of Artificial Intelligence DOI: http://dx.doi.org/10.5772/intechopen.97196*
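To make the idea of machine-learning-based threat detection concrete, the sketch below flags hosts whose request rate deviates sharply from a robust statistical baseline (median absolute deviation). It is a minimal illustration, not a production detector; the host addresses, rates, and threshold are hypothetical.

```python
from statistics import median

def detect_anomalies(rates, threshold=3.5):
    """Flag hosts whose request rate deviates from the baseline.

    Uses a robust z-score based on the median absolute deviation (MAD),
    which is less distorted by outliers than mean/stdev on small samples.
    `rates` maps host -> requests per minute.
    """
    values = list(rates.values())
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:  # no spread in the baseline; nothing to compare against
        return {}
    return {host: v for host, v in rates.items()
            if abs(v - med) / mad > threshold}

# Illustrative traffic snapshot: one host is far outside the baseline,
# a pattern consistent with scanning or data exfiltration.
requests_per_minute = {
    "10.0.0.1": 42, "10.0.0.2": 38, "10.0.0.3": 45,
    "10.0.0.4": 40, "10.0.0.5": 4800,
}
print(detect_anomalies(requests_per_minute))  # -> {'10.0.0.5': 4800}
```

Real deployments replace this single statistic with learned models over many features, but the principle is the same: establish a baseline of normal behavior and flag significant deviations.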

Improving cybersecurity and safety for AI is one of the key challenges. The US Government has already indicated its interest in cybersecurity for certain classes of technology, including the IoT, cyber-physical systems (CPS), and voting systems. AI has recently become a popular and widely used technology in many sectors, including the healthcare industry, and policymakers find it increasingly necessary to consider the intersection of cybersecurity with AI. Several researchers are working to reduce the possibility of adversaries accessing confidential AI training data or models in healthcare systems during the Covid-19 era.

As mentioned above, one of the key security threats to AI systems is the possibility that adversaries will compromise the integrity of their decision-making processes. One way to achieve this is for adversaries to take direct control of an AI system, so that they decide the outputs the system generates and the decisions it makes. Alternatively, an attacker might try to influence those decisions by delivering malicious inputs or poisoned training data to an AI model.
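The training-data attack described above can be sketched with a toy example. The nearest-centroid classifier, the two features, and the injected points below are all hypothetical illustrations of data poisoning, not a real system or incident: by mislabeling a few malicious-looking samples as "benign", the attacker drags the benign class centroid toward attack traffic and flips the model's decision.

```python
def centroid(points):
    """Component-wise mean of a list of equal-length feature vectors."""
    dims = len(points[0])
    return tuple(sum(p[i] for p in points) / len(points) for i in range(dims))

def train(samples):
    """samples: {label: [feature vectors]} -> {label: class centroid}."""
    return {label: centroid(vecs) for label, vecs in samples.items()}

def classify(model, x):
    """Assign x to the label whose centroid is nearest (squared distance)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], x))

# Hypothetical features: (links_in_email, spelling_errors)
clean = {
    "benign": [(0, 0), (1, 1), (0, 1)],
    "malicious": [(9, 8), (8, 9), (10, 10)],
}
model = train(clean)
print(classify(model, (6, 6)))            # -> "malicious"

# The adversary injects mislabeled points into the "benign" class,
# pulling its centroid toward malicious-looking inputs.
poisoned = {
    "benign": clean["benign"] + [(9, 9), (10, 9), (9, 10)],
    "malicious": clean["malicious"],
}
model_poisoned = train(poisoned)
print(classify(model_poisoned, (6, 6)))   # -> "benign"
```

The same borderline input is correctly flagged by the clean model but waved through by the poisoned one, which is why protecting the integrity of training data is as important as protecting the deployed model.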
