**4.4 Application of AI in UI design for cyber security threat modeling**

Threat Modeling in cyber security assists in examining current and potential vulnerabilities within a system and has been an instrumental process against security threats [71]. Although beneficial, conventional Threat Modeling approaches have struggled to ward off emerging security threats, creating a constant need for improvement in cyber security practice. Introducing AI into the UI enables behavioral analysis, which can be applied to the second step of threat modeling, threat determination, to forestall cyber attacks. An effective AI approach tailored to vulnerability management has been behavioral analysis of an attacker [72].

Threat Modeling incorporates AI to analyze various user interactions with interfaces and detect anomalies that may signal potential attacks. Software solutions designed to detect cyber threats include the Darktrace Immune System, a cyber security platform that uses AI to learn human interaction patterns on a system's interface for anomaly detection, as well as Vectra's Cognito and Paladion [73]. Codesealer is another software application that provides UI security [74]. Another application is the Automated Virtual Agent for Truth Assessment in Real-Time (AVATAR), a United States government security screening tool designed to detect false information during user interaction with the system; it is used for automated interviews at airport checkpoints [75]. Other applications of AI in cyber security include spam filtering and malicious traffic detection, cyber threats which require intelligent models to mitigate.
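The following is a minimal sketch, not taken from any of the products cited above, of how an unsupervised model can learn a baseline of user interaction patterns on an interface and flag anomalous sessions. The feature names (typing speed, click rate, navigation path length, failed logins) are illustrative assumptions.

```python
# Minimal sketch: flag anomalous UI interaction sessions with an unsupervised model.
# Feature layout (typing speed, click rate, navigation path length, failed logins)
# is an illustrative assumption, not the schema of any specific product.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" sessions: columns = typing speed (chars/s), clicks per minute,
# pages visited per session, failed login attempts.
normal_sessions = np.column_stack([
    rng.normal(5.0, 1.0, 500),    # typing speed
    rng.normal(20.0, 5.0, 500),   # click rate
    rng.normal(8.0, 2.0, 500),    # navigation path length
    rng.poisson(0.2, 500),        # failed logins
])

# Fit the detector on baseline behaviour only.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_sessions)

# Score new sessions: one ordinary user and one scripted, bot-like interaction.
new_sessions = np.array([
    [5.2, 22.0, 7.0, 0.0],      # typical human interaction
    [40.0, 300.0, 60.0, 15.0],  # rapid automated probing
])
labels = detector.predict(new_sessions)  # +1 = normal, -1 = anomaly
for session, label in zip(new_sessions, labels):
    print(session, "anomalous" if label == -1 else "normal")
```

In this kind of setup the model is trained only on interactions considered legitimate, so any session whose behavioral profile deviates strongly from that baseline is surfaced for review rather than classified against known attack signatures.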

It is vital to note that AI has also become a valuable tool for cyber criminals, which reinforces the importance of applying AI in UI designs [73]. AI application in UI is not without drawbacks, which generally centre on the lack of pattern-driven datasets and of computing and data resources. In addition, introducing AI attracts AI-targeted attacks such as model evasion, data poisoning and data stealing, although these can be managed through AI domain expertise combined with good security practices and safeguards.
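As one example of such a safeguard, the sketch below shows a simple sanity check against data poisoning: candidate training records whose features fall far outside the statistics of a trusted baseline are rejected before retraining. The threshold and feature layout are illustrative assumptions, not a prescribed defence.

```python
# Minimal sketch of one data-poisoning safeguard: reject candidate training
# records that deviate strongly from a trusted baseline. The z-score threshold
# and feature layout are illustrative assumptions.
import numpy as np

def filter_suspect_records(baseline: np.ndarray, candidates: np.ndarray,
                           z_max: float = 4.0) -> np.ndarray:
    """Keep only candidate rows within z_max standard deviations of the baseline mean."""
    mean = baseline.mean(axis=0)
    std = baseline.std(axis=0) + 1e-9           # avoid division by zero
    z_scores = np.abs((candidates - mean) / std)
    keep = (z_scores <= z_max).all(axis=1)      # a row passes only if every feature is in range
    return candidates[keep]

baseline = np.random.default_rng(0).normal(0.0, 1.0, size=(1000, 4))
candidates = np.vstack([
    np.random.default_rng(1).normal(0.0, 1.0, size=(50, 4)),  # plausible new data
    np.full((5, 4), 25.0),                                     # extreme, likely injected rows
])
clean = filter_suspect_records(baseline, candidates)
print(f"kept {len(clean)} of {len(candidates)} candidate records")
```

Simple statistical filtering of this kind does not stop a determined adversary on its own, but it illustrates the sort of routine safeguard that, combined with domain expertise, limits the impact of poisoned or manipulated training data.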
