**3. Cyber psychological operations and hybrid threats**

The strategic level of hybrid operations involves defining the main objectives, the targets, and possible collaboration networks. The choice of resources, and of the ways to combine them to operationalize the hybrid strategy, depends upon strategic deliberation. Conversely, the available means condition the set of tactics that may operationalize a given strategy.

The strategic power of hybrid operations in allowing state or non-state agents to achieve their strategic objectives has increased due to resources that allow for a high yield with a low investment; these resources are linked to network power and AI power, defined at the beginning of the present chapter.

With regard to hybrid operations, network power and AI power cannot presently be considered separately, since it is precisely the synergy of cyberspace and AI, in particular through ML, that determines the present strategic and tactical momentum of hybrid operations and that allows one to anticipate the future of hybrid threats. We now address one of the major components of hybrid operations, namely, information warfare and psychological operations using cyberspace.

Psychological operations (*psyops*) involve the use of different means and tactics in order to influence the behavior of target audiences. While, traditionally, *psyops* were employed by countries and constitute an integral part of military doctrine, the expansion of cyberspace has made it possible for groups that are not part of any country's official military branch to implement these operations. Examples include ISIS' online propaganda as well as the online activities of hacktivist groups such as Anonymous.

The use of cyberspace and hacking offers many ways to implement *psyops*: the defacement of a country's websites; the online dispersal of sensitive and/or compromising data through social media platforms; the use of the *dark web* for the disclosure of sensitive data that can then be made public on the *surface web*; and the use of social media for propaganda and recruitment, for the denouncement of different causes, and for the manipulation of citizen journalism as a way to publish both true and fake news and to disperse other fake contents (including images, audio, and video). At present, then, *psychological operations* are an integral component of hybrid warfare, in what constitute *cyber psychological operations*, or *cyops* for short.

As stated previously, in the present chapter, *cyops* are a major part of hybrid operations, and the *cyber psychological tactics* involved in *cyops* typically include [3]:

• *Propaganda* (in particular, dispersed online through social media)

• *Fake contents* (in particular, *fake news*)

• *Online dispersal of sensitive data* (*leaks*)

Each of these tactics takes advantage of network power, AI power, and cooperation power. There are three drivers that have amplified the effectiveness of the above tactics:

• The increased dispersal of connected devices, including *smartphones* and *tablets*, which allow easy and frequent access to the Internet

• The growing use of social media over traditional media

• Search engines and online services that adapt to each user's interaction pattern

Added to the infrastructural accessibility of these devices is their high-frequency use and the sometimes addictive component associated with that use, an addictive component usually linked to social networks.

A specific pattern of usage favors the dispersal of sensitive data, news, and general contents on social media: the online reading of social media contents usually does not involve a high level of reflection but rather engages users in a way that is meant to be appealing and to prompt quick sharing with as many people as possible. Users seldom read or reflect deeply on the contents that they share, usually skimming through them and passing on the most appealing ones.

This pattern is particularly useful for the dispersal of contents presented in the form of scandals, previously unknown sensitive information, denouncements of conspiracies, and so on. This point leaves a marker in data on fake content dispersal, as shown in a study on the differential diffusion of verified (true) and false rumors on Twitter from 2006 to 2017, published in [12]. In that study, *politics* and *urban legends* stand out as the two categories with the highest frequency in rumor cascades.

The study concluded that rumors about politics, urban legends, and science spread to the most people, while politics and urban legends exhibited more intense viral patterns [12].

The study found a significant difference in the spread of fake contents vis-à-vis true contents: true contents are *rarely diffused to more than 1000 people*, while

the top 1% of fake rumor cascades are *routinely diffused between 1000 and 100,000 people* [12]. The authors' results showed that fake contents reached more people at every depth of a cascade, where cascades are defined as *instances of a rumor spreading pattern that exhibit an unbroken retweet chain with a common, singular origin*.

The result that fake contents *reached more people at every depth of a cascade* means that more people retweeted fake contents than true ones, a spread that was amplified by viral dynamics. The authors found that fake contents did not just spread through broadcast dynamics but, instead, through peer-to-peer diffusion with viral branching.
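
To make the cascade notions concrete, here is a minimal sketch (in Python, with invented example data rather than data from [12]) of how the size and maximum depth of a single retweet cascade can be computed from parent links tracing each retweet back to a common origin.

```python
# Minimal sketch with invented data: computing the size and maximum depth
# of a rumor cascade, modeled (per the definition above) as an unbroken
# retweet chain with a common, singular origin.

# Each entry maps a retweeting account to the account it retweeted from;
# the cascade's origin has no parent entry.
parent = {
    "user_b": "origin",
    "user_c": "origin",
    "user_d": "user_b",
    "user_e": "user_d",
}

def depth(node: str) -> int:
    """Number of retweet hops from the cascade's origin to this node."""
    d = 0
    while node in parent:
        node = parent[node]
        d += 1
    return d

size = len(parent) + 1  # all retweeters plus the origin tweet
max_depth = max(depth(n) for n in parent)
print(size, max_depth)  # 5 accounts reached, depth 3 (origin -> b -> d -> e)
```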

Another relevant point for hybrid operations was that fake political contents traveled deeper and more broadly, reaching more people, exhibiting a stronger viral pattern than any other category, and diffusing deeper more quickly. This dynamics is not, however, due to users who spread fake contents having a greater number of followers; the study found exactly the opposite, with high statistical significance. In inferential terms, users who spread fake contents tend to have fewer followers, to follow fewer people, to be less active on Twitter, to be verified less often, and to have been on Twitter for less time. Nevertheless, fake contents were 70% more likely to be retweeted than true contents, with a *p-value* of 0.0 in a Wald chi-square test.

The fact that user connectedness and network structure did not seem to play a relevant role in *fake content* dispersal made the authors seek other explanations for the differences in *fake content* versus *true content* dispersal. The authors reported that fake contents usually inspired a greater number of replies exhibiting surprise and disgust. The authors' hypothesis is that novelty may be a key factor in *false rumor dispersal*.

However, there is a relevant point to take into account when looking at the study's results, which can be expressed by the following extreme example: an account with no followers, and that follows no one, can still get a high number of retweets and high exposure for a content if it uses *hashtags* on hot topics and builds its tweet in a way that increases the probability of it being retweeted.

Moving beyond this specific study and considering social networks in general, working from the conceptual basis of strategic studies, we are led to introduce the concept of *tactical accounts*, defined as accounts that are created for tactical purposes in support of a *cyop* strategy; these accounts can be managed by a single individual or by a staff, and their operation can involve the use of *bots* that automatically generate contents with certain specifications, mostly aimed at making the contents viral as they spread.

Viral content design, along with multiple accounts operated by *bots*, is a major tool for a *tactical account system manager*; that is, any operative can use multiple tactical accounts simultaneously to create a fake content dispersal that gains momentum and becomes viral.

In general, fake contents can spread on hot topics through the use of *hashtags* or other means of dispersal, which diminishes the connectivity needed for any single *tactical account* to have an effective impact. Furthermore, from a *cyops* standpoint, it is easier to *fly under the radar* by managing multiple newly created fake accounts, which can even be managed by a single agent, who may then use these accounts to disperse fake contents incorporating *hashtags* on political issues and composing the messages so that they have an appealing emotive content, making it more likely for people to select them.

Returning to the study [12], the fact that the authors did not find strong evidence that algorithms were biased toward the spreading of fake contents, but rather that fake contents were being dispersed by people, supports the point that the way in which the message is built is the key factor in getting a fake content to gain traction. This point is echoed in [13], where it is argued that belief in fake contents is driven by emotional responses amplified by macro-level social, political, and cultural trends.

Social media are particularly sensitive to the careful crafting of the message to fit viral conditions, in the sense that these media are managed by platform-based businesses optimized for the quick spread of information to reach target audiences and for mass dispersal; they are thus aimed by design at viral dynamics and addictive usage patterns that increase interaction time with the platform and create value for these businesses.

The technology is thus an enabler of viral dynamics and, in that way, facilitates the dispersal of fake contents through the way in which these contents are produced: the message that they contain, the emotional responses they are aimed to evoke, their timing, and the management of their conditions of dispersal (for instance, the use of *hashtags* on trending topics on Twitter). All of this increases the likelihood that fake content, rather than carefully crafted, reflection-demanding true content, becomes viral.

Hybrid tactics can take advantage of multiple (*fake*) *tactical accounts* and use methods of automated content generation, with possible applications of data science, in order to generate the content presentation that is most effective in getting people to adhere to and, thus, share the content. By working with data on viral tweets, ML algorithms may be trained to predict the structure of a content that makes it more probable to become viral; this can help a *cyops* operative design the message to be more viral and then use *tactical accounts* to disperse it. Message components, including *hashtags*, *emoticons*, and *gifs*, are useful tools for manipulating the message content to better fit a target audience [14].
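
As a purely illustrative sketch of the kind of pipeline just described, the toy classifier below is trained on a handful of invented, hand-labeled messages and scores candidate drafts for virality; the dataset, features, and messages are hypothetical, and nothing here reproduces the actual methods of [12] or [14].

```python
# Illustrative sketch only: a minimal virality classifier of the kind the
# text describes. All training data below is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical past tweets, labeled 1 if they went viral, 0 otherwise.
tweets = [
    "BREAKING: leaked documents expose secret deal #scandal",
    "Quarterly report published as scheduled",
    "You won't believe what was just uncovered #conspiracy",
    "Committee meeting rescheduled to Thursday",
]
went_viral = [1, 0, 1, 0]

# Bag-of-words features (word and hashtag tokens) feeding a simple
# logistic-regression model that outputs a virality probability.
model = make_pipeline(
    TfidfVectorizer(token_pattern=r"#?\w+", ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(tweets, went_viral)

# Candidate drafts can then be scored and the highest-scoring one kept.
drafts = [
    "Official statement released today",
    "EXPOSED: what they don't want you to see #scandal",
]
for draft, p in zip(drafts, model.predict_proba(drafts)[:, 1]):
    print(f"{p:.2f}  {draft}")
```

A real system of this kind would require a large labeled corpus and richer features (timing, hashtags on trending topics, emotive wording), but the structure of the loop, train on past viral content, then score drafts, is the point of the example.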

Another way to manipulate viral content dispersal is through cyberattacks aimed at compromising search engines and recommendation engines in order to disperse fake content that fits the goals of an intended *cyop*.

Search engine poisoning or even a more sophisticated *search engineering* has been applied in the past by *black hats* to spread malware and fake contents [15, 16].

There are various methods employed in this last context that can be highly effective for hybrid operations. The first is content injection into websites, online forums, and social media in the form of spam posts, which can also point to specific websites used within a *cyop*; this is a task that can be automated.

A second level is the creation of networks of websites and social media accounts that spread alternate media messages and that reinforce *echo chambers* for specific content, which can thus become viral, taking advantage of a concerted social media campaign that publicizes these accounts.

The sharing of these accounts can, in turn, become viral and link to different alternate media websites that can be used for *cyops* and to manipulate a user's web searches and interactions with different content platforms. If, in the interaction with any search engine or content platform, there is powerful algorithmic adaptation to each user's pattern, then any user influenced by viral content will tend to be fed back the content at which the *cyop* is aimed. In this way, by strategically using viral dynamics, a *cyop* can manipulate a vast number of users and engineer massive *echo chambers* in which users get personalized content that fits the *cyop* in question.

In this case, the hacker or hackers do not need to compromise the AI systems that manage a social media platform; rather, they are *hacking people's behaviors* and taking advantage of the effectiveness of the platform's own AI systems in adapting content to each user's interaction profile. Since the way a user interacts with a platform leads to a specific response on the part of the platform for automatic user personalization, each user gets his or her own experience; however, the commonality of usage patterns allows collectives of users with common tastes to receive similar or confluent viral contents.
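
The feedback loop described in the last two paragraphs can be caricatured in a toy simulation, shown below; all click probabilities and the reinforcement rule are invented for illustration. The point is only that a platform which reinforces whatever earns engagement will, by replicator-like dynamics, drift a user's feed toward the narrative that a *cyop* has made engaging.

```python
# Toy simulation (invented parameters) of echo chamber engineering: a
# recommender that reinforces clicked topics gradually narrows the feed
# toward the content a cyop has seeded.
import random

random.seed(0)
topics = ["mainstream", "cyop_narrative"]
weights = {"mainstream": 0.9, "cyop_narrative": 0.1}  # initial feed mix

def user_clicks(topic: str) -> bool:
    # Assumed behavior: viral content has nudged the user toward the
    # seeded narrative, so it earns a higher click rate per impression.
    return random.random() < (0.8 if topic == "cyop_narrative" else 0.3)

for step in range(50):
    shown = random.choices(topics, weights=[weights[t] for t in topics])[0]
    if user_clicks(shown):
        weights[shown] += 0.1  # platform reinforces whatever gets engagement
    total = sum(weights.values())
    weights = {t: w / total for t, w in weights.items()}  # renormalize feed mix

print(weights)  # feed composition has drifted toward the cyop narrative
```

The share of whichever topic has the higher per-impression click rate grows at the expense of the other, which is the mechanism, personalization feeding on engagement, that the text argues a *cyop* can exploit without compromising any system.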

Creating and financing tactical networks of social media accounts amplifies this *hybrid strategy*, especially when the platform adapts very quickly to a user's profile, facilitating the *echo chamber engineering* needed for the *cyop* to be successful. Similar tactics can be employed on any type of social network. However, of the different online media platforms, Facebook seems to stand out in terms of effectiveness of fake news dispersal, with a higher frequency of cases of visits to fake news websites occurring near a Facebook visit, as reported in [17].

Besides content injection in *blogs*, *forums*, and *social media*, another route to search poisoning involves content injection in compromised websites, as well as search redirection. Search redirection attacks employ compromised sites, whose owners usually do not suspect that their website has been compromised, in a search redirection operation [16]. These *source infections* in turn redirect to traffic brokers that redirect traffic to specific destinations that fit the hackers' main goal [16]. Currently, ML algorithms are being trained to detect redirection as a defense against it [18]; however, ML algorithms and data science can also be employed to manipulate content, including written text, pictures, and even videos. In the foreseeable future, a greater ability of *deep fake videos* to fool people may greatly enhance the impact of fake content dispersal.
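
On the defensive side just mentioned, a minimal sketch of a redirection check is given below: it follows a URL's HTTP redirect chain (using the requests library) and flags hops through suspected traffic-broker domains. The broker domains are hypothetical placeholders, and real detectors, such as the ML-based approaches referenced in [18], are considerably more sophisticated.

```python
# Hedged sketch of a simple defensive check against search redirection:
# follow a URL's redirect chain and flag hops through domains on a
# (hypothetical) traffic-broker blocklist.
from urllib.parse import urlparse

import requests

SUSPECT_BROKERS = {"traffic-broker.example", "redirector.example"}  # hypothetical

def redirect_chain(url: str) -> list[str]:
    """Return every URL visited while following HTTP redirects."""
    resp = requests.get(url, timeout=10, allow_redirects=True)
    return [r.url for r in resp.history] + [resp.url]

def looks_like_search_redirection(url: str) -> bool:
    """True if any hop in the chain passes through a suspected broker."""
    return any(
        urlparse(u).hostname in SUSPECT_BROKERS for u in redirect_chain(url)
    )

# Example usage (hypothetical URL):
# print(looks_like_search_redirection("http://compromised-site.example/page"))
```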

While disinformation and propaganda, through online fake content and propaganda dispersal operations, have become highly impactful in terms of their strategic and tactical value [3, 17], there is another level of *cyops*, implementable by any state or non-state agent, that can have a strong impact on society. This is exemplified by the *Blue Whale Challenge*, which is an example of the power of what can be considered a *gamification attack*.

*Gamification attacks* use the Internet to introduce a game that leads the players through a series of challenges down a path toward either self-harm or even murder. In the case of the *Blue Whale Challenge*, the players were led to self-harm. The game involved a series of life-threatening tasks given to players by a curator, and each player had to fulfill these tasks, which ended with the suicide of the player [19]. In a certain sense, this constitutes a cyberspace-enabled form of murder, by leading a person to commit suicide. The *Blue Whale Challenge*'s curators can be treated as a new breed of serial killers who use the Internet for psychological and physical torture, eventually leading their victims to kill themselves as the endgame of the tasks that they give their victims.

If we replace the final task of *suicide* with a final task where the player has to *murder* someone else, or even a number of people, perhaps in exchange for the player's own life (an *either kill yourself or commit murder* option), then the *Blue Whale Challenge* becomes the first example of the design of a web game that can lead people not only to suicide but also to murder, on a scale and with an intensity comparable to those of standard terrorist networks.

One should stress that this is a form of *cyops* that can easily be engineered by someone not affiliated with any terrorist group. A single person can take advantage of the power of cyberspace and of social networks to create such *challenges*; furthermore, even if that individual is caught and arrested, the game can go on independently of the individual, since anyone can become a *curator*. The game itself becomes the terror referent and the platform for terror practices.

This breaks with any traditional approach to engaging and handling terrorist organizations, since a *terror game* can be played by anyone, without any political goal, without any political affiliation, and with no end other than the exercise of violence. The new serial killers who become curators of these games can be caught and imprisoned, but the game can go on in different iterations. There is a form of digital autonomy and continuation of a *terror game*, as a collective dynamics sustained by its players, that goes on despite the catching of particular curator players, as

long as it is available for playing; the game can even come back with new variations and persist, and even if it has no players at a given time, it can be played again at any time.

This is not a *terror network* that one can address with traditional tactics; it is a *terror game*, and the *Blue Whale Challenge* is just the first example of this. The game becomes the *referent* for any players, who may never have physically met. Systemically, the game becomes a *dispositional driver* for a typological order of cyber-enabled terrorist practices. Another point is that, potentially, such *terror games* can be sustained by non-humans, that is, by AI systems: even if all human curators were caught and arrested, *bots* could take over and play the same role as a human curator (the player that abuses the other players). In this sense, a single individual, using AI systems, can create *terror games* sustained by an "army" of *cyop bots* that will be difficult to stop. A new breed of twenty-first-century serial killers can become a source of new cyber-enabled terrorism that uses *gamification* as a way to resiliently murder on a global scale, with an impact on par with that of major standard terrorist organizations.

The reason why *bots* can be used as *cyber psychological weapons* in such *games* as the *Blue Whale Challenge* is linked to the algorithmic basis of these games' approach; in particular, the behavior of curators can be algorithmically replicated by *cyop bots*. Indeed, the process involves using social networks to search for young people who fit specific profiles, which can include being depressed or showing addictive behavior. The list of tasks includes dynamics that introduce sleep deprivation, listening to psychedelic music, watching videos with disturbing contents sent by the curator, and inflicting wounds on one's body, among other tasks [20]. The tasks follow a prescribed set of steps that lead the victim into a disturbed mental state, susceptible to the influence of the curator; the victim is the target of a form of *cyop* that falls within a pattern that can easily be turned into an algorithm.

The *gamification* of *cyops* in terror operations is in its infancy; however, the tools available to it are amplified by the IoT, mobile devices, and platform usage. In the *Blue Whale Challenge*, we see a new tactic based on platform weaponization, that is, the use of platform-based businesses to compromise their users and eventually lead to their deaths (in the case of the *Blue Whale Challenge*) or even to the killing of others (if, instead of suicide, the player is led to kill others).

Empowered by *cyop* bots, a small number of individuals, or even one individual, can create a game that may go on independently of them; the game can persist as a dynamics that continues to be played on the platform, which functions as a replicator for the deviant and predatory behavioral patterns needed for the *terror game* to go on. Having been played once, the dynamics that characterize the game can always come back; in this sense, the platform works as a means for the digital continuation of the terror game.

This is very different from the case of a terrorist network with a hierarchical structure, with cells and individuals that play different roles within an organization.

A *terror game* is just a set of behavioral patterns, with algorithmic components, that can be replicated like a form of social virus, which goes on as long as there are players. There is no stable hierarchy, there are no cells, and there are no individuals whose targeting would harm the game, because the game has a virtual, fluid existence that can be perpetuated as a dynamics to be retrieved any time, any place.

The *terror game* is characteristic of a side of platforms, especially social networking platforms, that makes them highly weaponizable; namely, platforms are means for the exercise of biopower in the sense of Foucault [21], a point that is convergent with the issues addressed in [22].

Platforms can function as means for the exercise of control, reward, and punishment and for the manipulation of their users' desires, fears, and sources of inclusion and exclusion, integration and segregation, connection and isolation, and friendship and bullying.

With people increasingly sharing their lives on platforms and using integrated systems, in particular IoT devices, the new stage of the Internet revolution is such that any heavy user of these systems can be *datafied*, profiled, and manipulated through hacked devices (including hacked AI systems), and manipulated by predators who use fake accounts and their victims' profiles to launch directed *cyops* that can, in the end, as was the case with the *Blue Whale Challenge*, lead to a person's death.

According to data reported in [20], Instagram ranks higher in related posts than the Russian VK social network (where the game initially spread) and Twitter. On Twitter, the large majority of posts related to the *Blue Whale Challenge* were identified by the authors as coming from *smartphones* running the Android OS, which shows how useful mobile devices are in feeding *terror gamification* operations.

Another pattern revealed in these authors' research is a key common factor in online *cyop* campaigns: many accounts talking about the *Blue Whale Challenge* were new accounts with few followers, which again shows the possible use of *tactical accounts*. This is a basic and necessary tactical choice for predators operating online, who will want to hide their identity; furthermore, in order to gain online traction for a *cyop*, the use of multiple *tactical accounts* is a necessary step. Thus, just as with *state agents*, *non-state agents*, including *cyber-enabled serial killers*, may tend to use multiple *tactical accounts* on online platforms when addressing their targets.

The use of challenges like the *Blue Whale Challenge* and the *Momo Challenge* directly targets a large number of victims and constitutes a security and law enforcement problem [23].

Returning to *cyops*, whatever their profile, these are currently about using cyberspace and ML for *hacking people's behaviors*. In this sense, while a *cyop* against a given country may take advantage of resilience, authority, or legitimacy vulnerabilities, the increasing use of platform-based technologies managed by ML algorithms, where each user's data is exposed and available for exploitation, leads to another level of vulnerability: the ability to use citizens' own data and behavioral patterns against them, or to manipulate citizens into patterns of behavior that interest a given state or non-state agent.

The fact that *cyops* have certain components that are algorithmizable implies that one can program bots as *cyop* weapons that function as a new form of computer virus: a behaviorally conditioning, content-based virus aimed at hacking people's behaviors, delivered through platforms for both mass exposure and personalization. The current trend of using algorithms for decision-making and in everyday life, integrated into platforms that feed on each user's data and adapt services and contents to each user's profile, makes AI weapons, employed in *cyops*, increasingly effective tools.

While the *cyops* discussed above include the creation and manipulation of contents to produce responses and manipulate people's behaviors, the impact of these contents can be amplified even further if their dispersal is timed with the leak of true contents: people tend to believe fake content that is consistent with true content. The leak of true content can thus initiate a fake content campaign, where the true content provides the context for the fake contents that will be used in the campaign.

In this case, *leak platforms* like *WikiLeaks* can be used by hackers, whistleblowers, and other agents (state and non-state) to disperse true content and provide the timing for initiating fake content campaigns. However, besides *leak platforms*, there

is another level of hybrid operations, which also increases the threat that these types of operations pose for any country: the new *hybrid human intelligence/counterintelligence (CI)* context, in which the concept of a new field agent is a key factor.
