**2. Modelling opinions and their dynamics**

Opinion formation has been studied from many angles and with different mathematical techniques, including mean-field theory and kinetic models of opinion formation [27], or agent-based models on a social network. Individual opinions on a certain topic are usually modelled as a single number contained in some closed interval whose endpoints represent extreme (opposing) opinions, for example, left-right leaning voters [28], the level of production of an employee in a plant [7] or perceptions of security and insecurity [29]. The process of opinion updating is then modelled as the result of interaction with other views, a process of self-thinking, some memory loss, or external factors. Interactions between individuals are usually modelled on some social structure, such as a network, considering some spatial proximity, or considering some social aspects, such as the level of influence of one individual on others [30]. A long-term, steady distribution of opinions is usually obtained, either as an analytical solution to some differential equations or through simulations, which reveals, among other phenomena, the formation of opinion clusters, political segregation [31], vaccine hesitancy [12], the use of certain tools [32], the spread of fear of crime more as a result of opinion dynamics than of crime itself [29], or even the diffusion of fake news [14].

### **2.1 The key ingredients in opinion dynamics models**

There are four ingredients in opinion dynamics models [30, 31]: individual opinions; an updating process, driven by individual or external forces; interactions between individuals; and metrics to characterise the collective state, such as polarisation and homophily.


In a highly polarised population, most individuals might not be aware that so many people with different views even exist, whereas in a polarised society with low levels of homophily, encounters between people with opposite views happen frequently. Furthermore, a highly polarised society might be a steady state of some opinion dynamics, but given the right circumstances (parameters), that state could be highly homophilic or one in which most individuals interact frequently with people with different views (**Figure 1**).

**Figure 1.**

*Opinions (represented by the different colours of the nodes) are shared between individuals who interact (if there is a link between the nodes). Different states in which opinions are distributed show small polarisation (left part, where most individuals have similar views) or high polarisation (right part, where opinions are split in half) and might show low homophily (bottom part, where opposite opinions are frequently shared among interacting individuals) or high homophily (top part, where opposite opinions are rarely shared among connected nodes).*

*Theory of Complexity - Definitions, Models, and Applications*

Social media and other technological changes could increase exposure to diverse perspectives [19], but at the same time they facilitate mechanisms, such as the creation of links or friendships in the network, filtering algorithms and the ranking of information, which may accelerate the formation of homophilic communities [16, 20]. People frequently aggregate in groups of interest, and those existing communities frequently adopt narratives from different topics, reinforcing polarisation across distinct themes, for instance, political ideology and perceptions of the COVID-19 pandemic [21, 22]. People interacting with homogeneous communities tend to develop more extreme opinions and become more certain in their beliefs [13], which can favour the spread of misinformation from partisan media and increase animosity within the population [23]. For COVID-19, for example, most of the misinformation detected involves reconfigurations, where existing (often true) facts are adjusted to fit different narratives [24], which are then reproduced by large homophilic groups as facts. Massive misinformation is becoming one of the main threats to our society [14, 25, 26], and it might be fostered by an increasingly homophilic opinion dynamics process and a polarised society.

Although some analytical results are available [27, 28], the dynamics are usually simulated on a network. Simulation allows considering individual aspects, such as assertiveness, persuasiveness, supportiveness, extremism or opinion volatility [28, 31, 34, 35].

#### *2.1.1 Measuring polarisation and homophily*

A group might reach an agreement or *consensus* on some opinions if the majority of the individuals share similar views, whereas a group might be *polarised* if opinions are divergent, with *extremism* being the state in which opinions are mostly concentrated at the two extremes. One way to measure the level of polarisation in a population is through the variance of the opinion profile, where a large variance means a more polarised society and a small variance means consensus. Formally, the polarisation $\Phi$ of an opinion profile $\mathcal{S}$ is given by

$$\Phi(\mathcal{S}) = \mathrm{Var}(\mathcal{S}) = \frac{1}{N} \sum_{i=1}^{N} \left( s_i - \bar{s} \right)^2, \tag{1}$$

where $\bar{s}$ is the mean opinion of the profile.


*Opinion Dynamics and the Inevitability of a Polarised and Homophilic Society*

*DOI: http://dx.doi.org/10.5772/intechopen.96989*


For opinions bounded inside the $[-1, +1]$ interval, very small populations could have $\Phi(\mathcal{S})$ values larger than 1, but for a population with more than 100 individuals, $\Phi < 1.01$, and so for large enough populations it might be considered that $\Phi$ takes values between 0 (if there is consensus) and 1 (if there is extremism). If a random opinion $s_k(0) \in [-1, +1]$ is sampled for each individual, a 95% interval of the polarisation is $\Phi(\mathcal{S}) \in [0.327, 0.338]$ and therefore, the distribution of opinions $\mathcal{S}$ is classified as *consensus* if $\Phi(\mathcal{S}) \approx 0$; *consensual* if $\Phi(\mathcal{S}) < 1/3$; *homogeneous* if $\Phi(\mathcal{S}) \approx 1/3$, so that the polarisation is similar to that of a random distribution of opinions; *polarising* if $\Phi(\mathcal{S}) > 1/3$; and *extremism* if $\Phi(\mathcal{S}) \approx 1$ (**Figure 2**).
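The random baseline $\Phi \approx 1/3$ can be checked numerically; the sketch below (plain Python, with a population of $N = 2{,}000$ assumed to match the simulations later in the chapter) samples many random opinion profiles and evaluates Eq. (1):

```python
import random

def polarisation(S):
    """Phi(S) from Eq. (1): the variance of the opinion profile."""
    m = sum(S) / len(S)
    return sum((s - m) ** 2 for s in S) / len(S)

rng = random.Random(0)
N = 2000  # population size; an assumption matching the later simulations

# Sample 200 random opinion profiles, uniform on [-1, +1], and record Phi
samples = [polarisation([rng.uniform(-1, 1) for _ in range(N)])
           for _ in range(200)]

print(min(samples), max(samples))  # both close to Var = 1/3
```

The exact 95% band quoted above depends on the population size and on how the interval is estimated; the point of the sketch is only that random opinions concentrate sharply around $\Phi = 1/3$.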

**Figure 2.**

*Classification of collective opinions according to their distribution (represented as the height of each colour bar), from consensus (left), where $\Phi(\mathcal{S}) \approx 0$, to extremism (right), where $\Phi(\mathcal{S}) \approx 1$.*

The process of opinion dynamics has a high level of *homophily* if most of the interactions happen between individuals of similar views, and a low level otherwise. Formally, if $\mathcal{A}_i$ is the set of nodes adjacent to $i$, then the average opinion distance $\mathcal{D}_i$ experienced by $i$ is given by

$$\mathcal{D}_i = \frac{1}{d_i} \sum_{j \in \mathcal{A}_i} |s_i - s_j|, \tag{2}$$

where $d_i$ is the degree of $i$, so that $\mathcal{D}_i$ gives the average opinion distance from a node to its adjacent neighbours (with $\mathcal{D}_i = 0$ if $i$ has no neighbours). The opinion homophily $\Lambda(\mathcal{S})$ is defined as


$$\Lambda(\mathcal{S}) = 1 - \frac{1}{N} \sum_{i=1}^{N} \mathcal{D}_i, \tag{3}$$

a metric suited for measuring homophily based on a continuous node attribute, such as opinions, with high values if individuals interact with others of similar views and lower values (possibly negative) if interactions are more frequent between individuals of very different views. Notice that the metric depends on the opinion profile but also on the network topology. On a linear network, for instance, where all nodes have two neighbours except for the two at the extremes, opinions in the $[-1, +1]$ interval are highly polarised (extremism) if half of the individuals hold $-1$ and the other half hold $+1$ as opinions, and $\Phi(\mathcal{S}) \approx 1$. Such an opinion profile is not homophilic with alternating opinions, $\mathcal{S} = (+1, -1, +1, -1, \dots, -1)$, for which $\Lambda(\mathcal{S}) = -1$, but a high level of homophily is observed when opposite opinions are located on the two extremes of the network, $\mathcal{S} = (+1, +1, \dots, +1, -1, \dots, -1, -1)$, in which case only the two neighbouring individuals located at the boundary between the opinion groups interact with a person who has a distinct opinion from their own, and so $\Lambda(\mathcal{S}) = 1 - 2/N \approx 1$. The expected opinion distance between two randomly selected opinions is $2/3$, from which $\Lambda(\mathcal{S}) \approx 1$ means *preferential interactions* between individuals of similar views; $\Lambda(\mathcal{S}) > 1/3$ means *homophilic interactions*; $\Lambda(\mathcal{S}) \approx 1/3$ means random interactions; and $\Lambda(\mathcal{S}) < 1/3$ means *discouraged interactions* between people of similar views.
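The linear-network examples above can be verified directly; the sketch below (plain Python, with hypothetical helper names) evaluates Eqs. (1)-(3) on a path network:

```python
def polarisation(S):
    """Phi(S), Eq. (1): variance of the opinion profile."""
    m = sum(S) / len(S)
    return sum((s - m) ** 2 for s in S) / len(S)

def homophily(S, neighbours):
    """Lambda(S), Eqs. (2)-(3): one minus the mean opinion distance D_i."""
    total = 0.0
    for i, adj in neighbours.items():
        if adj:  # D_i = 0 for isolated nodes
            total += sum(abs(S[i] - S[j]) for j in adj) / len(adj)
    return 1 - total / len(S)

# Linear network: node i is linked to i - 1 and i + 1
N = 1000
line = {i: [j for j in (i - 1, i + 1) if 0 <= j < N] for i in range(N)}

split = [+1] * (N // 2) + [-1] * (N // 2)    # opposite opinions on each half
alternating = [(-1) ** i for i in range(N)]  # +1, -1, +1, -1, ...

print(polarisation(split))           # 1.0: extremism
print(homophily(split, line))        # 1 - 2/N = 0.998: highly homophilic
print(homophily(alternating, line))  # -1.0: opposite views always interact
```

Only the two boundary nodes of the `split` profile contribute a non-zero $\mathcal{D}_i$, which is why the homophily is $1 - 2/N$, exactly as in the text.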

#### **2.2 An opinion dynamics model**


Consider a diffusion process of opinions on a network, where the four key ingredients (individual opinions, an updating process driven by individual or external forces, interactions, and the corresponding metrics) are defined as follows. Initially, $N$ individuals have a randomly-distributed opinion $s_i(0) \in [-1, +1]$, where the endpoints represent two extreme views on a certain topic. As an external force in the dynamics, we consider exposure of the individuals to some "propaganda" in favour of one of the views. At each time step, a randomly selected group of 1% of the individuals is exposed, in an alternating sequence, to some supporting mechanism in favour of one of the two views. It is assumed that each view fully favours one of the two extreme opinions, so that $v_1 = +1$ is the first force, which favours view $+1$, then $v_2 = -1$, which supports view $-1$, and so on, with $v_k = (-1)^{k+1}$. As opinion dynamics, individuals who are exposed to any of the views ($v_k = \pm 1$) decide whether to "trust" or to "dismiss" them based on their current opinion and on the persuasiveness of the views $\theta$, where $\theta \in [0, 1]$ is a parameter which captures how seductive the views are (large values of $\theta$ mean that views are more seductive and individuals are more inclined to trust them; smaller values mean that views are likely to be dismissed). Due to confirmation bias, individuals with opinion closer to $+1$ are more likely to trust $v_k = +1$ propaganda, as it confirms their views, and more likely to ignore $v_l = -1$ propaganda for the same reason. To capture confirmation bias, it is assumed that person $i$ with opinion $s_i$ trusts view $v_k$ with probability

$$\frac{(1 + s_i)\theta}{2} \quad \text{if} \quad v_k = +1, \text{ and} \tag{4}$$

$$\frac{(1 - s_i)\theta}{2} \quad \text{if} \quad v_k = -1. \tag{5}$$

With this condition, a person with opinion $s_i = 0.4$ trusts view $v_k = +1$ with probability $0.7\theta$, but trusts view $v_l = -1$ with probability $0.3\theta$, for some $\theta$ which depends on how seductive the corresponding propaganda is, so that individuals are more inclined to trust views which favour their own opinions. Individuals who are seduced by any propaganda share it with all their contacts as an active effort to persuade them, say by sharing or posting the views on social media. Individuals who dismiss propaganda make a permanent decision to ignore it: they do not update their views and do not share it with their contacts. Thus, when individuals are exposed for the first time to some views, they make a permanent choice whether to accept them (and update their opinion and share them) or to ignore them (and do nothing). Therefore, after 1% of the individuals are first exposed to propaganda, some individuals trust it and share it with their contacts, and so on, until no one is exposed for the first time to that propaganda. It is assumed that the sharing mechanism (social media, say) works faster than the creation of new propaganda, so that by the time a new view $v_{k+1}$ is created and distributed, the dynamics of the previous (opposing) propaganda $v_k$ has finished. Each wave of propaganda follows a diffusion process similar to the SIR model used in epidemics, where a small percentage of the individuals are initially exposed; the "infection" (the propaganda) passes through individuals, and the distribution of the recovered individuals is observed [12, 32].
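The wave mechanism can be sketched as a small branching process; the code below (plain Python, with hypothetical function names) implements a single wave: 1% of individuals are exposed, each exposed person decides exactly once using the confirmation-biased probability, and trusting individuals expose all their contacts. Note that $(1 + v\,s_i)\theta/2$ reproduces Eq. (4) for $v = +1$ and Eq. (5) for $v = -1$; the opinion update after trusting (Eq. 6, introduced next) is omitted here.

```python
import random

def wave_reach(S, neighbours, v, theta, rng):
    """One SIR-like propaganda wave for view v = +1 or -1.

    Returns the set of individuals who trusted (and therefore shared) it;
    every exposed person decides exactly once."""
    N = len(S)
    exposed = set(rng.sample(range(N), max(1, N // 100)))  # 1% initial exposure
    frontier = list(exposed)
    trusted = set()
    while frontier:
        i = frontier.pop()
        if rng.random() < (1 + v * S[i]) * theta / 2:  # Eqs. (4)-(5) combined
            trusted.add(i)
            for j in neighbours[i]:  # share with every contact
                if j not in exposed:
                    exposed.add(j)
                    frontier.append(j)
    return trusted

rng = random.Random(3)
N = 300
ring = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}  # toy ring network
S = [rng.uniform(-1, 1) for _ in range(N)]

reach = wave_reach(S, ring, v=+1, theta=0.8, rng=rng)
print(len(reach))  # individuals who trusted and shared this wave
```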

Individuals who accept some propaganda at time $t$ update their views according to the *volatility* of their opinions, $\mu \in [0, 1]$, such that individuals who accepted view $v_k$ update their opinion between $t$ and $t+1$ according to

$$s_i(t+1) = \mu v_k + (1 - \mu)\, s_i(t), \tag{6}$$


so that if opinions are volatile (that is, individuals easily update their views, with a large value of $\mu$), then most of their opinion at time $t+1$ depends only on the views of the propaganda they accepted, whereas with more rigid opinions (individuals change their past views little, with small $\mu$), the impact of propaganda becomes small. For example, for view $v_k = +1$ and with volatility $\mu = 0.5$, a person with opinion $s_i(t) = 0.8$ updates their view to $s_i(t+1) = 0.9$ if they accept $v_k$, whereas a person with view $s_j(t) = -0.8$ updates their view to $s_j(t+1) = 0.1$, meaning that a person with very different views from certain propaganda is more difficult to convince, but once convinced, the impact on their opinion is larger (**Figure 3**).
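The worked numbers above can be checked directly; a minimal sketch (the function names are mine, not the chapter's):

```python
import math

def trust_probability(s, v, theta):
    """Eqs. (4)-(5): probability that a person with opinion s trusts view v."""
    return (1 + v * s) * theta / 2

def update(s, v, mu):
    """Eq. (6): opinion after trusting view v, with volatility mu."""
    return mu * v + (1 - mu) * s

# A person with opinion 0.4 trusts v = +1 w.p. 0.7*theta and v = -1 w.p. 0.3*theta
assert math.isclose(trust_probability(0.4, +1, 1.0), 0.7)
assert math.isclose(trust_probability(0.4, -1, 1.0), 0.3)

# With mu = 0.5, opinion 0.8 moves to 0.9, while -0.8 jumps all the way to 0.1
assert math.isclose(update(0.8, +1, 0.5), 0.9)
assert math.isclose(update(-0.8, +1, 0.5), 0.1)
```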

A crucial element in opinion models is the way in which interactions between individuals are structured.

#### **Figure 3.**

*Probability of trusting any of the two types of propaganda, $v_k = \pm 1$, represented as the two triangles on the left, based on the individual opinions (represented as the colour of the nodes) and on how seductive the two views are, θ. Propaganda which supports the views of a person is more likely to be trusted by that person, but still, all propaganda has a certain level of persuasiveness, θ (the maximum height of the triangles). The impact of trusting some propaganda on individual opinions is higher if opinions are more volatile (higher values of μ) and smaller if opinions are more rigid, which is shown as a slight colour change for rigid opinions and a drastic colour change for volatile opinions on the right.*

#### *Opinion Dynamics and the Inevitability of a Polarised and Homophilic Society DOI: http://dx.doi.org/10.5772/intechopen.96989*

Society has opinion clusters (for example, a social media group in which information flows easily) and opinion hubs (influencers, for example, who reach a large population), and is likely to be strongly connected, with many shortcuts between people who are not directly connected; therefore, the network on which propaganda is shared is also a key element in the model. Four network topologies with $N = 2{,}000$ nodes are analysed here: (1) a fully connected network; (2) a proximity network (nodes are located randomly on a square and pairs at a distance smaller than a certain threshold are connected); (3) a small-world network; and (4) a scale-free network.
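The four topologies can be sketched without any graph library; the constructions below (plain Python, with a smaller hypothetical $N$ for speed, and standard Watts-Strogatz-style and Barabási-Albert-style generators for the small-world and scale-free cases, which the chapter does not specify) return adjacency sets:

```python
import random

rng = random.Random(7)
N = 200  # the chapter uses N = 2,000; smaller here to keep the sketch fast

# (1) Fully connected: every pair of nodes is linked
complete = {i: {j for j in range(N) if j != i} for i in range(N)}

# (2) Proximity: random points on a unit square, linked if closer than r
pts = [(rng.random(), rng.random()) for _ in range(N)]
r = 0.12  # hypothetical distance threshold
proximity = {i: {j for j in range(N) if j != i and
                 (pts[i][0] - pts[j][0]) ** 2 + (pts[i][1] - pts[j][1]) ** 2 < r * r}
             for i in range(N)}

# (3) Small-world (Watts-Strogatz style): ring lattice with random rewiring
def small_world(n, k=4, p=0.1):
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k // 2 + 1):
            j = (i + d) % n
            if rng.random() < p:  # rewire this edge to a random node
                j = rng.randrange(n)
                while j == i or j in adj[i]:
                    j = rng.randrange(n)
            adj[i].add(j)
            adj[j].add(i)
    return adj

# (4) Scale-free (Barabasi-Albert style): preferential attachment
def scale_free(n, m=2):
    adj = {i: set() for i in range(n)}
    pool = list(range(m))  # nodes repeated in proportion to their degree
    for new in range(m, n):
        for t in {rng.choice(pool) for _ in range(m)}:
            adj[new].add(t)
            adj[t].add(new)
        pool.extend(adj[new])
        pool.extend([new] * len(adj[new]))
    return adj

sw, sf = small_world(N), scale_free(N)
```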

The model has two parameters: the persuasiveness *θ*, which is assumed to be the same for all propaganda, and the volatility of opinions *μ*, which is assumed to be the same for all individuals. For some values of *θ* and *μ* and for some randomly assigned initial opinions, individuals are exposed to a total of 128 waves of propaganda (64 supporting each view).

### **3. Results**


The trajectory of a society in terms of its polarisation $\Phi$ and its homophily $\Lambda$ after each round of propaganda shows that, for different network topologies, opinion dynamics yields different states. On a fully connected network, in which there is no relevant network structure, each round of propaganda reaches all individuals (if at least one person trusted it) and seduces some of them based only on their current opinion (**Figure 4**). After some propaganda rounds, most individuals have an opinion close either to $+1$ or to $-1$, so that polarisation is eventually maximum. Also, since all individuals interact with all others, homophily is reduced when polarisation increases. However, on some other topologies, each wave of propaganda has a different impact, particularly after repetition. On a proximity network, most rounds of propaganda tend to increase the level of polarisation, but after repetition, most of the propaganda rounds also increase the level of homophily. Thus, after many rounds, the network has regions with similar (extreme) views and therefore, at a local level, nodes are mostly connected to others with similar views. On a small-world network and a scale-free network, most rounds of propaganda increase the level of polarisation, but the presence of network shortcuts and hubs considerably reduces homophily, so that most of the trajectories are less homophilic than their initial levels after 128 rounds of propaganda.

#### **Figure 4.**

*Trajectories of social polarisation (horizontal axis) and homophily (vertical axis) simulated in four different network topologies. Each realisation for some persuasiveness, θ and opinion volatility μ is marked with a curve. All curves or realisations have a nearby starting point, which marks the polarisation and homophily of a random distribution of opinions. For each topology, the four quadrants with a higher (or lower) polarisation and a higher (or lower) homophily are coloured and the three trajectories with the highest and lowest polarisation and the highest homophily are marked with thick curves.*

For some of the trajectories, it is observed that the first few rounds of propaganda increase the polarisation and decrease the homophily. After many rounds of propaganda, the level of homophily might increase, indicating the formation of clusters of nodes with similar opinions, particularly on a proximity network. In some cases, polarisation might be decreased, but only after homophily has decreased (and not the other way around), meaning that first, the observed changes in opinion dynamics happen at a local level and then, they might be perceived at a global scale. Notice, however, that very few trajectories reach less polarisation than their starting point. Thus, propaganda or similar external forces tend to increase polarisation and frequently will produce a higher level of polarisation than the one observed with a random distribution of opinions.
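Putting the pieces together, a minimal end-to-end sketch (plain Python; a reduced hypothetical population of $N = 200$ on a fully connected network rather than the chapter's $N = 2{,}000$, with $\theta = \mu = 0.5$) shows the drift away from the random baseline $\Phi = 1/3$:

```python
import random

def polarisation(S):
    """Phi(S), Eq. (1): variance of the opinion profile."""
    m = sum(S) / len(S)
    return sum((s - m) ** 2 for s in S) / len(S)

def propaganda_wave(S, neighbours, v, theta, mu, rng):
    """One wave: 1% random seeds are exposed; trusting individuals update
    their opinion (Eq. 6) and expose all their contacts; everyone exposed
    decides at most once (Eqs. 4-5)."""
    N = len(S)
    exposed = set(rng.sample(range(N), max(1, N // 100)))
    frontier = list(exposed)
    while frontier:
        i = frontier.pop()
        if rng.random() < (1 + v * S[i]) * theta / 2:  # Eqs. (4)-(5)
            S[i] = mu * v + (1 - mu) * S[i]            # Eq. (6)
            for j in neighbours[i]:
                if j not in exposed:
                    exposed.add(j)
                    frontier.append(j)

rng = random.Random(1)
N = 200
complete = {i: [j for j in range(N) if j != i] for i in range(N)}
S = [rng.uniform(-1, 1) for _ in range(N)]

print(round(polarisation(S), 2))  # starts near the random baseline, 1/3
for k in range(1, 129):           # 128 waves, alternating views v_k = (-1)^(k+1)
    propaganda_wave(S, complete, v=(-1) ** (k + 1), theta=0.5, mu=0.5, rng=rng)
print(round(polarisation(S), 2))  # ends well above the 1/3 baseline: polarised
```

The exact final value is stochastic; the qualitative outcome, however, matches the trajectories described above for the fully connected network, where polarisation grows towards extremism.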

*Opinion Dynamics and the Inevitability of a Polarised and Homophilic Society*
*DOI: http://dx.doi.org/10.5772/intechopen.96989*

#### **3.1 Parameter space**

The observed levels of polarisation and homophily depend on the persuasiveness of the propaganda *θ* and the opinion volatility *μ*. On a proximity network, for instance, with highly persuasive propaganda (*θ* ≈ 1) and volatile opinions (*μ* ≈ 1), only a few rounds of propaganda produce a highly polarised society with highly homophilic interactions. However, if propaganda is not as seductive or if individuals do not update their views easily, it takes several rounds of propaganda to observe a polarised society (**Figure 5**).

For some values of *θ* and *μ*, there is extreme sensitivity to the parameters. On a proximity network, with higher values of the persuasiveness of propaganda *θ*, society might be alternatively highly polarised or close to a consensus after many rounds of propaganda, with very small changes in the two parameters. Furthermore, even with the same initial opinions and the same values of *θ* and *μ*, society might reach very different levels of polarisation and homophily (right part of **Figure 5**). Individuals exposed to propaganda are picked at random and, according to their opinion, they might be seduced by it and share it with their contacts, or ignore it, thus altering the outcome after that round of propaganda. With only a few waves of propaganda, the outcome might be similar, but those small changes are cumulative, so after many rounds the outcome might be a society close to extremism or even close to consensus, even if the starting point is the same.

#### **Figure 5.**

*Observed levels of polarisation (left) and homophily (middle) on a proximity network according to some values of the persuasiveness of propaganda θ (horizontal axis) and the volatility of opinions μ (vertical axis) after 8, 32 and 128 rounds of propaganda. Higher levels of polarisation and homophily are darker, representing extreme views and a homophilic society respectively; lower levels are lighter, representing consensus and frequent exchanges between people with different views. For the same values of θ = 0.85 and μ = 0.35, with the same initial (random) opinions, 250 realisations of the dynamics follow different trajectories (right), where the levels of polarisation and homophily after 8, 32 and 128 rounds of propaganda are highlighted.*
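The random selection of who is exposed, described above, is what makes realisations diverge. A minimal sketch of one plausible reading of that mechanism follows; the acceptance probability, the size of the shift towards the message and the cascade-style sharing rule are all assumptions rather than the author's exact specification, with *θ* scaling the chance of being seduced and *μ* the size of each opinion update.

```python
import random

def propaganda_round(opinions, neighbours, message, theta, mu, rng):
    """One round of propaganda: 1% of individuals are exposed to a message in
    {-1, +1}; an exposed individual is seduced with a probability that grows
    with theta and with how close their opinion already is to the message.
    If seduced, they move towards the message by mu and share it onwards."""
    n = len(opinions)
    queue = rng.sample(range(n), max(1, n // 100))
    seen = set(queue)
    while queue:
        i = queue.pop()
        closeness = 1.0 - abs(opinions[i] - message) / 2.0
        if rng.random() < theta * closeness:
            opinions[i] += mu * (message - opinions[i])  # stays in [-1, 1]
            for j in neighbours[i]:                      # share onwards
                if j not in seen:
                    seen.add(j)
                    queue.append(j)
    return opinions

# Same ring network, same initial opinions, same theta and mu: only the
# random choice of who is exposed differs between the two realisations.
neighbours = {i: [(i - 1) % 200, (i + 1) % 200] for i in range(200)}
rng0 = random.Random(0)
start = [rng0.uniform(-1, 1) for _ in range(200)]
for seed in (1, 2):
    ops = list(start)
    rng = random.Random(seed)
    for r in range(100):
        ops = propaganda_round(ops, neighbours, 1 if r % 2 == 0 else -1,
                               0.8, 0.3, rng)
    print(f"seed {seed}: mean |opinion| = {sum(map(abs, ops)) / len(ops):.3f}")
```

Alternating the sign of the message each round is also an assumption; the chapter only says that propaganda in favour of both views circulates.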

The first rounds of propaganda decrease the homophily of society, so that people with some extreme view have frequent interactions with others with different views. As the rounds of propaganda evolve, opinion clusters form, and interactions become more and more frequent between individuals with similar views. Thus, even if at a global scale the level of polarisation is increasing, after many rounds of propaganda people might be less aware of the existence and abundance of different views. Extreme opinions might become more frequent because of propaganda. A similar, although less pronounced, polarising and homophilic pattern is frequently observed on a scale-free and a small-world network, although the presence of hubs and shortcuts in the network reduces the creation of opinion clusters (**Figure 6**).

The fully-connected network helps to observe the dynamics of opinions without any relevant network structure. With some level of persuasiveness *θ* and opinion volatility *μ*, society eventually reaches polarisation. With more rounds of propaganda, polarisation increases up to extremism, and only with no persuasiveness (*θ* = 0) or no volatility (*μ* = 0) does society remain without extremism. However, for different network topologies, propaganda might have a different impact. Particularly in the case of a proximity network (with high values of *θ*) and in the case of a scale-free network (with medium values of *θ*), propaganda might increase homophily and, in some cases, reduce polarisation.
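The four topologies can be reproduced with standard generators. Below is a sketch using networkx, where the population size and the generator parameters (connection radius, rewiring probability, attachment parameter) are assumptions tuned so the mean degrees are close to the 7.6 and 10 reported for the simulations.

```python
import networkx as nx

N = 1000  # population size (an assumption; not stated at this point)

topologies = {
    # every pair of individuals may interact: no relevant network structure
    "fully-connected": nx.complete_graph(N),
    # nodes scattered in the unit square, linked when closer than a radius;
    # radius chosen so the mean degree is near the reported 7.6
    "proximity": nx.random_geometric_graph(N, radius=0.05, seed=1),
    # ring with rewired shortcuts, mean degree 10
    "small-world": nx.watts_strogatz_graph(N, k=10, p=0.1, seed=1),
    # preferential attachment with hubs, mean degree close to 10
    "scale-free": nx.barabasi_albert_graph(N, m=5, seed=1),
}

for name, g in topologies.items():
    mean_degree = 2 * g.number_of_edges() / g.number_of_nodes()
    print(f"{name}: mean degree {mean_degree:.1f}")
```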

#### **Figure 6.**

*Observed levels of polarisation (top) and homophily (bottom) according to some values of the persuasiveness of propaganda θ (horizontal axis) and the volatility of opinions μ (vertical axis) after 128 rounds of propaganda. Four network topologies are considered: a fully-connected, a proximity, a small-world and a scale-free network, from left to right.*

*Theory of Complexity - Definitions, Models, and Applications*

### **4. Conclusions**

Social models are a simplification of very complex processes which happen at an individual level, but they might capture some collective emergent aspects. In terms of opinion dynamics, modelling individual views as a number, simplifying external forces such as propaganda, and simulating interactions and a process of opinion updating lets us detect emergent patterns, including an increase in the global level of polarisation and in the frequency of homophilic interactions between individuals.

The network structure plays a significant role, as the emergence of homophilic clusters, in which individuals reinforce their opinions, is detected, particularly on networks where there is a large distance between nodes, such as a proximity network.

The observed results, in terms of the trajectories and the levels of polarisation and homophily after many rounds of propaganda, show that there might be a high sensitivity with respect to the parameters. Two simulations under the same network structure, and even the same initial opinions and parameters, might follow different trajectories and end with substantially distinct levels of homophily and polarisation. The model initially exposes 1% of the population to some propaganda and, depending on who is exposed, the dynamics change and eventually reach very different states. For some regions in the parameter space, the state in which society will be after propaganda is unpredictable.

In the simulated networks, the average degree is 7.6 for the proximity network and 10 for the small-world and the scale-free networks. The intensity of interactions, measured as the degree of the nodes, accelerates or slows the diffusion of propaganda, and thus accelerates or slows polarisation and homophily as well. A less-connected society is more prone to the creation of homophilic clusters.

#### **4.1 What is different between a highly polarised society and one with little polarisation**

In a highly polarised society, individuals become "immune" to propaganda which does not support their views and dismiss it easily, whereas propaganda which supports their views confirms their beliefs and takes them into even more extreme and polarised views. In a polarised society, even with low levels of homophily (meaning that individuals are likely to be exposed to both types of propaganda), individuals eventually become too biased in favour of one of the extreme views, which then becomes too difficult to change.

In a society with low levels of polarisation, views could reach a consensus on one of the two extremes, in which case propaganda in favour of either opinion has little impact. This happens when one of the two views becomes dominant at early stages, in which case individuals also become "immune" to propaganda (and since the first propaganda they are exposed to is +1, that view is slightly more likely to become dominant in the long run).

However, the most frequently observed consensus is one in which barely anyone has extreme views: propaganda in favour of the two views flows between most individuals and they update their opinion accordingly, but not enough to reject further waves of propaganda, so they keep updating their opinion.

#### **Acknowledgements**

This chapter was completed with support from the PEAK Urban programme, funded by UKRI's Global Challenge Research Fund, Grant Ref: ES/P011055/1.

**Conflict of interest**

The author declares no conflict of interest.

**Author details**

Rafael Prieto Curiel
Centre for Advanced Spatial Analysis, University College London, London, UK

\*Address all correspondence to: rafael.prieto.13@ucl.ac.uk

© 2021 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
