3. Modeling and simulation

Like the ability of humans to receive, filter and understand information, a computer agent can receive information about the current conditions in its environment through the code component called the perceptor. Almost without exception, perceptors are message-handling routines whose purpose is to 'observe' the current state of the agent's environment and to filter and translate that information into an internal form usable by the choice-making component of the agent. This internal translation is unique to each agent, which thus allows for agents that interpret the same external message differently. This would be important if different agents had different narrative events triggered by the same external environmental conditions.

Computer Architecture in Industrial, Biomechanical and Biomedical Engineering

The agent's responses to external messages (those placed into the environment as output by other agents, those requiring changes in the internal state vector of the agent, and those requiring the attention of other agents) are managed by the ratiocinator component. The ratiocinator makes choices and adapts the behavior of the agent. This is the part of the agent which replicates how the human interacts with the simulated world in accordance with the then-active narrative structure. How such stochastic choice mechanisms are implemented is within the purview of the agent's ratiocinator.

When it is required that an agent create input to other components of the agent-based simulation, it can issue messages out to the environment by way of what we call an actor. The actor specifically engages the agent environment; messages intended for other agents are detected by those agents as they interact with the environment. Even for the simplest case of the atomic narrative with a single event, the agent components must contain significant data. The perceptor needs to be designed to recognize the occurrence of the event and the perceived state of the environment at the time of the occurrence. The ratiocinator must be programmed to perform one of a set of choice protocols the agent will apply to exercise the choice required by the event, including the availability and allocation conditions of the resources at the agent's disposal. And the actor must have, in its repertoire of possible actions, those that are suitable given the event perception and protocol requirements. If the narrative construct driving the event recognition and intervention is a molecular narrative, with a number of events connected in a time-ordered and contingent network, then each atomic narrative must be delineated as described above. Furthermore, the connections between the component events must also be precisely represented in order to reliably represent the agent's actions.

This definition of agent contains the basic outline of all the pieces of the programmer's art necessary to build and execute a virtual market simulation. At first glance, these requirements may seem onerous. But in cases where the construct has been applied, the problem reduces to a tractable, if perhaps complex, computer programming task. As with all computer programs, two elements are present: the data on which the algorithms operate and the code which executes the algorithms themselves. Looking at the agent definition from that perspective, the state vector holds the data, while the perceptor, ratiocinator and actor consist of algorithms. From experience to date, and from reflection on how a particular agent behavior might be implemented in a variety of other contexts, the programming of the choice protocol set that resides in the ratiocinator seems the most daunting.

Why do we care in the least about narratives? Because they represent, and in fact are the existential structure of, what we humans refer to as models.
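The component structure just described can be made concrete with a short sketch. The class below is purely illustrative (the message format, the state-vector fields and the `buy_protocol` choice rule are assumptions invented for the example, not constructs from the text); it shows a perceptor that filters and translates environment messages, a ratiocinator that applies a stochastic choice protocol, and an actor that places output messages into the environment.

```python
import random

class Agent:
    """Minimal agent: a state vector (the data) plus perceptor,
    ratiocinator and actor components (the algorithms)."""

    def __init__(self, name, state, protocol):
        self.name = name
        self.state = state          # internal state vector
        self.protocol = protocol    # stochastic choice protocol
        self.outbox = []            # messages placed into the environment

    def perceive(self, message):
        """Perceptor: filter an environment message and translate it
        into an internal form unique to this agent."""
        if message.get("event") not in self.state["known_events"]:
            return None             # filtered out: not relevant to this agent
        return {"event": message["event"], "seen_at": message["time"]}

    def ratiocinate(self, percept):
        """Ratiocinator: apply the choice protocol to the percept
        and the current state vector."""
        if percept is None:
            return None
        return self.protocol(self.state, percept)

    def act(self, choice):
        """Actor: emit an output message for other agents to detect."""
        if choice is not None:
            self.outbox.append({"from": self.name, "action": choice})

    def step(self, message):
        self.act(self.ratiocinate(self.perceive(message)))

# A stochastic choice protocol: the outcome is probabilistic, not a
# predetermined rule invocation.
def buy_protocol(state, percept):
    return "buy" if random.random() < state["propensity"] else "wait"

agent = Agent("a1", {"known_events": {"price_drop"}, "propensity": 0.7},
              buy_protocol)
agent.step({"event": "price_drop", "time": 0})
agent.step({"event": "earthquake", "time": 1})   # filtered by the perceptor
print(agent.outbox)
```

Only the first message survives the perceptor's filter, so exactly one action ('buy' or 'wait', chosen stochastically) reaches the outbox.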


Now let us look at that!

Human beings think in terms of mental models of the world around them and their relationship to it [12]. We formulate concepts and ideas and link them together to represent the way we think the world works in some significant regard and use those representations to make decisions on our future actions. These models can range from simple statements of assumed cause and effect—"if I step out in front of a moving bus, there's a good chance I could be seriously hurt"—through physical scale models of buildings or vehicles in their design stages through mathematical representations of complex social or physical systems. Common to models of whatever composition or subject is that they are abstractions, and therefore simplifications, of reality, retaining what are believed to be the salient features of the problem at hand.

We are only concerned with models that are represented mathematically, either in the form of one or more equations or as a computer program. Consider the simple "What if?" scenario analysis applied to the results from a conjoint analysis, or more elaborate "war games" in which competing teams of managers (or MBA students) develop and implement strategies in an interactive fashion. An underlying principle of modeling is the representation of one process or set of processes by a simpler process or set of processes. The Monopoly® board game, for example, reduces a complex economic system to a limited set of transactions. Part of the fun of playing the game derives from the degree to which the outcomes (wealth accumulation and bankruptcy) resemble the outcomes of the real-world process that is, in effect, being simulated.

An important concept regarding simulation is time. This temporal property is an essential characteristic of process-representative simulation and differentiates the application of simulation as an analytic and scientific tool from many other approaches, such as deductive logic or statistical inference. As noted in the previous section, the central role of time is also an integral part of agent-based model simulations. With an express representation of time, the dynamics of a system can be explicitly studied.

In many circumstances, the phenomenon under investigation cannot be ethically or safely subjected to experimentation. The study of disease epidemics, social intolerance and military tactics are obvious examples. In other situations the scale of the process under study prohibits any other approach. In astronomy and astrophysics, the universe is not available for experimental manipulation, but a simulation of important aspects of it is available for such study. Similarly, explorations of cultural development or species evolution cannot be executed within a physical laboratory environment, while a simulation permits hypothesis testing and inference on a reasonable time scale. Finally, some systems are so complex that traditional experimental science seems hopeless as a research approach. Among these systems are ecological dynamics and evolutionary economics.

Figure 6 lays out the "ethnology" of mathematical modeling in general and simulation modeling in particular, starting with the invention of the calculus and continuing up to the present day. (This diagram is adapted from Gilbert and Troitzsch [13].) The two broad categories of stochastic and deterministic simulation models are indicated by the shaded ovals. The boldface labels define the mathematical contexts of the various modeling formalizations, while their genealogy is spelled out with the lines. As illustrated in this diagram, agent-based simulation belongs to the class of stochastic simulations and descends from a form of simulation called discrete-event simulation. Discrete-event simulations are stochastic simulations that attempt to mimic the behavior of the discrete parts of a system and


their interaction. As indicated in Figure 6, a number of other varieties of simulation have evolved from the discrete event form.

Figure 6. The ethnology of mathematical modeling and simulation.

Also contributing to the heritage of agent simulation are game theory, artificial intelligence and cellular automata models. Some of the early applications of agent-based models were in the area of rational game play. Axelrod's [14] experiments with the prisoner's dilemma game are the classic example and are cited by virtually every practitioner of agent modeling somewhere in their writings. Artificial intelligence models are a key component of the development of robotic systems (Wooldridge [15] is a chief proponent). If the behavior modeled by a discrete event simulation contains stochastic elements, as it usually does, then the simulation can be run repeatedly using random numbers generated from the relevant probability distributions, and the distributional characteristics of the resulting dynamics portrayed. In fact, for a system of even moderate scope this is the only way such dynamics can be validly studied; the dynamics quickly exceed what ordinary or stochastic partial differential equations can express in any reasonable closed form.
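The replication idea can be sketched with a toy discrete-event simulation. The single-server queue below is a hypothetical example (the arrival and service rates are arbitrary choices, not values from the text): each run draws random interarrival and service times, and repeated runs with different random streams yield the distribution of an output statistic rather than a single closed-form answer.

```python
import random

def run_queue(n_customers, arrival_rate, service_rate, seed):
    """One discrete-event run of a single-server queue.
    Returns the mean waiting time over n_customers."""
    rng = random.Random(seed)
    t_arrival = 0.0
    server_free_at = 0.0
    total_wait = 0.0
    for _ in range(n_customers):
        t_arrival += rng.expovariate(arrival_rate)   # next arrival event
        start = max(t_arrival, server_free_at)       # wait if server is busy
        total_wait += start - t_arrival
        server_free_at = start + rng.expovariate(service_rate)
    return total_wait / n_customers

# Repeated runs with different random streams: we obtain the
# distributional characteristics of the dynamics, not a single number.
waits = [run_queue(500, arrival_rate=0.8, service_rate=1.0, seed=s)
         for s in range(100)]
waits.sort()
print(f"median wait {waits[50]:.2f}, 90th percentile {waits[90]:.2f}")
```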

Mathematical models can be classified into two forms: reductive models and structural models. In a reductive model, a set of data is reduced to one or more equations or similar relationships that represent the data in a simpler, more parsimonious form. There is some loss of accuracy, the price paid for greater understandability. Linear regression is a ready example, in which a set of pairs (or n-tuples) of data is replaced with an equation expressing one element of the set, called the dependent variable, as a linear combination of the other n-1 elements, called the independent variables. Structural models, on the other hand, are intended to be representations that not only reproduce patterns of observed data, but also characterize the process, or structure, by which the variables represented by the data relate to one another. Reductive models are not required to replicate the process by which the observed data is generated. They need only reproduce the observed results in a more economical and parsimonious way. Structural models focus on the way in which the observed values come about.
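A minimal instance of a reductive model: the ordinary least-squares fit below reduces one hundred noisy (x, y) pairs to a two-parameter linear equation. The generating process and noise level are invented for illustration.

```python
import random

# Generate observations from a hidden process y = 2x + 1 plus noise.
rng = random.Random(42)
xs = [i / 10 for i in range(100)]
ys = [2 * x + 1 + rng.gauss(0, 0.1) for x in xs]

# Reduce the 100 data pairs to two parameters (slope, intercept)
# via ordinary least squares.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x
print(f"y = {slope:.2f}x + {intercept:.2f}")
```

The fitted parameters recover the hidden slope and intercept closely, but the equation says nothing about the process that generated the data; that is exactly the reductive trade-off discussed above.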

Human Behavior Modeling: The Necessity of Narrative DOI: http://dx.doi.org/10.5772/intechopen.86686

A good deal of applied mathematics and a substantial part of modern statistics is aimed at easing the problem of the generation of reductive models. Beyond linear regression, there are literally thousands of other applied techniques: Fourier series [16], time series analysis [17], discrete choice modeling (e.g., [18, 19]), and proportional hazard models [20], to name only a very few. Indeed, the so-called Stone-Weierstrass theorem in classical abstract analysis [21] describes a set of general conditions under which an arbitrary data set can be approximated to any desired degree of accuracy by a broad class of simpler functional forms. And reductive models are powerful and extremely useful tools. In econometrics, for example, reductive models are widely used for forecasting [22, 23]. For a thorough treatment of the structures and formal aspects of mathematical models, see Chang and Keisler [24].
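The flavor of the Stone-Weierstrass result can be seen constructively with Bernstein polynomials, which approximate any continuous function on [0, 1] to arbitrary accuracy as the degree grows. The target function below is an arbitrary choice for illustration.

```python
import math

def bernstein(f, n, x):
    """Value at x of the degree-n Bernstein polynomial of f on [0, 1]."""
    return sum(f(k / n) * math.comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

# An 'arbitrary' continuous function on [0, 1].
f = lambda x: math.sin(3 * x) * math.exp(-x)

errors = []
for n in (4, 16, 64, 256):
    max_err = max(abs(bernstein(f, n, i / 100) - f(i / 100))
                  for i in range(101))
    errors.append(max_err)
    print(f"n = {n:3d}: max error {max_err:.4f}")
```

The maximum error shrinks as the polynomial degree increases, which is the Weierstrass approximation guarantee made concrete.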

But in the "harder" sciences like physics, reductive models give way to structural models. Newton's laws of motion, or Einstein's theories of relativity, are mathematical constructs which purport not only to represent the results of data sets arising from observations of natural phenomena, but also to capture how the objects in the natural world interact with each other to generate the observed data. That is, the models do not merely represent the observations, but also describe the process by which the observations come about. As such, they carry more explanatory weight than reductive descriptions. They are more likely to be valid beyond the range of the initial observed data sets that led to their formulation. Also, they have repeatedly been shown to be robust across varied data sets and when connected to other models which represent related systems.

Simulation is one of the more powerful structural modeling techniques. A simulation is a model which, by design, represents the relationships between entities in a system. In particular, a simulation captures the dynamics of the relationships between components of a system, reflecting how changes in one component create changes in, and affect the response of, other components. Humphreys [25], in fact, defines a simulation as a structural model that explicitly includes time as a dimension, so that the dynamics between the variables described in the model can be appropriately portrayed.<sup>1</sup> The increasingly widespread use of simulation also is tied to the development of computing power.

Note that the use of the word simulation here as a modeling technique should not be confused with its use in describing certain numerical methods for finding solutions to equations. For example, computing the volume of a complex solid in a multi-dimensional space, such as the region under a multivariate normal probability density, can be done by generating a very large number of points in the relevant space and determining the ratio of those inside the solid to those outside it.
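A minimal sketch of that point-sampling computation, estimating the volume of the unit ball in three dimensions (the sample size and seed are arbitrary):

```python
import random

# Estimate the volume of the unit ball in 3 dimensions by sampling
# points uniformly in the enclosing cube [-1, 1]^3 and taking the
# fraction that lands inside the ball.
rng = random.Random(1)
n = 200_000
inside = 0
for _ in range(n):
    x, y, z = (rng.uniform(-1, 1) for _ in range(3))
    if x * x + y * y + z * z <= 1.0:
        inside += 1

cube_volume = 2.0 ** 3
estimate = cube_volume * inside / n
print(f"estimated volume {estimate:.3f} (exact 4*pi/3 = 4.189)")
```

This is "simulation" only in the equation-solving sense just described; no process dynamics, and in particular no time dimension, are being modeled.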

Modeling and simulation that incorporate a definition of time (unidirectionality, non-repeatability, uncontrollability) are representations of narratives. Fisher incorporates the rationality of traditional logic into the narrative paradigm (with a somewhat critical slant):

"Narrative rationality is not simply an account of the 'laws of thought,' nor is it normative in the sense that one must reason according to prescribed rules of calculation or inference making. Traditional rationality prescribes the way people should think when they reason truly or toward certainty. … Traditional rationality is, therefore, a normative construct. Narrative rationality is, on the other hand, descriptive; it offers an account, an understanding, of any instance of human choice and action, including science." [6].

<sup>1</sup> He uses time as the dimension carrying the dynamics, primarily because of its unidirectionality. Other dimensions, such as spatial coordinates, could also be used, but they do not necessarily have this unidirectional property.

there can be no human universals. (For example, by taking a contrary view, Brown

Narratives are also the mechanisms by which agents communicate with each other—so-called shared narratives—and understand the world around them. They

compounded from sequences of atomic events (are molecular narratives) and thus the choice process of the atomic narrative is the key focus. The set of choice protocols can be classified into four broad groups: (1) rational methods, including various concepts of bounded rationality, (2) heuristics, which are quick and easy (but often very inaccurate) rules-of-thumb, (3) social network protocols, which rely on communications between individuals to make choices, and (4) biases, which

Consider what are usually referred to as protocols for rational choice. Perhaps the easiest of these would be rule-invocation methods. This is the situation where the agent making the choice has an available set of rules, and, depending on value of the state space when the event is encountered, one or more of these rules are used to determine the choice. Rule-based agents are widely used in agent-based models. Axelrod [14], Epstein [28], and Wooldridge [15] insist that rule-based agency is the wisest course of agent construction. This press for simplicity is in response to the need to explore and understand some of the unusual emergent results that are observed with agent-based models. A complicated agent structure makes analysis of such emergent structure much more arduous. And such a protocol is trivial to build into an agent. Merely specify the action to be engaged for each appropriate set of state space variable values. But this is not a choice protocol, since the outcome of the choice is predetermined by the rule set. It's a pre-defined action invocation, and since there is no probability associated with the rule-invocation, there can be no associated narrative event. Therefore, this kind of choice mechanism is not within the purview of agent-based models as defined here, which require such a stochastic

The classic statistical decision problem is perhaps the oldest, and most 'rational' of this class of choice protocols. The fundamental problem of statistical decision theory is to select a possible action from a set of actions that minimizes expected loss. The loss function L að Þ ; θ associates a real-valued number (the loss) with an action a in some set of possible actions A and a state-space value θ ∈ Θ (referred to as the state of nature in the statistical literature). The triple ð Þ Θ; A; L represents the statistical decision problem. Generally, the choice at hand is which value of θ represents the "true" state of nature. Nominally, there exists empirical data

<sup>4</sup> Specifically, by pointing out that adolescents in Samoa indeed led stressful lives, just like teenagers

need not, however, be true descriptions of the world. All narratives are

for this analysis because, if there were no human traits that were independent of culture, then the problem of simulating human behavior with agents becomes extensively more difficult. In that case every culture would have a unique heritage and historical path, making generalization very difficult. Brown's refutation of the relativistic view of human behavior is therefore valuable to the arguments justifying agent modeling in computer science. If we cannot characterize human behavior in some reasonably perspicacious and parsimonious way, the task of defining human

) This position is important

contradicted the great anthropologist Margaret Mead.4

Human Behavior Modeling: The Necessity of Narrative DOI: http://dx.doi.org/10.5772/intechopen.86686

simulation agents will be significantly more onerous.

are significant errors often found in choice-making.

4. Rational choice protocols

mechanism.

everywhere else in the world.

85

How does this narrative construct relate in the modeling of human agents? As was stated earlier, the narrative provides the mechanism for separating reality from what the agent thinks is reality. That is, it defines the context, values and resources required to change the realization of a narrative to a desired outcome. Many consider individual narrative discovery as the central problem of marketing research, as exemplified by the strong methodological presence of ethnology in some marketing research quarters, such as ESOMAR (a European society of marketing research). It is clear that the choice process is a vital aspect of any such description. In fact, the narrative framework provides an ontological justification for pursuing the study of choice as the critical component of replicating the behavior of humans in agentbased models. It is not necessary to know the full constellation of narratives maintained by an individual to incorporate the concept into agent models. It is often sufficient, at least as a starting point, to model a few essential atomic narratives.

Narratives are needed for the construction of agents in a computing context because the distinction between the reality of the environment that an agent finds itself in and the perception and interpretation of that environment by the agent must be kept clear. Formally speaking, all narratives are models.2 And, since a narrative is a sequence of events, there exists a finite sequence {1, 2, 3, …, k} such that each member of which has an associated stochastic process Ei = PΛ(<sup>i</sup>)(Y|Xi). So a narrative can be expressed as the ordered k-tuple (E1, E2, E3, …, Ek), representing the possible outcomes of the narrative execution.

Narratives are to a significant degree a product of evolutionary history. As we observe human society across multiple cultures we should repeatedly encounter common behavioral attributes, since the environment in which humans have evolved have many essential elements in common. Moreover, we can then assert a behavioral universality that would support hypotheses that could be tested across cultures. Finally, if such universality can be supported, then the construction of agents that replicate important behavioral characteristics can be expected to be applicable in a wide range of contexts.

A thorough cataloging of human behavior patterns in cultures around the world has been assembled by the anthropology community. The University of Illinois at Urbana-Champaign maintains the Human Relations Area Files (HRAF), an organized and indexed compilation of every reported ethnographic study of human culture (UIUC [26]). Many scholars have used this resource in recent years to undertake cross-cultural studies of human behavior. Brown [27] has compiled a list of patterns of behavior that have been recorded in every human society that has been studied—anywhere in the world, large or small, old or modern. Brown focused on the hypothesis that a number of features of human culture would be found in all human experience, regardless of time, place, or history. When originally published, his views were sharply opposed to the prevailing anthropological wisdom. At the heart of the controversy was the nature-nurture debate, which still circulates actively today. Brown spent considerable space refuting the concept of cultural relativism, which holds that human cultures are vastly different with limitless variety.<sup>3</sup> Further, culture completely determines human behavior, and therefore

<sup>2</sup> But not all models are narratives. There is no need for a time dimension in every case.

<sup>3</sup> Cultural relativism per se became prominent in post-World War II sociological research largely in an attempt to refute the naïve "survival of the fittest" philosophy exemplified by the German Nazi era.

Human Behavior Modeling: The Necessity of Narrative DOI: http://dx.doi.org/10.5772/intechopen.86686

there can be no human universals. (For example, by taking a contrary view, Brown contradicted the great anthropologist Margaret Mead.4 ) This position is important for this analysis because, if there were no human traits that were independent of culture, then the problem of simulating human behavior with agents becomes extensively more difficult. In that case every culture would have a unique heritage and historical path, making generalization very difficult. Brown's refutation of the relativistic view of human behavior is therefore valuable to the arguments justifying agent modeling in computer science. If we cannot characterize human behavior in some reasonably perspicacious and parsimonious way, the task of defining human simulation agents will be significantly more onerous.

### 4. Rational choice protocols

rationality is, therefore, a normative construct. Narrative rationality is, on the other hand, descriptive; it offers an account, an understanding, of any instance

maintained by an individual to incorporate the concept into agent models. It is often sufficient, at least as a starting point, to model a few essential atomic narratives. Narratives are needed for the construction of agents in a computing context because the distinction between the reality of the environment that an agent finds itself in and the perception and interpretation of that environment by the agent must be kept clear. Formally speaking, all narratives are models.2 And, since a narrative is a sequence of events, there exists a finite sequence {1, 2, 3, …, k} such that each member of which has an associated stochastic process Ei = PΛ(<sup>i</sup>)(Y|Xi). So a narrative can be expressed as the ordered k-tuple (E1, E2, E3, …, Ek), representing the possible

Narratives are to a significant degree a product of evolutionary history. As we observe human society across multiple cultures we should repeatedly encounter common behavioral attributes, since the environment in which humans have evolved have many essential elements in common. Moreover, we can then assert a behavioral universality that would support hypotheses that could be tested across cultures. Finally, if such universality can be supported, then the construction of agents that replicate important behavioral characteristics can be expected to be

A thorough cataloging of human behavior patterns in cultures around the world has been assembled by the anthropology community. The University of Illinois at Urbana-Champaign maintains the Human Relations Area Files (HRAF), an organized and indexed compilation of every reported ethnographic study of human culture (UIUC [26]). Many scholars have used this resource in recent years to undertake cross-cultural studies of human behavior. Brown [27] has compiled a list of patterns of behavior that have been recorded in every human society that has been studied—anywhere in the world, large or small, old or modern. Brown focused on the hypothesis that a number of features of human culture would be found in all human experience, regardless of time, place, or history. When originally published, his views were sharply opposed to the prevailing anthropological wisdom. At the heart of the controversy was the nature-nurture debate, which still circulates actively today. Brown spent considerable space refuting the concept of cultural relativism, which holds that human cultures are vastly different with limitless variety.<sup>3</sup> Further, culture completely determines human behavior, and therefore

<sup>2</sup> But not all models are narratives. There is no need for a time dimension in every case.

<sup>3</sup> Cultural relativism per se became prominent in post-World War II sociological research largely in an attempt to refute the naïve "survival of the fittest" philosophy exemplified by the German Nazi era.

How does this narrative construct relate to the modeling of human agents? As was stated earlier, the narrative provides the mechanism for separating reality from what the agent thinks is reality. That is, it defines the context, values and resources required to change the realization of a narrative to a desired outcome. Many consider individual narrative discovery the central problem of marketing research, as exemplified by the strong methodological presence of ethnography in some marketing research quarters, such as ESOMAR (the European Society for Opinion and Marketing Research). It is clear that the choice process is a vital aspect of any such description. In fact, the narrative framework provides an ontological justification for pursuing the study of choice as the critical component of replicating the behavior of humans in agent-based models. It is not necessary to know the full constellation of narratives


Narratives are also the mechanisms by which agents communicate with each other (so-called shared narratives) and understand the world around them. They need not, however, be true descriptions of the world. All narratives are compounded from sequences of atomic events (that is, they are molecular narratives), and thus the choice process of the atomic narrative is the key focus. The set of choice protocols can be classified into four broad groups: (1) rational methods, including various concepts of bounded rationality; (2) heuristics, which are quick and easy (but often quite inaccurate) rules of thumb; (3) social network protocols, which rely on communications between individuals to make choices; and (4) biases, which are systematic errors frequently found in choice-making.

Consider what are usually referred to as protocols for rational choice. Perhaps the simplest of these are rule-invocation methods: the agent making the choice has an available set of rules and, depending on the value of the state space when the event is encountered, one or more of these rules are used to determine the choice. Rule-based agents are widely used in agent-based models. Axelrod [14], Epstein [28], and Wooldridge [15] insist that rule-based agency is the wisest course of agent construction. This push for simplicity is in response to the need to explore and understand some of the unusual emergent results that are observed with agent-based models; a complicated agent structure makes analysis of such emergent structure much more arduous. And such a protocol is trivial to build into an agent: merely specify the action to be engaged for each appropriate set of state-space variable values. But this is not a choice protocol, since the outcome of the choice is predetermined by the rule set. It is a predefined action invocation, and since there is no probability associated with the rule invocation, there can be no associated narrative event. Therefore, this kind of choice mechanism is not within the purview of agent-based models as defined here, which require such a stochastic mechanism.
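As a concrete illustration, a rule-invocation mechanism of the kind described above can be sketched in a few lines. The agent structure, rule predicates, and action names here are hypothetical, not drawn from the cited authors:

```python
# Minimal sketch of rule invocation: each rule pairs a predicate over the
# state space with a predetermined action; the first matching rule fires.
from typing import Any, Callable, Dict, List, Tuple

State = Dict[str, Any]

class RuleBasedAgent:
    def __init__(self) -> None:
        self.rules: List[Tuple[Callable[[State], bool], str]] = []

    def add_rule(self, predicate: Callable[[State], bool], action: str) -> None:
        self.rules.append((predicate, action))

    def act(self, state: State) -> str:
        for predicate, action in self.rules:
            if predicate(state):
                return action  # outcome is predetermined: no stochastic choice
        return "no-op"

agent = RuleBasedAgent()
agent.add_rule(lambda s: s["price"] > 100, "decline")
agent.add_rule(lambda s: s["price"] <= 100, "purchase")
```

As the text observes, this is a deterministic action lookup rather than a stochastic choice protocol: given the same state, the same action always results.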

The classic statistical decision problem is perhaps the oldest, and most 'rational', of this class of choice protocols. The fundamental problem of statistical decision theory is to select a possible action from a set of actions so as to minimize expected loss. The loss function L(θ, a) associates a real-valued number (the loss) with an action a in some set of possible actions A and a state-space value θ ∈ Θ (referred to as the state of nature in the statistical literature). The triple (Θ, A, L) represents the statistical decision problem. Generally, the choice at hand is which value of θ represents the "true" state of nature. Nominally, there exists empirical data

<sup>4</sup> Specifically, by pointing out that adolescents in Samoa indeed led stressful lives, just like teenagers everywhere else in the world.

represented by the random variable X (which could be a multidimensional entity), the probability distribution of which, $P\_\theta(x)$, depends on this true state of nature. A decision rule d maps a given value of the random variable X, say x, to one of the actions in A; that is, $d(x) \in A$. The loss is therefore the random quantity $L(\theta, d(x))$, and the expected value of $L(\theta, d(X))$, when θ is the actual state of nature, is called the risk function

$$R(\theta, d) = \mathbb{E}[L(\theta, d(X))] = \int\_{\mathbf{x} \in X} L(\theta, d(X)) dP\_{\theta}(\mathbf{x}).\tag{1}$$
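When the integral in Eq. (1) is not available in closed form, the risk can be approximated by Monte Carlo sampling. A minimal sketch, assuming (for illustration only) X ~ Normal(θ, 1), squared-error loss, and the rule d(x) = x:

```python
# Monte Carlo estimate of the risk R(theta, d) = E[L(theta, d(X))] of Eq. (1).
# The distribution, loss, and rule below are illustrative assumptions.
import random

def risk(theta, d, loss, n=100_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(theta, 1.0)   # draw X from P_theta
        total += loss(theta, d(x))  # accumulate L(theta, d(X))
    return total / n

# For d(x) = x under squared-error loss, R = E[(theta - X)^2] = Var(X) = 1
r = risk(theta=2.0, d=lambda x: x, loss=lambda t, a: (t - a) ** 2)
```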


Human Behavior Modeling: The Necessity of Narrative DOI: http://dx.doi.org/10.5772/intechopen.86686


The choice problem is to select the decision rule d from the set of all possible decision rules D that minimizes R. If it is assumed that each d ∈ D is such that, for each θ ∈ Θ, the distribution function $F\_X(\mathbf{x} \mid \theta)$ is continuous on a set of probability one, then the above is the Lebesgue integral

$$R(\theta, d) = \int\_{\mathbf{x} \in X} L(\theta, d(X)) dF\_X(\mathbf{x}|\theta). \tag{2}$$

Ferguson [29] delineates a conceptualization of much of the field of statistics based on this definition, coupling it with a game-theoretic construction. In fact, the triple (Θ, A, L) is a formal game in this sense.<sup>5</sup> It is easy to see that this is a quite well-defined problem. However, implementing that definition in a specific context can be a considerable endeavor. On the other hand, creating a computer routine to implement a statistical decision rule in an agent does not seem to be conceptually prohibitive, although it might require significant time and resources to build and execute.

In a general sense, all "rational" choice protocols are described by the statistical decision process defined above. Indeed, some would consider this formulation the axiomatic definition of a rational decision. This general formulation says nothing about the nature of the decision rules d ∈ D: they could be hugely complex or trivially simple. Moreover, the set of actions A can be finite or infinite. Much more familiar to the economics community is the choice process described in the random utility discrete choice case. The general formulation of this family of protocols is as follows. There exists a finite set J of possible choices, with #(J) being the number of elements in J. Assume that the choices and the choosers are characterized by a vector of variables $\mathbf{z}\_{ij}$ for decision-maker i and choice j. Each decision-making agent has an associated real-valued function $U\_i(\mathbf{z}\_{ij}): J \to \mathbb{R}$ that assigns a utility to each choice. The alternative with the highest value of $U\_i$ is defined as the choice made, that is, the value $j^\*$ for which

$$U\_i(\mathbf{z}\_{ij^\*}) = \max\_{j \in J} U\_i(\mathbf{z}\_{ij}) \tag{3}$$

This utility function is assumed to have an observable part $V\_i(\mathbf{z}\_{ij})$ and a stochastic component $\varepsilon\_i(\mathbf{z}\_{ij})$, and is therefore written as:

<sup>5</sup> Ferguson goes on to justify the application of Bayes' theory to statistics with his treatise, arguing that this game-theoretic perspective created a persuasive demonstration of the superiority of Bayesian statistical analysis. The text was written amid the roiling Bayes versus frequentist debate within the statistics community prominent in the latter half of the twentieth century, and which chugs along today with a steady, if tedious, background din.


$$U\_i(\mathbf{z}\_{ij}) = V\_i(\mathbf{z}\_{ij}) + \varepsilon\_i(\mathbf{z}\_{ij}). \tag{4}$$

Very often the $\varepsilon\_i(\mathbf{z}\_{ij})$ terms are assumed to follow an Extreme Value Type 1 distribution with common location parameter γ (which, without loss of generality, can be set to zero) and common scale parameter μ. It can then be shown ([17], p. 106) that

$$P\_i(j^\*) = \frac{e^{\mu V\_i(\mathbf{z}\_{ij^\*})}}{\sum\_{j \in J} e^{\mu V\_i(\mathbf{z}\_{ij})}}. \tag{5}$$
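The probabilities of Eq. (5), and the stochastic choice they drive, are straightforward to implement. A sketch with illustrative utilities, assuming μ = 1:

```python
# Logit choice probabilities per Eq. (5), followed by a stochastic draw:
# an alternative is selected by walking the cumulative probabilities with
# a uniform random number. Utility values below are illustrative.
import math
import random

def logit_probabilities(V, mu=1.0):
    expV = [math.exp(mu * v) for v in V]
    total = sum(expV)
    return [e / total for e in expV]

def draw_choice(probs, rng):
    u = rng.random()          # uniform draw on [0, 1)
    cumulative = 0.0
    for j, p in enumerate(probs):
        cumulative += p
        if u < cumulative:
            return j
    return len(probs) - 1     # guard against floating-point shortfall

probs = logit_probabilities([1.0, 2.0, 3.0])
choice = draw_choice(probs, random.Random(0))
```

Note that the draw step, not the probabilities alone, is what makes this a stochastic choice protocol in the sense required by the narrative framework.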

The general discrete choice problem is based on the assumption that there exists a set of alternatives J with a finite number of elements #(J). Furthermore, the agent making the choice has determined a complete preference order ≺ over the elements of J. A complete ordering on a set is a relation having the following conditions [the idea is easier to understand by writing a ≺ b for (a, b) ∈ ≺, read as "a is less than or equal to b"]: (1) antisymmetry: if a, b ∈ J and a ≺ b and b ≺ a, then a = b; (2) reflexivity: for every a ∈ J it is true that a ≺ a; (3) transitivity: if a ≺ b and b ≺ c, then a ≺ c; and (4) completeness: for every pair a, b ∈ J it is true that either a ≺ b or b ≺ a. If the preference order fails to meet these conditions, then the utility function does not necessarily exist, and the discrete choice problem cannot be formulated in a utility maximization context.

However, human beings are not bound by the definitions of preference orders. Non-transitive, circular orderings are common. For example, when Mary is asked to choose between chocolate and vanilla ice cream, she selects chocolate. When asked her preference between vanilla and strawberry, she chooses vanilla, and when asked her choice between strawberry and chocolate, she prefers strawberry. The ordering is not transitive. This situation occurs because humans generally determine orderings pair-wise over some (possibly quite short) period of time, and the circular inconsistency is quite easy to manage if the time of the preference comparison can vary. Moreover, there is no reason why the completeness property needs to be met in real-life situations. (An ordering that does not meet the completeness condition is termed a partial ordering.) Fortunately, it is possible to derive a set of completely ordered sets from any partially ordered set (by considering each completely ordered set as a separate entity, and ignoring singleton sets), so the utility maximization problem reduces to a bookkeeping issue (assuming there is sufficient data to estimate the models that might arise). In some cases, the collection of completely ordered sets can be represented in a hierarchy. But the point is that agent designers do not need to insist on complete choice sets. In fact, any kind of pairing relationship can be used as a preference ordering, and each can have a unique (empirically derived) utility function.
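The circular pattern in the ice-cream example can be detected mechanically from a list of pairwise preferences. A small sketch; the representation of preferences as (worse, better) pairs is our own convention, not from the source:

```python
# Detect a circular (non-transitive) triad in a set of pairwise preferences.
# A pair (a, b) records that b was preferred to a in one comparison.
from itertools import permutations

def has_circular_triad(prefs):
    better = set(prefs)
    items = {x for pair in prefs for x in pair}
    for a, b, c in permutations(items, 3):
        # a beaten by b, b beaten by c, and c beaten by a: a cycle
        if (a, b) in better and (b, c) in better and (c, a) in better:
            return True
    return False

# Mary's choices: chocolate over vanilla, vanilla over strawberry,
# strawberry over chocolate -- a circular triad.
mary = [("vanilla", "chocolate"), ("strawberry", "vanilla"),
        ("chocolate", "strawberry")]
```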

The domain of rational choice models is not exhausted by the utility maximization of a discrete choice structure. Indeed, most choice situations are not even discrete, often requiring the selection of a parameter vector from a multi-dimensional real-valued vector space. Other methods are called upon here. Bayesian statistics have faded in and out of fashion over the past two centuries. Of special note are simulation approaches to statistical parameter estimation, confidence interval determinations, and hypothesis testing. In this context, the meaning of the word simulation is somewhat different from when it is applied to an agent-based model. What is referred to in this case is actually artificial sampling, where data values are generated with a computer from a known probability distribution, so that complex and otherwise intractable parameter estimates can be determined without expensive, perhaps even impossible, data collection.
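A minimal sketch of artificial sampling in this sense: the sampling distribution of the median of an exponential sample, which is awkward to derive analytically, is approximated by generating synthetic data from the known distribution. The rate, sample size, and replication count are illustrative:

```python
# Artificial sampling: approximate the sampling distribution of the median
# of an exponential(rate) sample by repeated synthetic draws.
import random
import statistics

def simulated_medians(rate=1.0, n=25, reps=2000, seed=1):
    rng = random.Random(seed)
    meds = []
    for _ in range(reps):
        sample = [rng.expovariate(rate) for _ in range(n)]
        meds.append(statistics.median(sample))
    return meds

meds = simulated_medians()
center = statistics.mean(meds)  # should be near ln(2)/rate ~ 0.693
```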

The significant power for efficient allocation of resources in aid of narrative fulfillment represented by these rational protocols gives them superior positions in the pantheon of choice processes. It is this superior performance, historically incontrovertible, that suggests an ever-widening venue of application. But most humans cannot enlist the aid of these methods without extensive training and the assistance of a variety of other parties. Even as powerful as they are, they are bounded by the time and other resources required for their utilization in the face of the urgency and importance of the particular choice problem at hand. That is, they are examples of bounded rationality, in spite of what they may seem. Bounded rationality is almost always portrayed in contrast to the messianic alternative of the demonic methods noted by Gigerenzer. This "Laplacian Daemon" is the all-seeing, all-knowing supreme intelligence that can solve any resource allocation problem and select the globally best option for all individuals for all time. Fortunately, though the argument lies beyond the scope of this discussion, it cannot exist.


All of the rational choice mechanisms mentioned above are available for the specification of agent choice protocols. All have computer programs that fully specify how they should be executed, what the data should look like, how the results should be presented and the limitations on and conditions of their application. Moreover, it is clear that a number of organizations and institutions make extensive use of these methods. Companies routinely use operations research for a variety of optimal resource allocation tasks. Perhaps one of the more interesting and successful applications of operations research is the revenue management process used in the sale of tickets in the airline industry, now being extended to similar perishable goods such as hotel rooms, rental cars, and theater seats.

But the individual human being does not routinely engage such mechanisms in making choices. In fact, as noted above, they are very likely reductive rather than structural models, and therefore may describe no actual process found in the real world. There is no evidence that human beings actually make routine choices using any of these tools. Humans tend to employ much simpler approaches to day-to-day choices, and in many instances extend these simple protocols to serious, far-reaching and life-changing circumstances where the more sophisticated, rational methods would seem to be called for. Given that an agent model of the human decision-maker must describe what the modeled human actually does, and not what it could or should do, these less rigorous and more ad hoc choice protocols must also be made available to the human agent modeler.


5. An example: the AirMarkets simulator flight choice model

A specific example of an agent-based, rational choice model is the AirMarkets Simulator, a representation of the narrative structure and related choice protocol that portrays the behavior of customers selecting from a set of alternative air flight options. Because the flights available at any point in time are partially a function of the choices made by others previously, a run of the simulation consists of all individuals (or groups traveling together) in the world traveling on commercial air service over a week's time period, with travel bookings starting 120 days before the subject week. (That time period ensures that no flight is unavailable at any time before the 120-day booking period, so all individuals booking before the start can be processed at once.) About 27 million traveling individuals or parties are booked in each simulation run, comprising about 42,000,000 individual customers.

Each travel party chooses from all available services connecting the desired origin to the desired destination. The alternative is chosen using a random number generator to produce a value between 0 and 1; the probabilities associated with the available options are then accumulated in arbitrary order to determine which option is selected by that particular customer. The utility shown in Eq. (4) above has the following form for the value of V(i, j), where i is the indicator of the customer and j the indicator of the air travel option:

$$\begin{aligned} V(i,j) &= \beta\_f(i)\ln f(j) + d(j)\left[\beta\_d(i) + \beta\_{bd}(i)\ln d\_{base}\right] \\ &\quad + \beta\_{dc}(i)N\_{dc}(j) + \beta\_{ic}(i)N\_{ic}(j) \\ &\quad + \beta\_{1st}(i)X\_{1st}(j) + \beta\_{ec}(i)X\_{ec}(j) + G(\tau(i) - t(j)) \end{aligned} \tag{6}$$

In this equation, the β's are coefficients that reflect the values the traveler with index i places on the attributes of flight option j. For example, $\beta\_f(i)$ is the value for traveler i with respect to the fare f(j) associated with flight option j. Other important flight attributes include the travel time d(j), the shortest available flight time $d\_{base}$, the numbers of stops on the flight associated with allied airlines $N\_{dc}(j)$ and non-allied carriers $N\_{ic}(j)$, and the cabin class of the flight option (first or economy). The β coefficients are estimated from extensive empirical data collected for that purpose.
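The observable utility of Eq. (6) is a simple weighted sum and can be sketched directly. All coefficient values and flight attributes below are illustrative assumptions, not the empirically estimated AirMarkets values:

```python
# Sketch of the observable utility V(i, j) of Eq. (6) for one traveler/option
# pair. Coefficients (b_*) are placeholders for the estimated beta values.
import math

def observable_utility(fare, d, d_base, n_dc, n_ic, first_class, G_value,
                       b_f=-1.2, b_d=-0.004, b_bd=0.0005, b_dc=-0.3,
                       b_ic=-0.6, b_1st=0.8, b_ec=0.0):
    v = b_f * math.log(fare)                    # fare term, beta_f * ln f(j)
    v += d * (b_d + b_bd * math.log(d_base))    # travel-time term
    v += b_dc * n_dc + b_ic * n_ic              # stops: allied / non-allied
    v += b_1st if first_class else b_ec         # cabin-class dummies
    v += G_value                                # schedule-delay term G
    return v

v = observable_utility(fare=350.0, d=180.0, d_base=150.0,
                       n_dc=1, n_ic=0, first_class=False, G_value=-0.5)
```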

The function G(τ(i) – t(j)) is peculiar in the sense that it represents the desirability of the departure or arrival time of a flight option. (Note that either departure or arrival time is dominant, since the actual flight time cannot be altered by the traveler.) The passenger does not care whether a flight takes off (or arrives) anywhere within a window around the desired time bounded by the two values a and b. Outside this window, the further the time lies from the a or b boundary, the less desirable the option is. The function G is referred to as a Box-Cox formulation, and is of the following form:

$$G(\tau(i) - t(j)) = \begin{cases} \theta_E(i)\,\dfrac{(t(j) - \tau(i) - a + 1)^{\lambda_E} - 1}{\lambda_E} & \tau(i) - t(j) < -a \\ 0 & -a \le \tau(i) - t(j) \le b \\ \theta_L(i)\,\dfrac{(\tau(i) - t(j) - b + 1)^{\lambda_L} - 1}{\lambda_L} & \tau(i) - t(j) > b \end{cases} \tag{7}$$

In this equation, t(j) is the departure (arrival) time of flight option j, τ(i) is the desired departure (arrival) time by traveler i, βE(i) and βL(i) are coefficients associated with the traveler i if the departure (arrival) is early E or late L, and λ<sup>E</sup> and λ<sup>L</sup> are empirical values for the traveling population estimated from observed data.
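Eq. (7) can be written as a small function: zero inside the indifference window, and a Box-Cox penalty that grows with the distance beyond the window on either side. The argument names and sample values below are illustrative:

```python
def schedule_penalty(tau, t, a, b, theta_e, theta_l, lam_e, lam_l):
    """Box-Cox schedule-delay disutility G(tau - t) of Eq. (7).
    tau: desired departure (arrival) time; t: actual time of the option."""
    diff = tau - t
    if diff < -a:                    # outside the window on the a side
        x = t - tau - a + 1.0
        return theta_e * (x ** lam_e - 1.0) / lam_e
    if diff > b:                     # outside the window on the b side
        x = diff - b + 1.0
        return theta_l * (x ** lam_l - 1.0) / lam_l
    return 0.0                       # inside the indifference window

# Desired time 10:00; a flight at 10:30 is inside a one-hour window,
# flights at 13:00 or 07:00 accrue a (negative) penalty.
print(schedule_penalty(10.0, 10.5, 1.0, 1.0, -0.5, -0.8, 1.0, 1.0))  # 0.0
```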

If the G function expresses the cost to a traveler of not departing (or arriving) when desired, then the complement to it is the preference structure in the travel population of the desired departure (arrival) times. This function is purely empirical, and is denoted by Θ(τ), where τ is the desired departure (arrival) time. Then, in accordance with the probability function illustrated by Eq. (5), the probability that traveler i will select flight option j over the time period [0, W] (nominally 1 week) is given by

$$p_i(j) = \int_0^W \frac{e^{V(i,j\mid\tau(i))}}{\sum_{k \in \Phi(m)} e^{V(i,k\mid\tau(i))}}\, d\Theta(\tau). \tag{8}$$
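Numerically, the integral in Eq. (8) can be approximated by discretizing Θ into a weighted set of desired times. A sketch, assuming a caller-supplied function (not from the source) that returns the utility vector of all options for a given τ:

```python
import math

def logit_probs(utilities):
    """Multinomial logit shares for one desired time tau (Eq. 5)."""
    m = max(utilities)
    exps = [math.exp(u - m) for u in utilities]  # subtract max for stability
    s = sum(exps)
    return [e / s for e in exps]

def choice_prob(option_index, v_given_tau, taus, weights):
    """Eq. (8): average the conditional logit probability over the
    empirical desired-time distribution Theta, discretized into
    (tau, weight) pairs whose weights sum to 1."""
    total = 0.0
    for tau, w in zip(taus, weights):
        total += w * logit_probs(v_given_tau(tau))[option_index]
    return total

# Two flights at 9:00 and 17:00, disutility = distance from desired time;
# a population split evenly between the two desired times yields p = 0.5.
v = lambda tau: [-abs(tau - 9.0), -abs(tau - 17.0)]
print(choice_prob(0, v, [9.0, 17.0], [0.5, 0.5]))
```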

This gives us an idea of the nature of a rational narrative protocol of use in an agent-based model describing human behavior with a computer system. The data that characterizes the behavior of the 27 million traveling parties moving by air in a typical week around the world is substantial, but not at all difficult to create or maintain. (It is, however, expensive. Over \$2.5 million was spent collecting the data that represents the empirical values of appropriate traveler data.) The AirMarkets Simulator executes on an Intel 8-processor desktop computer in about 35 minutes, with no other activity being executed simultaneously. A complete, detailed description of the underlying structure of the Simulator is given in [3], pp. 156–269.

Human Behavior Modeling: The Necessity of Narrative DOI: http://dx.doi.org/10.5772/intechopen.86686

### 6. Heuristic choice protocols

The models of rational human behavior implicit in the economic theory briefly described so far—homo economicus—are not the only kind of human choice that is possible. In fact, there is scant evidence that people behave anything like the optimizing behavior suggested by these protocols. Virtually every scholar who examines the problem derides the idyllic nature of rational, economic humans. Indeed, the economic man model is often more normative than descriptive. While quite useful, as demonstrated by its singular success, applying a normative model ultimately begs the question of how people 'really' make choices, and to that extent the economic theory that is the foundation of discrete choice and utility maximization methods is defective. Nor is it necessary, given the development of agent-based simulation models. The alternative to economic man is often couched, somewhat inappropriately, in terms of bounded rationality.

Bounded rationality refers to the limitations in the resources available to undertake and perform the data collection and analysis leading up to a decision, in other words, the execution of benefit/cost analyses, formal or otherwise. In this regard, two streams of thought are discernible in the analysis of human rational behavior. The first, stemming from game theory, explores human decision making as a real-valued trade-off endeavor. Indeed, classic economic theory assumes that all the entities in a given economy converge to such utility function rationality as equilibrium is reached. Among the many tributaries of this line of thought is the utility structure that leads to discrete choice theory and the modern study of consumer choice behavior discussed previously. The second stream is the notion of satisficing as a decision structure. Satisficing is choice-making based on being "good enough" rather than "utility maximizing." This idea fits naturally into the narrative framework.

Gigerenzer and Selten [30] capture this idea with a simple taxonomy of rational choice, which Gigerenzer refers to as "visions of rationality." He breaks rationality down into two broad classes. One he calls demons, referring to the demonic capabilities he views as necessary to carry out rational decision-making in the real world without regard for constraints of time or resources, as mentioned. Demonic reasoning is dissected into unbounded rationality and optimization under constraints. The former is literally applying limitless resources to a decision problem, or being presented with a decision problem so simple (such as a statistical estimation problem) that all relevant issues are easily known. The latter refers to concepts such as those frequently seen in operations research, in which the problem at hand has been constrained to become manageable. In this case, however, the choice of the nature and values of the constraints is subject to the same resource limitations as any decision problem, thus only begging the issue of what level of demonic strength is available. Gigerenzer suggests that the bounded rationality side consists of two components: the search for options or alternatives, encompassed under the label of "satisficing," and the actual choice among alternatives, referred to with the term "fast and frugal" heuristics. The searching activity includes methods for finding options and methods for stopping the search. Some of the search methods Gigerenzer notes include: random search, where the agent explores the decision environment without any apparent organization until time runs out; ordered search, using the validity of environmental cues as they apply to the choice problem at hand as the ordering mechanism; search by imitation, using apparent similarities of this decision problem to those encountered in the past (imitation allows us to know where to look and what to look for, but does limit results if the environment within which we are searching is novel or unexpected); and emotions, which apparently act to narrow down the search space in effective, but not well understood, ways. Other search methods readily come to mind, but all of them can be interpreted as being enabled by the narrative context in which the choice event is presented. That is, the search process is governed by what the agent, because of the controlling narrative that is creating the decision context, considers important to the resource allocation and outcome probabilities associated with the atomic narrative.

Stopping the search is where satisficing comes in—when have we searched long enough and established enough options? When we are satisfied that further searching will not add any important alternatives, or when we have no more time to gain additional knowledge? Gigerenzer [31], pp. 129–165, proposes what he calls the probabilistic mental model (PMM) as a construct to account for the satisficing and fast and frugal heuristic choice protocols. In a PMM, the individual puts the choice event at hand into a mental construct of similar choice situations he has encountered in the past, or has learned by one method or another, and uses that context as the satisficing criterion. In other words, people fit decision problems into models that seem somehow appropriate to the problem, make the decision, then modify the model if expected results are not realized. This approach argues that the limited cognitive and computational ability of humans militates against a purely analytic benefit/cost structure in favor of agile and adaptive, if less than optimal, heuristic decision rules.

Heuristic protocols are choice mechanisms that rely on relatively little information and rule-of-thumb thinking. There is strong evidence that much of the choice behavior of humans is of this variety, if for no other reason than that bounds on available time for decision-making prohibit any other approach. For example, Malcolm Gladwell's discussion of virtually instantaneous human decision making in his book Blink [32] addresses this phenomenon. Todd [33] offers a simple listing of some of the more important fast and frugal heuristics:

• When choosing between two alternatives, one recognized and the other not, the recognition heuristic says choose the recognized one. The basis for this heuristic seems to be that recognized alternatives are apt to be more successful, and therefore more likely to be recognized, and thus choosing them is a better idea. Note that increased option search can reduce choice efficiency if more recognized options are added.

• In the take the best heuristic the agent selects the best alternative as measured by one single criterion (e.g., price). Other dimensions which characterize the issue in question are not considered at all. One can see how this fits neatly into the narrative framework if the criterion reflects a resource deemed supremely important for the realization of the narrative, as it becomes the dominant factor in the decision. And different agents may have different criteria for what constitutes "best."

• An extended form of the take-the-best heuristic (which in fact can be shown to actually be rational) is lexicographic ordering. This is a multi-dimensional extension of take-the-best. This line of thought has been explored more fully by Tversky [34] with his elimination by aspects approach. Elimination by aspects is a choice method wherein the individual has a set of criteria in mind on which he will evaluate a set of alternative choices. He ranks the criteria from most to least important, and then proceeds to evaluate each alternative against the first criterion. If two or more alternatives have equal values according to that criterion, he eliminates all the others from consideration and moves on to evaluate the remaining options using the second most important criterion. He proceeds this way, moving down the prioritized list of criteria, until only one alternative remains. If he gets through the last criterion evaluation and has more than one choice alternative left, he selects among the remaining alternatives at random; that is, he engages a random protocol. The phrase 'lexicographic' is also used to describe this protocol, since alphabetic ordering is done this way. Tversky also showed the equivalence of elimination by aspects to discrete choice, thus moving this seeming heuristic into the domain of the rational.
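The elimination-by-aspects procedure lends itself to a compact implementation. A sketch in Python, with the alternatives, criteria, and scores invented for illustration:

```python
import random

def eliminate_by_aspects(alternatives, criteria, rng=random.Random(0)):
    """Tversky-style elimination by aspects, as described above.
    `alternatives` maps name -> dict of criterion scores; `criteria`
    is ordered most to least important; higher scores are better.
    Keep only the alternatives that are best on each criterion in
    turn; if several remain at the end, pick one at random."""
    remaining = list(alternatives)
    for c in criteria:
        best = max(alternatives[a][c] for a in remaining)
        remaining = [a for a in remaining if alternatives[a][c] == best]
        if len(remaining) == 1:
            return remaining[0]
    return rng.choice(remaining)  # the random protocol breaks the tie

flights = {
    "A": {"price": 3, "duration": 2, "stops": 1},
    "B": {"price": 3, "duration": 3, "stops": 2},
    "C": {"price": 1, "duration": 3, "stops": 3},
}
print(eliminate_by_aspects(flights, ["price", "duration", "stops"]))
# A and B tie on price; B wins on duration -> "B"
```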


• Another fast and frugal heuristic approach that lies on the boundary between pure rule of thumb and the rational choice operations is Dawes's rule [35]. This is a type of linear choice method. Evaluate the alternatives against a set of criteria by determining whether each alternative is positive or negative with respect to each criterion, and subtract the number of negatives from the number of positives. The option with the highest score is chosen.

• Other heuristics are described by scholars in several fields, especially cognitive psychology. Kunda [12] offers an extensive array. She mentions the representative heuristic, wherein a choice is based on the similarity of the choice situation to a category of choices that have been faced or witnessed before (Kunda [12], pp. 57–89). The determination of similarity is based on matching characteristics of the situation at hand to one or more of the attributes that define a class of situations, even though the situation may differ from members of the class in details. This is conceptually coherent if viewed from a narrative perspective, in the sense that the value of resources, and the weight put on the factors which assess that value, that play in a narrative may be among the criteria that define the class similarity. In this sense it is somewhat like the recognition heuristic described above.

• Another family of heuristic methods cited by Kunda is the collection of statistical heuristics, referring to statistical rules-of-thumb most people seem to have learned and carry around with them. They generally seem to arise as a result of dealing with the pervasive uncertainty life brings. For example, while having all nine of one's grandchildren be of the same gender would seem quite unusual to most people, having all three of one's grandchildren be of the same gender would not seem that odd. But why? The suggestion is that people apply an elementary bit of statistics to the problem, reasoning that the gender of a child is a fifty-fifty proposition, and, equating that to the tosses of a coin, where tossing nine heads in a row happens much less often than tossing three in a row.

• One final contribution to the heuristic array is more subtle than the others. This is anchoring and adjustment ([12], pp. 102–109). This is the tendency for people to base a decision about a specific issue on a reference to other (perhaps completely irrelevant) situations. That is, some change in an element of the context in which the choice operation is taking place may cause a choice to vary from one instance to another, even though the context change is not part of the choice event. This setting of an anchor (the changing context element) will cause individuals to tend to adjust their choices to be consistent with the anchor, even though the anchor does not bear on the choice event itself.
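Dawes's rule from the list above reduces to a few lines of code. A sketch with invented data, where each alternative is judged only positive (True) or negative (False) on each criterion:

```python
def dawes_rule(alternatives):
    """Dawes's rule as described above: score each alternative by
    (number of positive criteria) - (number of negative criteria)
    and pick the highest score. Data below is illustrative only."""
    def score(signs):
        return sum(1 if s else -1 for s in signs)
    return max(alternatives, key=lambda name: score(alternatives[name]))

cars = {
    "sedan": [True, True, False],   # e.g. price ok, mpg ok, cargo poor
    "truck": [False, False, True],
    "wagon": [True, True, True],
}
print(dawes_rule(cars))  # "wagon": score 3 beats 1 and -1
```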

There are many additional heuristic processes that could be identified, and this would seem to be a fertile area for further research. There is a considerable psychological and sociological literature that should be explored to extract current understanding of the choice mechanisms and to formulate computing structures that would be applicable to agent-based models. There is little doubt these mechanisms are used very frequently in many day-to-day choice situations, and they should be available to the human agent modeler as much as the more flamboyant rational methods are. But in their implementation their sometimes severe bias must also be recognized; it is just as much a part of the protocol as the actual choice itself.

### 7. Social network choice protocols

Humans rarely make decisions completely alone. Many choices are subject to consideration and examination by not only the chooser directly, but also by other individuals who are connected to her in some way. Friends, relatives, other respected (or not so respected) experts, celebrities, people in authority, co-workers and many others enter into the choice making process in a host of ways, and with a variety of consequences. Only a few of such mechanisms can be considered here.

In a sense, social network choice protocols are somewhere between the rational approaches and the individual heuristics. Gigerenzer poses the dilemma:

"In many real-world situations, there are multiple pieces of information, which are not independent, but redundant. Here Bayes' rule and other 'rational' algorithms quickly become mathematically intractable, at least for ordinary human minds.<sup>6</sup> These situations make neither of these two views [laws of probability, and reasoning error] look promising. If one was to apply the classical view to complex, real-world environments, this would suggest that the mind is a supercalculator—carrying around the collected works of Kolmogoroff, Fisher and Neyman—and simply needs a memory jog …. On the other hand, the heuristics and biases view of human irrationality would lead us to believe that humans are hopelessly lost in the face of real-world complexity, given their supposed inability to reason according to the canon of classical rationality, even in simple laboratory experiments." ([31], pp. 167)

That most people survive without falling into Gigerenzer's abyss is due in part to social network choice methods. Clark [36] makes a compelling argument in support of this vital role, suggesting that the "scaffolding" of the social network in which all humans are embedded is central to our ability to make decisions, survive and advance. Kunda provides a broad but insightful survey of the field, and Sternberg and Ben-Zeev [37] offer an excellent introduction. The rapid rise of social networking sites on the internet—Facebook, Twitter—testifies to both the importance of the social network and the ease with which people adapt to new forms of it.

The development of formal methods of social network analysis has become quite active as well, partly because of the advances in computing and agent-based modeling. Network analysis as a formal field of academic endeavor dates back at least to Erdos and Renyi [38], but the emergence of the worldwide web has spurred more recent advances, including the exploration of scale-free and stochastic network analysis. An easily accessible introductory survey of modern methods can be found in Barabasi [39] or Buchanan [40]. A more advanced and formal treatment is offered by Dorogovtsev and Mendes [41]. Newman et al. [42] have compiled a compendium of more recent developments in the field.

<sup>6</sup> It is easy to create a simple example where Bayesian analysis generates formulations which are not only mentally intractable, but mathematically intractable as well. Consider virtually any non-trivial situation where there is no conjugate prior.
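As a toy illustration of the random networks this literature began with, an Erdos-Renyi-style directed network between agents can be generated as follows (the parameters are illustrative):

```python
import random

def erdos_renyi_directed(n, p, rng=random.Random(1)):
    """Erdos-Renyi-style random directed network: each ordered pair
    of agents is connected one-way with independent probability p.
    A two-way connection is simply both directed edges being present."""
    return {(i, j) for i in range(n) for j in range(n)
            if i != j and rng.random() < p}

edges = erdos_renyi_directed(50, 0.1)
out_degree = [sum(1 for (i, j) in edges if i == k) for k in range(50)]
# expected out-degree is about p * (n - 1) = 4.9 connections per agent
```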

method. Taleb ([44], pp. 145–156) examines what he refers to as the "expert problem" in some depth, classifying experts into those who have expertise in subjects for which expertise exists, such as science and medicine, and reserves the phrase "empty suits" for those who claim expertise in things for which no expertise can possibly exist, such as the forecasting the value of a stock exchange index

Human Behavior Modeling: The Necessity of Narrative DOI: http://dx.doi.org/10.5772/intechopen.86686


modeling. Network analysis as a formal field of academic endeavor dates back at least to Erdős and Rényi [38], but the emergence of the worldwide web has spurred more recent advances, including the exploration of scale-free and stochastic network analysis. An easily accessible introductory survey of modern methods can be found in Barabási [39] or Buchanan [40]. A more advanced and formal treatment is offered by Dorogovtsev and Mendes [41]. Newman et al. [42] have compiled a compendium of more recent developments in the field.

In an agent-based modeling context, networks are an expression of the topology of the explicit space required by Epstein's definition of an agent model (Epstein [28]). That is, a network defines which agent is "close to" which other agent. Moreover, it defines what the word "close" means in a particular model. Epstein and Axtell illustrate the network role on the Sugarscape grid with respect to economic and social interaction ([43], pp. 130–135). As they show, a network in an agent-based model is a communication connection between one agent and another. A single agent can have such connections with a number of other agents. The connection can be one-way or two-way. Different kinds of connections can reflect differences in the nature of the inter-agent communication. And, perhaps more importantly, networks change over time, with new connections being made and older ones dying out.

A convenient way to consider the network structure of an agent-based model is to stipulate that each agent maintains a list of the other agents to whom it is connected. Separate lists can be kept for different kinds of communications. If two agents have each other in their individual lists, then the network communication link is mutual; otherwise it is one-way. The message-posting function of the computer implementation of an agent model can then be engaged to manage the communications between agents during the simulation. However, network structures are not a requirement of an agent model. Space can be portrayed in other ways. In a cellular automaton, agents reside on a grid where communication between agents is based on being physically next to each other on the grid. In the AirMarkets Simulator, simple agent communication networks are used to define the relationships between distribution system agents and airline agents.

Perhaps the most common form of social network-dictated choice protocol is imitation. There is strong evidence that human beings learn by imitation, and thus it is reasonable that the same approach would be called on when faced with a new choice situation. What action did others take in this same situation? Very often the narrative event which creates the choice is encountered in the context of a shared narrative, and thus the choice by the individual is apt to follow the course of the underlying narrative supporting the event definition. In terms of agent modeling, the agent which uses this protocol must be linked through a social network to the individual or group of individuals whom it wants to imitate. The imitation cannot be certain, however, for that is reserved for consilvocation. There must be a randomizing mechanism that gives the imitation a stochastic element, such as being linked to two or more individuals who can be imitated, with a randomization device that dictates which one is followed in a particular event.

Closely related to imitation is expert advice. It is natural that someone believed to know more about a particular narrative event—an expert in the field—would make a wiser choice than a novice. And as humans learn as children and adolescents, the courses of action suggested by experts with more experience are important techniques in determining the probability of the outcomes of various choice options. From an agent modeling perspective, clearly a social network link is needed between the agent and the expert. Again, some stochastic mechanism needs to be present if the requirements of the narrative construct are to be met. In this case, however, in addition to the selection of one of a possible set of experts, whether or not to follow the expert advice can be employed as the randomization method. Taleb ([44], pp. 145–156) examines what he refers to as the "expert problem" in some depth, classifying experts into those who have expertise in subjects for which expertise exists, such as science and medicine, and reserving the phrase "empty suits" for those who claim expertise in things for which no expertise can possibly exist, such as forecasting the value of a stock exchange index tomorrow morning.
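The list-based network bookkeeping described above can be sketched in a few lines of Python. All class and method names here are illustrative inventions, not taken from any actual agent framework; the point is only that per-kind contact lists, mutuality checks and message posting are straightforward to implement:

```python
from collections import defaultdict

class Agent:
    """Minimal agent with per-kind contact lists and a message inbox."""
    def __init__(self, name):
        self.name = name
        self.contacts = defaultdict(set)   # communication kind -> set of Agents
        self.inbox = []                    # posted messages awaiting the perceptor

    def connect(self, other, kind="social"):
        """One-way link; call on both agents to make the link mutual."""
        self.contacts[kind].add(other)

    def is_mutual(self, other, kind="social"):
        return other in self.contacts[kind] and self in other.contacts[kind]

    def post(self, text, kind="social"):
        """Message posting: deliver to every contact of the given kind."""
        for other in self.contacts[kind]:
            other.inbox.append((self.name, text))

a, b, c = Agent("a"), Agent("b"), Agent("c")
a.connect(b); b.connect(a)      # mutual link
a.connect(c)                    # one-way link
a.post("price update")
print(a.is_mutual(b))           # True
print(a.is_mutual(c))           # False
```

Keeping a separate contact set per communication kind lets links of different kinds be made and dropped independently, matching the observation that networks change over time.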

This raises an issue to be addressed by an agent model that uses either imitation or reference to experts: How do those experts make a decision which can be imitated or on which expert opinion can be founded? One might suggest that the "imitand" or the expert use a rational choice method, for example. Or there could be hierarchies of imitators or experts, each imitating others while providing expertise to other agents. It would be quite interesting to explore how such networks might work with a simple agent model. In particular, emergent properties of agent-based simulations which contain such social mechanisms could be most curious. Finally, note that the required stochastic property of the choice process of an agent using imitation or experts could be inherited from the stochastic property of the imitated or expert agents.

A third kind of social network choice protocol is voting. An individual can make a choice by polling a set of other individuals to see what they would choose, and then determine his or her choice by tallying the results. For a simple binary choice, this technique is trivial. (Again, how those who cast votes determined their respective choices is a modeling design issue.) For choice events where three or more options and three or more individuals are polled, however, Kenneth Arrow's Impossibility Theorem enters the picture, and some form of bias has to be introduced to guarantee an outcome [45]. Again, implementation of this type of social network decision protocol is straightforward in the design of an agent-based model. The agent in question maintains a list of voters and polls each one by supplying the voter with the choice problem and accepting that voter's choice as the vote. The agent then tallies the votes and determines its choice.
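The voting protocol just described might be sketched as follows, with voters represented as plain callables (how each voter decides remains a separate modeling design issue). The tie-breaking "bias" the text mentions is made explicit as a parameter; all names are hypothetical:

```python
import random
from collections import Counter

def vote_choice(voters, options, tie_break=None, seed=0):
    """Poll each voter, tally the votes, and resolve ties with an explicit bias."""
    rng = random.Random(seed)
    tally = Counter(v(options) for v in voters)
    top = max(tally.values())
    leaders = [o for o in options if tally[o] == top]
    if len(leaders) == 1:
        return leaders[0]
    # For three or more options, Arrow's theorem implies no neutral rule exists,
    # so some bias (a fixed preference order, or a random draw) must be imposed.
    return tie_break(leaders) if tie_break else rng.choice(leaders)

voters = [lambda o: "a", lambda o: "b", lambda o: "a"]
print(vote_choice(voters, ["a", "b", "c"]))   # 'a' wins 2-1
```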

A particularly strong form of social choice is not extensively discussed in the literature, at least as far as the research of this author has been able to find. The concept has been termed consilvocation by associates.<sup>7</sup> In this situation, an agent turns over to another agent complete control over the choice to be made. A trivial example is the husband leaving to the wife the choice of restaurant for dinner. Consilvocation happens all the time in a democratic political context. The individual citizen elects a representative to sit in a legislative body and make decisions on his behalf. The citizen has thus turned over the choice function to the representative.

Consilvocation is a way of eliminating choice events that are out of an individual's control but will have an impact on the course of a compound narrative. It simplifies matters considerably. Another attribute of the consilvocation choice protocol is that the consilvocated right to make the decision can be revoked. The process by which revocation occurs can be simple (the husband elects to pick the restaurant himself) or complex (the citizen must wait for the next election or invoke a recall process). Adding consilvocation to an agent model is generally not difficult but could well have a dramatic impact on the number of individuals demonstrating a specific behavior in a relatively large-scale agent model simulation.
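A minimal sketch of the consilvocation protocol, including revocation, might look like this (class and method names are invented for illustration; note the deliberate absence of any stochastic element while the delegation is in force):

```python
class Chooser:
    """Agent that may consilvocate (fully delegate) a choice to another agent."""
    def __init__(self, name, protocol):
        self.name = name
        self.protocol = protocol   # own choice function: options -> option
        self.delegate = None       # agent currently holding the choice right

    def consilvocate(self, other):
        self.delegate = other      # hand the choice over completely

    def revoke(self):
        self.delegate = None       # simple revocation: take the choice back

    def choose(self, options):
        if self.delegate is not None:
            return self.delegate.choose(options)   # fully deferred, no randomness
        return self.protocol(options)

husband = Chooser("husband", lambda opts: opts[0])
wife = Chooser("wife", lambda opts: opts[-1])
husband.consilvocate(wife)
print(husband.choose(["diner", "bistro"]))   # 'bistro': the wife's choice governs
husband.revoke()
print(husband.choose(["diner", "bistro"]))   # 'diner': his own protocol again
```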

<sup>7</sup> This term was coined by a colleague who graduated from Oxford and has a sharp interest in the English language. He was assisted—although he did not know it—by an American friend whose command of English is also admirable. The American refined the Oxford graduate's original construction, which was virtually unpronounceable.

Recall that in the AirMarkets Simulator, a single agent actually represents multiple passengers, and one agent buys tickets for everyone in the group it represents.


## 8. Bias in choice

Finally, the term bias in the context of this analysis refers to the difference between the choice that is actually made and the choice that would be made if the "correct" alternative were selected. Obviously, this calls for a definition of "correct." In a formal statistical decision problem, bias refers to the difference between the expectation of a particular statistic that is used to estimate a parameter and the value of the parameter itself. That is, a statistic S used to estimate a parameter θ is unbiased if E[S] = θ, and the search for unbiased estimators is a long-standing topic of statistical research. The definition of correct is not so easily determined in the agent choice circumstance.
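The statistical definition of bias can be checked numerically. The sketch below estimates E[S] by Monte Carlo for two variance estimators computed from the same samples, one biased and one unbiased; the population and sample sizes are arbitrary choices for illustration:

```python
import random

# Monte Carlo check of E[S] against θ for two variance estimators.
rng = random.Random(1)
theta = 4.0                        # true variance of the population sampled below
runs, n = 20000, 5

biased, unbiased = 0.0, 0.0
for _ in range(runs):
    x = [rng.gauss(0.0, 2.0) for _ in range(n)]   # population variance θ = 4
    m = sum(x) / n
    ss = sum((xi - m) ** 2 for xi in x)
    biased += ss / n               # divides by n: E[S] = θ(n-1)/n, biased low
    unbiased += ss / (n - 1)       # divides by n-1: E[S] = θ, unbiased

print(biased / runs)               # close to θ(n-1)/n = 3.2
print(unbiased / runs)             # close to θ = 4.0
```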

Many authors identify and describe choice bias in a manner similar to the statistical definition, holding that the choice that results from the engagement of a rational decision protocol is the correct one, and that other heuristic or social network protocols that lead to different choices for the same narrative event are biased. As has been said before, if human beings are to be validly represented by agent-based models, then they must be represented as they are, not how someone thinks they ought to be.<sup>8</sup>

Some biases are perceptual in nature, arising from inaccurate representations of reality, which in terms of the formal definition of agent can be accommodated with properties of the perceptor component. Festinger et al. [46] and Tumminia [47] explore some of the implications of cognitive dissonance, which occurs when an individual believes in a reality that is directly contradicted by the sensory evidence before him. The mistakes-were-made assertion described by Shermer ([8], pp. 67–71) is an example of self-justification bias [48]. Inattentional blindness is the failure to recognize some feature of the surrounding environment because attention is focused on some other environmental feature [49]. Blind spot bias is the tendency to see biased perception on the part of others while failing to see it in oneself. It is similar to better-than-average bias, which causes a person to think they are more capable at any given skill or talent than the average individual [50]. Humans also tend to see themselves in a more positive light than they see others [51], and therefore create a self-serving bias. People tend to accept credit when they behave in socially acceptable ways and blame circumstances outside themselves when they do not do so well. This is an example of an attribution bias [52].

But not all biases are perceptual. Two, in particular, are based on misunderstanding fundamental concepts in probability theory. Kunda ([12], pp. 54–62) notes that probability theory, as a formal mathematical discipline, dates back only 300 years,<sup>9</sup> and that its relatively late development speaks to its intuitive difficulty. One is the base rate bias, described nicely by Kunda, and the other is the so-called Let's-Make-a-Deal fallacy, described by Shermer ([8], pp. 83–84). Base rate bias is very common. It stems from misunderstanding the incidence of some particular characteristic in the underlying population, and thus from a misapplication of Bayes' Rule to an intuitive inference. For example, consider John, who is a small gentleman with a quiet demeanor, who wears glasses, dresses conservatively, and often is seen carrying a book. Is John a factory worker or a librarian? Many would say a librarian. But the likelihood that he is a factory worker is far higher than the likelihood that he is a librarian. None of the distinguishing criteria disqualifies him from being a factory worker, and there are far more factory workers than librarians. This is the base rate fallacy: failing to account for the actual rate of incidence of a factor in making a judgment. It is a primary reason for bias in the representative heuristic.

<sup>8</sup> However, there are potentially some interesting agent-based models to be built that explore the effects of the biased choice protocol versus the rational one on the outcome of a narrative.

<sup>9</sup> She's wrong, in one respect. Formal probability theory, where probability is defined in terms of normed measure spaces, dates back only 85 years or so.
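The base-rate arithmetic behind the John example can be made explicit with Bayes' Rule. The probabilities below are invented purely for illustration; only their relative sizes matter:

```python
# Base-rate check for the John example with illustrative (made-up) numbers.
p_lib = 0.002            # librarians as a share of the relevant workforce (assumed)
p_fac = 0.10             # factory workers as a share (assumed)
p_desc_given_lib = 0.50  # a librarian fits John's description half the time (assumed)
p_desc_given_fac = 0.05  # a factory worker rarely fits it (assumed)

# Bayes' Rule, comparing only the two hypotheses:
num_lib = p_desc_given_lib * p_lib
num_fac = p_desc_given_fac * p_fac
p_lib_given_desc = num_lib / (num_lib + num_fac)

print(p_lib_given_desc)   # about 0.167: factory worker is ~5x more likely
```

Even though the description fits a librarian ten times better, the factory-worker base rate is fifty times larger, so the posterior still favors factory worker.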

The Let's-Make-a-Deal fallacy is significantly more subtle. The name refers to a game show popular on American daytime television. A contestant stands before three closed doors, behind one of which is a valuable prize, usually a car. Behind the other two are valueless prizes, historically goats (but animal rights advocates objected, so some other worthless offering is now used). Which door hides which prize is unknown to the contestant. The contestant chooses one of the doors and is awarded the prize behind that door, hopefully the car. But before the chosen door is opened, the host opens one of the other two doors (always revealing a goat) and asks the contestant if she wants to change her door choice and take the prize behind the remaining door. Should the contestant take the offer, or should she stay with her original selection? Most people will say it does not matter. Assuming that the likelihood of a car being behind any of the three doors is the same (one third) for each, then knowing that it is not behind one of them only means that the probability of it being behind either of the remaining two is now one half, and therefore switching does not affect the odds of winning. But that is incorrect. In fact, the probability that the car is behind the door that was chosen originally by the contestant is one third, and the probability that it is behind the unrevealed other door is two thirds.<sup>10</sup> The explanation is clear (but for many, not convincing). The contestant faces three possibilities: the doors can hide (1) car, goat, goat; (2) goat, car, goat; or (3) goat, goat, car. Suppose she starts the game by selecting door number one. If she switches and the first possibility is the true situation, she loses. But if either of the other two possibilities is the case and she switches, she wins. Thus, the probability of winning by switching is two thirds. A simple computer application that simulates the game any desired number of times is easy to write, and execution of that simulation verifies the correctness of the analysis.
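Such a simulation might be written as follows (a sketch, not any particular published implementation):

```python
import random

def play(switch, rng):
    """One round of the game; returns True if the contestant wins the car."""
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    # The host opens a door that is neither the pick nor the car (always a goat).
    opened = rng.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

rng = random.Random(42)
n = 100000
stay = sum(play(False, rng) for _ in range(n)) / n
swap = sum(play(True, rng) for _ in range(n)) / n
print(stay, swap)   # stay is close to 1/3, swap close to 2/3
```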

There are, of course, many similar examples of incorrect reasoning. That they exist and should be avoided in the making of careful decisions is obvious. But it is equally obvious that these "errors" can be subtle and difficult to detect. Once again, human agents in computer simulations must be modeled as they are, not as they should be. That means determining when bias is an important part of the agent behavior and building that bias into the agent. But as difficult as the task might seem initially, it is ameliorated by the knowledge that choice protocols span cultures and societies, so what is learned in one context can be applied in others, and, with the connection of the choice protocol to the narrative framework, complex behaviors can be built up out of simpler, atomic elements.

## 9. Conclusions

This presentation sets the stage for more exhaustive incorporation of narrative structures into human behavior-based computer artificial intelligence applications.

<sup>10</sup> This puzzle was first presented by the columnist Marilyn vos Savant in a U.S. national Sunday newspaper supplement a few years ago. When she explained how the probability of winning went to 2/3 if the other door were selected, a firestorm of protest erupted from the academic statistics world. She is right, however.

One such application has been in operation for the last several years at the AirMarkets Corporation, consisting of an agent-based model of air passenger behavior, including flight schedule development and revenue management of air fares as a function of advance booking time, departure and arrival times, group size and available service. Because of the interdependency of air fares, available service and air demand, for each run the AirMarkets Simulation replicates the utilization of every seat on every scheduled flight taking place in the world over a one-week period: more than 42,000,000 passengers buying tickets in approximately 288,000 directional city-pair markets, with bookings made as much as 120 days in advance of departure. The narrative structure supporting this agent-based simulation is not complex. Since travel is usually a utility associated with some other activity, a choice function based on the utility value of departure/arrival times, fares, booking times and travel purpose is sufficient for the AirMarkets Simulation. Other behavior activities, however, will require more in-depth structuring.

Beyond the semi-rational construction of narratives using hard logic and mathematics, the description of human narrative decision-making gets much softer and more obscure. The heuristic thinking of individuals is perhaps rational, perhaps not; it depends on how accurate the time-dependent, context-constricted thinking of the decision-maker turns out to be. In a logically looser setting, social structure can become the basis for narrative behavior, and the guidelines for assessing the validity of the option choices become even less rigorous. Finally, there is a substantial level of human narrative activity that can only be classified as bias. There is no logic, no bounded rationality and no social context which explains such narrative behavior. In this area, several frivolous, and several dangerous, actions by human narrative holders are justified.

However, one pressing issue among the several that exist must be addressed. It is necessary to explore the impacts of at least four mathematical anomalies on the structure of even a simple, atomic narrative. These are: (1) the Stone-Weierstrass Theorem, which stipulates the minimum mathematical structure required for the contents of a data set to be represented by a polynomial (which in turn can become part of a narrative, but might not be consistent across several data sets); (2) the Arrow Impossibility Theorem, which shows the theoretical limits of rational decision-making in electoral processes; (3) Gödel's Theorem, which determines that any logical structure is subject to questions about completeness that can only be addressed using a logical structure more in question than the one being assessed; and (4) the Heisenberg Uncertainty Principle, which cites limits on observable, non-probabilistic statements about the physical universe. The exploration of these issues is the subject of my current research.

Author details

Roger Parker
AirMarkets Corporation, Edmonds, WA, USA

*Address all correspondence to: rap@airmarkets.aero

© 2019 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Human Behavior Modeling: The Necessity of Narrative
DOI: http://dx.doi.org/10.5772/intechopen.86686

