Application of Quantum Physics Assumptions for Risk Assessment

*Marek Rozycki*

#### **Abstract**

Risk assessment is the product of the assumptions of the people performing it. Its usefulness may therefore be limited, because it is in principle difficult to predict events that we are not aware of. One solution to this problem appears to be the application of inception theory and of the assumptions of quantum physics to the description of future phenomena. The aim of this study is to present experience from attempts at risk assessment using the assumptions of quantum physics. The application of these new assumptions to risk assessment for road infrastructure supports the thesis that a change in the approach to risk assessment is necessary in all areas of human activity.

**Keywords:** risk assessment, quantum physics assumptions

#### **1. Introduction**

With the formulation of the theory of relativity, we gained a new tool with which to explain the world. It appears that the laws of quantum mechanics explain the processes governing the deepest layers of reality, operating at the level of the smallest particles of our world. Through experiments and analyses, we can assume that these laws explain phenomena on both the micro- and macroscale. Rules different from those of classical physics explain the heretofore unexplained and, crucially, allow us to design new experiments within a world entirely unavailable to our senses. Analyses of the visible outcomes of interactions between components (that is to say, events in the real world) return little information about the structure of the observed reality. By interpreting outcomes (events) instead of images of detailed relations, we project our expectations; the mathematical structures we use to explain outcomes are merely an attempt to fit our model to reality, and we can only determine the model's applicability when, and insofar as, the observable reality confirms our reasoning. Nevertheless, the fact remains that mathematics allows us to draw conclusions as to the rules governing the functioning of the world. The more appropriate the model we use, the better our results will be. If reality is like music, then the tools of analysis are our musical score [1]. The notes we use will be the substance of the music, but not the music itself. Similarly with analyses: a model of reality used for analysis is a score whose reality is created from many unspecified or loosely defined components. This may lead us to conclude that reality can only perform a score once it has been written. Such an anthropocentric approach leads us to believe that we can influence and shape events. This is especially clear in analysing risk. We usurp the right to assess risk and event probability and expect that reality will perform our freshly composed score. We must, however, allow that mathematical analysis may try to impose its assumptions on reality, which may or may not succeed (e.g. in management methods used in banks and insurance companies).


*Application of Quantum Physics Assumptions for Risk Assessment*

*DOI: http://dx.doi.org/10.5772/intechopen.90825*


The job of the risk assessor at such institutions is to draw conclusions about the structure of the world based on mathematical structures. In this context, the ideas of quantum physics—in particular, the concept of the state of an object in a Hilbert space [2], the phase space and the quantum system—may help our analyses, allowing a fuller understanding of reality.

In light of the present investigation, we can conclude that quantum mechanics does not apply to individual events but is a theory of interactions between groups of events (composition series) whose behaviour observably conforms to the laws of statistics. All measurement attempts made within a quantum system will, in essence, be performed on groups of identically prepared objects. These objects may realise every possible state. The results of these measurement attempts take the form of a probability distribution over all possible measurement results. In line with this interpretation, we can focus strictly on looking for the probability distribution and ignore individual events.

Risk assessors have been using this interpretation for some time now. The theory of inertia [3], as an example, asserts that the probabilities of a given state occurring and not occurring are equal. This means that at any given moment a given event may occur or not. If we assign a value of +1 to the state occurring and a value of −1 to the state not occurring, the distribution will have a mean of 0 and a standard deviation of 1. The event's occurrence and non-occurrence are equally probable, which means that at any given moment we can expect a given state both to occur and not to occur, just as described by Schrödinger in his famous 1935 thought experiment [4]. Each object interacts with its environment, and at the quantum level this interaction can be described as consistent with the second law of thermodynamics, i.e. with quantum decoherence, where every system moves towards increased entropy if it is devoid of the energy needed to preserve its current state. The identified risk potential is an expression of quantum entanglement and exhibits a tendency towards equalisation (entanglement reduction), towards an irreversible change of interference between the system and the environment. Risk, therefore, should be understood as the measure of loss of information about a given system as a result of its interaction with the environment. In this context, it is imperative to attempt a quantum-mechanical analysis of risk.
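The two-point distribution invoked above can be checked numerically. In the sketch below, coding "occurs" as +1 and "does not occur" as −1 (the convention under which equal weights give a mean of 0 and a standard deviation of 1) is an illustrative choice, not part of the source:

```python
import random
import statistics

# Two equally probable states: +1 for "occurs", -1 for "does not occur".
outcomes = [+1, -1]

# Exact moments of the two-point distribution with equal weights
mean = sum(outcomes) / len(outcomes)
std = (sum((x - mean) ** 2 for x in outcomes) / len(outcomes)) ** 0.5
print(mean, std)  # 0.0 1.0

# Monte Carlo check: the sample mean stays near 0
random.seed(1)
sample = [random.choice(outcomes) for _ in range(100_000)]
print(round(statistics.mean(sample), 2))
```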

#### **2. Quantum mechanics in interpretation of phenomena**

The desire to understand the world and to describe it in terms of mathematical formulae is as old as the human desire to dominate it. Each age has tried to explain observable correlations as causes and effects, in a manner peculiar to itself. In ancient Greece, atomistic theories of the likes of Democritus, according to which matter consisted of final, eternal, unchanging and indivisible atoms, clashed with continuous theories of Aristotle and others, who believed that matter was fluid and ever-changing. Such theoretical clashes across the ages have always encouraged further investigations into our questions about the world, leading to new questions, new theories and new clashes. This progress of human knowledge sped up with the industrial revolution and the creation of more efficient tools of observation. When electrons were discovered, the structure of previously indivisible atoms was called into question. The discovery of the atomic nucleus led to the formulation of the planetary model of the atom, according to which nearly all the mass and positive



*Risk Management and Assessment*


charge are concentrated in the nucleus, around which negatively charged electrons orbit. During his work on black-body radiation, Max Planck formulated the hypothesis of quanta of energy, when existing analyses based on classical physics proved ineffectual. Further discoveries and ideas followed; one of these was wave-particle duality, which posits that the entire universe behaves consistently with laws governing either the behaviour of waves or that of particles. This idea, and the results of many experiments which it enabled (including the work of Wheeler [5]), suggests that the observed structures change as a result of observation. So, it is probable that a given event's result, which we wish to observe, will occur depending on whether and how we observe the event. The manner in which we choose to measure the "present moment" will influence that which caused the present event. Ideas like these are far removed from those of Aristotle and Democritus and require a completely fresh gaze. The premises of quantum physics appear to be verifiable on both the micro and macro levels, which raises the question: if quantum physics applies at the level of elementary particles, will it also apply to modelling real events? If the answer is yes, there should be no objection to the use of analytical tools proper to quantum physics in analysing events such as those with which risk analysts are concerned. The language of mathematics seems to be the only tool precise enough to describe the subtle and rich structures of reality. Structures transparent to human senses are revealed in mathematics, and numbers allow the identification of their states and properties (e.g. their minima, maxima or functions). Let us attempt a certain simplification. The classical, Newtonian understanding of phenomena is determined by the notions of motion, point particle and rigid body.
Classical mechanics relies on the premise that there exist objective, quantifiable objects, in motion along specific trajectories, possessing other specific properties, such as position, mass or charge. Elements of a system interact, as do point particles, in strictly defined ways. Contacts and collisions occur, whether directly or indirectly through fields (e.g. of energy, temperature, etc.). These observations lead to the conclusion that the properties of a physical situation can be defined by absolute terms and numbers. We identify laws of cause and effect and conjecture that, under identical conditions, objects will behave identically. We conclude that all objects in the world are determinate and their behaviour is strictly defined and uniform. In the standard (classical) probability measure of a given state occurring, a family of events is characterised as follows:

There exists a sample space Ω and a family Z of subsets of the sample space Ω, called events. The following premises are true:

1. ∅ (the empty set) and Ω (the sample space) are events.

2. If A is an event, then A′ = Ω − A is also an event.

3. If A and B are events, the sum of sets A∪B is also an event.

Probability is the function *P* : *Z* → [0, 1] where:

4. *P*(Ω) = 1.

5. If A and B are events and the product of sets A∩B = ∅, then

$$P(A \cup B) = P(A) + P(B). \tag{1}$$

It follows from premises (1), (2) and (3) that if A and B are events, A∩B is also an event.
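These premises can be verified mechanically on a small finite sample space. The sketch below uses a hypothetical three-outcome space and a uniform measure; both are illustrative assumptions, not part of the source:

```python
from fractions import Fraction
from itertools import chain, combinations

# Finite sample space and its full power set as the event family Z
omega = frozenset({"ok", "fault", "failure"})
Z = [frozenset(s) for s in chain.from_iterable(
    combinations(omega, r) for r in range(len(omega) + 1))]

def P(event):
    """Uniform probability measure P : Z -> [0, 1] (exact rational arithmetic)."""
    return Fraction(len(event), len(omega))

# Premises 1-3: the family contains the empty set and the sample space,
# and is closed under complement and union.
assert frozenset() in Z and omega in Z
assert all(omega - A in Z for A in Z)
assert all(A | B in Z for A in Z for B in Z)

# Premises 4-5: P(omega) = 1 and additivity for disjoint events, Eq. (1)
assert P(omega) == 1
assert all(P(A | B) == P(A) + P(B) for A in Z for B in Z if not A & B)

# Consequence noted in the text: intersections are events too
assert all(A & B in Z for A in Z for B in Z)
print("all classical premises hold")
```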

This model works in general but fails in particular, detailed analyses where it turns out that it is impossible to define generic behaviours of such elementary particles as electrons.


We would reach similar conclusions in analysing human behaviours. We cannot predict human reactions to specific stimuli; we can only predict the probability that the person will behave in this or that specific way. In creating models of reality, we try to describe complicated reactions which consist both of defined and undefined situations. In effect, individual points of such a system become a blur. They become less sharply defined as discrete entities and tend towards the state which is the result of our analyses. Whilst we can define, as an example, the length of a road tunnel using classical physics, the result of the measurement of the total velocity of the motion of vehicles in the tunnel, or the time needed to evacuate people from the tunnel in an emergency, will depend on the chosen method of measurement. Defining the method of measurement is key in assessing the safety of a tunnel, both for particular tunnels and all tunnels in general. This in turn creates the risk that, in defining an object, one can describe its properties so that the correct (i.e. well defined) response to the question about its state will be elicited only if the object is in the ground state (assumed by the analysis operator). To return to the example of a road tunnel, the very question whether the tunnel is "safe" is, in effect, a question not about the state of the object but the properties of the analysis operator. The response to such a question will be incidental and unreliable. The basic problem here appears to be expressing the measurement numerically. If we do not use a precise measurement method, the result will always be a blur.

It remains a fact that there are such values, or their relations, which can never be specified precisely (expressed as discrete numbers) at the same time, for example, the number of people at risk in a particular road tunnel emergency. We can assume the minimum, maximum or mean (expected) value, but we can only arrive at a probability of the value of our prediction.

It could be argued that an object in a particular state has the unique ability to respond to the demands of its environment and display a particular property. When its state conforms with the state expected by the operator, the object can return an unambiguous response to the operator's "question". In the case of the road tunnel, its state usually does not conform with the state expected by the operator (questioning the tunnel's safety), and as a result the question generates a random response out of a set of the operator's ground states. Of the many states identified by the analysis operator, the current state of the object may be constructed, but each of these states may reveal itself as the response to the operator's question, without the possibility of predicting which. In other words, the complete state of the object fractures into multiple ground states of the operator, and the object picks a state haphazardly and returns it to the operator as its response.
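The behaviour described above, a complete state fracturing into the operator's ground states with one returned at random, can be sketched as follows. The three ground states and their amplitudes are hypothetical values chosen for illustration:

```python
import random

# Hypothetical ground states of the "analysis operator" and the object's
# (approximately normalised) amplitudes over them -- illustrative values only.
ground_states = ["safe", "degraded", "unsafe"]
amplitudes = [0.80, 0.55, 0.24]

# Born-rule weights: the probability of each response is the squared amplitude.
weights = [a ** 2 for a in amplitudes]

random.seed(7)
# Each "question" returns one ground state at random; which one is unpredictable.
responses = [random.choices(ground_states, weights)[0] for _ in range(10_000)]
for state in ground_states:
    print(state, responses.count(state) / len(responses))
```

Repeating the "question" only reveals the probability distribution over the operator's ground states, never a deterministic answer for a single trial.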


These observations are consistent with the tenets of the Copenhagen interpretation of quantum mechanics. Accordingly, we can set forth the following theses:

1. Every system is fully described by the wave function Ψ, which fully describes the observer's understanding of the system (Heisenberg).

2. A description of nature is probabilistic. The probability of an event is the squared modulus of the wave function associated with the event (Max Born).

3. We cannot know the values of all properties of a system at a given time; imprecise properties may be expressed as probabilities (Heisenberg's uncertainty principle).

4. When the size of the system approaches the macroscale, the quantum-mechanical description ought to yield results consistent with the results from a classical treatment (Bohr's and Heisenberg's correspondence principle).

An attempt to model future events within a given system using classical probability must result in questions inadequate to the responses they can elicit, as we are unable to predict all variants of events and their correlations. Classical probability attempts to ignore reality and proceeds against logic, which is why we must accept instances of "black swans". Meanwhile, if we accept that the real-life result is a wave function of the relations between the preparation of the system and its measurement, and that it has an operational character inextricable from the observer, we can attempt to identify the final states and define the occurrence probability of the elements leading up to these final states.

Let us consider the following model. Let us suppose, in accordance with the above premises, that the probability *p* is described as the squared modulus of a certain complex number A, the probability amplitude:

$$p = \left| A \right|^2 \tag{2}$$

This pattern has been confirmed in multiple experiments, of which the most representative is the one using a quantum gun.
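Equation (2) can be illustrated in a few lines. The amplitude below is an arbitrary illustrative value; the sketch also shows that the phase of A alone does not affect *p*:

```python
import cmath

# Eq. (2): p = |A|^2 for a complex probability amplitude A.
# Modulus 0.5 and phase 60 degrees are illustrative choices.
A = cmath.rect(0.5, cmath.pi / 3)
p = abs(A) ** 2
print(p)  # ~0.25

# The phase alone does not change the probability of a single path:
B = cmath.rect(0.5, -cmath.pi / 4)
print(abs(abs(B) ** 2 - p) < 1e-12)  # True
```

Phases only matter when amplitudes are added before squaring, which is exactly what produces the interference effects discussed below.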

Let us consider an electron gun (although it could equally well be a ball launcher or traffic organisation in a tunnel). Let us launch electrons towards screen **E** through an obstacle with two apertures **S**1 and **S**2, and define the variable *x*, the position at which probability is measured, along the screen **E**. The number of apertures may be greater. The screen is positioned at the distance **L**. The system may be illustrated as shown in **Figure 1**.

Let us assume that the passage of an electron through the apertures can happen in two distinct ways, (*S*1) and (*S*2), each of which is described by the probability amplitudes *A*(*S*1) and *A*(*S*2).

In classical reasoning, the launched electron can reach the screen either through aperture (*S*1) (trajectory *S*1) or aperture (*S*2) (trajectory *S*2). This will result in flares on the screen corresponding to the locations of the apertures, as illustrated below (**Figure 2**). The positions of the flares conform to a Gaussian (normal) distribution.

In practice, such an ideal model cannot occur. This is because it is also probable that the electron will not pass through the aperture, or that events will suffer from

**Figure 1.** *Aperture experiment.*

*Application of Quantum Physics Assumptions for Risk Assessment*

*DOI: http://dx.doi.org/10.5772/intechopen.90825*

**Figure 2.** *Probability distribution for positions of electrons hitting the screen, consistent with the tenets of classical physics.*

mutual interference. In effect, there will be areas of maximum likelihood of the electron hitting the screen, as well as areas which will never be hit by the electron (under constant conditions of the experiment). This situation will result in a probability amplitude which can be formulated as

$$\mathbf{A}(\mathbf{S}\_1 \operatorname{or} \mathbf{S}\_2) = \mathbf{A}(\mathbf{S}\_1) + \mathbf{A}(\mathbf{S}\_2). \tag{3}$$

This observation replaces the practice of adding up probabilities of classical physics: the final result is not a sum of the probabilities of the electron passing through each aperture. It can be illustrated as shown in **Figure 3**.

The following equation is, therefore, true:

$$\mathbf{p}(\mathbf{S}\_1 \text{ or } \mathbf{S}\_2) = \left|\mathbf{A}(\mathbf{S}\_1 \text{ or } \mathbf{S}\_2)\right|^2 = \left|\mathbf{A}(\mathbf{S}\_1) + \mathbf{A}(\mathbf{S}\_2)\right|^2 \neq \left|\mathbf{A}(\mathbf{S}\_1)\right|^2 + \left|\mathbf{A}(\mathbf{S}\_2)\right|^2 \tag{4}$$
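Equation (4) can be checked directly with complex numbers. The sketch below is a minimal illustration, assuming two equal-magnitude amplitudes ($|A|^2 = 0.5$ each) and a variable relative phase; the concrete values are invented for demonstration:

```python
import cmath

# Illustrative amplitudes: each path taken with classical probability 1/2,
# so |A1|^2 = |A2|^2 = 0.5; phi is the relative phase between the two paths.
def p_quantum(phi):
    A1 = cmath.rect(0.5 ** 0.5, 0.0)   # A(S1)
    A2 = cmath.rect(0.5 ** 0.5, phi)   # A(S2), phase-shifted by phi
    return abs(A1 + A2) ** 2           # p(S1 or S2) = |A(S1) + A(S2)|^2

p_classical = 0.5 + 0.5                # |A(S1)|^2 + |A(S2)|^2 = 1.0

print(round(p_quantum(0.0), 6))           # 2.0 -- constructive interference
print(round(p_quantum(cmath.pi), 6))      # 0.0 -- destructive, never equals 1.0
print(round(p_quantum(cmath.pi / 2), 6))  # 1.0 -- interference term vanishes
```

Only at quadrature (relative phase π/2) does the quantum result coincide with the classical sum; in general the relative phase makes the two sides of Eq. (4) differ.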

The probability depends on the relative phase of the amplitudes $A(S_1)$ and $A(S_2)$, whilst in $p(S_1)$ and $p(S_2)$ such phases do not occur. Consequently, event probabilities transfer onto the amplitudes, and this results in the occurrence of new phenomena, unaccounted for by classical physics. Let us assume, for the purposes of our argument, that a given phenomenon can occur in two distinct ways, $S_1$ and $S_2$, each of which is described by the probability amplitude $A(S_1)$ and $A(S_2)$, respectively. The influence of an interference must be considered if we cannot identify the aperture through which the electron will travel. If we define the phenomenon occurrence probability as $p(x) = p_1(x) + p_2(x) + I(x)$,

where *I*(x) denotes the influence of interference calculated as:

$$I(\mathbf{x}) = 2\sqrt{p\_1(\mathbf{x})p\_2(\mathbf{x})}\cos\left(\varphi\_1(\mathbf{x}) - \varphi\_2(\mathbf{x})\right) \tag{5}$$

and we must define values for the correlation $\varphi_{1,2}$ of $x$, then in the particle behaviour analysis we can assume that the correlation is linear and can be expressed with the equation

$$I(\mathbf{x}) = 2\sqrt{p\_1(\mathbf{x})p\_2(\mathbf{x})}\cos\left(a\mathbf{x}\right) \tag{6}$$

where the constant $\alpha$ depends on the mass and energy of the launched particles, the distance of the screen from the apertures, and other conditions. To consider questions other than the particles, we need to define the influence of interference as a corrective.
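Equation (6) can be explored numerically. In the sketch below, the single-aperture densities $p_1$, $p_2$ and the constant $\alpha$ are arbitrary illustrative choices, not values from the chapter; the point is that the corrective term creates maxima and forbidden zones on the screen:

```python
import math

ALPHA = math.pi   # hypothetical constant; in reality it depends on particle
                  # mass, energy and the screen-aperture geometry

def p1(x):        # toy single-aperture probability density (uniform for clarity)
    return 0.25

def p2(x):
    return 0.25

def interference(x):
    # Eq. (6): I(x) = 2 * sqrt(p1(x) * p2(x)) * cos(alpha * x)
    return 2.0 * math.sqrt(p1(x) * p2(x)) * math.cos(ALPHA * x)

def p(x):
    # combined occurrence probability: p(x) = p1(x) + p2(x) + I(x)
    return p1(x) + p2(x) + interference(x)

print(p(0.0))   # 1.0 -- constructive maximum, twice the classical 0.5
print(p(1.0))   # 0.0 -- the corrective cancels the classical sum entirely
print(p(0.5))   # 0.5 -- interference term vanishes, classical result recovered
```

Even with constant single-aperture densities, the screen shows alternating bright and dark zones, which is the behaviour described around **Figure 3**.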

**Figure 3.** *Probability distribution for locations of electrons hitting the screen, accounting for possible interferences.*

*Risk Management and Assessment*

Probability calculations informed by quantum physics can be defined as follows. There exists a sample space $\Omega$ and a family $Z_k$ of subsets of $\Omega$, called k-events. The following premises are true:

(1) $\emptyset$ (empty set) and $\Omega$ (sample space) are k-events.

(2) If A is a k-event, then $A'$ also is a k-event.

(3K) If A and B are k-events and the product of sets $A \cap B = \emptyset$, then $A \cup B$ is also a k-event.

Probability is the function $P : Z_k \to \langle 0, 1 \rangle$, such that:

(4) $P(\Omega) = 1$.

(5) If A and B are k-events and the product of sets $A \cap B = \emptyset$, then $P(A \cup B) = P(A) + P(B)$.

The difference between the two approaches hinges on exchanging premise (3), considered earlier, with premise (3K), which prevents us from considering alternatives for the k-events whose conjunction is not itself a k-event.

It follows from premises (1), (2) and (3K) that if A and B are k-events, the product *A*∩*B* is a k-event if and only if the sum *A*∪*B* is a k-event.
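The premises above can be verified mechanically on a toy family of subsets. In this sketch, $\Omega$ and the family $Z_k$ are invented for illustration: the family satisfies premises (1), (2) and (3K), exhibits the product/sum equivalence, and yet is not closed under arbitrary unions, which is exactly what separates it from a classical event algebra:

```python
from itertools import combinations

OMEGA = frozenset({1, 2, 3, 4})
# Hypothetical family Zk of "k-events": contains the empty set and OMEGA,
# is closed under complement and under unions of *disjoint* members, but is
# NOT closed under arbitrary unions -- unlike a classical event algebra.
ZK = {frozenset(), OMEGA,
      frozenset({1, 2}), frozenset({3, 4}),
      frozenset({1, 3}), frozenset({2, 4})}

def is_k_family(zk):
    if frozenset() not in zk or OMEGA not in zk:        # premise (1)
        return False
    if any(OMEGA - a not in zk for a in zk):            # premise (2): complements
        return False
    return all(a | b in zk                              # premise (3K): disjoint sums
               for a, b in combinations(zk, 2) if not (a & b))

def iff_property(zk):
    # the product A∩B is a k-event if and only if the sum A∪B is a k-event
    return all(((a & b) in zk) == ((a | b) in zk)
               for a, b in combinations(zk, 2))

print(is_k_family(ZK))                 # True
print(iff_property(ZK))                # True
print(frozenset({1, 2, 3}) in ZK)      # False: {1,2} ∪ {1,3} is not a k-event
```

Here {1, 2} and {1, 3} overlap, so premise (3K) imposes nothing on them; neither their product {1} nor their sum {1, 2, 3} belongs to the family, consistent with the equivalence stated above.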

This condition is met when considering random variables which are significant in analysing possible real events.

The function $X : \Omega \to \mathbb{R}$ is a random variable if for each interval $\langle a, b \rangle \subset \mathbb{R}$ the set $\{\omega \in \Omega : X(\omega) \in \langle a, b \rangle\}$ is an event.

A random variable will be continuous if the function $f : \mathbb{R} \to \mathbb{R}$ is nonnegative and, for each interval $\langle a, b \rangle$, $P(\{\omega \in \Omega : X(\omega) \in \langle a, b \rangle\}) = \int_a^b f(x)\,dx$.

The function $f$ will be the distribution density of the variable $X$. So if for each particle $\psi(q)$ is a wave function which we can define as $f(q) = |\psi(q)|^2$, that is, the distribution density for the position of the particle along a straight line, then $\int_a^b |\psi(q)|^2\,dq$ defines the probability that the particle will be within the interval $\langle a, b \rangle$.
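A short numerical illustration of this construction, assuming a Gaussian wave function (an invented example, not one taken from the chapter): the midpoint rule approximates $\int_a^b |\psi(q)|^2\,dq$ as the probability of finding the particle in $\langle a, b \rangle$.

```python
import math

def psi(q):
    # assumed example wave function: normalized Gaussian, psi(q) = pi^(-1/4) e^(-q^2/2)
    return math.pi ** -0.25 * math.exp(-q * q / 2.0)

def f(q):
    # f(q) = |psi(q)|^2, the distribution density of the particle's position
    return abs(psi(q)) ** 2

def prob(a, b, n=100_000):
    # midpoint-rule approximation of P(q in <a, b>) = integral_a^b |psi(q)|^2 dq
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

print(round(prob(-10.0, 10.0), 6))   # 1.0 -- the particle is somewhere on the line
print(round(prob(-1.0, 1.0), 4))     # 0.8427 -- probability of landing in <-1, 1>
```

The density integrates to 1 over the whole line, as a distribution density must, while any finite interval receives a probability strictly between 0 and 1.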

In probability theory, the pair of random variables $(X, Y)$ is called a two-dimensional (bivariate) variable. A nonnegative function $h(x, y)$ is called the distribution density of a bivariate random variable $(X, Y)$ if for any numbers $a < b$ and $c < d$ there is equality:

$$P(\{\omega \in \Omega : X(\omega) \in \langle a, b \rangle \land Y(\omega) \in \langle c, d \rangle\}) = \int\_a^b \int\_c^d h(x, y)\, dy\, dx \tag{7}$$

We can also demonstrate that

$$\mathbf{f}\left(\mathbf{x}\right) = \int\_{-\infty}^{+\infty} \mathbf{h}\left(\mathbf{x}, \mathbf{y}\right) d\mathbf{y} \tag{8}$$

is the distribution density of the variable $X$, and $g(y) = \int_{-\infty}^{+\infty} h(x, y)\,dx$ is the distribution density of the variable $Y$.

If instead of classical probability we employ quantum-mechanical probability density amplitudes, we arrive at a correct definition. This leads us to conclusively abandon the "objective realism" which determines classical probability and replace it with quantum probability. To examine our reasoning, we shall consider the following example. A pair of random variables $(q, p)$ is given, with known distribution densities. We need to establish the distribution density of the bivariate random variable $(q, p)$ as a nonnegative function $h(q, p)$ such that

$$\text{1.} \int\_{-\infty}^{+\infty} h(q, p)\, dp = |\psi(q)|^2 \text{ and } \int\_{-\infty}^{+\infty} h(q, p)\, dq = |\phi(p)|^2.$$

Following Leon Cohen's interpretation, there exist functions fulfilling conditions (1) and (2), but a function fulfilling all three conditions does not exist. Consequently, we cannot go too far in probabilistic interpretations of quantum mechanics.

Even so, we can treat both $p$ and $q$ as random variables, but if we consider $p$ and $q$ jointly, we are leaving probability theory behind: the pair $(p, q)$ is not a random variable. Instead, we must broaden the applicability of probability theory. None of the above premises contradict our knowledge of real-life phenomena inside road tunnels, so it seems justifiable to use quantum probability in risk assessment calculation models. Quantum physics focuses primarily on an object's state which may, for its final manifestation, depend on the tools of analysis we use. The state of the object at a given moment is represented by the direction (radius) in the Hilbert space. A Hilbert space illustrates the type of phase space in which referring to real numbers alone is incomplete. A given state should be treated as a superposition of all possible states at every moment, and quantum mechanics typically deals with complex vector spaces. The resultant model of reality is very rich and composed of many interconnected structures whose states are synchronised, and the probability of any state's occurrence at any given moment is always the same.

The conclusion to which quantum physics points us is that event occurrence or nonoccurrence probability is always the same at 50%, and the certainty of this result will be the derivative of the set probability distribution.

**3. Risk identification**

If we want to apply mathematical models to risk analysis, we must clarify our premises and definitions. Risk, in popular understanding, measures the possibility of loss of a given state and may be positive (profit) or negative (loss). Most commonly, "risk" is applied in the context of safety. Most people identify safety as a primary need, without which they experience anxiety and insecurity. It is psychological needs like these that cause individuals, societies, states and organisations to act on their environments in order to remove or reduce factors which increase anxiety, fear, uncertainty or insecurity. As a result, no matter how we define safety, it will ultimately remain an individual interpretation of a given phenomenon. To some people, dangerous actions which, if successful, will make them a hero seem right, and their evaluation of possible consequences does not stop them from taking actions that would cause fear and inaction in another person. In this context, the security of larger organisations should not rely solely on such subjective assessments. State security is not the same as the sum total of the individual securities of each of the state's citizens, and the safety of an organisation is not tantamount to the safety of each of its stakeholders. In the aftermath of the financial crash which bankrupted many companies in 2008, renewed efforts were undertaken to clarify safety for use both in financial management and in other areas of life. In line with the proposed guidelines, safety must be defined as freedom from unacceptable risk. In process safety procedures used in chemical process facilities, "safety" is understood as the absence of unacceptable risk to health, life, property or environment, whereas risk is the product of the probability (frequency of occurrence) of a given phenomenon and the scale of losses (size of undesired results), formulated as

$$\text{risk} = \text{probability} \times \text{results} \tag{9}$$

The use of this formula is, however, vitiated by the cognitive determinism of the person identifying the probability and the results. With the 2009 ISO 31000 standard, a new understanding of risk has been proposed. Both the standard and the UN Recommendations on the Transport of Dangerous Goods [6] redefine "risk" as the "effect of uncertainty on objectives". The standard is a collection of frameworks, processes and rules which ought to be complied with during the risk assessment process in every organisation, commercial and otherwise. Building on the earlier considerations, we can reformulate risk as follows:

$$\text{risk} = |\text{uncertainty}|\,\text{objective} \tag{10}$$

where "uncertainty" is defined as the blurred probability of an event which cannot be foreseen with absolute certainty, and all the related possibilities and probabilities are variable and possible.

In using measurement instruments (in this context, mathematical analysis) to measure a quantum state, a certain aspect of this state must be adjusted to the state of the instrument used. This is called an observable. In accordance with quantum mechanics' second postulate, each observable is represented by a linear map (a Hermitian operator) acting in a Hilbert space, and the eigenvalues of this