**Uncertainties and Risk Analysis Related to Geohazards: From Practical Applications to Research Trends**

Olivier Deck and Thierry Verdel

*LAEGO – Laboratoire Environnement Géomécanique et Ouvrages, Université de Lorraine, Ecole des Mines de Nancy – Parc de Saurupt – F 54042 – Nancy Cedex - France* 

#### **1. Introduction**

Geohazards are hazards that involve geological or geotechnical phenomena such as earthquakes, landslides or subsidence. Such hazards are generally classed as natural hazards even if their origin is not always natural, as with mining subsidence, which is the consequence of industrial underground excavations. Geohazards are mostly investigated for the purpose of risk analysis. Studies first concern hazard (HAZUS®MH MR4, Romeo *et al.* 2000, Wahlström and Grünthal 2000) and vulnerability assessment (Zhai *et al.* 2005, McGuire 2004, Hazus 1999, Spence *et al.* 2005, Ronald *et al.* 2008), and secondly risk assessment and management (Karmakar *et al.* 2010, Merad *et al.* 2004). However, risk management must deal with many uncertainties that concern different aspects of risk assessment. This chapter aims to clarify the interactions between risk management and uncertainties within the context of geohazards.

Uncertainties may first be semantic, when they concern the definition of the vocabulary used in risk analysis related to geohazards. Risk is generally synthesised as the conjunction of hazard and vulnerability (Karimi and Hüllermeier 2007), but many definitions are available for both terms. Moreover, these definitions themselves use terms that are much discussed in the literature, such as resilience (Klein *et al.* 2003). As a consequence, a precise definition of risk is necessary, in a given context, to avoid the many misunderstandings amongst public authorities, scientists and citizens that may arise from semantic problems. Such a precise definition should not be interpreted as better than others but as a consensual definition adapted to the specific context of each study. This goal is addressed in Section 2 of this chapter.

The third section addresses the relationship between risk and uncertainties and the identification and classification of the different possible uncertainties. While many authors consider aleatoric and epistemic uncertainties (Bogardi 2004, Adger 2006, Ezell 2007) as the two main groups, this section also considers other classifications and highlights definitions of uncertainties.

Finally, this chapter focuses on two specific aspects of the uncertainties and risk analysis related to geohazards: risk prioritisation and vulnerability assessment. These two aspects are illustrated with recent trends developed in the field of risk management within the context of mining subsidence hazards. Some final remarks are offered in Section 4.


#### **2. Risk, hazard and vulnerability**

Before any risk analysis, a precise definition of the risk is required to avoid misunderstandings amongst public authorities, scientists and citizens, the main actors involved in or concerned with such an analysis.

Many definitions are available, and none of them should be considered "the best". Of greatest importance is being aware of the complexity of risk and the fact that each definition focuses on different aspects of risk. In this chapter, definitions given by the United Nations (UN/ISDR 2004) are first presented as a reference and then discussed by comparison with other definitions.

#### **2.1 About risk**

The International Strategy for Disaster Reduction defines Risk as "the probability of harmful consequences or expected losses (deaths, injuries, property, livelihoods, economic activity disrupted or environment damaged) resulting from interactions between natural or human-induced hazards and vulnerable conditions" (UN/ISDR 2004). UN/ISDR then adopts the classical definition expressed by the formula: Risk = Hazard × Vulnerability. A note specifies that "some disciplines also include the concept of exposure to refer particularly to the physical aspects of vulnerability".

Risk is also a mathematical concept, mostly used in economics and engineering and defined as the mathematical expectation of losses or gains. This definition takes into account the occurrence probability of each possible consequence in terms of gains or losses. Therefore, it is usually necessary to estimate the occurrence probability of every event responsible for the consequences.
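As a minimal sketch of this definition, the expected loss can be computed by weighting each possible consequence by its occurrence probability. The scenarios and figures below are invented for illustration:

```python
# Risk as the mathematical expectation of losses: each possible
# consequence is weighted by its occurrence probability.
# All scenario probabilities and loss values are invented.
scenarios = [
    ("no damage",     0.90, 0.0),
    ("minor damage",  0.08, 50_000.0),
    ("severe damage", 0.02, 1_000_000.0),
]

# Expected loss = sum over events of (probability x loss).
expected_loss = sum(p * loss for _, p, loss in scenarios)
print(expected_loss)  # about 24000.0
```

Note that the computation requires a probability and a monetary loss for every scenario, which is exactly the information that is rarely available in practice.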

However, in natural risk assessment or industrial risk assessment, the mathematical expectation of losses is rarely computable because of uncertainties. One usually prefers to work with a risk matrix based on simple formulas, such as Risk = Frequency × Severity for industrially generated risks (Cox 2009, Suddle 2009, Aven 2010) or Risk = Hazard × Vulnerability for naturally generated risks (Karimi and Hüllermeier 2007), where Frequency, Severity, Hazard or Vulnerability are defined using simple scales with few levels.
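A minimal sketch of such a risk matrix, assuming four-level ordinal scales and class thresholds chosen here for illustration (neither is taken from the cited works):

```python
# A simple risk matrix: Risk = Hazard x Vulnerability, with each factor
# rated on a 1-4 ordinal scale.  The scale bounds and the thresholds
# separating the risk classes are illustrative assumptions.
def risk_level(hazard: int, vulnerability: int) -> str:
    """Classify risk from two ordinal ratings (1 = lowest, 4 = highest)."""
    if not (1 <= hazard <= 4 and 1 <= vulnerability <= 4):
        raise ValueError("ratings must be between 1 and 4")
    score = hazard * vulnerability
    if score <= 3:
        return "low"
    if score <= 8:
        return "moderate"
    return "high"

print(risk_level(1, 2))  # low
print(risk_level(4, 3))  # high
```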

Even for these simpler definitions, risk assessment involves many difficulties. Consequently, most practical applications for risk analysis related to geohazards aim to identify different geographical areas associated with a reference event, i.e., the largest event in intensity that is likely to occur over a specific area during a fixed period of time. In many cases, the risk analysis is limited to the hazard assessment, and the vulnerability is basically investigated with an identification of the assets only. These assessments and the associated zoning are used to define legal specifications as standards for civil engineering projects.

On the whole, the definition of risk is a challenge, but it is necessary to avoid confusion and misunderstandings. In this chapter, we adopt the definition suggested by the UN/ISDR (2004), where a risk is defined by the two components of hazard and vulnerability. In this definition, hazard is a notion that includes both the probability of an event and its intensity, while vulnerability characterises the assets' susceptibility to damage.

#### **2.2 About hazard**


The term hazard is primarily used to designate feared natural events as opposed to industrial accidents. This common designation is a consequence of the chosen risk definitions, which do not consider the same components: hazard × vulnerability for natural risks and frequency × severity for industrial risks. Natural hazards include geoenvironmental hazards, such as earthquakes, floods, ground movements (e.g., landslides or subsidence) and fire. The International Strategy for Disaster Reduction defines a Hazard as "*a potentially damaging physical event, phenomenon or human activity that may cause the loss of life or injury, property damage, social and economic disruption or environmental degradation*".

The definition of hazard is, on the whole, clearly stated as the combination of both the probability of occurrence of an event (e.g., an earthquake or flood) and the intensity of the event. Assessment of these two components is based on different methods, which can be empirically, theoretically or statistically defined. Whatever the method used, assessments of both the intensity and the probability are affected by uncertainties because of poor knowledge.

When natural hazards are cyclic, i.e., when phenomena are assumed to be stationary, a Poisson statistical process can be considered. If historical data are available and sufficient in number, statistical analysis can then be used to assess a return period of the hazard in a given place or the probability of occurrence during a fixed period of time. For extreme events, historical data may be insufficient in number, and assumptions are necessary with respect to, for instance, the Gutenberg-Richter law for seismic activity or the Weibull statistical distribution for floods.
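Under the stationary Poisson assumption described above, the probability of at least one occurrence within a time window follows directly from the return period. A short sketch, with purely illustrative figures:

```python
import math

def occurrence_probability(return_period: float, window: float) -> float:
    """Probability of at least one event during `window` years, under a
    stationary Poisson process with the given return period (in years)."""
    rate = 1.0 / return_period          # mean number of events per year
    return 1.0 - math.exp(-rate * window)

# A "100-year" flood has roughly a 63% chance of occurring at least once
# in any given 100-year window -- not a certainty.
print(round(occurrence_probability(100.0, 100.0), 3))  # 0.632
```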

The advantage of a statistical analysis is that predictions can be validated by observations at a certain level of confidence. When a very large number of historical data are available, they can be used to assess both the trend of a hazard (probability and intensity) and the sensitivity to each parameter, to understand the physical and mechanical phenomena that lead to an event. When historical data are few in number, as is often the case for ground movements such as subsidence and landslides, only the trend can be studied. However, risk analysis may require a good understanding of the influence of a set of parameters to go deeper into the analysis and the prediction of occurrence. Statistical analyses can then be combined with analytical models to capture the influence of the studied parameters.
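As an illustration of combining statistics with a physical law, the Gutenberg-Richter relation log10(N) = a − b·M can be fitted by least squares to a catalogue of event counts. The catalogue below is synthetic, not real seismicity data:

```python
# Illustrative least-squares fit of the Gutenberg-Richter law,
# log10(N) = a - b * M, where N is the number of events with
# magnitude >= M.  The catalogue below is synthetic.
import math

counts = {3.0: 1000, 4.0: 100, 5.0: 10, 6.0: 1}   # magnitude M -> N(>= M)

xs = list(counts)                                  # magnitudes
ys = [math.log10(n) for n in counts.values()]      # log10 of the counts

n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
b = -slope              # the Gutenberg-Richter b-value
a = my - slope * mx     # the activity level a

print(b, a)  # 1.0 6.0 for this perfectly linear synthetic catalogue
```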

Finally, the choice of the intensity parameter is difficult because it corresponds to a simplification of the phenomenon, as in the following two examples:

- The height of water for flooding situations might be chosen, even though the duration of the flood and the speed of the stream may also be important;
- The peak ground acceleration for earthquakes might be chosen, even though the amount of damage also depends on the duration of seismic activity and its spectra; the same applies to the volume of unstable rock masses for landslides and other conditions.
When different parameters are available, it is necessary to select the one that is the most appropriate to assess the damage. Indeed, the two concepts of hazard and vulnerability are not completely independent. For example, in the case of ground subsidence, the maximal vertical ground subsidence is appropriate to assess the intensity with regard to the modification of rivers or pipe flow, whereas the horizontal ground strain, which is not maximum in the same area as the vertical subsidence, is appropriate to assess the intensity with regard to the building resistance.

On the whole, hazard is the component of risk that is the most investigated and the easiest to assess and to plot on a map for risk management purposes. When the hazard consists of a range of possible intensities associated with different occurrence probabilities, engineering studies generally simplify the analysis and consider a maximum reasonable event. This event may correspond to the maximum intensity that has already occurred, when such data are available, or to a selected occurrence probability or return period (a one-hundred-year return period for flooding, for example). In contrast, vulnerability analysis involves more difficulties due to its more complex definition.

#### **2.3 About vulnerability**

The concept of vulnerability is used in many definitions of risk, but its definition still raises discussions. The United Nations, through the International Strategy for Disaster Reduction, defines vulnerability as "*the conditions determined by physical, social, economic, and environmental factors or processes, which increase the susceptibility of a community to the impact of hazards"* (UN/ISDR 2004).

A comparison of definitions is useful to grasp the notions included in the term "vulnerability". Ezell (2007) summarises 14 definitions of vulnerability, and Griots and Ayral (2001) make an inventory of 17 definitions and split the vulnerability concept into two elementary notions: 1) the notion of sensitivity, susceptibility, weakness and predisposition, and 2) the notion of damage, impact, consequences and losses. To study vulnerability, then, it is necessary to assess the susceptibility (first concept) of exposed assets to the considered hazard and the potential consequences (second concept) that may occur as a result.

The susceptibility (first concept) depends on different factors that are not only physical but also social. Schmidtleim et al. (2008) give an example that shows the importance that should be given to the social characteristics of the concerned community in the case of a hurricane. Other definitions lead to the same conclusion. For example, Haimes (2006) defines vulnerability as "*the manifestation of the inherent states of the system (e.g., physical, technical, organizational, cultural) that can be exploited to adversely affect (cause harm or damage to) that system*" to measure the risks to critical infrastructures from terrorist attacks and natural disasters.

In France, the Ministry in charge of the Environment defines vulnerability as the "level of foreseeable consequences of one natural phenomenon upon assets" in which assets are "people, goods, activities, means, heritage... likely to be affected by a natural hazard" (MATE, 1997). This definition highlights that damage and its consequences (concept 2) may involve a large number of different assets.

For human-caused security threats, McGill *et al.* (2007) describe five dimensions of the asset-level consequences: fatalities, which take into account the number of deaths and injuries; repair costs, measured in dollars; the value of assets lost (e.g., goods, property, and information), measured in dollars; the time to recuperate, measured in units of time; and environmental damage. This definition highlights the fact that consequences may be of different types and that it is simplistic to reduce them to a cost. Mining subsidence events in Lorraine, France, showed, for instance, that social damage occurred in addition to the direct costs associated with damage.
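The five consequence dimensions can be captured in a simple data structure. This is a sketch; the field names and example values are assumptions made here, not taken from McGill *et al.* (2007):

```python
from dataclasses import dataclass

@dataclass
class AssetConsequences:
    """Five consequence dimensions, following the structure described by
    McGill et al. (2007).  Field names and units are illustrative."""
    fatalities: int            # deaths and injuries
    repair_cost: float         # dollars
    assets_lost_value: float   # dollars (goods, property, information)
    recovery_time_days: float  # time to recuperate
    environmental_damage: str  # qualitative description

    def monetary_total(self) -> float:
        """Only the dimensions already expressed in dollars can be summed;
        the other dimensions resist reduction to a single cost."""
        return self.repair_cost + self.assets_lost_value

c = AssetConsequences(0, 120_000.0, 30_000.0, 90.0, "minor soil pollution")
print(c.monetary_total())  # 150000.0
```

The `monetary_total` method makes the chapter's point explicit: fatalities, recovery time and environmental damage are deliberately left out of the sum.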


Schmidtleim et al. (2008) define vulnerability as "the likelihood of sustaining losses from some actual or potential hazard event, as well as the ability to recover from those losses". This definition highlights the importance of resilience. Resilience is a concept that is largely discussed by Klein et al. (2003). Based on several definitions, they suggest restricting this term to describe "the amount of disturbance a system can absorb and still remain within the same state or domain of attraction; the degree to which the system is capable of self-organisation; the degree to which the system can build and increase the capacity for learning and adaptation".

Based on similar considerations, Bogardi (2004) reveals further uncertainties through several questions: the question of "how far should vulnerability be seen as the 'susceptibility' alone or being rather the product of hazard exposure and that very susceptibility?"; the question of the "proper scale (national, regional, community, household or individual) to capture and to quantify vulnerability"; and the question of "whether (social) vulnerability can adequately be characterised without considering simultaneously the response (coping) capacity of the same social entity".

From a theoretical point of view, the susceptibility of assets to damage may depend on the intensity of the hazard. Considering that this intensity may differ widely in its probability, a study of the vulnerability might lead to as many elementary studies as the number of potential hazard intensities. Because of the number of studies that this theoretical point of view would lead to, engineers usually consider a reference event to make a single assessment of the vulnerability for a specific value of the hazard intensity.
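The contrast between the theoretical approach (one elementary study per intensity level, weighted by probability) and the reference-event shortcut can be sketched as follows, with invented probabilities and damage values:

```python
# Comparing a full vulnerability study (one damage estimate per possible
# hazard intensity, weighted by its probability) with the engineering
# shortcut of a single reference event.  All numbers are invented.

# intensity level -> (annual occurrence probability, expected damage)
intensities = {
    "low":    (0.10, 1_000.0),
    "medium": (0.02, 20_000.0),
    "high":   (0.002, 500_000.0),
}

# Theoretical approach: expectation over every intensity level.
expected_damage = sum(p * d for p, d in intensities.values())

# Practical shortcut: a single vulnerability assessment for one
# reference event (here, the "high" intensity scenario).
_, reference_damage = intensities["high"]

print(expected_damage, reference_damage)
```

The two quantities answer different questions: the expectation supports cost-benefit reasoning, while the reference event supports design against a chosen worst case.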

The second question is also considered by Balandier (2004), who highlights the fact that the same element at risk does not have the same relative importance depending on the type of hazard and the surface area concerned (e.g., country, city, district).

The third question refers to the resilience concept that has already been discussed.

In summary, the term vulnerability has many different meanings. It is of greatest importance, then, to clearly define the intended meaning before engaging in any study. Figure 1 shows a possible synthesis, where the vulnerability is split into three components:
1. Weakness, which includes the physical vulnerability and is linked to the strength of assets (buildings and facilities in particular). The vulnerability increases as the value of the weakness increases.
2. The stakes value, which includes the functional vulnerability and is linked to losses associated with functional damage. The vulnerability increases with the increase of the stakes value.
3. Resilience, as defined by Klein *et al.* (2003). The vulnerability decreases with the increase of the resilience.


In this synthesis, the assets are used to define elements that may be damaged (e.g., people, buildings, infrastructures, goods, and activities). The stakes value is used to define the importance of these elements according to the cost of repairs or the possible other consequences of damage (e.g., functional damage and social damage).
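One possible way to combine the three components into a single index is sketched below; the multiplicative form and the 0-1 scales are assumptions made here for illustration, not a formula from the chapter:

```python
# A toy vulnerability index combining the three components of Figure 1:
# it grows with weakness and stakes value and shrinks with resilience.
# The multiplicative form and the 0-1 scales are illustrative choices.
def vulnerability_index(weakness: float, stakes: float, resilience: float) -> float:
    for name, v in (("weakness", weakness), ("stakes", stakes),
                    ("resilience", resilience)):
        if not 0.0 <= v <= 1.0:
            raise ValueError(f"{name} must be in [0, 1]")
    return weakness * stakes * (1.0 - resilience)

# A fragile, high-value, poorly resilient asset scores high...
print(vulnerability_index(0.9, 0.8, 0.1))   # about 0.648
# ...while strong coping capacity lowers the score for the same asset.
print(vulnerability_index(0.9, 0.8, 0.9))
```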

Uncertainties and Risk Analysis Related to

of uncertainty that they primarily consider.

samples) (Baecher and Christian, 2002).

person to another (Baecher and Christian, 2002).

epistemic uncertainty.

Geohazards: From Practical Applications to Research Trends 165


Fig. 1. Synthesis of different definitions for the Vulnerability term.

We have so far discussed risk, hazards and vulnerability, but without taking into account uncertainty. Next, we will investigate uncertainty as opposed to risk.

#### **3. Uncertainty versus risk**

A simple illustration of the notion of uncertainty is given by (Chacko 1990): when you throw a coin or a die, you know what results are possible, but it is unclear which of these outcomes will be achieved. For Chacko, uncertainty is not the ignorance of the possible outcomes (there is even a certainty on this issue in the case of the dice game), but the indeterminacy of the result that will happen. This uncertainty results, in particular, from the equi-probability of possible outcomes that prevents any rational estimate of the result. If the possible outcomes were not equally probable, the player would be well advised to bet on the number with the highest probability of occurrence. In making this choice, the player would certainly take a risk (that a different number falls anyway) but a calculated risk.

We see in this example, if a little simplistic and formal, that the concepts of uncertainty and risk are related. In common parlance, these concepts are often confused, but they refer to different situations that (Knight 1921) proposed to define as follows: a "risky" situation is a situation in which random outcomes can be attached to an objective probability distribution, while the outcomes of an "uncertain" situation cannot be associated with any such probability distribution. This distinction has been discussed at length, particularly by economists. The notion of risk as understood by Knight is indeed "*only valid for repetitive-type decisions taken within a relatively stable economy*" (Galesne, 1996), while for (Hoskins 1973), this definition would in fact "*eliminate any reference to the notion of risk as part of the business life*" because the different possible states of the world are, in practice, difficult to define objectively. Moreover, economic practice has repeatedly shown that one can assign subjective probabilities, the fruits of experience, expertise or beliefs, to improve forecasting and decision making. Therefore, many authors have preferred to define a situation at risk as a situation for which a probability distribution, whatever its nature, objective or subjective, can be associated with its possible states. Conversely, an uncertain situation would be a situation in which no probability distribution can be assigned (Galesne, 1996). Within the same idea, (Callon *et al.* 2001) suggested a simple definition: "*we know we do not know, but that's about all we know: there is no better definition of uncertainty*". In contrast, the risky situation is a situation where we know something, whether this knowledge is probabilistic or not (objective or subjective), provided that it can assist in making a decision.


In summary, we can state that the notion of risk is associated with a rational decision based on the knowledge, which is possibly limited, of the states of the world, while uncertainty refers to a difficulty in describing, in deciding or in assessing the consequences of possible decisions. Risk is something for engineers or managers, while uncertainty would be more for the researcher.

However, this differentiation is mainly that of economists, decision theorists or even sociologists. Physicists and engineers are more tolerant of speaking about uncertainty and even of describing probabilistic states, and we suggest introducing now the different types of uncertainty that they primarily consider.

As shown in (Hacking 1975), from its emergence as a field (in approximately 1660), probability has been considered to have two faces ("Janus-faced" nature of probability). On one side, probability is statistical and applies to stochastic distributions of random processes; on the other side, it is epistemic and may express a degree of belief in the truth of propositions having no statistical nature. It is remarkable that a single term could continue, to this day, to designate these two radically different concepts: frequency and belief, objective and subjective probability, *a posteriori* and *a priori* probability, random and epistemic probability.

On this basis, different authors have tried to qualify the different types of uncertainties, such as (Haimes 2004), who suggested distinguishing between aleatoric uncertainty and epistemic uncertainty.

In geoengineering or geotechnical engineering, Benjamin and Cornell (1970), Ang and Tang (1984), Veneziano (1994), Paté-Cornell (1996), Hofer (1996) and, more recently, Baecher and Christian (2000, 2003) have also discussed the meanings of uncertainty in their field.

Aleatoric uncertainty is sometimes called the natural variability, being inherent in nature and not reducible (Gilbert and Tang, 1995). It has also been called objective or external uncertainty (NRC, 1996). This uncertainty is the result of the variability observed in known populations (Paté-Cornell, 1996), and it relates to long series of similar events (Baecher and Christian, 2003). Aleatoric uncertainty can also be described, as Keynes did in his 1921 Treatise on Probability, as a measurable uncertainty that can be quantified by statistical methods. For simplicity, we can say that this uncertainty refers primarily to uncertainties in the data resulting from their random nature in space and time and from their small number, their inconsistency, their poor handling, transcription errors or low representativeness (in samples) (Baecher and Christian, 2002).
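As an illustration of this statistical treatment of data uncertainty, a minimal sketch might look as follows (the cohesion values are hypothetical, not taken from the chapter):

```python
import math
import statistics

# Hypothetical cohesion measurements (kPa) from a small sample of soil tests.
cohesion = [21.5, 24.0, 19.8, 22.7, 25.1, 20.9, 23.4, 22.0]

n = len(cohesion)
mean = statistics.mean(cohesion)
sd = statistics.stdev(cohesion)  # sample standard deviation

# Rough 95% confidence interval on the mean (z ~ 1.96; a Student t-value
# would be more appropriate for n = 8, this is only illustrative).
half_width = 1.96 * sd / math.sqrt(n)
print(f"mean = {mean:.1f} kPa, 95% CI ~ [{mean - half_width:.1f}, {mean + half_width:.1f}] kPa")
```

The confidence interval (or a fitted probability function) then carries the data uncertainty forward into the risk calculation instead of a single deterministic value.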

Epistemic uncertainty is itself often referred to as model or parameter uncertainty reflecting a lack of knowledge (Baecher and Christian, 2003; Gilbert and Tang, 1995; NRC, 1996) or as a subjective or internal uncertainty (NRC, 1996). Some authors use the term ambiguous uncertainty (Paté-Cornell, 1996; Smith, 2000). This type of uncertainty reflects the inability of a model to represent the modelled physical reality, the difficulty of choosing among competing models, or the inability of a model to remain valid over time or to integrate new data (Baecher and Christian, 2002), which also refers to the robustness of a model. When this uncertainty concerns the value of a parameter, it may result from the uncertainty of the data (random uncertainty). Epistemic uncertainties may also reflect uncertainty regarding the veracity of a scientific theory or a belief about the occurrence of an event. These uncertainties do not rely (or rely very little) on experimental data; they are subjective and typically vary from one person to another (Baecher and Christian, 2002).
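A small sketch of epistemic probability as a degree of belief that is reduced by evidence, using Bayes' rule (all probabilities below are hypothetical, for illustration only):

```python
# Epistemic uncertainty as a degree of belief, updated by evidence.
# Hypothetical numbers: prior belief that an abandoned pillar panel is
# unstable, revised after an inspection test with known error rates.
p_unstable = 0.10            # subjective prior (expert belief)
p_pos_if_unstable = 0.80     # P(test positive | unstable)
p_pos_if_stable = 0.15       # P(test positive | stable), false alarms

# Bayes' rule after one positive test result
p_pos = p_pos_if_unstable * p_unstable + p_pos_if_stable * (1 - p_unstable)
posterior = p_pos_if_unstable * p_unstable / p_pos
print(f"belief after positive test: {posterior:.2f}")
```

Unlike aleatoric variability, this belief is attached to a single, fixed (but unknown) state of the world, and different experts starting from different priors would obtain different posteriors.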


Hofer (1996) provided a good illustration of the difference between aleatoric and epistemic uncertainty. Consider two dice, one of which (A) is covered and left untouched, and it is unknown which side is up, while the other (B) is being cast continuously. The uncertainty about the number shown is then epistemic in the case of (A) because there is a lack of knowledge about it, while it is aleatoric in the case of (B). We can estimate the likelihood (objective probability) of each number in case (B), while for (A) we can only assess subjective probabilities based on what we know about the preferences of the person who put the dice on the table.
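Hofer's illustration can be sketched numerically; the simulation below (illustrative only) contrasts the objective frequencies available for die (B) with the purely subjective uniform belief one might assign to die (A):

```python
import random
from collections import Counter

random.seed(0)

# Die B is cast continuously: aleatoric uncertainty, and the
# frequencies of the outcomes can be estimated objectively.
casts = [random.randint(1, 6) for _ in range(60000)]
freq = Counter(casts)
estimate_B = {face: freq[face] / len(casts) for face in range(1, 7)}

# Die A is covered and left untouched: the outcome is fixed but unknown
# (epistemic). Without further knowledge we can only state a subjective
# prior, e.g. a uniform belief over the six faces.
belief_A = {face: 1 / 6 for face in range(1, 7)}

print(estimate_B)  # each frequency close to 1/6
print(belief_A)
```

More casts sharpen the estimate for (B), while no amount of casting helps for (A); only lifting the cover (new knowledge) removes that uncertainty.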

However, this distinction is sometimes a modeller's choice. For instance, in a calculation with a numerical model, it makes a real difference whether the cohesion of a material is considered as a spatial random variable (aleatoric uncertainty) or as an ordinary random variable with a unique and constant, but unknown, value to be selected (epistemic uncertainty).
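The modeller's choice described above can be sketched with a toy Monte Carlo experiment (the cohesion statistics and the averaging over 10 cells are assumptions for illustration): independent spatial variability averages out along the failure surface, while a single unknown constant keeps the full variance:

```python
import random
import statistics

random.seed(1)
N, CELLS = 20000, 10
MU, SD = 25.0, 5.0  # hypothetical cohesion statistics (kPa)

# (a) Cohesion as a spatial random variable: each cell gets its own
# independent value, so fluctuations average out over the surface.
mean_a = [statistics.mean(random.gauss(MU, SD) for _ in range(CELLS))
          for _ in range(N)]

# (b) Cohesion as a single unknown constant: one value is drawn and
# applied everywhere (fully correlated epistemic uncertainty).
mean_b = [random.gauss(MU, SD) for _ in range(N)]

print(f"spread (a): {statistics.stdev(mean_a):.2f} kPa")  # ~ SD / sqrt(CELLS)
print(f"spread (b): {statistics.stdev(mean_b):.2f} kPa")  # ~ SD
```

The same input statistics thus lead to very different uncertainty on the averaged resistance, which is why the aleatoric/epistemic label is partly a modelling decision.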

As suggested by (Haimes 2004), the aleatoric uncertainty category refers to temporal or spatial variability and to individual heterogeneities, while epistemic uncertainty comprises model uncertainty, parameter uncertainty in used models, and uncertainty in decision making from modelling.

While no misunderstanding arises from these definitions, epistemic uncertainty encompasses too many aspects to be practically used as a single concept, which is why (Baecher and Christian 2003) suggested treating the uncertainty in decision making as a third specific category of uncertainty, excluded from the epistemic one.

Similarly, for the purposes of a risk analysis, we have suggested (Cauvin *et al.,* 2008) distinguishing not between two or three categories but between four classes of uncertainty, ranging from a very general to a very specific uncertainty, attributed to (1) the scientific, economic, and political context of the risk study; (2) the expertise applying to deterministic human choices; (3) the use of models; and (4) the randomness and/or the lack of knowledge on data.

Figure 2 illustrates these 4 categories.




 *Resources uncertainty* deals with knowledge about both the general scientific context of the study and its local particularities. More specifically, it concerns the existence of information about the processes being investigated and the objects being studied. *Expertise uncertainty* concerns all of the choices, actions or decisions that can be made by the expert while realising the risk study. It mainly relies on his particular experience as an individual, on his subjectivity and on the way he represents and interprets the information he has gathered.

 *Model uncertainty* is basically induced by the use of tools to represent reality. Finally, *data uncertainty* represents both the natural variability existing in the data, the lack of knowledge about their exact values and the difficulty of clearly evaluating them.

The first 3 categories clearly cover the previously defined epistemic uncertainty, while the last one refers partly to the aleatoric uncertainty (natural variability) and partly to the epistemic uncertainty (lack of knowledge).

Moreover, we suggest in Table 1 some methods that can be carried out to deal with these uncertainties depending on their types.


Fig. 2. Four categories of uncertainty in risk analysis


| Type of uncertainty | Strategy |
|---|---|
| Data | Use of confidence intervals or safety margins; use of probability functions |
| Model | Comparison between model outputs and reality |

Table 1. Some methods to deal with uncertainty, depending on the type of uncertainty (after Cauvin *et al.,* 2008)

#### **4. Research trends for risk analysis**

Research trends concern all of the features of risk. It is always possible to go further in the analysis and in the assessment of the occurrence probability and the intensity of a hazard. A large number of researchers deal with this matter. In this section, two different topics are specifically investigated.


Fig. 3. Development of risk assessment in the Lorraine iron-ore field.


The first is the use of multi-criteria methods for risk assessment. These methods offer several advantages that can improve risk management and help take expert or data uncertainties into account, and they have already been widely used for decision-aid problems.

The second topic is the assessment of the physical vulnerability, i.e., the assessment of the expected physical losses, such as damage to buildings and infrastructure. An interesting approach is the use of vulnerability and fragility functions. These functions are already widely used in seismic risk assessment and are now being developed for other hazards. They offer an effective response to the problematic uncertainties regarding damage assessment because they are probabilistic in nature. Moreover, these functions are easy to compile into software, and they can then be used to develop a probabilistic assessment of the vulnerability.
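As a sketch of such a function, a lognormal fragility curve of the form commonly used in seismic risk assessment can be written as follows (the parameters theta and beta and the intensity measure are hypothetical, for illustration only):

```python
import math

def fragility(im, theta, beta):
    """Lognormal fragility curve: probability that a damage state is
    reached or exceeded at hazard intensity `im`, with median capacity
    `theta` and log-standard deviation `beta`."""
    z = math.log(im / theta) / beta
    # Standard normal CDF expressed through the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical parameters for a masonry building and a ground-movement
# intensity measure; at im = theta the exceedance probability is 0.5.
theta, beta = 2.0, 0.5
for im in (1.0, 2.0, 4.0):
    print(f"P(damage >= moderate | IM = {im}) = {fragility(im, theta, beta):.2f}")
```

Combining such curves with a probabilistic hazard model directly yields a probabilistic estimate of the expected damage, which is what makes the approach attractive under uncertainty.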

#### **4.1 Multi-criteria methods for risk assessment**

Multi-criteria decision-aid methods that have been used and developed in and for economics are now more widely used in all types of problems involving a decision process with uncertainties and the knowledge of the preferences of decision makers.

We are introducing here an example of the use of a multi-criteria method, more specifically, a method named ELECTRE, to deal with a risk zoning problem regarding a mining hazard in Lorraine.

#### **4.1.1 Mining hazard in Lorraine (France)**

The extensive mining activity in the French "Lorraine" region has created a large number of abandoned underground cavities that are now responsible for mining subsidence events, i.e., significant movements at the surface. These events result in serious damage to housing and other buildings in the area of influence of such movements. Mining subsidence is of a highly accidental nature when it takes place over mines exploited by the abandoned room-and-pillar method, even though this method was expected to ensure long-term ground stability. Recent cases of mining subsidence (1996, 1997 and 1999) in the Lorraine iron-mining area highlight the hazard of such mining works when left abandoned.

The subsidence events that have happened in Lorraine have led public authorities to carry out investigations over the entire Lorraine iron-mining field to assess the hazard, vulnerability and risk of the whole territory. The first investigations highlighted the existence of approximately 20 km² of urbanised areas undermined by abandoned works consisting of rooms and pillars.

Different strategies and prioritisations have been put forward since 1996 (see Figure 3). There was an initial prioritisation based on a very simple assessment of the hazard (section 4.1.2). Then, a second prioritisation based on a multi-criteria analysis was carried out (section 4.1.3) to handle more sophisticated considerations regarding both the hazard and the vulnerability (Deck *et al.* 2009).


#### **4.1.2 First ranking system**

As a result of the subsidence events in Lorraine in 1996 (Auboué), 1997 (Moutiers), and 1999 (Roncourt), as well as sinkholes in 1998 (Moyeuvre), public authorities ordered investigations to understand the extent of the problem.


Because of the lack of knowledge regarding subsidence phenomena, the first ranking system was based upon two main considerations:


The first ranking system was in agreement with the regulations of the French urban code (articles R111-2 and R111-3). The system defines three types of areas depending on the maximum possible subsidence and the associated recommendations for building projects, see Table 2.


| Maximum possible subsidence | Recommendations for building projects |
|---|---|
| < 1 m | Surface of building < 400 m², maximum length < 25 m, number of floors ≤ ground floor + 3 |
| < 2.5 m | Surface of building < 150 m², maximum length < 15 m, number of floors ≤ ground floor + 1 |
| > 2.5 m | Building forbidden |

Table 2. The first ranking system used in Lorraine regarding the mining subsidence hazard and corresponding recommendations.

This first ranking system is called step 1 in Figure 3. This step deals mainly with the urban side of the vulnerability and is not suitable for the other aspects of vulnerability, especially human, social, and economic vulnerability.
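The rules of this first ranking system can be sketched as a simple classification function (the thresholds are those reported in Table 2; the pairing of the floor limits with the two lower subsidence classes follows the order in which they appear and is an assumption):

```python
def building_recommendation(max_subsidence_m):
    """Recommendation rule sketched from Table 2 (first ranking system),
    given the maximum possible subsidence in metres; simplified for
    illustration."""
    if max_subsidence_m < 1.0:
        return "building area < 400 m2, length < 25 m, floors <= ground + 3"
    if max_subsidence_m < 2.5:
        return "building area < 150 m2, length < 15 m, floors <= ground + 1"
    return "building forbidden"

print(building_recommendation(0.6))
print(building_recommendation(3.0))
```

The deterministic, threshold-based form of this rule is precisely what limits it to the urban side of vulnerability and motivates the multi-criteria approach of the next section.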

#### **4.1.3 Multi-criteria ranking system**

Supplementary investigations were necessary to perform a more detailed hazard assessment and for the human vulnerability assessment. We developed (Merad *et al*., 2004) a method based upon a multi-criteria analysis using the ELECTRE methodology (Roy, 1985; Roy and Bouyssou, 1993). The mathematical functions included in the method allow the management of a "*complex decision-making problem where the available information is uncertain and imprecise and where knowledge is incomplete*" (Merad *et al*., 2004). This method uses weighted factors for all criteria, and it stresses their relative importance in risk assessment.

In this method, each studied zone is described by a set of criteria that characterises the hazard (probability and intensity of the subsidence) and the vulnerability. Each criterion is expressed in its own unit, and weighting factors are taken into consideration. What is unique in the multi-criteria ELECTRE methods is the specific way used to combine the criteria in the global assessment (here risk assessment) process. Indeed, the global level of risk does not rely on a mean value computed over the criteria but on the comparison of the studied zones in pairs or with predetermined virtual zones characterising the limits between the four risk classes or levels.

More practically, in the case of the ELECTRE III method, all zones are compared with each other based on all criteria. A zone is then considered as more risky than another if a majority of criteria (weighted factors taken into account) give a higher risk for this zone compared with the other. Such a comparison in pairs leads to a global order of the zones from the most risky to the least. The main disadvantage of this method is that any new analysed zone may change the global order already obtained, which is not easy to manage in practice when the objective is to decide actions to be taken on the most risky zones first.
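The pairwise principle described above can be sketched as follows (zone names, criteria and weights are hypothetical; the full ELECTRE III method adds discordance and threshold mechanisms not shown here):

```python
# Weighted criteria; higher score = riskier. Values are illustrative.
WEIGHTS = {"hazard_probability": 0.4, "hazard_intensity": 0.3, "population": 0.3}

zones = {
    "Z1": {"hazard_probability": 3, "hazard_intensity": 2, "population": 5},
    "Z2": {"hazard_probability": 2, "hazard_intensity": 4, "population": 1},
}

def outranks(a, b):
    """True if zone `a` is at least as risky as zone `b` for a weighted
    majority of criteria (sum of weights of supporting criteria >= 0.5)."""
    support = sum(w for c, w in WEIGHTS.items() if zones[a][c] >= zones[b][c])
    return support >= 0.5

print("Z1 outranks Z2:", outranks("Z1", "Z2"))
print("Z2 outranks Z1:", outranks("Z2", "Z1"))
```

Ranking all zones with such pairwise tests yields the global order, and adding a new zone triggers new comparisons that may reshuffle it, which is the drawback noted above.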


change the global order already obtained, which is not easy to manage in practice when the objective is to decide actions to be taken on the most risky zones first.

Therefore, in the Lorraine case, the ELECTRE TRI method has been used. This method consists of comparing each studied zone with 3 predefined virtual zones that a group of experts has considered as the limiting zones between each level (group) of risk. Practically, as illustrated in Figure 4, if you call Zr1 the virtual zone that experts consider to be the limit, for all criteria, between the level of risk 2 and the level of risk 1 (the most risky), and if you want to classify a new zone Zi (dashed line in the figure) relative to Zr1 (resp. Zr2, Zr3), you have to compare Zi with Zr1 (resp. Zr2, Zr3) regarding all criteria, and you decide that Zi is more risky than Zr1 (resp. Zr2, Zr3) if a majority of criteria confirm that decision.

Fig. 4. Principle of the virtual limiting zones (Zr1, Zr2 and Zr3) between classes of risk and comparison of a zone Zi (dashed line) with them

This explanation of the ELECTRE method is somewhat simplified. In fact, the method offers many nuances, which can be found in Merad *et al*. (2004). One of them is the way the method deals with uncertainties.

Because of uncertainties, it is not always easy in practice to say whether the value of a criterion for the studied zone Zi is really higher than the value of the same criterion for the reference zone, Zri. ELECTRE methods deal with such problems by the use of fuzzy logic so that the limits between the risk levels become fuzzy intervals (Figure 5, top).
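The fuzzy comparison just described reduces, for each criterion, to a piecewise-linear local agreement index controlled by two thresholds, q (indifference) and p (preference), and the per-criterion indices are then averaged using the criterion weights. The Python sketch below illustrates the idea; the criterion values, weights and thresholds are invented for illustration and are not those of the Lorraine study.

```python
def local_agreement(g_zi, g_zri, p, q):
    """Local agreement index for one criterion (a value in [0, 1]).

    Returns 1 when Zi clearly reaches the reference level
    (g_zi >= g_zri - q), 0 when it clearly does not
    (g_zi <= g_zri - p), and interpolates linearly in between.
    q is the indifference threshold, p the preference threshold (p > q).
    """
    if g_zi >= g_zri - q:
        return 1.0
    if g_zi <= g_zri - p:
        return 0.0
    return (g_zi - (g_zri - p)) / (p - q)

def global_agreement(zi, zri, weights, p, q):
    """Weighted mean of the local agreement indices over all criteria."""
    total = sum(weights)
    return sum(w * local_agreement(a, b, p, q)
               for w, a, b in zip(weights, zi, zri)) / total

# Hypothetical zone and virtual limiting zone described by three criteria
zi  = [4.0, 2.5, 7.0]   # studied zone Zi
zr1 = [5.0, 2.0, 6.0]   # virtual limiting zone Zr1
w   = [3.0, 1.0, 2.0]   # criterion weights
print(round(global_agreement(zi, zr1, w, p=2.0, q=0.5), 3))  # → 0.833
```

The same computation would then be repeated in the reverse direction (Zr1 against Zi), since the comparison is not symmetrical.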

Figure 5 illustrates the principles of dealing with uncertainties in the ELECTRE TRI method. At the bottom of the figure, we can see how a zone Zi is compared with a virtual zone Zri regarding the criterion j. The value cg(Zi S Zri), called the local agreement index, is a value between 0 and 1 that indicates to what degree we can consider that Zi is more risky than Zri regarding the selected criterion j (x-axis). The figure shows that a certain uncertainty, characterised by both p and q, is considered, so that Zi starts to be considered more risky than Zri when the value of the selected criterion on Zi (called gj(Zi)) becomes higher than gj(Zri) − p. Then, the weighted mean of the local agreement indices over all criteria provides a global agreement index (a value between 0 and 1), which indicates whether Zi is globally more risky than Zri. The same calculations have to be performed to compare Zri with Zi in the reverse direction because the comparison is not symmetrical (i.e., because of uncertainties, Zi can be considered more risky than Zri while Zri is also considered more risky than Zi). Finally, the method can state the risk group to which any zone Zi belongs.

Fig. 5. How the ELECTRE TRI method deals with uncertainties. gj is the value on criterion j, q is the indifference threshold and p is the preference threshold. The top chart shows the uncertainty on each criterion as a band at the left of each Zri profile.

This methodology has been applied to constructed zones (civil security objectives), where the main objective was to identify zones requiring specific surveillance because of their high-risk level. Therefore, the studied zones have been placed in one of the risk groups given in Table 3. An extension has then been applied to non-urbanised zones from a land-use planning perspective.

| Risk level | Recommendations |
| --- | --- |
| Level 1: Very high risk | Real-time monitoring |
| Level 2: High risk | Periodic monitoring, which will become real-time at the first forewarning |
| Level 3: Medium risk | Supplementary investigations are required to assess the need for periodic monitoring |
| Level 4: Slight risk | No monitoring is required; only levelling measurements are made |



Table 3. Second ranking system used in Lorraine regarding the mining subsidence hazard and the corresponding recommendations in terms of monitoring.

Regarding the vulnerability assessment within the risk analysis method, two kinds of assets were considered: buildings and infrastructures. Figure 6 shows the criteria used and their weight factors in the analysis.

In the case of buildings, no other asset is taken into account. The buildings' assets are then assessed with one criterion that may take 5 values, from "business park", which corresponds to a low vulnerability level because of its daytime-only activity, to "city", which corresponds to the highest vulnerability level because of its daytime and nighttime activities and the potential number of affected people. This classification mainly reflects the population vulnerability and, to a lesser degree, the economic or structural vulnerability, which are indirectly taken into account because they increase with population.

The weight factors linked with the probability, intensity and vulnerability criteria raise a question related to the previously given definitions of risk. If risk is the product of hazard and vulnerability, does the sum of the weight factors for each component have to be equal? In the multi-criteria ranking system, the sum of the weight factors related to the hazard criteria reaches a value of 46, while the sum of the weight factors related to the vulnerability criteria reaches "only" a value between 2 and 14, depending on the assets in the area. This difference produces results that are more dependent on hazard than on vulnerability. Consequently, this multi-criteria ranking system may be criticised because it focuses on hazard assessment rather than on risk assessment.
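The imbalance can be quantified directly from the weight sums quoted above: whatever the assets present, the hazard criteria carry at least about three quarters of the total weight. A quick check in Python (the value 8 is simply a mid-range illustration within the reported 2–14 interval):

```python
# Share of the total weight carried by the hazard criteria, for the
# two extremes and a mid-range value of the vulnerability weight sum
hazard_w = 46                 # sum of the hazard (probability + intensity) weights
for vuln_w in (2, 8, 14):     # vulnerability weight sum depends on the assets present
    share = hazard_w / (hazard_w + vuln_w)
    print(f"vulnerability weights = {vuln_w:2d}: hazard share = {share:.0%}")
```

The hazard share ranges from 96% down to 77%, never below three quarters, which is the dominance discussed above.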

Apart from this problem, we are convinced that multi-criteria methods are exceptionally well suited to risk analysis because they easily accommodate uncertainties and inaccuracies, and they can easily take into account experts' or decision makers' opinions in the evaluation process.

This section shows how the influence of uncertainties may be taken into account with multi-criteria methods in order to develop operational tools for experts or decision makers. However, the final results remain largely dependent on the uncertainties about each component of the risk assessment. For instance, the vulnerability assessment is very simple in the presented case and strictly restricted to the presence of assets in the studied area. For this reason, the next section focuses on the vulnerability assessment and the development of vulnerability functions that take some uncertainties into account.

Fig. 6. Criteria and weight factors used in the actual assessment of risk, hazard, and vulnerability in the Lorraine region regarding the iron field mining subsidence hazards.

#### **4.2 Building vulnerability**

Today, building vulnerability is primarily investigated with vulnerability and fragility curves. These curves are a powerful way to manage data and model uncertainties associated with the assessment of building damage. Many curves have been empirically developed for different hazards based on case studies and statistical analyses of existing damage. However, each curve may be specific for a given place and a given event, and their use in another context may be contestable. An interesting way to strengthen empirical curves and reduce uncertainties associated with the choice of vulnerability curves is to develop theoretical curves that may be calibrated for any specific context. Moreover, such theoretical curves can then be used for sensitivity analysis and assessment of the dependency on data and model uncertainties. The present chapter illustrates a methodology to develop theoretical vulnerability curves based on Monte-Carlo simulations within the context of mining subsidence hazard.

#### **4.2.1 Concept of vulnerability curves**

The vulnerability of buildings and territories to natural hazards is often studied with vulnerability and fragility curves that allow assessment of the damage distribution for a given number of building types in relation to the event intensity, see e.g., earthquakes (HAZUS 1999, McGuire 2004), flooding (USACE 1996, Jonkman *et al.* 2008, HAZUS), volcanoes (Spence, 2005), and tsunamis (Ronald and Hope 2008). Fragility and vulnerability curves are thus developed for a given building type, and they allow quick and realistic damage assessment of all buildings grouped into the same type.

Vulnerability and fragility curves use the following three main types of input data:


1. A damage scale
2. A building typology
3. An intensity criterion

For example, the EMS (Grunthal, 1998) considers a six-level damage scale that consists of no damage (D0), slight damage (D1), and so on, up to very heavy damage (D5). Most of the existing methods define a similar number of damage levels (four levels in HAZUS and six levels in volcanic risk assessment (Spence *et al.* 2005)).

The building typology must be defined according to the most important parameters relevant to the resistance of the buildings against the considered hazard. For instance, the building materials (e.g., concrete, wood, and masonry), the quality of the construction, the type of foundations, and the global stiffness of the building are important in earthquake engineering. For example, the EMS (Grunthal, 1998) considers 11 main building types.

The criterion for the event intensity may be a physical parameter (height or speed for a tsunami or acceleration for an earthquake) or an empirical one (earthquake intensity in EMS).

Fragility curves provide the probability of reaching or exceeding a given damage state as a function of the intensity of the natural event (see Figure 7b), and they are usually modelled by lognormal functions. A crucial point is that fragility curves clearly take into account that not all buildings of the same type will suffer the same level of damage for a given event intensity, which can be interpreted as the consequences of the uncertainties about the data of both the building characteristics and the hazard intensity.
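As an illustration of the lognormal model mentioned above, the short Python sketch below evaluates one fragility curve, i.e., the probability of reaching or exceeding one damage grade at a given intensity; the median and dispersion values are hypothetical, not calibrated ones.

```python
import math

def fragility(intensity, median, beta):
    """Lognormal fragility curve: P(damage >= Dk | intensity).

    median -- intensity at which the exceedance probability is 0.5
    beta   -- lognormal dispersion (standard deviation of ln(intensity))
    """
    if intensity <= 0:
        return 0.0
    z = math.log(intensity / median) / beta
    # Standard normal CDF of z, written with math.erf
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical curve for one damage grade: median intensity 8, dispersion 0.4
print(round(fragility(8.0, 8.0, 0.4), 2))   # → 0.5 (by definition, at the median)
```

One such curve would be fitted per damage grade and building type, the full set forming the family of fragility curves of Figure 7b.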

Vulnerability curves are relationships between the mean amount of damage for a given type of building and the value of the event intensity (see Figure 7c). Vulnerability curves may be deduced from fragility curves with Eq. 1:

$$
\mu\_{\rm D} = \sum P\_k \cdot D\_k \tag{1}
$$

where *µ*D is the mean damage for a given intensity, *P*k is the probability of damage grade *D*k, and *k* is the index of the damage category (from 0 to 5 in the EMS damage scale, for instance).

The example shown in Figure 7 concerns a massive stone masonry building (type M4), according to the EMS-98. Figure 7a shows the damage distribution for this type of building during an earthquake of intensity 11. This distribution can be transferred to Figure 7b, where each dot corresponds to a point on one of the fragility curves of this building type. By calculating the mean of the damages (Eq. 1), one point of the vulnerability curve can then be plotted, as shown in Figure 7c. Fragility and vulnerability curves may then be modelled by fitted mathematical functions.

In practical terms, when developed and validated, fragility and vulnerability curves are both efficient and accurate. Vulnerability curves are used to obtain a synthetic result of the mean damage to buildings in a selected territory. When applied to a single building, fragility curves may be used to assess the probability of reaching a particular damage level. When applied to a set of buildings, fragility curves may be used to assess the damage distribution of all buildings.
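Eq. 1 turns a damage distribution into one vulnerability-curve point; when starting from fragility curves, the exceedance probabilities must first be differenced into a distribution. A minimal Python sketch, with exceedance values invented for illustration:

```python
def mean_damage(exceedance):
    """Mean damage grade (Eq. 1) from exceedance probabilities.

    exceedance[k-1] = P(damage >= Dk) for k = 1..K, read off the
    fragility curves at one intensity; P(damage >= D0) is 1 by definition.
    """
    ex = [1.0] + list(exceedance) + [0.0]
    # P(damage == Dk) is the difference of consecutive exceedance values
    probs = [ex[k] - ex[k + 1] for k in range(len(ex) - 1)]
    # Eq. 1: weighted sum of the damage grades
    return sum(k * p for k, p in enumerate(probs))

# Hypothetical exceedance values for grades D1..D5 at one intensity
print(round(mean_damage([0.9, 0.6, 0.3, 0.1, 0.02]), 2))  # → 1.92
```

Repeating this for every intensity value traces the full vulnerability curve of Figure 7c.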


Fig. 7. Damage distribution (a), fragility curves (b), and vulnerability curves (c) for the M4 building type, according to EMS-98 for the assessment of earthquake building damage.

#### **4.2.2 Methodology to define vulnerability and fragility curves**

The methodology adopted to develop vulnerability and fragility curves where data are not sufficient in number (no previous event or no statistical data available) is based on the damage assessment of a set of theoretical buildings whose characteristics are consistent with a particular building type but are also variable to take into account the variability in the type and uncertainties (Saeidi *et al.* 2009, 2011). The method is based on five steps (see Figure 8).

The first step consists of making preliminary choices regarding a damage scale, an intensity criterion of the considered hazard and a method for the building damage evaluation.

The second step consists of defining a building typology and choosing the representative characteristics of each type. Each characteristic may be deterministic or probabilistic (defined as a range of possible values with an attached probability distribution) to take into account the variability within a given building type and the uncertainty about its evaluation.

The third and fourth steps consist of a Monte Carlo simulation. For each type, the third step consists of simulating a database of 1000 virtual buildings whose characteristics (e.g., height, length, materials, and mechanical properties) are consistent with the studied building type. The fourth step consists of evaluating the damage of the 1000 simulated buildings for one value of the intensity criterion and counting the number of buildings in each damage class. The results may then be used to plot a set of points for both the fragility curve (probability of reaching or exceeding a given damage class) and the vulnerability curve (mean damage). Finally, by repeating this step for all of the values of the intensity criterion, both the vulnerability and fragility curves can be drawn.

The fifth step consists of fitting a mathematical model to the results to express the fragility and vulnerability as mathematical functions. A tangent hyperbolic function can be used for the vulnerability functions and a lognormal function for the fragility functions.
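Steps 3 and 4 above can be sketched as follows in Python. The single simulated characteristic (a length drawn uniformly between 10 and 20 m, as for the MR2 type) matches the NCB-style illustration used later, but the damage rule below is a hypothetical stand-in, not an actual assessment method.

```python
import random

random.seed(0)

# Step 3 -- simulate a database of virtual buildings of one type; a real
# application would draw every characteristic used by the chosen method
N = 1000
buildings = [{"length": random.uniform(10.0, 20.0)} for _ in range(N)]

def damage_grade(length, strain):
    """Hypothetical stand-in for an empirical damage table:
    grade 0..4 grows with building length and ground strain (mm/m)."""
    return min(4, int(length * strain / 20.0))

# Step 4 -- damage distribution for one value of the intensity criterion
strain = 3.0                      # horizontal ground strain, mm/m
counts = [0] * 5
for b in buildings:
    counts[damage_grade(b["length"], strain)] += 1

probs = [c / N for c in counts]                     # probability of each damage class
mu_d = sum(k * p for k, p in enumerate(probs))      # mean damage: one vulnerability point
fragility_point = sum(probs[2:])                    # P(damage >= D2): one fragility point
print(probs, round(mu_d, 2), round(fragility_point, 2))
```

Sweeping `strain` over its range and repeating the counting yields the point clouds from which the fragility and vulnerability curves are fitted in step 5.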


Fig. 8. Methodology for the determination of vulnerability and fragility curves in the subsidence zone.

Next, we illustrate this approach on the Lorraine case.

#### **4.2.3 Application**

The Lorraine region features large quantities of iron, salt, and coal deposits that were heavily extracted until the beginning of the 1990s (salt continues to be mined here). The presence of former iron mines raises many issues, including that of building vulnerability. Five subsidence events occurred between 1996 and 1999 (two in the city of Auboué in 1996, two in Moutiers in 1997, and the last in Roncourt in 1999), which caused damage to more than 500 dwellings (see Figure 9). Many other cities and villages in this area may still be affected by this phenomenon. The described methodology is applied to develop vulnerability curves within the context of mining subsidence hazard.

Mining subsidence is a consequence of the collapse of underground mines and quarries. It produces significant horizontal and vertical movements at the ground surface. The maximum value "Sm" of the vertical subsidence is usually considered as a characteristic of

Uncertainties and Risk Analysis Related to

of buildings (1000 in this example).

ground strain and is calculated with Eq. 3.

Geohazards: From Practical Applications to Research Trends 179

The third step consists of the simulation of a database with 1000 theoretical buildings. To complete this database, the variability of each criteria used in the different methods of damage assessment is considered to be in agreement with the building type. A uniform distribution is used to define the final value of each building. This variability within a building type may be interpreted both as a real physical and observed difference between

For example, when the NCB method is considered, the 1000 theoretical buildings only differ in length, which is randomly chosen between 10 and 20 m. Therefore, the development of vulnerability functions requires a proper definition of the variability of each parameter used by the considered method. Preliminary tests, with a number of buildings between 200 and

> ( ) ( ) *<sup>i</sup> i N D P D*

*where N*(*D*i) is the number of buildings in the damage class "*D*i" and "*n*" is the total number

The vulnerability curve is the relationship between the mean damage and the horizontal

4

1 *<sup>D</sup>*() ( )*i i i*

where D() is the mean of damages for the value "" of horizontal ground strain and *P*(*D*i)

The plot of mean damages is given in Figure 10 and shows a discontinuous curve that is the consequence of using threshold values in all of the empirical methods. This result is hardly compatible with reality because damage should continuously increase with increasing horizontal ground strain. This assumption is also corroborated by the shape of all vulnerability functions developed in other fields, where a tangent hyperbolic function is often used (Lagomarsino *et al.*, 2006). To determine a continuous building vulnerability curve in agreement with the discontinuous curve previously plotted in Figure 10, a tangent

*<sup>D</sup>*( ) [ ( )] *a b Tanh c d*

where D() is the mean of damages for a value "" of the horizontal ground strain and *a, b,* 

These parameters are not independent; two relationships exist between them. According to Table 3, for a horizontal ground strain equal to zero, there is no damage to buildings, and for a horizontal ground strain greater than 9 mm/m, the mean damage to buildings is maximum and equal to four (greatest level in the damage scale). Therefore, this leads to the two boundary conditions detailed in Eq. 5, and only two parameters must still be determined. We used a nonlinear regression method to find the best values of these two parameters. The final

 *PD D* 

*<sup>n</sup>* (2)

(3)

(4)

the buildings and as uncertainties concerning their real characteristics.

2000, showed that 1000 buildings provided acceptably accurate results.

 

is the probability of damage in the class "*D*i", as calculated by Eq. 2.

hyperbolic function may be fitted on data according to Eq. 4

 

*c,* and *d* are four coefficients that must be determined for each building type.

continuous vulnerability curve for the "CF1" building type is shown in Figure 8.

the trough, but the horizontal ground strain "" is mostly correlated to the associated structural damage. Consequently, the horizontal ground strain that involves from an horizontal compression near the centre of the subsidence to an horizontal extension near the edges is used to assess the hazard intensity and the associated potential damage. The assessment of is possible with different methods (Whittaker and Reddish 1989).

Fig. 9. Description of the main characteristics involved in mining subsidence and associated consequences (Saeidi *et al.* 2009). a) Typical profiles of the ground displacements and locations of the compression/sagging and traction/hogging areas. b) Typical values of the subsidence dimension and grounds movements. c) Typical damage due to mining subsidence in the city of Auboué, France.

The methodology for the development of vulnerability curves requires the use of a damage assessment method. Many methods have been developed within the context of mining subsidence hazards. Some methods are empirical (NCB, 1975; Wagner and Schumann, 1991; Yu *et al.,* 1988; Bhattacharya and Singh, 1984; Dzegniuk and Hejmanowski, 2000; Kwiatek, 1998) and based on retro-analysis, while others are analytical and based on mechanical models (Boscardin and Cording, 1989; Boone, 1996; Burland, 1997; Boone, 2001; Finno *et al.* 2005). Most of these methods use a four- or five-level damage scale, and they consider the horizontal ground strain for the hazard intensity.

In the following, the methodology is illustrated with the simplest but most popular method, that of the National Coal Board (NCB 1975), which links building damage (five levels) to the building length and the horizontal ground strain. Other methods can also be used (Saeidi *et al.*, 2009, 2011), and the results will be compared and discussed.

The second step requires defining a building typology. Most of the buildings in the Lorraine region are small masonry buildings with or without reinforcements and can be classified into five types (four masonry building types MR1 to 3 and MC1 and one concrete frame building CF1). For example, the MR2 building type consists of rubble-stone masonry 10 to 20 metres long and 5 to 8 metres high, with poor-quality mortar, no protection against mining subsidence effects, a cellar and a concrete slab, and a simple external shape with good symmetry of the bearing walls.


The third step consists of the simulation of a database of 1000 theoretical buildings. To complete this database, the variability of each criterion used in the different damage assessment methods is defined in agreement with the building type. A uniform distribution is used to draw the final value of each parameter for each building. This variability within a building type may be interpreted both as a real, observed physical difference between the buildings and as uncertainty concerning their real characteristics.

For example, when the NCB method is considered, the 1000 theoretical buildings only differ in length, which is randomly chosen between 10 and 20 m. Therefore, the development of vulnerability functions requires a proper definition of the variability of each parameter used by the considered method. Preliminary tests, with a number of buildings between 200 and 2000, showed that 1000 buildings provided acceptably accurate results.
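As an illustration, the simulation step can be sketched as below. This is a minimal sketch: the uniform length distribution follows the MR2 description above, but the damage thresholds are illustrative placeholders standing in for the NCB damage table, not the published values.

```python
import random

# Monte Carlo step: 1000 theoretical buildings whose only varying
# parameter (for the NCB method) is the length, drawn uniformly
# between 10 and 20 m as in the MR2 building type description.
N_BUILDINGS = 1000
random.seed(0)  # for reproducibility of the simulated database
lengths = [random.uniform(10.0, 20.0) for _ in range(N_BUILDINGS)]

def ncb_damage(strain_mm_per_m, length_m):
    """Damage class 0..4 from the change of building length.
    The threshold values (in metres) are assumed for illustration,
    not taken from the published NCB table."""
    delta_l = strain_mm_per_m * 1e-3 * length_m  # change of length (m)
    thresholds = [0.03, 0.06, 0.12, 0.18]        # assumed class limits
    return sum(delta_l > t for t in thresholds)  # 0 (none) .. 4 (very severe)
```

With any other damage assessment method, the same scheme applies: each criterion of the method is drawn from its assumed distribution for the building type.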

$$P(D_i) = \frac{N(D_i)}{n} \tag{2}$$

*where N*(*D*i) is the number of buildings in the damage class "*D*i" and "*n*" is the total number of buildings (1000 in this example).

The vulnerability curve is the relationship between the mean damage and the horizontal ground strain and is calculated with Eq. 3.

$$\mu_D(\varepsilon) = \sum_{i=1}^4 P(D_i) \cdot D_i \tag{3}$$

where μD(ε) is the mean damage for the value "ε" of horizontal ground strain and *P*(*D*i) is the probability of damage in the class "*D*i", as calculated by Eq. 2.
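Eqs. 2 and 3 can be sketched together as follows. Here `damage_fn` stands for any of the damage assessment methods; summing over classes 0 to 4 is equivalent to Eq. 3 since class 0 contributes nothing to the mean.

```python
from collections import Counter

def vulnerability_point(strain, lengths, damage_fn):
    """P(Di) per Eq. 2 and mean damage per Eq. 3 for one strain value.
    `damage_fn(strain, length)` must return a damage class in 0..4."""
    counts = Counter(damage_fn(strain, length) for length in lengths)
    n = len(lengths)
    p = {d: counts.get(d, 0) / n for d in range(5)}  # Eq. 2: N(Di)/n
    mu = sum(p[d] * d for d in range(5))             # Eq. 3: sum of P(Di)*Di
    return p, mu
```

Evaluating this point for strains between 0 and 10 mm/m then yields the discontinuous curve discussed below.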

The plot of mean damages is given in Figure 10 and shows a discontinuous curve that is the consequence of using threshold values in all of the empirical methods. This result is hardly compatible with reality because damage should increase continuously with increasing horizontal ground strain. This assumption is also corroborated by the shape of the vulnerability functions developed in other fields, where a hyperbolic tangent function is often used (Lagomarsino *et al.*, 2006). To determine a continuous building vulnerability curve in agreement with the discontinuous curve previously plotted in Figure 10, a hyperbolic tangent function can be fitted to the data according to Eq. 4.

$$\mu_D(\varepsilon) = a[b + \operatorname{Tanh}(c \cdot \varepsilon + d)] \tag{4}$$

where μD(ε) is the mean damage for a value "ε" of the horizontal ground strain and *a, b, c,* and *d* are four coefficients that must be determined for each building type.

These parameters are not independent; two relationships exist between them. According to Table 3, for a horizontal ground strain equal to zero, there is no damage to buildings, and for a horizontal ground strain greater than 9 mm/m, the mean damage to buildings is maximum and equal to four (greatest level in the damage scale). Therefore, this leads to the two boundary conditions detailed in Eq. 5, and only two parameters must still be determined. We used a nonlinear regression method to find the best values of these two parameters. The final continuous vulnerability curve for the "CF1" building type is shown in Figure 10.
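The constrained fit can be sketched as follows, assuming the boundary conditions μD(0) = 0 and μD(9) = 4 are substituted into Eq. 4 so that only *c* and *d* remain free; a coarse grid search stands in here for the nonlinear regression actually used in the chapter.

```python
import math

def mu_d(strain, c, d):
    """Eq. 4 with the two boundary conditions built in:
    b = -Tanh(d) and a = 4 / (Tanh(9c + d) - Tanh(d)),
    so that mu_d(0) = 0 and mu_d(9) = 4 by construction."""
    b = -math.tanh(d)
    a = 4.0 / (math.tanh(9.0 * c + d) - math.tanh(d))
    return a * (b + math.tanh(c * strain + d))

def fit_cd(points):
    """Least-squares grid search for (c, d) over (strain, mean damage)
    points; a simple stand-in for the nonlinear regression method."""
    best = None
    for c in (0.05 * k for k in range(1, 40)):        # c > 0: damage increases
        for d in (-3.0 + 0.1 * k for k in range(61)):
            sse = sum((mu_d(s, c, d) - m) ** 2 for s, m in points)
            if best is None or sse < best[0]:
                best = (sse, c, d)
    return best[1], best[2]
```

By construction the fitted curve passes through zero damage at ε = 0 and through the maximum damage level of 4 at ε = 9 mm/m, whatever the optimiser returns.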


Fig. 10. Vulnerability function and curve for the CF1 building type, built from Table 7.

The next step in determining vulnerability and fragility curves is the damage assessment for all theoretical buildings for different values of the horizontal ground strain between 0 and 10 mm/m.

The damage level of each building is calculated for the different values of horizontal ground strain, and the probability of damage in each damage class "P(Di)" is calculated with Eq. 2.

The influence of the damage assessment method is investigated in Figure 11; comparing methods in this way is a good means of assessing the model uncertainty when no arguments are available to privilege one method. The vulnerability curves obtained for the MR2 building type (Table 5) with the different methods show significant differences. In particular, the NCB method gives less damage than the other methods; consequently, this method is considered less conservative. A mean vulnerability curve can be calculated. Unless the user has scientific arguments for justifying one method by considering the special features of the studied case, it may be concluded that this mean curve gives the most probable damage assessment.
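The mean curve can be sketched as a simple average of the curves produced by the individual methods (a sketch assuming equal weights, since no method can be privileged):

```python
def mean_vulnerability(method_curves, strain):
    """Average the mean damage predicted by several damage assessment
    methods at a given strain: a simple way to handle model uncertainty
    when no single method can be privileged over the others."""
    predictions = [curve(strain) for curve in method_curves]
    return sum(predictions) / len(predictions)
```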

The results are then compared with empirical data of damaged buildings in Lorraine (see Figure 11). This comparison shows that there is a good agreement between the observations and the calculated vulnerability curves. Nevertheless, some differences remain for the lowest and greatest values of the horizontal ground strain. The vulnerability functions underestimate the damage for the lowest values of the horizontal ground strain (less than 3 mm/m), and they overestimate the damage for the greatest values (greater than 9 mm/m). One possible explanation for these differences is the existence of pre-existing building damage due to building aging and other building pathologies (e.g., settlement during construction). For the greatest values of the horizontal ground strain, the overestimation of the damage shows that some buildings are always stronger than their predicted resistance.

$$\mu_D(0) = 0 \;\Rightarrow\; b = -\operatorname{Tanh}(d)\,, \qquad \mu_D(9) = 4 \;\Rightarrow\; a = \frac{4}{\operatorname{Tanh}(9c + d) - \operatorname{Tanh}(d)} \tag{5}$$

Fig. 11. Vulnerability curves for MR2 type by all methods (left) and comparison with observed damaged buildings in Lorraine (right).

In conclusion, vulnerability functions are a powerful and operational way to assess building vulnerability over a large territory. While statistical analysis of previous damage due to a specific hazard is the easiest way to build and validate such functions, other methods must be developed when such data are missing. This example presents an innovative way to develop such functions. Based on Monte Carlo simulations, the method takes into account both variability and uncertainties. However, some steps of the methodology can be questioned, in particular the simulation of the 1000 theoretical buildings. First, the choice of the statistical distribution for each parameter is not self-evident; secondly, the method does not consider correlations between parameters (long buildings may also be the tallest, for instance). This should be addressed in future work.

#### **5. Closure and future work**

Risk analysis is a complex field because of the numerous actors and factors that interact in assessing risk and in defining solutions for risk management. Public authorities, scientists and citizens may have different objectives, and they seldom use the same definitions. Consequently, a clear definition of the terms used in risk analysis is a preliminary step before analysing other sources of uncertainties.

Uncertainties may then be classified into various types that highlight different possible features, such as whether or not they can be quantified with statistical or probabilistic distributions, fuzzy logic or any other method. Finally, many sources of uncertainties may be identified, for which different solutions can be suggested.

Two examples have been investigated. The first deals with a decision-aid method that is an interesting approach to managing expertise uncertainties. Based on multi-criteria analysis, originally developed in and for economics, this decision-aid method has been adapted to the risk management of areas vulnerable to mining subsidence hazards. The second example deals with vulnerability assessment and, more specifically, with building damage assessment. The use and development of vulnerability functions are an interesting way to manage both model and data uncertainties.

In sum, these two examples illustrate how uncertainties can be taken into account for risk management related to geo-hazards. However, it is obvious that uncertainties may complicate the risk assessment that must be understood and discussed amongst public authorities,

scientists and citizens. A difficulty, then, is the ability of scientists to both manage uncertainties, with more or less complex methods, and communicate the results to allow for good understanding and use by public authorities and citizens without oversimplifications.

#### **6. References**

Ang A.H-S. & Tang W.H. (1984). Probability Concepts in Engineering Planning and Design, Vol. 2 – Decision, Risk and Reliability. John Wiley & Sons, New York.

Aven T. (2010). On how to define, understand and describe risk. Reliability Engineering & System Safety, Vol. 95, No. 6, 623-631.

Baecher G.B. & Christian J.T. (2000). Natural Variation, Limited Knowledge, and the Nature of Uncertainty in Risk Analysis. Presentation to the 25th Anniversary Engineering Foundation Conference on Risk and Uncertainty in Water Resources Engineering, Santa Barbara, October 2000.

Baecher G.B. & Christian J.T. (2002). The concept of uncertainty in geotechnical reliability. Not published.

Baecher G.B. & Christian J.T. (2003). Reliability and Statistics in Geotechnical Engineering. John Wiley & Sons, Ltd.

Balandier P. (2004). Urbanisme et Aménagements – Objectifs et problématiques. Collection Conception Parasismique.

Benjamin J.R. & Cornell C.A. (1970). Probability, Statistics and Decision for Civil Engineers. McGraw-Hill, New York.

Bhattacharya S. & Singh M.M. (1984). Proposed criteria for subsidence damage to buildings. Rock mechanics in productivity and protection, 25th Symposium on Rock Mechanics, 747-755.

Bogardi J.J. (2004). Hazards, risks and vulnerabilities in a changing environment: the unexpected onslaught on human security. Global Environmental Change, 14, 361-365.

Boone, 2001.

Boscardin M.D. & Cording E.J. (1989). Building response to excavation-induced settlement. Journal of Geotechnical Engineering, 115, 1-21.

Burland J.B. (1997). Assessment of risk of damage to buildings due to tunnelling and excavation. Earthquake Geotechnical Engineering, Ishihara (ed.), Balkema, 1189-1201.

Callon M., Lascoumes P. & Barthe Y. (2001). Agir dans un monde incertain – Essai sur la démocratie technique. Seuil, Paris.

Cauvin M., Salmon R. & Verdel T. (2008). Dealing with uncertainties in the context of post-mining hazard evaluation. Post-Mining 2008, Nancy, February 6-8.

Chacko G.K. (1990). Decision-Making under Uncertainty: An Applied Statistics Approach. Praeger Publishers.

Cox L.A. (2009). What's Wrong with Hazard-Ranking Systems? An Expository Note. Risk Analysis, Vol. 29, No. 7, 940-948.

Deck O., Verdel T. & Salmon R. (2009). Vulnerability assessment for mining subsidence hazard. Risk Analysis, Vol. 29, No. 10, 1380-1394.

Dzegniuk B., Hejmanowski R. & Sroka A. (1997). Evaluation of the damage hazard to building objects on the mining areas considering the deformation course in time. Proceedings of the Xth International Congress of the International Society for Mine Surveying, 2-6 November 1997, Fremantle, Western Australia.

Ezell B.C. (2007). Infrastructure vulnerability assessment model (I-VAM). Risk Analysis, Vol. 27, No. 3.

Finno R.J., Voss F.T., Rossow E. & Tanner B. (2005). Evaluating Damage Potential in Buildings Affected by Excavations. Journal of Geotechnical and Geoenvironmental Engineering, 131, No. 10, 1199-1210.

Galesne A. (1996). Choix des investissements dans l'entreprise. Cerefia, Rennes.

Gilbert R.B. & Tang W.H. (1995). Model uncertainty in geotechnical reliability and decision analyses. Proceedings of Applications of Statistics and Probability, Lemaire, Favre & Mébarki (eds), Balkema, 109-114.

Griot C. & Ayral P.A. (2001). Terminologie en science du risque, d'après le document de travail du colloque international "Dire le risque : le risque mis en examen", Mèze, 18-20 mai 2001.

Grunthal G. (1998). European Macroseismic Scale. Centre Européen de Géodynamique et de Séismologie, Luxembourg, Vol. 15.

Hacking I. (1975). L'émergence de la probabilité (traduction de l'anglais, Seuil, 2002).

Haimes Y. (2004). Risk Modeling, Assessment, and Management, 2nd revised ed. John Wiley & Sons Inc.

Haimes Y. (2006). On the definition of vulnerabilities in measuring risks to infrastructures. Risk Analysis, Vol. 26, No. 2.

HAZUS (1999). Multi-hazard Loss Estimation Methodology Earthquake Model, Technical and User Manuals. Federal Emergency Management Agency, Washington, DC, chapter 2.

HAZUS®MH MR4. Multi-hazard Loss Estimation Methodology – Flood Model – Technical Manual.

Hofer E. (1996). When to separate uncertainties and when not to separate. Reliability Engineering and System Safety, 54, 113-118.

Hoskins C.G. (1973). Distinctions between Risk and Uncertainty. Journal of Business Finance, Spring.

Jonkman S.N., Bockarjova M., Kok M. & Bernardini P. (2008). Integrated hydrodynamic and economic modelling of flood damage in the Netherlands. Ecological Economics, 66, 77-90.

Karimi I. & Hüllermeier E. (2007). Risk assessment system of natural hazards: A new approach based on fuzzy probability. Fuzzy Sets and Systems, 158, 987-999.

Karmakar S., Simonovic S.P., Peck A. & Black J. (2010). An Information System for Risk-Vulnerability Assessment to Flood. Journal of Geographic Information System, 2, 129-146. doi:10.4236/jgis.2010.23020.

Klein R.J.T., Nicholls R.J. & Thomalla F. (2003). Resilience to natural hazards: how useful is this concept? Environmental Hazards, 5, 35-45.

Knight F.H. (1921). Risk, Uncertainty, and Profit. University of Chicago Press.

Kwiatek J. (1998). Protection of constructions on surface ground mine (in Polish: "Ochrona objektow budowlanych na terenach gorniczyych"). GIG, Katowice.

Lagomarsino S. & Giovinazzi S. (2006). Macroseismic and mechanical models for the vulnerability and damage assessment of current buildings. Earthquake Engineering, 4, 415-443.

MATE (1997). Plans de prévention des risques naturels prévisibles – Guide général. La Documentation Française.

McGill W.L., Ayyub B.M. & Kaminskiy M. (2007). Risk analysis for critical asset protection. Risk Analysis, Vol. 27, No. 5.

McGuire R.K. (2004). Seismic Hazard and Risk Analysis. EERI Earthquake Engineering Research Institute.

Merad M.M., Verdel T., Roy B. & Kouniali S. (2004). Use of multi-criteria decision-aids for risk zoning and management of large area subjected to mining-induced hazards. Tunnelling and Underground Space Technology, 19, 125-138.

National Coal Board (1975). Subsidence Engineering Handbook, chapter 6, 45-56.

Neil Adger W.N. (2006). Vulnerability. Global Environmental Change, 16, 268-281.

**9**

**A Monte Carlo Simulation and Fuzzy Delphi-Based Approach to Valuing Real Options in Engineering Fields**

Roberta Pellegrino and Nicola Costantino

*Politecnico di Bari – Dipartimento di Ingegneria Meccanica e Gestionale, Italy*

#### **1. Introduction**

The success of a firm depends on its ability to manage uncertainty of the investment projects and strategies it decides to make or develop. During the management of new projects, routines and technologies, its constant objective should be to earn increasing returns by exploiting opportunities and limiting losses that could be created by uncertainty. Thus, firms must carefully recognize and evaluate the actions to be taken to respond to uncertainty. To help managers in their decision-making process in uncertain environments, new techniques and theories have been developed. One of them is the real option theory, where a real option is the right, not the obligation, to take some action in the future (Dixit and Pindyck, 1995). The formal approach, which originated from financial models, deals with future uncertainty and the opportunities a firm can seize, and aims at valuing the flexibility that managers often have to "react" to uncertainty. In this sense, the potential of real options to estimate the value of this flexibility is appealing for managers. As Leslie and Michaels (1997) report, in fact, over the past years the theory has drawn a growing body of literature and has gathered support across the business world in academia, consulting, and the corporation. Copeland and Weiner (1990) of McKinsey observe that the "use of options methodology gives managers a better handle on uncertainty".

Despite the growing support the real option theory has been attracting in academia and its apparent relevance in business decisions, few corporate managers and practitioners have truly recognized or applied the power of real options in managing their businesses (Leslie and Michaels, 1997; Lander and Pinches, 1998). In other words, the application of real options to managerial practice is poor, and is often limited to a conceptual level. Several reasons could explain why real options are not widely used in practice, as several studies have analyzed (Lander and Pinches, 1998; Borison, 2003; and others). However, all these reasons could be traced back to a fundamental issue, that is, the "financial" origin of real option theory and its evaluation models.

From a practical standpoint, real option modeling learned from the financial world is not easy to use or implement as real world cases are often reasonably sophisticated and complex. The inputs required for the application of these models (with financial features)

