


### **Mexico City after September 2017: Are We Building the Right City?**

DOI: 10.5772/intechopen.72499


Milton Montejano-Castillo and Mildred Moreno-Villanueva

Additional information is available at the end of the chapter


#### **Abstract**

Due to its destructive effects, a disaster always raises questions about its causes. In the case of the earthquake that struck Mexico City on September 19, 2017, one of the most surprising situations was that buildings damaged or collapsed by the earthquake had been recently constructed: some had been built between 9 months and 12 years earlier, and others were not yet inhabited. On the other hand, as in 1985, public spaces have played a key role both in the emergency phase and in the reconstruction phase. However, the new public spaces that accompany the most recent housing projects have lost much of their quality. What factors have influenced these urban processes? Which stakeholders produce both the new urban forms and the new public spaces? Are there ways to measure the quality of these new public spaces? We depart from the hypothesis that the recomposition of territories of opportunity in Mexico City has been based on the adoption of trends driven by the economy rather than by the needs of the population, resulting in public spaces that are exclusionary and uninhabitable in case of disaster.

**Keywords:** Mexico City, earthquake, public space, urban form, disaster risk

### **1. Introduction**

On September 19, 2017, a 7.1-magnitude earthquake shook Mexico City. An earthquake was known to be inevitable given the characteristics of the territory, but neither its timing nor the size of the disaster could be known in advance. Thanks to evacuation protocols, and to the coincidence of the date with the commemoration, 32 years earlier, of one of the most devastating earthquakes in the city's history (September 19, 1985), citizens mobilized quickly to respond to the emergency.

© 2018 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


The open city was immediately occupied: streets, squares, gardens and street lane dividers served first as safe places to safeguard life during the earthquake, and then became collection centers for rescuers' tools, food and medicine; healthcare centers; points of psychological assistance and of information on missing persons; collection centers for pet food; veterinary care posts for rescue dogs and found pets; and digital attention points. They also became a life opportunity as temporary shelters.

On the other side, in the post-disaster phase, one of the most surprising situations was that buildings damaged or collapsed by the earthquake had been recently constructed: some had been built between 9 months and 12 years earlier, and others were not yet inhabited. These housing buildings are the result of the so-called real estate boom, which has been changing, on the one hand, the verticality of the city and, on the other, the occupation of the territory, which has been monopolized without leaving sufficient reserves of open space; it thus seems that the conditions of risk have themselves been built.

For this reason, some questions arise: How are these new urban forms being created in Mexico City? That is, are the policies for the growth of the city not being respected, or are they the very policies that allow this verticality with little public space? Are we creating safe and habitable cities, or are we exchanging safety for built space at high cost? In this sense, is our public space inclusive or exclusive, and is it useful to citizens in case of disaster? Are 32 years enough to dilute the memory of the disaster and its preventive lessons: density and the role of public space in the event of disaster?

The objective of this chapter is to analyze the condition of public space in Mexico City in quantitative and qualitative terms, understanding public space as parks, public squares and promenades. To this end, some public spaces are compared, evaluating the instruments, actions and public interventions for their creation and improvement.

To undertake this analysis, we take as a case study the neighborhoods Granada and Ampliación Granada, both located in the Municipality of Miguel Hidalgo in Mexico City, which since the end of the first decade of this century have undergone a reconfiguration from industrial to residential land use. A contrasting case is Polanco, an adjoining neighborhood successful since its creation in the early twentieth century, which supports the new real estate image of the aforementioned neighborhoods.

In the recently reconfigured areas, there was an opportunity to create habitable public spaces and, especially, to enhance public space as an eventual resource in the emergency and reconstruction phases of the city, given its seismic nature and propensity to flood, but this was not done.

We depart from the hypothesis that the recomposition of territories of opportunity in Mexico City has been based on the adoption of trends driven by the economy rather than by the needs of the population, resulting in public spaces that are exclusionary and uninhabitable in case of disaster.

### **2. Urban form and seismic risk in Mexico City**

In 1985, an 8.1-magnitude earthquake shook Mexico City, leaving an official toll of more than 3000 fatalities and hundreds of collapsed buildings. As many factors came into play, we limit ourselves here to underlining the relationship between the natural hazard (seismic waves) and the physical vulnerability reflected in one of the city's most characteristic morphological features, its verticality, and in the existence or absence of public spaces as a support resource in the emergency and recovery phases.


According to Meli [1], statistical analysis of the damage after 1985 revealed that building collapses were not random. Regardless of age, materials and structure, certain types of buildings collapsed disproportionately, having in common their number of floors (see **Table 1**): buildings of 7 to 12 floors collapsed more often than low-rise buildings. This finding made sense once the natural conditions of the soil were examined.

The area with the largest number of collapsed buildings was the area of the former lake. The lacustrine origin of Mexico City (it was founded on a lake that was later artificially drained) gave the soil particular characteristics, resulting in three soil zones: (a) the lake zone (where the lake was formerly located); (b) a transition zone (partly hard and partly soft ground) and (c) a zone of hills (with high bearing capacity) (see **Figure 1**).

The seismic waves that affected Mexico City in 1985 were generated off the coast of the State of Michoacán and traveled 400 km; upon arriving in Mexico City and entering the clay soil area, their oscillation was amplified. Studies carried out on the building collapses after the 1985 earthquake revealed that their causes lay not so much in the age of the construction or the type of structure as in the height itself, due to a natural phenomenon known as "resonance" [1, 2], in which the seismic movement is amplified by the coincidence of the vibration frequency of the ground with that of the building. When the oscillation periods of ground and buildings matched, the waves were amplified (reinforced), producing inertial forces that ended up collapsing buildings of certain heights.
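
The resonance effect described above can be summarized with the steady-state dynamic amplification factor of a damped oscillator, a standard structural dynamics result (the notation below is ours, not taken from the studies cited):

```latex
% Dynamic amplification of a single-degree-of-freedom structure under
% harmonic ground motion. Symbols (our notation):
%   r    : ratio of the building's natural period to the soil's dominant period
%   zeta : damping ratio of the structure
\[
  D(r,\zeta) \;=\; \frac{1}{\sqrt{\left(1 - r^{2}\right)^{2} + \left(2\,\zeta\,r\right)^{2}}},
  \qquad
  r \;=\; \frac{T_{\text{building}}}{T_{\text{soil}}}.
\]
% When the periods match (r = 1), the factor peaks at D = 1/(2*zeta):
% with a typical damping ratio of zeta = 0.05, motion is amplified about tenfold.
```

This makes explicit why buildings whose fundamental period happened to coincide with the dominant period of the soft lake-bed soil were the ones that failed.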

As a consequence of this phenomenon, the construction regulations of Mexico City were modified to ensure that buildings are designed taking into account the oscillation periods corresponding to each type of soil. When deciding the number of levels of a building (whose oscillation period is roughly constant per floor), that number of floors, and its corresponding oscillation period, should not coincide with the oscillation period of the soil in the area, in order to avoid resonance.
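
The period-matching rule just described can be sketched in a few lines of code. This is an illustration only: the rule of thumb of roughly 0.1 s of period per storey, the sample soil period and the tolerance band are assumptions for the example, not values from Mexico City's actual building code.

```python
# Illustrative sketch of the period-matching check described above.
# ASSUMPTIONS (not from the official regulations): ~0.1 s of fundamental
# period per storey, and a flat tolerance band around the soil period.

def building_period(storeys: int, period_per_storey: float = 0.1) -> float:
    """Rough estimate of a building's fundamental period, in seconds."""
    return storeys * period_per_storey

def resonance_risk(storeys: int, soil_period: float, band: float = 0.25) -> bool:
    """Flag a potential resonance condition when the building's estimated
    period falls within +/- `band` (fractional) of the soil's dominant period."""
    return abs(building_period(storeys) - soil_period) <= band * soil_period

if __name__ == "__main__":
    soil_period = 2.0  # assumed dominant period of a soft, lake-zone site (s)
    for n in (3, 8, 20, 40):
        verdict = "AVOID" if resonance_risk(n, soil_period) else "ok"
        print(f"{n:2d} storeys -> T ~ {building_period(n):.1f} s : {verdict}")
```

Under these assumed numbers, a 20-storey building (estimated period about 2 s) would be flagged on the soft site, while much shorter or much taller buildings would not; in practice, engineers use site-specific spectra rather than a simple band like this.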


**Table 1.** Percentage of collapsed or severely damaged buildings according to number of storeys in Mexico City after the earthquake of 1985.

**Figure 1.** Seismic zoning of Mexico City published after the September 2017 earthquake showing the damaged zones in 1985 and 2017. Source: Own elaboration based on the official map of seismic zoning (http://www.atlas.cdmx.gob.mx/zonificacion\_sismica.html).

On the other hand, the structure of a building should be sufficiently "flexible" and "ductile" to dissipate seismic energy (thereby making the building less vulnerable). If high-rise buildings were to be built on soft ground, the engineering design should ensure that the energy dissipates, with the help of seismic dampers for example, before the higher floors begin to oscillate, something achieved in the buildings that could afford this technology.

The magnitude-7.1 earthquake of September 19, 2017, which caused 228 fatalities and the collapse of 38 buildings<sup>1</sup>, brought additional elements and hypotheses about how the damage occurred. One point of discussion and analysis was that the most affected area was not that of the former lake, as it had been in 1985. This time, although the earthquake was of lesser magnitude, the epicenter was located closer to Mexico City (120 km away), so the waves were amplified not in the lake zone but in the transition zone, causing the collapse of buildings of four to seven levels and revealing "a complex pattern of movement, very variable in space" [3]. To this should be added questions about the correct application of land use zoning and about dubious authorizations for the construction of residential buildings, since many of the collapsed and damaged buildings were just beginning their useful life (see **Table 2**).

On the other hand, public space again played a fundamental role, both in the emergency phase and in the reconstruction stage. It is no coincidence that the spaces used in 1985 and in 2017 correspond to projects in which public space was, from the beginning, the most important component. An example of this balance between housing and public space is the Colonia Hipódromo Condesa, built in 1926 and designed by the architect José Luis Cuevas Pietrasanta (see **Figure 2**). The land was an old racecourse, and the architect simply continued its original shape, giving it a radial structure with a large park at the center and a green belt. Despite the densification this area has undergone, public space remains an invaluable resource at the time of an emergency. In September 2017, this space was used to organize search and rescue activities, medical service, psychological care, pet care and the collection of donations (food, tools, etc.). At the same time, other damaged areas of the city and recent real estate development projects lacked these spaces, making the moment of evacuation especially difficult (see **Figure 3**). The configuration of such new projects is the combination of several factors and conditions described below.

<sup>1</sup> Not counting 24 buildings that will officially have to be demolished due to the damage they suffered [4].

**Table 2.** Residential use buildings damaged or collapsed during the earthquake of September 19, 2017 in Mexico City. Source: *Revista Obras* no. 538, Oct 2017 and Najar A. "Las razones por las que colapsaron tantos edificios en CDMX (y no todas son el sismo)", *Animal político* (internet). http://www.animalpolitico.com/2017/10/las-razones-las-colapsaron-tantos-edificios-ciudad-mexico-no-todas-terremoto/

**Figure 2.** España Park plan (top left); process of verticalization 1932–2016 (at the center) and the use of the park during the earthquake of September 2017 (bottom). Image of damages (top right). Photos by the authors. Own drawings based on aerial photographs from Fondo Aerofotográfico Acervo Histórico Fundación ICA.

**Figure 3.** Collapsed buildings in Mexico City in the September 2017 earthquake. Despite the density of the buildings, the absence of public space in the surroundings is evident. Photo: Rosa Lilia Pedraza Vázquez.

### **3. Recent transformations in Mexico City: actors and factors**

In Mexico City, the last decades of the twentieth century brought a change in public policies and a depopulation of the central areas, owing especially to the process of deindustrialization and to the 1985 earthquake. This led to a reinvention of the city for this century through standards calling for redensification, and to the opportunity to occupy spaces that became attractive to the private sector during the first decade. As a result, the city exceeded its limits, gentrifying spaces and consequently producing poorly rehabilitated residual public spaces or creating reduced ones.

#### **3.1. New policies**


The urban development policy instrument called "Bando Dos" proposed to redensify the city, with the specific objective of ordering the urban growth of Mexico City by preventing the construction of more housing on the outskirts. The instrument was presented on December 7, 2000 by the then head of government, Andrés Manuel López Obrador. It had several objectives for the ordering of Mexico City: to stop disorderly growth and to safeguard the conservation land of the then Federal District (now Mexico City), preventing the expansion of urban areas and thus avoiding the covering of aquifer recharge zones. It was determined that the districts that had suffered considerable depopulation were mainly four, Cuauhtémoc, Benito Juárez, Miguel Hidalgo and Venustiano Carranza, all located in the central area of the city; that the areas populated in a disorderly manner were predominantly in the south and east; and that the city had little infrastructure for strong real estate development [5].

Among the policies implemented were the promotion of population growth toward the districts of Benito Juárez, Cuauhtémoc, Miguel Hidalgo and Venustiano Carranza, to take advantage of underutilized infrastructure and services, and the construction of housing for the lower income classes [5]. However, in these central districts, such as Benito Juárez, the project did not work as expected. At first there was a real estate boom, but it ultimately fell short because of the high cost of housing and the poor infrastructure. In different neighborhoods, the city landscape was drastically transformed by cutting down trees and constructing big buildings: where there had been houses for six to eight people, buildings of eight to ten floors for many families appeared. In these new buildings, however, not all apartments were sold.

As part of the first consequences, in 2010 the government of the then Federal District, together with the Ministry of Social Development (SEDESOL), the National Council of State Housing Entities (CONOREVI), the National Autonomous University of Mexico (UNAM), the Housing Fund of the Institute of Security and Social Services for State Workers (FOVISSSTE), the National Workers' Housing Fund Institute (INFONAVIT) and the Federal Mortgage Society (SHF), published the Guide for residential redensification in the inner city [6], which presents a methodology to identify redensification scenarios, as well as instruments to favor them, so as to join the smart-growth approach and position Mexico internationally in this respect. The guide was intended to address a series of issues: the increased cost of displacement for the inhabitants of peripheral areas; greater fuel consumption and greater production of polluting emissions; the loss of conservation areas, aquifer recharge zones and agricultural production areas; higher urbanization costs representing a significant burden for local governments; and the social and economic segregation of urban space [6].

The roughly 10-year delay in publishing this guide, and thus in taking measures on redensification policy, meant that during that time constructions were carried out in different zones without integration with the social fabric; elite zones were created, leaving the population dissatisfied and afraid of being displaced. Countless claims derived from the implementation of Bando Dos, caused by fear of changes to the environment and of decreases in quality of life, safety and real estate value, and by feelings of dispossession and injustice, since decisions affecting the territory were made without the main interested parties being informed, taken into account or heard. There was a loss of confidence in the authorities and experts that promoted the project, above all where there is a tradition of local organization and mobilization, risk perception and a feeling of uncertainty. The technical and scientific studies that validated the project were questioned [7].

#### **3.2. Deindustrialization**

With the opening of the economy under the 1988 free trade agreement, there was a shift in the activities of the manufacturing industry that caused a process of deindustrialization. Industries moved toward the outskirts of the city or even to other territories [8]. This process is not yet finished; there are still areas of the city with disappearing industries. With this movement and the change toward a tertiary economy, the reconfiguration of the city was affected, on the one hand, by the opportunity of land within the city, seized by real estate power, and, on the other, by the change of policies that did not work as expected. In Mexico City, some of the areas that went through deindustrialization at the end of the twentieth century were the municipalities (delegaciones) of Benito Juárez, Cuauhtémoc, Miguel Hidalgo and Venustiano Carranza, as well as Azcapotzalco and Gustavo A. Madero [9]. In recent years, the mass production of housing has captured some of these areas, leading to transformations that result from an inclusion-exclusion struggle reflected in the absence of public space. An example is the case of the neighborhoods Granada and Ampliación Granada, in the Miguel Hidalgo delegation, which has been a significant place throughout its history. From an economic point of view, we could say that it has passed through three sectors: agricultural, industrial and tertiary.

In 1920, the lands of the Hacienda de los Morales were divided, playing a significant role in the urbanization of Mexico City, since part of the space was used for the colonia Polanco, assigned to upper-middle-class housing, a neighborhood project of the first half of the twentieth century based on public space. This was key as the colonia grew until it was divided into five sections, contrasting sharply with the neighborhoods Granada and Ampliación Granada, which from the beginning were occupied by industry, without public spaces. Some of the factories in the area were: the General Motors factory, in 1923; the Mexico glass factory, in that same year; the Modelo Brewery and General Popo, in 1925; the Tabiques La Universal factory, whose year of establishment is unknown; the Chrysler factory, in 1939; and, thereafter until 1961, the Palmolive factory; the Halaxtoc textile factory; Laminadora LMMSA; pharmaceutical industries; a factory in Lago Andrómaco Street; a bolt factory; factories in Lago Neuchatel; a furniture and steel factory and another cotton factory [10].

#### **3.3. Actors and programs in the production of public space**


In Mexico, there are various government bodies responsible for intervening in or producing public space, such as the Department of Urban Development and Housing (SEDUVI), the Public Space Authority (AEP) or the different municipalities (former delegaciones). However, when public space shows specific characteristics and values for which it has been cataloged as heritage, the bodies that intervene in it change, or they are bound by strict guidelines for its regeneration, such as the INAH (National Institute of Anthropology and History), the INBA (National Institute of Fine Arts), the Historic Center Authority or UNESCO, according to the case. Each of these bodies intervenes in public space from a different perspective and with various actors. The Department of Urban Development and Housing, for example, is responsible for designing policies applicable to the city, attempting to integrate society when acting and interacting with it, so as to transform the city in an inclusive manner. It creates the Delegation Programs, the Partial Programs and the Urban Development Program for the purpose of ordering the city in all its aspects (mobility, public space, housing, urban infrastructure, basic services), always with the idea of improving the city and positioning it as a safe city.

On the other hand, there is the Public Space Authority (AEP), a decentralized entity of SEDUVI. It not only designs policies to apply to urban space but also intervenes directly through the design of spaces and the contracting and subcontracting of construction and design companies. Some of its programs and projects are Ecoparq, Bajo Puentes (underbridges), Pasos Seguros (safe crossings), Publicidad Exterior (outdoor advertising), Parques de Bolsillo (pocket parks) and Parques Lineales (linear parks), among others. The AEP was created in 2008. It works on its various projects with different companies: for example, CTS Embarq on the Model Street; GABANA engineering and GCB Construcciones y Servicios on the refurbishment of Torcuato Tasso Street; Proyecsa e Ingenieros, ANACE Construcciones, Grupo Q and B and Servicios Integrados RUBE on the regeneration of the Alameda Central; and Grupo Velasco, JM Constructora, Kassar Construcciones and 128 Arquitectura y Diseño Urbano on the pocket parks (Espacios Públicos de Bolsillo), to mention a few.

With respect to the organization of the Historic Center of Mexico City, there is another decentralized entity, the Historic Center Authority, created in 2007, which proposes public policies for integration and promotes the refurbishment of the public spaces located in this area. There are, however, various actors that participate in the intervention and construction of public space. Even where the aforementioned entities are present, citizen participation is already contemplated in most cases. Participating in the modification of the urban environment implies a social commitment more than a political one, although the action goes beyond the social, the political and the economic.

Concerning the programs for public space in Mexico City, since the first decade of the twenty-first century, a series of urban projects has been implemented by the Department of Urban Development and Housing (SEDUVI) and the Public Space Authority (AEP) to create public spaces or to intervene in spaces showing deterioration and abandonment (in some cases including economic activity), addressing the demands of the inhabitants. On the one hand, there are the newly created public space projects, with a renewed design and the minimum characteristics necessary to be used and enjoyed, such as the underbridge projects, the public pocket parks or the Bonds of Friendship. On the other hand, there are the projects of improvement and refurbishment of public spaces, which include the improvement of spaces with inclusive design, the refurbishment of heritage spaces, the pedestrianization and semi-pedestrianization of streets, the Illuminate your City program, Ecoparq and the refurbishment of monuments (see **Table 3**).



#### **3.4. Verticalization and public space in the new urban territories**


**Newly created public space programs**

| Program | Description |
|---|---|
| Underbridges | This seeks to rescue abandoned or under-used public spaces, providing them with infrastructure with high technical specifications to address the basic needs of the population, including spaces for commerce |
| Public pocket parks | Design of social interaction, identity and economic activity in remaining streets or spaces between buildings |
| Mobile park | Spaces assembled in trailer parks, equipped with game tables for children, a rest area, green areas with natural vegetation and chairs called Parkes; these are placed in spaces that are generally used as parking lots |
| Bonds of friendship | Project in the development of cultural and political relationships between the two countries, through the donation of a sculpture placed in a newly created public space |

**Public space refurbishment programs**

| Program | Description |
|---|---|
| Improvement of spaces with inclusive design | Improve pedestrian accessibility and the vehicular flow of avenues that were inadequately designed for the intense pedestrian and automobile capacity |
| Refurbishment of heritage spaces | This complements the recovery of public spaces of the Historic Center and additionally promotes the use of heritage spaces by optimizing their social function and spacing in benefit of the inhabitants |
| Pedestrianization and semi-pedestrianization | Consolidate the pedestrian section of the public space of the Historic Center, promote sustainable mobility, optimize vehicular and pedestrian travel times, provide universal accessibility and optimize the heritage value of the area |
| Illuminate your City program | This unifies public lighting in primary and secondary roads to prevent the "zebra effect," a phenomenon that creates variations in the intensity of the lighting of the streets |
| Ecoparq | Recovery of public spaces through the installation of parking meters; this improves the mobility of the city |
| Refurbishment of monuments | Its purpose is to rescue sculptural monuments, integrate them harmoniously into public space and recover them for interaction |

Source: SEDUVI.

**Table 3.** Public space programs activated in the twenty-first century in Mexico City.

By the start of the twenty-first century, the neighborhoods Granada and Ampliación Granada were changing in morphology, land use and population. The main change was from industrial to residential land use, which was attractive for real estate developers, who saw a potential supported by the urban image of the bordering Polanco district. The two neighborhoods were given different informal names following the first interventions: Ampliación Polanco, Polanco Bis, Polanco II or Nuevo Polanco; however, a series of contrasts can be seen between Polanco and the more recently built neighborhoods (Granada and Ampliación Granada). The most significant difference between these neighborhoods is the type of public space. In spite of the luxurious residential buildings that broke the provisions of the Bando Dos and Norm 26, which were intended to create housing of social and popular interest on urban land and thereby redensify the zones of Mexico City with a certain lack of population, the opportunity to create housing with high-quality public spaces was lost (see **Figure 4**).

**Figure 4.** Urban transformations occurred in the twenty-first century in the territory of Ampliación Granada (expansion of the Granada neighborhood) and the Granada neighborhood in the period between 2001 and 2016. Source: own elaboration based on Google Earth images from 2001 to 2007; information from 2008 to 2016 is based on our own field survey, illustrated on Google Earth maps.

Due to the rapid and disordered growth in some areas of Mexico City, in 2013 the implementation of the norm that proposed redensification was suspended, owing to abuses of land use and changes in the type of housing that should have been built. In that same year, however, the Action through Cooperation System (SAC) was created, an instrument to manage and create policies that include public action and the intervention of the State as well as of private parties, that is, the participation of landowning companies, interacting with each other in the interest of improving the city, for which the Department of Urban Development and Housing (SEDUVI) is responsible.

One of the main characteristics of the area is that, at its pace of development, not only has housing for the elites been built, but commercial and service activity has also developed, creating large office buildings and shopping centers with foreign brand stores. It has become common in the area for small shopping centers with convenience stores, mini-supermarkets, restaurants, cafes and bars to be built on the ground floors of housing buildings. The main problem was that there were no public spaces. Far from providing a solution, the new constructions have required trees to be cut down and replaced with ornamental plants, with consequences for the environment and a deterioration in the quality of life. Thus, the place only has what are now the public spaces of the twenty-first century, such as pocket parks (three on the Cuernavaca railroad), linear parks (that of the Cuernavaca railroad) and underbridges (that of San Joaquín Avenue at the intersection with Moliere Street). In the opposite case, we find wide parks and walkways in the area of Polanco (see **Figure 5**).

**Figure 5.** Public spaces in the neighborhoods of Granada (left at the top) and Polanco (left at the bottom) and their location. Source: Own elaboration.

### **4. Bases for a context-sensitive assessment of public space in Mexico City**

Assuming that it is essential for urban studies to include different approaches and to pay attention to the processes that transform the city, three views are taken into account for the understanding and analysis of public space: 1. the habitability of public space, that is, the human condition of public space; 2. the vision of inclusion, regarding the physical and social aspects of public space; and 3. a globalized vision of the trends reflected in the space. At the end of the section, we present the main variables that could form the basis of a model to analyze the quality of public space in this city. This model is then applied to the above-mentioned case with the intention of comparing the qualities of public spaces in neighboring areas produced in different historical periods.

### **4.1. The habitability of public space**


Talking about desirable public spaces may seem subjective. Each human being thinks differently, according to their cultural characteristics, and to that extent needs may vary; even within the same country, the geographical or economic situation of each family implies different demands. One thing is certain, however: we all have the need to co-inhabit. Each species on this planet has its natural habitat: fish in the water, monkeys in the jungles and forests, lions in the savannahs. Habitat is the space where species are born, grow, reproduce and die, that is, the space on earth where they meet all their needs. Even though human beings are governed by this general rule, there are two fundamental elements that make them different from other species: the first and most important is that their habitat is not natural but artificial; the second is that, apart from the physiological needs they must satisfy, they are also creative beings [11].

Habitability is defined here as the capacity of a place to meet human needs [12]. Although several authors consider that habitability refers only to the material and structural conditions of built spaces [13–15], without taking into account the social aspect outside [16], habitability for humans lies as much within the architectural element as outside of it. Habitability goes beyond the door of the house to the street, toward the public space, where the social function, the community, comes into play, because it is there that "the expression and social identification of the others is built," based on the expression and symbolic construction of the space [17]. We leave our house behind to find a huge machinery concentrating the totality of our culture, one which also encapsulates the international movements and trends we must incorporate during our journey.

The habitable public space is one that maintains a balance between the material and immaterial elements that intervene in places of free access for all human beings, regardless of gender, religion, race or social class, in order to satisfy collective needs. Elements of habitability in the public space can be measured and weighted as appropriate, taking into account global and local transformations and the determinants of the type of settlement. But how do we know whether a public space is more or less livable? For this, we consider three theories:

According to the theory of human need of Len Doyal and Ian Gough, cited by Reyes [18], needs are socially constructed and derived from the cultural environment. The authors use indices to measure welfare across nations based on the needs for appropriate health care, security, economic safety, clean water, adequate food, shelter as a means of protection from the elements, safe working environments and relationships of recognition and belonging. The needs proposed by this theory are general and can be considered basic in different territories and for different social groups. It should be taken into account, however, that the cultural and natural environment, new technologies and even the policies for urban space make human requirements more complex and even different. This is the case of multicultural cities, and public space should take meeting the needs mentioned above as a principle.

Based on Max-Neef's theory of human needs, Reyes [18] analyzes the habitability of public space by combining criteria from existential and axiological categories. The existential categories focus on the needs of being, referring to personal or collective attributes; having, which covers the required mechanisms and laws; doing, as personal or collective actions; and interacting, in those spaces of action and construction of needs, satisfiers and economic goods. The axiological categories cover the requirements of subsistence, protection, affection, understanding, participation, creation, identity and freedom. In terms of the existential categories, this refers us to the social action that allows us to build the axiological relations that give meaning to space.

Schiller's theory, cited by Valladares et al. [19], concerns the qualities of the habitable public space, measured through variables with a specific meaning and value. The qualities a space should cover for habitability are as follows: permeability, allowing open connections in the urban fabric, measured by the size of typical urban blocks and the elements that can limit them, such as railroad tracks or other types of barriers; vitality, as the characteristic of spaces to be places of social interaction, measured through the activity there; variety, encouraging the complementary uses of the city, with a variation of typologies and uses; readability, facilitating social and spatial relations based on the use and density of those who use the city; and robustness, which allows an adequate combination and variety of uses at any time of the day, with the ability to adapt the space.

According to the theories above, the analysis of habitable public space must take into account physical elements and the design of the space, as well as social elements, from basic subsistence to the more complex ones such as identity and legal duty. Therefore, we can examine public space in two dimensions in which the different human needs can be encapsulated for the analysis of habitability: the first is the physical or material dimension, and the second is the intangible dimension, which goes from the social to the spiritual.

In the physical or material dimension, it is possible to concentrate the tangible and quantitative elements present in urban space, such as public water services, drainage and lighting; street furniture; transport infrastructure, with subway systems, bus rapid transit, light rail, suburban trains, buses, collective transport, bicycle-taxis and bicycles; recreation areas; roads, streets, avenues, circuits and highways; communications infrastructure, with public telephones and internet; and police officers, security modules and road safety. It is important to mention that the city also has infrastructure for housing, education and health, among others. Similarly, the immaterial dimension, which goes from the social to the spiritual, is the one where we find intangible elements such as urban social identity, symbolic interactionism, the perception of security, culture and social exchange.
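The two dimensions described above could, in principle, be operationalized as a simple composite index. The following Python sketch is purely illustrative: the indicator names, the 0–1 ratings and the equal-weight aggregation are our own assumptions for the sake of example, not a model proposed in this chapter.

```python
# Illustrative sketch: a habitability index combining the material and
# immaterial dimensions of public space. All indicators and weights are
# hypothetical assumptions, not part of the chapter's framework.

MATERIAL = ["water_service", "drainage", "lighting", "street_furniture",
            "transport_access", "recreation_areas", "road_safety"]
IMMATERIAL = ["social_identity", "symbolic_interaction",
              "perceived_security", "cultural_exchange"]

def dimension_score(ratings, indicators):
    """Mean of the 0-1 ratings available for the given indicators."""
    values = [ratings[i] for i in indicators if i in ratings]
    return sum(values) / len(values) if values else 0.0

def habitability_index(ratings):
    """Equal-weight average of the material and immaterial dimensions."""
    material = dimension_score(ratings, MATERIAL)
    immaterial = dimension_score(ratings, IMMATERIAL)
    return 0.5 * material + 0.5 * immaterial

# Example: a space with good infrastructure but a weak social dimension
park = {"water_service": 1.0, "drainage": 1.0, "lighting": 0.8,
        "street_furniture": 0.6, "transport_access": 0.9,
        "recreation_areas": 0.7, "road_safety": 0.5,
        "social_identity": 0.3, "symbolic_interaction": 0.2,
        "perceived_security": 0.4, "cultural_exchange": 0.3}
print(round(habitability_index(park), 2))  # → 0.54
```

The equal weighting makes explicit a design choice the qualitative discussion leaves open: whether material infrastructure can compensate for a weak social dimension. Any applied version would need locally justified indicators and weights.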

#### **4.2. The vision of inclusion**


The 'inclusive' public space is the place where activities and discussions are open to all; the place where the authorities have the responsibility to guarantee a public space in which people can express their opinions, assert their claims and use it for their own purposes [20]. However, if there is such a concern about inclusion, it means that there are elements that make cities exclusionary, so inclusion and exclusion are studied as a dual phenomenon. To this end, two aspects of study are taken into account: 1. social inclusion by exclusion and 2. physical inclusion, or inclusion by design.

*Social inclusion by exclusion.* Public space has historically been valued as a factor of social inclusion and as an inescapable instrument of urban planning. However, its loss of prominence, due to the weakening of previous forms of sociability (resulting in social inequalities and fragmentation) and the emergence of alternative forms of relationship (the communications and encounters introduced by technology, the feeling of insecurity), has sharpened the barrier between the recreational and leisure spaces used by different social groups. It should not be forgotten that higher-income people go to private places for recreation, using the street just to circulate and not caring about the state and quality of public space, which often remains in the background and helps to generate what Bauman calls "ghettos of exclusion," cited by Acuña et al. [21].

Ramírez Kuri and Ziccardi identify as factors of exclusion discriminatory practices in the labor market and in the access to goods and services; the weakening of social cohesion; informal activities and social conflicts; and luxury consumption activities. These can be dissolved by making economic, social, cultural and sustainability rights effective, which encourages the integration of society with the city [22].

On the other hand, we have *physical inclusion or inclusion by design*. In the search to determine the components that public space needs for inclusion, we return to the studies that have identified components of exclusion. Ramírez Kuri and Ziccardi analyze, for instance, the location of the place, which determines the quality of services and infrastructure; the informal and established commerce that pervades urban space and fosters crime and the deterioration of public space; and its accessible design [22]. These elements are taken in reverse, that is, on the positive side of what a public space must have to be considered inclusive, such as sufficient urban infrastructure.

In the design of inclusive public spaces, it is essential to take into account the physical components that foster social integration. From the perspective of Sergio Zermeño, the following are identified as components of exclusion: inaccessible primary and secondary roads; public spaces of the richer classes appropriated by needy sectors; crossroads, roads, squares, parks, sidewalks and so on that operate as frontiers; and an excess of surveillance, with corridors watched by guards, police officers and cameras. He also identifies social components such as a high risk of violence and virtual walls [23].

#### **4.3. The question of globalization trends reflected in urban space**

Publicly owned spaces must be able to adapt to and survive global transformations, absorbing these changes in different ways, by their very contrasting nature, depending on their environments and on the impacts that public places constantly receive. Globalization, one of the strongest influences on a city in every sense, whether on its society, space or culture, reinvents cities as great scenarios of alterations that are economic and political more than cultural and social, and that irreversibly impact the city's inhabitants. In this sense, the global recomposition of public space is witnessed in two aspects: public space as an alienable resource, in the sense of appropriation and privatization; and public space affected by its constructions, in the sense of transformations.

*Public space as an alienable resource, through the appropriation and privatization of space in a non-legal way*. This causes a scarcity of public spaces, mainly because of the wide commercialization of everything, a reflection of globalization, a bad economy, excessive appropriation and high delinquency, fundamentally brought about by street vendors and informal establishments that create pervaded scenarios. The transformation of Latin American cities and their spaces is a consequence of social, cultural and technological phenomena. These changes create a new form of social organization, a new cultural model, which can be called postmodernity, globalization or neoliberal culture. It regards space as a resource, a product, with social, sensory and symbolic policies that appropriate, use and transform the spaces of cities [24].

It is evident that the production of public space in current cities has changed; the measures for its construction and even its activities are different. But what is the cause? Although the causes can be many, there is still an ongoing search for the logic that gives us elements to understand these urban transformations, which have been given names that are sometimes difficult to pronounce, composed or decomposed words, or more than one word to name what is happening: redensification, urbanization, consolidation, gentrification, multiculturalism and citizen participation, among others.

In Latin America, the study of processes such as gentrification is recent. Although the bases defining the concept are not new, the term itself is relatively young, coined by the British sociologist Ruth Glass [25], who observed the differences in social structure arising from the establishment of higher-cost housing in specific areas of central London, thus examining the invasion of working-class neighborhoods by the middle and upper classes, displacing the existing population and changing the social fabric.

Later, in 1984, the sociologists Bruce London and John Palen tried to explain gentrification by means of five theories involving different aspects of life in the city: the ecological-demographic theory, which refers to population and generational statistics (baby boomers); the sociocultural theory, seen from the values, feelings, attitudes, ideas and beliefs of society; the political-economic theory, based on two approaches, the traditional and the Marxist; the community network theory, of the community lost and the community gained; and, finally, the theory of social movements and the influence of countermovements [26].

On the other hand, in 1987, Neil Smith proposed two theories to explain gentrification, observing the phenomenon from the economic and social points of view: the "production-side theory" and the "consumption-side theory." These theories address the problems of the automobile, urban expansion, changes in lifestyles, the depopulation of the city center, and transport and pedestrian spaces where human relations are diminished; above all, however, he focused his research on the results of increased employment in business districts. The interest of this geographer in these elements is a response to the very elements that have caused the greatest problems in recent decades and have been part not only of gentrification but also of the processes of redensification, rehabilitation and the numberless patches made to cities [27].

For gentrification to exist, it must be in a specific geographical space and it is considered to be happening when there is a process of investment and reinvestment of capital, when there are a series of transformations in the urban landscape due to the settlement of higher income social groups in these specific geographies and when there is a direct or indirect displacement of the existing social groups [28]. In the current debate, Michael Janoschka addresses gentrification with six points: 1. Neo-liberal policies of Gentrification, all types of public policies that establish an alliance with the capital that is invested in the city. 2. Supergeneration, when a place has been gentrified at two different historical moments. 3. Gentrification of new areas, industrial areas or ports where there is no gentrification by direct expulsion, but through all the indirect processes that occur around these neighborhoods. 4. New geographies of gentrification: spaces that have not previously been identified as spaces of gentrification, rural and suburban neighborhoods. 5. Symbolic gentrification: virtual sale and placement of new economies. 6. Resistance to gentrification: the congregation of the community to prevent the inflow of foreign capital [28]. Thus, the integration of different urban processes affects constructions and make up, renews and transforms the city and affects the dynamics, practices and design of urban spaces, which is a witness of the reinvention of the city in smaller scales.

#### **4.4. Operationalization of variables**

sidewalks, etc. which operate as frontiers, excess of surveillance and corridors watched by guards, police officers and cameras, and he also identifies social components such as high risk

Public-owned spaces must be able to adapt and survive to global transformations, which by their very contrasting nature absorb these changes in different ways, depending on their environments and the impacts public places are constantly having. Globalization, one of the strongest influences on a city in every sense, whether to its society, space or culture, reinvents them as great scenarios with strong economic and political rather than cultural and social alterations which irreversibly impact on the city's inhabitants. In this sense, the overall composition of the public space is witnessed in two aspects: The public space as an alienable resource, in the sense of appropriation and privatization; and the public space affected by its

*Public space as an alienable resource through appropriation and privatization of the space in a nonlegal way*. This causes scarcity of public spaces, mainly because of the wide commercialization of everything, a reflection of the globalization, bad economy, excessive appropriation and high delinquency, as this is fundamentally brought about by street vendors or informal establishments that create pervaded scenarios. The transformation of Latin American cities and their spaces are a consequence of social, cultural and technological phenomena. These changes create a new form of social organization, a new cultural model, which can be called postmodernity, globalization or neoliberal culture. This regards the space as a resource, a product, with social, sensual and symbolic policies, which appropriate, use and transform the

It is evident that the production of public space in current cities has changed, the measures for its construction and even its activities are different, but, what is the cause? Although the causes can be many, there is still an ongoing search for the logic that gives us elements to understand the urban transformations that have been tried to be defined with names that are sometimes even difficult to pronounce, composed or decomposed words or more than one to name what is happening: redensification, urbanization, consolidation, gentrification, multi-

In Latin America, the study of processes such as gentrification is recent. Although it is true that the bases defining this concept are not new, the term itself is relatively young, invented by the British sociologist Ruth Glass [25], who observed the differences in social structure from the establishment of higher cost housing in specific areas of Central London, thus examining the invasion of middle and upper classes on working class neighborhoods, displacing

Later, the sociologists Bruce London and John Palen in 1984 tried to explain gentrification by means of five theories that involve different aspects of the life in the city: the ecological-demographic theory, which refers to population and generational statistical aspects (baby boomers); the sociocultural theory, seen from the values, feelings, attitudes, ideas and beliefs of society;

**4.3. The question of globalization trends reflected in urban space**

of violence and virtual walls [23].

78 Risk Assessment

constructions, in the sense of transformations.

culturalism, and people participation, among others.

spaces of cities [24].

and changing the social fabric.

Given the history of the importance of public space in the city, the influence that urban interventions for luxury housing have had in recent years, and the recent public space programs, a model was created to evaluate the quality of public space in terms of inclusion or exclusion, measured using the following variables and instruments applied in the study area of Granada, Ampliación Granada and Polanco (see **Tables 4** and **5**).

For clear representation, the results are shown in a graph as a model of six concentric axes, forming two hexagons on the same axes. The perimeter of the inner hexagon is the coordinate zero, while the perimeter of the external hexagon is the coordinate +2 (a very inclusive space). The center of either of the two hexagons therefore represents a very high exclusion. In other words, the more covered the area of the hexagon is, the more inclusive that public space will be. The model was applied in all spaces of Polanco and Granadas (see **Table 5** and **Figure 6**).
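This hexagonal scoring can be sketched computationally. The following Python sketch uses hypothetical scores (the chapter's actual measurements are in Tables 4 and 5): each of the six variables receives a score in the range −2 to +2, the scores are mapped to radii of a radar polygon, and the covered fraction of the hexagon is reported, so a larger fraction indicates a more inclusive space.

```python
AXES = ["accessibility", "residential adjacency",
        "lighting, temperature and humidity",
        "urban furniture and infrastructure",
        "perception of the urban space", "control"]

def hexagon_coverage(scores):
    """Fraction of the hexagon covered by the radar polygon of six
    scores in [-2, +2]: the centre (-2 on every axis) represents
    maximal exclusion, the outer perimeter (+2) maximal inclusion."""
    assert len(scores) == 6
    # map each score in [-2, +2] to a radius in [0, 1]
    r = [(s + 2) / 4 for s in scores]
    # area of the polygon joining consecutive radii 60 degrees apart,
    # normalised by the full hexagon area (all radii equal to 1)
    return sum(r[i] * r[(i + 1) % 6] for i in range(6)) / 6.0

# hypothetical scores for two spaces
pocket_park = [-1, -1, 0, -1, 0, -2]   # an exclusive new pocket park
older_plaza = [1, 2, 1, 1, 2, 1]       # a traditional, inclusive plaza

print(round(hexagon_coverage(pocket_park), 3))
print(round(hexagon_coverage(older_plaza), 3))
```

The covered-area fraction gives a single comparable number per space while the radar plot itself (Figure 6) preserves which variables drive the exclusion.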


| Variable | Importance of the variable in case of disaster | Instrument to collect the information |
| --- | --- | --- |
| **Accessibility:** the degree or measure in which all people can use a public space | **Accessibility** to public space is crucial in all phases of disaster: the emergency phase (for evacuation purposes), search and rescue activities (for organization of activities) and reconstruction (for temporary shelter) | Plan or lines of public transportation (metro, bus, collective transport), plan for taxi sites, plan for bicycle sites, plan of virtual accessibility and crosstab plan |
| **Balanced residential adjacency:** the housing around public spaces must be balanced with the rest of the services | It has been observed that **residential adjacency** to public space permits people to be close to the collapsed buildings instead of going to official temporary shelters | Land use plan (diversity of uses), residential land use plan, adjacent housing plan with real heights (2 levels, 3 levels, 5 levels, etc.), closed neighborhoods plan and aerial photography |
| **Lighting, temperature and humidity:** the characteristics of lighting in public spaces can determine people's stay in them and their daily hours of use | A good level of **lighting, temperature and humidity** is fundamental for the use of public spaces at all phases of disaster | Heights of buildings, luminaries, terrestrial photography, aerial photography, lux meter and thermometer |
| **Urban furniture and infrastructure:** the tangible and quantitative elements that are in the public space | **Urban furniture** may enable or impede the rapid installation of emergency facilities such as tents for the reception of food or medical attention; **infrastructure** such as water or a flood-safe public space may facilitate temporary shelters | Plans of the public spaces chosen with details of furniture, urban infrastructure plan of the space, adjacent urban infrastructure plan, terrestrial photography and aerial photography |
| **Perception of the urban space:** how the resident feels about the place, in other words, if it is safe, if they feel included or excluded | A positive **perception of the urban space** (temporarily used in the different phases of disaster) may be helpful to the emotional wellbeing of victims | Photography, interviews, graphs and charts |
| **Control:** physical elements of security that control the space, such as cameras, police, surveillance modules and neighborhood watch | Physical elements of **control** and private security (as physical barriers) may impede partial or total accessibility to the public space | Security camera record plan, security module record plan, photographic record of human elements of security and interviews |

Source: Own elaboration.

**Table 4.** Variables and instruments for analyzing the quality of public space.

**Table 5.** Analysis of the quality of public space in the neighborhoods Polanco, Granada and Ampliación Granada. Source: Own elaboration.

The main result was that in the Granadas neighborhoods, although they had the determinants for their space to be recomposed through public spaces as the base of the project, this was done in an isolated manner, causing the new pocket public spaces and linear parks determined by economic tendencies to become places of exclusion because, for example:

• A lack of accessibility is seen, as there are no free internet networks in the public space and there is no bicycle parking, as opposed to Polanco, where there is. Although there is public transportation near the place, it has become exclusive due to the saturation of its use.

• The residential adjacency is not balanced; although the land use is varied, the residential complexes in the area are very high and gated neighborhoods are dominant.

• They show records of temperature, humidity and lighting that are not comfortable: in some cases the spaces have little exposure to the sun, and direct sun in others. All of them are highly humid, and the records go from the lower to the upper limit due to their low vegetation and the materials of their surroundings.

• In general, their urban furniture and infrastructure are basic, for they have benches; however, they do not have trash bins, much less fountains, sculptures or playgrounds, and although they do not have their own luminaries, they have exercise machines.

**Figure 6.** Comparative analysis of inclusion and exclusion characteristics of public spaces of the twentieth and twenty-first centuries in the Polanco and Granadas neighborhoods. Source: Own elaboration.

### **5. Conclusions and outlook**

The implementation of urban phenomena such as redensification and gentrification must be treated with more care and with plans of action for all. Failure to do so may cause:

A change of identity after a short time, the loss of neighborhood values, the displacement of neighbors, an abstract public space, a collective trademark image, insufficient urban equipment, vanishing of traditional trade, a lack of roads, scarce and exclusive public spaces, change of land use and excessive trash, among others.

The need to produce and intervene in public space will be determined by the type of urban growth of the city. In other words, if growth is disordered, the functioning of the public space will be directly affected and socially weakened.

Unplanned interventions in the city can cause problems, for example, of communication in the social and spatial senses, of urban infrastructure, and of insufficient public spaces.

The creation and intervention of public spaces in Mexico City of the twenty-first century have been governed by economic, political and social determinants immersed in a global world in search of publicly owned spaces that have inclusive characteristics.

On the other hand, the production of public spaces in this century has been resulting in residual or nook spaces, undermined spaces caught in a struggle between inclusion and exclusion.

The results show that, on the one hand, there is no quantitative similarity in the characteristics of public spaces, since they are dramatically reduced as a consequence of the lack of urban planning and of the lack of political intention to create habitable public spaces for any situation, but especially in case of disaster. That is to say, there is no urban design. On the other hand, qualitatively, we have not seen a concern that newly created spaces be inclusive and open to the general population with the intention of integration; urban projects predominate that do not favor the urban fabric but delimit territories.

In this sense, if we look at the role of public space in the earthquake of 1985 and at its use in 2017, we can derive the minimum indicators to be taken into account in the adaptation of existing spaces or, where appropriate, in the design of new ones, indicators that should be revalued in the institutional way of thinking and deciding on public space, mainly in the developing territories within Mexico City (see **Figure 7**).

**Figure 7.** Public spaces in Mexico City used to collect food and other emergency supplies right after the earthquake of September 19, 2017. Photos by the authors.

### **Acknowledgements**

The authors would like to thank the reviewer for his valuable comments and suggestions on this manuscript. This article is a product of the research project IPN-SIP 20172147, funded by the Secretaría de Investigación y Posgrado del Instituto Politécnico Nacional, México.

### **Author details**

Milton Montejano-Castillo\* and Mildred Moreno-Villanueva

\*Address all correspondence to: mmontejanoc@ipn.mx

Escuela Superior de Ingeniería y Arquitectura Unidad Tecamachalco del Instituto Politécnico Nacional, Estado de México, Mexico

### **References**


[1] Meli R. El sismo de 1985 en México. In: Lugo Hubp J, Inbar M, editors. Desastres naturales en América Latina. 1st ed. México, DF: Fondo de Cultura Económica; 2002. pp. 125-146

[2] Guevara Pérez T. Arquitectura moderna en zonas sísmicas. 1st ed. Barcelona: Gustavo Gili; 2009. 207 p

[3] Cruz Atienza VM, Krishna Singh S, Ordaz Schroeder M. Qué ocurrió el 19 de septiembre de 2017 en México. Nexos [Internet]. September 23, 2017. Available from: https://www.nexos.com.mx/?p=33830 [Accessed: November 7, 2017]

[4] Gobierno de la Ciudad de México. Reconstrucción Ciudad de México [Internet]. October 2017. Available from: http://www.reconstruccion.cdmx.gob.mx/inmuebles-demoler [Accessed: November 7, 2017]

[5] Gobierno del Distrito Federal. Bando número 2. 2000. Available from: http://www.invi.df.gob.mx/portal/transparencia/pdf/LEYES/Bando\_informativo\_2.pdf


[18] Reyes Pérez R. Habitabilidad y espacio público en Barrios históricos de Mérida, Yucatán al inicio del siglo XXI [dissertation]. Mexico City: Doctorado en Arquitectura, Facultad de Arquitectura de la UNAM; 2012. 486 p. Available from: http://132.248.9.195/ptd2012/mayo/0679703/Index.html

[19] Valladares Anguiano R, Chávez Martha E, Moreno Olmos S. Elementos de la Habitabilidad Urbana. In: Seminario internacional de arquitectura y vivienda; 2008. Available from: http://insumisos.com/LecturasGratis/elementos%20de%20la%20habitabilidad%20-%20valladares%20chavez%20y%20moreno.pdf

[20] Akkar M. Questioning 'inclusivity' of public spaces in post-industrial cities: The case of Haymarket Bus Station, Newcastle upon Tyne. METU Journal of Faculty of Architecture. 2005;**22**(2):1-24. Available from: http://jfa.arch.metu.edu.tr/archive/0258-5316/2005/cilt22/sayi\_2/1-24.pdf

[21] Acuña C, de Souza L, Leicht E, Musso C, Vainer D, Varela A. Aglomeración Maldonado – Punta del Este – San Carlos. Enfoques y propuestas. Hacia un modelo transformador. 1st ed. Montevideo: Facultad de Arquitectura/Instituto de Teoría de la Arquitectura y Urbanismo. Universidad de la República. Uruguay; 2013. 122 p. Available from: http://www.fadu.edu.uy/itu/files/2014/12/ITU-AGLOMERACI%C3%93N-MALDONADO-PUNTA-DEL-ESTE-SAN-CARLOS.pdf

[22] Ramírez Kuri P, Ziccardi A. Pobreza urbana, desigualdad y exclusión social en la ciudad del siglo XXI. In: Cordera R, Ramírez Kuri P, Ziccardi A, editors. Pobreza, desigualdad y exclusión social en la ciudad del siglo XXI. 1st ed. México, DF: Siglo XXI Editores/UNAM Instituto de Investigaciones Sociales; 2008. pp. 23-50

[23] Zermeño S. La centralidad de los excluidos. In: Cordera R, Ramírez Kuri P, Ziccardi A, editors. Pobreza, desigualdad y exclusión social en la ciudad del siglo XXI. 1st ed. México, DF: Siglo XXI Editores/UNAM Instituto de Investigaciones Sociales; 2008. pp. 135-152

[24] Remedi G. La ciudad Latinoamericana S.A. (o el asalto al espacio público). In: Subsecretaría de Planeamiento, editor. Las dimensiones del espacio público: problemas y proyectos. Buenos Aires: Gobierno de la Ciudad de Buenos Aires; 2004. pp. 15-25

[25] Glass RL, Westergaard J. London's Housing Needs: Statement of Evidence to the Committee on Housing in Greater London. London: Centre for Urban Studies, University College; 1965. 97 p

[26] London B, Palen J, editors. Gentrification, Displacement, and Neighborhood Revitalization. Albany: State University of New York Press; 1984. 271 p

[27] Smith N. Gentrification and the rent gap. Annals of the Association of American Geographers. 1987;**77**(3):462-465

[28] Janoschka M. Geografías urbanas en la era del Neoliberalismo. Una conceptualización de la resistencia local a través de la participación y la ciudadanía urbana. Investigaciones Geográficas, Boletín del Instituto de Geografía, UNAM. 2011;**76**:118-132. Available from: http://www.investigacionesgeograficas.unam.mx/index.php/rig/article/view/29879/27778

Provisional chapter

### **On Risk and Reliability Studies of Climate-Related Building Performance**

DOI: 10.5772/intechopen.71684

Krystyna Pietrzyk and Ireneusz Czmoch

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/intechopen.71684

#### Abstract

A design strategy based on integration of the building form and structure with its external environment in order to take advantage of natural forces (wind and buoyancy effects) has been evaluated in terms of risk and reliability measures. Tools for the probabilistic analysis (First-Order Reliability Method (FORM), Monte Carlo) have been presented and applied in the probabilistic modelling and sensitivity analysis of the response function of the studied building physics problem. Sensitivity analysis of the influence of basic random variables on the probability distribution of a response function is straightforward in FORM methodology. The case-based studies of probabilistic modelling of uncertainties coupled to wind speed and temperature difference through the specified building/environment system have been presented (i.e., the distribution models of the air change rate ACH and the dynamic U value characterising thermal performance of dynamic insulation). Sensitivities of the probability model of ACH to the parameters of wind speed and temperature distributions have been estimated for the consecutive values of the air change rate using FORM methodology. Reliability of ACH turned out to be most sensitive to the shape parameter of the wind speed distribution (in two-parameter Weibull model). The probabilistic risk analysis along with the effective tools for sensitivity analysis can be used to support design decisions and also to develop better models for evaluation of building performance.

Keywords: building performance, environment, risk, reliability, probabilistic approximation, FORM, sensitivity, climate, climate mitigation, wind, air infiltration, ACH, dynamic U value

### 1. Introduction

Developing tools to support decision-making to ensure comfort and safety in the built environment while taking into account climate change challenges becomes important. 'Energy is the dominant contributor to climate change, accounting for around 60% of total global greenhouse gas emissions' [1]. Reducing the carbon intensity of energy is a key objective in long-term climate goals. Hence, choosing a strategy based on integrating the building form and structure with its external environment in order to take advantage of natural forces (for natural ventilation, solar heating, etc.) is an example of design decisions leading towards mitigation of climate change.

© 2018 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Decisions concerning the choice of design solutions for a particular project have to be taken under uncertainties related to unknown and variable conditions, i.e. random climatic conditions, uncertain material performance, uncertain user behaviour, etc. Confronted with significant uncertainty, deterministic modelling supporting the design process has proved insufficient for decision-making. As stated in [2], 'Existing engineering-based models are unable to propagate uncertainties through the model, and are therefore limited in their ability to display the impact of uncertainties to decision makers'. Facing that challenge, this chapter discusses the models and tools applied by the authors for the probabilistic transformation of uncertainties of climatic parameters through a building/environment system for the predictive modelling of building performance.

The method for the quantification of building performance in terms of probability of poor performance (failure) and satisfactory performance (safe behaviour, in general meaning) is presented. Next, the tools for the probabilistic analysis are described (FORM, Monte Carlo) in relation to probabilistic modelling and possible applications of sensitivity analysis. One of the important results of analysis is the probability distribution functions of different performances as the responses of building/environment systems to the environmental loads. Such analysis requires estimation of some climatic parameters in terms of frequency of occurrence and appropriate statistics.

The chapter includes the case-based studies of probabilistic transformation of uncertainties coupled to wind and temperature through the specified building/environment system to show the effect on the distribution model of the air change rate and further on the distribution model of the dynamic thermal transmittance (dynamic U value) of the building envelope. Furthermore, the estimated distribution models could be included in risk/reliability calculations, carried out with FORM tools. The analysis of the sensitivity of the distribution of ACH with respect to the randomness of wind speed and outdoor temperature exemplifies the potential of the FORM tools, which can be effectively used to find out the probabilistic characteristics typical for the combination of the important variables influencing climate-structure interaction.
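As a minimal illustration of such a transformation of climatic uncertainties (this is not the authors' model: the infiltration relation and all coefficients below are invented for the sketch), the following Python code propagates a Weibull-distributed wind speed and a normally distributed indoor-outdoor temperature difference through a simplified single-zone infiltration relation to obtain an empirical distribution of the air change rate.

```python
import math
import random
import statistics

random.seed(1)

# hypothetical coefficients of a simplified infiltration relation in
# which wind and stack contributions are combined under a square root
C_WIND, C_STACK = 0.010, 0.005

def ach(u, dT):
    """Illustrative air change rate (1/h) for wind speed u (m/s) and
    indoor-outdoor temperature difference dT (K)."""
    return math.sqrt(C_WIND * u**2 + C_STACK * abs(dT))

# Monte Carlo sampling of the climatic uncertainty:
# wind speed ~ Weibull(scale 5 m/s, shape 2), dT ~ N(20 K, 5 K)
samples = []
for _ in range(20000):
    u = random.weibullvariate(5.0, 2.0)
    dT = random.gauss(20.0, 5.0)
    samples.append(ach(u, dT))

mean_ach = statistics.mean(samples)
# probability of falling below a hypothetical 0.5 1/h requirement
p_low = sum(a < 0.5 for a in samples) / len(samples)
print(f"mean ACH = {mean_ach:.2f} 1/h, P(ACH < 0.5) = {p_low:.3f}")
```

The resulting empirical distribution of ACH is exactly the kind of response-function distribution that can then enter the risk/reliability calculations described in the text.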

### 2. Risk perspective on design for sustainable development

Design for sustainable development can be approached using risk analysis tools. To minimalise the risk of undesired consequences while increasing the chance to enhance the quality of life becomes the basic design objective. The design goal can be expressed in other terms—how to secure reliability of design under risk constraints [3]. To be clear about the terminology used further, some definitions are given below.

• Risk—a state of uncertainty where some possible outcomes have an undesired effect or significant loss [4]. It can be expressed in terms of adverse consequences scaled by the probabilities of undesired outcomes.


#### 2.1. Risk perspective on climate change challenges


Climate change threatens life on our planet. In view of high uncertainty, qualitative or semiquantitative risk analysis based on the different scenarios is often applied. Following the quantitative definition of risk, one can write

$$\mathrm{Risk} = P[\mathrm{hazard}] \times \mathrm{Consequences} \tag{1}$$

$P[\mathrm{hazard}]$ is the probability of occurrence of undesired events leading to possible Consequences such as loss, injury, or discomfort.

Risk reduction can be accomplished by decreasing the probability of the undesired event as well as by diminishing the scale of adverse consequences. Risk reduction of climate change and its consequences can be accomplished by climate change mitigation (decreasing the probability of occurrence of adverse events) or climate change adaptation (decreasing the adverse consequences), described as follows:

Climate change mitigation—'it consists of actions to limit the magnitude and/or rate of long-term climate change' [10]. 'It generally involves reductions in human (anthropogenic) emissions of greenhouse gases' [11].

Climate change adaptation—'anticipating the adverse effects of climate change and taking appropriate action to prevent or minimize the damage they can cause, or taking advantage of opportunities that may arise' [12].
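In these terms, Eq. (1) can be applied directly. The toy numbers below are hypothetical; they merely illustrate that mitigation acts on the probability factor of Eq. (1) while adaptation acts on the consequence factor.

```python
def risk(p_hazard, consequences):
    """Quantitative risk as in Eq. (1): probability of the hazard
    times its consequences (here in arbitrary monetary units)."""
    return p_hazard * consequences

# hypothetical numbers for one climate-related hazard scenario
baseline  = risk(0.02, 1_000_000)   # no action
mitigated = risk(0.01, 1_000_000)   # mitigation halves the probability
adapted   = risk(0.02, 400_000)     # adaptation reduces the damage

print(baseline, mitigated, adapted)  # 20000.0 10000.0 8000.0
```

Either lever reduces the risk; which one is preferable depends on the relative cost of changing the probability versus changing the consequences.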

#### 2.2. Risk assessment as a tool supporting design of buildings

Designing for the integration of the building form and structure with its external environment in order to use natural forces to secure comfort (passive strategies) is an example of activities towards mitigation of climate change. If it is supported by probabilistic prediction of local climate change, it can also be viewed from the climate adaptation perspective.

The need for risk reduction related to hazards induced by climate change becomes an important boundary condition in the modelling of the building/environment system to support building design. Following the definition of risk (Eq. (1)), adverse consequences are indicated by the set {yes = 1, no = 0}, and as a result the probability $P[\mathrm{hazard}]$ becomes the discriminating factor for comparison of different design solutions. It means that a certain design can be chosen on the basis of a comparison of the probabilities of unsatisfactory performance, or the reliabilities, evaluated for a set of alternative design proposals. In that way risk assessment becomes a tool supporting the design of buildings.

### 3. Transformation of uncertainties: probabilistic approach

Probability is a measure of uncertainty about future events. The probability of a given performance of a building/environment system depends on the theoretical model used and on the randomness of the influencing parameters. The epistemic uncertainty about the applied theoretical models, together with the aleatory uncertainty coupled to the randomness of important phenomena, contributes to the final uncertain outcome as the result of the transformation of uncertainties throughout the model. Methods and tools for probabilistic reliability analysis can be used to estimate the probabilistic response of the structure to the random climatic load. They can be an important part of a risk-based design process built upon the framework of risk management methodology as proposed in [13].

#### 3.1. Probabilistic analysis with FORM

The development of reliability methods has resulted in a variety of powerful algorithms to estimate the probability of failure for complicated physical and mathematical models of building systems incorporating random variables (i.e. properties or actions). FORM denotes the 'First-Order Reliability Method', which has been developed by many researchers over the past 40 years and is the most popular approach applied in practice. A short description of the FORM basics as well as of its sensitivity tools is presented below; for details, see [14], and for applications in building physics, see [15].

Once the response of a system characterised by a set of basic random variables and a mathematical model describing the relationship among them has been established, the probability density function of the response can be estimated with the help of FORM tools.

In general, the performance of a system is analysed in the space $\Omega_X = \{\mathbf{X} \in \mathbb{R}^n\}$ of basic random variables X. For a given failure mode or serviceability requirement, represented by the limit state surface $g(\mathbf{X}) = 0$, the space $\Omega_X$ is divided into the safe subset $\Omega_S = \{\mathbf{X} \in \mathbb{R}^n;\ g(\mathbf{X}) > 0\}$, i.e. satisfactory performance, and the failure subset $\Omega_F = \{\mathbf{X} \in \mathbb{R}^n;\ g(\mathbf{X}) \le 0\}$, i.e. unsatisfactory performance. If all random variables are continuous with the multivariate joint probability density function (PDF) $f_X(\mathbf{x})$, the failure probability is given by the integral

$$P\_f = \int\_{\Omega^F} f\_X(\mathbf{x})\,d\mathbf{x} \tag{2}$$

The integral (Eq. (2)) can be evaluated exactly for a few cases with the most important one: the linear limit state surface and multidimensional normal (Gaussian) distribution function of variables X.

The FORM algorithm starts with a non-linear transformation: the non-normal random vector X is transformed into a standard normal (Gaussian) vector Y with zero mean and unit covariance matrix $\mathbf{C}_{YY} = \mathbf{I}$. The limit state surface $g(\mathbf{x}) = 0$ is mapped into a limit state surface $G(\mathbf{y}) = 0$. Next, the design point $\mathbf{y}^*$, i.e. the point on the limit state surface with the minimum distance to the origin of the Y space, is determined by solving the non-linear optimisation problem with the non-linear constraint $G(\mathbf{y}) = 0$:

$$\boldsymbol{\beta} = \min \sqrt{\mathbf{y}^T \mathbf{y}} \quad \text{for} \quad \mathbf{y} \quad \text{on} \ G(\mathbf{y}) = \mathbf{0} \tag{3}$$

The hyperplane tangential to the limit state surface at the point $\mathbf{y}^*$ is given by the formula

$$\beta - \boldsymbol{\alpha}^T \mathbf{y} = 0 \tag{4}$$

where α is the unit outward normal vector to the hyperplane and β is the distance between the hyperplane and the origin. Since the random vector $\mathbf{Y} = \mathbf{Y}(\mathbf{X})$ has a standard normal distribution, the first-order approximation of the failure probability is given by

$$P\_f \cong P\left[\boldsymbol{\beta} - \boldsymbol{\alpha}^T \mathbf{Y} \le 0\right] = \boldsymbol{\Phi}(-\boldsymbol{\beta})\tag{5}$$

where Φ(…) is the standard normal cumulative distribution function (the Laplace function).
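The result of Eq. (5) can be illustrated with a minimal numerical sketch. The limit state, the Gaussian resistance R and the Gaussian load effect S below are purely illustrative assumptions; for a linear limit state with independent Gaussian variables, the first-order approximation is in fact exact:

```python
import math

def norm_cdf(x):
    # Standard normal CDF expressed via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Illustrative (assumed) resistance R and load effect S, independent Gaussians:
mu_R, sigma_R = 5.0, 0.8   # e.g. capacity of a component
mu_S, sigma_S = 3.0, 0.6   # e.g. climatic load effect

# For the linear limit state g(X) = R - S, the transformed surface G(y) = 0
# is a hyperplane, so Eq. (5) gives the exact failure probability:
beta = (mu_R - mu_S) / math.sqrt(sigma_R**2 + sigma_S**2)
P_f = norm_cdf(-beta)

print(f"reliability index beta = {beta:.3f}")    # beta = 2.000 here
print(f"failure probability P_f = {P_f:.3e}")
```

With the assumed moments the index is β = 2 and P_f = Φ(−2) ≈ 2.3 × 10⁻²; comparing such probabilities across alternative designs is exactly the discriminating step described in Section 2.2.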


The non-linear constrained optimisation problem (Eq. (3)) can be solved with many standard procedures as well as with algorithms developed especially for this purpose, e.g. an algorithm for the case of independent, non-normal random variables [16] and an algorithm for problems with incomplete probability information [17].

All such solvers are iterative: for the assumed value of the design point $\mathbf{x}^*_{(k)}$, the value of the limit state function $g(\mathbf{x}^*_{(k)})$ and its gradient $\nabla g(\mathbf{x}^*_{(k)})$ are determined. Next, a new position of the design point $\mathbf{x}^*_{(k+1)}$ is derived, and the process continues until the convergence criteria are fulfilled. If the state of the analysed system is described by a performance function defined by an analytical formula, then the gradient can be evaluated easily, and one of the algorithms solving the optimisation problem (Eq. (3)) can be applied directly. Otherwise, the stochastic finite element method should be applied in order to calculate the value of the limit state function and its gradient vector at the consecutive design points.
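One widely used iterative scheme of this kind is the Hasofer-Lind-Rackwitz-Fiessler (HL-RF) recursion. The sketch below works directly in the standard normal Y space; the limit state function is an illustrative assumption chosen so that the exact answer (β = 3) is known:

```python
import math

def hlrf(G, gradG, y0, tol=1e-8, max_iter=100):
    """Sketch of the HL-RF recursion: repeatedly project the current point
    onto the linearised limit state surface (Eq. (3)) until it stabilises."""
    y = list(y0)
    for _ in range(max_iter):
        g, grad = G(y), gradG(y)
        norm2 = sum(c * c for c in grad)
        # New design point on the tangent hyperplane of G at the current y
        factor = (sum(c * yi for c, yi in zip(grad, y)) - g) / norm2
        y_new = [factor * c for c in grad]
        if max(abs(a - b) for a, b in zip(y_new, y)) < tol:
            y = y_new
            break
        y = y_new
    beta = math.sqrt(sum(yi * yi for yi in y))
    return beta, y

# Illustrative limit state in standard normal space (assumed for the sketch):
# G(y) = 3 - 0.6*y1 - 0.8*y2, so beta = 3 since (0.6, 0.8) is a unit vector.
G = lambda y: 3.0 - 0.6 * y[0] - 0.8 * y[1]
gradG = lambda y: [-0.6, -0.8]

beta, y_star = hlrf(G, gradG, [0.0, 0.0])
print(beta)          # -> 3.0 for this linear case
```

For a linear G the recursion converges in one step; for a non-linear performance function the same loop iterates, with the gradient re-evaluated at each new design point as described above.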

The first-order reliability index β and the failure probability $P_f \cong \Phi(-\beta)$ depend on:

- Parameters $\mathbf{p} = (p_1, \ldots, p_d)$ of the probability distributions of the basic random variables, e.g. mean value, standard deviation, skewness or location, scale and shape parameters
- Any deterministic parameters $\Theta = (\theta_1, \ldots, \theta_g)$ defining the form of the performance function $g(\mathbf{x};\ \theta_1, \ldots, \theta_g)$
Practical experience shows that the failure probability is usually a strongly non-linear function of the parameter θ, whereas the reliability index β is a rather linear function of the parameter θ. Thus the change in the failure probability due to the change of the parameter θ can be approximated as follows:

$$P\_f(\theta + \Delta\theta) \cong \Phi(-\beta - \Delta\beta) \cong \Phi\left(-\beta - \frac{d\beta}{d\theta}\Delta\theta\right) \tag{6}$$

The sensitivity measures of the first-order reliability index do not depend on the curvature of the limit state surface gð Þ¼ x 0 at the design point. Therefore, the application of sensitivity measures is limited to small changes of the values of the parameters.

The sensitivity of the first-order approximation of the failure probability $P_f \cong \Phi(-\beta)$ is directly related to the sensitivity of the reliability index β, since

$$\frac{dP\_f}{d\theta} = -\varphi(\beta)\frac{d\beta}{d\theta} \tag{7}$$

If θ is a parameter of the limit state function $g(\mathbf{x}; \theta)$, then the derivative of the reliability index with respect to the parameter θ is equal to

$$\frac{d\beta}{d\theta} = \frac{1}{|\nabla G(\mathbf{y}^\*, \theta)|} \frac{\partial}{\partial \theta} G(\mathbf{y}^\*, \theta) \tag{8}$$

where the vector Y contains independent standard normal variables related to the vector of basic random variables by the transformation $\mathbf{Y} = \mathbf{Y}(\mathbf{X})$, and the limit state surface $g(\mathbf{x}; \theta) = 0$ defined in the space X has been mapped into the surface $G(\mathbf{y}; \theta) = 0$. Since the FORM index β is equal to the minimum distance between the origin of the Y space and the limit state surface $G(\mathbf{y}; \theta) = 0$, the design point $\mathbf{y}^*$ lies on the limit state surface; see Figure 1:

$$\beta = -\frac{\nabla G(\mathbf{y}^\*, \boldsymbol{\theta})}{|\nabla G(\mathbf{y}^\*, \boldsymbol{\theta})|} \mathbf{y}^\* \tag{9}$$

The limit state surface $g(\mathbf{x}; \theta) = 0$ in the X space of basic random variables does not depend on any parameter $p_{ik}$ of a random variable $X_i$ with the distribution function $F_i(x_i; p_{ik})$. However, the limit state surface $G(\mathbf{y}; \theta) = 0$ does depend on the parameter $p_{ik}$ due to the transformation $\mathbf{Y} = \mathbf{Y}(\mathbf{X})$.

Figure 1. Illustration of sensitivity indices $\alpha_i$ (modified from [14]).
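Eqs. (6) and (7) can be checked numerically. The sketch below assumes an illustrative case where the parameter θ is the standard deviation of a Gaussian load S ~ N(1, θ) against a fixed threshold of 3, so that β(θ) = 2/θ is a known, non-linear function of θ:

```python
import math

norm_cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
norm_pdf = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

# Assumed case: limit state g = 3 - S with S ~ N(1, theta), hence
# beta(theta) = (3 - 1) / theta -- deliberately non-linear in theta.
beta = lambda theta: 2.0 / theta
dbeta_dtheta = lambda theta: -2.0 / theta**2     # analytical sensitivity

theta, dtheta = 1.0, 0.05

# Eq. (6): linearised change of the failure probability vs the exact value
P_f_approx = norm_cdf(-(beta(theta) + dbeta_dtheta(theta) * dtheta))
P_f_exact = norm_cdf(-beta(theta + dtheta))

# Eq. (7): sensitivity of P_f follows from the sensitivity of beta
dPf_dtheta = -norm_pdf(beta(theta)) * dbeta_dtheta(theta)

print(f"approx: {P_f_approx:.5f}  exact: {P_f_exact:.5f}")
print(f"dP_f/dtheta = {dPf_dtheta:.5f}")
```

For the small parameter change Δθ = 0.05 the linearised probability agrees with the exact one to within a few times 10⁻⁴, which illustrates why these sensitivity measures are restricted to small parameter changes.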


The derivative of the reliability index with respect to the parameter $p_{ik}$ is given by the relation

$$\frac{\partial \beta}{\partial p\_{ik}} = \frac{1}{\beta} (\mathbf{y}^\*)^T \frac{\partial}{\partial p\_{ik}} \mathbf{y}^\* \tag{10}$$

The derivative of the vector $\mathbf{y}^*$ with respect to the parameter $p_{ik}$ has to be evaluated for each specific transformation $\mathbf{Y} = \mathbf{Y}(\mathbf{X})$; for details, see [14].

Sensitivity analysis shows how the uncertainty in the output response function of a system can be allocated to the different uncertainties in the basic variables. Sensitivity analysis is straightforward within the FORM methodology. The influence of the basic random variable $y_i$ on the statistics of the response can be quantified by the sensitivity indices $\alpha_i$ [18]:

$$\alpha\_i = -\frac{d\beta}{dy\_i} \quad \text{for} \ y\_i = y\_i^\* \tag{11}$$

where $\mathbf{y}^*$ is the design point in the space of normalised reduced random variables.

For uncorrelated random variables, the sensitivity vector α coincides with the direction cosines vector of the random variables [18]. Illustration of sensitivity indices is given in Figure 1.
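For uncorrelated variables the indices of Eq. (11) are therefore trivial to extract from a FORM run. A minimal sketch, assuming an illustrative design point in the Y space:

```python
import math

# Illustrative design point (assumed), e.g. obtained from a FORM search:
y_star = [1.2, -0.9, 0.5]

# FORM index = distance from the origin to the design point
beta = math.sqrt(sum(y * y for y in y_star))

# For uncorrelated variables, Eq. (11) reduces to the direction cosines of y*:
alpha = [y / beta for y in y_star]

print(f"beta = {beta:.4f}")
print(f"alpha = {[round(a, 4) for a in alpha]}")
# The squared indices sum to one, so each alpha_i**2 can be read as the
# share of variable y_i in the uncertainty of the response.
print(sum(a * a for a in alpha))
```

The largest |α_i| identifies the basic variable that contributes most to the uncertainty of the response, which is the practical use of Figure 1.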

#### 3.2. FORM versus Monte Carlo simulation

An alternative technique applied for probability estimation of risk or reliability is Monte Carlo simulation (MCS). For the purpose of the zero–one indicator-based MCS, Eq. (2) defining the failure probability is given as follows:

$$P\_f = \int\_{\Omega^F} f\_X(\mathbf{x})\,d\mathbf{x} = \int\_{\mathbf{k} \in \mathbb{R}^n} \mathbf{k}\, h\_{\mathbf{K}}(\mathbf{k})\,d\mathbf{k} \tag{12}$$

where the random vector K has the non-negative sampling density function hK(k) and is defined by the transformation

$$\mathbf{k} = I \left( \mathbf{g}(\mathbf{k}) \le 0 \right) \frac{f\_X(\mathbf{k})}{h\_\mathbf{K}(\mathbf{k})} \tag{13}$$

and $I(u)$ is an indicator function:

$$I(u) = \begin{cases} 1 & \text{if} \quad u \le 0 \\ 0 & \text{if} \quad u > 0 \end{cases} \tag{14}$$

In this way, the failure probability is equal to the expectation of the random vector with the non-negative sampling density function $h\_{\mathbf{K}}(\mathbf{k})$:

$$P\_f = E[\mathbf{K}] \tag{15}$$

The average of N simulated values of the random vector K is the estimator of the failure probability, whose variance is equal to

$$Var\left[\hat{P}\_f\right] = \frac{1}{N(N-1)}\sum\_{i=1}^{N} \left(\mathbf{k}\_i - \hat{P}\_f\right)^2\tag{16}$$

The Monte Carlo simulation technique is a powerful tool for calculating the probability of failure of a system described by a non-continuous performance function as well as by discrete random variables. However, the basic drawback of MCS is the long computation time when the failure probability is of the order of $10^{-2}$–$10^{-6}$, since the sample size must be very large in order to estimate the failure probability with low variance and a narrow confidence interval. Various variance reduction techniques have been suggested to increase the efficiency of MCS. The basic idea is to assume a sampling density function $h\_Y(\mathbf{y})$ that reduces the variance of the estimator $\hat{P}\_f$. In the case of highly complicated systems, when a time-consuming method must be applied to evaluate a single value of the limit state function, MCS with a variance reduction technique is still an approach demanding a lot of computer time. Another drawback of MCS, especially important in the context of this chapter, is the lack of sensitivity analysis tools: it is simply impossible to run billions of simulations in order to study the sensitivity of the system with respect to specific parameters.
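The zero-one indicator scheme of Eqs. (12)-(16) can be sketched in a few lines. The limit state and its distributions below are illustrative assumptions (the same R − S case as before, with exact P_f = Φ(−2)), and sampling is done directly from f_X, so the importance weight f_X/h_K in Eq. (13) equals one:

```python
import math
import random

random.seed(1)   # fixed seed so the sketch is reproducible

# Crude (zero-one indicator) Monte Carlo. Assumed limit state for the sketch:
# g(R, S) = R - S with R ~ N(5, 0.8) and S ~ N(3, 0.6), exact P_f = Phi(-2).
N = 200_000
k = [1.0 if random.gauss(5.0, 0.8) - random.gauss(3.0, 0.6) <= 0.0 else 0.0
     for _ in range(N)]

P_f_hat = sum(k) / N                                            # Eq. (15)
var_hat = sum((ki - P_f_hat) ** 2 for ki in k) / (N * (N - 1))  # Eq. (16)

print(f"P_f ~ {P_f_hat:.5f} +/- {math.sqrt(var_hat):.5f} (exact 0.02275)")
```

Even for this moderate target probability of about 2 × 10⁻², 2 × 10⁵ samples are needed for a standard error of a few times 10⁻⁴, which illustrates why far rarer failures make crude MCS prohibitively expensive.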

### 4. Case-based risk/reliability studies of climate-related building performance

#### 4.1. Building/Environment system performance

In the context of ventilation design, air infiltration constitutes an important complement to air exchange. Furthermore, air infiltration can influence the properties (thermal and structural) of building components. For some building technologies, e.g. lightweight timber frames with mineral wool filling and loose mineral wool layers for roof insulation, a dependence of the thermal properties of building components on air infiltration can be observed; thus the interaction between, e.g., thermal transmittance and air infiltration should be taken into account. Therefore, it is important to apply a systemic approach to building/environment system performance, taking into account different aspects of building physics. Due to the random natural driving forces governing the rate of air infiltration, an approach based on probabilistic methodology seems to be very well suited to handle these phenomena.

Figure 2. Building/Environment system applied in a traditional building physics analysis [19].

A building can be seen as a system transforming as well as resisting different loads (static and dynamic loads caused by the flow of air, heat and moisture) which is designed to ensure safe and comfortable living conditions inside the enclosure. The structure has to be designed in such a way that the possibilities of adverse consequences of this transformation, for example, loss of stability of the structure, inadequate ventilation or mould growth inside a building, are minimised. This systemic approach provides a proper theoretical tool for the analysis of the interrelations between the structure, its environment and its performance. An example of a systemic model of a building, applicable in building physics studies, is shown in Figure 2 [19].

The local environmental conditions interact with building structure to form a microclimate around a building. Sources of heat, air and moisture, including the products of HVAC systems as well as user behaviour, build up the internal load. Physical boundary conditions define the level of integration of the structure with the environment.

The output of the system can be described by the performance of the building (structure and enclosure). The performance can be considered in terms of safety, comfort and energy consumption and described by various parameters depending on physical conditions of the building structure and inside air. Those parameters should fulfil the performance requirements in order to prevent undesired performance (failure state) occurrence.

#### 4.2. Case description


#### 4.2.1. Description of the test house

The object of the study is a timber-framed, low-rise, naturally ventilated building with an aspect ratio of 2 and a roof slope of 45° [20]. The building site, in the district of Gothenburg, can be described as a semi-urban area with a surface roughness equal to 0.3 m. The example has been worked out for wind blowing from the west. It is assumed that the building is surrounded by obstructions (other buildings, topography, vegetation, trees, etc.) equivalent to half of its height. The following input data are used: volume of the house V = 486 m³, area of the building envelope A = 336 m² and internal temperature T_int = 20°C.

Figure 3. Object of the study – the Building/Environment system [20].

The house was constructed in 1979 with the intention of using it for experimental studies in building physics with focus on ventilation and energy saving. The garage with doors facing south is located in the extended south part of the concrete cellar as shown in Figure 3.

#### 4.2.2. Measurement programme

The following parameters have been measured, as shown in Figure 4: (1) the leakage characteristics of the house, using blower door tests; (2) the mean value of the pressure difference across the six building components, with Validyne pressure transducers; (3) wind speed and wind direction, with an anemometer located on a small hill about 25 m from the house; (4) internal and external temperatures; and (5) a limited number of tracer gas measurements of ACH. The measurement programme was carried out over 8 months; as a result, hourly mean data have been registered.

The results of the pressure drop measurements have been used to validate the air infiltration through the envelope. An opening under the garage door has been treated separately in the calculation model for air change rate [20, 21].

#### 4.3. Modelling of air change rate

The applied infiltration model takes into account the contribution of wind and stack effect to the total air change rate (ACH) in the following form [22]:

$$\text{ACH} = \sqrt{\text{ACH}^2\_s + \text{ACH}^2\_w} \tag{17}$$

where ACHs is the air change rate caused by stack effect and ACHw is the air change rate caused by wind.

The model refers to a low-rise building with lightweight construction, a single ventilation zone, a single temperature zone and steady-state conditions of air flow.

The infiltration model developed by Pietrzyk [20] describes the air change rate ACH as a random function of three basic random variables: temperature difference, wind speed and wind direction. Wind direction is divided into eight sectors and is treated as uniformly distributed within each sector. Finally, the air change rate conditioned on the wind direction sector is given by the following expression:

$$\text{ACH}\_{d} = \sqrt{\mathbf{s}\_{1}\boldsymbol{\Delta T}^{2} + \mathbf{s}\_{2}|\boldsymbol{\Delta T}| + \mathbf{s}\_{3}|\boldsymbol{\Delta T}|^{1.5} + \mathbf{w}\_{d,1}\boldsymbol{\upsilon}\_{d}^{4} + \mathbf{w}\_{d,2}\boldsymbol{\upsilon}\_{d}^{2} + \mathbf{w}\_{d,3}\boldsymbol{\upsilon}\_{d}^{3}} \tag{18}$$

where d is a wind direction sector; $s_1$, $s_2$, $s_3$, $w_{d,1}$, $w_{d,2}$ and $w_{d,3}$ are deterministic coefficients related to the house dimensions, the position of the neutral pressure layer and the levels of the external and internal pressure coefficients; ΔT is the ext.-int. temperature difference; and v denotes the wind speed.
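Eqs. (17)-(18) can be sketched as a small function. The coefficient values below are purely illustrative placeholders, not the fitted coefficients of the cited model:

```python
import math

def ach_wind_stack(dT, v_d, s=(0.002, 0.010, 0.004), w=(1e-5, 0.004, 4e-4)):
    """Sketch of Eq. (18) for one wind-direction sector. The coefficients
    s1..s3 and wd1..wd3 are illustrative placeholders; in the cited model
    they follow from the house geometry, the neutral pressure layer and
    the pressure coefficients."""
    s1, s2, s3 = s
    wd1, wd2, wd3 = w
    stack = s1 * dT**2 + s2 * abs(dT) + s3 * abs(dT)**1.5   # ACH_s^2 terms
    wind = wd1 * v_d**4 + wd2 * v_d**2 + wd3 * v_d**3       # ACH_w^2 terms
    return math.sqrt(stack + wind)   # Eq. (17): quadrature combination

# Example: 15 K ext.-int. temperature difference, 4 m/s sector wind speed
print(f"ACH_d = {ach_wind_stack(15.0, 4.0):.3f} 1/h")
```

With these assumed coefficients the combined air change rate comes out a little below 1 air change per hour; the quadrature combination in Eq. (17) means that neither the stack nor the wind contribution simply adds to the other.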


Distributions of the air change rate averaged over one-hour (1-h) periods at a randomly chosen time in the year have been estimated with the help of the model described by Eq. (18). One-hour mean data ensure steady-state conditions of the airflow through the building envelope. Wind is the most important source of variations in the process of air exchange. However, according to the wind energy spectrum presented in [23], for the frequency range 0.00014–0.0033 cycles/hour, corresponding to time intervals from 5 min to 2 h, the wind speed varies only slightly; this range is called the spectral gap. Measurements carried out over periods of that duration can be regarded as representing steady-state conditions [24].

Performance criteria in terms of ACH should take into account the minimum threshold evaluated with respect to unhygienic conditions. Then, the probability of unsatisfactory performance is equal to P[ACH < threshold].

Figure 4 presents how a building response such as ACH depends on the uncertain environmental conditions. The wind speed is traced from the meteorological station to the site and eventually to the building envelope, which in turn influences the microclimatic conditions near the structure. The zone of wind-structure interaction is included in the model of the designed system (see the boundary conditions of the system indicated by the solid lines). Serviceability performance due to wind action can be evaluated in terms of the probability of undesired performance (failure). It is worth noticing that the measurement data have been used both to model the building performance and to validate the results of the analysis carried out with the help of the established model.

The probability density function of the air change rate, as a function of the basic random variables 1-h mean wind speed and 1-h mean temperature difference at time points chosen randomly during the year, has been estimated with the help of the FORM sensitivity analysis for the performance function

$$g(\mathbf{x}\_1, \mathbf{x}\_2; a) = \mathrm{ACH}(\mathbf{x}\_1, \mathbf{x}\_2) - a \tag{19}$$

where $\mathrm{ACH}(x\_1, x\_2)$ is given by Eq. (18), $x\_1 = \Delta T$ and $x\_2 = v\_d$.

Figure 4. Transformation of uncertainty within the modelling of building performance.

The parametric sensitivity analysis applied to the FORM measures (reliability index or failure probability) is used in order to determine the probability density function of the random response $\mathrm{ACH} = \mathrm{ACH}(x\_1, x\_2)$. The cumulative distribution function of the random function $\mathrm{ACH}(x\_1, x\_2)$ is actually equivalent to the probability of failure defined for the performance function of Eq. (19):

$$F\_{ACH}(a) = P[ACH(\mathbf{x}\_1, \mathbf{x}\_2) \le a] = P[\mathbf{g}(\mathbf{x}\_1, \mathbf{x}\_2; a) \le 0] \tag{20}$$

Thus, the cumulative probability function can be estimated with the help of the FORM analysis:

$$F\_{ACH}(a) \approx \Phi(-\beta\_a) \tag{21}$$

where Φ(u) is the Laplace function and the reliability index $\beta\_a$ has been determined for the limit state surface $g(x\_1, x\_2) = \mathrm{ACH}(x\_1, x\_2) - a = 0$ defined for a given value of the parameter a.

Following the sensitivity measures presented earlier in the chapter, the probability density function of the random response ACH can be estimated with the help of formula:

$$f\_{ACH}(a) \approx -\,\varphi\left(\beta\_a\right) \frac{d\beta\_a}{da} = \frac{\varphi\left(\beta\_a\right)}{|\nabla G(\mathbf{y}, a)|\_{\text{for } \mathbf{y} = \mathbf{y}^\*}}\tag{22}$$

where φ(u) is the probability density function of the standard Gaussian distribution, $G(\mathbf{y}; a)$ is the limit state function in the space $\mathbf{Y} = \mathbf{Y}(\mathbf{X})$ of normalised random variables and $\mathbf{y}^\*$ is the design point, i.e. the point on the surface $G(\mathbf{y}; a) = 0$ at the shortest distance from the origin of the coordinate system. The value of the probability density function $f\_{ACH}(\mathrm{ACH})$ of the random response function can be obtained by means of FORM sensitivity analysis for consecutive values of the parameter a; for details, see [14].
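The procedure of Eqs. (19)-(22) can be sketched end-to-end. The response function below is an illustrative assumption, not the fitted model of Eq. (18): a lognormal ACH(y₁, y₂) = exp(0.3 y₁ + 0.4 y₂) directly in the standard normal Y space, chosen so that the exact distribution is known for comparison:

```python
import math

norm_cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
norm_pdf = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

# Assumed response in standard normal space: lognormal ACH with median 1.0/h
ACH = lambda y: math.exp(0.3 * y[0] + 0.4 * y[1])
grad_ACH = lambda y: [0.3 * ACH(y), 0.4 * ACH(y)]

def form_beta(a, n_iter=50):
    """HL-RF search for the design point of G(y; a) = ACH(y) - a = 0."""
    y = [0.0, 0.0]
    for _ in range(n_iter):
        g, grad = ACH(y) - a, grad_ACH(y)
        norm2 = sum(c * c for c in grad)
        factor = (sum(c * yi for c, yi in zip(grad, y)) - g) / norm2
        y = [factor * c for c in grad]
    beta = math.sqrt(sum(yi * yi for yi in y))
    # Sign convention: beta > 0 when the origin lies in ACH > a, i.e. F(a) < 0.5
    return (beta if ACH([0.0, 0.0]) > a else -beta), y

a = 0.8                                   # threshold of interest, 1/h
beta_a, y_star = form_beta(a)
F_a = norm_cdf(-beta_a)                   # Eq. (21): cumulative probability
f_a = norm_pdf(beta_a) / math.sqrt(       # Eq. (22): density via sensitivity
    sum(c * c for c in grad_ACH(y_star)))

# Exact lognormal CDF for this assumed response: F(a) = Phi(ln(a) / 0.5)
print(f"F_ACH(0.8): FORM {F_a:.4f} vs exact {norm_cdf(math.log(a) / 0.5):.4f}")
```

For this assumed response the limit state surface is a hyperplane in disguise, so the FORM estimates of both F_ACH and f_ACH reproduce the exact lognormal values; scanning a over a grid of values yields the whole distribution, exactly as described above.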

#### 4.3.1. Wind transformation – climate/local climate/microclimate

The input basic random variable for the infiltration model is the wind speed in the vicinity of the building envelope. Wind speed and direction are usually measured at meteorological stations. The mean value of the 1-h mean wind speed can be evaluated from the mean value of the 10-min mean wind speed obtained from a meteorological station, using the principle that the mean velocity increases by 5% when the averaging period is reduced from 1 h to 10 min. A transformation of these data to the site of the building is often needed, especially for wind speed, which changes drastically with the roughness of the ground surface.

The hourly mean wind speed v is assumed to follow the two-parameter Weibull distribution with probability density function as follows:

$$f(\upsilon; c, \lambda) = \frac{\lambda}{c} \left(\frac{\upsilon}{c}\right)^{\lambda - 1} \exp\left\{-\left(\frac{\upsilon}{c}\right)^{\lambda}\right\} \tag{23}$$

where λ is a shape parameter and c is a scale parameter.
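For illustration, the parameters of Eq. (23) can be estimated from measured hourly speeds by the method of moments. The sketch below is an assumption of this chapter's editor, not the authors' procedure: the coefficient of variation of a Weibull variable depends on $\lambda$ only, so $\lambda$ is found by bisection and $c$ follows from the sample mean.

```python
import math
import random

def weibull_moment_fit(speeds):
    """Fit (c, lam) of the 2p-Weibull model (Eq. (23)) by matching the
    sample mean and variance, using
      mean = c * Gamma(1 + 1/lam)
      var  = c**2 * (Gamma(1 + 2/lam) - Gamma(1 + 1/lam)**2).
    The coefficient of variation depends on lam only, so lam is found
    by bisection; c then follows from the mean."""
    n = len(speeds)
    mean = sum(speeds) / n
    var = sum((s - mean) ** 2 for s in speeds) / (n - 1)
    cv = math.sqrt(var) / mean

    def cv_of(lam):
        g1 = math.gamma(1.0 + 1.0 / lam)
        g2 = math.gamma(1.0 + 2.0 / lam)
        return math.sqrt(g2 - g1 * g1) / g1

    lo, hi = 0.2, 20.0  # cv_of is decreasing in lam on this bracket
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if cv_of(mid) > cv:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    c = mean / math.gamma(1.0 + 1.0 / lam)
    return c, lam
```

On synthetic Weibull data the fit recovers the generating parameters; with real station records the fit would be repeated per wind direction sector, as discussed below.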


In general, the parameters λ and c for wind speed averaged over 1 h should be estimated for different wind direction sectors, which results in quite different Weibull distributions for two reasons: directional variability of the terrain surrounding the house and the predominance of certain wind directions. In particular, most building sites are subject to sheltering effects from topography, trees and buildings. The roughness of the ground surface changes the mean wind speed and its turbulent characteristics and is described by the surface roughness height (aerodynamic roughness length), denoted $z_0$. The roughness height depends on the mean element height of the roughness field. The results of laboratory measurements show that the value of $z_0$ is approximately equal to 1/30 of the height of the roughness elements. Table 1 presents the classification of roughness height for different types of surfaces with reference to the categories of terrain roughness used in the Swedish Code [24].

Transformation of the wind speed between terrains of different surface roughness is possible due to the similarity theory [25], based on the equilibrium boundary layer height, which according to [26] is equal to 1200 m. At this height the wind flows with the gradient velocity $v_g$ along the isobars:

$$v\_g = \frac{u\_\*}{\kappa} \left[ \ln \left( \frac{u\_\*}{f z\_0} \right) - A\_u \right] \tag{24}$$

where $z_0$ is the surface roughness height (m); $\kappa$ is von Karman's constant, $\kappa = 0.4$; $A_u$ is a constant, assumed equal to $-1$; $f$ is the Coriolis parameter (1/s), $f = 1.12 \times 10^{-4}$ for latitudes of the order of 50° [25]; and $v_g$ is the gradient velocity (m/s).

$u_*$ is the friction velocity, depending on the surface shear stress $\tau_0$ as given in Eq. (25):

$$u\_\* = \sqrt{\tau\_0/\rho} \tag{25}$$

where $\tau_0$ is the surface shear stress (kg/ms<sup>2</sup>) and $\rho$ is the air density (kg/m<sup>3</sup>).

The mean velocity profile $u(z)$ near the ground, where $z$ is the height above the ground ($z < 100$ m), can be expressed by the log-law model described by Eq. (26), assuming ideal conditions, i.e. a uniform height of the roughness field and a neutrally stable atmosphere, when the thermal gradient is weak or absent.

$$u(z) = \frac{u\_\*}{\kappa} \ln \left(\frac{z}{z\_0}\right) \tag{26}$$

Eq. (26) is used for wind speeds greater than 10 m/s. For low wind speeds, the influence of the thermal gradient, for both unstable and stable atmospheres, should be taken into account. The modified logarithmic formula can be found in [27].

The 10-min mean wind velocity measured at a meteorological station (usually at the level of $z = 10$ m above the ground) for upwind surface roughness $z_{0m}$ can be transformed to any other location described by upwind surface roughness height $z_{0s}$ through the similarity of the wind speed at the gradient height for all terrain types [27]. Hence, the gradient velocity takes the same value for both locations and can be expressed by Eq. (27):

$$v\_g = \frac{u\_{\ast m}}{\kappa} \left[ \ln \left( \frac{u\_{\ast m}}{f z\_{0m}} \right) - A\_{u} \right] = \frac{u\_{\ast s}}{\kappa} \left[ \ln \left( \frac{u\_{\ast s}}{f z\_{0s}} \right) - A\_{u} \right] \tag{27}$$

where $u_{\ast s}$ is the friction velocity at the building site (m/s) and $u_{\ast m}$ is the friction velocity at the meteorological station (m/s).


| Types of surface roughness | Height $z_0$ (m) | Category |
|---|---|---|
| Calm open sea, water area | 0.0001 | I |
| Sand surface (smooth) | 0.001 | I |
| Snow surface | 0.003 | I |
| Bare soil | 0.005 | I |
| Airport runway area, mown grass | 0.01 | I |
| Farmland with very few buildings, trees, etc. | 0.03 | I |
| Farmland with open appearance | 0.05 | II |
| Farmland with closed appearance | 0.1 | II |
| Many trees and bushes | 0.2 | II |
| Shelter belts | 0.3 | II |
| Suburbs | 0.5 | II |
| City, forest | 1.0 | III |

Table 1. Roughness height for different types and categories of surfaces, acc. to Swedish Code [24].

The friction velocity at the meteorological station $u_{\ast m}$ is computed from Eq. (28), which follows from the log-law (Eq. (26)) written for the meteorological station:

$$u\_{\ast m} = \frac{u\_m(z)\,\kappa}{\ln\left(\frac{z}{z\_{0m}}\right)}\tag{28}$$

where $u_m(z)$ is the 10-min mean wind speed measured at the meteorological station at the height $z$. The mean wind velocity $u_s(z)$ at the site at the height $z$, characterised by upwind surface roughness $z_{0s}$, can be estimated from Eq. (26):

$$u\_s(z) = \frac{u\_{\ast s}}{\kappa} \ln \left( \frac{z}{z\_{0s}} \right) \tag{29}$$

The ratio between the wind velocity at the site and the wind velocity measured at the meteorological station, denoted as η, is a function of the surface roughness $z_{0m}$ and $z_{0s}$:

$$\eta = \frac{u\_s(z, z\_{0s})}{u\_m(z, z\_{0m})} \tag{30}$$

It can be shown that the non-linear relationship $\eta(u_m)$ can be approximated, with errors of the order of 7% or less, by a constant factor η for a specified surface roughness at the building site. As the surface roughness appears in an implicit form in the expression for wind velocity (Eq. (29)), an analytical expression is not available. Instead, values of the factor η have been computed for different combinations of the surface roughness at the site and at the meteorological station (Table 2).
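The chain Eqs. (24)–(30) can be sketched numerically as below. The constants follow the text ($\kappa = 0.4$, $f = 1.12\times10^{-4}$ 1/s, $A_u = -1$); the inner inversion of Eq. (27) for the site friction velocity is done by bisection, and the roughness values and measured speed are illustrative, not the chapter's Table 2 cases.

```python
import math

KAPPA = 0.4        # von Karman's constant
F_COR = 1.12e-4    # Coriolis parameter for latitudes of order 50 deg (1/s)
A_U = -1.0         # constant of Eq. (24), as assumed in the text

def gradient_wind(u_star, z0):
    """Eq. (24): gradient velocity from friction velocity and roughness."""
    return (u_star / KAPPA) * (math.log(u_star / (F_COR * z0)) - A_U)

def site_ratio(u_m, z, z0m, z0s):
    """Ratio eta = u_s(z) / u_m(z) (Eq. (30)) for a site of roughness z0s,
    given the speed u_m measured at height z over roughness z0m."""
    # Eq. (28): friction velocity at the meteorological station
    u_star_m = u_m * KAPPA / math.log(z / z0m)
    v_g = gradient_wind(u_star_m, z0m)
    # Invert Eq. (27) for the site friction velocity by bisection
    # (gradient_wind is increasing in u_star on this bracket)
    lo, hi = 1e-4, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if gradient_wind(mid, z0s) < v_g:
            lo = mid
        else:
            hi = mid
    u_star_s = 0.5 * (lo + hi)
    # Eq. (29): mean speed at the site, then Eq. (30)
    u_s = (u_star_s / KAPPA) * math.log(z / z0s)
    return u_s / u_m

# Illustrative case: airport station (z0 = 0.01 m) to a suburban
# site (z0 = 0.5 m), 5 m/s measured at 10 m height
eta = site_ratio(5.0, 10.0, 0.01, 0.5)
```

For the airport-to-suburb example the computed η comes out below 1, consistent with the observation that a rougher site reduces the near-ground mean wind speed; identical roughness on both sides returns η = 1.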

Simple wind transformation between categories of roughness is possible for $z < 20 z_0$ [28]. Thus, for $z = 10$ m, the transformation is valid for $z_0 < 0.5$ m. In the case of non-homogeneous upwind terrain, a model accounting for multiple roughness changes is required [26].

Values of wind speed measured at the meteorological station can be transformed using Eqs. (27)–(29). The statistical parameters of the wind speed at the site, i.e. the mean value $\mu_{u_s}$ and the standard deviation $\sigma_{u_s}$, can be easily evaluated from the mean value $\mu_{u_m}$ and the standard deviation $\sigma_{u_m}$ of the wind speed measured at the meteorological station, since the wind speed at the site is related to the wind speed measured at the meteorological station by the simple Eq. (30). Hence:


Table 2. Values of the ratio η corresponding to different roughness conditions.

$$\mu\_{u\_s} = \eta\, \mu\_{u\_m} \quad \text{and} \quad \sigma\_{u\_s} = \eta\, \sigma\_{u\_m} \tag{31}$$

Concluding, the shape parameter λ characterising the distribution of wind speed at the site of the building remains the same as the shape parameter λ for the meteorological station. The scale parameter for the distribution of wind speed at the site is equal to ηc (Eq. (23)).

Hence, probability density function of wind speed transformed to building location is given by 2p–Weibull probability model:

$$f(v; \eta c, \lambda) = \frac{\lambda}{\eta c} \left(\frac{v}{\eta c}\right)^{\lambda - 1} \exp\left\{-\left(\frac{v}{\eta c}\right)^{\lambda}\right\} \tag{32}$$

where c is a scale parameter and λ is a shape parameter of the PDF of wind speed measured at the nearest meteorological station.

Modelling of the microclimate around the structure takes into account the influence of the structure's form, orientation and the quality of the surroundings. Usually, the effect of wind pressure on the façade is estimated with the help of tabulated values of wind pressure coefficients. In the analysed case, pressure differences across the six building components of the structure were measured.

#### 4.3.2. Air flow through the building envelope (influence of wind and temperature)

Some building performance aspects depend on the wind-structure interaction. Wind together with temperature difference causes airflow through the building envelope.

The probability distribution model of external temperature depends on the specific geographical region. For temperate regions characterised by four seasons evenly distributed over the year, the normal (Gaussian) model with probability density function $f(T; \mu_T, \sigma_T)$, given by Eq. (33), can be used for the 1-h mean external temperature at "a random time" [29, 30]:

$$f(T; \mu\_T, \sigma\_T) = \frac{1}{\sigma\_T \sqrt{2\pi}} \exp\left\{-\frac{1}{2} \left(\frac{T - \mu\_T}{\sigma\_T}\right)^2\right\} \tag{33}$$

Also the full-scale measurements carried out near Gothenburg indicate [20] that the outdoor temperature can be approximated by the normal distribution.

Climatic data consist of a 40-year record of observations made at the meteorological station at the airport in Säve, near Gothenburg. External temperature at the building site has been assumed to be equal to the temperature measured at the meteorological station, and its randomness is modelled by the normal distribution with a mean value of 11.1°C and a standard deviation of 6.1°C, as shown in Figure 5.

Temperature difference across the building envelope is also described by the normal PDF but shifted towards positive values by the average value of internal temperature.

Figure 5. Normal PDF of ext. temperature T (°C) (left) and PDF of wind speed (m/s) for data coming from the Säve meteorological station (dashed line) and for local wind (solid line).

The Weibull probability density function for the local wind speed has been evaluated on the basis of 10-min mean values of wind speed measured at the meteorological station (see Figure 5). The meteorological station is located at the airport with an assumed surface roughness of 0.01 m. The ratio between the wind velocity at the site and the velocity measured at the meteorological station has been calculated and is equal to 0.66 (Table 2). Probabilistic models of the local wind speed together with the wind speed measured at the meteorological station are given in Table 3.

The probability density function of the random function ACH (Figure 6) has been evaluated using the FORM approach (Eq. (22)). Probabilistic inference leads to the conclusion that the randomness of ACH is best described by the log-normal distribution with a mean value of 0.73 and a standard deviation of 0.38. Mean value and standard deviation are denoted, respectively, by μ and σ. The PDF of the air change rate due to the stack effect ACHs and the PDF of the air change rate due to wind ACHw are also shown in Figure 6. Randomness of the air change rate due to the stack effect can be described by the normal distribution, whereas that due to wind by the Weibull distribution skewed to the right.

Table 3. Stochastic parameters of the wind speed.

Figure 6. The probability density function for ACHs (left), ACHw (middle) and ACH (right) established with the help of FORM analysis.
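The fitted log-normal model can be used directly for exceedance estimates. A sketch with the fitted mean 0.73 and standard deviation 0.38 follows; the 0.5 1/h threshold is an illustrative choice (a common ventilation requirement level), not a value from this chapter.

```python
import math

def lognormal_params(mean, sd):
    """Convert the mean and standard deviation of a log-normal variable
    to its log-space parameters (mu_ln, sigma_ln)."""
    s2 = math.log(1.0 + (sd / mean) ** 2)
    return math.log(mean) - 0.5 * s2, math.sqrt(s2)

def prob_ach_below(a, mean=0.73, sd=0.38):
    """P[ACH <= a] for the log-normal model fitted to the FORM results."""
    mu_ln, sig_ln = lognormal_params(mean, sd)
    z = (math.log(a) - mu_ln) / sig_ln
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# e.g. the probability that the air change rate stays below 0.5 1/h
p_low = prob_ach_below(0.5)
```

The conversion is exact (the round trip from log-space parameters back to the mean reproduces 0.73), so the fitted distribution can be queried at any threshold of interest.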

#### 4.3.3. Sensitivity analysis of the probabilistic variability of air change rate with respect to the variability of wind and temperature

Dependence of ACH on the mean values of the input variables follows the trends shown by the sensitivity indices for individual variables (see Figures 7–9) [31]. For values of ACH above 1.0, where $-\alpha_{\Delta T}$ approaches 0 and $-\alpha_v$ is equal to 1, the changes of the reliability indices depend almost exclusively on the changes of wind speed. Concluding, the wind velocity and the temperature difference contribute significantly to the variability of the air change rate, with sensitivity indices up to 0.8 for ΔT (for lower ACH) and up to nearly 1 for wind speed (for higher ACH, Table 4).

Sensitivity of ACH distribution with respect to mean values and standard deviations of input variables leads to the following conclusions: (1) strong dependence on wind variation, (2) temperature difference variations affect only low values of ACH (up to 0.4), (3) variations of ΔT affect the lowest values of the ACH distribution, and (4) variations of the wind speed are significant for performance studies of ACH within the whole range of wind speed values.

Figure 7. Course of sensitivity index α, for variables ΔT and v.

Figure 8. ACH sensitivity to $\mu_{\Delta T}$ (solid) and $\sigma_{\Delta T}$ (dashed) (left), and ACH sensitivity to $\mu_v$ (solid) and $\sigma_v$ (dashed) (right).

Figure 9. ACH sensitivity to scale (dashed) and shape (solid) parameter of Weibull distribution of wind speed.


Table 4. Some results from Figure 7.


Figure 9 shows the measure of sensitivity of the PDF of ACH with respect to the scale or shape parameter of the Weibull distribution of wind speed for consecutive values of the air change rate. The changes of the shape parameter are the most important for the distribution of the air change rate, especially for threshold values close to the tail of the distribution.

#### 4.4. Probabilistic modelling of airflow-dependent thermal transmittance

For a lightweight timber frame with mineral wool filling, the dependence of the thermal properties of building components on air infiltration is well acknowledged. An example is the so-called dynamic wall [32], specially designed to save energy. In such a wall, the ventilation air passes through the insulation, exchanging heat with the porous material and reducing its conduction heat loss. The air entering the building is preheated by the conduction heat of the insulation (infiltration case), or the air leaving the building heats up the insulating material (exfiltration case) [33]. In the case of the dynamic wall, the thermal transmittance becomes the most interesting parameter that can vary with the climatic data. The dynamic wall as a natural heat exchanger is a feature of high-performance housing. The interaction between thermal transmittance and airflow through the components should be considered while calculating heat loss through a building envelope. A modelling approach based on probabilistic methods is proposed in [34].

The probabilistic model of the dynamic U value takes into account only some of the uncertainties: those related to the properties of the thermal insulation described by the thermal transmittance $U^0$, the climatic load, and the internal load coming from the building installations and the occupants' behaviour (ventilation strategy). The model described by Eq. (34) can be used to estimate a probability distribution of the dynamic U value of a building envelope consisting of $n$ elements with total area $A_{tot}$:

$$U = \frac{1}{A\_{tot}} \sum\_{i}^{n} Nu\_i\, U\_i^0 A\_i \tag{34}$$

The Nusselt number $Nu_i$ is equal to 1 for an element without convection flow. In general, the value of the Nusselt number depends on the velocity of the airflow through the insulation, the direction of the flow and the thickness and density of the insulation.
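A deterministic evaluation of Eq. (34) can be sketched as follows; the envelope elements and the $Nu_i < 1$ value assigned to the dynamic wall under infiltration are hypothetical values for illustration only.

```python
def dynamic_u(elements):
    """Eq. (34): dynamic U value of an envelope as the area-weighted
    sum of Nu_i * U0_i over its elements.
    elements: list of (Nu_i, U0_i, A_i) tuples."""
    a_tot = sum(a for _, _, a in elements)
    return sum(nu * u0 * a for nu, u0, a in elements) / a_tot

# Hypothetical lightweight timber-frame envelope
# (Nu dimensionless, U0 in W/m2K, A in m2)
envelope = [
    (1.0, 1.3, 12.0),    # windows: no convection flow, Nu = 1
    (0.85, 0.15, 90.0),  # dynamic walls under infiltration, Nu < 1 (assumed)
    (1.0, 0.10, 60.0),   # roof, airtight, Nu = 1
]
u_dyn = dynamic_u(envelope)
```

With $Nu_i = 1$ for every element the expression reduces to the standard area-weighted mean U value, which is a useful sanity check on the model.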

The example of approximation of the probability density functions of a dynamic U value has been carried out with the help of FORM techniques. PDF of dynamic U value has been evaluated using FORM sensitivity analysis (see Section 3.1.1). It depends on statistical parameters of the joint distribution of two random variables: thermal transmittance U<sup>0</sup> (varying with the temperature) and wind as well as buoyancy-driven airflow in terms of air change rate ACH (see Figure 10). It has been assumed that stochastic information is limited to the parameters of marginal probability density functions of those variables and the correlation coefficient between them.

Probability density functions of thermal transmittance depend on the direction of the airflow through the envelope as well as on the probability model of the air change rate. Depending on the contribution of the natural forces (wind, temperature) and the mechanical forces, different probability distributions (normal, log-normal, Weibull and gamma) can be fitted to model the randomness of the air change rate [35]. In general, the probability density functions of the dynamic U value are skewed to the left in the case of infiltration and skewed to the right in the case of exfiltration. The specific character of the relationship between Nusselt number and air change rate may explain these results. For the infiltration case, the best fit according to the Kolmogorov-Smirnov test has been obtained for the Weibull distribution, while for the exfiltration case, the three-parameter gamma (or alternatively Gumbel) distribution has been obtained (see Figure 10).

The model could be further developed to include uncertainties due to other mechanisms and factors, e.g. influence of wind or radiation on external heat transfer coefficient or the influence of non-homogeneity of the material characteristics.

Figure 10. Probability density functions of dynamic U value for the cases of infiltration (left) and exfiltration (right) approximated for the building located near to Gothenburg for western winds.

The probabilistic model for the estimation of heat loss accounting for interactions between ventilation and transmission heat losses has been presented in [20, 33]. The model predicts the probability density function of the heat loss distribution over a specified period of time (e.g. a heating season) on the basis of the design parameters of the house, the temperature characteristics of the site as well as the air change rate due to mechanical or natural ventilation. The probability of the heat loss exceeding a certain number of kW can be compared for different design options concerning various ventilation strategies (natural and/or mechanical ventilation) and various transmittance properties (tight envelope versus dynamic wall) of the building envelope. Hence, rational engineering decisions promoting low-energy solutions contributing to climate change mitigation can be taken into account in the design process.
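The design-comparison idea can be sketched with a simple Monte Carlo; everything below (house geometry, input distributions, the two design options and the 4 kW threshold) is hypothetical and not taken from [20, 33].

```python
import math
import random

RHO_CP = 1200.0  # volumetric heat capacity of air, J/(m3*K), approximate

def heat_loss_kw(u_value, area, ach, volume, dT):
    """Combined transmission + ventilation heat loss (kW)."""
    transmission = u_value * area * dT                 # W
    ventilation = RHO_CP * ach * volume / 3600.0 * dT  # W
    return (transmission + ventilation) / 1000.0

def p_exceed(u_value, ach_mean, ach_sd, threshold_kw, n=20000, seed=0):
    """Monte Carlo estimate of P[heat loss > threshold], with assumed
    Gaussian models for the air change rate and the temperature
    difference across the envelope (truncated at zero)."""
    rng = random.Random(seed)
    area, volume = 300.0, 400.0  # hypothetical house
    hits = 0
    for _ in range(n):
        dT = max(rng.gauss(20.0, 6.0), 0.0)
        ach = max(rng.gauss(ach_mean, ach_sd), 0.0)
        if heat_loss_kw(u_value, area, ach, volume, dT) > threshold_kw:
            hits += 1
    return hits / n

# Compare two design options against the same 4 kW threshold
p_tight = p_exceed(u_value=0.20, ach_mean=0.5, ach_sd=0.15, threshold_kw=4.0)
p_leaky = p_exceed(u_value=0.25, ach_mean=0.8, ach_sd=0.25, threshold_kw=4.0)
```

The two exceedance probabilities give the kind of comparable risk measure described above: the tighter, better-insulated option exceeds the threshold far less often than the leakier one.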

### 5. Conclusions


Risk analysis together with appropriate tools can support building design strategies concerning climate change mitigation and adaptation. Lower levels of uncertainty can be handled by means of risk analysis based on the system's risk or reliability estimations. In the case of higher orders of uncertainty [36], other strategies could be developed based on the concept of resilience.

Risk analysis of building performance enables the selection of the best design based on a comparison of the probabilities of undesired performance estimated for alternative design solutions. A systemic approach gives the opportunity to identify important relationships between variables. For example, air infiltration as a result of climate/structure interaction may be a significant variable in the thermal performance of the building envelope. However, in order to handle the whole complexity of the real system, multivariable decision models for different design solutions should be further developed.

The examples of dynamic U values resulting in the different characters of distribution models for the cases of infiltration and exfiltration show that the probabilistic methods and tools can be effectively used to establish the probabilistic characteristics typical for the combination of the important variables influencing climate-structure interaction.

The sensitivity measures are important in the case of risk- or reliability-based design. Sensitivity analysis of the distribution of a response variable (random function) with respect to the basic random variables and their parameters is straightforward within the FORM methodology, whereas it is not easy in the case of Monte Carlo simulation. The results of the case studies show that the air change rate distribution depends significantly on the temperature difference ΔT and the wind velocity. The dependence of the PDF model of ACH on the mean values of the input variables shows similar trends for both studied variables: temperature and wind speed. Sensitivity analysis of the ACH probability distribution model to the standard deviations of the input variables shows a high contribution of wind speed, and an influence of the temperature difference that is limited to low values of ACH (up to 0.4).

Approximate transformation of wind speed data from the meteorological station to the specific location of the analysed building can be carried out by multiplying the wind speed by a constant factor η (with 7% error or less) established for specific ranges of roughness conditions. The transformation of the probabilistic model of the 10-min mean wind speed at the meteorological station to the probabilistic model of the hourly mean speed at the site of the building results in a change of the scale parameter, while the shape parameter remains the same. The form of the PDF of ACH, as well as the reliability index, is sensitive to the value of the shape parameter of the Weibull distribution of wind speed and much less sensitive to the scale parameter. Hence, the transformation of the probabilistic model of wind speed to the local site seems to be robust for the analysed case.

As shown, the sensitivity analysis has helped to understand the relationships between model inputs. It can also help to test the robustness of the model outcome in the presence of uncertainty. Probabilistic risk analysis, together with effective tools for sensitivity analysis, can be used to support design decisions and to develop better models for the evaluation of building performance.

### Author details

Krystyna Pietrzyk<sup>1</sup> \* and Ireneusz Czmoch<sup>2</sup>

\*Address all correspondence to: krystyna.pietrzyk@chalmers.se

1 Department of Architecture and Civil Engineering, Chalmers University of Technology, Gothenburg, Sweden

2 Faculty of Civil Engineering, Warsaw University of Technology, Warsaw, Poland



**Provisional chapter**

### **Machinery Safety Requirements as an Effective Tool for Operational Safety Management**

DOI: 10.5772/intechopen.71152

Hana Pacaiova


Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/intechopen.71152

#### **Abstract**

The free circulation of goods is the major pillar of the single market of the European Union (EU) member states and the main driving force of competitiveness and economic growth in the EU. Legislation defines the basic requirements placed on goods as well as a high level of protection for the legitimate interests of their users. The main changes in health and safety (H&S) management, and in machinery safety in particular, began with the implementation of Framework Directive 89/391/EEC and Directive 89/392/EEC in 1989. Directive 89/391/EEC introduced systematic tools for H&S management: H&S policy, risk management, training requirements, review activities, and employee involvement in H&S procedures. Directive 89/392/EEC, today Directive 2006/42/EC, obliges the machinery producer or its authorized representative to assess risk at each stage of the machinery life cycle and to implement adequate measures. These legislative requirements changed all the procedures and rules previously used in the H&S area.

**Keywords:** safety management, risk assessment, equipment criticality, maintenance, safety integrity

### **1. Introduction**

Legislation defines a framework for an organization's operations, and fulfilling it is part of the organization's business policy. Accepting a customer's requirements also means fulfilling the legal requirements (e.g. laws, standards, regulations) [1, 2].

Risk management is a basic tool for demonstrating that requirements are met in the different areas of organization management (e.g. occupational health and safety, accident prevention, critical infrastructure, transport of dangerous substances, environmental or financial requirements). The management of an organization is often somewhat "lost" when determining economically effective actions that allow it to follow the required legal framework of the business environment or to achieve its own, more ambitious goals. The benefits of the decision-making process, its reliability and efficiency, are determined by the risk assessment analysis and are undermined by improperly applied processes, risk assessment methods and the selection of measures for their management [1, 3, 4].

© 2018 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The ISO 31000:2009 standard allows risk assessment of "unwanted losses" to be understood globally, at a level that integrates all management activities. However, such integrated management requires specific risk management processes and methods, derived from the system/object properties, the goals of the risk assessment and the level of process management in the organization [3].

### **2. Safety legislative requirements**

Risk assessment is a basic requirement of technical system safety and of occupational health and safety (OHS) management.

#### **2.1. Occupational health and safety management**

The legal basis for assessing the level of safety at work is meeting the minimum requirements defined by Council Directive 89/391/EEC ("Framework Directive") on the introduction of measures to encourage improvements in the safety and health of workers at work.

The scope of this Directive is defined for employers and employees in all sectors of the productive and non-productive sphere [1].

The Directive defines general prevention requirements for the employer, who is obliged to apply the general principles of prevention when implementing the measures necessary to ensure safety and health protection at work, including information, training and the organization of work and tools. These principles include [1]:

• Exclusion of the hazard and the possible resulting risk.

• Assessment of the risks that cannot be excluded, especially when selecting or using working tools, materials, substances and methods.

• Implementation of measures to eliminate hazards at the site of their occurrence.

• Prioritization of collective protective measures over individual protective measures.

• Planning and implementation of a prevention policy through safe working tools, technologies and methods, through improvement of working conditions with regard to working-environment factors, and through social measures.

The application of the Directive can be defined as follows:

• It applies to all public and private areas, such as industry, agriculture, commerce, services, education, culture, leisure, and so on.

• It does not cover areas where specific public services and activities are involved, e.g. military and police activities or civil protection activities that may conflict with its requirements.

A number of specific directives (see **Figure 1**) have been adopted as individual directives within the meaning of Article 16 (1) of Directive 89/391/EEC to implement the requirements of this Directive, in the following order (1. = the first individual Council Directive, etc.):


**Figure 1.** Relation between commission directive 89/391/EEC and individual council directives in accordance with Article 16 (1) [1].


**6.** 2004/37/EC of the European Parliament and of the Council of 29th of April 2004 on the protection of workers from the risks related to exposure to carcinogens and mutagens at work.

**7.** 2000/54/EC of the European Parliament and of the Council of 18th of September 2000 on the protection of workers from risks related to exposure to biological hazards at work, which is a consolidated directive of the previous directives.

**8.** 92/57/EEC of 24th of June 1992 on the introduction of minimum safety and health requirements for temporary or mobile construction sites.

**9.** 92/58/EEC of 24th of June 1992 on the minimum requirements for the provision of safety and health signs at work.

**10.** 92/85/EEC of 19th of October 1992 on the introduction of measures to encourage improvements in the safety and health at work of pregnant workers and workers who have recently given birth or are breastfeeding.

**11.** 92/91/EEC of 3rd of November 1992 on the minimum requirements for improving the safety and health protection of workers in the extractive industry.

**12.** 92/104/EEC of 3rd of December 1992 on the minimum requirements for improving the safety and health protection of workers in surface and underground mining.

**13.** 93/103/EC of 23rd of November 1993 concerning the minimum safety and health requirements for work on board fishing vessels.

**14.** 98/24/EC of 7th of April 1998 on the protection of the health and safety of workers from the risks related to chemical factors at work.

**15.** 1999/92/EC of the European Parliament and of the Council of 16th of December 1999 on minimum requirements for improving the safety and health protection of workers potentially at risk from explosive atmospheres.

**16.** 2002/44/EC of the European Parliament and of the Council of 25th of June 2002 on the minimum health and safety requirements regarding the exposure of workers to the risks arising from physical factors (vibration).

**17.** 2003/10/EC of the European Parliament and of the Council of 6th of February 2003 on minimum health and safety requirements regarding the exposure of workers to the risks arising from physical factors (noise).

**18.** Canceled Directive 2004/40/EC of the European Parliament and of the Council of 29th of April 2004 on the minimum health and safety requirements regarding the exposure of workers to the risks arising from physical factors (electromagnetic fields); note: replaced by the 20th Council Directive.

**19.** 2006/25/EC of the European Parliament and of the Council of 5th of April 2006 on the minimum health and safety requirements regarding the exposure of workers to the risks arising from physical factors (artificial optical radiation).

**20.** 2013/35/EU of the European Parliament and of the Council of 26th of June 2013 on the minimum health and safety requirements regarding the exposure of workers to the risks arising from physical factors (electromagnetic fields).

Other significant directives on OHS exist that were not issued as individual directives under Article 16 (1). The Health and Safety Directives can be classified as follows:

### **2.2. Machinery safety**

The Machinery Directive 2006/42/EC on the approximation of the laws of the Member States relating to machinery is intended especially for machinery suppliers. For machinery operators, the rules are laid down in accordance with Directive 2009/104/EC, which replaced Directive 89/655/EEC, the so-called second individual Directive within the meaning of Article 16 (1) of Directive 89/391/EEC, on the minimum safety and health requirements for the use of work equipment by workers at work (**Figure 2**).

The Machinery Directive covers the use of all technical equipment, including mobile and lifting equipment. These devices must be regularly inspected and maintained to ensure their readiness and safety.

**Figure 2.** History of legislation on machinery safety and OHS.

The objective of this Directive is to increase the level of equipment safety, with an emphasis on analysing, already at the design and construction stage of the equipment, the possible risks in its intended operation and maintenance. Emphasis is also placed on creating, already at the project stage, measuring points for monitoring the condition of the equipment by the methods of technical diagnostics. This creates the conditions for preventing breakdowns and possible accidents by determining the real technical state, whose deterioration would have an obvious effect on the safety and health of the operator or the public. An important aspect is a detailed description of the operational requirements in the native language of the country where the technical equipment is operated. Standardized procedures are thus created for informing the operator of the existing hazards and the residual risks arising from the operation of these devices [1, 5].

In accordance with Annex I(1) of the Machinery Directive, the manufacturer of a machine or complex technical device is obliged to identify the hazards that arise during operation of the machine, to estimate the consequences of possible injury or damage to health as well as the probability of their occurrence, and then to determine and assess the risks in order to take measures to minimize them. This is also connected with the obligation to provide the machine user with relevant information on residual risks.

These requirements condition the activities of machinery designers, engineers, manufacturers and users of machinery (including the maintenance requirements of item 1.6). The relevant activities must be conducted in accordance with risk management rules.

#### *2.2.1. Integrated safety principle*

The Integrated Safety Principle is defined in Annex I to the Machinery Directive as follows:

**I.** Devices must be designed and constructed in such a way that they are adapted to their function and can be operated, set and maintained so that the people who use them are not exposed to risks under foreseeable conditions, also taking into account their reasonably foreseeable wrong use (e.g. operator error).

**II.** When selecting the most appropriate solutions, the manufacturer or his authorized representative must apply the following principles in the following order:

	- Eliminate or reduce risks as much as possible,
	- Take the necessary measures to protect against risks,
	- Inform users of the residual risks caused by the various shortcomings in the protective measures taken, notifying whether special training is required and determining any need to provide personal protective equipment.

**III.** When designing and constructing a machine device and when drawing up the instructions for use, the manufacturer or his authorized representative must assume not only the intended use of the machinery but also its reasonably foreseeable misuse.

The objective of the measures taken must be the exclusion of any risk throughout the machine life cycle, including the phases of transport, assembly, disassembly, decommissioning and disposal [5–7].

The instruction manual must give information about the residual risks, that is, it informs the user of the ways in which the machine should not be used.

A **risk** (under the Machinery Directive) is defined as a combination of the probability and the severity of an injury or damage to health that may arise from a hazardous situation.

### **3. Basic principles of a risk assessment**

Risk assessment in the field of OHS usually uses simple methods based on the causal model of the accident (hazard → hazardous situation → initiation → harm → loss). These methods are usually combined according to their use in the individual steps of the risk assessment algorithm (brainstorming, checklist, risk matrix [1–3]).

The basic risk assessment algorithm is a structured, logical sequence of steps (**Figure 3**) [1]. It does not matter whether it concerns a project, process, technology, device or provided service. The analysed system/object must be broken down into the individual elements required to fulfill a defined task (activity, function). Each element is evaluated separately in terms of the possibility of endangering the target role (function). The probability or frequency with which a hazardous situation may occur over the time considered (duration of exposure), together with the assessed severity (consequence) for the target function, forms the basis of the risk assessment. In the financial sector, risk is also stated positively (ISO 31000 likewise accepts the concept of opportunities), while the analysis of technologies and work activities is assessed only in relation to negative consequences [3].

Measures derived from the assessed risks are defined either by legislation (the relevant directives for specific areas and hazards, such as work with display units, noise protection, vibration, etc.) and/or by requirements resulting from the overall culture and maturity of the organization's management, in order to reduce the risk value to the lowest possible level (the risk acceptability level) [1, 8].

Normally, risk assessment for OHS in organizations uses a "risk matrix" (see **Table 1**), which is based on an assessment of the probability and consequence of the analyzed hazard determined from work activities [1, 9]. Emphasis is placed on a simple form of risk evaluation and assessment and on its understanding by all involved parties (employees, third parties, etc.).
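As a minimal sketch (not from the chapter; the class names follow Table 1, but the lookup function and the acceptability threshold are illustrative assumptions), the qualitative lookup behind such a risk matrix can be expressed as:

```python
# Qualitative risk matrix lookup, following the classes of Table 1:
# risk level is read from the consequence row and probability column.
RISK_MATRIX = {
    "insignificant": {"low": "L", "medium": "L", "almost certain": "M"},
    "significant":   {"low": "L", "medium": "M", "almost certain": "H"},
    "catastrophic":  {"low": "M", "medium": "H", "almost certain": "H"},
}

ACCEPTABLE = {"L"}  # assumed acceptability threshold (organization-specific)

def assess(consequence: str, probability: str) -> tuple[str, bool]:
    """Return (risk level, whether risk reduction is needed)."""
    level = RISK_MATRIX[consequence][probability]
    return level, level not in ACCEPTABLE

level, needs_reduction = assess("significant", "almost certain")
print(level, needs_reduction)  # -> H True
```

In practice, the consequence and probability classes, as well as the acceptability threshold, are set by the organization's own risk acceptance criteria.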

**Figure 3.** Basic algorithm for risk assessment and management.


| Consequence \ Probability | Low | Medium | Almost certain |
|---|---|---|---|
| Insignificant | L | L | M |
| Significant | L | M | H |
| Catastrophic | M | H | H |

Risk level: L – low, M – medium, H – high.

**Table 1.** "Risk matrix" example.

It is essential to apply appropriate tools and procedures to meet the legislative requirements. These are evolving and changing in terms of the requirements for risk assessment, risk management and health and safety management at work. They can be broken down as follows:

#### **3.1. Risk assessment**

#### **3.2. Risk management**

#### **3.3. Health and safety management systems**


### **4. Machinery risk assessment standards and tools**

A number of harmonized standards, organized in a hierarchy, are devoted to the safety of machines and machine devices [1] (see **Figure 4**).

**Type A standards**: safety standards, providing basic concepts and principles for design, construction and general considerations that can be applied to all machine devices. Basic safety standards of Type A include for example EN ISO 12100.

**Type B standards**: safety standards that each deal with one safety aspect or one type of safety device applicable to a wide range of machines. They are divided into Type B1 standards, which relate to individual safety aspects (e.g. safety distances, surface temperatures, noise, etc.), and Type B2 standards for the relevant safety devices (e.g. different guards, pressure-sensitive devices, two-hand control devices, locking devices, etc.).

**Type C standards**: safety standards for machines that define detailed safety requirements for a particular machine type or group of machines. They refer to the related Type A and B standards or, if possible, also to other Type C standards, and define safety requirements and identify the risks and priorities that are required. The principle is that Type B and C standards must not repeat or verbally describe the text of the other standards to which they refer.

**Figure 4.** Hierarchy of standards for the machines and equipment safety.

From a legal point of view, a product is deemed safe if it meets the requirements of the relevant regulation or, where no regulation exists for the product, if it meets the requirements of the standards or corresponds to the state of scientific and technical knowledge known at the time of its placing on the market.

In general, the safety assessment rules for health and safety at work are based on the basic principle of meeting the requirements of the technical regulations and standards.

#### **4.1. Requirements of EN ISO 12100**

The requirements of Directive 2006/42/EC are supported by EN ISO 12100, which defines the terminology and methodology used to achieve machine safety. The purpose of this standard is to provide constructors with a basic framework for designing safe machines. The standard developed historically from the basic standards EN 292-1 and EN 292-2, through EN 1050; it has now also replaced the EN ISO 14121-1 standard [1, 10].

The standard is principally structured into:

• Risk assessment, i.e. the basic principle and hazard identification.

• Risk reduction, i.e. the three-step method and measures.

This standard contains a list of the potential hazards to be taken into account when designing a machine (examples of hazards are given in Annex B, which is taken from the canceled standard ISO 14121-1). The hazard analysis must take into account the entire life cycle of the machine, from its design, construction, manufacture, installation, operation and maintenance to its disposal.

#### **4.2. Risk assessment and reduction strategy**

In this safety strategy, the risk assessment and risk management steps are defined as follows [1, 11, 12]:

**Step 1**: determination of machine boundaries, including intended use of the machine and consideration of its foreseeable misuse (e.g. operator errors),

**Step 2**: identification of hazard sources and hazardous situations,

**Step 3**: risk estimation for each hazard and the resulting hazardous situation,

**Step 4**: the risk evaluation and consideration of the necessary reduction by introducing measures,

**Step 5**: elimination of a hazard or risk reduction associated with hazards by applying appropriate measures (application of the so-called three-step risk reduction method).

The first three steps represent a process of **risk analysis**—a combination of specifications for determining the machine boundaries, hazard and hazard situations identification and risk estimation.

The overall procedure, comprising risk analysis and **risk evaluation** (the fourth step), is the process of **risk assessment**.

The **risk management** process is based on a risk assessment and proposes to take appropriate measures to implement and monitor their effectiveness.

*Note*: Risk assessment does not include a step of taking measures, only the steps of consideration—the classification of the estimated size of the risk according to the pre-selected scheme (e.g. the risk matrix) and the decision-making process "what to do with that now" based on the risk acceptance rate.

This risk management process is a basic and unchangeable process, and represents an iterative approach (ALARP—As Low As Reasonably Practicable), which the designer or constructor must observe in designing the machine, but also the user in managing workplace safety [1, 12].

When designing the machine, the designer must consider all anticipated activities (even unexpected ones during normal use of the machine); production must take into account possible risks in machine manufacturing; and the user (or the employer) must ensure the safety of the machine in the working environment.
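As an illustration only, the iterative application of these five steps can be sketched in a few lines of code; the hazard names, risk scores and measures below are hypothetical and not taken from the standard:

```python
# Hypothetical sketch of the iterative 5-step risk assessment loop of EN ISO 12100.
# Hazards, scores and measure effects are illustrative values, not from the standard.

def assess(hazard):
    """Steps 2-3: estimate the risk of one hazard (illustrative scoring)."""
    return hazard["probability"] * hazard["consequence"]

def reduce_risk(hazard):
    """Step 5: apply the next available measure; return False when none remain."""
    if hazard["measures"]:
        hazard["measures"].pop(0)                                  # apply one measure
        hazard["probability"] = max(1, hazard["probability"] - 1)  # assumed effect
        return True
    return False

def risk_assessment(hazards, tolerable=2):
    """Steps 1-5 iterated until every risk is tolerable or no measures remain."""
    for h in hazards:
        while assess(h) > tolerable and reduce_risk(h):
            pass  # re-estimate after each measure (iterative approach)
    return {h["name"]: assess(h) for h in hazards}

hazards = [
    {"name": "entanglement", "probability": 3, "consequence": 2,
     "measures": ["guard", "interlock"]},
    {"name": "noise", "probability": 1, "consequence": 1, "measures": []},
]
print(risk_assessment(hazards))  # residual risk per hazard
```

The sketch captures the iterative character of the process: the risk is re-estimated after every measure, and the loop stops either when the risk is tolerable or when no further measures are available.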

#### *4.2.1. Step 1: determination of machine boundaries*

The purpose of this step is to understand the principles of machine operation and the conditions and way in which it is used. Determining machine boundaries serves to identify sources of hazards and to describe possible hazard scenarios while performing the required activities (e.g. machine operation and maintenance, visits, or third-party activities performed at the working site), or predictable behavior when the machine is used by unskilled workers. It is also appropriate to define so-called functional machine structures for identifying dangerous elements on the machine, for example a control function, safety function, stability function, etc.

Procedures to determine the machine boundaries according to EN ISO 12100 standard [1]:

Usage limits (intended use and foreseeable misuse):

• Operating modes and preventive procedures, including manipulation with the machine when misused,

• The way and the place of machine use (household, industry) by persons, their skills and ability to use the machine,

• Expected level of qualification, experience, education and capabilities of the concerned persons (a maintenance worker, an attendant, an apprentice or the public),

• Other persons who may be at risk from the machine (operators of other machines, administrative staff, visitors).

	- **A.** Layout: range of motion, operating and maintenance area, relationship between the machine and power source.
	- **B.** Time limit: machine lifespan (parts), maintenance intervals.
	- **C.** Other boundaries: properties of the processed material, purity, environment (temperature, external conditions, etc.).

#### *4.2.2. Step 2: identification of hazards*

After determining machine boundaries, the basic step of the risk assessment is to identify the types of hazard situation depending on the hazard properties of the machine, taking into account each stage of the machine's life-cycle [1, 4, 8, 9, 11].

Account is also taken of the behavior of the operator [1, 11, 13], e.g.:

• Loss of control by the operator (e.g. manual or mobile machines),

• Improper behavior of the person in the event of failure of the machine, in the event of a breakdown or accident,

• Behavior resulting from lack of concentration or inattention,

• Behavior resulting from the search for options beyond the prescribed procedure (instruction manual), the "least resistance way,"

• Behavior resulting from the effort to keep the machine running at all costs,

• Behavior of another group of people (children, people with disabilities).

EN ISO 12100 provides a description of 10 types of potential hazards (e.g. mechanical, electrical, thermal, noise, vibration, radiation, ergonomic, etc.), their potential sources and possible consequences. It is based on the requirements of the Machinery Directive.

Similarly, it is possible to proceed with identifying hazards in relation to the work being done at the workplace in order to assign appropriate personal protective equipment.

#### *4.2.3. Step 3: risk estimation*

This is one of the most important risk assessment steps [1, 3, 9, 11]. The level of the risk reflects the severity of a hazardous situation and depends on the following parameters:

	- Exposure of the person to the hazardous situation (exposure time): *E*,
	- Probability (or frequency) of occurrence of a hazardous event: *PH*,
	- Technical and human possibilities to prevent or limit the range of possible damage (measure): *M*,
	- Severity of the possible harm (consequence): *C*.

The level of risk can be calculated as function of these parameters, using this formula:

$$R = f\left(E, P_{H}, M, C\right) \tag{1}$$

The risk assessment uses simple methods based on the expression of probabilities and consequences and on their evaluation using the so-called "risk matrix" (risk rating tool) […].

Usually the level of risk is defined as combination of these parameters:

$$R = P \times C \tag{2}$$

Creating a Risk matrix as a tool for analysis and risk assessment requires establishing criteria for estimating probabilities and consequences (**Tables 2** and **3**).

For the risk assessor, the "common sense" principle must be applied to determine the range of the level of the assessed parameter (e.g. from 1 to 3).

#### *4.2.4. Step 4: risk evaluation*

The risk matrix (see **Table 4**) can be created by "ordinary" multiplication of the individual levels assigned to the probability and consequence. The number of levels of the estimated parameters determines the type of matrix, for example 3 × 3, 4 × 5, 6 × 4, etc. The number of levels depends on the depth to which the risk assessor intends to specify the probability and consequence of a negative effect.

**Table 2.** Description of the probability of occurrence of a hazardous event: *P*.

| Consequence | Level description of the severity/consequence | Level |
|---|---|---|
| Negligible | Small event impact range, minimal or no consequence, near-miss | 1 |
| Serious | Medium range of an event consequence: serious consequence, injury, occupational accident (e.g. from 3 days off work) | 2 |
| Very serious | Large range of an event consequence: very serious consequence, death or mass injury | 3 |

**Table 3.** Description of the consequence *C* or the severity of the hazard situation.

| Probability \ Consequence | Negligible 1 | Serious 2 | Very serious 3 |
|---|---|---|---|
| Low 1 | 1 | 2 | 3 |
| Medium 2 | 2 | 4 | 6 |
| High 3 | 3 | 6 | 9 |

**Table 4.** Risk matrix 3 × 3.

As can be seen from **Table 4**, the estimated risk values range from 1 to 9. In the next step the risk needs to be evaluated (risk evaluation), i.e. the assessor decides which level is high, medium and low in terms of severity (e.g. acceptability) of the risk.

Values 1–2 can be assigned to a low level, meaning small or low risk: L; values 3–4 to a medium level: M; and values 6–9 to a high level: H.

For a better illustration, the risk matrix can be presented more clearly using the so-called "traffic light" principle, **Table 5** [1, 9, 11].

| Probability \ Consequence | Negligible 1 | Serious 2 | Very serious 3 |
|---|---|---|---|
| Low 1 | L(1) | L(2) | M(3) |
| Medium 2 | L(2) | M(4) | **H(6)** |
| High 3 | M(3) | **H(6)** | **H(9)** |

**Table 5.** Risk matrix 3 × 3 "traffic light".

There is no binding rule for determining the level of a risk (e.g. H: high risk, M: medium risk, L: small or low risk), whether in qualitative, quantitative or semi-quantitative form. The applied methodology depends on the area of investigation (e.g. machine failure and its consequences) and on data availability (e.g. monitoring of machine failures) [6, 11].

Important at this stage of the risk assessment is to ensure sufficient information, e.g. historical data about machine failures, near-misses, injuries and accidents, as well as the opinions of experts and practitioners in the investigated area or system.

Risk analysis can be done principally in two ways, applied in specific methods [1, 11]:

• inductive methods,

• deductive methods.

The choice between these methods depends on the experience and knowledge of the team that carries out the assessment. Inductive methods may have the advantage over deductive methods of a more thorough analysis of all possible hazards and hazard situations, but on the other hand they may be more time consuming.
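As a purely illustrative sketch, a 3 × 3 risk matrix and its traffic-light evaluation can be generated directly from the level definitions, with the L/M/H thresholds following the 1–2 / 3–4 / 6–9 assignment given above:

```python
# Build the 3x3 risk matrix R = P x C and classify each cell with the
# "traffic light" levels: 1-2 -> L (low), 3-4 -> M (medium), 6-9 -> H (high).

PROBABILITY = {"Low": 1, "Medium": 2, "High": 3}
CONSEQUENCE = {"Negligible": 1, "Serious": 2, "Very serious": 3}

def risk_level(r: int) -> str:
    if r <= 2:
        return "L"
    if r <= 4:
        return "M"
    return "H"   # products of {1, 2, 3} skip 5, so 6-9 lands here

def traffic_light_matrix():
    """Return {(probability, consequence): 'level(risk)'} for every cell."""
    return {
        (p, c): f"{risk_level(pv * cv)}({pv * cv})"
        for p, pv in PROBABILITY.items()
        for c, cv in CONSEQUENCE.items()
    }

matrix = traffic_light_matrix()
print(matrix[("High", "Very serious")])  # prints "H(9)"
```

Note that because the cell values are products of the levels 1–3, the value 5 never occurs, which is why the medium band can safely end at 4.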

#### *4.2.5. Step 5: risk reducing measures*


On machines, the risk is reduced to a residual level by following the **three-step method**: constructional measures excluding or limiting the risks; installation of the necessary protective systems and additional protective measures for those risks that could not be reduced or eliminated in the first step; and provision of information on residual risks to the machine user (in the instructions for use) [1, 9, 11].


**Residual risk** is the risk that remains after the adoption of the implemented (protective) measures, so that it can be managed. It may be addressed by protective or safety measures taken at the design stage, or by other additional measures taken by the user of the device at the stage of its operation [1].
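Under illustrative assumptions, the ordered application of the three-step method and the resulting residual risk can be sketched as follows; the reduction factors are invented for the example and are not values from the standard:

```python
# Illustrative sketch of the three-step risk reduction method of EN ISO 12100.
# The reduction factors below are assumed for illustration only.

THREE_STEP_METHOD = [
    ("inherently safe design measures", 0.2),    # step 1: exclude/limit risk by design
    ("safeguarding / protective measures", 0.5), # step 2: guards, protective systems
    ("information for use", 0.8),                # step 3: residual-risk information
]

def reduce_to_residual(initial_risk, tolerable):
    """Apply the steps in order until the risk is tolerable; return the residual risk."""
    risk, applied = initial_risk, []
    for measure, factor in THREE_STEP_METHOD:
        if risk <= tolerable:
            break
        risk *= factor
        applied.append(measure)
    return risk, applied

residual, applied = reduce_to_residual(initial_risk=9.0, tolerable=1.0)
print(residual, applied)
```

The design choice mirrors the standard's hierarchy: design measures are always tried first, and information for use is only a last resort for whatever risk the earlier steps could not remove.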

### **5. Process safety**

Machine safety under the Machinery Directive requires an integrated approach to safety. However, the machine is a complex construction that is not only mechanical or electrical; often it is a complex control unit whose reliable function affects not only machine safety but also the whole process [6, 12, 14]. For this reason, safety integration is understood as a requirement not only for the safety of the machine itself but also for the safety of the whole process (IEC 61511). Standard IEC 61511 defines requirements for safety control systems of continuous technological processes, while IEC 61508 defines functional safety requirements for electrical/electronic/programmable electronic safety systems (**Figure 5**) [1, 5, 10].

The objective of ISO 13849-1 (Type B-1) is to provide guidance on the design and construction of control (safety) systems so that the requirement of integrated safety is ensured.

While reducing a risk, the designer (constructor) considers applying safety measures that contain one or more safety features. The parts of machine control systems that provide a safety function are called safety-related parts of the control system, labeled SRP/CS (SRP: safety-related parts; CS: control system). They may consist of hardware and software, and they need not be part of the machine's control system.

#### **5.1. Safety control systems**

Safety control systems are designed to perform a safety function. It is the part of the control system (or the control system itself) that prevents the hazard; it could be said that it creates a barrier between the hazard and the hazardous situation (e.g. shields). For these reasons, the safety system must work reliably under all foreseeable circumstances.

The safety function is implemented by the components of the machine control system in such a way that it maintains the device in (or brings it into) a safe state with respect to the specific risk circumstances.

According to the ISO 13849-1 standard, it is a function of the machine whose failure can lead to an immediate increase of risk.

The main task of the designer of the safety system is to avoid hazardous conditions and to prevent the possibility of an unintentional machine start.

**Figure 5.** Relation between IEC 61508 and IEC 61511: IEC 61508 covers functional safety on the producer and supplier side; IEC 61511 covers safety systems for process management, their integration and utilization (process safety).

The safety feature may have several parts; for a protective cover, for example, it can be defined in three steps.



For safety systems, the terms "demand of the safety function" and "after the demand of the safety function" are used to describe their mode of operation. An example of a demand of the safety function is the interruption of a light curtain or the opening of a cover, where the operator may require machine parts to stop, or to remain without power if they had already been stopped after the safety function was triggered [1, 5, 10].

The safety function is performed by the safety-related parts (elements) of the machine control system. The safety function begins with the sending of a command and ends with a response (the performance of an activity).

The safety system must be designed with a level of integrity that corresponds to the machine's risk level: higher risks require a higher level of integrity so that the safety performance is ensured. The machine safety system can thus be classified by the performance level, i.e. its capability to perform the safety function, or alternatively by the level of functional safety integrity.
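A safety function in this sense can be sketched as a small piece of logic between a demand (e.g. a light curtain interruption) and a defined response (a stop command); the signal and state names below are hypothetical:

```python
# Minimal sketch of a safety function: a demand (light curtain interrupted)
# must end in a defined response (stop, remove power). Names are hypothetical.

from dataclasses import dataclass

@dataclass
class MachineState:
    running: bool = True
    power_on: bool = True

def safety_function(light_curtain_clear: bool, state: MachineState) -> MachineState:
    """On demand, bring the machine to a safe state."""
    if not light_curtain_clear:   # demand of the safety function
        state.running = False     # response: stop hazardous movement
        state.power_on = False    # keep the parts without power after the stop
    return state

state = safety_function(light_curtain_clear=False, state=MachineState())
print(state.running, state.power_on)  # prints: False False
```

The sketch shows the "command in, response out" structure described above; a real SRP/CS would of course have to meet the reliability and integrity requirements of the relevant standard.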

#### **5.2. Functional safety of control systems**

Functional safety is the part of the overall safety that depends on the correct functioning of systems or devices in response to their inputs (stimuli) [1, 10, 15].

Functional safety means that, on the basis of the identification of potential hazards, protective or corrective devices or mechanisms are activated to prevent a dangerous event or to reduce the level of its consequence.

According to the IEC 61508 standard, an example of functional safety is an overheating protection device that uses a thermal sensor in the motor winding to disconnect the voltage before the motor overheats and destruction could occur. By contrast, a special insulation resistant to high temperatures is not an example of functional safety, even though it provides protection against the same hazard as the thermal sensor. Similarly, a fixed door acting as an intrusion barrier does not have the characteristics of a functional safety feature, whereas an interlocked door does.

In order to achieve functional safety, it is necessary to meet the requirements for:

• the safety function (what the function must do), and

• the safety integrity (the likelihood that the function will be performed as required).
Risk assessment is the basis for creating the functional safety requirements: risk analysis provides the basis for the safety function requirements, and risk evaluation forms the basis for specifying the safety integrity, i.e. the level of system properties.

### **5.3. Standards for the functional safety of control systems**

The basic standards for the functional safety of machine control systems are [1, 5, 10, 15]:

**a.** IEC/EN 61508: Functional Safety of Electrical/Electronic/Programmable Electronic Safety Systems (Parts 1, 2 and 3). This standard is general, not limited to the field of machinery, and contains requirements that apply to the design of complex electronic and programmable control systems.

**b.** IEC/EN 62061: Machine Safety—Functional Safety of Safety-Related Electrical/Electronic/Programmable Electronic Control Systems. This is, in fact, a specific implementation of IEC/EN 61508 for machinery. The requirements of this standard can be applied to system-level design of all types of electrical control systems, as well as to not very complex subsystems and devices.

**c.** EN ISO 13849-1: Machine Safety—Safety-Related Parts of Control Systems. This standard provides requirements and guidance for designing, constructing and integrating safety-related parts of control systems, including the software design. For these components there are specific characteristics, which include the performance level required to ensure the safety function.

**d.** IEC 61511: Functional Safety—Safety Control Systems of Continuous Technological Processes. This standard was developed in accordance with the introduction of IEC/EN 61508 for industrial processes.
The application of the standards has its own possibilities and limitations. For example, IEC/EN 62061 and EN ISO 13849-1, which both deal with electrical safety control systems (they are expected to be unified later), use different methods to achieve their results; the user can choose either of them, as both are harmonized under the EU Machinery Directive. The difference between them lies in the technologies covered: IEC/EN 62061 is restricted to electrical systems only, while EN ISO 13849-1 also deals with pneumatic, hydraulic and mechanical systems.

#### **5.4. SIL and IEC/EN 62061**

This standard describes the extent of the risk that needs to be reduced, and also the capability of the control system to reduce this risk, using the Safety Integrity Level (SIL) [5, 15]. In the field of machinery, three levels, from SIL1 to SIL3 (the highest level of integrity), are used.

Since risks may also occur in other industries, such as the petrochemical, energy or rail sectors, or in the process industry (where the specific standard IEC 61511 applies), a further safety integrity level category, SIL4, is also offered.

The SIL category refers to the safety function. The subsystems or elements of the system in which the safety function is implemented must have the appropriate capability to be assigned to a particular SIL category. This capability is called the SIL Claim Limit.


**Table 6.** Relation between SIL and PL.


#### **5.5. PL and EN ISO 13849-1**

This standard does not use SIL; instead, it uses the Performance Level (PL), a level of properties or performance. It defines five levels, where PLa is the lowest and PLe is the highest (**Table 6**) [1, 5, 15].
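The commonly cited correspondence between PL and SIL from EN ISO 13849-1 can be expressed as a simple lookup; treat this as a sketch to be verified against **Table 6** and the standard itself:

```python
# Commonly cited correspondence between Performance Level (PL, EN ISO 13849-1)
# and Safety Integrity Level (SIL, IEC/EN 62061). PLa has no SIL equivalent.

PL_TO_SIL = {
    "a": None,  # no corresponding SIL
    "b": 1,
    "c": 1,
    "d": 2,
    "e": 3,
}

def required_sil(pl: str):
    """Return the SIL corresponding to a given performance level."""
    return PL_TO_SIL[pl.lower()]

print(required_sil("e"))  # prints: 3
```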

The application of adequate methods for determining SIL requirements depends on the organization's risk criteria.

Some standards offer similar methods: e.g. EN 61508 offers three methods (quantitative, risk graph and risk matrix), while IEC 61511 offers additional semi-quantitative methods, the safety layer matrix and Layer of Protection Analysis (LOPA) [5, 10].

### **6. Risk management**

Risk is associated with the occurrence of a random event that can happen with a certain probability and that, when it occurs, may have a negative impact on the organization's business objectives.

In the process of a risk management, it is necessary to accept three principles [1, 3]:


These three principles are often ignored in practice: the company manager expects the risk assessment to produce a clear result, "what is wrong and how to fix it," or "I did everything I could and we have no risks at all."

Risk management, particularly in terms of social acceptability, is expressed through the ALARP principle (As Low As Reasonably Practicable). Its priority is to reduce the level of a risk "to such an extent as is reasonably practicable," while working with the level of risk between an unacceptable and a fully acceptable (tolerable) level.
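The ALARP principle can be sketched as a classification into three bands, with the boundary values below chosen purely for illustration:

```python
# Illustrative ALARP banding: risk above the upper bound is unacceptable,
# below the lower bound it is broadly acceptable, and in between it must be
# reduced "as low as reasonably practicable". Bounds are assumed values.

UNACCEPTABLE_ABOVE = 6  # illustrative upper tolerability bound
ACCEPTABLE_BELOW = 3    # illustrative broadly acceptable bound

def alarp_band(risk: float) -> str:
    if risk >= UNACCEPTABLE_ABOVE:
        return "unacceptable: must be reduced"
    if risk < ACCEPTABLE_BELOW:
        return "broadly acceptable"
    return "tolerable (ALARP): reduce if reasonably practicable"

print(alarp_band(4))
```

The middle band is where the cost-benefit reasoning of ALARP takes place: further reduction is required only as far as it remains reasonably practicable.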

**Acceptable risk** represents a risk reduced to a level that can be tolerated by the organization; at a minimum it must respect the requirements of binding regulations and the organization's own policy [1, 3, 11].

ALARP was defined by the Health and Safety Executive (HSE) in Great Britain. The goal is to manage the residual risk to the extent that is practical (bearable) for the organization. In Great Britain and New Zealand this model is also described as SFAIRP (So Far As Is Reasonably Practicable), and in the USA as ALARA (As Low As Reasonably Achievable) [1, 12].

When implementing the ISO 31000 standard, which also considers the so-called "positive risk" (an assessment of opportunities), it would be possible to apply a new approach for assessing effectiveness, AHARP (As High As Reasonably Practicable) [1, 12].

The OHS management system is part of the organization's overall management system; it creates and implements the OHS concept and manages health and safety risks. It thus represents a set of mutually interrelated elements used to establish a policy and achieve the set goals.

The OHSAS 18001 standard required the most time to obtain "standard" status: since the safety requirements differed in each country and were strongly tied to the country's legislation, the transition to a standard was relatively slow.

Following the acceptance of the British BS 8800 standard, the OHSAS 18001 standard was first issued in 1999 and first revised in 2007. In 2017, a transition to the HLS (High Level Structure) is expected, and the standard will also receive a new designation as the ISO 45001 standard [1, 3].

Understanding the connection between machinery safety and OHS management is important for an organization's maturity and competitiveness. Management system requirements derive from a description of the context in which the organization operates (external and internal relationships). This context is the basis for the risk assessment process arising from the organization's business activities and is defined by the Risk-Based Thinking (RBT) principles [1, 4]. The newly prepared ISO 45001:2017 standard requires a proactive approach in risk management processes. RBT distinguishes between the terms risk and opportunity and is also linked with the principles of ISO 31000. This brings a natural pressure to adopt methods and tools for risk assessment in relation to the organization's objectives at all management levels.

### **Acknowledgements**

This work was developed within the project APVV-15-0351 "Development and Application of a Risk Management Model in the Setting of Technological Systems in Compliance with the Industry 4.0 Strategy" and the 7FP project "iNTegRisk," no. CP-IP213345-2, co-financed by APVV under contract no. DO7RP-0019-08.

### **Author details**

#### Hana Pacaiova


Address all correspondence to: hana.pacaiova@tuke.sk

Safety and Quality Production Department, Faculty of Mechanical Engineering, Technical University of Kosice, Slovakia

### **References**


[12] Talbot J. ALARP (As Low As Reasonably Practicable). Available from: http://www.jakeman.com.au/media/alarp-as-low-as-reasonably-practicable [Accessed: 24-09-2016]

[13] Pacaiova H. Human reliability in maintenance task. Frontiers of Mechanical Engineering in China; pp. 184-187. DOI: 10.1007/s11465-010-0002-4

[14] Maletic D, Maletic M, Al-Najjar B, Gomiscek B. The role of maintenance in improving company's competitiveness and profitability: A case study in a textile company. Journal of Manufacturing Technology Management; pp. 441-452. DOI: 10.1108/JMTM-04-2013-0033

[15] Kingsley J. Safety Integrity Level (SIL)—Explained Simply. 2017. Available from: https://www.linkedin.com/pulse/safety-integrity-level-sil-explained-simply-john-kingsley [Accessed: 20-07-2017]

**Provisional chapter**

### **Integrated Risk Assessment of Safety, Security, and Safeguards**

DOI: 10.5772/intechopen.71522

#### Mitsutoshi Suzuki

Additional information is available at the end of the chapter

#### **Abstract**

The peaceful use of nuclear energy has been pursued for more than half a century, and even after the catastrophic disaster at the Fukushima nuclear power plant in 2011, a new market in East Asia has been growing from the viewpoint of stable supply of nuclear energy. Countermeasures against malicious aircraft attacks have been introduced worldwide after the 9/11 terrorist attack in 2001 as synergies between safety and security. Although safeguards and security communities have different histories and technical aspects compared to safety, not only the mitigation plans as emergency preparedness but also a risk assessment as a supplement to the current requirements could be developed to promote synergism between safety, security, and safeguards (3S). The optimal installment of 3S countermeasures could be encouraged by a risk assessment to enhance reliability, robustness, and transparency of those facilities. One of the synergies of the integrated 3S risk assessment is a 3S by Design (3SBD) approach for new nuclear facilities. An introduction of 3SBD into the conceptual design stage increases regulatory effectiveness as well as operational efficiency and also reduces expensive and time-consuming retrofitting.

**Keywords:** probabilistic risk assessment (PRA), safety, security, and safeguards (3S), integrated risk assessment, 3S by design (3SBD), proliferation risk, sabotage risk

### **1. Introduction**

After the Fukushima accident, the Nuclear Regulation Authority (NRA) in Japan developed a new safety standard, and soon after, many utility companies submitted revised license applications to restart their nuclear power plants as soon as possible. Malicious aircraft attacks are considered in the standard, and mitigation plans are required to minimize possible consequences as synergies between safety and security in [1]. This time-consuming installation of 3S countermeasures could be encouraged by a risk assessment to enhance reliability, robustness, and transparency of the facilities as in [2].

© 2018 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The 3S initiative launched in 2008 emphasized three (safety, security, and safeguards) of the 19 infrastructure elements of the IAEA Milestones in the Development of a National Infrastructure for Nuclear Power, as in [3]. One of the most apparent synergies is the adoption of the 3SBD approach for new nuclear facilities. The benefit of the 3SBD concept was pointed out recently in [4], and the Safeguards by Design (SBD) approach has been discussed extensively. International safeguards have been implemented by the IAEA, which has clear responsibility for the verification of a state's compliance with the NPT treaties and agreements, whereas its role in safety and security is limited to regulatory standardization. This means that there is no obligatory authority governing safety and security regulations worldwide. Therefore, to achieve the 3SBD synergy, realizing the SBD approach between the IAEA and member states is challenging, as it requires an international consensus, as in [5].

In addition, regarding the institutional and technical issues for national and international regulators, the risk notion should be harmonized so that it can be shared among the 3S authorities concerned. In safety, the frequency of an accident is estimated from past experience data, the accident sequence is analyzed with ETs/FTs, and the probabilistic assessment methodologies have been developed through long historical trials and discussions. Because of the recent concern about nuclear security, a similar probabilistic assessment has been extended for use in the guideline against sabotage in nuclear security in [6]. The conventional vulnerability assessment in physical security has been well developed on a deterministic and prescriptive basis; on the

**Figure 1.** Integrated regulation by risk-informed and performance-based approach.

other hand, safeguards effectiveness is involved in proliferation resistance (PR) evaluation as extrinsic barriers, and a diversion pathway analysis is used to investigate the proliferation risk of nuclear fuel cycles. Initial efforts toward harmonization between reliability and safety and between PR and physical protection (PP) have been initiated under the Generation IV (GEN IV) international framework in [7]. In this regard, integrated measures and methodologies could be developed to evaluate an optimized balance between the 3S performances quantitatively, as shown in **Figure 1**.

### **2. Mathematical model for 3S risk assessment**


A major difficulty encountered in applying probabilistic methods to safeguards is how to determine the initiation of diversion and misuse. In safeguards, the diversion of nuclear material and the misuse of technology are induced by the motivation of states and intentional acts of a facility operator, so estimating from historical incidences and predicting intentional human acts are generally very difficult. In comparison, in security, the event sequence is analyzed probabilistically on the basis of the plant layout, system design, and structural robustness. Candidate tools for proliferation assessment were broadly investigated in [8]. As well-known international PR methodologies, the IAEA-led Innovative Nuclear Reactors and Fuel Cycles (INPRO) program has developed a checklist approach, while the GEN IV Proliferation Resistance and Physical Protection Working Group (PR&PP WG) has developed a risk-informed methodology in a qualitative and quantitative manner in [9]. To assess 3S risks, several mathematical tools are categorized by incident frequency and the governing law resulting in the incidence, as shown in **Figure 2**.

**Figure 2.** Mathematical models and assessment methodologies applied to safety, security, and safeguards (3S). The governing law and incidence frequency are selected to classify the inherent nature among the 3S incidences. The mapping of individual 3S region is drawn heuristically.

#### **2.1. Probabilistic risk model in safeguards**

Although the mathematical formalization of international safeguards has been developed over several decades as in [10], the discussion of adopting a probabilistic methodology to address nonproliferation issues was done from a different perspective in [11]. In safeguards, the estimation of an intentional act leading to the diversion and misuse of nuclear material is generally very difficult. In addition to the incidence probability, there is another uncertainty related to measurement error in material accounting. The significant quantity (SQ) and the timeliness goal underlie the basis for nuclear material accountancy (NMA). Based on prescriptive and deterministic logic, the uncertainty of NMA should be controlled under this limit as a first priority in safeguards. The IAEA determined the threshold value for nuclear material losses for each type of facility and process. However, as the amount of nuclear material increases in large-scale facilities, the uncertainty due to measurement error becomes large and is likely to exceed the limit. Because it is important to control the measurement error within the absolute threshold of NMA, a probability distribution of the measurement error of NMA has to be considered in conjunction with the incidence probability, as shown in **Figure 3**.
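The scale effect described above can be illustrated with a small numerical sketch. The relative measurement uncertainties below are illustrative assumptions rather than plant data, and the 3.3σ detection criterion is a commonly used safeguards convention assumed here; only the 8 kg-Pu significant quantity follows the text.

```python
import math

SQ_PU_KG = 8.0  # one significant quantity of plutonium (kg)

def muf_sigma(throughput_kg: float, rel_sigma_in: float, rel_sigma_out: float) -> float:
    """Absolute 1-sigma uncertainty of the material balance (MUF), combining
    independent input and output measurement errors in quadrature."""
    return math.hypot(rel_sigma_in * throughput_kg, rel_sigma_out * throughput_kg)

# Illustrative: 0.3% input and 0.2% output relative uncertainty (assumed values)
for pu_throughput in (400.0, 8000.0):  # kg-Pu per balance period
    sigma = muf_sigma(pu_throughput, 0.003, 0.002)
    # Assumed criterion: 3.3*sigma should stay below 1 SQ for reliable detection
    verdict = "<" if 3.3 * sigma < SQ_PU_KG else ">="
    print(f"{pu_throughput:7.0f} kg-Pu: sigma_MUF = {sigma:6.2f} kg, "
          f"3.3*sigma {verdict} {SQ_PU_KG} kg (1 SQ)")
```

The sketch reproduces the qualitative point of the paragraph: at a small throughput the detectable amount stays below 1 SQ, while at large-scale throughput the measurement uncertainty alone exceeds it.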

The following two-dimensional probability formalization is proposed as Eq. (1) in [12, 13]:

$$R = P \times C = P(t, m) \times C = P(t) \times P(m \mid t) \times C \tag{1}$$

In Eq. (1), the measurement error probability, *P*(*m*), is related to the measurement uncertainty in material accounting. It is expressed as the probability density function along the measurement error axis in **Figure 3**, and its accumulated distribution function gives the detection probability. On the other hand, the incidence probability is defined as a Poisson density function under the assumption of a random occurrence of diversion incidence. It should be noted that the two probabilities are not independent and would be closely correlated with each other because of the inherent nature of intentional acts.

**Figure 3.** Two-dimensional probability for safeguards. The probability distribution composed of two random variables, the incidence time and the measurement error, is a characteristic feature of the proliferation risk.
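The decomposition in Eq. (1) can be sketched numerically, assuming a Poisson incidence process and a Gaussian measurement error; all parameter values below are illustrative, not taken from the chapter.

```python
import math

def poisson_incidence_prob(rate_per_year: float, t_years: float) -> float:
    """P(t): probability of at least one diversion incidence in [0, t],
    assuming random (Poisson) occurrence as in Eq. (1)."""
    return 1.0 - math.exp(-rate_per_year * t_years)

def nondetection_prob(diverted_kg: float, sigma_kg: float, threshold_kg: float) -> float:
    """P(m|t): probability that the measured amount stays below the NMA alarm
    threshold, with a Gaussian measurement error of width sigma_kg."""
    z = (threshold_kg - diverted_kg) / (sigma_kg * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z))  # Gaussian CDF evaluated at the threshold

# Illustrative numbers (assumed): rare incidence, 1 SQ diverted, 2 kg measurement noise
P_t = poisson_incidence_prob(rate_per_year=0.01, t_years=1.0)
P_m_given_t = nondetection_prob(diverted_kg=8.0, sigma_kg=2.0, threshold_kg=6.6)
C = 1.0  # normalized consequence
risk = P_t * P_m_given_t * C  # R = P(t) x P(m|t) x C, as in Eq. (1)
print(f"P(t)={P_t:.4f}, P(m|t)={P_m_given_t:.4f}, R={risk:.6f}")
```

The sketch treats the two factors as independent for simplicity; as the text notes, in reality they would be correlated because the act is intentional.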

#### **2.2. Probabilistic risk model in security**


In safety, probabilistic safety analysis (PSA) has been developed through long historical trials and discussions. The approach is to estimate the frequencies of accidents and failures from historical data and to analyze the accident sequence with ETs/FTs based on these parameters. Because of the recent concern about nuclear security, a similar probabilistic assessment was extended for use in developing guidelines for the protection of nuclear power plants against sabotage in [14]. Although the conventional vulnerability assessment in physical security has been well developed on a deterministic and prescriptive basis, the inherent difficulty of determining the frequency of terrorist attacks by malicious acts is handled by a conservative estimate. The risk formalization in security is expressed as in Eq. (2) in [15]:

$$R = P_A \times (1 - P_E) \times C = P_A \times (1 - P_I \times P_N) \times C \tag{2}$$

where *P<sub>A</sub>* is the incidence probability, *P<sub>E</sub>* the performance probability, *P<sub>I</sub>* the interruption probability, *P<sub>N</sub>* the neutralization probability, and *C* the consequence. Because of the difficulty of specifying the incidence probability, the security system is usually evaluated by the performance probability, for which a timeline analysis is performed to identify the interruption probability, and the security countermeasures (fences, sensors, cameras, and so on) are designed and installed in actual nuclear facilities. The neutralization probability is the unique feature of the security risk assessment and is determined by the performance of the response force. In addition, the deterrence effect can be estimated with a Bayesian method utilizing historical data, with a game-theoretic method assuming rational behavior and a payoff matrix, and with other methods, and the incidence probability can be evaluated qualitatively as in the decision process in [16].
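A minimal sketch of Eq. (2) follows; the probability values are assumptions chosen for illustration, not from the chapter.

```python
def security_risk(p_attack: float, p_interrupt: float, p_neutralize: float,
                  consequence: float) -> float:
    """Security risk of Eq. (2): R = P_A * (1 - P_I * P_N) * C,
    where the system performance probability is P_E = P_I * P_N."""
    p_effectiveness = p_interrupt * p_neutralize
    return p_attack * (1.0 - p_effectiveness) * consequence

# Illustrative values (assumed): attack taken as certain (conditional risk),
# 90% interruption by the detection/delay design, 80% neutralization by response force
r = security_risk(p_attack=1.0, p_interrupt=0.9, p_neutralize=0.8, consequence=100.0)
print(f"conditional security risk = {r:.1f}")  # 1.0 * (1 - 0.72) * 100 = 28.0
```

Setting *P<sub>A</sub>* = 1 mirrors the practice described above of evaluating the system by its performance probability alone when the incidence probability cannot be specified.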

In the security risk study especially, sabotage risk is defined as the product of the frequency of a sabotage incidence and the magnitude of its consequences. Although it is difficult to estimate the initiation frequency, the risk can be described using the conditional probability and the magnitude of consequence as follows in [17]:

$$R_j = \pi_j p_j c \tag{3}$$

*R<sub>j</sub>* = the risk due to sequence *j* leading to consequence, *π<sub>j</sub>* = the probability that an adversary will attempt to complete sequence *j*, *p<sub>j</sub>* = the conditional probability of success of causing consequence given an attempt of sequence *j*, *c* = the magnitude of consequence.

For certain sabotage attacks, it is assumed that it is possible to identify well-defined sabotage sequences leading to consequence. In addition, a sequence is a cut set of a sabotage fault tree equation and does not necessarily imply a particular time order because a saboteur might attack the fault tree components in an intentional way.

Considering all sequence levels, the total risk, *R*, is expressed as follows:

$$R = \sum_{j=1}^{\mu} \pi_j p_j c = c \sum_{j=1}^{\mu} \pi_j P_{DC_j} \prod_{k=1}^{\eta_j} q_{jk} \tag{4}$$

*μ* = the number of sequences leading to consequence, *P<sub>DCj</sub>* = the probability of release reduction by the damage control measures, *q<sub>jk</sub>* = the probability of completion of the *k*th event in sequence *j*, *η<sub>j</sub>* = the number of discrete events in sequence *j*.

Three categories of measures provide protection against radiological sabotage: physical protection, damage control, and plant layout design. The physical protection measures have been regulated; however, the other two measures have not been fully discussed. To investigate the effect of damage control and plant layout design on sabotage protection, the probability of release reduction by damage control measures, *P<sub>DCj</sub>*, and the probability of completion of an event sequence, *q<sub>jk</sub>*, are important.
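Eqs. (3) and (4) can be sketched directly. The cut sets below are hypothetical: the attempt probabilities, damage-control factors, and per-event completion probabilities are invented for illustration.

```python
from math import prod

def sequence_risk(pi_j: float, p_dc_j: float, q_jk: list, c: float) -> float:
    """Risk of one sabotage sequence per Eq. (4): pi_j * P_DCj * prod_k(q_jk) * c."""
    return pi_j * p_dc_j * prod(q_jk) * c

def total_risk(sequences, c: float) -> float:
    """Eq. (4): sum of sequence risks over all cut sets leading to the consequence."""
    return sum(sequence_risk(pi, p_dc, q, c) for pi, p_dc, q in sequences)

# Illustrative cut sets (assumed): (attempt probability pi_j,
# damage-control release factor P_DCj, per-event completion probabilities q_jk)
sequences = [
    (0.5, 0.1, [0.9, 0.8]),        # e.g., breach a barrier, then disable a pump
    (0.3, 0.05, [0.7, 0.6, 0.9]),  # a longer three-event sequence
]
print(f"total sabotage risk = {total_risk(sequences, c=1.0):.4f}")
```

Note that, as stated above, a sequence is a cut set and the product over *q<sub>jk</sub>* does not imply a particular time order of the events.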

### **3. Case study of individual risk analysis**

#### **3.1. Probabilistic risk analysis (PRA) in safety**

The terminology PSA has been used in the nuclear engineering field in Japan to introduce probabilistic risk analysis (PRA), which has been fully developed in the safety assessment of nuclear power plants worldwide. Owing to this unique history of the introduction of PSA, the case study of PSA in safety is presented as a PRA study.

In the advanced fuel cycle project, the Japan Atomic Energy Agency (JAEA) studied fast breeder reactor (FBR), advanced aqueous reprocessing, and fuel fabrication technologies. The details of these technologies have not been fully developed yet; however, a conceptual design of the advanced aqueous reprocessing process is used for the safety risk study, as shown in **Figure 4**, as in [12, 13]. The diagram includes some challenging technologies and revolutionary instruments, and further development is needed to proceed to the engineering phase. After the input plutonium amount is measured at the input accountability tank, most of the uranium is removed at the crystallization process (no. 9), and the remaining solution is adjusted so that it can be treated correctly at the extraction process (nos. 13 and 14). Fission products and minor actinides are extracted at the extraction process. After the extraction process, the separated plutonium is always accompanied by uranium, and plutonium does not exist alone anywhere in the process. Through the evaporator, the output plutonium is measured at the output accountability tank, and the mixture of plutonium and uranium is stored as reprocessed product. Several innovative technologies have been investigated in the FBR project. In this feasibility study, as a typical FBR reprocessing process, the throughput is assumed to be 200 ton-HM/year of spent fuel from FBRs, corresponding to approximately 18 ton-Pu/year, and the process is operated for 200 days per year.

Using PSA methodology, the risk for radioactive material release and damage to public health is estimated based on failure data of instruments and associated reference values in [6]. Severe


**Figure 4.** Diagram of advanced aqueous reprocessing process. Spent fuels discharged from fast breeder reactor are stripped to remove the uranium contents at the crystallization, no. 9, and extracted to remove the minor actinide contents using centrifugal extraction machines.

accident that would cause a release of radioactive material is evaluated with expert judgment. Under an assumption of multiple instrument failures and human error, some important scenarios are selected as follows:

• … technique into the minor actinide (MA) tank. At that time, MA is leaked from the outlet piping of the MA tank, and then the leaked waste solution is boiled.

• *Scenario 3*: The coolant system breaks down due to a pump failure, and then the self-heat-generation source tanks, the HALW and MA tanks, boil. Radioactive solution materials are evaporated. After choking and destroying the high-efficiency particulate air (HEPA) filter, those effluents are discharged to the outside.

• *Scenario 4*: Organic solvent is very volatile and is leaked from the outlet piping of the extraction process, and then a fire accident is induced.

After specifying important factors and making an ET of success and/or failure branches, an accident sequence is decided. Moreover, the incident frequency is estimated with a FT and instrument data, and the total probability for the sequence is calculated. The assessment result is shown in **Figure 5**. Even in the worst-case scenario 2, the estimated risk is still two orders of magnitude lower than the safety target, which is a design goal in the FaCT project.

#### **3.2. PRA in safeguards**

As the case study of risk analysis in safeguards, the proliferation risk assessment applied to a reprocessing process is described in this section. A general description of the large aqueous PUREX process model is given in **Table 1**. These process parameters are assumed to represent the characteristics of a large commercial PUREX plant and do not contain any proprietary information or sensitive technologies. They were simply chosen to perform a preliminary investigation in this study while maintaining the characteristics of a large commercial reprocessing plant.

**Figure 5.** Probabilistic safety assessment for possible accident scenarios. The safety target is decided to be 1×10−6 (death/man/facility/year).

**Table 1.** Typical parameters in a large reprocessing plant.

The schematics of the process components are shown in **Figure 6**. They comprise the adjusting and input accountability, extraction and partition, and plutonium purification and concentration processes. The annual throughput is 800 ton-HM/year, with 200 working days per year. In steady-state operation, the daily throughput is about 40 kg-Pu, and the plutonium inventory of the entire process is around 400 kg-Pu. It should be noted again that the model lacks proprietary information: it is constructed from a general PUREX specification and does not include design or performance information on the dissolver, the extraction process, or other sensitive technologies.
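As a quick consistency check on the throughput figures above (a sketch using only numbers quoted in this chapter, with 1 SQ = 8 kg-Pu as noted in the solution monitoring discussion):

```python
annual_hm_tons = 800.0   # ton-HM/year, from the text
working_days = 200       # days/year, from the text
daily_pu_kg = 40.0       # kg-Pu/day, from the text
SQ_PU_KG = 8.0           # one significant quantity of plutonium (kg)

annual_pu_kg = daily_pu_kg * working_days               # implied annual Pu throughput
pu_fraction = annual_pu_kg / (annual_hm_tons * 1000.0)  # Pu content of the spent fuel
print(f"annual Pu throughput: {annual_pu_kg:.0f} kg (Pu fraction = {pu_fraction:.1%})")
print(f"= {annual_pu_kg / SQ_PU_KG:.0f} significant quantities per year")
```

The figures are internally consistent: 40 kg-Pu/day over 200 days gives 8000 kg-Pu/year, about 1% of the 800 ton-HM throughput, which corresponds to 1000 significant quantities handled per year.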

**Figure 6.** Schematics of PUREX reprocessing process.

To investigate the adaptability of the risk notion to a PUREX reprocessing process, a PRA is carried out using the Markov model developed by the PR&PP WG. The PR&PP WG has also discussed the safeguardability of an installation, defined as the degree to which a system can be placed effectively and efficiently under international safeguards, and attributes have been defined for its characterization. One of the evaluation methodologies has clearly noted the notion of proliferation risk assuming a Markov process model, and the proliferation risk analysis directly indicates the vulnerable diversion path instead of relying on expert elicitation.

As shown in **Figure 7**, the Markov process model is applied to the PUREX process to perform proliferation risk analysis, and in addition to the extrinsic effects by safeguards

**Figure 7.** Markov model for proliferation risk analysis. Both extrinsic effects and intrinsic barriers are considered in the model. The extrinsic effect is modeled for safeguards implementation and the intrinsic barriers are for material difficulty due to radiation exposure and high temperature.

implementation, characterized by "*T<sub>D</sub>*" and "*C<sub>D</sub>*," the intrinsic effects of radiation strength in materials, characterized by "*a*," are considered as technical difficulty. Dip tubes for the solution monitoring system are installed in the 82 tanks of the Rokkasho Reprocessing Plant (RRP) and can be used for in-line level and density monitoring. While solution monitoring can generate real-time signals of the solution, its sensitivity depends not only on the performance of the pattern recognition algorithm but also on meaningless background caused by sampling, homogenization, evaporation, and so on. The solution monitoring sensitivity is considered according to the process steps, under the assumption that the plutonium concentration determines the capability of detecting the solution level change corresponding to 1 SQ (= 8 kg-Pu). Both the extrinsic effects of safeguards implementation and the intrinsic barriers of radiation from residuals are considered to evaluate the proliferation risk in the reprocessing process. As shown in **Figure 8**, the detection probability at process number 1, the spent fuel pond, is the largest because of the residence time. The success probability increases clearly after process number 34, downstream of the second purification process, and especially after number 40, the evaporator, because of the high concentration of plutonium as well as the low radioactivity in [18].
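The Markov pathway analysis can be sketched as a simple absorbing-chain computation. The per-stage detection and failure probabilities below are invented for illustration and stand in for the extrinsic safeguards parameters and the intrinsic radiation barrier of the model.

```python
def diversion_outcome(stage_params):
    """Absorbing Markov-chain sketch of a diversion pathway: at each process
    stage the attempt is either detected (extrinsic safeguards, p_det), fails
    due to intrinsic barriers such as radiation (p_fail), or proceeds to the
    next stage. Reaching the end of the pathway counts as 'success'.
    stage_params: list of (p_det, p_fail) per stage; all values are assumptions."""
    p_reach = 1.0  # probability the pathway is still alive at the current stage
    p_detect = p_failure = 0.0
    for p_det, p_fail in stage_params:
        p_detect += p_reach * p_det
        p_failure += p_reach * p_fail
        p_reach *= (1.0 - p_det - p_fail)
    return {"detection": p_detect, "failure": p_failure, "success": p_reach}

# Illustrative pathway: early stages have strong radiation barriers (high p_fail),
# downstream stages with purified plutonium have weak intrinsic barriers
pathway = [(0.30, 0.40), (0.25, 0.20), (0.20, 0.05), (0.15, 0.01)]
out = diversion_outcome(pathway)
print({k: round(v, 3) for k, v in out.items()})
```

The three outcomes always sum to one, and with these assumed parameters the success probability grows as the barriers weaken downstream, mirroring the trend described for **Figure 8**.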


Although the proliferation is caused by intentional acts, it is assumed that a Poisson process, which is based on random incidence and is a theoretical background of the Markov model, could be applied to the risk analysis in the reprocessing. It is not yet applied to classical safeguards because PRA is not yet a quantitative safeguards component. In addition, measurement error probability in nuclear material accounting should be considered simultaneously with the incident probability that is a key component in the Markov model.

**Figure 8.** According to the individual process numbers, the detection, failure, and success probabilities are shown. A typical PUREX process is assumed to be composed of the 43 different processes: pool, dissolution, extraction, purification, evaporation, and storage.
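As a minimal sketch of how such a Markov treatment works (the per-step probabilities below are hypothetical placeholders, not the chapter's actual transition rates), the competition between detection by safeguards and diversion success can be written as an absorbing discrete-time chain:

```python
# Minimal discrete-time Markov sketch of proliferation risk analysis.
# States: attempt ongoing, detected (absorbing), succeeded (absorbing).
# The per-step probabilities are hypothetical, chosen only for illustration.

def markov_probabilities(p_detect, p_success, steps):
    """Return (detected, succeeded, ongoing) probabilities after `steps` steps."""
    detected, succeeded, ongoing = 0.0, 0.0, 1.0
    for _ in range(steps):
        detected += ongoing * p_detect            # caught by safeguards this step
        succeeded += ongoing * p_success          # diversion completed this step
        ongoing *= (1.0 - p_detect - p_success)   # attempt still in progress
    return detected, succeeded, ongoing

# Example: strong detection (e.g. solution monitoring) vs. weak detection.
strong = markov_probabilities(p_detect=0.30, p_success=0.02, steps=50)
weak = markov_probabilities(p_detect=0.05, p_success=0.02, steps=50)
print(strong, weak)
```

With a high per-step detection probability, nearly all probability mass is absorbed into the detected state, mirroring how the detection probability dominates at well-monitored process steps in **Figure 8**.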

#### **3.3. PRA in security**

In this case study, the Markov approach is first applied to the sabotage risk assessment, and Bayes updating is used to estimate the incident probability. Finally, the risk is considered taking the sabotage sequences into account.

#### *3.3.1. Markov model approach*

In **Figure 9**, the sabotage pathway of a hypothetical nuclear reactor is shown. This example scenario represents sabotage of a nuclear reactor at full power operation by disabling the decay heat removal function; decay heat removal can be defeated either by destroying the coolant loop or by failing the sea water circulation. It is assumed that this sabotage scenario is carried out by a conventional strategy such as unauthorized intrusion, and this unauthorized intrusion is defined as the design basis threat (DBT) for the physical protection system of this hypothetical reactor. In addition, a standoff attack scenario is considered as a typical example of beyond DBT. In the beyond-DBT case, standoff attacks are performed to fail the primary coolant boundary integrity and containment integrity, and the serious consequence is radiological release to the atmosphere.

**Figure 9.** Sabotage pathway to radiological release of a nuclear power reactor. The Markov model is used for this sabotage pathway analysis. An intrusion attack is assumed as DBT, and a standoff attack as beyond DBT.

In this security risk evaluation, two sabotage scenarios for nuclear power plants are considered. As the DBT, an intrusion incident is investigated in case (a), in which terrorists attempt to overcome physical barriers and destroy a reactor building using explosives. In case (b), a standoff attack is modeled, and aircraft and/or missile attacks are assumed as beyond DBT. The failure, mitigation, detection, and success probabilities are calculated according to the elapsed time, as shown in **Figure 10**. The detection probability is very high and the success probability very low in case (a), which means that the physical protection system performs efficiently for the DBT scenario. On the contrary, the mitigation and success probabilities increase gradually as the fuel melts in case (b). This indicates that the physical protection system does not work well in the case of beyond DBT and that mitigation plans are important to minimize the consequences. Therefore, good cooperation between the facility operator and the national response authority is essential to mitigate the consequences of a sabotage attack.

#### *3.3.2. Bayes updating*


To compare the risk representation in security with that in safety, an incident probability is roughly estimated with Bayes updating, using 17 data points on terrorism incidents against nuclear power plants worldwide from 1972 to 2007, as shown in **Figure 11** and in [19].

In the Bayes updating, the prior probability distribution is assumed to be a gamma distribution, and the most recently updated mean value of the probability is about 4 × 10⁻² per year for the global fleet of nuclear power plants. This corresponds to about 10⁻⁴ per year for an individual nuclear power plant.
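Such a Bayes update has a closed form when a gamma prior is combined with a Poisson count of incidents. The sketch below uses a deliberately vague prior and a simplified exposure of 36 fleet-years; the chapter's quoted value of about 4 × 10⁻² depends on its actual prior parameters and exposure normalisation, which this illustration does not attempt to reproduce:

```python
# Conjugate Bayes updating of an incident rate: gamma prior + Poisson likelihood.
# Observing k incidents over an exposure t updates Gamma(alpha, beta) to
# Gamma(alpha + k, beta + t). Prior parameters and exposure are hypothetical.

def gamma_poisson_update(alpha, beta, k, t):
    """Return posterior (alpha, beta, mean rate) after k events in exposure t."""
    alpha_post = alpha + k
    beta_post = beta + t
    return alpha_post, beta_post, alpha_post / beta_post

# Vague prior (mean rate 1 per unit exposure), then the 17 incidents observed
# over the 36-year window (1972-2007) mentioned in the text.
a_post, b_post, rate = gamma_poisson_update(alpha=1.0, beta=1.0, k=17, t=36.0)
print(a_post, b_post, rate)
```

Each new observation window simply shifts the gamma parameters, which is why the posterior mean can be kept "most updated" as further incident data arrive.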

**Figure 10.** The failure, mitigation, detection, and success probabilities are calculated according to the elapsed time from an incident. In case (a), sabotage by an intrusion is assumed as the design basis threat (DBT); in case (b), aircraft and/or missile attacks are assumed as beyond DBT.

**Figure 11.** The incident probability is roughly estimated by Bayes updating, using 17 data points from past terrorism incidents worldwide from 1972 to 2007.

#### *3.3.3. Sabotage sequence analysis*

A loss of offsite power (LOOP) is the typical scenario in sabotage protection studies because of the vulnerability of the offsite power source and transmission line. In addition, a loss of onsite power from the emergency diesel generators (EDG) is assumed, reflecting the new safety regulation introduced after the tsunami in the Fukushima accident. All the sabotage sequences are shown, including reactor cooling by auxiliary feed water (AFW), bleeding steam into the containment vessel (CV), and loss of all direct current (DC) and alternating current (AC) power. In order to investigate the effect of damage control design, five design changes cited from reference [20] and an emergency power source are included in the event sequence, as in **Figure 12**.
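The event-sequence enumeration described above can be sketched as a tiny event tree. The headings and branch probabilities below are hypothetical placeholders; as noted for **Figure 12**, the real headings and design changes are withheld for security reasons:

```python
# Sketch of an event-tree walk for a loss-of-offsite-power (LOOP) sequence.
# Branch headings and success probabilities are hypothetical placeholders.

HEADINGS = ["EDG", "AFW", "BLEED"]   # onsite power, aux feedwater, steam bleed
P_SUCCESS = {"EDG": 0.95, "AFW": 0.9, "BLEED": 0.8}

def event_tree(headings, prefix=(), p=1.0):
    """Yield (sequence, probability) for every branch of the tree."""
    if not headings:
        yield prefix, p
        return
    h, rest = headings[0], headings[1:]
    # success branch: this mitigation works, continue down the tree
    yield from event_tree(rest, prefix + ((h, "ok"),), p * P_SUCCESS[h])
    # failure branch: this system fails; remaining systems are still challenged
    yield from event_tree(rest, prefix + ((h, "fail"),), p * (1 - P_SUCCESS[h]))

sequences = list(event_tree(HEADINGS))
# Probability of the sequence in which every mitigation fails.
all_fail = sum(p for seq, p in sequences if all(s == "fail" for _, s in seq))
print(len(sequences), all_fail)
```

A real analysis would prune branches that already reach a safe end state and attach fault trees under each heading; this sketch only shows how sequence probabilities multiply along the branches.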

In order to evaluate the effect of the design change, the number of target sets is shown as bar-chart histograms in **Figure 13**. The horizontal position of each bar corresponds to the vulnerability derived from the individual target element; the vulnerability is shown as the polyline in the same figure. In the reference case, hollow bars are seen around the high-vulnerability region on the left-hand side of the figure, and the number of elements is 1 or 2. The cumulative number of target sets is not very large; however, these target sets are very vulnerable from the standpoint of sabotage protection. On the contrary, with the design change, the cumulative number of target sets is large, but their vulnerabilities are very low. This does not mean that the target sets with the design change are vulnerable.

**Figure 12.** The names of the headings and of the design changes in the event tree and all fault trees are abbreviated due to security concerns.

The sabotage risk in Eq. (3), which is proportional to the sum over target sets of the number of target sets multiplied by their vulnerability, is shown in **Figure 14** as a function of the number of elements in the target set, with and without the design change. The number of elements constituting a target set ranges from 1 to 3, and the effect of the design change for damage control is shown. The total risk is reduced in all cases, regardless of the number of elements, when the design change for damage control is considered. It is verified that the built-in measures are more effective and resistant than emergency equipment placed outdoors. Movable equipment is a flexible and resilient measure in accident management. It should be noted, however, that such equipment has to be used properly within the defense-in-depth (DiD) measures because of possible interference by an adversary.

**Figure 13.** The number of target set (TS) and the vulnerability with and without the design change (DC).

**Figure 14.** Reduction of total risk due to the design change (DC).
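The summation behind **Figures 13** and **14** can be sketched as follows; the target-set counts and vulnerabilities are hypothetical illustrations, not the study's withheld values:

```python
# Sketch of the sabotage risk summation of Eq. (3):
# risk is proportional to sum over bins of (number of target sets) x (vulnerability).
# The counts and vulnerabilities below are hypothetical illustrations only.

def total_risk(target_sets):
    """target_sets: list of (count, vulnerability) pairs per histogram bin."""
    return sum(count * vuln for count, vuln in target_sets)

# Reference design: few target sets, but highly vulnerable (1-2 elements each).
reference = [(1, 0.9), (2, 0.7), (4, 0.2)]
# With damage-control design changes: more target sets, far lower vulnerability.
with_dc = [(3, 0.05), (6, 0.02), (8, 0.01)]

print(total_risk(reference), total_risk(with_dc))
```

The comparison reproduces the qualitative result of **Figure 14**: a larger number of low-vulnerability target sets can still yield a much smaller total risk than a few highly vulnerable ones.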

#### **4. Prospect of integrated risk analysis**

The PRA is an important method to evaluate the equality of cost-effectiveness (CE) among the 3S countermeasures, and the quantification of risk in safeguards and security remains a challenge. The safety CE can be calculated by Eq. (5), and the frequency and damage cost in Eq. (5) have been well investigated. For the security CE, the incidence probability can be roughly estimated as shown in the previous section, and the damage cost can be evaluated according to the individual scenario. For the safeguards CE, on the contrary, there is no method to estimate the incidence of diversion and/or misuse:

$$\text{Safety CE} = \text{Frequency}\left(\frac{1}{\text{year}}\right) \times \text{Damage Cost}\ (\$) \tag{5}$$

$$\text{Safeguards CE} = \text{Unknown} \times \text{Damage Cost}\ (\$) \tag{6}$$

$$\text{Security CE} = \text{Rough Estimation}\left(\frac{1}{\text{year}}\right) \times \text{Damage Cost}\ (\$) \tag{7}$$
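As a minimal sketch of this comparison (all frequencies and damage costs below are hypothetical), the safeguards term can only be carried as an unknown:

```python
# Sketch of the 3S cost-effectiveness comparison of Eqs. (5)-(7).
# Frequencies and damage costs are hypothetical; the safeguards incidence
# is left as None because, as the text notes, no estimation method exists.

def cost_effectiveness(frequency_per_year, damage_cost_usd):
    if frequency_per_year is None:
        return None  # Eq. (6): safeguards CE cannot yet be quantified
    return frequency_per_year * damage_cost_usd

safety_ce = cost_effectiveness(1e-5, 1e9)      # Eq. (5): well-studied PRA figures
security_ce = cost_effectiveness(1e-4, 1e9)    # Eq. (7): rough Bayes estimate per plant
safeguards_ce = cost_effectiveness(None, 1e9)  # Eq. (6): unknown incidence

print(safety_ce, security_ce, safeguards_ce)
```

The missing safeguards term is exactly the gap that blocks a fully quantitative 3S resource allocation.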

This is the current status of the trial toward integrated 3S risk evaluation. A balanced management of resource allocation is, however, highly desirable for introducing nuclear energy in full compliance with international regimes as well as in a cost-effective manner. An integrated management system based on quantitative risk evaluation would be a future research area in the nuclear engineering field.

PRA in safeguards and security has been evolving and is being applied to the promotion of 3SBD activities. However, the theoretical basis is diverse, and the effectiveness of PRA in these areas has not been clearly demonstrated yet. It is shown that the Markov model approach is a good example of Safeguards by Design activities, not only for an advanced instrument but also for a risk-informed installation. The model is applied to PRA with the PUREX model, and it is clearly demonstrated that the vulnerable path in the PUREX process is safeguarded by the solution monitoring (SM) originally installed on the basis of expert elicitation. A recent study on SM is the uncertainty analysis to optimize the safeguards measures, with a trade-off between the safeguards performance affected by measurement error and the economic considerations of increasing the throughput in the advanced reprocessing process. Both the harsh circumstances caused by the residual MA and FPs and the increase of measurement uncertainty due to the large throughput favor more NDA installation than DA, considering the initial and running costs of those measures.

The probabilistic risk methodologies in security have been developing, and the inherent difficulties due to intentional acts remain a challenge. However, the Markov model, Bayes updating, and sabotage sequence analysis could be applicable to decision problems in security. In fact, sabotage scenario analysis using the vital area identification methodology has been used to increase the effectiveness of sabotage protection in nuclear power plants, and the sabotage logic trees, originally developed as ETs/FTs in safety PRA, are used for security protection.

Finally, integrating the PSA in safety as a risk assessment technique with the PRA in safeguards and security has the potential to fascinate the younger generation, and a comprehensive 3S regulation based on qualitative and quantitative risk discussion should be transparent and persuasive as a reasonable approach in the mandatory 3S implementation.

### **Author details**


Mitsutoshi Suzuki

Address all correspondence to: suzuki.mitsutoshi@jaea.go.jp

Integrated Support Center for Nuclear Nonproliferation and Nuclear Security, Japan Atomic Energy Agency, Japan

### **References**


[5] Suzuki M, Burr T, Howell J. Risk-informed approach for safety, safeguards, and security (3S) by design. ICONE19-43154, 19th International Conference on Nuclear Engineering. Chiba, Japan; May 16-19, 2011

[6] Kurisaka K, Kubo S, Kamiyama K, Niwa H. Comprehensive Safety Examination of Commercialized Fast Reactor Cycle Systems – Examination of Safety Development Target and Risk Analysis of the Aqueous Fuel Cycle Systems. JNC TN9400 2002-031; 2002

[7] Yue M, Cheng LY, Bari R. A Markov model approach to proliferation-resistance assessment of nuclear energy systems. Nuclear Technology. 2008;**162**(26):28-44

[8] Suzuki M, Terao N. Solution monitoring evaluated by proliferation risk assessment and fuzzy optimization analysis for safeguards in a reprocessing process. Science and Technology of Nuclear Installations. Vol. 2013. Hindawi; 2013

[9] Gen IV International Forum, PR&PP Expert Group. Evaluation Methodology for Proliferation Resistance and Physical Protection, Rev. 5, GIF/PRPPWG/2006/005, OECD; November 30, 2006

[10] Cobb D. Sequential tests for near-real-time accounting. INMM Proceedings; 1981. pp. 62-70

[11] Avenhaus R, Canty MJ. Formal models for NPT safeguards. Journal of Nuclear Material Management. 2007;**354**:69-76

[12] Suzuki M, Burr T, Howell J. Risk-informed approach for safety, safeguards, and security (3S) by design. ICONE19-43154, Proceedings of ICONE19, 19th International Conference on Nuclear Engineering. Chiba, Japan; May 16-19, 2011

[13] Suzuki M, Demuth S. Proliferation risk assessment for large reprocessing facilities with simulation and modeling. Paper 399247, Proceedings of Global 2011. Chiba, Japan; December 11-16, 2011

[14] Engineering Safety Aspects of the Protection of Nuclear Power Plants against Sabotage. IAEA Nuclear Security Series No. 4, Technical Guidance. Vienna: IAEA; 2007

[15] Sandia National Laboratories. Security Risk Assessment Methodologies. Available from: http://www.sandia.gov/ram/RAM-CI.html

[16] Kardes E, Hall R. Survey of literature on strategic decision making in the presence of adversaries. CREATE Report; March 15, 2005

[17] Ericson DM, Bruce Varnado G. Nuclear Power Plant Design Concepts for Sabotage Protection. NUREG/CR-1345, SAND80-0477/1; 1981

[18] Suzuki M, Howell J, Burr T, Demuth S. Proposal of proof-of-principle study on aqueous reprocessing facility. IAEA CRP meeting; 2012

[19] Steinhausler F. Countering security risks to nuclear power plants. International Symposium on the Peaceful Applications of Nuclear Technology in the GCC Countries. Jeddah; 2008

[20] Lobner P. Nuclear Power Plant Damage Control Measures and Design Changes for Sabotage Protection. NUREG/CR-2585, SAND82-7011; 1982

Provisional chapter

### **Practical Propagation of Trust in Risk Management Systems**

DOI: 10.5772/intechopen.70741

Kristian Helmholt, Matthijs Vonder, Bram Van Der Waaij, Elena Lazovik and Niels Neumann

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/intechopen.70741

#### Abstract

Using risk management systems for large-scale asset management is not without risk itself. Systems that collect measurements from a geographically diverse area, across many organisations, contain many interacting components that can fail in many different ways. In this chapter these systems are discussed from a risk assessment point of view, using practical examples. It provides suggestions on how trust can propagate between interacting components of risk management systems by making the information needed for risk assessment explicit.

Keywords: asset management, systems architecture, uncertainty, trust, distributed systems

#### 1. Introduction

This chapter is about trust propagation in risk management (related) systems used for large-scale asset management that are largely constructed using information and communication technology. An example of such a system is a smart grid monitoring system used for managing the risk of power outage. Based on measurements and failure models, it determines the probability of future failure of components of the grid and the impact of such failure. The focus will not be on the type of risk management these systems were designed to support. Instead, the focus is on assessing the risks involved with these systems themselves.

We look at large-scale distributed systems that are so complex that no individual member of the set of people involved in the life cycle (design, construction, operations, etc.) of such a system can understand the entire system. It would require more study and training from any individual than could reasonably be expected of a human being. Assessing risk in this context requires the propagation of trust between the collaborating multidisciplinary members of the group. Members can vouch for certain aspects of the system based on their knowledge and expertise but need to rely on each other's judgement to come up with a verdict about the system itself and its output as a whole.

© The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Special attention will be given to (1) automating the propagation of trust and (2) making the trust propagation process itself transparent, so that the system can be studied/audited/inspected by third parties that were involved neither in creating a system (of systems) nor in monitoring that system. We work from the assumption that these independent third parties must be able to assure people that a system that supports (or even makes) decisions can be trusted. If these systems fail for whatever reason, it should be detected quickly. We go by this assumption because we think that without the ability to have people trust decision support/decision-making systems, these systems are useless in practice. Without these systems, our societal toolkit for dealing with a growing dependency on relatively scarce resources like energy, fresh water, skilled labour force, etc. is becoming dangerously empty. Dealing with these problems requires effective decisions that take into account the analysis of many different aspects of physical reality, which is beyond individual human capacity. We will describe several of those systems that are quintessential in making optimal decisions in societal domains where nonoptimal decisions, let alone failure, are becoming less and less of an option.

We start this chapter by describing cases where risk management (related) systems (should) provide information to decision-makers. There is risk involved in all decisions. The risk (R) is defined as the product of the impact and the probability of an undesired event/outcome. For example, the decision of an electricity grid operator not to replace an electricity cable might result (with probability p) in a power outage (the undesired event), while at the same time, the decision to replace the cable might result (with probability 1 − p) in unnecessary spending of money. What is 'worse' depends on the valuation of the undesired events, which can be difficult to compute. If, for example, the loss of power (indirectly) results in the death of a person who could not call an emergency number, the impact is huge. Valuation of such impact is not a topic of this chapter, and neither is the computation of probability for each case. Instead, after providing a description of the cases, we will show an approach for propagating trust. Last but not least, this chapter is not about specific improvements to fault tree analysis (FTA) or failure mode, effects (and criticality) analysis (FME[C]A).
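The cable example can be sketched as a comparison of expected losses, R = probability × impact; all figures below are hypothetical illustrations, not real grid data:

```python
# Sketch of the risk comparison in the cable example: R = probability x impact.
# All probabilities and cost figures are hypothetical illustrations.

def expected_loss(p_event, impact_cost):
    return p_event * impact_cost

p_failure = 0.02            # hypothetical chance the old cable fails this year
outage_cost = 5_000_000     # hypothetical societal cost of a power outage
replacement_cost = 250_000  # hypothetical cost of replacing the cable now

risk_keep = expected_loss(p_failure, outage_cost)  # do not replace: expected loss
risk_replace = replacement_cost                    # replace: certain cost

print(risk_keep, risk_replace)
```

A pure expected-value comparison favours whichever figure is lower, but as the text stresses, the hard part is valuing the impact in the first place (e.g. when a life is at stake).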

### 2. Examples of trust propagation in risk assessment

In this section we provide examples of risk management (related) systems. We will not provide a comprehensive risk assessment of each system as this is beyond the scope of this chapter. The function of these examples is to serve as a backdrop later on in this chapter for illustrating approaches for propagating trust within these systems and to its end-users. All examples are derived from real-life cases on which we have worked. For reasons of clarity, explanation purposes and customer confidentiality, they are not 'verbatim copies' of reality.

#### 2.1. Railroad degradation: finding the cause behind the effect


This example serves as a means to show several risks involved in cause and effect relationships in complex systems.

Rail transport operations involve many risks at different levels of abstraction. People depend on the transportation of objects (e.g. people, cargo) to arrive safely, on time, possibly comfortably and within budget constraints. End-users run many different risks, for example, not arriving on time, getting killed in an accident, paying too much, etc., each with different probabilities and different impact costs. Railroad operators in turn run the risk of causing an accident or delay and of providing uncomfortable or overpriced services, etc. In this example we focus on a specific aspect: determining the probability that a specific segment of physical track becomes unavailable due to physical degradation. This probability is a welcome ingredient in sophisticated risk assessment. For example, it can be used to determine when to perform maintenance, or to determine which tracks are available for routing trains. It is far from trivial to determine this probability, due to the many cause and effect relationships present in railroad operations. This is because of the many physical interactions of objects and forces that together influence the physical condition of the track. These interactions are the results of actions that in turn are the results of processes at the different organisations and people involved in railroad operations. For example, an object that exerts influence is the train, which interacts with the track through its wheels. The presence of a train is the result of planning by railroad transport carrier organisations. The interaction of the train with the track is influenced by the type of train, its weight, its length and the number of trains. The track also interacts with its surroundings, like the geotechnical situation (e.g. 'soft or wet soil') and the weather. The degradation of the track is also influenced by the construction materials, shape and specific construction. Last but not least, maintenance activities also influence the degradation process (i.e. 'it disrupts degradation'). All these influences interact in a way that seems (and probably is) far from trivial to understand. There is a lot of uncertainty in determining the probability of degradation.

In recent years, parties involved in railroad operations (i.e. railroad operators, contractors, etc.) have come up with approaches for arriving at a better estimation of the probability of (un)availability of the track. Many of these approaches are based on the idea that future states of the track can be estimated from knowledge of previous states. This is based on the assumption that a future state is (at least partially) determined by previous states, which can be the case in physical systems. The idea is to analyse recordings of previous states and discover a mathematical relationship between past and present states. This relationship could in turn lead to a prediction/estimation of future states by carrying out computations with previous states as input. From the viewpoint of risk assessment, roughly stated, two types of mathematical relationships can be identified: statistics based and physical model based. The first approach looks at parameters that describe the level of track degradation (e.g. height, shift, etc.) through time. It tries to fit those parameters to a mathematical function that describes the development of the parameters through time as accurately as possible (e.g. 'linear regression', 'curve fitting'). The future state is then estimated using this function. There is no real physical understanding of the system observed. The second approach also looks at parameters that might influence degradation (weather, soil type, etc.). It tries to find a mathematical relationship between influence parameters and degradation parameters through time, resulting in a physical understanding in terms of a model. If that relationship is well known, the future state can be predicted based on measurement of the previous state, the influence parameters and application of the model. In practice there are combinations of both approaches. For example, Bayesian networks can be used to determine the conditional probability that a track will degrade. The idea is to determine the contribution of different influence parameters to the probability that a track will degrade. For example, an outcome of an analysis using Bayesian networks might be 'if 80% of the trains crossing the track travel at a speed of 100 km/h, the probability of severe track height degradation is 65%, whereas it is reduced to 30% if only 20% of the trains travel at that particular speed'. Using this approach does not provide a complete physical model, but it does provide a deeper understanding of the observed system when compared to extrapolating on a degradation parameter like height alone. In summary, discovering/determining a physical model is far from trivial in railroad degradation. This is due to the many interactions of potential influence parameters and the diversity in track construction and operations.
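As a minimal sketch of the statistics-based approach, the following fragment fits a linear trend to a degradation parameter and extrapolates it to a future inspection date. The measurement values are made-up illustrative numbers, not data from an actual railway:

```python
# Sketch of the statistics-based approach: fit a degradation parameter
# (here: hypothetical track height deviation in mm) against time and
# extrapolate to a future inspection date. Illustrative data only.
import numpy as np

# Quarterly measurements of track height deviation (mm) over 3 years.
quarters = np.arange(12)
height_deviation = np.array([0.5, 0.7, 0.8, 1.1, 1.2, 1.5,
                             1.6, 1.9, 2.1, 2.3, 2.6, 2.8])

# Fit a linear trend (degree-1 polynomial), i.e. 'linear regression'.
slope, intercept = np.polyfit(quarters, height_deviation, deg=1)

# Estimate the deviation roughly two years past the last measurement.
future_quarter = 19
predicted = slope * future_quarter + intercept
print(f"Predicted deviation in quarter {future_quarter}: {predicted:.2f} mm")
```

Note that, exactly as the text states, such a fit carries no physical understanding: it will happily extrapolate through a maintenance intervention that 'disrupts degradation'.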

From the perspective of assessing risk in using an approach like this, several types of risk can be identified. There are risks involving:

1. Measurement. For example, the recorded measurement might not reflect the actual physical reality. This can be caused by different things: faulty measurement sensors, errors in recording, errors in converting data during data preparation, etc. Also, measurement methods might change throughout the years. A more accurate set of data might become available with different characteristics. This in turn might result in the observation that 'during the years something changed in track behaviour', while it was only a change in the measurement method.

2. Analysis. For example, analysts might assume that a set of measurements is more accurate than it is in reality. Analysis errors might be made due to difficulties in comparing different measurement sources (e.g. height of track, soil saturation of the underground, speed of trains, etc.). As different aspects of reality are measured at different points in time and location (in this case), mathematical interpolation of these types of data is needed for combining them. This might lead to wrong conclusions, as these aspects are not tightly coupled and come from complex multivariate and multi-organisational systems.

3. Persistence of analysis results (i.e. the identified mathematical relationship). For example, if a mathematical relationship has been discovered while an influence parameter did not change during the statistical analysis of measurement data, the model might not take that parameter into account. As soon as this influence parameter changes, the modelled mathematical relationship will probably no longer be valid.

These risks directly translate into trust issues. Measurement experts must be able to trust sensors. Analysis experts must be able to trust measurements. Risk assessment experts must be able to trust analysis results. As all three areas are different areas of expertise, there is a need for propagation of trust. Again, this is far from trivial, as railway systems contain thousands of kilometres of track in different surroundings, used in many different ways. Coming up with a methodology (or a system that automates this method) that estimates the probability of unavailability due to degradation in a uniform way is a challenge. It involves collaboration between different experts working at different organisations in different locations. Each organisation has to be trusted to provide the right information. This can be extra difficult in competitive markets, which is the case in some countries (e.g. the Netherlands). In that market, cargo transportation companies compete for cargo, and contractors compete for the right to maintain track for a period of several years. Sharing data or analysis methods might interfere with the rules of competition. For example, determining the cause of a stop in degradation requires knowing where and when a contractor performed maintenance on the track. This, however, is also part of the competitive edge a contractor has, which is an impediment to sharing this kind of data in general.

#### 2.2. Pipeline management: trusting a computed future

This example [1–3] serves as a means to show the risks involved in computing possible future states of complex systems.

In the Netherlands, distribution of drinking water and gas is largely done through underground pipelines, as the soft soil of most of the Netherlands permits relatively easy modification of the top (1–2 m) layers of soil. Failure to provide water and gas has a severe impact on economy and society. Risk management is part and parcel of the work carried out by the organisations responsible for the management of these utility networks. They aim at minimising risk within budget restraints by spending resources (time, money, etc.) in the most optimal way. Risk assessment constitutes an important part of their activities in minimising risk. In this example we focus on assessing the risk of structurally unreliable pipelines due to the influence of ground settlement. This risk is significant enough to be investigated, as (in the Netherlands) the top soft soil layer can, and does, move at different speeds at different places. This can cause strain in the pipelines, and depending on the materials and specific geometry, they can rupture or break, resulting in leakage. Next to not delivering gas and water, there are other types of impact. Water leakage can result in local flooding that can destroy roads ('sinking cars'). Gas leakage might result in explosions that (at least partially) destroy houses, as gas seeps into basements from underneath the roads where most pipelines are situated. Risk managers therefore want to reduce as much uncertainty as possible with respect to the probability of this type of pipeline failure occurring.

One approach that is currently being used in a project at TNO is to use physical models of reality to estimate the likelihood of possible future states of pipelines. Several models are combined, among them geotechnical and structural reliability models.

The construction and maintenance of these models require specific expertise, for example, in the area of geotechnics and structural reliability. A multidisciplinary team is needed for constructing a 'supermodel' that integrates all of these models. This 'supermodel' requires a broad spectrum of input parameters, including:

1. Detailed geographical description of soil type

2. Forces on the ground throughout the years, as geotechnical processes can take years (e.g. 30 years for settlement due to a specific load)

3. Location, geometry and material of the pipeline
Information about these parameters is not available in a uniform way. There are drill samples of the soil, but not from all locations. Soil type therefore has to be derived using another model that provides a (probabilistic) estimate of what type of soil is (most likely) located at a specific spot. Often, there is information about when an area was transformed into a 'built environment', but what exactly was built or deposited when and where is mostly forgotten in history. This has to be derived and estimated from other sources. There are geographical information systems (GIS) containing information on 'where to dig' for pipelines, but the exact height and location are often not known, not least because pipelines can move due to the softness of the soil. So, the 'supermodel' has to deal with a lot of uncertainty. This is partially done by including the actual settlement of the top soil layer, as measured by satellites. This data can be assimilated in order to keep estimations of soil settlement closer to reality. How this is done is beyond the scope of this chapter.

Because there is a lot of uncertainty with respect to input parameters, the engineers behind this approach have decided to use 'stochastic modelling'. Roughly stated, this means that not one possible future state is computed using the model chain, but many different possible states, based on probability density functions. For several possible variations in a variable, a possible state is computed. Given the number of variables involved and the different probability density functions, this results in many different possible future states. These are subjected to a statistical analysis, which then results in a 'most probable future'. Note that this approach has only become affordable recently due to developments in the automation of distributed computation. The amount of time to wait for results can now be reduced drastically by computing in parallel across clouds and combining the results later on.
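A minimal sketch of this stochastic idea is given below. The 'strain model' and the input distributions are made-up placeholders, not TNO's actual supermodel; the point is only to show how sampling uncertain inputs yields a distribution of possible future states and thus a probability estimate:

```python
# Monte Carlo sketch of 'stochastic modelling': instead of one future
# state, many states are computed by sampling uncertain inputs from
# probability density functions. The strain relationship is a made-up
# placeholder for illustration.
import numpy as np

rng = np.random.default_rng(seed=42)
n_runs = 100_000

# Uncertain inputs, each drawn from an assumed distribution:
settlement_mm_yr = rng.normal(loc=5.0, scale=2.0, size=n_runs)  # soil settlement rate
stiffness = rng.lognormal(mean=0.0, sigma=0.3, size=n_runs)     # relative pipe stiffness

# Placeholder physical relationship: strain grows with settlement
# and is reduced by stiffness.
strain = settlement_mm_yr / (10.0 * stiffness)

# Fraction of simulated futures exceeding an assumed failure threshold.
p_failure = np.mean(strain > 1.0)
print(f"Estimated probability of pipeline failure: {p_failure:.3f}")
```

Because each of the `n_runs` samples is independent, this loop parallelises trivially across machines, which is exactly why cheap distributed computation has made the approach affordable.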

The use of so much input data means that there are risks involved with measurement and (statistical) analysis, just as in the previous 'railroad degradation' example. Other risks are:

1. Untimely arrival of data. As the 'supermodel' requires a lot of data on a regular basis to update its estimations, it might be the case that one of the suppliers fails to deliver on time. This will impact the output of the 'supermodel' in the sense that it becomes outdated advice.

2. Inaccurate integrated model of the physical world. For example, the models might represent a correct understanding of the physical world separately, but when integrated into a 'supermodel', they do not. For example, a pipeline itself might influence the behaviour of the soil too, which might not be taken into account by the soil settlement model.

#### 2.3. Smart grid analysis: non-available data

This example [4, 5] serves as a means to show the risk of data becoming unavailable.

In the Netherlands, electricity is distributed through underground low-voltage utility networks, just as water and gas. That makes the cables and their connections invisible to direct visual inspection. However, as the condition of the cable insulation contributes significantly to the risk of power outage, knowledge about the actual state of the insulation is an important ingredient for assessing the risk of power outage. An approach to deal with this uncertainty, currently under investigation in research projects, is to come up with an estimate of the condition of the insulation part of the cable, using a model that takes into account the material of the cable, its construction, its surroundings and the power loads it has been subjected to. This model will be developed based on analysing measurements. Using methods from applied statistics, researchers will try to identify relationships between power outage and specific influence parameters. This approach can be compared to the one used in the 'railroad degradation' example above.

What makes this example different from the previous ones is the possible need for accessing data that is protected by privacy laws. Specific power loads might influence degradation of the condition of the insulation part of the cable. To determine if this is the case, measurement data is needed on the demand and supply of power to a distribution network. Smart meters (at households) might be able to provide this data. However, this data could also provide insight into what equipment is used at which times of the day. This is why smart meters have been the topic of many heated privacy debates. Smart grid operators might be allowed by special law to use the data only for grid analysis. However, in the future there might be new and tougher laws on privacy that stop the operators from having access. Keeping the system (i.e. failure models) up to date will then become a challenge.

This case shows the risk that data is not (always) available, this time due to the introduction of a law instead of a (temporary) failure of a data supplier to deliver.

#### 2.4. Precision dairy farming: sharing valuable information

This example [6] shows the risks involved in sharing commercially sensitive data.

As dairy farming concerns livestock, it involves many different risks, for example, risks involving food safety and animal health. Assessing risks in detail requires having information on cows, their surroundings, their food, etc. In the past it was difficult to retrieve this information as it was not recorded. But as sensors and ICT have become more affordable, farms are becoming places where a multitude of sensors gather measurement data for interpretation by experts that work on improving milk production, cow welfare, etc. Also, laws and regulations have changed in order to ensure safety and care for the animal and the environment.

The information gathered by these systems, however, is not easily available for everyone. This is because it provides insight into 'secrets of the trade', and not every animal expert wants to share their findings or data/information. There is the risk of 'teaching your competitors'. Discovering cause and effect relationships requires long-term observation of cows in their contexts. This can be difficult as cows can 'switch farms'. There is the risk of data on a cow becoming unavailable.

Recently, the InfoBroker concept [7] was developed: 'a platform to make real-time sensor data from different farms available, for model developers to support dairy farmers in Precision Livestock Farming. The data has been made available via a standard interface in an open platform in real time at the individual animal level'. The InfoBroker is designed to make data stored in diverse places available in an efficient and controlled manner. Data is not stored centrally, but remains at the source. The InfoBroker is capable of retrieving individual cow data from many sources while at the same time serving a large number of models on demand. 'For each farm it is specified which data may be released by the InfoBroker. This means that the farmer continues to be the owner of the data'.

Newly identified risks are related to the uncertainty of the quality of the sensor data. Not all data sources are created equal. Some sources are more precise or more frequent (and some change over time). Does a sensor measure the weight per 10 kg or per 0.1 kg? Does a sensor measure the activity per day or per minute? For some applications, this is irrelevant. For example, for a dashboard application, typically the data is presented as is, without any qualitative indication. For others, especially model-driven applications, it is essential to know if the data from the device is accurate enough to be used. Some scales can be used as weight input; others are not precise enough. Some activity sensors produce frequent enough measurements; others accumulate over too long time periods. It depends on the farm which sensors are used and therefore available to the model. Therefore, there is a risk of drawing the wrong conclusion for some farmers, because they happen to have a sensor of insufficient quality.
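The kind of quality gating described above could be sketched as follows. The sensor specification fields and the thresholds are hypothetical illustrations, not part of the InfoBroker interface:

```python
# Sketch of guarding a model-driven application against input from
# sensors of insufficient quality. Thresholds and the sensor catalogue
# are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class SensorSpec:
    name: str
    resolution_kg: float   # e.g. weight measured per 10 kg or per 0.1 kg
    samples_per_day: int   # e.g. activity per day or per minute

def usable_for_weight_model(spec: SensorSpec) -> bool:
    """A model-driven application only accepts sufficiently precise and
    sufficiently frequent sensors; a dashboard might accept anything."""
    return spec.resolution_kg <= 0.5 and spec.samples_per_day >= 24

coarse_scale = SensorSpec("barn scale", resolution_kg=10.0, samples_per_day=1)
precise_scale = SensorSpec("walk-over scale", resolution_kg=0.1, samples_per_day=48)

print(usable_for_weight_model(coarse_scale))   # coarse scale is rejected
print(usable_for_weight_model(precise_scale))  # precise scale is accepted
```

The key design choice is that quality requirements live with the consuming model, not with the sensor, since the same sensor may be good enough for one application and not for another.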

### 3. Propagating trust in risk assessment

In the previous section, we have provided examples of risks involved in risk management (related) systems. In this section we focus on the propagation of trust between the different parties that design, construct, manage and use these systems. Without the propagation of trust, it is impossible to assess risk using these systems, during construction as well as during its usage. The approach for propagation we describe is largely based on separation of concerns and will be discussed at the logical/functional level of abstraction, from an architectural point of view.

#### 3.1. Separate concerns in risk assessment


A basic underlying problem for teams of multidisciplinary experts wanting to vouch for a system as a whole is that they cannot do a proper review of the work of experts from another domain. They simply lack the expertise. For example, how can a mathematical analyst determine if measurements made by another expert can be trusted if he/she has no expertise in the field of making measurements? How can the analyst assess how much risk is involved in using these measurements? If they want to vouch for the system as a whole, they have to trust the other experts involved.

We state that it is important for continuous risk assessment of complex risk management (related) systems to separate the concerns for experts. This means that instead of assessing risk for a large monolithic system with lots of intertwined functional components, a conglomerate of components should be designed with risk management built into the components. Each expert can focus on the component that he or she understands. They connect their own components to others using well-defined interfaces. These interfaces take into account risk assessment issues, which will be discussed later on. The idea of separation of concerns in distributed systems is sometimes also referred to as 'unbundling'. Finding out where to 'draw the lines of separation' is beyond the scope of this chapter, as it is a topic of its own in distributed systems design. However, we can state that the following categorisation of types of expertise provides an indication of where to separate:


Once a design is separated into components that are understood by experts in their domain, the next step is to make information on the risks involved explicit. Experts should explicitly provide risk assessment-related information about the (un)certainty of the (delivery of) data/information of their component to other components. In this way it is possible to assess the risks involved in the system as a whole. Note that we do not consider the decomposition of a system into parts as something new: we use this as a stepping stone for the next sections.

#### 3.2. Propagation of trust

Experts can provide risk assessment-related information about the output of their component, based on their knowledge of how the component transforms input into output. In order to do so, they also need risk assessment-related information with regard to the input of their component. If this is provided, it is possible to have (a certain level of) trust propagate throughout the system, if components are designed, constructed and used according to the following rules:

1. Components determine if input is provided as promised. They establish a level of trustworthiness for each supplying component.

2. Components include risk assessment (related) information in their output. They provide information for establishing their trustworthiness to other components.

3. Components are auditable by third parties. Noninvolved experts can assess the risk involved in using a component.
In the next sections, we will describe the first two rules in more detail. If these rules are followed, the likelihood increases of being able to trace back potential root causes in case of incorrect behaviour. A more detailed description of the third rule is beyond the scope of this chapter as it involves auditing and certificate practices (Figure 1).

#### 3.2.1. Receiving as promised

Figure 1. Basic trustworthiness model.

Delivery of output as input to another component (e.g. measurement data, analysis results, model-based estimations of probability, etc.) has different aspects. For each of these aspects, risks can be identified. We provide a non-exhaustive list of aspects:

1. Completeness. Whether or not all input was received, having everything that is needed.

2. Timeliness. Whether or not all input was received in time. Sometimes data becomes useless if it arrives too late. For example, receiving a warning about potential flooding 3 days after the flood is a problem for a system that provides information for people who have to decide on a possible evacuation. With respect to behaviour in time, the following types can be identified in terms of reliability (Figure 2):

i. Perfect world: input is always delivered on time.



Figure 2. Aspects of timeliness. The display dots (M1-Mn) represent input data created at wall clock time T1-Tn.

Figure 3. Relationship between precision and accuracy.

6. Validity: the degree to which input conforms to agreements on syntax and semantics. For example, if the temperature of a freezer is noted in degrees Fahrenheit, it might be accurate, precise and consistent. However, if the agreement was to report in degrees Celsius, the data is not valid.

Using the concepts of completeness, timeliness and correctness, we can define a tree of trustworthiness for a component as seen in Figure 4.

The trustworthiness of a component is determined by its availability to other components and the quality of the output it provides (in time). That quality can be described in terms of 'quality of transport' (QoT) and 'quality of data' (QoD), each from the viewpoint of completeness, timeliness and correctness. From a QoT perspective, the received input is considered as black box, and the focus is on the arrival in time. From a QoD perspective, the content, the data, is considered.
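As an illustration, the tree of trustworthiness could be represented as a simple scoring structure. The numeric scores and the weakest-link combination rule are our own assumptions for this sketch; the chapter does not prescribe how scores should be combined:

```python
# Sketch of the tree of trustworthiness (Figure 4): availability combined
# with quality of transport (QoT) and quality of data (QoD), each judged
# on completeness, timeliness and correctness. Scores and the 'minimum of
# all leaves' combination rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class QualityScores:
    completeness: float  # 0.0 (none) .. 1.0 (full)
    timeliness: float
    correctness: float

    def worst(self) -> float:
        return min(self.completeness, self.timeliness, self.correctness)

def trustworthiness(availability: float, qot: QualityScores,
                    qod: QualityScores) -> float:
    """Weakest-link combination: a component is only as trustworthy as
    its least trustworthy aspect."""
    return min(availability, qot.worst(), qod.worst())

qot = QualityScores(completeness=1.0, timeliness=0.9, correctness=1.0)
qod = QualityScores(completeness=0.95, timeliness=1.0, correctness=0.8)
print(trustworthiness(availability=0.99, qot=qot, qod=qod))  # 0.8
```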

This tree of trustworthiness could also be used to design components that explicitly filter input based on the different aspects: timeliness, completeness and correctness (see Figure 5). Depending on the impact of using input that, for example, did not arrive on time or was partially incomplete, a component might decide not to produce any output. This could in turn result in a cascade of components that stop producing output, thereby signalling the end user that the system as a whole can no longer be trusted at the same level as before. Whether or not the system as a whole should show this kind of behaviour depends on the specific purpose of the risk management system. It might also be possible to come up with 'best effort' output and explicitly communicate this with the output. This is covered in the next subsection.

Figure 4. Tree of trustworthiness.

Figure 5. Filtering of input based on risk assessment-related aspects.
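Such a filtering component might look as follows in outline. The message fields, the timeliness threshold and the 'stop producing output' policy are illustrative assumptions:

```python
# Sketch of a component that filters its input on completeness and
# timeliness and refuses to produce output when trust in the input is
# insufficient, so that distrust cascades explicitly downstream.
from datetime import datetime, timedelta
from typing import Optional

REQUIRED_FIELDS = {"sensor_id", "value", "created_at"}
MAX_AGE = timedelta(hours=1)

def process(message: dict, now: datetime) -> Optional[dict]:
    # Completeness: all required fields must be present.
    if not REQUIRED_FIELDS.issubset(message):
        return None  # no output: downstream components see the gap
    # Timeliness: input older than MAX_AGE is discarded.
    if now - message["created_at"] > MAX_AGE:
        return None
    # Produce output and pass trust-related information along (rule 2).
    return {
        "derived_value": message["value"] * 2,
        "input_age_s": (now - message["created_at"]).total_seconds(),
    }

now = datetime(2024, 1, 1, 12, 0)
fresh = {"sensor_id": "s1", "value": 21.0, "created_at": now - timedelta(minutes=5)}
stale = {"sensor_id": "s1", "value": 21.0, "created_at": now - timedelta(hours=3)}

print(process(fresh, now))  # produces output
print(process(stale, now))  # None: the component stops producing output
```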

#### 3.2.2. Including risk assessment information for trust

In the previous subsection, we focussed on assessing risk by components that receive input. As described before, experts from one domain of expertise cannot review the work of experts in another domain that they have not mastered. Therefore, components need to include information about the inputs that were used into their output. From a viewpoint of intercomponent communication, this can be done in two ways:

1. In band: the risk assessment information is included with the produced data/information itself.

2. Out of band: the risk assessment information is made available separately from the produced data/information.
Wherever risk assessment information about produced data/information is made available (in band or out of band), there need to be agreements on the syntax and semantics of accuracy, probability, etc.
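An 'in band' variant could be sketched as a message that carries quality metadata next to the payload. The field names and their semantics below are hypothetical and would have to be agreed between components:

```python
# Sketch of 'in band' risk assessment information: the output message
# carries quality metadata alongside the payload, using an agreed
# (here: hypothetical) syntax so downstream components can interpret it.
import json

message = {
    "payload": {"track_id": "NL-042", "height_deviation_mm": 2.8},
    "quality": {
        "accuracy_mm": 0.2,     # agreed semantics: 1-sigma measurement accuracy
        "completeness": 0.97,   # fraction of expected samples received
        "measured_at": "2024-01-01T12:00:00Z",
        "source": "measurement-train-7",
    },
}

encoded = json.dumps(message)
decoded = json.loads(encoded)
# A receiving component can now weigh the payload by its stated quality.
print(decoded["quality"]["accuracy_mm"])
```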

### 4. Conclusion


In this chapter we have discussed the concept of trust propagation in information and communication technology-based systems that are used for risk management, from a risk assessment point of view. We have provided examples of such risk management systems and shown the possible types of risks involved. Furthermore, we have provided suggestions on how to enhance assessment of risk of these systems, by applying the concept of 'separation of concerns' and making risk assessment information explicitly available.

### Author details

Kristian Helmholt\*, Matthijs Vonder, Bram Van Der Waaij, Elena Lazovik and Niels Neumann

\*Address all correspondence to: kristian.helmholt@tno.nl

Netherlands Organisation for Applied Scientific Research (TNO), The Hague, Netherlands

### References


[1] Helmholt K, Courage W. Risk management in large scale underground infrastructures. In: 2013 IEEE International Systems Conference (SysCon); 2013. Orlando, FL: IEEE; 2013. pp. 902-908. DOI: 10.1109/SysCon.2013.6549991

[2] van den Heuvel F, Schouten M, Abspoel L, Courage W, Kruse H, Langius E. InSAR for risk-based asset management of pipeline networks (poster). In: European Space Agency Living Planet Symposium; 9–13 May 2016. Prague; 2016

[3] de Bruijn R, et al. Differential subsidence and its effect on subsurface infrastructure: Predicting probability of pipeline failure (STOOP project). In: 19th EGU General Assembly, EGU2017; 23–28 April 2017. Vienna, Austria; 2017. p. 15924

[4] Helmholt KA, et al. A structured approach to increase situational awareness in low voltage distribution grids. In: 2015 IEEE Eindhoven PowerTech; 2015. Eindhoven: IEEE; 2015. pp. 1-6. DOI: 10.1109/PTC.2015.7232779

[5] Helmholt KA, Broenink EG. Degrees of freedom in information sharing on a greener and smarter grid. In: ENERGY 2011: The First International Conference on Smart Grids, Green Communications and IT Energy-Aware Technologies; May 22–27, 2011. Venice, Italy; 2011

[6] van der Weerdt CA, Kort J, de Boer J, Paradies GL. Smart dairy farming in practice: Design requirements for user-friendly data based services. In: Conference Proceedings for the International Conference on Precision Dairy Farming. Leeuwarden, Netherlands; 21–23 June, 2016. pp. 427-432

[7] Vonder MR, van der Waaij BD, Harmsma EJ, Donker G. Near real-time large scale (sensor) data provisioning for PLF. In: Guarino M, Berckmans D, editors. European Conference on Precision Livestock; 15–18 September 2015. pp. 290-297

**Provisional chapter**

### **Risk Assessment for Collaborative Operation: A Case Study on Hand-Guided Industrial Robots**

DOI: 10.5772/intechopen.70607

Varun Gopinath, Kerstin Johansen and Johan Ölvander

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/intechopen.70607

#### **Abstract**

Risk assessment is a systematic and iterative process, which involves risk analysis, where probable hazards are identified, and then corresponding risks are evaluated along with solutions to mitigate the effect of these risks. In this article, the outcome of a risk assessment process will be detailed, where a large industrial robot is used as an intelligent and flexible lifting tool that can aid operators in assembly tasks. The realization of a collaborative assembly station has several benefits, such as increased productivity and improved ergonomic work environment. The article will detail the design of the layout of a collaborative assembly workstation, which takes into account the safety and productivity concerns of automotive assembly plants. The hazards associated with hand-guided collaborative operations will also be presented.

**Keywords:** hand-guided robots, industrial system safety, collaborative operations, human-robot collaboration, risk assessment, hazards

### **1. Introduction**

In a manufacturing context, collaborative operations refer to specific applications where operators and robots share a common workspace [1, 2]. This allows operators and industrial robots to share assembly tasks within the pre-defined workspace—referred to as collaborative workspace—and this ability to work collaboratively is expected to improve productivity as well as the working environment of the operator [3].

As pointed out by Marvel et al. [1], collaborative operation implies that there is a higher probability for the occurrence of hazardous situations due to the close proximity of humans and industrial robots. These hazardous situations can lead to serious injury and, therefore, safety needs to be guaranteed while developing collaborative applications [4].

© 2016 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons

ISO 10218-1 [5] and ISO 10218-2 [6] are international standards aimed at specifying requirements for safety on the design of industrial robots and robotic systems, respectively. They recognize collaborative applications and list four specific types of collaborative operations, namely (1) safety-rated monitored stop, (2) hand-guiding, (3) speed and separation monitoring, and (4) power and force limiting that can be implemented either individually or as a combination of one or more types.

As industrial robots and robotic systems are designed and integrated into specific manufacturing applications, the safety standards state that a risk assessment needs to be conducted to ensure safe and reliable operations. Risk assessment, as standardized in ISO 12100 [7], is a detailed and iterative process of (1) risk analysis followed by (2) risk evaluation. The safety standards also state that the effect of residual risks needs to be eliminated or mitigated through appropriate risk reduction measures. The goal of a risk assessment program is to ensure that operators, equipment, as well as the environment are protected.

As pointed out by Clifton and Ericson [8], hazard identification is a critical step, where the aim is the cognitive process of hazard recognition, whereas the solutions to mitigate the risks are relatively straightforward. Etherton et al. noted that designers lack a database of known hazards during innovation and design stages [9]. The robot safety standards (ISO 10218-1 [5] and ISO 10218-2 [6]) also have tabulated a list of significant hazards whose purpose is to inform risk assessors of probable inherent dangers associated with robot and robotic systems. Therefore, a case study [10] is used to investigate the characteristics of hazards and the associated risks that are relevant for collaborative operation. The study is focused on a collaborative assembly station, where large industrial robots and operators are to share a common workspace enabled through the application of a systematic and standardized risk assessment process followed by risk reduction measures.

This article is structured as follows: in Section 2, an overall description of the methodology used to conduct the research will be presented along with its limitations; Section 3 will detail the theoretical background; and Section 4 will present the results of the article, followed by a discussion of the results and concluding remarks on future work.

#### **1.1. Background**

Recently, there have been many technological advances within the area of robot control which aim to solve perceived issues associated with robot safety [11]. A safe collaborative assembly cell, where operators and industrial robots collaborate to complete assembly tasks, is seen as an important technological solution for several reasons, including (1) the ability to adapt to market fluctuations and trends [12]; (2) the possibility to decrease takt time [13, 14]; and (3) an improved working environment through a decreased ergonomic load on the operator [15].

An automotive assembly plant is typically separated into three units: (1) the highly automated body-in-white unit where industrial robots are used to weld sheet metal parts that form the chassis; (2) the body painting unit and (3) the final assembly unit where various components of an automotive are assembled sequentially. The final assembly plants within the automotive industry can be characterized as:


Though operators are often aided by powered tools, such as pneumatic nut-runners as well as lifting tools, to carry out assembly tasks, there is a need to improve the ergonomics of their work environment. As pointed out by Ore et al. [15], there is demonstrable potential for collaborative operations to aid operators in various tasks, including assembly and quality control.

Earlier attempts at introducing automation devices, such as cobots [13, 16], have resulted in custom machinery that functions as ergonomic support. Recently, industrial robots specifically designed for collaboration, such as the UR10 [17] and the KUKA iiwa [18], have become available; these can be characterized as (1) having the ability to detect collisions with any part of the robot structure and (2) carrying smaller loads and having shorter reach compared to traditional industrial robots. This feature, coupled with the ability to detect collisions, fulfills the condition for power and force limiting.

Industrial robots that do not have the power and force limiting feature, such as the KUKA KR210 [18] or the ABB IRB 6600 [19], have traditionally been used within fenced workstations. In order to enter a robot workspace, the operator was required to deliberately open a gate, which was monitored by a safety device that stops all robot and manufacturing operations within the workstation. As mentioned before, the purpose of the research project was to explore collaborative operations where traditional industrial robots are employed for assembly tasks. These robots have the capacity to carry heavy loads with long reach, which can be effective for various assembly tasks. However, these advantages correspond to an inherent source of hazard that needs to be understood and managed with appropriate safety-focused solutions.

### **2. Working methodology**


To take advantage of the physical performance characteristics of large industrial robots along with the advances in sensor and control technologies, a research project, ToMM [20], comprising members representing the automotive industry, research institutes, and academic institutions was tasked with understanding and specifying industry-relevant safety requirements for collaborative operations.

#### **2.1. Industrial relevance**

The requirements for safety that are relevant for the manufacturing industry are detailed in various standards, such as ISO EN 12100 and ISO EN 10218 (parts 1 and 2), which are maintained by organizations such as the International Organization for Standardization (ISO [21]) and the International Electrotechnical Commission (IEC [22]). Though these organizations do not have the authority to enforce the standards, a legislative body such as the European Union, through the EU Machinery Directive, mandates compliance with normative standards [23], which are prefixed with EN before their reference number.

#### **2.2. Problem study and data collection**

The objective of the research was to understand the safety requirements for high-volume assembly stations when industrial robots are to be used in a collaborative manner. A case-based approach [10] was followed, where the initial study was focused on an assembly station where a heavy engine component is assembled on an engine block. To gain a better understanding and knowledge of the case study, the following methods were employed:

**1.** Regular meetings, in order to have detailed discussions with engineers and line managers at the assembly plant [24].

**2.** Visits to the plant, which allowed the researchers to directly observe the functioning of the station. This also enabled the researchers to have informal interviews with line workers regarding the assembly tasks as well as the working environment.

**3.** Participation in the assembly process, guided by the operators, which allowed the researchers to gain an intuitive understanding of the nature of the task.

**4.** A review of literature sourced from academia, books, as well as documentation from various industrial equipment manufacturers.

#### **2.3. Integrating safety in early design phase**

Introduction of a robot into a manual assembly cell might lead to unforeseen hazards whose potential to cause harm needs to be eliminated or minimized. The machinery safety standard [7] suggests the practice of conducting risk assessment followed by risk reduction measures to ensure the safety of the operator as well as other manufacturing processes. The risk assessment process is iterative and concludes when all probable hazards have been identified and solutions to mitigate the effects of these hazards have been implemented. This process is usually carried out through a safety program and can be documented according to [25].

**Figure 1** depicts an overview of the safety-focused design strategy employed during the research and development phase. The case study was analyzed to understand the benefits of collaborative operations through a conceptual study, where the overall robot, operator, and collaborative tasks were specified. Employing the results of the conceptual study, the risk assessment methodology followed by risk reduction was carried out, where each phase was supported by the use of demonstrators. Björnsson [26] and Jonsson [27] have elaborated the principles of demonstrator-based design along with their perceived benefits, and this methodology has been employed in this research work within the context of safety for collaborative operations.

**Figure 1.** Overview of the demonstrator-based design methodology employed to ensure a safe collaborative workstation.

### **3. Theoretical background**

This section begins with an overview of industrial robots, after which concepts from hazard theory, industrial system safety and reliability, and the task-based risk assessment methodology will be detailed.

#### **3.1. Industrial robotic system and collaborative operations**

An industrial robot is defined as an automatically controlled, reprogrammable, multipurpose manipulator, programmable in three or more axes, which can be either fixed in place or mobile for use in industrial automation applications [28]. **Figure 2(A)** shows an illustration of an articulated six-axis manipulator along with the control cabinet and a teach pendant. The control cabinet houses various control equipment such as motor controller, input/output modules, network interfaces, etc.

The teach pendant is used to program the robot, where each line of code establishes a robot pose—in terms of coordinates x, y, z and angles A, B, C—which, when executed, allows the robot to complete a task. This method of programming is referred to as position control, where individual robot poses are explicitly hard coded. In contrast to position control, sensor-based control allows motion control to be regulated by sensor values. Examples of sensors include vision, force and torque, etc.

On a manufacturing line, robots can be programmed to move at high speed undertaking repetitive tasks. This mode of operation is referred to as automatic mode and allows the robot controller to execute the program in a loop, provided all safety functions are active. Additionally, ISO 10218-1 [5] defines a manual reduced-speed mode to allow safe programming and testing of the intended function of the robotic system, where the speed is limited to 250 mm/s at the tool center point. The manual high-speed mode allows the robot to be moved at high speed, provided all safety functions are active, and this mode is used for verification of the intended function.
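As a toy illustration of the speed constraint per operating mode, the check below encodes the 250 mm/s limit at the tool center point for manual reduced-speed operation. This is a sketch only, not a certified safety function; the function name and enforcement logic are our assumptions:

```python
# Sketch of a per-mode speed check. The 250 mm/s TCP limit for manual
# reduced-speed operation is stated in ISO 10218-1; everything else
# (names, the boolean interface) is an illustrative assumption.

MANUAL_REDUCED_SPEED_LIMIT_MM_S = 250.0  # measured at the tool center point

def speed_permitted(mode: str, tcp_speed_mm_s: float) -> bool:
    """Return True if the TCP speed is allowed in the given operating mode."""
    if mode == "manual_reduced":
        return tcp_speed_mm_s <= MANUAL_REDUCED_SPEED_LIMIT_MM_S
    # automatic and manual high-speed modes rely on the safety functions
    # being active rather than on this fixed TCP speed cap
    return True

print(speed_permitted("manual_reduced", 200.0))  # True
print(speed_permitted("manual_reduced", 400.0))  # False
```

In a real controller this check would be part of a safety-rated monitoring function, not application code.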

The workspace within the robotic station where robots run in automatic mode is termed Robot Workspace (see **Figure 2(B)**). In collaborative operations, where operators and robots can share a workspace, a clearly defined Collaborative Workspace is suggested by [29]. Though the robot can be moved in automatic mode within the collaborative workspace, the speed of the robot is limited [29] and is determined during risk assessment.

**Figure 2.** (A) An example of a manipulator along with the control box and the teach pendant. Examples include KUKA KR-210 [18] and ABB IR 6620 [19]. (B) Illustrates the interaction between the three participants of a collaborative assembly cell within their corresponding workspaces [3].

Robot safety standards recognize the implementation of one or more of the following four different modes of collaborative operation:

**1.** Safety-rated monitored stop stipulates that the robot ceases its motion with a category 2 stop when the operator enters the collaborative workspace. In a category 2 stop, the robot can decelerate to a stop in a controlled manner.

**2.** Hand-guiding allows the operator to send position commands to the robot with the help of a hand-guiding tool attached at or close to the end-effector.

**3.** Speed and separation monitoring allows the operator and the robot to move concurrently in the same workspace, provided that there is a safe separation distance between them which is greater than the prescribed protective separation distance determined during risk assessment.

**4.** Power and force limiting operation refers to robots that are designed to be intrinsically safe and allows contact with the operator provided it does not exert force (either quasi-static or transient contact) larger than a prescribed threshold limit.

#### **3.2. Robotic system safety and reliability**

An industrial robot normally functions as part of an integrated manufacturing system (IMS) where multiple subsystems that perform different functions operate cohesively. As noted by Leveson (page 14 [30]), safety is a system property (not a component property) and needs to be controlled at the system level. This implies that safety as a property needs to be considered at early design phases, which Ericson (page 34 [8]) refers to as CD-HAT or Conceptual Design Hazard Analysis Type. CD-HAT is the first of seven hazard analysis types, which need to be considered during the various design phases in order to avoid costly design rework.

To realize a functional IMS, a coordinated effort in the form of a system safety program (SSP [8]), which involves participants with various levels of involvement (such as operators, maintenance, line managers, etc.), is carried out. Risk assessment and risk reduction processes are conducted in conjunction with the development of an IMS, in order to promote safety during development, commissioning, maintenance, upgrading, and finally decommissioning.

#### *3.2.1. Functional safety and sensitive protective equipment (SPE)*

Functional safety refers to the use of sensors to monitor for hazardous situations and take evasive actions upon detection of an imminent hazard. These sensors are referred to as sensitive protective equipment (SPE), and the selection, positioning, configuration, and commissioning of this equipment have been standardized and detailed in IEC 62046 [31]. IEC 62046 defines the performance requirements for this equipment and, as stated by Marvel and Norcross [32], when triggered, these sensors use electrical safety signals to trigger the safety function of the system. It includes provisions for two specific types: (1) electro-sensitive protective equipment (ESPE) and (2) pressure-sensitive protective equipment (PSPE). These are to be used for the detection of the presence of human beings and can be used as part of the safety-related system [31].

Electro-sensitive protective equipment (ESPE) uses optical, microwave, and passive infrared techniques to detect operators entering a hazard zone. That is, unlike a physical fence, where the operators and the machinery are physically separated, ESPE relies on the operator entering a specific zone for the sensor to be triggered. Examples include laser curtains [33], laser scanners [34], and vision-based safety systems such as the SafetyEye [35].

Pressure-sensitive protective equipment (PSPE) has been standardized in parts 1–3 of ISO 13856 and works on the principle of an operator physically engaging a specific part of the workstation. These include: (1) ISO 13856-1—pressure-sensitive mats and floors [36]; (2) ISO 13856-2—pressure-sensitive bars and edges [37]; and (3) ISO 13856-3—bumpers, plates, wires, and similar devices [38].

#### *3.2.2. System reliability*

Successful robotic systems are both safe to use and reliable in operation. In an integrated manufacturing system (IMS), reliability is the probability that a component of the IMS will perform its intended function under pre-specified conditions [39]. One measure of reliability is MTTF (mean time to failure), and ranges of this measure have been standardized into five discrete performance levels (PL) ranging from a to e. For example, PL = d refers to 10⁻⁶ > MTTF ≥ 10⁻⁷, which is the required performance level with a category 3 structure in ISO 10218-2 (page 10, Section 5.2.2 [6]). That is, in order to be viable to the industry, the final design of the robotic system should reach or exceed the minimum required performance level.
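The link between MTTF and reliability as "the probability of performing the intended function" can be made concrete under the common constant-failure-rate assumption, where R(t) = exp(−t/MTTF). This exponential model is our illustrative assumption, not something stated in the chapter:

```python
import math

# Under a constant-failure-rate (exponential) model, the reliability of a
# component is R(t) = exp(-t / MTTF): the probability of surviving a
# mission time t without failure. The model choice is an illustrative
# assumption; the example MTTF value below is invented.

def reliability(t_hours: float, mttf_hours: float) -> float:
    """Probability of operating for t_hours without failure."""
    return math.exp(-t_hours / mttf_hours)

# e.g. a component with an MTTF of 100,000 h over one year (8760 h)
# of continuous operation
print(round(reliability(8760.0, 100_000.0), 3))  # 0.916
```

Standards then discretize such quantitative measures into the performance levels a through e mentioned above.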

#### **3.3. Hazard theory: hazards, risks, and accidents**

Ericson [8] states that a mishap or an accident is an event which occurs when a hazard, or more specifically a hazardous element, is actuated upon by an initiating mechanism. That is, a hazard is a pre-requisite for an accident to occur; it is defined as a potential source of harm [7] and is composed of three basic components: (1) hazardous element (HE), (2) initiating mechanism (IM), and (3) target/threat (T/T).

A hazardous element is a resource that has the potential to create a hazard. A target/threat is the person or the equipment directly affected when the hazardous element is activated by an initiating mechanism. These three components, when combined together, can be referred to as a hazard (see **Figure 3(A)**) and are essential components for it to exist. Based on these definitions, if any of the three components are removed or eliminated, by any means (see Section 3.4.2), it is possible to eliminate or reduce the effect of the hazard.
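The hazard triangle can be expressed as a small data structure in which a hazard exists only while all three components are present, so that removing any one of them eliminates it. The class and field names below are our illustration of the concept, not a tool from the chapter:

```python
from dataclasses import dataclass
from typing import Optional

# Sketch of the hazard triangle: a hazard exists only when the hazardous
# element (HE), initiating mechanism (IM), and target/threat (T/T) are all
# present. Names and example strings are illustrative assumptions.

@dataclass
class Hazard:
    hazardous_element: Optional[str]      # HE, e.g. a moving robot arm
    initiating_mechanism: Optional[str]   # IM, e.g. operator entering the cell
    target_threat: Optional[str]          # T/T, e.g. the operator

    def exists(self) -> bool:
        """The hazard exists only if all three components are present."""
        return all([self.hazardous_element,
                    self.initiating_mechanism,
                    self.target_threat])

h = Hazard("robot arm and safety pole", "operator enters workstation", "operator")
print(h.exists())              # True
h.initiating_mechanism = None  # e.g. guarding prevents entry into the cell
print(h.exists())              # False
```

This mirrors the design principle in the text: risk reduction measures work by eliminating at least one vertex of the triangle.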

To better illustrate these concepts, consider the fatal accident that took place on July 21, 1984, where an experienced operator entered a robotic workstation while the robot was in automatic mode (see **Figure 3(B)**). The robot was programmed to grasp a die-cast part, dip the part in a quenching tank and place it on an automatic trimming machine. According to Lee et al. [40], the operator was found pinned between the robot and a safety pole by an operator of an adjacent die-cast station, who became curious after hearing the hissing noise of the air hose for 10–15 min. The function of the safety pole was to limit robot motion, and together with the robot arm it can be considered a hazardous element. The hazard was initiated by the operator, who intentionally entered the workstation either by jumping over the rails or through a 19-inch unguarded spacing, and this caused the accident. The operator was the target of this unfortunate accident and was pronounced dead 5 days after the accident.

A hazard is designed into a system [8, 30], and whether an accident occurs depends on two factors: (1) the unique set of hazard components and (2) the accident risk presented by the hazard components, where risk is defined

**Figure 3.** (A) The hazard triangle where the three components of hazards—hazardous element, initiating mechanism, and target/threat—are essential and required for the hazard to exist (adapted from page 17 [8]). (B) Shows the layout of the robotic workstation where a fatal accident took place on July 21, 1984 [40].

Ericson notes that a good hazard description can support the risk assessment team in better understanding the problem and can therefore enable them to make better judgments (e.g., understanding the severity of the hazard); he therefore suggests that a good hazard description needs to contain the three hazard components.

#### **3.4. Task-based risk assessment and risk reduction**

Risk assessment is a general methodology where the scope is to analyze and evaluate risks associated with complex system. Various industries have specific methodologies with the same objective. Etherton has summarized a critical review of various risk assessment methodologies for machine safety in [41]. According to ISO 12100, risk assessment (referred to as MSRA—machine safety risk assessment [41]) is an iterative process which involves two sequential steps: (1) risk analysis and (2) risk evaluation. ISO 12100 suggests that if risks are deemed serious, measures should be taken to either eliminate or mitigate the effects of the risks through risk reduction as depicted in **Figure (4)**.

#### *3.4.1. Risk analysis and risk evaluation*

hazard is a pre-requisite for an accident to occur and is defined as a potential source of harm [7] and is composed of three basic components: (1) hazardous element (HE), (2) initiating

A hazardous element is a resource that has the potential to create a hazard. A target/threat is the person or the equipment directly affected when the hazardous element is activated by an initiating mechanism. These three components, when combined together, can be referred to as a hazard (see **Figure 3(A)**) and are essential components for it to exist. Based on these definitions, if any of the three components are removed or eliminated, by any means (see Section

To better illustrate these concepts, consider the fatal accident that took place on July 21, 1984, where an experienced operator entered a robotic workstation while the robot was in automatic mode (see **Figure 3(B)**). The robot was programmed to grasp a die-cast part, dip the part in a quenching tank and place it on an automatic trimming machine. According to Lee et al. [40], the operator was found pinned between the robot and a safety-pole by another operator of an adjacent die-cast station who became curious after hearing the hissing noise of the air-hose for 10–15 min. The function of the safety pole was to limit robot motion and together with the robot-arm can be considered to be a hazardous element. The hazard was initiated by the operator who intentionally entered the workstation either by jumping over the rails or through a 19-inch unguarded spacing and caused the accident. The operator was the target of

A hazard is designed into a system [8, 30] and for accident to occur depends on two factors: (1) unique set of hazard components and (2) accident risk presented by the hazard components,

*Risk* = *Probability* x *Severity* (1)

**Figure 3.** (A) The hazard triangle where the three components of hazards—hazardous element, initiating mechanism, and target/threat—are essential and required for the hazard to exist (adapted from page 17 [8]). (B) Shows the layout of

the robotic workstation where a fatal accident took place on July 21, 1984 [40].

this unfortunate accident and was pronounced dead after 5 days of the accident.

mechanism (IM), and (3) target/threat (T/T).

174 Risk Assessment

where risk is defined

3.4.2), it is possible to eliminate or reduce the effect of the hazard.

Within the context of machine safety, risk analysis begins with identifying the limits of the machinery, where the limits in terms of space, use, and time are identified and specified. Within this boundary, activities focused on identifying hazards are undertaken. The preferred context for identifying hazards in robotic systems is task-based, where the tasks that need to be undertaken during the various phases of operation are first specified. The risk assessors then specify the hazards associated with each task. Hazard identification is a critical step, and ISO 10218-1 [5] and ISO 10218-2 [6] tabulate significant hazards associated with robotic systems. However, they do not explicitly state the hazards associated with collaborative operations.
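The task-based structure amounts to documenting hazards per task; a minimal sketch (the task names and hazards below are illustrative examples, not the project's actual hazard list):

```python
# Hazards documented per task, as in a task-based hazard identification.
# Entries are illustrative examples only.
task_hazards = {
    "pick FWC from material rack": ["unexpected robot motion"],
    "hand-guide FWC to engine block": ["clamping between FWC and engine block",
                                       "unintended mode change"],
}


def hazards_for(task: str) -> list:
    """Return the documented hazards for a task (empty if none recorded)."""
    return task_hazards.get(task, [])
```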

Risk evaluation is based on systematic metrics in which the severity of injury, exposure to the hazard, and the possibility of avoiding the hazard are used to evaluate the hazard (see page 9, RIA TR R15.306-2014 [25]). The evaluation results in a risk level (negligible, low, medium-high, or very high) and determines the risk reduction measures to be employed. To support the activities associated with risk assessment, ISO TR 15066 [29] details the information required to conduct a risk assessment specifically for collaborative applications.
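To illustrate how such parameters can combine into a risk level, consider the following sketch; the additive scoring and thresholds are invented for illustration and do not reproduce the decision tables of RIA TR R15.306-2014:

```python
def risk_level(severity: int, exposure: int, avoidance: int) -> str:
    """Combine the three evaluation parameters into a risk level.

    Each parameter is scored 1 (least critical) to 3 (most critical).
    The additive model and the thresholds are illustrative assumptions.
    """
    for score in (severity, exposure, avoidance):
        if not 1 <= score <= 3:
            raise ValueError("scores must be between 1 and 3")
    total = severity + exposure + avoidance
    if total <= 4:
        return "negligible"
    if total <= 6:
        return "low"
    if total <= 8:
        return "medium-high"
    return "very high"
```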

#### *3.4.2. Risk reduction*

When risks are deemed serious, the methodology demands measures to eliminate and/or mitigate them. Designers have a hierarchical methodology that can be employed to varying degrees depending on the risks that have to be managed. The three hierarchical methods allow designers to optimize the design, choosing either one or a combination of the methods to sufficiently eliminate or mitigate the risks. They are: (1) inherently safe design measures; (2) safeguarding and/or complementary protective measures; and (3) information for use.
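The hierarchy can be read as a loop that applies the most preferred measure first and re-evaluates the residual risk; the following sketch assumes each measure removes some fraction of the risk (the residual-risk factors are invented for illustration):

```python
# Preferred order of risk reduction measures (ISO 12100 hierarchy);
# the residual-risk factors are illustrative assumptions only.
HIERARCHY = [
    ("inherently safe design measures", 0.2),
    ("safeguarding and/or complementary protective measures", 0.5),
    ("information for use", 0.8),
]


def reduce_risk(initial_risk: float, acceptable: float):
    """Apply measures in order until the residual risk is acceptable."""
    risk, applied = initial_risk, []
    for measure, factor in HIERARCHY:
        if risk <= acceptable:
            break
        risk *= factor  # each measure leaves a fraction of the risk
        applied.append(measure)
    return applied, risk
```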

**Figure 4.** An overview of the task-based risk assessment methodology.

### **4. Result: demonstrator for a safe hand-guided collaborative operation**

In this section, the development and functioning of a safe assembly station is detailed, where a large industrial robot is used in a hand-guided collaborative operation. To understand the potential benefits of hand-guided industrial robots, an automotive assembly station is presented as a case study in Section 4.1. With the aim of improving the ergonomics of the assembly station and increasing productivity, the assembly tasks are conceptualized as robot, operator, and collaborative tasks, where the collaborative task is the hand-guided operation described in Section 4.2. The results of the iterative risk assessment and risk reduction process (see Section 3.4) are detailed in Section 4.3. The final layout and the task sequence are detailed in Section 4.4, and **Table 1** documents the hazards identified during risk assessment that were used to improve the safety features of the assembly cell.

#### **4.1. Case study: manual assembly of a flywheel housing cover**

The assembly task is to install a flywheel housing cover (FWC) on the engine block, with an intermediate step between picking the FWC from the material rack and securing it on the engine block with fasteners. The assembly of the FWC, which weighs 20 kg, is a manual operation carried out by one or more operators (see **Figure 5(A)**) and can be described as follows:



**Table 1.** The table describes the hazards that were identified during the risk assessment process.

**1.** An operator picks up the flywheel housing cover (FWC) with the aid of a lifting device from position P1. The covers are placed on a material rack, which can contain up to three part variants.

**2.** The operator moves from position P1 to P2 by pushing the FWC and installs it on the machine (integrated machinery), where secondary operations will be performed.

**3.** After the secondary operation, the operator pushes the FWC to the engine housing (position P3). Here, the operator needs to align the flywheel housing cover with the engine block with the aid of guiding pins. After the two parts are aligned, the operator pushes the flywheel housing cover forward until the two parts are in contact. The operator must exert force to mate these two surfaces.

**4.** The operators then begin to fasten the parts with several bolts with the help of two pneumatically powered devices. In order to keep the takt time low, these tasks are done in parallel and require the participation of more than one operator.

**Figure 5.** (A) The manual workstation where several operators work together to assemble flywheel housing covers (FWC) on the engine block. (B) The robot placing the FWC on the integrated machinery. (C) The robot being hand-guided by an operator, thereby reducing the ergonomic effort to position the flywheel housing cover on the engine block.

#### **4.2. Task allocation and conceptual design of the hand-guiding tool**

**Figure 5(B)** and **(C)** show ergonomic simulations reported by Ore et al. [15], where the operator is aided by an industrial robot to complete the task. The first two tasks can be automated by the robot, i.e., picking the FWC from position P1 and moving it to the integrated machine (position P2, **Figure 5(B)**). The robot then moves the FWC to the hand-over position, where it comes to a stop and signals to the operator that the collaborative mode is activated. This allows the operator to hand-guide the robot by grasping the FWC and directing the motion towards the engine block.

Once the motion of the robot is under human control, the operator can assemble the FWC onto the engine block and proceed to secure it with bolts. After the bolts have been fastened, the operator moves the robot back to the hand-over position and reactivates the automatic mode, which starts the next cycle.
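The cycle described above behaves like a small state machine; the following sketch is our own abstraction of it (the state and event names are hypothetical, not taken from the robot controller):

```python
# Assembly-cycle states and the events that move between them.
# Names are illustrative abstractions of the sequence in Section 4.2.
TRANSITIONS = {
    ("AUTOMATIC", "reached_handover_position"): "WAITING_AT_HANDOVER",
    ("WAITING_AT_HANDOVER", "operator_engages_hand_guiding"): "COLLABORATIVE",
    ("COLLABORATIVE", "robot_returned_to_handover"): "WAITING_AT_HANDOVER",
    ("WAITING_AT_HANDOVER", "automatic_mode_reactivated"): "AUTOMATIC",
}


def step(state: str, event: str) -> str:
    # Events that are not valid in the current state are ignored.
    return TRANSITIONS.get((state, event), state)
```

Note that hand-guiding can only be engaged at the hand-over position: the same event issued in automatic mode leaves the state unchanged.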

#### **4.3. Safe hand-guiding in the collaborative workspace**

The risk assessment identified several hazardous situations that can affect safe functioning during the collaborative mode, that is, when the operator enters the workstation and hand-guides the robot to assemble the FWC; these are tabulated in **Table 1**.

**Figure 6(A)** and **(B)** show two versions of the end-effector that were developed to support hand-guided robotic assembly. The safety-focused design of the hand-guiding tool shown in **Figure 6(A)** has been detailed by Gopinath et al. [42], where the interfaces are part of the end-effector. That is, in an open enclosure (without physical fences—not shown), the location for the interfaces and control devices would optimally be a design feature of the end-effector. However, the risk assessment pointed out that an open enclosure might require the following safety measures:

**1.** The robot needs to be programmed to move at slow speed so that it can stop in time, in accordance with the speed and separation monitoring mode of collaborative operation.

**2.** To implement speed and separation monitoring, a safety-rated vision system might be a probable solution. However, this may not be a viable solution on the current factory floor.

**Figure 6.** (A) and (B) are two versions of the end-effector that was prototyped to verify and validate the design.

The limited space, high volume, and the nature of the hazards put severe restrictions on the type of safety solution that can be considered. An enclosed station is shown in **Figure 7**, where physical fences are used as a safeguarding measure to limit personnel movement, thereby eliminating the possibility of an operator accidentally entering the robot workspace. The layout of this collaborative station has been detailed by Gopinath et al. [43], and **Table 2** compares the design features of the two versions. The change from Design A to Design B was motivated by a change in requirements.

#### **4.4. Demonstrator for a safe hand-guided collaborative assembly workstation**

**Figure 7** shows a picture of the demonstrator developed in a laboratory environment. Here, a KUKA KR-210 industrial robot is part of the robotic system, where the safeguarding solutions include the use of physical fences as well as sensor-based solutions.

**Figure 8** describes the sequence of tasks necessary to complete the assembly operations. These tasks have been separated into three, i.e., robot, operator, and collaborative tasks, which are detailed in **Table 3**.

#### *4.4.1. Safeguarding*

With the understanding that operators are any personnel within the vicinity of hazardous machinery [7], physical fences can be used to ensure that they do not accidentally enter a hazardous zone. The design requirement that the engine block be outside the enclosed zone meant that the robot needs to move out of the fenced area during collaborative mode (see **Figure 8**). Therefore, the hand-over position is located inside the enclosure and the assembly point is located outside of it; both points are part of the collaborative workspace. The opening in the fences is monitored during automatic mode using laser curtains.
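The safeguarding logic for the fence opening can be summarized in a few lines; this is a simplified sketch of the behavior described above, not the actual (certified) safety controller logic:

```python
def protective_stop_required(mode: str, curtain_interrupted: bool) -> bool:
    """The laser curtain guards the fence opening only in automatic mode.

    In collaborative mode the operator legitimately occupies the opening
    while hand-guiding the robot, so the curtain does not trigger a stop.
    Simplified illustration; real safety functions run on certified
    hardware and software.
    """
    return mode == "automatic" and curtain_interrupted
```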

**Figure 7.** The layout of the physical demonstrator installed in a laboratory environment.


**Table 2.** Feature comparison of two versions of the end-effector shown in **Figure 6(A)** and **(B)**.

Risk Assessment for Collaborative Operation: A Case Study on Hand-Guided Industrial Robots. http://dx.doi.org/10.5772/intechopen.70607

**Figure 8.** The figure describes the task sequence of the collaborative assembly station where an industrial robot is used as an intelligent and flexible lifting tool. The tasks are decomposed into three — Operator task (OT), Collaborative task (CT) and Robot task (RT) — which are detailed in **Table 3**.

| **Design feature** | **Design A** | **Design B** | **Design evaluation** |
|---|---|---|---|
| 1. Orientation of the end-effector | End-effector is perpendicular to the robot wrist | End-effector is parallel to the robot wrist | In Design A, the last two links of the robot are close to the operator, which might make operators feel unsafe. Design B might allow for an overall safer design due to the use of standardized components |
| 2. Position of the flywheel housing cover (FWC) | The FWC is positioned in front of the operator | The FWC is positioned to the left of the operator | Design A requires more effort from the operator to align the locating pins (on the engine block) and the mating holes (on the FWC); the operator loses sight of the pins when the two parts are close to each other. In Design B, it is possible to align the two parts by visually aligning the outer edges |
| 3. Location of the emergency stop | Good location and easy to actuate | Good location and easy to actuate | In Design A, it was evaluated that the E-stop can be accidentally actuated, which might lead to unproductive stops |
| 4. Location of visual interfaces | No visual interfaces | Good location and visibility | Evaluation of Design A resulted in the decision that interfaces need to be visible to all working within the vicinity |
| 5. Location of physical interfaces | Minimal physical interfaces | Good location with easy reach | Evaluation of Design A resulted in the decision that interfaces are optimally placed outside the fenced area |
| 6. Overall ergonomic design | The distance between the handles is short | The handles are angled and more comfortable | Designs A and B have good overall designs. Design B uses standardized components; Design A employs softer materials and interfaces that are easily visible |

#### *4.4.2. Interfaces*

During risk evaluation, the decision to have several interfaces was motivated. A single warning LED lamp (see **Figure 8**) can convey when the robot has finished the preprogrammed task and is waiting to be hand-guided. Additionally, the two physical buttons outside the enclosure have separate functions. The auto-continue button allows the operator to let the robot continue in automatic mode if the laser curtains were accidentally triggered by an operator; this button is located where it is not easily reached. The second button is meant to start the next assembly cycle (see **Table 1**). **Table 1** (Nos. 2 and 3) motivates the use of enabling devices to trigger the sensor-guided motion (see **Figure 6(B)**). The two enabling devices provide the following functions: (1) they act as a hand-guiding tool that the operator can use to precisely maneuver the robot; (2) by specifying that the switches on the enabling devices must be engaged for hand-guiding motion, the operator's hands are kept at a prespecified and safe location; and (3) by engaging the switches, the operator deliberately changes the mode of the robot to collaborative mode. This ensures that unintended motion of the robot is avoided.
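The interlock provided by the enabling devices can be expressed as a simple guard condition; a sketch of the logic described above (the function and parameter names are ours, not controller code):

```python
def hand_guiding_allowed(mode: str, switches_engaged: int) -> bool:
    """Hand-guided motion is permitted only when the robot is in
    collaborative mode and both enabling-device switches are held,
    which places the operator's hands at prespecified safe locations.
    Simplified illustration of the interlock described in the text.
    """
    return mode == "collaborative" and switches_engaged == 2
```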



| **Tasks** | **Task description** |
|---|---|
| 5. Collaborative task | Engage automatic mode: before going out of the assembly station, the operator needs to engage the three-button switch. This deliberate action signals to the robot that the collaborative task is complete |
| 6. Robot task | The operator goes out and engages the mode-change button. Then, the following sequence of events is carried out: (1) laser curtains are activated, (2) warning lamp turns from green to red, and (3) the robot starts the next cycle |

**Table 3.** The table articulates the sequence of tasks that were formulated during the risk assessment process.

### **5. Discussion**

In this section, the discussion will be focused on the application of the risk assessment methodology and the hazards that were identified during this process.

#### **5.1. Task-based risk assessment methodology**

A risk assessment (RA) is done on a system that exists in a form that can serve as a context within which hazards can be documented. In the case study, a force/torque sensor was used to hand-guide the robot, and this technique was chosen at the conceptual stage. RA based on this technique led to the decision to introduce enabling devices (No. 2 in **Table 1**) to ensure that, while the operator is hand-guiding the robot, the hands are at a predetermined safe location and the device is engaged. Another industrially viable solution is the use of joysticks to hand-guide the robot, but this option was not explored further during discussion as it might be less intuitive than force/torque-based control. Regardless, it is implicit that the choice of technique poses its own hazardous situations, and the risk assessors need a good understanding of the system boundary.

Additionally, during risk assessment, the failure of the various components was not considered explicitly. For example, what if the laser curtains failed to function as intended? The explanation lies in the choice of components. As stated in Section 3.2.2, for a robotic system to be considered reliable, the components must have a performance level PL = d, which implies a very low probability of failure. Most safety-equipment manufacturers publish their MTTF values along with their performance levels and the intended use.

#### **5.2. Hazards**

The critical step in conducting risk assessment (RA) is hazard identification. In Section 3.3, a hazard was decomposed into three components: (1) hazardous element (HE), (2) initiating mechanism (IM), and (3) target/threat (T/T). The three sides of the hazard triangle (Section 3.3) can be thought of as having lengths proportional to the degree to which these components can trigger the hazard and cause an accident. That is, if the IM side is much longer than the other two, then the most influential factor in causing an accident is the IM. The discussion on risk assessment (Section 3.4) stresses eliminating/mitigating hazards, which implies that the goal of risk assessment can be understood as a method to reduce or remove one or more of the sides of the hazard triangle. Therefore, documenting hazards in terms of their components might allow for simplified and straightforward downstream RA activities.

The hazards presented in **Table 1** can be summarized as follows: (1) the main source of the hazardous element (HE) is slow/fast motion of the robot; (2) the initiating mechanism (IM) can be attributed to unintended actions by an operator; and (3) the safety of the operator can be compromised, with the additional possibility of damaged machinery and disrupted production. It can also be argued, based on the presented case study, that through the use of a systematic risk assessment process, hazards associated with collaborative motion can be identified and managed to an acceptable level of risk.

As noted by Eberts and Salvendy [44] and Parsons [45], human factors play a major role in robotic system safety. Various parameters can be used to better understand the effect of human behavior on a system, such as an overloaded and/or underloaded working environment, perception of safety, etc. The risk assessors need to be aware of human tendencies and take them into consideration when proposing safety solutions. Incidentally, in the fatal accident discussed in Section 3.3, perhaps the operator did not perceive the robot as a serious threat, having referred to the robot as Robby [40].

In an automotive assembly plant, as the production volume is relatively high and the work requires collaborating with other operators, there is a higher probability for an operator to make errors. In **Table 1** (No. 6), a three-button switch was specified to prevent unintentional mode change of the robot. It is probable that an operator could accidentally engage the mode-change button (see **Figure 7**) while the robot is in collaborative mode, or when the hand-guiding operator did not intend the collaborative task to be completed. In such a scenario, a robot operating in automatic mode was evaluated to have a high risk level, and therefore the decision was made to introduce a design change with an additional safety interface—the three-button switch—that is accessible only to the hand-guiding operator.

Informal interviews suggested that the system should be inherently safe for the operators and that the task sequence—robot, operator, and collaborative tasks—should not demand constant monitoring by the operators, as that might lead to increased stress. That is, operators should feel safe and in control, and the tasks should demand minimum attention and time.

### **6. Conclusion and future work**

The article presents the results of a risk assessment program whose objective was the development of an assembly workstation that involves the use of a large industrial robot in a hand-guided collaborative operation. The collaborative workstation has been realized as a laboratory demonstrator, where the robot functions as an intelligent lifting device. That is, the tasks that can be automated have been assigned to the robot; these sequences of tasks are preprogrammed and run in automatic mode. During collaborative mode, operators are responsible for tasks that are cognitively demanding and require the skills and flexibility inherent to a human being. During this mode, the hand-guided robot carries the weight of the flywheel housing cover, thereby improving the ergonomics of the workstation.

In addition to the laboratory demonstrator, an analysis of the hazards pertinent to hand-guided collaborative operations has been presented. These hazards were identified during the risk assessment phase, where the hazardous element mainly stems from human error. The decisions taken during the risk reduction phase to eliminate or mitigate the risks associated with these hazards have also been presented.

The risk assessment was carried out through different phases, where physical demonstrators supported each phase of the process. The demonstrator-based approach allowed the researchers to have a common understanding of the nature of the system and the associated hazards; that is, it acted as a platform for discussion. The laboratory workstation can act as a demonstration platform where operators and engineers can judge for themselves the advantages and disadvantages of collaborative operations. The demonstration activities can be beneficial to researchers, as they can function as a feedback mechanism with respect to the decisions made during the risk assessment process.

Therefore, the next step is to invite operators and engineers to try out the hand-guided assembly workstation. The working hypothesis in inviting operators and engineers is that personnel whose main responsibility in an assembly plant is to find the optimal balance between various production-related parameters (such as maintenance time, productivity, safety, working environment, etc.) might have deeper insight into the challenges of introducing large industrial robots in the assembly line.

### **Acknowledgements**

The authors would like to thank Björn Backman of Swerea IVF, Fredrik Ore and Lars Oxelmark of Scania CV for their valuable contributions during the research and development phase of this work. This work has been primarily funded within the FFI program and the authors would like to graciously thank them for their support. In addition, we would like to thank ToMM 2 project members for their valuable input and suggestions.

### **Author details**

Varun Gopinath\*, Kerstin Johansen and Johan Ölvander

\*Address all correspondence to: varun.gopinath@liu.se

Division of Machine Design, Department of Management and Engineering, Linköping University, Sweden

### **References**

[1] Marvel JA, Falco J, Marstio I. Characterizing task-based human-robot collaboration safety in manufacturing. IEEE Transactions on Systems, Man, and Cybernetics: Systems. 2015;**45**(2):260-275

[2] Tsarouchi P, Matthaiakis A-S, Makris S. On a human-robot collaboration in an assembly. International Journal of Computer Integrated Manufacturing. 2016;**30**(6):580-589



[18] KUKA AG. Available from: http://www.kuka.com/ [Accessed: March 2017]

[19] ABB AB. Available from: http://www.abb.com/ [Accessed: January 2017]

[20] ToMM2—Framtida-samarbete-mellan-manniska-och-robot/ [Future Collaboration between Human and Robot]. Available from: https://www.vinnova.se/ [Accessed: June 2017]

[21] The International Organization for Standardization (ISO). Available from: https://www.iso.org/home.html [Accessed: June 2017]

[22] International Electrotechnical Commission (IEC). Available from: http://www.iec.ch/ [Accessed: June 2017]

[23] Macdonald D. Practical Machinery Safety. 1st ed. Jordan Hill, Oxford: Newnes; 2004. 304 p

[24] Leedy PD, Ormrod JE. Practical Research: Planning and Design. Upper Saddle River, New Jersey: Pearson; 2013

[25] Robotic Industrial Association. RIA TR R15.406-2014: Safeguarding. 1st ed. Ann Arbor, Michigan, USA: Robotic Industrial Association; 2014. 60 p

[26] Björnsson A. Automated Layup and Forming of Prepreg Laminates [dissertation]. Linköping: Linköping University; 2017

[27] Jonsson M. On Manufacturing Technology as an Enabler of Flexibility: Affordable Reconfigurable Tooling and Force-Controlled Robotics [dissertation]. Linköping, Sweden: Linköping Studies in Science and Technology. Dissertations: 1501; 2013

[28] Swedish Standards Institute. SS-ISO 8373:2012—Industrial Robot Terminology. Stockholm, Sweden: Swedish Standards Institute; 2012

[29] The International Organization for Standardization. ISO/TS 15066: Robots and robotic devices—Collaborative robots. Switzerland: The International Organization for Standardization; 2016

[30] Leveson NG. Engineering a Safer World: Systems Thinking Applied to Safety. Engineering Systems ed. USA: MIT Press; 2011

[31] The International Electrotechnical Commission. IEC TS 62046:2008 – Safety of machinery – Application of protective equipment to detect the presence of persons. Switzerland: The International Electrotechnical Commission; 2008

[32] Marvel JA, Norcross R. Implementing speed and separation monitoring in collaborative robot workcells. Robotics and Computer-Integrated Manufacturing. 2017;**44**:144-155

[33] SICK AG. Available from: http://www.sick.com [Accessed: December 2016]

[34] REER Automation. Available from: http://www.reer.it/ [Accessed: December 2016]

[35] Pilz International. SafetyEYE. Available from: http://www.pilz.com/ [Accessed: May 2014]

[36] The International Organization for Standardization. ISO 13856-1:2013 – Safety of machinery – Pressure-sensitive protective devices – Part 1: General principles for design and testing of pressure-sensitive mats and pressure-sensitive floors. Switzerland: The International Organization for Standardization; 2013


**Provisional chapter**

### **Managing Technogenic Risks with Stakeholder Cooperation**

Riitta Molarius

Additional information is available at the end of the chapter

DOI: 10.5772/intechopen.70903

#### **Abstract**

Risks involved in new technologies or arising from novel configurations of old technologies recurrently result in major accidents. For example, the new bioleaching technology to extract nickel from ore was taken into use in Finland in 2008. Later, one of the personnel died as a victim of hydrogen sulfide exposure and there were unplanned releases of process waters that contaminated lakes and rivers. Several risk analyses were performed but none of them considered the local climate and surrounding environmental circumstances. A comprehensive risk assessment process combining the knowledge of different stakeholders, authorities, and citizens would have helped to avoid the sad outcome. A single enterprise has a very clear picture of the risk figure on its own, but is reluctant to reveal commercially sensitive information to others, and even incapable of understanding all the expectations and constraints that the natural and built environments may impose. Only governmental authorities are in a position to form a comprehensive picture of all the risks. This paper presents a new approach for a proactive risk identification method based on collaborative integrated assessment. It states that by implementing this method society is able to utilize the science-based information in an efficient way for managing the emerging technogenic risks.

**Keywords:** risk identification, risk assessment, technogenic risk, stakeholder cooperation, hydrogen, fuel cell

### **1. Introduction**

New technologies have been involved in several accidents. For example, in Germany, the novel magnetic levitation train collided with a maintenance vehicle and killed 23 people in 2006 [1]. Ten years later, in 2016, two trains collided in Germany, killing 11 people and injuring 85, despite an automatic braking system that ought to have been at work [2].

© 2018 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

In 2016, the autonomous car claimed its first victim when the radar system of the autopilot failed to recognize a truck that was crossing the road in Florida [3]. In June 2017, at least 80 people died in a huge fire that took place in a London tower block that was covered with a new type of cladding, which included polyethylene foam [4]. In 2008, the new bioleaching technology used to extract nickel from ore was taken into commercial use in Finland, the first to adopt it in Europe [5]. Four years later, in 2012, one of the personnel died (as a victim of hydrogen sulfide exposure) due to a lack of safety equipment [6]; additionally, there were significant challenges associated with the management of the process waters, which consequently resulted in the company filing for bankruptcy [7]. Today drones hit people all over the world and collisions with helicopters and planes are just a matter of time [8].

Sociologist Ulrich Beck warned us over 20 years ago about the "risk society," in which society is gradually exposed to the risks it creates until the negative effects of progress finally become greater than the positive impacts [9]. The rapid change and development of new kinds of technologies have increased the likelihood of technogenic risks, even though there are strong attempts to identify and anticipate risks. The term "technogenic risk" stands here for risks whose origin is in man-made technology, including newer technologies such as nanotechnology, biotechnology, and information technology. In this article, technogenic risks denote not only accidental risks but also the creeping effects of risks on society, such as gradual land pollution or effects on human welfare.

Today's risk landscape consists of many interlinked elements, including interdependency, complexity, uncertainty, ambiguity, and cascading effects, all of which are amplified by the increased dynamics of globalization [10, 11]. Advances in information and communication technology, as well as in other kinds of technologies, have increased the linkages and connections between states, institutions, corporations, civil society, and individuals. As a result, the number of interdependencies between persons, nations, markets, and societies is greater than ever before [10]. Due to the complexity of these systems, it is difficult or even impossible to identify or quantify the causal links between causes and the adverse effects of an unwanted phenomenon. One initial event may inflict different consequences in different parts of a state or the world. Uncertainty is an inevitable part of the different and distinct components of risk, such as statistical variation, measurement errors, ignorance, and indeterminacy [12]. Uncertainty reduces confidence in estimated cause-and-effect chains. Ambiguity implies different interpretations based on human observations or data assessments; it strengthens the effect of cultural differences on risk assessment. Cascading effects describe the second-, third-, and higher-order consequences of the initial risks to society as a whole, which, unfortunately, are difficult to assess due to the previous elements. It has been stated that in the near future different technologies, such as nanotechnology, biotechnology, information technology, and cognitive science (NBIC), will converge [13–15]. Due to this, the technogenic risks of new technologies will grow even harder to understand and manage.

Complete risk management is challenged by a lack of knowledge. Although our knowledge of the world around us grows day by day, we are not aware of what pieces of information are still missing [16]. It has been pointed out that we do not have enough knowledge or understanding of climate change, technological innovations, wars, human behavior in different circumstances, or changes in markets [12]. Moreover, even if the required information exists somewhere, we may not accept it due to our personal biases. Nassim Nicholas Taleb presented, in 2008, the concept of "Black Swans": highly consequential but unlikely events [16]. They are easily explainable but, unfortunately, only after the event. The existence of Black Swans highlights human nature: even if we have all the required information, we do not see, or do not want to see, what is coming. Risk is an adaptable and flexible notion: what appears as a threat to some is, at the same time, an opportunity to others; risk is relative and individually defined. Renn has stated that the more ambiguous a risk is, the more need there is for interpretation, and the more cognitive, evaluative, or normative conflicts it creates [17]. These conflicts cannot be solved by pure scientific knowledge alone, as even scientific opinions on complex issues differ. There is a need for multidimensional discussion and collaborative multidisciplinary risk assessment.


In a standard risk management procedure, risk identification is the first step toward holistic risk management. It creates the basis for reasonable, effective, and comprehensive risk assessment. Risk identification ensures the quality of risk assessment and finally the effectiveness of the whole risk management process. Thus, the risk identification stage should be done with considerable care.

In spite of all the scientific knowledge about risk identification, assessment, and management, the main responsibility for managing technogenic risks in the European Union is put on operators, corporations, manufacturers, and suppliers. The latest SEVESO III directive (Directive 2012/18/EU) insists that operators and corporations should cooperate to identify the domino or cascading effects of initial risks. This is a challenging task to perform, as often in the same industrial area there are many competing companies that are not willing to share commercially sensitive information. In addition, at least in Finland, the latest technogenic risks ensued from the following initial faulty or poor solutions [18]:


None of these technogenic risks could have been prevented by the cooperation of single companies, but they might have been prevented by broader stakeholder collaboration.

Due to the rapid change in new technologies and a shrinking, convergent world, it is clear that no person or organization alone is capable of identifying all the emerging technogenic risks. There is a need for effective collaboration between all the different stakeholders, authorities, scientists, politicians, and civilians. Commonly accepted and approved solutions should be reached through cooperation and a shared weighing of differing values. The future is not defined in advance; we are all able to change and reshape it a little toward a plausible future. Using the methods of futures studies, we can create new ways to manage future risks and, in doing so, define what kind of future we want to have.

Despite all these requirements, it is no simple task to bring all the requisite stakeholders together or to combine their knowledge to focus on identifying the technogenic risks of future technologies. The following sections present not only the method developed for this risk identification task but also the basis for the solution.

### **2. Research process and methods**

To ensure that societies are prepared against technogenic risks due to new technologies, the aim of this research process was to develop a risk identification process that is able to combine information of new technologies from different disciplines as well as from different stakeholders to anticipate future risks.

The research process included two main steps:

**1.** Development of the risk identification tool. This stage of the process started from interviews with authorities and literature research to find the most suitable methods and to select the best one for stakeholder cooperation. The tool was then tested in a real-life situation to find the risks to society arising from hydrogen and fuel cell technology.

**2.** Development of the risk identification procedure for risk identification workshops. To arrange effective collaboration in workshops, a broad literature study was done to tackle the worst mistakes preventing fruitful cooperation and to create the guidelines for authorities for workshops.
As a result of the research, a new risk identification method called anticipation of future technogenic risks identification (FuTecRI) was developed. The method was developed in close cooperation with Finnish authorities, such as the Finnish Safety and Chemicals Agency (TUKES), the Pirkanmaa and Uusimaa Centres for Economic Development, Transport and the Environment, the Rescue Services of Helsinki, the City of Virrat, and the Council of Tampere Region. These authorities were interviewed, took part in workshops, and evaluated the method after the workshop. They also carried out a self-assessment of their knowledge of hydrogen and fuel cell technology before and after the workshop.

### **3. FuTecRI: a method for risk identification**


The developed future technogenic risks identification (FuTecRI) tool is based on the futures studies method called the Futures Wheel, developed in 1971 by Jerome Glenn to organize future events in a reasonable order. The method is visual and helps to form a comprehensive picture of the issue under discussion. The Futures Wheel is mainly used to present thoughts about future developments or trends [19].

In this study, the FuTecRI method was developed mainly for the authorities' needs, and their opinion was taken into account when selecting a suitable method for risk identification. The Futures Wheel was selected as the basis for tool development from among 22 different risk assessment and futures studies methods for six main reasons:


**Figure 1.** FuTecRI tool for identification of the future risks introduced by new technologies [18].

The Futures Wheel approach was further developed to identify the future negative effects of new technologies. In the first stage, four different prior new technologies (chimney, matchstick, steamboat and train, electricity, and mobile phone) were analyzed to find out what changes they introduced to society tens of years ago. These changes were categorized to select key words for the FuTecRI tool against which the risks of new technologies should be identified and evaluated. The breakthrough times of these old technologies were different (chimney 400 years, mobile phone 34 years), which also indicates the amount of change needed in the regional culture to accept new technologies. The selected key words were health, safety and security, environment, built environment, regulations and instructions, land use, and regional development of the area.

The developed FuTecRI tool is a fill-in-the-blanks diagram presented in **Figure 1**. The central term in the figure describes the discussed new technology that should be evaluated in relation to the key aspects (surrounding it) to find out the potential risks.
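The structure of this fill-in diagram can be expressed as a simple data structure. The sketch below is a hypothetical illustration rather than part of the published method: only the seven key aspects come from the chapter, while the function names and the example risk entry are invented for demonstration.

```python
# Minimal sketch of the FuTecRI fill-in-the-blanks diagram as a data structure.
# The key aspects are taken from the chapter; the helper functions and the
# example risk entry are hypothetical.

KEY_ASPECTS = [
    "health",
    "safety and security",
    "environment",
    "built environment",
    "regulations and instructions",
    "land use",
    "regional development of the area",
]

def new_wheel(technology):
    """Create an empty wheel: the new technology in the centre and one
    empty list of identified risks per surrounding key aspect."""
    return {"technology": technology,
            "risks": {aspect: [] for aspect in KEY_ASPECTS}}

def add_risk(wheel, aspect, description):
    """Record a risk identified in a workshop under one key aspect."""
    if aspect not in wheel["risks"]:
        raise ValueError(f"unknown key aspect: {aspect}")
    wheel["risks"][aspect].append(description)

# Example use with a hypothetical workshop entry:
wheel = new_wheel("hydrogen and fuel cell technology")
add_risk(wheel, "safety and security",
         "hydrogen leak in an enclosed refuelling station")  # hypothetical
print(len(wheel["risks"]))  # 7 key aspects to evaluate
```

The point of the structure is that every new technology is always evaluated against all seven aspects, so blind spots such as the local environment cannot silently be skipped.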

### **4. Challenges of collaborative brainstorming**

The risk identification tool alone does not ensure effective risk identification, but great attention must be paid to the participants of the workshops. To find out the rules for selection of these participants and for facilitation of the workshops, another broad literature research was done.

Cognitive sciences highlight that multiprofessional and multidisciplinary cooperation is a key for collaborative grading through which a group can achieve better results in problem solving than by working individually [20] and in group work people are able to solve problems that are unsolved by working alone [21]. This is because in difficult decision-making situations people use other actors and people's knowledge to widen their own knowledge and understanding, and thus, they can effectively solve challenging problems [22, 23]. When people are working in groups, they are also forced to recognize the shortcomings of their own knowledge and they can even change their opinions accordingly [24]. It has been pointed out that when all the knowledge dealing with a common target from different disciplines is combined, it is possible to find solutions that cannot be found by any single discipline alone [25].

However, group work does not automatically ensure better results than working alone. To work effectively, the group must be multidisciplinary or at least multiprofessional. The participants need to understand the idea of the workshop, they have to engage with the work, and they should deliver their knowledge to the other participants. Bohm and Peat have presented a method for creative dialogue, insisting that all participants should be flexible and able to negotiate their opinions with others [26]. The result of this negotiation is not a compromise but rather a creative solution that is acceptable to all participants. Creative dialogue can be achieved if the participants represent broadly different disciplines.

One of the main challenges for multiprofessional group work comes from competition between professions. A profession is defined here as an authorized status or post, in relation to other professionals, with specialized knowledge and the ability to use one's own discretion at work [27]. Professional competition appears to be very common, and it becomes apparent when discussing who has the power to make decisions in a certain context [28]. Very often, professionals feel that cooperation with other professions threatens their own knowledge and authority. People are also unwilling to receive new knowledge if it contradicts their existing knowledge [29]. One of the best situations in which to overcome professional competition and to share and receive new knowledge is a multidisciplinary workshop targeted at sharing knowledge with other professionals [30].

Even when the participants are ready to take part in the workshops and share their knowledge, there are still barriers to overcome. Different work cultures may prevent or hinder cooperation; for example, in hierarchically arranged organizations, such as the police and in hospitals, the valuable knowledge of subordinates might stay invisible. Also, the different paradigms, beliefs, terminologies, and methods will prevent common understanding [26, 31].

Finally, there are some other difficulties that may prevent cooperation. Dominant persons, in particular, may hinder discussion and prevent the group from sharing all the information and knowledge they have [32]. It is important that all the participants are interested in the subject under discussion and have set aside enough time for the work. To get through all these impediments, it is important to create a safe and trusting environment for the work groups. The participants must feel that they can trust not only the expertise of the other participants but also their behavior, which should be appreciative, friendly, and predictable [33].

### **5. Risk identification procedure**


As a result of the analyses of the advantages and disadvantages of collaborative group work, a range of conclusions were drawn and the working procedure for the authorities was produced. One remarkable note was that all the participants taking part in the workshops should have a personal interest in the discussed topic. However, the authorities are overworked these days, and therefore interest alone is not enough; they also need to have enough time to take part in the workshops. This led to the conclusion that every workshop arranged to identify technogenic risks should be strongly justified. This means that for one new technology there should be only 1–2 workshops in the whole country, and they should be planned to combine all scientific knowledge, professional data, and stakeholder opinions in an efficient way. **Figure 2** presents the decision frame for starting a new risk identification workshop as well as the main steps of the workshop itself. All stakeholders have the possibility to start a new risk identification workshop if they feel that it is needed for ensuring the safety and security of society in the future. At the beginning, they have to answer three questions, and if the answer to each is "yes," they should start the process. If this condition is not met, the FuTecRI method should not be used, and other methods may be more useful. For example, if the impacts of the new technology are only local, traditional risk assessment methods are more suitable.
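The start-up decision described above reduces to a simple all-or-nothing gate: proceed only if every screening question is answered "yes." The sketch below is only an illustration; the three question keys are invented paraphrases, as the authoritative wording is given in **Figure 2**.

```python
# Illustrative sketch of the FuTecRI start-up gate (not code from the chapter).
# The question keys below are invented paraphrases, not the official wording.

def should_start_futecri_workshop(answers):
    """Start a new risk identification workshop only if every answer is 'yes' (True)."""
    return all(answers.values())

answers = {
    "is_the_technology_new_or_novel": True,
    "are_the_potential_impacts_society_wide": True,
    "is_no_suitable_risk_assessment_available": True,
}
print(should_start_futecri_workshop(answers))  # any single "no" blocks the workshop
```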

**Figure 2.** Decision process to start the FuTecRI workshop and the main steps for arranging it [18].

The participants of the workshops should be selected in a reasonable way, taking into account the following viewpoints [18]:

• Multiprofessionality—the participants should represent all the different authorities in charge of preventing the risks of new or novel technologies, or of preserving or maintaining a safe and secure society.

• Multidisciplinarity—the participants should represent the latest academic and scientific knowledge of the technology in question.

• Personal features—the participants need to be personally interested in the technology discussed, they should be open-minded and responsible persons, and they should want to find good solutions for everyone.

The main question for a successful FuTecRI workshop is how to ensure that the authorities especially, but also the other stakeholders, put aside their professional and official positions during the workshop, because these may prevent them from sharing their personal knowledge and receiving new information. This issue should be clearly discussed at the beginning of the workshop. The participants must understand that they are in the workshop not because of their official status but because of the knowledge they have, and that their role is to deliver this knowledge to the other participants.

To focus the group on the same target, the flow of the FuTecRI workshop must be well organized, and its frame and content should be clear (**Figure 3**). At the beginning of the workshop, it is essential to highlight the importance of risk identification as well as to justify the working method. The importance of identifying technogenic risks can be motivated by Geels' theory of sociotechnical change [34, 35]. According to this theory, new technologies may break through and spread widely if external circumstances are favorable to them. In such cases, they may bring about huge changes in, for example, culture, infrastructure, regulation, and markets. A good example of this kind of change took place when mobile phones came onto the market. It is clear that some technologies have the potential to change the whole society, and society should be able to handle and remove risks before they are actualized.

To ensure that the topic of the workshop is clear, there should be a state-of-the-art presentation of the discussed technology before the collaborative work. In this presentation, all the known aspects of the technology in question, pros and cons, should be given to the participants, and therefore this presentation should be given by academic or research institutes. The challenge is to present the technology with terminology and concepts that are understandable to all stakeholders: authorities, professionals, and nonprofessionals.

**Figure 3.** The flow of the FuTecRI workshop [18].

Finally, when the risk identification starts, the participants should be arranged into smaller working groups. Each group should include only five to seven participants, as this has been shown to be the most effective group size for cooperation [32]. In larger groups, nonparticipation increases because people easily forget their role in the workshop as knowledge deliverers. It is also important that no notes are taken of who said what; all notes are made as a group. This way of working removes official roles and gives room for expertise. Each group may have a recorder of its own, but it is also possible for all the participants to make notes on the working tool.
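The five-to-seven rule above can be expressed as a small partitioning routine. This is not part of the published method, just a sketch of one way to balance participants over the fewest groups that respect the upper size limit.

```python
# Sketch: split participants as evenly as possible into the fewest groups
# of at most max_size people (the chapter recommends five to seven).

def split_into_groups(participants, max_size=7):
    n_groups = -(-len(participants) // max_size)  # ceiling division
    base, extra = divmod(len(participants), n_groups)
    groups, start = [], 0
    for i in range(n_groups):
        size = base + (1 if i < extra else 0)
        groups.append(participants[start:start + size])
        start += size
    return groups

# With 11 participants (as in the pilot workshop) this yields groups of 6 and 5.
print([len(g) for g in split_into_groups([f"p{i}" for i in range(1, 12)])])  # → [6, 5]
```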

### **6. Piloting the method: risks of hydrogen and fuel cell technology**

The developed FuTecRI tool and working procedure were tested in a workshop arranged by hydrogen and fuel cell researchers from the VTT Technical Research Centre of Finland. Hydrogen and fuel cell technology is undergoing strong development work, and the researchers wanted to know whether there is an understanding of what kind of requirements the new technology imposes on society. The participants of the workshop consisted of representatives from the Finnish Safety and Chemicals Agency (TUKES), the Pirkanmaa and Uusimaa Centres for Economic Development, Transport and the Environment, the Rescue Services of Helsinki, and VTT; altogether, 11 participants were arranged into two working groups. The workshop followed the procedure presented in **Figure 3**. The new technology was presented to the participants by VTT researchers.

The workshop took 5 h, the first 2 h of which were discussions dealing with the theory of sociotechnical change and with hydrogen and fuel cell technology and its current state. After the lunch break, people worked in groups for 2 h to explore what kinds of risks hydrogen and fuel cell technology might create. Finally, both groups presented their results to each other. The results of the workshop were combined into one mind map and delivered to the participants for their later use.

The workshop results brought out several issues regarding hydrogen and fuel cell technology that need to be discussed and managed at a governmental level, such as [18]:


• Land use. For safe land use, there is a need to plan regional hydrogen pipelines that are later suitable for different kinds of hydrogen use.

• Environmental issues. The production of hydrogen and fuel cells requires platinum as a raw material. This will improve material recycling but also increase mining activities. The positive effects include emission-free fuel and quiet traffic.

• Built environment. The fuel distribution stations are not covered by any legislation. Thus, hydrogen fuel can be delivered even from delivery trucks, which can cause dangerous situations. There is a need for new legislation. In addition, underground parking places need to be equipped with hydrogen sensors to avoid explosions.

• Safety issues. Road accidents involving hydrogen and fuel cell cars may cause danger to rescue services because fuel cell vehicles do not visibly differ from other vehicles, but the rescue operations vary depending on the fuel type of the vehicle. In addition, fuel cell vehicles move quietly, which may increase road accidents, as people may not hear the arriving cars, especially in the winter.

The workshop participants also took part in two different surveys. The first was done in two parts, at the beginning of the workshop and immediately after it, and focused on evaluating the change in the participants' knowledge of hydrogen and fuel cell technology. The idea was that people made a self-assessment at the beginning of the workshop, evaluating their own knowledge on a scale of 5–10 (5: weak knowledge; 10: excellent knowledge). This scale was selected because it has long been used in Finnish schools, and therefore it was easy for the participants to understand. After the workshop, they were asked to evaluate themselves a second time, answering two questions: what did they now think their knowledge had been at the beginning of the workshop, and what did they think their knowledge level was after the workshop?

The results were very interesting (**Table 1**). At the beginning of the day, the participants thought that their knowledge was at a rather low level, and only one participant (perhaps a researcher) thought that he had excellent knowledge. During the workshop they realized that their knowledge had not even been at that level, and a comparison between the morning and afternoon estimations indicates that all of the participants lowered their estimations. Very interestingly, the workshop brought a lot of new knowledge to all participants. Even the hydrogen and fuel cell researchers received a lot of information regarding the impacts of the new technology on society and the built environment. It seems that the FuTecRI workshop worked as planned: it stimulated the participants to share their knowledge and to accumulate new information on top of the old, and in that way it also made it possible to identify the risks of the new technology.
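The two-step self-assessment can be summarized with a few lines of analysis. The scores below are invented for illustration (the chapter's actual figures are in **Table 1**); each participant gives a morning estimate, a revised hindsight estimate of the morning level, and a final estimate, all on the 5–10 school scale.

```python
# Hypothetical self-assessment data: (morning, revised morning, final) per participant.
scores = {
    "A": (7, 6, 8),
    "B": (8, 6, 8),
    "C": (10, 8, 9),
}

# How many participants lowered their morning estimate in hindsight?
lowered = sum(1 for morning, revised, _ in scores.values() if revised < morning)

# Knowledge gained during the workshop (final minus revised morning estimate).
gains = [final - revised for _, revised, final in scores.values()]

print(lowered, round(sum(gains) / len(gains), 2))  # → 3 1.67
```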

**Table 1.** The results of the survey dealing with the increase of knowledge in the FuTecRI workshop. Evaluation criteria: 5 = poor knowledge, 10 = excellent knowledge [18].

The other survey dealt with the content of the workshop. It was sent to the participants about 1 month after the workshop, and they were asked how they viewed the workshop in retrospect. The first questions concerned the reliability and validity of the distributed information on hydrogen and fuel cell technology. The participants were convinced that the information they received was the latest and most up to date. However, one researcher pointed out that private companies very often have the newest knowledge but are not willing to share it, even with research organizations.

The next questions concerned the functioning of the workshop. All participants were satisfied with the workshop proceedings. They felt that because of the small groups they were consulted, and it was easy for them to bring their own knowledge into the process. The results of the brainstorming work were written directly onto a wide paper sheet on which the main words were already written in the middle. This helped people to start the brainstorming process immediately, and the fear of the empty paper was overcome. The participants were very active in discussing hydrogen and fuel cell technology, which was surprising to all.

### **7. Discussion**

The FuTecRI method was developed to help authorities to be prepared for future technogenic risks introduced by new technologies. The use of the method requires effective stakeholder cooperation at least from authorities and scientists. The results may be even better if the companies developing new technology could also take part in the process. According to the results of the FuTecRI workshops, it is possible to steer the development of society toward a safe and secure future through the use of, for example, new regulations or improved land use planning.

To work well, the method should involve not only the authorities and other stakeholders but also researchers from academia and research institutes. It is especially important that the focus of the workshop, the new technology, is presented by researchers who are specialists in the technology in question. Otherwise, the result of the workshop might be just guesswork, and no future solutions can be built on it.

Because the FuTecRI method involves a large group of professionals and scientists, it is important that no workshops are performed in vain, because this would reduce the motivation to take part in the FuTecRI method. Therefore, the results of each workshop should be delivered to all essential authorities through their own information networks.

However, this kind of workshop works only as a starting point to manage the risks of new technologies. The method should be further developed to also produce guidelines on how to analyze the highlighted risks or take them into account in different kinds of processes, such as environmental or chemical licenses, or land use planning.

### **Acknowledgements**

This article presents the main findings of the author's thesis [18]. I am very grateful to the representatives of the Finnish authorities who helped me through this work and gave me valuable information. The work was carried out during the years 2012–2015, and it was not the main duty of either these authorities or me. Because both parties also had other duties to perform, the collaboration was important to the success of this work.

### **Author details**


Riitta Molarius

Address all correspondence to: riitta.molarius@vtt.fi

Technical Research Centre of Finland, Ltd., Finland

### **References**


[12] van Asselt M. Perspectives on Uncertainty and Risk. The PRIMA Approach to Decision Support. 1st ed. Amsterdam: Kluwer Academic Publishers; 2000. 434 p

[13] Nordmann A. Converging Technologies – Shaping the Future of European Societies. Interim Report of the Scenarios Group, High Level Expert Group, 3. 1st ed. Luxembourg: Office for Official Publications of the European Communities; 2004. 64 p

[14] Roco M. Possibilities for global governance of converging technologies. Journal of Nanoparticle Research. 2008;**10**(1):11-29

[15] Wolbring G. Why NBIC? Why human performance enhancement? Innovation: The European Journal of Social Science Research. 2008;**21**(1):25-40. DOI: 10.1080/13511610802002189

[16] Taleb N. The Black Swan: The Impact of the Highly Improbable. 1st ed. New York: Random House; 2007. 400 p

[17] Renn O. The challenge of integrating deliberation and expertise. Participation and discourse in risk management. In: McDaniels T, Small M, editors. Risk Analysis and Society. 1st ed. New York: Cambridge University Press; 2003. p. 289-366

[18] Molarius R. Foreseeing risks associated with new technologies (in Finnish) [dissertation]. Tampere: VTT Technical Research Centre of Finland; 2016. 208 p. Available from: http://www.vtt.fi/inf/pdf/science/2016/S120.pdf

[19] Glenn J. The futures wheel. In: Glenn J, Gordon T, editors. Futures Research Methodology. 3rd ed. Washington: The United Nations University - The Millennium Project; 2009

[20] Leathard A. Introduction. In: Leathard A, editor. Interprofessional Collaboration. From Policy to Practice in Health and Social Care. Hove and New York: Brunner & Routledge; 2003. p. 3-11

[21] Roschelle J. Learning by collaborating: Convergent conceptual change. Journal of Learning Sciences. 1992;**2**(3):235-276

[22] Hutchins E. Distributed cognition. In: Smelser N, Baltes P, editors. International Encyclopedia of Social & Behavioral Sciences. New York, Amsterdam: Elsevier; 2001. p. 2068-2072

[23] Giere R, Moffat B. Distributed cognition: Where the cognitive and the social merge. Social Studies of Science. 2003;**33**(2):301-310. DOI: 10.1177/03063127030332017

[24] Vygotsky L. Mind in Society: The Development of Higher Psychological Processes. Cambridge: Harvard University Press; 1978. 159 p

[25] Kline SJ. Conceptual Foundations for Multidisciplinary Thinking. Stanford: Stanford University Press; 1995. 337 p

[26] Bohm D, Peat F. Science, Order and Creativity. London and New York: Routledge; 1992

[27] Puustinen S. Suunnittelijaprofessio refleksiivisyyden puristuksessa. Yhteiskuntasuunnittelu. 2001;**39**(1):26-45

[28] Abbott A. Linked ecologies: States and universities as environment of professions. Sociological Theory. 2005;**23**(3):245-275


**Provisional chapter**

#### **Risks, Safety and Security in the Ecosystem of Smart Cities**

DOI: 10.5772/intechopen.70740

Stig O. Johnsen

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/intechopen.70740

#### **Abstract**

We have performed a review of systemic risks in smart cities that depend on intelligent and partly autonomous transport systems. Smart cities include concepts such as smart transportation/use of autonomous transportation systems (i.e., autonomous cars, subways, shipping, drones) and improved management of infrastructure (power and water supply). At the same time, this requires safe and resilient infrastructures and global collaboration. One challenge is some sort of risk-based regulation of emergent vulnerabilities. In this paper we focus on emergent vulnerabilities and discuss how mitigation can be organized and structured across boundaries, based on emergent and known scenarios. We regard a smart city as a software ecosystem (SEC), defined as a dynamic evolution of systems on top of a common technological platform offering a set of software solutions and services. Software ecosystems are increasingly being used to support critical tasks and operations. As part of our work we have performed a systematic literature review of the safety, security and resilience of software ecosystems in the period 2007–2016. The perspective of software ecosystems has helped to identify and specify patterns of safety, security and resilience at a relevant abstraction level. Significant vulnerabilities and poor awareness of safety, security and resilience have been identified. Key actors that should increase their attention are vendors, regulators, insurance companies and the research community. There is a need to improve private-public partnerships and the learning loops between computer emergency teams, security information providers (SIP), regulators and vendors. There is also a need to focus more on safety, security and resilience and to establish regulations making vendors responsible for liabilities.

**Keywords:** safety, security, resilience, smart cities, software ecosystems

© 2018 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


### **1. Introduction**

This paper contains a discussion and review of safety, security and resilience of smart cities, considered as software ecosystem (SEC). The purpose is to provide an overview of research in the field, identify emergent risks in a systemic perspective and identify possible issues that existing literature is not addressing adequately. The article is initiated by a discussion of the concept of software ecosystems and the need for safety, security and resilience in smart cities.

#### **1.1. Smart cities and software ecosystems**

In Ref. [1] there is a fairly general definition of a smart city, described as a place where investments in human and social capital and traditional (transport) and modern (ICT) communication infrastructure fuel sustainable economic growth and a high quality of life, with a wise management of natural resources, through participatory governance. From [2], looking at specific systems, a smart city is described as a city that monitors and integrates the conditions of its critical infrastructures: traffic (including roads, bridges, tunnels, rails, subways, airports, seaports), communications, water, power and major buildings. All this is done in order to optimize resources, plan its preventive maintenance and monitor security aspects while maximizing services to its citizens.

A literature review of software ecosystems in general was performed in [3] identifying 90 papers in the period from 2007 to 2012. The review identified the software ecosystem (SEC) as a fruitful systemic perspective. The review inspired us to find papers discussing safety, security and resilience of SEC published in the period from 2007 to 2016.

A software ecosystem (SEC) describes the complex environment of a smart city. A SEC will consist of components developed by actors both internally and externally, and solutions will spread outside the traditional borders of software companies to a group of companies, private persons and entities. In [3] they defined a software ecosystem as: "*the interaction of a set of actors on top of a common technological platform that results in a number of software solutions or services. Each actor is motivated by a set of interests or business models and connected to the rest of the actors and the ecosystem as a whole with symbiotic relationships, while, the technological platform is structured in a way that allows the involvement and contribution of the different actors…."*
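Read as a data model, the quoted definition amounts to actors contributing solutions on top of a shared platform. The toy sketch below uses invented names purely to make the structure concrete; it is not an implementation from the literature.

```python
from dataclasses import dataclass, field

@dataclass
class Platform:
    """A common technological platform on which ecosystem actors contribute solutions."""
    name: str
    solutions: dict = field(default_factory=dict)  # solution name -> contributing actor

    def contribute(self, actor, solution):
        self.solutions[solution] = actor

city = Platform("smart-city-platform")
city.contribute("transit-operator", "real-time-bus-tracking")
city.contribute("utility-company", "smart-metering")
print(sorted(city.solutions))  # the set of solutions offered on the common platform
```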

When discussing software ecosystems, we include the legal and organizational framework in addition to applications and supporting infrastructure, as described in **Figure 1**.


**Figure 1.** Scope of software ecosystems.

Software ecosystems are often based on the internet as infrastructure. The internet economy makes up a significant part of GDP, amounting to 5.3% of GDP in 2016 [4]. SEC have gained more importance due to mobile platforms such as the iPhone and Android. Examples of SEC are:

• Digital learning environments;



Arguments for discussing software ecosystems have been the speed of development, increased competition and the reduction of development costs due to the opening up of development outside of organizational silos. Some software ecosystems are critical, in that a malfunction can severely affect the functioning of society or personal well-being. Examples are systems used in transportation, car control systems and health systems (such as pacemakers).

#### **1.2. Safety, security and risks**

In this paper we have used the definition of safety as a state, as described by the Department of Defense [5]: "*freedom from those conditions that can cause death, injury, occupational illness, damage to or loss of equipment or property, or damage to the environment.*" This definition describes safety as being free from conditions causing a mishap or accident, i.e., safety is in some sense a "non-event."

Security is used to describe conditions of intentional harm. The relationships to safety are discussed in [6]. Security is defined as "*the degree to which malicious harm is prevented, reduced and properly reacted to*" and safety is defined as "*the degree to which accidental harm is prevented, reduced and properly reacted to*" from [7]. In information systems, there has often been a focus on "*information security.*" Information security is defined as *"protecting information and information systems from unauthorized access, use, disclosure, disruption, modification, or destruction in order to provide: integrity, confidentiality and availability"*, from [8]. Since software ecosystems handle not only information but also actual critical processes in smart cities, automobiles and other applications, the software ecosystems must be both secure (i.e., protected from malicious harm) and safe (i.e., protected from accidental harm). The systems must be able to handle unanticipated risks, and the ecosystem must be able to handle breakdowns and ensure that the systems have a safe state and/or a secure state.

In [9], the following definition of risk is given: "*Risk: two dimensional combination of the consequences (of an activity) and associated uncertainties (what will the outcome). Probabilities are used to express the uncertainties. When risk is quantified in a risk analysis, this definition is in line with the ISO/IEC Guide 73 (2002) standard definition* [10]*: combination of the probability of an event and its consequence*." For new emerging risks and complex interactions, it may be challenging to establish the probability of an event, since such events may be unanticipated.
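As a rough illustration of the quantified definition above (risk as the combination of probability and consequence), the following sketch is not from the chapter; the event names and numbers are invented. It also hints at why the definition is uncomfortable for unanticipated events: without a defensible probability, the product cannot be computed.

```python
# Illustrative sketch: quantifying risk as probability x consequence,
# in the spirit of ISO/IEC Guide 73. All figures are invented.

def risk(probability: float, consequence: float) -> float:
    """Expected loss: probability of the event times its consequence."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be in [0, 1]")
    return probability * consequence

# A frequent, low-impact fault and a rare, high-impact breakdown
# can carry the same quantified risk:
frequent_minor = risk(0.10, 1_000)      # e.g. a sensor dropout
rare_major = risk(0.001, 100_000)       # e.g. a grid outage
assert frequent_minor == rare_major == 100.0
```

The equality above is the classic criticism of single-number risk measures: it hides the difference between routine faults and systemic breakdowns, which is why the two-dimensional view (consequence and uncertainty kept separate) is preferred in [9].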

Systemic risk is defined as "*Probability of loss or failure common to all members of a class or group or to an entire system.*" When discussing systemic risks related to smart cities, we are exploring failures common to the members of a smart city.

A key element when assessing systemic risks is the scope and prioritization of the systems to be evaluated. We have focused on critical systems of common interest in a city. In the following we discuss risks and protection of what is defined as critical infrastructure. The definition and protection of critical infrastructure has been a key concern in both the US and the EU. In the US, the national infrastructure protection plan (NIPP), established in 2009, has been updated systematically; the latest version, [11], is titled "Partnering for Critical Infrastructure – Security and Resilience." The successive NIPPs have identified specific areas of concern such as interdependencies, cyber security and the international nature of threats. The risk management framework of the NIPP is interesting since it is broad and systemic, including physical, cyber and human elements. In the EU, directive 114/08 on the identification and designation of European critical infrastructure and the assessment of the need to improve their protection was established as a council directive in 2008 [12]. In **Table 1**, we have described the critical infrastructure sectors.

The main elements of this list of critical infrastructure are highly relevant when discussing smart cities. Of special interest are smart city applications related to: transportation systems; energy systems (power supply); banking and finance; communication systems; information technologies (including navigation systems); water supply; and health systems. The following areas are critical when impacted by loss or failures (**Table 2**).

The criticality or potential loss due to failures, breakdowns or attacks increases the need to be able to support critical operations even when the system is under stress or may fail, thus the ability to handle unanticipated incidents (or ability to go to a safe and secure state) are gaining importance. The concept of resilience engineering is an important strategy to handle these unanticipated incidents. In [13] resilience is defined as "*the intrinsic ability of a system to adjust* 


| US-NIPP sectors | EU 114/08 sectors |
| --- | --- |
| Agriculture and food; bank and finance; communications; military installation and defense; technologies of information; national monuments and icons; drinking water treatment plants | (NA—not applicable yet) |
| Energy | Energy—electricity, oil, natural gas |
| Transportation systems | Transport—roads and highways, railroads, aviation, inland waterways, shipping and ports |

**Table 1.** Critical infrastructure sectors in US-NIPP and EU 114/08.

**Table 2.** Systemic risks in smart cities.

*its functioning prior to or following changes and disturbances, so that it can sustain operations even after a major mishap or in the presence of continuous stress.*" In [14], Woods focuses on unanticipated disturbances and adaptations, and describes resilience as: "*How well can a system handle disruptions and variations that fall outside of the base mechanisms/model for being adaptive (adaptive defined as the ability to absorb or adapt to disturbance, disruption and change).*" The handling of the unanticipated and continued functioning has been a key property of resilient systems.
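One widely used mechanism for the kind of adjustment described above (sustaining degraded operation while a dependency is under stress) is the circuit-breaker pattern. The sketch below is illustrative only, not something proposed in [13] or [14]; all names and thresholds are invented.

```python
# Illustrative sketch: a minimal circuit breaker. After repeated failures
# of a dependency, the system "opens the circuit" and serves a fallback,
# adjusting its functioning rather than collapsing with the dependency.

class CircuitBreaker:
    def __init__(self, failure_threshold: int = 3):
        self.failure_threshold = failure_threshold
        self.failures = 0
        self.open = False  # open circuit = stop calling the dependency

    def call(self, operation, fallback):
        if self.open:
            return fallback()           # degraded but continued service
        try:
            result = operation()
            self.failures = 0
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.open = True        # adapt functioning under stress
            return fallback()

def flaky():
    raise RuntimeError("dependency down")

breaker = CircuitBreaker(failure_threshold=2)
results = [breaker.call(flaky, lambda: "cached value") for _ in range(3)]
assert results == ["cached value"] * 3 and breaker.open
```

The point of the pattern, in resilience-engineering terms, is that the adaptation is intrinsic to the system: no operator intervention is needed for the service to keep functioning in the presence of continuous stress.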

In the European Union, safety, security and resilience are prioritized in the cybersecurity strategy [15]. Three of the top five strategic issues mentioned are: developing the industrial and technological resources for cybersecurity; achieving cyber resilience; and establishing a coherent international cyberspace policy for the EU. Thus, safety, security and resilience of smart cities are important issues that should be explored further. In addition, it is important to understand how risk governance of smart cities is addressed and established, in order to support a coherent cyberspace policy on the key issues.

### **2. Problem definition and methods**

Based on the preceding introduction, and the summary above, the three research questions we wanted to explore are:

• RQ1: How is safety, security and risks of smart cities (software ecosystems of cities) framed and defined?

• RQ2: How is risk governance of smart cities (software ecosystems of cities) addressed?

• RQ3: What are key issues in governance of the ecosystem?
In the following we describe some of the challenges and problems behind these research questions, and our methodology (i.e., approach).

#### **2.1. Challenges and problems**

Emerging risks, safety and security often receive too little attention; these issues have been identified late, after vulnerabilities have been exploited and unwanted incidents have been publicized. The suppliers and vendors (software vendors) seldom have to pay for unwanted incidents, even when they are due to poor quality in the systems with respect to safety, security or resilience. The bill has been passed to the users, the organizations and/or society.

Critical infrastructure is in most cases regulated by the authorities. Safety and security regulation is often reactive and lags technological innovation: new software is implemented, and societal consequences are discussed later. The Internet of things (IoT) is an example of new technology introduced into software ecosystems that may affect operations of critical infrastructure. IoT has introduced a broad set of vulnerabilities and can challenge the safety, security and resilience of software ecosystems. As an example, the Mirai botnet was used in a denial of service (DoS) attack on the internet firm Dyn [16], exploiting unsecured devices on a large scale. The attack affected Dyn's clients such as Twitter, Reddit, Spotify, and SoundCloud, and the cyber-attacks caused outages across the whole East Coast of the US in October 2016. When discussing vulnerabilities in a software ecosystem such as a smart city, one challenge is that there is not one single supplier but a set of suppliers that must be involved. Incident handling moves to a broader arena where it can be difficult to identify responsibilities and manage competencies. This is relevant: in [17] the author points out that there are serious vulnerabilities (poor quality control) in systems used in smart cities (e.g., traffic control systems), which could be used to cause traffic jams or collisions.
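The Mirai incident mentioned above hinged on IoT devices still running factory-default credentials. As a hedged illustration of the mitigation side (not something from the chapter or from [16]), a city operator could sweep its device inventory for known default credential pairs; the inventory records and credential list below are invented.

```python
# Illustrative sketch: flag devices in a (hypothetical) smart-city
# inventory that still use factory-default credentials, the weakness
# the Mirai botnet exploited at scale. All data here is invented.

KNOWN_DEFAULTS = {("admin", "admin"), ("root", "root"), ("admin", "1234")}

def vulnerable_devices(inventory):
    """Return ids of devices whose credentials match a known default pair."""
    return [d["id"] for d in inventory
            if (d["user"], d["password"]) in KNOWN_DEFAULTS]

inventory = [
    {"id": "cam-01", "user": "admin", "password": "admin"},
    {"id": "meter-7", "user": "ops", "password": "Xk2#9fLq"},
    {"id": "cam-02", "user": "root", "password": "root"},
]
assert vulnerable_devices(inventory) == ["cam-01", "cam-02"]
```

The sketch also makes the multi-supplier problem concrete: the audit is cheap, but fixing "cam-01" and "cam-02" may involve two different vendors with different update mechanisms and responsibilities.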

#### **2.2. Methodology**

The literature review started with a keyword search combining "software ecosystems," "smart cities" and "safety, security, resilience," using Google Scholar and then the ACM Digital Library, IEEE Xplore, Springer Link and Science Direct. Papers were selected when software ecosystems/smart cities and safety (security and resilience) were the main theme. In addition, papers had to satisfy a set of criteria: peer reviewed and published in a scientific context (journal, conference), available in English, and more than one page long. Since software ecosystems involve governmental rules, relevant white papers were also identified. The identified literature body is gathered in Section 5, numbered [18–29] [LIT BODY:13] and [LIT BODY:14]. In addition, we have listed other general references, which could not be included in the literature body, in Section 6.
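The inclusion criteria above can be applied mechanically. The sketch below is only a toy rendering of that screening step; the record fields and candidate entries are assumptions, not the authors' actual tooling.

```python
# Illustrative sketch: applying the stated inclusion criteria
# (peer reviewed, scientific venue, English, more than one page)
# to candidate records. All records are invented.

def include(paper: dict) -> bool:
    return (paper["peer_reviewed"]
            and paper["venue_type"] in {"journal", "conference"}
            and paper["language"] == "en"
            and paper["pages"] > 1)

candidates = [
    {"peer_reviewed": True, "venue_type": "journal",
     "language": "en", "pages": 12},
    {"peer_reviewed": False, "venue_type": "blog",
     "language": "en", "pages": 3},    # excluded: not peer reviewed
    {"peer_reviewed": True, "venue_type": "conference",
     "language": "en", "pages": 1},    # excluded: only one page
]
selected = [p for p in candidates if include(p)]
assert len(selected) == 1
```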

The concepts of risk and risk governance have been central in the review, and we have structured the papers based on risk governance [30]: starting with problem framing; then risk appraisal (hazards and vulnerabilities); risk judgment; risk communication; and risk management.
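The five risk-governance stages above form an ordered cycle, which can be written down explicitly. This is only a sketch of how the review's classification could be mechanized; the `classify` helper and its tie-breaking rule (earliest stage wins) are assumptions of this illustration, not part of [30].

```python
# Illustrative sketch: the risk governance stages of [30] as an ordered
# pipeline, used to bucket papers by the earliest stage they address.

RISK_GOVERNANCE_STAGES = [
    "problem framing",
    "risk appraisal",       # hazards and vulnerabilities
    "risk judgment",
    "risk communication",
    "risk management",
]

def classify(paper_topics, stages=RISK_GOVERNANCE_STAGES):
    """Map a paper to the earliest governance stage it addresses."""
    for stage in stages:
        if stage in paper_topics:
            return stage
    return None

assert classify({"risk appraisal", "risk management"}) == "risk appraisal"
```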

### **3. Findings and reflections**


We found 14 papers in total: 13 papers published in the interval 2007–2016, and one paper from 2003 that had an illuminating discussion of the resilience of systems. The following three sections are based on our research questions (RQ1 to RQ3, as described in Section 2), which have also been used as subsection titles.


#### **3.1. Framing of safety, security and risks**

In [31] there is a discussion of the convergence of safety and security, pointing out that a successful integration of both sets of requirements needs the collaboration of both the safety and the security disciplines, aided by a common understanding. In [28, 32], it is pointed out that both safety and security issues must be assessed to build trustworthy software ecosystems: issues identified through security analysis (i.e., threats) must be combined with issues from safety analysis (i.e., hazards). In [33] there is a focus on the development of industrial control systems and how safety and security must be integrated in the development methodologies; these control systems are similar to the control systems employed in smart cities. An overview and comparison of methodologies is given.

In [34], a broad overview of the security and safety challenges of digital systems is given from an ecological perspective. Ecology is used both as a metaphor, to learn from developments in nature, and to take a more holistic perspective on systems involving human actors in a society. The ecosystem perspective is an important viewpoint when discussing safety and security in a changing world, and especially when exploring risks and risk governance of smart cities.

In [35] there is a discussion of infrastructure resilience from an organizational context. Adaptive capacity and resource robustness are discussed in relation to infrastructure, and a conceptual framework for assessing resilience is outlined. The conceptual framework seems useful when discussing resilience in software ecosystems, especially critical (infrastructure) ecosystems. In [36] different elements of resilience are discussed. The paper presents a framework for system resilience consisting of five aspects: time periods, system types, events, resilience actions and properties to preserve. This is followed by principles for emergence and factors affecting resilience, including improving resilience, trade-offs, and loss of resilience.

Not many textbooks (that can be used in teaching) have been found related to the implementation of security and resilience in the control systems of smart cities. However, in [37] guidelines for secure and resilient software development are discussed. The development guidelines are targeted toward software ecosystems, and the goal is to improve developer skills related to security and resilience. It is pointed out that security and resilience must be integrated from concept/early design; the book reviews security design methodologies and suggests how to measure the development process. Security in an industrial automation setting is discussed in [38], including the challenges of adapting general software security principles to industrial automation and control systems.

In [29] the resilience and cyber security of ecosystems are seen as part of the maturity of governance and collaboration between industry and government. Thus, cyber resilience is seen as the next step of cyber security.

In [24] there is a discussion of the security dynamics of software ecosystems (SECs), pointing out that SECs reduce costs and increase efficiency for the software producers, while society gets the costs of software failures (i.e., issues related to security, safety and poor resilience). The paper has a quantitative examination of 27,000 vulnerabilities disclosed over the period 1996–2008. The paper identifies the interests of several stakeholders in the market of software vulnerabilities, such as the vendors, safety experts/consultants, security information providers (SIPs), and criminals. The paper explores several policies, such as security through obscurity, responsible disclosure of vulnerabilities (the suggested policy) or security through transparency. One of the key insights is that secrecy prevents people from assessing their own risks, which contributes to a false sense of security. In the process of responsible disclosure, the researcher discloses full information to the vendor, expecting that a patch is developed within a reasonable timeframe. An increasing number of vendors and security organizations have adopted some form of responsible disclosure. The role of security information providers (SIPs) as risk communicators in the vulnerabilities market is also discussed.
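The responsible-disclosure process described above can be sketched as a simple timing rule: publish only once the vendor has patched, or once a fixed window has elapsed. The 90-day window below is a common industry convention, not a figure from [24]; the function and dates are invented for illustration.

```python
# Illustrative sketch of responsible disclosure timing: the researcher
# reports privately; full publication waits for a patch or a deadline.

from datetime import date, timedelta

DISCLOSURE_WINDOW = timedelta(days=90)  # assumed convention, not from [24]

def may_publish(reported: date, today: date, patched: bool) -> bool:
    """Publication is acceptable once the vendor has patched,
    or once the disclosure window has elapsed."""
    return patched or (today - reported) >= DISCLOSURE_WINDOW

reported = date(2024, 1, 1)
assert not may_publish(reported, date(2024, 2, 1), patched=False)
assert may_publish(reported, date(2024, 2, 1), patched=True)
assert may_publish(reported, date(2024, 4, 15), patched=False)
```

The deadline is what gives the policy teeth: it preserves the vendor's chance to patch while preventing indefinite secrecy, which, as the paper notes, would leave users unable to assess their own risks.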

In summary, there has been a positive development in identifying the need to explore both safety and security in development and to use resilience as a mitigating strategy. The concept of software ecosystems benefits the developers and industries, but it seems that, at present, society gets the costs of software failures. Responsible disclosure of vulnerabilities to the vendors, expecting a patch, seems to be a beneficial policy. The role of actors in the software ecology, such as security information providers, should be explored further.

#### **3.2. Risk governance of smart cities (software ecosystems) – vulnerabilities and risks**

In [26] a set of vulnerabilities in cars is pointed out, such as the possibility to control a wide range of automotive functions and completely ignore driver input from the dashboard, including disabling the brakes, selectively braking individual wheels on demand, stopping the engine, and so on. The attacks were easy to perform and the effects were significant. It is possible to bypass the rudimentary network security protections within the car, and to perform an attack that embeds malicious code in the car and completely erases any evidence of its presence (after a crash). There is a discussion of the challenges in addressing these vulnerabilities while considering the existing automotive ecosystem.
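One reason the attacks above succeed is that in-vehicle networks historically accept any well-formed message. A rudimentary countermeasure, sketched here only conceptually and not taken from [26], is an allow-list of message identifiers per controller; real automotive stacks are far more complex, and the IDs below are invented.

```python
# Illustrative sketch: drop bus frames whose identifier is not on the
# allow-list for a safety-critical controller. Hypothetical IDs only.

BRAKE_ECU_ALLOWED_IDS = {0x0A0, 0x0A1}   # e.g. pedal sensor, ABS status

def filter_frames(frames, allowed=BRAKE_ECU_ALLOWED_IDS):
    """Keep only frames the brake controller should ever act on."""
    return [f for f in frames if f["id"] in allowed]

frames = [
    {"id": 0x0A0, "data": b"\x10"},   # legitimate pedal reading
    {"id": 0x7FF, "data": b"\xFF"},   # injected frame -> dropped
]
assert [f["id"] for f in filter_frames(frames)] == [0x0A0]
```

An allow-list alone does not stop spoofed frames that reuse a legitimate identifier, which is why the ecosystem-level discussion in [26] matters: authentication of messages requires coordination across many suppliers, not a patch in one controller.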

In [27] semi-autonomous and fully autonomous cars are described as moving from the development stage to actual operations. The autonomous systems create safety and security challenges. These challenges require a holistic analysis, from the perspective of ecosystems of autonomous vehicles; the ecosystem perspective is seen as useful to understand and mitigate security and safety challenges. These systems will become important critical information infrastructures, simultaneously featuring connectivity, autonomy and cooperation. Threat analyses and safety cases should include both (random) faults and (purposeful) attacks.


In [39], there is a discussion of cyber-physical infrastructure risks in future smart cities. Several examples of unwanted incidents are described in transportation systems (autonomous vehicles; trains; …), in electricity distribution and management, and in the water and wastewater systems sector. It is suggested that the regulator work with standards and regulations, in addition to communication and increased engagement through direct assistance. Challenges mentioned are the need to establish goal-based standards and regulations as new technology is implemented, and to focus on the dissemination of best practices in combination with systematic education.

In [17] there is an empirical evaluation of "smart cities," looking at a broad set of technologies for traffic control, management of energy/water/waste, and security. Known vulnerabilities exist in traffic control systems, mobile applications used by citizens, smart grids/smart meters and video cameras. The issues are in line with those in peer-reviewed papers: lack of cyber security testing and approval, lack of encryption, lack of city Computer Emergency Response Teams (CERTs), and lack of cyber attack emergency plans. There are reasons to anticipate serious incidents if these issues are not addressed and mitigated.
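The gaps enumerated above lend themselves to a checklist-style audit. The sketch below is an invented illustration of that idea, not an instrument from [17]; the deployment record and pass/fail encoding are assumptions.

```python
# Illustrative sketch: the gaps reported in [17] as a minimal audit
# checklist for a (hypothetical) smart-city deployment.

CHECKLIST = [
    "cyber security testing and approval",
    "encryption of traffic",
    "city CERT in place",
    "cyber attack emergency plan",
]

def audit(deployment: dict) -> list:
    """Return the checklist items a deployment fails (missing or False)."""
    return [item for item in CHECKLIST if not deployment.get(item, False)]

traffic_system = {
    "encryption of traffic": True,
    "city CERT in place": False,
}
gaps = audit(traffic_system)
assert len(gaps) == 3   # everything except encryption fails
```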

In [20] there is a discussion of the expanded use of federated embedded systems (FES) in automotive and process automation. Expected benefits include the possibility of third-party actors developing add-on functionality, a shorter time to market for new functions, and the ability to upgrade existing products in the field. This is a substantial area for innovation and change; the responsibilities of the manufacturer will change, and a key challenge will be ecosystem management. However, it is suggested that the liabilities and responsibilities for the total product must rest with the manufacturer, and the regulator has a key role in defining responsibilities. These issues highlight the need for risk governance of systems to be used in smart cities.
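If the manufacturer carries liability for the total product, a natural ecosystem-management control is a gate that admits only manufacturer-approved third-party add-ons. This sketch is an assumption-laden illustration of that design point, not a mechanism proposed in [20]; the add-on names and the approval registry are invented.

```python
# Illustrative sketch: admit a third-party add-on into the federated
# system only if the manufacturer approved this exact name and version.

MANUFACTURER_APPROVED = {("lane-assist-plus", "2.1"), ("eco-route", "1.0")}

def admit(addon_name: str, version: str) -> bool:
    """True only for an exact (name, version) the manufacturer signed off."""
    return (addon_name, version) in MANUFACTURER_APPROVED

assert admit("eco-route", "1.0")
assert not admit("eco-route", "1.1")     # unreviewed update
assert not admit("free-tuner", "0.9")    # unknown third party
```

Pinning the exact version matters: an approved add-on that silently updates itself would reintroduce exactly the liability gap the gate is meant to close.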

In [21] an open software ecosystem is proposed as an approach to develop software for embedded systems in the automotive industry. The focus is on the need to deliver functionality to customers faster. The paper describes quality attributes and defines a reference architecture; safety, security and dependability are all explored.

In [22] the architecture of a cloud-based ecosystem is modeled, showing security patterns for its main components, and the value of such an approach is discussed. The ecosystem approach provides a holistic view and is valuable for security by indicating places where security mechanisms can be attached. Holistic views are seen as important for combining quality factors such as safety and reliability with security. By working at this abstraction level, the paper argues, the unified approach reduces complexity (one of the important weaknesses exploited by attackers) and can enable analysis of the propagation of threats and data leaks.

In [28] research on enterprise architectures of ecosystems (i.e., software ecosystems) is covered, discussing resilience and adaptability as a key area and suggesting reference architectures that mention security. However, safety is not mentioned.

In [25] there is a discussion of how to build robust and evolvable resilient software systems, covering redundant data structures, transformer middleware and service-oriented communities. The use of transformer middleware may lead to more complex systems and higher costs or latency. Exploration of service-oriented communities may support adaptation and the spontaneous emergence of resilience, but may lead to higher costs due to a high degree of redundancy and challenges with deterministic behavior.
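The redundant-data-structure idea above can be made concrete with a textbook triple-replica scheme: store several copies of a value and read it back by majority vote, so corruption of one replica does not corrupt the read. This is a generic illustration, not the specific design in [25].

```python
# Illustrative sketch: resilience via a redundant data structure.
# Three replicas and a majority-vote read mask a single corruption,
# at the cost of triple storage (the redundancy cost noted above).

from collections import Counter

class RedundantCell:
    def __init__(self, value, replicas: int = 3):
        self._replicas = [value] * replicas

    def corrupt(self, index: int, bad_value):
        self._replicas[index] = bad_value   # simulate a fault

    def read(self):
        """Majority vote across replicas masks a single corruption."""
        return Counter(self._replicas).most_common(1)[0][0]

cell = RedundantCell(42)
cell.corrupt(1, 13)
assert cell.read() == 42
```

The trade-off the paper points to is visible even here: resilience is bought with redundancy (three copies for one value), which is exactly the cost concern raised for service-oriented communities.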

In summary, several vulnerabilities have been documented in smart cities, intelligent transport systems and autonomous cars. However, software ecosystems have beneficial elements, since more actors develop functionality, enabling a shorter time to market. Liabilities must rest with the manufacturer, and the regulator must define responsibilities. The ecosystem provides a holistic view that is seen as important for combining safety and reliability with security. It is argued in several papers that this approach reduces complexity (one of the weaknesses exploited by attackers) and can enable improved analysis of the propagation of threats and data leaks.

#### **3.3. Key issues in governance—responsibilities, management and communication**

International governance of the security of the infrastructure of software ecosystems is addressed through several channels, such as standards (ISO, IEC) or international bodies such as the OECD, EU, NATO and UN. Software ecosystems are international, involving many actors with different agendas. In [40] there is a discussion of governance of emerging technology (such as IoT) as it is integrated into critical infrastructure. It is suggested that manufacturers should follow the principle of privacy and security by design when developing new products, and must be prepared to accept legal liability for the quality of the technology they produce. Buyers should collectively demand that manufacturers respond effectively to concerns about privacy and security. Governments can play a positive role by incorporating minimum security standards in their procurement. It is suggested that government regulations should require routine, transparent reporting of technological problems, to provide the data required for a transparent, market-based cyber-insurance industry. It is also suggested to establish an agreement (a compact) based on collaboration between government, industry and civil society, supporting evidence-based decision making.

In [19] the focus is on software assurance of safety-critical and security-critical software (i.e., conceptualized as a SEC). The perception is that the use of current methods has not achieved the desired level of protection, and that security principles and standards are missing; the industry continues to see an expansion of major breaches in both the public and private sectors. There is a need for incentives or regulations for implementing protective and immunizing measures. Such measures could be a mandatory part of the security architecture of all applications, and a formal requirement could be that the implementation of protective and immunizing measures is included in any certification process. On governance, it is suggested to establish software assurance standards at the UN level, to take a risk-based approach, to share best-of-breed methods, and to discuss liabilities for damages occurring as a result of an attack or security-related errors.

In [18] the issue of information security is highlighted in national governance. The authors propose a comprehensive conceptual framework for building a robust, resilient and dependable information security infrastructure, based on the perspective of software ecosystems.

Development of security and resilience is seen as a maturity process in [41], referencing the CERT Resilience Management Model (CERT-RMM) from the Carnegie Mellon Software Engineering Institute. Resilience as a strategy is not simple to implement: in [42], an analysis of resilience strategies in US agencies revealed that most of the plans focus on only a few of the stages of resilience. The plans do not address resilience in the information and social domains, and do not consider long-term adaptation.

In [23] resilience is discussed as a high-level design principle. The argument is that distributed systems composed of independent yet interactive elements may deliver equivalent or better functionality with greater resilience. Guidelines for resilience are given, such as robustness through resilience rather than resistance, and intervention rather than control. The article argues for taking an ecological perspective in system design and deployment, and describes a design methodology at the ecosystem level based on resilience principles.
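As a minimal sketch of this principle (our own hypothetical illustration, not code from [23]): a function served by several independent, interchangeable elements keeps delivering its functionality as long as any one element responds, favoring intervention (retry elsewhere) over control.

```python
# Hypothetical sketch: resilience through independent, interchangeable elements.
# Failure of individual elements degrades capacity, not functionality.

def resilient_call(replicas, request):
    """Try each independent replica in turn; tolerate individual failures."""
    errors = []
    for replica in replicas:
        try:
            return replica(request)
        except Exception as exc:  # intervention rather than control
            errors.append(exc)
    raise RuntimeError(f"all {len(replicas)} replicas failed: {errors}")

# Example: two failed elements and one healthy one still deliver the service.
def broken(_):
    raise ConnectionError("element down")

def healthy(req):
    return f"handled:{req}"

result = resilient_call([broken, broken, healthy], "sensor-reading")
```

The design choice mirrors the guideline above: the system does not resist the failure of an element, it absorbs it and routes around it.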

In [43] there is a discussion of the development of software systems and the consequences of ignoring the software ecosystem perspective. If we want systems that are secure and reliable, security and reliability must be built in together. Applications, middleware and operating systems must be built in the same way, to obtain systems that are inherently secure, can withstand attacks from malicious applications and resist errors. The suggested approach is based on security patterns that are mapped through the architectural levels.
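A minimal sketch of this layered use of a security pattern (our own hypothetical illustration, not the pattern catalog of [43]): the same authorization pattern is applied at the application level and again at the data level, so an application that bypasses one layer still meets the check at the next.

```python
# Hypothetical sketch: one security pattern (an authorization check) mapped
# onto two architectural levels, so the layers do not trust each other blindly.

AUTHORIZED = {"alice"}  # illustrative access list, not a real policy store

def authorize(user):
    """The shared security pattern: a single, reusable authorization check."""
    if user not in AUTHORIZED:
        raise PermissionError(f"{user} is not authorized")

def data_layer_read(user, key, store):
    authorize(user)  # pattern applied at the data/storage level
    return store[key]

def application_layer_read(user, key, store):
    authorize(user)  # pattern applied again at the application level
    return data_layer_read(user, key, store)

store = {"config": "v1"}
print(application_layer_read("alice", "config", store))  # both checks pass
```

Because the pattern is mapped onto every level, a malicious component calling `data_layer_read` directly gains nothing by skipping the application layer.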

#### **3.4. Key issues related to methods of risk assessments**

In [28] they cover research on enterprise architectures of ecosystems (i.e., software ecosystems), discussing resilience and adaptability as a key area, and suggest reference architectures mentioning security. However, safety is not mentioned.

In [25] there is a discussion of how to build robust and evolvable resilient software systems, covering redundant data structures, transformer middleware and service-oriented communities. The use of transformer middleware may lead to more complex systems and higher costs or latency. Exploration of service-oriented communities may support adaptation and the spontaneous emergence of resilience, but may lead to higher costs due to a high degree of redundancy and to challenges with deterministic behavior.

In summary, several vulnerabilities have been documented in smart cities, intelligent transport systems and autonomous cars. However, software ecosystems have beneficial elements, since more actors are developing functionality, enabling a shorter time to market. Liabilities must rest on the manufacturer, and the regulator must define responsibilities. The ecosystem provides a holistic view that is seen as important for combining safety and reliability with security. It is argued in several papers that this approach reduces complexity, one of the weaknesses used by attackers, and can enable improved analysis of the propagation of threats and data leaks.
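The redundant data structures discussed in [25] can be illustrated by a small hypothetical sketch (ours, not from [25]): a value is kept in several copies and read back by majority vote, so corruption of a single copy does not corrupt the result.

```python
# Hypothetical sketch of a redundant data structure: each value is kept in
# three copies and reads are resolved by majority vote, masking a single
# corrupted copy (at the cost of extra storage, the trade-off noted in [25]).

from collections import Counter

class RedundantCell:
    COPIES = 3

    def __init__(self, value):
        self._copies = [value] * self.COPIES

    def write(self, value):
        self._copies = [value] * self.COPIES

    def read(self):
        """Return the majority value across the stored copies."""
        value, votes = Counter(self._copies).most_common(1)[0]
        if votes <= self.COPIES // 2:
            raise ValueError("no majority: data unrecoverable")
        return value

cell = RedundantCell(42)
cell._copies[1] = 99   # simulate corruption of one copy
print(cell.read())     # the majority vote still yields 42
```

With two of three copies corrupted to different values the majority is lost and the read fails loudly, which is the deterministic-behavior challenge mentioned above.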

The papers identified that the risk assessment was complex; thus, there is a need to use methods that integrate the following issues:


In summary, if we want systems that are secure and reliable, security and reliability must be built in together. The suggested approach is based on security patterns that are mapped through the architectural levels of the system. However, international regulation, or compacts based on public-private partnerships, to ensure privacy, safety, security and resilience is missing. Vendors must ensure this quality by design, and must be prepared to accept legal liability for the quality of the technology they produce. Regulations should require routine, transparent reporting of technological problems to provide data for a transparent, market-based cyber-insurance industry. There is an argument for resilience in systems composed of independent yet interactive elements, which may deliver equivalent or better functionality with greater resilience. However, the maturity of resilience in use varies.

### **4. Summary**

In this review, we have applied the concept of software ecosystems to the systems used in smart cities. Our review indicates that "smart cities" are vulnerable and subject to increased emerging risks due to the introduction of new technology (such as autonomous transport systems), unsecured components and new connections that have not been foreseen or thought of. New threats, vulnerabilities and unwanted incidents are emerging and can be observed through media attention and exploration.

Software assurance of safety-critical and security-critical software (conceptualized as the software system ecosystem) is strongly needed. Current methods have not achieved the necessary level of protection, and security principles and standards are missing. The industry continues to see an expansion of major breaches in both the public and private sectors. Incentives or regulations are needed to implement protective and immunizing measures.

The ecosystem approach seems promising since it provides a holistic view of security needs by indicating places where security mechanisms can be attached. This approach reduces complexity, one of the important weaknesses used by attackers, and can enable analysis of the propagation of threats and data leaks.

Due to the increased proliferation of the IoT and the vulnerability of the Internet, there is a strong need to establish a social compact (agreement) ensuring that the Internet continues to be accessible, inclusive, secure and trustworthy. To ensure that all actors in the value chain understand the vulnerabilities and the risks, a silo-based "need to know" principle must be replaced by transparent and open reporting. This may support a market-based cyber-insurance industry.

In the body of literature, and in [32], there is an increased understanding of the need for collaboration between the safety and security disciplines to understand and mitigate risks and vulnerabilities. The differences in perspective between security and safety are due to different adversity models: the security community addresses threats (directed, deliberate, hostile acts), whereas the safety community addresses hazards (undirected events). Software ecosystems are so pervasive across all sectors of economic activity that this silo approach can no longer be regarded as acceptable.

There is a need for international rulemaking and regulation, although this may be difficult to achieve. Vendors must ensure safety, security and resilience by design, and must be prepared to accept legal liability for the quality of the technology they produce. Prescriptive and detailed rulemaking at a national level is missing and is difficult to achieve; this is an international challenge. The Mirai distributed denial of service (DDoS) attack was due to components produced in China but used in the US. No penetration testing, acceptance testing or robustness testing was performed prior to the release of the products.

In general, there is a need to establish functional standards, responsibilities and liabilities, and practices across countries. There must be a specific responsibility on the producer to ensure safety, security and resilience, and ideally a formal process of product acceptance, certification or safety case examination before a product can be sold or offered. Thus, there is a need for regulatory action from government to set minimum standards, establish responsibility and follow up incidents/accidents. Suppliers should establish a proactive focus on (best practice) safety/security standards.

In **Table 3**, we have exemplified critical ecosystems such as smart cities/intelligent transport systems. Based on our review so far, these critical systems have no mandated test criteria (neither safety cases nor security cases; thus, this is described as "Poor"), and there are no organizations such as CERTs to handle and systematize unwanted incidents.


**Table 3.** Critical digital ecosystems and learning.


Development of safety has often depended on the exploration of publicized accidents and incidents, and on a systematic learning loop between users, the regulator and industry. An important component in the learning loop of software systems has been structured reporting and analysis of incidents through computer emergency response teams, i.e., CERTs. There is a need to regulate and ensure that new technology is approved/tested (has some sort of quality control/safety case examination) and that there is some sort of structured learning process when incidents happen.
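As a hypothetical illustration of such structured reporting, an incident could be captured in a minimal machine-readable record that CERTs, insurers and regulators can aggregate; the field names below are our own assumption, not taken from any reporting standard.

```python
# Hypothetical sketch of a structured, shareable incident record; the fields
# are illustrative assumptions, not drawn from any existing CERT schema.

import json
from dataclasses import dataclass, asdict

@dataclass
class IncidentReport:
    vendor: str
    component: str
    category: str        # e.g., "vulnerability", "outage", "data-leak"
    discovered: str      # ISO 8601 date
    impact: str
    mitigation: str

report = IncidentReport(
    vendor="ExampleVendor",
    component="smart-meter-firmware",
    category="vulnerability",
    discovered="2017-06-01",
    impact="remote code execution on exposed devices",
    mitigation="firmware update 2.1 released",
)

# Open, transparent publication as JSON rather than silo-based "need to know".
print(json.dumps(asdict(report), indent=2))
```

A shared, structured format is what turns individual incidents into a learning loop: the same record can feed statistics, regulation follow-up and insurance pricing.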

Software ecosystems will be exposed to new strains as new unsecured technology is introduced; thus, there must be an increased focus on how to handle surprises, i.e., on resilience and adaptability in software ecosystems, to ensure that new demands/stress/failures do not impact the infrastructure in a catastrophic way. In the review of resilience [44], there is an increased use of resilience in papers from 2006 onwards. The resilience concepts are in development, and there is a need to be careful not to place the responsibility for resilience on the individual (i.e., expecting resilience from an individual only). Resilience is the integrated ability of the ecosystem as a whole, consisting of an interplay between technological abilities, organizational abilities and human abilities. During the review process, several issues were not addressed adequately and are in need of further research, such as:


• There has not been an exploration of the different actors that can affect safety, security and resilience in smart cities (i.e., software ecosystems). Such an exploration should give insight into how to improve safety, security and resilience of systems, and how liabilities should be placed.

• There has been no systematic discussion of the maturation of resilience in smart cities (specifically in software ecosystems), discussing technology, organizations and human awareness/human actions together.

• There have been few definitions of patterns of resilience in smart cities and related software ecosystems, and of how these can be used at an architectural level. There is a missing discussion of how smart cities/critical ecosystems can become resilient, based on patterns.

• How to perform ecosystem management of the development of federated embedded systems (FES) used in smart cities (i.e., transportation, automotive systems…).

### **Author details**

Stig O. Johnsen1,2\*

\*Address all correspondence to: stig.ole.johnsen@ntnu.no

1 NTNU, Faculty of Information Technology, Trondheim, Norway

2 SINTEF Technology and Society, Safety Research, Trondheim, Norway

### **References**

[1] Caragliu A, Del Bo C, Nijkamp P. Smart cities in Europe. Journal of Urban Technology. 2011;**18**(2):65-82. DOI: 10.1080/10630732.2011.601117

[2] Chourabi H, Nam T, Walker S, Gil-Garcia JR, Mellouli S, Nahon K, et al. Understanding smart cities: An integrative framework. In: IEEE Computer Society, Proceedings of the 45th Hawaii International Conference; 2012. pp. 2289-2297

[3] Manikas K, Hansen KM. Software ecosystems—A systematic literature review. Journal of Systems and Software. 2013;**86**(5):1294-1306

[4] Boston Consulting Group. The Internet Economy in the G-20. 2012. Retrieved from: www.bcgperspectives.com

[5] DoD. U.S. Department of Defense. Standard Practice for System Safety. MIL-STD-882D; 2000

[6] Pietre-Cambacedes L, Chaudet C. The SEMA referential framework: Avoiding ambiguities in the terms "security" and "safety". International Journal of Critical Infrastructure Protection. 2010;**3**:55-66

[7] Firesmith DG. Common Concepts Underlying Safety, Security, and Survivability Engineering. Technical Note CMU/SEI-2003-TN-033. Pittsburgh, USA: Carnegie Mellon University; 2003

[22] Fernandez EB, Yoshioka N, Washizaki H. Patterns for security and privacy in cloud ecosystems. In: Proceedings of the 23rd IEEE International Requirements Engineering Conference, Ottawa, ON, Canada; 2015. pp. 24-28

[23] Fiksel J. Designing resilient, sustainable systems. Environmental Science & Technology. 2003;**37**(23):5330-5339

[24] Frei S, Schatzmann D, Plattner B, Trammell B. Modeling the security ecosystem—The dynamics of (in)security. In: Economics of Information Security and Privacy. Boston, MA: Springer; 2010. pp. 79-106

[25] De Florio V. Robust-and-evolvable resilient software systems: Open problems and lessons learned. In: Proceedings of the 8th Workshop on Assurances for Self-Adaptive Systems, ACM; 2011, September. pp. 10-17

[26] Koscher K, Czeskis A, Roesner F, Patel S, Kohno T, Checkoway S, Savage S. Experimental security analysis of a modern automobile. In: 2010 IEEE Symposium on Security and Privacy; 2010, May. pp. 447-462

[27] Lima A, Rocha F, Völp M, Esteves-Veríssimo P. Towards safe and secure autonomous and cooperative vehicle ecosystems. In: Proceedings of the 2nd ACM Workshop on Cyber-Physical Systems Security and Privacy, ACM; 2016. pp. 59-70

[28] Zimmermann A, Schmidt R, Jugel D, Möhring M. Evolving enterprise architectures for digital transformations. DEC. 2015;**15**:25-26

[29] Sharkov G. From cybersecurity to collaborative resiliency. In: Proceedings of the 2016 ACM Workshop on Automated Decision Making for Active Cyber Defense, ACM; 2016, October. pp. 3-9

[30] Renn O. Risk Governance—Towards an Integrative Approach. White Paper no. 1. International Risk Governance Council; 2005

[31] Piggin RSH. Process safety and cyber security convergence: Lessons identified, but not learnt? In: IET Conference Proceedings. The Institution of Engineering & Technology; 2013

[32] Bryant IRC. Towards a trustworthy software ecosystem. In: International Software Quality Management Conference (SQM); 2012. pp. 1-7

[33] Kriaa S, Pietre-Cambacedes L, Bouissou M, Halgand Y. A survey of approaches combining safety and security for industrial control systems. Reliability Engineering & System Safety. 2015;**139**:156-178

[34] Carlsson B, Jacobsson A. Om säkerhet i digitala ekosystem [On Security in Digital Ecosystems]. Lund, Sweden: Studentlitteratur AB; 2012

[35] Longstaff PH, Armstrong NJ, Perrin K, Parker WM, Hidek MA. Building resilient communities: A preliminary framework for assessment. Homeland Security Affairs. 2010;**6**(3):1-22

[36] Sheard S, Mostashari A. A framework for system resilience discussions. In: Proceedings of the Eighteenth Annual International Symposium of INCOSE; 2008


**Section 3**
