**3. Practical application of the ResilienceCube and the methodology for resilience assessment**

The indicator-based resilience concept described above enables practical assessment of the following aspects of resilience (**Figure 9**):


*Resilience and Situational Awareness in Critical Infrastructure Protection… DOI: http://dx.doi.org/10.5772/intechopen.97810*

#### **Figure 9.**

*Applying the methodologies in order to assess resilience and obtain practical (quantitative) results.*

7. Checking resilience: Stress-testing

8. Optimizing resilience: Multi-Criteria Decision Making (MCDM)

For end users, the methodologies are embedded in the interactive, web-based, and freely available "ResilienceTool". Applied in different case studies dealing with energy, transportation, health, smart cities, water, sensitive installations, etc., the methodology and tool offer the user different options for applying the approach and the system, showing how benchmarking can be done and how best-practice solutions can be re-used.

When applying the concept and the methodologies practically, it is important to understand that their flexibility necessarily demands domain expertise in "configuring" the resilience model for a specific area/city or critical infrastructure. A fixed list of critical infrastructures for cities in Europe does not exist, so it must be up to each user of the concept, methodologies, and software tool to decide which features of the respective infrastructures should be analyzed and how. Similarly, no fixed list of threats exists, neither on the area level nor for the single critical infrastructures. Thus, it is up to the users to define which threats (scenarios) they consider relevant. Domain experts are needed to define the important issues and how to measure them, i.e. to identify the indicators. They are, in a way, "configuring" the resilience model, which is largely a one-time effort prior to using the model for calculating the resilience levels, although some adjustments, tuning, and reconsiderations are expected. Thus, in the implementation phase, it is important to have close collaboration between the users, the method developers, and the IT developers (of calculation and presentation tools).
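To make the "configuration" step concrete, the following sketch shows one plausible way such a user-defined resilience model could be structured before any calculation takes place. All names (infrastructure, threats, issues, indicators, weights) are illustrative assumptions, not content of the SmartResilience methodology itself:

```python
# Hypothetical configuration of a resilience model, as a domain expert
# might define it: threats, issues, and indicators are all user-chosen.
resilience_model = {
    "infrastructure": "city water supply",          # assumed example
    "threats": ["flooding", "cyber attack on SCADA"],
    "issues": {
        "preparedness": {
            "indicators": ["share of staff trained (%)",
                           "emergency plan age (months)"],
            "weight": 2.0,
        },
        "redundancy": {
            "indicators": ["number of independent supply lines"],
            "weight": 1.0,
        },
    },
}

# A simple consistency check before the model is used for calculation:
# every issue must have at least one indicator defined.
assert all(issue["indicators"] for issue in resilience_model["issues"].values())
```

Because this configuration is largely a one-time effort, validating it once up front (as in the final check) is cheaper than discovering gaps during assessment.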

### **3.1 Resilience index/cube, resilience level (RL), functionality level (FL) and multi-level resilience assessment**

Per default, assessing resilience in the concept is based on scoring (other ways of upwards aggregation are possible, but used only in "expert mode"), the scores being aggregated upwards – up to the Resilience Index score. At each level, the scores can be assigned weights, as can the indicators. When performing the resilience assessment, the indicators' real values are entered into the calculation, and the issue scores are obtained as weighted averages of the indicator scores. It is possible to let a specific indicator overrule the effect of the other indicators, i.e. to have "knock-out indicators" whose low values are not "averaged away" in the weighted average over all the indicators. The reasoning behind the selected scales is that a scale from 0 to 5 for indicators (and issues) is sufficiently broad, especially if expert judgments are needed to provide scores for the indicators (or directly for the issues) in case of lack of data [17]. This has similarities to the use of safety integrity levels (SIL) for safety-instrumented systems [30]. In cases where the issue-indicator approach is not sufficient, the concept and the tool allow using multi-level indicators (de facto composite indicators).
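The aggregation just described can be sketched in a few lines. This is a minimal illustration, not the ResilienceTool's implementation; in particular, the knock-out threshold of 2.0 on the 0–5 scale is an assumption made here for the example:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    score: float            # score on the 0..5 scale
    weight: float = 1.0
    knock_out: bool = False
    ko_threshold: float = 2.0   # assumed threshold for knock-out behavior

def issue_score(indicators):
    """Weighted average of indicator scores, unless a knock-out fires."""
    # A knock-out indicator with a low score overrules the average,
    # so the low value is not "averaged away".
    for ind in indicators:
        if ind.knock_out and ind.score < ind.ko_threshold:
            return ind.score
    total_w = sum(ind.weight for ind in indicators)
    return sum(ind.score * ind.weight for ind in indicators) / total_w

# Example: three indicators, the last one a knock-out with a low score.
inds = [Indicator(4.0, weight=2.0),
        Indicator(3.0, weight=1.0),
        Indicator(1.0, weight=1.0, knock_out=True)]
print(issue_score(inds))  # knock-out fires: 1.0 (weighted average would be 3.0)
```

The same pattern repeats upwards: issue scores are in turn weighted and averaged into dimension scores and finally into the Resilience Index score.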

The analysis has included the web-semantics-based analysis of the descriptions of indicators and the statistical analysis of the values of these indicators in the case studies performed in the SmartResilience project. The analysis has also served as the basis for the more user-oriented visualization of interdependencies in a critical infrastructure.

### **3.3 Comparing resilience of different infrastructures: benchmarking**

Using issues and indicators from pre-approved and standardized sources such as the CORE and Recommended DCLs allows for the additional benefit of benchmarking certain aspects of resilience management across different organizations. As the CORE issues are expected to be present in every Complete DCL, organizations can at the very least be compared based on managerial, resilience-oriented activities and processes, regardless of industry or threat. Within a particular scenario (industry and threat), Complete DCLs can be benchmarked using the Recommended issues proposed by the industry's experts. Once the CORE DCL issues are selected, the user can make an actual resilience assessment by adding the indicators under the CORE issues. Since Recommended DCLs have been developed for all of the case studies, one can examine those lists and choose which of their indicators fit into the CORE DCL. The names of the issues in a Recommended DCL may differ slightly from the CORE ones, so not all of the previously used indicators will necessarily fit; in that case, the user should use only those which match the CORE issue. Furthermore, new indicators (not used in the Recommended DCL) may need to be added in order to ensure sufficient coverage of the CORE issue.

### **3.4 Checking resilience: stress testing**

The stress test framework is used to test whether, in a given threat situation, the smart critical infrastructure is, or will be, resilient enough to continue functioning within the prescribed limits. The FL curve(s) obtained in the analysis are compared with the stress test criteria and limits in order to evaluate whether the smart critical infrastructure has passed or failed the stress test. In order to perform the stress test, the user needs to decide on the thresholds/limits representing acceptable/non-acceptable values for each criterion. The stress test criteria can be related to (e.g.):

• Functionality Level

• Time (to absorb, to recover)

• Cumulative loss of functionality (area)

*Functionality Level ("vertical loss"):* the stress test limits can be set based on the overall functionality level, at single functionality element(s), and/or at single functionality indicator(s). The limit could be a certain minimum level of functionality (i.e. the lowest point of the resilience curve should be above this FLmin). The functionality level at the lowest point of the curve is sometimes referred to as "robustness," which can be set as a stress test limit.

*Time ("horizontal loss"):* when subjected to a threat/event, limits may be set on time for a smart critical infrastructure (e.g. maximum time to absorb the event, maximum time to partially recover after the event, or maximum time to fully recover).
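These pass/fail checks against an FL curve can be sketched as follows. This is a minimal illustration under assumed data shapes (an evenly sampled FL time series on a 0–100 scale), not the project's actual stress-testing tool:

```python
def stress_test(times, fl, fl_min, t_max_recover, max_area_loss, fl_normal=100.0):
    """Per-criterion pass/fail for one functionality-level (FL) curve.

    times .......... equally spaced sample times (e.g. hours)
    fl ............. functionality level at each sample (0..100)
    fl_min ......... vertical limit: lowest tolerated FL ("robustness")
    t_max_recover .. horizontal limit: latest time FL must be back to normal
    max_area_loss .. limit on cumulative loss of functionality (area)
    """
    dt = times[1] - times[0]
    robustness = min(fl)                      # lowest point of the curve
    dip = fl.index(robustness)
    # Time of full recovery: first sample at/above normal after the dip.
    recover_t = next((t for t, f in zip(times[dip:], fl[dip:]) if f >= fl_normal),
                     float("inf"))
    # Cumulative loss of functionality: area between normal level and curve.
    area_loss = sum((fl_normal - f) * dt for f in fl)
    return {
        "functionality": robustness >= fl_min,     # vertical loss criterion
        "time": recover_t <= t_max_recover,        # horizontal loss criterion
        "cumulative_loss": area_loss <= max_area_loss,
    }

# Example curve: a drop to 60% at t=2, fully recovered by t=5.
times = [0, 1, 2, 3, 4, 5, 6]
fl    = [100, 90, 60, 70, 85, 100, 100]
print(stress_test(times, fl, fl_min=50, t_max_recover=6, max_area_loss=120))
# all three criteria pass for these (assumed) limits
```

The infrastructure passes the stress test only if every criterion the user has chosen to apply evaluates to pass.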
