**5. Methodologies characterization and comparative assessment**

A very good description of the various methodologies proposed so far and currently available in the open literature is given in [5].

The earliest significant effort to quantify the reliability of such systems is represented by a methodology known as REPAS (Reliability Evaluation of Passive Systems) [6], which was developed in the late 1990s cooperatively by ENEA, the University of Pisa, the Polytechnic of Milan and the University of Rome, and was later incorporated in the EU (European Union) RMPS (Reliability Methods for Passive Systems) project. This methodology is based on the evaluation of the failure probability of a system to carry out the desired function, derived from the epistemic uncertainties of those physical and geometric parameters which can cause a failure of the system.

42 Nuclear Power – Practical Aspects

The analysis requires the identification of the relevant failure modes - each characterized by a critical parameter, such as non-condensable fraction, leak rate, partial opening of the isolation valve, heat exchanger plugged pipes, etc. - and of the associated failure criterion. Thus each basic event model pertaining to the relevant failure mode requires the assignment of both the probability distribution and the range of the corresponding parameter, and the definition of the critical interval defining the failure (for example, failure for non-condensable fraction > x %, leak rate > x g/s, crack size > x cm² and so on). In order to evaluate the overall probability of failure of the system, the single failure probabilities are combined according to:

Pet = 1.0 - ((1.0 - Pe1)\*(1.0 - Pe2)\*...\*(1.0 - Pen)) (1)

where:

- Pet: overall probability of failure
- Pe1 through Pen: individual probabilities of failure pertaining to each failure mode, assumed mutually non-exclusive independent events

The failure model relative to each single basic event is given by:

Pei = ∫x>xo pi(x) dx (2)

where:

- pi(x): probability distribution function of the parameter x
- xo: threshold value according to the failure criterion

It's worth noting that the assumed failure criterion, based on the failure threshold for each path, implies neglecting the "intermediate" modes of operation of the system or, equivalently, the degraded performance of the system (up to the failure point): this gives no credit to a passive system that "partially works", i.e. one that has failed with respect to its intended function but still provides some operation. Such operation could be sufficient to prolong the window of opportunity for recovering a failed system, for instance through a redundancy configuration, and ultimately prevent or arrest core degradation.

Once the probabilistic distributions of the parameters are assigned, the reliability of the system can be directly obtained from (1), provided a failure criterion is assigned and the single failure probabilities are evaluated through (2): this is accomplished by assigning both the range and the probability distributions, based on expert judgment and engineering assessment. In fact, as further illustrated, difficulties arise in assigning both the range and the probability density functions relative to the critical parameters defining the failure modes, in addition to the definition of a proper failure criterion, because of the lack of operational experience and data.
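Equations (1) and (2) can be sketched numerically as follows; the Gaussian shape of the pdfs and all parameter values are illustrative assumptions, not REPAS data:

```python
from math import erf, sqrt

def pe_i(mean, std, x_threshold):
    """Eq. (2): P(x > xo) for a single failure mode, assuming an
    illustrative Gaussian pdf pi(x) with the given mean and std."""
    z = (x_threshold - mean) / (std * sqrt(2.0))
    return 0.5 * (1.0 - erf(z))  # Gaussian upper-tail probability

def pe_t(pe_list):
    """Eq. (1): overall failure probability Pet for mutually
    non-exclusive, independent failure modes Pe1..Pen."""
    survival = 1.0
    for pe in pe_list:
        survival *= 1.0 - pe
    return 1.0 - survival

# Illustrative (made-up) failure modes: (mean, std, threshold xo)
modes = [
    (0.005, 0.002, 0.01),  # e.g. failure for non-condensable fraction > 1%
    (0.5, 0.3, 1.5),       # e.g. failure for leak rate > 1.5 g/s
]
system_pf = pe_t([pe_i(m, s, xo) for m, s, xo in modes])
```

Each `pe_i` call evaluates the tail integral of Eq. (2) in closed form for the assumed Gaussian; `pe_t` then combines the modes per Eq. (1).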

The RMPS methodology, described in [7], was developed to address the following problems: 1) identification and quantification of the sources of uncertainties and determination of the important variables; 2) propagation of the uncertainties through thermal-hydraulic (T-H) models and assessment of passive system unreliability; and 3) introduction of passive system unreliability in accident sequence analyses. In this approach, the passive system is modelled by a qualified T-H code (e.g. CATHARE, RELAP) and the reliability evaluation is based on the results of code runs, whose inputs are sampled by Monte-Carlo (M-C) simulation. This approach provides a realistic assessment of the passive system reliability, thanks to the flexibility of the M-C simulation, which adapts to the T-H model complexity without resorting to simplifying approximations. In order to limit the number of T-H code runs required by M-C simulation, alternative methods have been proposed, such as variance reduction techniques, first- and second-order reliability methods and response surface methods.

The RMPS methodology has been successfully applied to passive systems utilizing natural circulation in different types of reactors (BWR, PWR and VVER). A complete example of application, concerning the passive residual heat removal system of a CAREM reactor, is presented in [8].

The RMPS methodology also tackles an important problem, which is the integration of passive system reliability in a PSA study. So far, in the PSAs of existing innovative nuclear reactor projects, only passive system component failure probabilities are taken into account, disregarding the physical phenomena on which the system is based, such as natural circulation. The first attempts performed within the framework of RMPS have taken into account the failures of the components of the passive system as well as the impairment of the physical process involved, as basic events in a static event tree, as exposed in [7].
Two other steps have been identified after the development of the RMPS methodology where an improvement was desirable: the inclusion of a formal expert judgment (EJ) protocol to estimate distributions for parameters whose values are either sparse or not available, and the use of efficient sensitivity analysis techniques to estimate the impact of changes in the input parameter distributions on the reliability estimates.
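The M-C propagation step at the core of RMPS can be sketched as follows; the toy algebraic surrogate, the parameter distributions and the success criterion are illustrative assumptions standing in for runs of a qualified T-H code such as CATHARE or RELAP:

```python
import random

def surrogate_th_model(heat_power_kw, loop_resistance, inlet_temp_c):
    """Toy stand-in for one T-H code run: returns a natural-circulation
    mass flow (kg/s). Purely illustrative physics, not a real code."""
    driving_head = max(heat_power_kw * 0.04 - (inlet_temp_c - 20.0) * 0.01, 0.0)
    return (driving_head / loop_resistance) ** 0.5

def mc_unreliability(n_runs=10_000, flow_criterion=1.0, seed=0):
    """Sample the uncertain inputs, run the model once per sample, and
    count the fraction of runs violating the success criterion."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_runs):
        power = rng.gauss(100.0, 10.0)      # kW (assumed distribution)
        resistance = rng.uniform(1.0, 3.0)  # hydraulic resistance (assumed)
        t_in = rng.gauss(30.0, 5.0)         # deg C (assumed)
        if surrogate_th_model(power, resistance, t_in) < flow_criterion:
            failures += 1
    return failures / n_runs
```

In a real RMPS application each inner call is a full T-H code run, which is why variance reduction and response surface methods are attractive for keeping `n_runs` tractable.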

R&D in the United States on the reliability of passive safety systems was not as active, at least until the mid 2000s. A few published papers from the Massachusetts Institute of Technology (MIT) have demonstrated their development of approaches to the issue. The MIT research has taken a similar approach to the European efforts but has focused on a different set of reactor technologies: it has examined thermal-hydraulic uncertainties in passive cooling systems for Generation IV gas-cooled reactors, as described in [9,10]. Instead of post-design probabilistic risk analysis for regulatory purposes, the MIT research seeks to leverage the capabilities of probabilistic risk assessment (PRA) to improve the design of the reactor systems early in their development life cycle.

Reliability of Passive Systems in Nuclear Power Plants 45


In addition to the RMPS approach, a number of alternative methodologies have been investigated for the reliability assessment of T-H passive systems.

Three different methodologies have been proposed by ENEA (Italian National Agency for New Technologies, Energy and Sustainable Economic Development). In the first methodology [11], the failure probability is evaluated as the probability of occurrence of different independent failure modes, a priori identified as leading to the violation of the boundary conditions or physical mechanisms needed for successful passive system operation.

This approach based on independent failure modes introduces a high level of conservatism: the resulting probability of failure of the system is comparatively high, because the various modes of failure combine as in a series system, where a single fault is sufficient to challenge the system performance. The corresponding value of the probability of failure can therefore be conservatively assumed as the upper bound for the unavailability of the system, within a sort of "parts-count" reliability estimation.

In the second methodology, [12], the modelling of the passive system is simplified by linking it to the modelling of the unreliability of the hardware components of the system: this is achieved by identifying the hardware failures that degrade the natural mechanisms upon which the passive system relies, and by associating with them the unreliability of the components designed to assure the best conditions for passive function performance.

Thus, the probabilities of degraded physical mechanisms are reduced to unreliability figures of the components whose failures challenge the successful passive system operation. If, on the one hand, this approach may in theory represent a viable way to address the matter, on the other hand, some critical issues arise with respect to the effectiveness and completeness of the performance assessment over the entire range of possible failure modes that the system may potentially undergo and their association to corresponding hardware failures. In this simplified methodology, degradation of the natural circulation process is always related to failures of active and passive components, not acknowledging, for instance, any possibility of failure just because of unfavourable initial or boundary conditions. In addition, the fault tree model adopted to represent the physical process decomposition is used as a surrogate model to replace the complex T-H code that models the system behaviour. This decomposition is not appropriate to predict interactions among physical phenomena and makes it extremely difficult to realistically assess the impact of parametric uncertainty on the performance of the system.

The third approach is based on the concept of functional failure, within the reliability physics framework of load-capacity exceedance [7,13,14]. The functional reliability concept is defined as the probability of the passive system failing to achieve its safety function as specified in terms of a given safety variable crossing a fixed safety threshold, leading the load imposed on the system to overcome its capacity. In this framework, probability distributions are assigned to both safety functional requirement on a safety physical parameter (for example, a minimum threshold value of water mass flow required to be circulating through the system for its successful performance) and system state (i.e., the actual value of water mass flow circulating), to reflect the uncertainties in both the safety thresholds for failure and the actual conditions of the system state. Thus the mission of the passive system defines which parameter values are considered a failure by comparing the corresponding pdfs according to defined safety criteria. The main drawback in the last method devised by ENEA lies in the selection and definition of the probability distributions that describe the characteristic parameters, based mainly on subjective/engineering judgment.
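The load-capacity comparison underlying functional failure can be sketched with a minimal Monte-Carlo estimate of P(load > capacity); the Gaussian pdfs and mass-flow values below are illustrative assumptions:

```python
import random

def functional_failure_probability(n_samples=50_000, seed=0):
    """Functional failure in the load-capacity sense: the 'capacity' is
    the actual circulating mass flow, the 'load' is the minimum flow
    required by the safety function. Both are uncertain, so each carries
    its own pdf; failure = required flow exceeds actual flow."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        actual_flow = rng.gauss(6.0, 1.0)    # kg/s, system state (assumed pdf)
        required_flow = rng.gauss(4.0, 0.5)  # kg/s, safety requirement (assumed pdf)
        if required_flow > actual_flow:
            failures += 1
    return failures / n_samples
```

For two Gaussians this probability also has the closed form Φ(−(μc − μl)/√(σc² + σl²)), which is a useful cross-check on the sampled estimate.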


Each of the three methods devised by ENEA shares with the main RMPS approach the issues related to the uncertainties affecting the system performance assessment process. With respect to RMPS, a greater simplicity is introduced, although to the detriment of the significance of the approaches themselves: this is particularly relevant as far as the approach based on hardware component failures is concerned.

Finally, a different approach is followed in the APSRA (Assessment of Passive System ReliAbility) methodology developed by BARC (Bhabha Atomic Research Centre, India), see [15]. In this approach, a failure surface is generated by considering the deviation of all those critical parameters which influence the system performance. Then, the causes of deviation of these parameters are found through root diagnosis. It is assumed that the deviation of such physical parameters occurs only due to a failure of mechanical components such as valves, control systems, etc. The probability of failure of the system is then evaluated from the failure probabilities of these mechanical components through classical PSA treatment. Moreover, to reduce the uncertainty in code predictions, BARC foresees the use of in-house experimental data from integral facilities as well as separate-effect tests.
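The failure-surface idea can be sketched as follows; the deterministic flow model, the parameter ranges and the component failure probabilities are illustrative assumptions, not APSRA data:

```python
def flow_model(noncondensable_pct, valve_opening_pct):
    """Toy deterministic model: natural-circulation flow (kg/s) degraded
    by non-condensable gas and by a partially open isolation valve."""
    return 8.0 * (1.0 - noncondensable_pct / 10.0) * (valve_opening_pct / 100.0)

def failure_surface(flow_criterion=4.0):
    """Scan deviations of the critical parameters and record, for each
    grid point, whether the system fails; the boundary between the two
    regions approximates the failure surface."""
    grid = []
    for nc in range(0, 11):           # non-condensable fraction, %
        for vo in range(0, 101, 10):  # valve opening, %
            fails = flow_model(nc, vo) < flow_criterion
            grid.append(((nc, vo), fails))
    return grid

# Each failing deviation is then attributed to component faults and
# quantified via classical PSA (numbers below are made up):
p_valve = 1e-3        # P(isolation valve fails partially open)
p_gas_ingress = 5e-4  # P(non-condensable gas ingress)
p_system = 1.0 - (1.0 - p_valve) * (1.0 - p_gas_ingress)  # independent union
```

This reflects the APSRA logic of pushing the probabilistic treatment onto component failures while the T-H model is used deterministically to build the surface.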

With reference to the two most relevant methodologies (i.e. RMPS and APSRA): RMPS consists mainly in the identification and quantification of parameter uncertainties in the form of probability distributions, to be propagated directly through a T-H code or indirectly by using a response surface; the APSRA methodology strives to assess not the uncertainty of the parameters but the causes of their deviation from nominal conditions, which can lie in the failure of active or passive components or systems.

As a result, different approaches are used in the RMPS and APSRA methodologies. RMPS proposes to take into account, in the PSA model, the failure of a physical process. This problem is treated using a best-estimate T-H code plus an uncertainty approach. APSRA includes in the PSA model the failure of those components which cause a deviation of the key parameters resulting in a system failure, but does not take into account possible uncertainties on these key parameters. As a consequence, the T-H code is used in RMPS to propagate the uncertainties and in APSRA to build a failure surface. APSRA incorporates an important effort on the qualification of the model and the use of the available experimental data. These aspects were not studied in RMPS, given the context of the RMPS project.

The following table (Table 3) attempts to identify the main characteristics of the methodologies proposed so far with respect to some aspects, such as the development of deterministic and probabilistic approaches, the use of deterministic models to evaluate the system performance, the identification of the sources of uncertainties and the application of expert judgment.


It's worth noticing that these last two aspects are correlated, but they will be treated separately.

**Table 3.** Main features of the various approaches

- The comparison between active and passive systems, mainly from a functional viewpoint.

All of these points are elaborated in the following, in an attempt to cover the entire spectrum of issues related to the topic, and to capture all the relevant aspects on which to concentrate and devote resources for fulfilling a significant advance.

**6.1. Uncertainties**

The quantity of uncertainties affecting the operation of T-H passive systems considerably affects the process devoted to their reliability evaluation within a probabilistic safety analysis framework, as recognized in [7].

These uncertainties stem mainly from the deviations of the natural forces or physical principles upon which the systems rely (e.g., gravity and density difference) from the expected conditions, due to the inception of T-H factors impairing the system performance or to changes of the initial and boundary conditions, so that the passive system may fail to meet the required function. Indeed, many uncertainties arise when addressing these phenomena, most of them being almost unknown due mainly to the scarcity of operational and experimental data; consequently, difficulties arise in performing a meaningful reliability analysis and in deriving credible reliability figures. This is usually designated as phenomenological uncertainty, which becomes particularly relevant when innovative or untested technologies are applied, eventually contributing significantly to the overall uncertainty related to the reliability assessment.

Actually there are two facets to this uncertainty, i.e., "aleatory" and "epistemic", which, because of their natures, must be treated differently. The aleatory uncertainty is that addressed when the phenomena or events being modelled are characterized as occurring in a "random" or "stochastic" manner, and probabilistic models are adopted to describe their occurrences. The epistemic uncertainty is that associated with the analyst's confidence in the prediction of the PSA model itself, and it reflects the analyst's assessment of how well the PSA model represents the actual system to be modelled. This has also been referred to as state-of-knowledge uncertainty, which is amenable to reduction, as opposed to the aleatory uncertainty which is, by its nature, irreducible. The uncertainties concerned with the reliability of passive systems are both stochastic, because of the randomness of phenomena occurrence, and of an epistemic nature, i.e. related to the state of knowledge about the phenomena, because of the lack of significant operational and experimental data.

For instance, as an initial step, the approach described in [16] allows identifying the uncertainties pertaining to passive system operation in terms of the critical parameters driving the modes of failure, such as, for instance, the presence of non-condensable gas, thermal stratification and so on. In this context the critical parameters are recognized as epistemic uncertainties.
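One common way to treat the two facets differently is a nested (double-loop) Monte Carlo: the outer loop samples the epistemic parameters (the state of knowledge), the inner loop samples the aleatory variability. Everything in this sketch - the fixed capacity, the load distributions and the sample sizes - is an illustrative assumption:

```python
import random

def nested_mc(n_epistemic=200, n_aleatory=1000, seed=0):
    """Outer loop: epistemic uncertainty on the mean load (reducible
    with more data). Inner loop: aleatory scatter of the load itself
    (irreducible). Returns min/median/max of the per-outer-sample
    failure-probability estimates."""
    rng = random.Random(seed)
    capacity = 10.0  # fixed failure threshold (assumed)
    estimates = []
    for _ in range(n_epistemic):
        mean_load = rng.uniform(6.0, 9.0)  # epistemic: true mean unknown
        failures = 0
        for _ in range(n_aleatory):
            load = rng.gauss(mean_load, 1.0)  # aleatory: random realization
            if load > capacity:
                failures += 1
        estimates.append(failures / n_aleatory)
    estimates.sort()
    return estimates[0], estimates[len(estimates) // 2], estimates[-1]
```

The dispersion across the returned estimates reflects the epistemic (state-of-knowledge) uncertainty, while each individual estimate already integrates over the aleatory scatter; collecting more data narrows the former but not the latter.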
