Section 3 Nuclear Technology

#### **Chapter 4**

## Perspective Chapter: Assessment of Nuclear Sensors and Instrumentation Maturity in Advanced Nuclear Reactors

*Thabit Abuqudaira, Pavel Tsvetkov and Piyush Sabharwall*

#### **Abstract**

In the last decade, 97% of the commercial nuclear reactors connected to the grid worldwide were Light Water Reactors (LWRs). LWRs are expected to remain the dominant type of nuclear reactor for the next few decades. Reliable and redundant safety systems are required in nuclear reactors to ensure safe operation and shutdown in abnormal conditions. These safety systems are actuated by the signals obtained from several sensors and instrumentation inside and outside the reactor core. In LWRs, these sensors and instrumentation have shown a high level of maturity with long operating experience. Ensuring the compatibility of these sensors and instrumentation with advanced nuclear reactors (Generation IV) is necessary. The compatibility of these contemporary technologies with advanced reactors was assessed by comparing the advanced reactors' environments with those of the currently operating reactors. In addition, the R&D needed for such technologies was highlighted. In comparison with the LWR environment, advanced reactor environments are expected to feature elevated temperatures, a fast neutron spectrum, and a harsh corrosion environment. It was demonstrated that R&D is required mainly for fixed in-core nuclear sensors and instrumentation, while it is not a priority for ex-core nuclear sensors and instrumentation.

**Keywords:** instrumentation and control, advanced reactors, reactor applications, extreme conditions, sensors

#### **1. Introduction**

According to the Power Reactors Information System (PRIS) developed by the International Atomic Energy Agency (IAEA) [1], Light Water Reactors (LWRs) account for 84% of the nuclear reactors in operation worldwide. A total of 85% of these reactors are Pressurized Water Reactors (PWRs), whereas the remaining 15% are Boiling Water Reactors (BWRs). The remaining reactors in operation worldwide are Pressurized Heavy Water Reactors (PHWRs), Light Water Graphite-moderated Reactors (LWGRs), Gas-Cooled Reactors, Fast Breeder Reactors (FBRs), and High-Temperature Gas-cooled Reactors (HTGRs). A pie chart showing the currently operating reactors classified by their type is shown in **Figure 1**.

Sixty-eight commercial nuclear reactors have been connected to the grid worldwide in the last decade. Sixty-four of these reactors were LWRs (PWRs). In the previous 5 years, construction began on 30 nuclear reactors; 29 of them were also LWRs (PWRs). A pie chart showing the nuclear reactors under construction in the last 5 years classified by their type is shown in **Figure 2**.

In 2016, the first Generation III+ reactor was connected to the grid. Generation III+ reactors have dominated newly constructed reactors, and it is anticipated that they will remain in the lead for the next few decades. Generation IV reactors are expected to be deployed commercially in at least two decades [2]. Given this continued dominance of LWRs, specifically PWRs, improvement and development have to continue for such reactors. On the other hand, the technological advancement in the next generation of reactors requires parallel progress in the R&D efforts for all components needed for their operation.

A nuclear reactor requires various redundant sensors and instrumentation systems to operate safely and reliably. These sensors and instrumentation vary in function and in the degree of precision necessary to fulfill their purpose. The information gained from these sensors and instrumentation is used as input for the reactor's control and safety systems to indicate transients and deviations [3]. Thus, in addition to their role in control systems, sensors and instrumentation play a role as safety systems.

**Figure 1.** *The number of currently operating reactors by type (as of the end of 2022).*

**Figure 2.** *The number of reactors under construction by type (as of the end of 2022).*

#### *Perspective Chapter: Assessment of Nuclear Sensors and Instrumentation Maturity in Advanced… DOI: http://dx.doi.org/10.5772/intechopen.113403*

Since the first experiment with a nuclear reactor (Chicago Pile-1), sensors and instrumentation have been required to monitor and ensure the safe operation of the reactor. Afterward, the number of sensors and instrumentation needed in each newly constructed nuclear reactor tended to increase, mainly to improve the safety of the newly operated reactors. Research and Development (R&D) in sensors and instrumentation gained additional attention following the accident at Three Mile Island Unit-2 (TMI-2). Investigations were performed by several National Laboratories, private companies, and consultants for the Department of Energy (DOE) to assess the ability of instrumentation and electric components to accomplish their design function [4]. The accident provided an opportunity to evaluate sensor and instrumentation performance in severe conditions [5].

In LWRs, sensors and instrumentation went through a long path of R&D until they reached a high level of maturity. However, advanced reactor designs pose new challenges to these sensors and instrumentation, mainly due to the differences in the operating environments of the reactors. Some sensors and instrumentation, particularly in-core ones, will operate in conditions significantly different from those in the current generation of nuclear reactors. High-temperature and corrosive environments can impose significant challenges on the design of sensors and instrumentation. In addition, these operating conditions could also impact the reliability and accuracy of sensors and instrumentation. As a result, R&D programs are emerging with the aim of making commercially available sensors and instrumentation specifically designed for these types of reactors.

Advanced reactor designs are experiencing rapid technological advances. For this reason, R&D in sensor and instrumentation technologies must be initiated. The United States Nuclear Regulatory Commission (U.S.NRC) informed Congress that research is needed to develop and assess the performance of new types of sensors and instrumentation for advanced reactors [6]. The Department of Energy, Office of Nuclear Energy (DOE-NE) has established several projects to accelerate R&D in sensor and instrumentation technologies in the United States. One of these programs is the Light Water Reactor Sustainability (LWRS) program, which focuses on improving the economics and reliability and sustaining the safety of the U.S. fleet of Nuclear Power Plants (NPPs) [7]. Modernization of instrumentation technology is one of the program's technical areas of R&D.

Another program is the Nuclear Energy Enabling Technologies (NEET) program. Advanced Sensors and Instrumentation (ASI) is one of its elements [8]. ASI focuses on research to develop new types of sensors and instrumentation systems that will likely be used in existing and advanced reactors. The program began in 2011. R&D activities have been effectively supported through directed research and competitive awards, enabling significant advancements in the field.

Nuclear sensors and instrumentation are one of the main types of reactor sensors and instrumentation required to operate the reactor safely and efficiently. Various detectors measure neutron flux and reactor power in the currently operating reactors. It is expected that a new generation of such types of detectors is required to be able to work in advanced reactor environments. Thus, R&D has to be continued to modify and improve the currently available set of sensors and instrumentation and to develop the next generation of such technologies.

The first step toward developing new types of nuclear sensors and instrumentation is to assess the maturity of the currently available sensors and instrumentation. Identifying the compatibility of the currently used sensors and instrumentation with the advanced reactor environment and analyzing the technological gaps and the needed improvements in the available technology will help to guide the R&D toward the most crucial areas. Such an assessment can help to meet the requirements of a new NPP license. For these reasons, an approach was developed to assess the R&D needs in nuclear sensors and instrumentation technologies tailored to future designs of nuclear reactors.

#### **2. Nuclear reactor environments**

Many reactor designs have been proposed since the first critical reactor, Chicago Pile 1. However, not all of them could achieve the stage of commercial operation. Few designs have reached the experimental stage, while many stayed as concepts. Differences in nuclear reactor designs led to significant differences in the operating environments. Thus, adopting a new reactor design into commercial operation faces many challenges, mainly due to the lack of operating experience with the new reactor environment. Understanding the differences in the operating conditions between the commercially available nuclear reactors and the proposed advanced reactor types will help to analyze the problems that sensors and instrumentation would face in advanced reactor environments.

#### **2.1 Current and advanced reactor designs**

In this study, the proposed Generation IV reactors provided by the Generation IV International Forum were adopted for comparison with the currently available LWRs [9]. These are the Supercritical Water Reactor (SCWR), the Very High-Temperature gas-cooled Reactor (VHTR), the Sodium-cooled Fast Reactor (SFR), the Lead-cooled Fast Reactor (LFR), the Gas-cooled Fast Reactor (GFR), and the Molten Salt Reactor (MSR). This study did not include a comparison of the operating conditions of the currently operational PHWR and LWGR designs.

#### *2.1.1 Pressurized water reactor*

LWRs are thermal neutron spectrum reactors. They have gained their name because light "ordinary" water is used as a coolant and moderator [10]. Extensive experience has been accumulated in operating LWRs, including Pressurized Water Reactors (PWRs) and Boiling Water Reactors (BWRs). In the United States, all 93 currently operating reactors are LWRs: 62 of them are PWRs, and 31 are BWRs [11]. This operating experience has given enough maturity to nearly all components required for reactor operation.

Since the first PWR, "Shippingport," constructed by Westinghouse [12], PWRs have dominated the worldwide nuclear reactor market. For this reason, PWRs have been considered a reference "benchmark" in developing new reactor designs. In general, newly proposed advanced reactor designs are compared with PWRs to assess their advantages and benefits. In addition, most of the proposed LWR designs for the future technology of Small Modular Reactors (SMRs) are PWRs [13]. Several vendors of PWRs are available worldwide. The AP1000 reactor design was selected as a reference for the operating environment of PWRs [14]. All other PWR designs have quite similar operational parameters.


#### *2.1.2 Boiling water reactor*

Unlike in PWRs, light water in BWRs is permitted to boil within the reactor core. The steam from this boiling process is sent directly to the turbine without passing through a heat exchanger. Thus, BWRs operate at a lower pressure than PWRs (around 7 MPa). The maximum coolant temperature in the reactor core is around 288°C. In addition, BWRs have a lower power density than PWRs.

Future designs for BWRs are significantly fewer than those for PWRs. In the United States, two BWR designs have been approved by the U.S.NRC [15]: the Advanced Boiling Water Reactor (ABWR) and the Economic Simplified Boiling Water Reactor (ESBWR). Furthermore, BWRs are considered for Small Modular Reactor (SMR) designs, such as the BWRX-300 design by GE Hitachi in the United States [16]. The development of these advanced BWRs resulted from many years of design improvements, drawing from the valuable experience gained through early reactor designs. For the comparison of operating parameters, the ABWR design was selected.

#### *2.1.3 Supercritical water reactor*

The Supercritical Water-Cooled Reactor (SCWR) is a high-temperature, high-pressure, light-water-cooled reactor that operates above the thermodynamic critical point of water (374°C, 22.1 MPa) [17]. Even though it uses light water as a coolant, the SCWR is considered an evolution of both PWRs and BWRs. It is one of the advanced reactor designs proposed by the Generation IV International Forum (GIF). Unlike LWRs, some SCWR designs operate with a fast neutron spectrum [18]. The United States is interested in SCWRs with a thermal neutron spectrum.

Currently, there is no SCWR under construction; the reactor is still in the pre-conceptual design phase. Several reactor designs have been proposed worldwide. However, this type of reactor is expected to have meager chances for commercial development compared to other advanced reactor types. Nevertheless, the share of research papers published on this reactor type during 2004–2018 among different reactor types was around 9% [19]. The reactor parameters of the proposed U.S. thermal-spectrum SCWR are used as a reference in this study for comparison purposes [20].
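The defining criterion above, operation beyond the thermodynamic critical point of water, can be sketched as a simple check. The critical-point constants come from the text; the example operating points are illustrative assumptions, not figures from any specific design.

```python
# Minimal sketch: classify a water coolant state against the
# thermodynamic critical point of water cited above (374 degC, 22.1 MPa).
T_CRIT_C = 374.0    # critical temperature of water, degC
P_CRIT_MPA = 22.1   # critical pressure of water, MPa

def is_supercritical(temp_c: float, pressure_mpa: float) -> bool:
    """Water is supercritical when both T and P exceed the critical point."""
    return temp_c > T_CRIT_C and pressure_mpa > P_CRIT_MPA

# Illustrative operating points (assumed values):
print(is_supercritical(325.0, 15.5))   # PWR-like primary loop -> False
print(is_supercritical(500.0, 25.0))   # SCWR-like core outlet -> True
```

This is why a single-phase, high-enthalpy coolant is possible in the SCWR: above the critical point there is no boiling transition at all.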

#### *2.1.4 Very high-temperature gas-cooled reactor*

Various experimental and commercial gas-cooled reactor designs have been developed around the world. These designs are Dragon (United Kingdom), Peach Bottom (United States), Arbeitsgemeinschaft Versuchsreaktor - AVR (Germany), Fort St. Vrain (United States), Thorium High-Temperature Reactor - THTR (Germany), High-Temperature Engineering Test Reactor—HTTR (Japan), High-Temperature Reactor—HTR-10 (China), and the Advanced Gas-cooled Reactor—AGR (United Kingdom). This operating experience has given enough maturity to nearly all components required for reactor operation. However, advanced gas-cooled reactors would exhibit more challenging operating environments.

The VHTR is an evolutionary design building on the previously mentioned reactors. It would have operating temperatures of around 1000°C. It uses helium as a coolant, with the fuel being tiny coated fuel particles embedded in a graphite matrix [21]. Compared with LWRs, the reactor power density is significantly lower, while the coolant temperature is much higher. This type of reactor has attracted attention since the produced heat can be used for other industrial purposes. In addition, the operating pressure considered for VHTRs is comparable to that of BWRs.

#### *2.1.5 Sodium-cooled fast reactor*

In the SFR, molten sodium metal is used as the reactor coolant, enabling the reactor to have a fast neutron spectrum. Many safety and economic features were the reason for selecting this design as one of the proposed Generation IV nuclear reactors. Several SFRs have been built around the world; currently, two reactors in operation in Russia are SFRs. TerraPower plans to construct the Traveling Wave Reactor (TWR), a sodium-cooled reactor, in the United States.

The technology for SFRs has mainly been established through previous fast reactor programs. The SFR is one of the most promising reactor technologies for closing the nuclear fuel cycle, as its design supports a breeding/burning uranium-plutonium fuel cycle [22]. The proposed Generation IV SFR was used as a reference to be compared with other reactors' operating conditions [23]. The main feature of such a reactor is its high power density in comparison with conventional LWRs. In addition, sodium's high boiling point allows the reactor to operate at atmospheric pressure.

#### *2.1.6 Lead-cooled fast reactor*

A fast neutron spectrum with lead or lead/bismuth as the coolant characterizes the LFR. Lead as a coolant has several advantages compared to other reactor coolants. The coolant's high boiling point provides a large safety margin, preventing coolant boiling and maintaining core integrity during accidents. This allows many simplifications in the reactor design, improving the overall economic performance. Compared with LWRs, the reactor operates at atmospheric pressure, and the molten lead coolant is at higher temperatures.

Several countries, including China, Russia, the United States, Sweden, Korea, and Japan, are actively developing various LFR concepts. The LFR system may be the first Generation IV reactor to be deployed; construction of the first lead-cooled fast reactor began in 2021.

#### *2.1.7 Gas-cooled fast reactor*

The GFR features a fast neutron spectrum with helium as the coolant [24]. Thus, in addition to the increased operating temperature of reactors such as the VHTR, it has a fast neutron spectrum like the SFR. The GFR design aims to operate safely and reliably, achieving sustainability, economic competitiveness, and enhanced proliferation resistance and security [25]. In addition, being a fast reactor will allow the use of recycled plutonium, which will play a crucial role in the long-term strategy for nuclear power development [26].

The proposed Generation IV GFR design is a 2400 MWth plant operating with a coolant outlet temperature of 850°C. The power density of the core is comparable to that in PWRs, while the coolant pressure is comparable to the BWRs. GFR technology is still in the R&D stage. Moreover, the design represents a significant step toward developing large-scale, high-temperature gas-cooled fast-spectrum reactors.


#### *2.1.8 Molten salt reactor*

MSRs come in two distinct designs regarding fuel geometry. In the first design, the fuel and the coolant are mixed in one homogeneous fluid; in the second, the fuel is confined to traditional fuel rods. These two designs differ somewhat in their operating environments. This study uses the salt-fueled design as a reference for the comparison. The fuel movement, salt chemistry, higher power density, and elevated temperatures differ significantly from those of the currently operating reactors.

In the United States, liquid-fuel nuclear reactors attracted attention in the 1950s with the initiation of the Molten Salt Reactor Program (MSRP) at Oak Ridge National Laboratory (ORNL). Two MSRs were built and operated: the Aircraft Reactor Experiment (ARE) and the Molten Salt Reactor Experiment (MSRE) [27, 28]. The ARE was successfully operated for over 4 days, while the MSRE operated for 4.5 years [29]. Nevertheless, no commercial deployment of this reactor type was achieved.

In the United States, research is being conducted on molten salt reactor development and deployment. Construction is underway for a Molten Salt Research Reactor (MSRR) located on the Abilene Christian University (ACU) campus in Texas. The primary purpose of the reactor will be to facilitate research and provide training opportunities within the MSR environment.

#### **2.2 Comparison of reactor environments**

Coolant temperature and pressure, neutron spectrum, and corrosive environment are the most important parameters regarding the compatibility of new sensor and instrumentation technologies with an advanced reactor environment. Although in-core sensors and instrumentation may experience temperatures slightly higher than coolant outlet temperatures, the outlet temperature indicates the temperature range in which the sensors and instrumentation should be able to operate. Knowledge of the operating pressure is necessary to prevent instrumentation failures, damage, or compromised performance. Knowledge of the type of neutron spectrum in a particular reactor indicates the radiation damage and material degradation over time. The coolant material describes the corrosion environment that the sensors and instrumentation may face. A summary of the operating parameters that may pose challenges to the currently available sensors and instrumentation in current and advanced reactors is listed in **Table 1**.

**Table 1** shows that advanced nuclear reactors will operate at temperatures higher than what sensors and instrumentation have experienced serving in currently available nuclear reactors. In addition, higher levels of power density, fast neutron spectrum, and the corrosive environment in these reactors may pose a challenge for the available sensors and instrumentation.

In terms of neutron flux, the overall flux level will be comparable for all reactor designs; only the type of neutron spectrum, the activation of the coolant material, and the associated gamma-ray field may play a role in the compatibility of the sensors and instrumentation. Studying each parameter's impact on the compatibility of the sensors and instrumentation can indicate the effect of this parameter across all advanced reactor designs. The proposed advanced reactor types are compared with LWRs in terms of the operating parameters that may affect the sensors and instrumentation. This comparison is shown in **Table 2**.


#### **Table 1.**

*Comparison of operating parameters for different types of reactors.*

*\* Designs provided by the Generation IV International Forum.*

#### **Table 2.**

*Advanced reactors' environments in comparison with LWRs.*

*\* Some SCWRs are available with a fast-spectrum design.*

It can be concluded that advanced reactor environments will differ from the currently available LWRs in the increased operating temperature (from the range of 300–350°C up to around 1000°C in the VHTR), the harsh chemical environment caused by the use of molten salts, liquid sodium, and lead, and the exposure to fast neutrons in fast reactors. In addition, high pressure is not a significant concern in advanced reactor designs; only the SCWR design has a higher operating pressure than the currently available reactor designs.

#### **3. Assessment methodology**

The previous section discussed the main differences in the operating environments of several nuclear reactors. Commercially available nuclear reactors have attained extensive operating experience with various sensors and instrumentation. Thus, the first step in discovering the problems that sensors and instrumentation would face in advanced reactors is to study the differences in the operational environments between the commercially available reactors and the proposed advanced reactor designs.

An approach was adopted to investigate nuclear sensor and instrumentation technology's R&D needs. This approach analyzed the available nuclear sensors and instrumentation used in current reactor designs. The technological gaps and the needed improvements were highlighted. Then, the compatibility of these sensors and instrumentation with the advanced reactor environments was investigated.

Sensors and instrumentation that would not face any problems in the advanced reactor environment and that are commercially available can be used in advanced reactor designs. In this case, the instrumentation technology can be assessed as mature, and R&D is not a priority. If the sensor and instrumentation technology is compatible with the advanced reactor environment but is not commercially available, an R&D program is essential to commercialize the technology. The commercial availability of several types of sensors and instrumentation can be investigated by searching for the technology among companies of the Nuclear Suppliers Association (NSA). The NSA comprises approximately 73 companies specializing in manufacturing and distributing services and products for nuclear energy users [30].

Sensors and instrumentation facing problems in the advanced reactor environment require significant R&D programs to fill the technological gaps. In either case where R&D is needed based on the defined approach, the availability of initiated research projects on the sensor and instrumentation technology was further investigated. The investigation was based on the availability of research projects supported by the DOE NEET ASI program. Although the availability of funded projects on a particular type of technology does not mean that the technology is mature, it indicates that sensors and instrumentation with no current research projects have priority for future R&D programs. The adopted approach for determining the need for R&D programs for specific instrumentation is summarized in **Figure 3**.
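The assessment logic described above can be sketched as a small decision function. The category labels and field names below are shorthand of my own choosing, not wording taken from the chapter or from Figure 3 itself.

```python
# Illustrative sketch of the maturity-assessment logic described above:
# compatible + commercial -> mature; compatible but not commercial ->
# commercialization R&D; incompatible -> significant R&D, with priority
# going to technologies that have no active research project.
from dataclasses import dataclass

@dataclass
class Technology:
    name: str
    compatible_with_advanced_env: bool  # survives temperature/spectrum/corrosion
    commercially_available: bool        # e.g., offered by an NSA member company
    has_active_rnd_project: bool        # e.g., funded under DOE NEET ASI

def assess(tech: Technology) -> str:
    """Return the R&D recommendation for one sensor/instrumentation technology."""
    if tech.compatible_with_advanced_env:
        if tech.commercially_available:
            return "mature: R&D not a priority"
        return "compatible but not commercial: R&D needed to commercialize"
    if tech.has_active_rnd_project:
        return "incompatible: R&D ongoing"
    return "incompatible, no active project: priority for future R&D"

# Hypothetical example inputs, for illustration only:
spnd = Technology("SPND", compatible_with_advanced_env=False,
                  commercially_available=True, has_active_rnd_project=True)
print(assess(spnd))
```

The nesting mirrors the two questions asked of each technology: environmental compatibility first, then commercial availability or project status.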

In the future development of this approach, the Technology Readiness Levels (TRLs) approach can be used to assess the maturity of a particular instrumentation technology. NASA first proposed TRLs in the 1980s. They comprise nine levels, starting from TRL 1 (the lowest, where the basic principles of a specific technology are observed and reported) and ending at TRL 9 (the highest, where the existing system has been proven successful in mission operations) [31].
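For reference, the full scale can be encoded as a lookup. Only the TRL 1 and TRL 9 wordings come from the text above; the intermediate descriptions are paraphrased from the widely used NASA definitions and should be checked against the cited source before reuse.

```python
# NASA Technology Readiness Levels, paraphrased from the standard NASA
# definitions; only TRL 1 and TRL 9 wordings appear in the chapter itself.
TRL = {
    1: "Basic principles observed and reported",
    2: "Technology concept and/or application formulated",
    3: "Analytical and experimental proof of concept",
    4: "Component validation in a laboratory environment",
    5: "Component validation in a relevant environment",
    6: "System/subsystem prototype demonstration in a relevant environment",
    7: "System prototype demonstration in an operational environment",
    8: "Actual system completed and qualified through test and demonstration",
    9: "Actual system proven through successful mission operations",
}

def trl_description(level: int) -> str:
    """Look up the description of a TRL; valid levels are 1 through 9."""
    if level not in TRL:
        raise ValueError("TRL must be an integer from 1 to 9")
    return TRL[level]

print(trl_description(9))
```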

#### **4. Nuclear sensors and instrumentation**

Various sensors and instrumentation are required in a nuclear reactor to operate it safely and efficiently. These sensors and instrumentation serve as the suppliers of information to reactor operators about the reactor status. Based on their deployment location, they are needed in the reactor core or in other nuclear reactor components. They include nuclear instrumentation required to monitor the neutron flux and the associated reactor power, process instrumentation to measure non-nuclear parameters such as the temperature distribution in the core and the pressure and flow of the coolant, radiation instrumentation systems for radiation monitoring and dose level measurements, and other instrumentation for specific purposes [32]. A summary of nuclear reactor sensor and instrumentation groups is shown in **Figure 4**.

Process instrumentation is crucial in monitoring and controlling various parameters within a nuclear reactor. It involves using sensors, detectors, and other measurement devices to gather data and provide real-time information about the reactor's temperature, pressure, flow, and other essential parameters. Radiation instrumentation systems include the sensors and instrumentation required to monitor the site radiation level (area radiation monitoring), gas effluents, and main steam pipes. These instrumentation systems are essential to protect personnel from radiation exposure, and they can indicate abnormal reactor conditions. Special instrumentation systems are reactor-type specific; such sensors and instrumentation are required based on the reactor type, such as the boric acid concentration instrumentation in PWRs.

#### **Figure 3.** *Sensors and instrumentation maturity assessment approach.*

#### **Figure 4.**

*Sensors and instrumentation groups of a nuclear reactor.*


Nuclear sensors and instrumentation play a significant role in reactor control and safety. They measure the neutron flux and the associated reactor power level during various reactor conditions: the approach to criticality and reactor startup, intermediate power operation, and full-power operation [33]. In the AP1000 advanced pressurized water reactor designed by Westinghouse Electric Company, several types of neutron detectors are used to monitor the neutron flux from zero to 120% of full power [34].

In the currently operating reactors, various neutron detection systems are used. These sensors and instrumentation can be installed inside the reactor core (in-core) or outside the reactor core (ex-core) based on their design purpose. They provide diversity and redundancy in the obtained information about the reactor status. Fission chambers, ionization chambers, gamma thermometers, proportional counters, and self-powered neutron detectors (SPNDs) are among this nuclear instrumentation [35]. Furthermore, other nuclear instrumentation systems are used in specific reactor types, such as the aeroball measurement system used in EPR reactors. In addition, research is ongoing to apply new reactor flux and power measurement methods, such as Cerenkov monitoring systems for in-core power measurement. The typical flux ranges covered by nuclear instrumentation in LWRs are illustrated in **Figure 5** [33]. Note that the operating flux ranges of the nuclear instrumentation should overlap.

In nuclear reactors, a group of fixed in-core and ex-core neutron detectors monitors the neutron flux and reactor power; in AP1000 reactors, both fixed in-core and ex-core neutron detectors are used [36]. The choice of detector location is made based on the detector's purpose and its ability to withstand the harsh environment in the reactor core. These in-core and ex-core nuclear instruments should be able to measure the neutron flux over the entire range, from the source start-up range up to the full-power range.

In-core instrumentation should be able to operate at the operating conditions of the nuclear reactor, whereas ex-core instrumentation is exposed to lower temperature, pressure, and neutron and gamma flux values. In commercial nuclear reactors, self-powered neutron detectors and gamma thermometers are the in-core nuclear instrumentation, while ionization chambers and proportional counters are the ex-core nuclear instrumentation. Fission chambers are used as either in-core or ex-core nuclear instrumentation. The classification of nuclear instrumentation based on location relative to the reactor core is shown in **Figure 6**.

**Figure 6.**

*Classification of nuclear instrumentation by deployment location.*

Regarding the movability of the installed nuclear sensors and instrumentation, they can be fixed in the reactor core or movable on-demand systems. The aeroball measurement system is an example of a system used as on-demand in-core nuclear instrumentation.

#### **4.1 Self-powered neutron detectors**

Self-Powered Neutron Detectors (SPNDs) are the most common detectors used in commercial nuclear power plants [37]. These detectors attained this wide usage due to their advantages over other detector types in reactor operating environments. A simple structure, tiny volume, no need for a power supply, and high temperature and radiation resistance are among the advantages of this type of detector [38].

In a nuclear reactor, several SPNDs are positioned axially within each of several instrumentation channels inside the core. Thus, SPNDs can continuously measure the neutron flux at fixed positions inside the reactor core, from low power levels up to full power. However, they are unsuitable for startup-range measurements. A SPND comprises four main components: emitter, insulator, collector (sheath), and lead wire [39]. A schematic drawing of a SPND is shown in **Figure 7**.

The emitter material is selected to have a high neutron absorption cross-section and to undergo beta decay directly or indirectly following neutron absorption. The insulator's function is to provide thermal and electrical resistance; it must have an optimized thickness so that the electrons produced in the emitter can reach the sheath (collector). The sheath works as a collector of the electrons from the emitter, and the lead wire transmits the electrical current. A current meter reads the current value, which is proportional to the electrons emitted from the emitter material. After equilibrium, the registered electric current is proportional to the incident neutron flux from the reactor core. Thus, the reactor power is measured by measuring the output current of the SPND induced by the decay resulting from the neutron capture reactions [40].
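The proportionality between equilibrium current and flux can be illustrated with an order-of-magnitude estimate: at equilibrium the beta-emission rate equals the capture rate, so the current scales as I ~ e·N·σ·φ·V·f, where f is the fraction of betas actually collected. All numerical values below (cross-section, flux, geometry, collection fraction) are illustrative assumptions, not figures from the chapter.

```python
# Order-of-magnitude sketch: equilibrium current of a rhodium SPND.
E_CHARGE = 1.602e-19        # C, elementary charge
N_A = 6.022e23              # 1/mol, Avogadro's number

rho_rh = 12.4               # g/cm^3, rhodium density
A_rh = 102.9                # g/mol, rhodium molar mass
sigma_th = 145e-24          # cm^2, assumed 103Rh thermal capture cross-section
phi = 1.0e13                # n/cm^2/s, assumed in-core thermal flux
emitter_volume = 1.0        # cm^3, assumed emitter volume
f_collected = 0.1           # assumed fraction of betas contributing to current

n_density = rho_rh / A_rh * N_A                             # Rh atoms per cm^3
capture_rate = n_density * sigma_th * phi * emitter_volume  # captures per second
current = capture_rate * f_collected * E_CHARGE             # amperes

print(f"{current:.2e} A")   # a current on the order of microamperes
```

The microampere scale of the result is why SPND signals need no amplifying power supply but do require low-noise current measurement.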

The selection of the emitter material determines the response of the SPND, which depends on the radioactive decay of the radionuclide produced by neutron capture in the emitter. SPNDs can accordingly be classified as prompt-response or delayed-response detectors. In emitters such as rhodium (103Rh) and vanadium (51V), the electrons are produced by the beta decay of the activation product formed after neutron absorption. The disadvantage of using such

*Perspective Chapter: Assessment of Nuclear Sensors and Instrumentation Maturity in Advanced… DOI: http://dx.doi.org/10.5772/intechopen.113403*

**Figure 7.** *Schematic drawing of a SPND.*

emitters is that the activation products decay with characteristic half-lives (0.7 and 3.74 minutes, respectively) [41]. Thus, the registered neutron flux signal is delayed, and such detectors are named delayed-response SPNDs.
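The practical consequence of these half-lives can be quantified: after a step change in flux, an idealized delayed-response SPND approaches its new equilibrium as 1 − e^(−λt). A short sketch (assuming a pure single-decay response with no prompt component):

```python
import math


def t_to_fraction(half_life_min, fraction):
    """Time (minutes) for a delayed SPND signal to reach `fraction` of
    its equilibrium value after a step change in flux: 1 - exp(-lam*t)."""
    lam = math.log(2) / half_life_min
    return -math.log(1.0 - fraction) / lam


# Rhodium (0.7 min half-life) vs vanadium (3.74 min) emitters
t90_rh = t_to_fraction(0.7, 0.90)   # roughly 2.3 minutes
t90_v = t_to_fraction(3.74, 0.90)   # roughly 12.4 minutes
```

A vanadium SPND thus needs on the order of ten minutes to register 90% of a flux step, which is why delayed-response SPNDs cannot track fast transients.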

Prompt SPNDs were designed to overcome this delay and obtain an instantaneous measurement of the neutron flux in the reactor core. Following neutron absorption, the emitter nucleus emits gamma rays, which interact with the emitter material to produce electrons. The time from neutron absorption to electron emission is extremely short, so a prompt measurement of the neutron flux is obtained with such materials. Cobalt and cadmium are the main emitter materials in prompt SPNDs.

The selection of the emitter material for a SPND has been an important research topic. The critical factors in the selection process are the half-life of the resulting radionuclide (and thus the detector response), the neutron absorption cross-section (and therefore the burnup rate of the emitter in the core), the natural abundance of the isotope of interest and its ease of manufacture, and its adaptability to the high-temperature environment.

From the above description of the main components of the SPND, it is clear that several components must be designed specifically to operate in advanced reactors. One issue with SPNDs is their limited lifetime, which is governed primarily by degradation of the insulator material and, to a lesser extent, by burnup of the emitter material over time. In addition, the housing material must be compatible with the reactor coolant so that it does not undergo corrosion.

The maximum working temperature of SPNDs depends on the emitter and housing material properties; typically, it is around 550°C [42]. SPNDs have not been used in high-temperature reactor environments or fast reactors (beyond light water reactor environments). However, deploying SPNDs in advanced reactors can improve the accuracy of power distribution measurements and hence the operating margins, such as the Loss-of-Coolant Accident (LOCA) and Departure from Nucleate Boiling Ratio (DNBR) margins [43]. The commercially available technology for LWRs is expected to have no problem in a high-pressure environment; however, further study of the effects of high pressure on SPNDs is required.

Commercially available SPNDs cannot be used in fast reactors [44]. This limitation stems from the drop in the emitter material's absorption cross-section at higher neutron energies. For this reason, research is ongoing to select emitter materials that provide adequate sensitivity in fast reactors [45, 46]. In molten salts, SPNDs require modifications of the housing material (exterior sheath) to be compatible with the salt [47].
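The magnitude of this sensitivity loss can be illustrated with an idealized 1/v cross-section model. Real capture cross-sections deviate from 1/v because of resonances, so this is a trend estimate only, not a design calculation:

```python
import math


def sigma_one_over_v(sigma_thermal_b, e_ev, e_thermal_ev=0.0253):
    """Idealized 1/v capture cross-section (barns) scaled from its
    thermal value at 0.0253 eV; ignores resonances and threshold effects."""
    return sigma_thermal_b * math.sqrt(e_thermal_ev / e_ev)


# Rh-103: ~145 b thermal capture; at a fast-spectrum energy of ~100 keV
# the 1/v estimate collapses by more than three orders of magnitude
ratio = sigma_one_over_v(145.0, 1.0e5) / 145.0  # ~5e-4
```

A thermal-optimized emitter thus loses roughly a factor of a thousand in reaction rate per atom in a fast spectrum, which is why alternative emitter materials are being investigated.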

Several projects funded by the NEET ASI program have recently initiated the development of SPNDs for high-temperature environments. However, R&D projects are still needed to modify the external housing material of SPNDs for use in chemically corrosive environments. In addition, the prospects for deploying SPNDs in fast reactors appear high, and R&D programs can accelerate these efforts.

#### **4.2 Gamma thermometers**

Reactor power measurement sensors are based on detecting neutrons, gammas, or both. Gamma thermometers measure the reactor power by measuring the gamma-ray flux in the reactor core. In addition to that, gamma thermometers can be used to get information about core cooling in nearby channels. Furthermore, they can be used outside the reactor vessel.

Gamma thermometers were first deployed in the early 1950s in heavy water-moderated reactors. They were adopted because they are simple in design, very rugged, and insensitive to neutron fluence. Unlike SPNDs, they do not deplete with time and can therefore operate longer than SPNDs and other in-core neutron detectors. Gamma thermometers are not suitable for startup or low power levels; at intermediate power levels and up to full power, however, the gamma-ray flux is proportional to the reactor power level, and gamma thermometers can be used.

The design structure of gamma thermometers is similar to the thermocouples used in temperature measurement. A gamma thermometer is composed of a rod material, gap, and sheath [48]. The rod (inner body) is usually made of stainless steel. A gap filled with gas surrounding the inner body is used for heat insulation. The gas is selected with proper thermal conductivity, depending on the level of gamma-ray heating inside the core. For low levels of gamma-ray heating (< 2 W/g), xenon is a proper gap material. Argon is used for intermediate gamma-ray heating levels (~ 10 W/g), while helium is used for high levels of gamma-ray heating (~20 W/g) [49].
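The gas-selection guidance above can be expressed as a simple rule. The 15 W/g boundary below is an assumed interpolation between the quoted ~10 and ~20 W/g levels, not a design value:

```python
def gap_gas(gamma_heating_w_per_g):
    """Select the insulating gap gas from the local gamma heating level,
    following the heating-level guidance quoted in the text; the 15 W/g
    threshold is an assumed interpolation, not a published boundary."""
    if gamma_heating_w_per_g < 2.0:
        return "xenon"   # low heating (< 2 W/g): lowest conductivity
    elif gamma_heating_w_per_g < 15.0:
        return "argon"   # intermediate heating (~10 W/g)
    return "helium"      # high heating (~20 W/g): highest conductivity
```

The logic follows the physics: the hotter the rod runs, the more conductive the gap gas must be to keep the measured temperature difference in a usable range.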

When inserted in the reactor core, the incident gamma-ray flux heats the rod, and the heat is conducted along the sensor axis. The gas-filled gap interrupts this heat flow. Two thermocouples measure the temperature difference between the sheath and the steel rod regions; this difference is proportional to the gamma flux incident on the sheath and, thus, to the power of nearby fuel rods. Several thermocouples can be used along the gamma thermometer to obtain the axial power distribution, and using gamma thermometers in several channels provides the radial power distribution. A simple schematic drawing of a gamma thermometer is shown in **Figure 8**.
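Assuming the linear relationship described above between temperature difference and local gamma heating, the measurement inversion is a one-line calibration. The calibration constant K below is hypothetical, chosen only to make the arithmetic concrete:

```python
def gamma_heating_from_dt(delta_t_c, k_cal_c_per_w_per_g):
    """Invert the (assumed linear) gamma-thermometer calibration:
    the sheath-to-rod temperature difference is taken as
    Delta_T = K * q, with q the local gamma heating in W/g."""
    return delta_t_c / k_cal_c_per_w_per_g


# Hypothetical calibration constant K = 4 degC per W/g: a 20 degC
# reading then implies 5 W/g of local gamma heating, itself
# proportional to nearby fuel-rod power
q = gamma_heating_from_dt(20.0, 4.0)
```

In practice K is determined per sensor by calibration (for example, with built-in electrical heaters), but the linear inversion above captures the measurement principle.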

Gamma thermometers have yet to achieve widespread, long-term deployment within U.S. commercial nuclear power plants [50]. In-plant tests as part of a joint research program were initiated to use gamma thermometers at commercial nuclear power plants in the United States and Japan [51]. In the United States, research supported by the NEET ASI program is ongoing to develop an optical fiber-based gamma thermometer [52]. Deployment of gamma thermometers in nuclear power plants can enhance safety and core monitoring while reducing other systems' operational and maintenance costs. In the Economic Simplified Boiling Water Reactor (ESBWR),


**Figure 8.** *Schematic drawing of a gamma thermometer.*

gamma thermometers used as in-core monitoring sensors are considered an essential advance over previous BWRs [53].

Gamma thermometers have not been demonstrated at the higher temperatures of VHTRs, and a significant redesign is needed to function there [54]. However, given their simple design, they are promising candidates for higher-temperature operation. For MSRs, a customization effort would be required to develop a version suitable for the reactor environment. High pressure can exert mechanical stress on the components of gamma thermometers; this stress may cause deformation, cracking, or failure of structural elements and may thus reduce component lifetime. R&D programs must be initiated to study the effect of a high-pressure environment on gamma thermometers.

#### **4.3 Fission chambers**

Fission chambers are a type of ionization chamber. Unlike SPNDs, fission chambers require the application of an electric voltage between the cathode and the anode. The chamber is filled with a gas (commonly argon) at high pressure. Fission chambers can be used as in-core or ex-core nuclear instrumentation. A schematic drawing of a fission chamber is shown in **Figure 9**.

By placing a fission chamber in a neutron field, incident neutrons are absorbed by the fissile material lining the fission chamber housing. Usually, the housing material is steel or aluminum, while the fissile material is uranium highly enriched in the 235U isotope. Using highly enriched uranium increases the fission reaction rate and thus enhances the ionization current.

Upon 235U fission, two ionized fission products are produced with a total kinetic energy of around 160 MeV. The two fragments travel in opposite directions, and one of them travels into the gas chamber. For this reason, the thickness of the fissile coating should be optimized to allow the fragment to reach the gas without unduly reducing the fission rate [55]. The fission

**Figure 9.** *Schematic drawing of a fission chamber.*

fragment will ionize the filled gas atoms along its path, producing positive ions and negative electrons. An external voltage is applied to the cathode and anode, causing the positive ions to move toward the cathode while the electrons are collected on the anode. The current produced from this process is proportional to the fission rate in the fissile material and, thus, to the reactor core's neutron flux [56].
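An order-of-magnitude sketch of this current follows, assuming the fragment entering the gas deposits roughly half the total fragment kinetic energy (~80 MeV) and that all charge is collected; real chambers read lower because of recombination and partial energy deposition:

```python
E_CHARGE = 1.602e-19   # coulombs per ion pair collected
W_ARGON_EV = 26.0      # approximate average energy per ion pair in argon


def chamber_current(fission_rate_per_s, e_dep_mev=80.0):
    """Saturation-region current, assuming the fragment entering the
    gas deposits ~80 MeV (about half the total fragment kinetic
    energy) and all charge is collected; recombination is ignored."""
    ion_pairs_per_fission = e_dep_mev * 1.0e6 / W_ARGON_EV
    return fission_rate_per_s * ion_pairs_per_fission * E_CHARGE


# 1e9 fissions/s in the coating yields a fraction of a milliampere
i = chamber_current(1.0e9)
```

Each fission liberates millions of ion pairs, which is why fission chambers deliver a robust, easily measurable current signal.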

The depletion of the fissile material is the leading cause of the limited operating lifetime of fission chambers used as in-core nuclear instrumentation. The sensitivity of a fission chamber decreases with accumulated neutron fluence. Combining fissile and fertile materials in the coating can mitigate this problem.
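The compensation idea can be sketched with a toy two-isotope model. The cross-sections are thermal-spectrum ballpark values, and the bred 239Pu is lumped in with no further depletion, so the model overstates the benefit; it only illustrates the mechanism:

```python
import math


def fissile_atoms(t_s, n235_0, n238_0, flux,
                  sig_f235=585e-24, sig_c238=2.7e-24):
    """Toy two-isotope inventory model (cross-sections in cm^2 are
    thermal-spectrum ballpark values): 235U depletes exponentially,
    while 238U captures breed new fissile atoms, lumped in here with
    no further depletion."""
    n235 = n235_0 * math.exp(-sig_f235 * flux * t_s)
    bred = n238_0 * (1.0 - math.exp(-sig_c238 * flux * t_s))
    return n235 + bred


ONE_YEAR_S = 3.15e7
# After one year at 1e14 n/cm^2/s, a pure-235U coating loses most of
# its inventory, while an assumed 100:1 fertile admixture compensates
pure = fissile_atoms(ONE_YEAR_S, 1.0, 0.0, 1.0e14)
mixed = fissile_atoms(ONE_YEAR_S, 1.0, 100.0, 1.0e14)
```

This is the rationale behind so-called regenerative chamber coatings: fertile captures replace depleted fissile atoms, flattening the sensitivity curve over the chamber's life.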

Commercial fission chambers are available for the operating environment of LWRs. However, no commercial fission chamber is available for high-temperature reactor environments (above 550°C). Locating the chambers ex-core reduces the temperature to levels that commercially available fission chambers can withstand. The housing material and the internal gas pressure determine the operating temperatures of fission chambers [57].

High pressure may affect the sealing and containment of a fission chamber. The fission chamber must be appropriately sealed to prevent pressure leakage, which could compromise its functionality. Specialized sealing materials and techniques capable of withstanding high-pressure differentials may be required. There is no available research on the effect of a high-pressure environment on fission chambers.

A fission chamber designed to operate at temperatures of 550°C and beyond is called a High-Temperature Fission Chamber (HTFC). Previous research programs have developed such chambers operating at up to 800°C; however, they are not commercially available. Developing HTFCs and demonstrating their sensitivity and their mechanical and thermal robustness in high-temperature environments will pave the way for high-temperature reactors. Several DOE NEET ASI projects have been initiated to design, fabricate, and demonstrate HTFCs for HTRs, MSRs, and fast reactors.

Micro-pocket fission chambers are pancake-style, highly miniaturized fission chambers that employ sealed alumina plates as their structural backings and coatings of uranium or thorium as their neutron-sensitive element [58]. They are a promising technology for high-temperature environments but are not yet commercially available. Several DOE NEET ASI-funded projects were initiated to develop them.

#### **4.4 Ionization chambers**

In PWRs, the core power density distribution can be measured using ex-core detectors. These detectors can be installed outside the reactor vessel in several axial locations. The measured neutron flux in these detectors is then used to get the axial power distribution in the reactor core.

Ionization chambers are neutron detectors that can be used outside the reactor vessel for neutron flux measurements. They are gas-filled radiation detectors that can be designed to measure several types of ionizing radiation, and their principle of operation is similar to that of fission chambers. However, instead of a 235U coating, ionization chambers rely on 3He or 10B to produce the charged particles.

Compensated and uncompensated ionization chambers are used as ex-core nuclear instrumentation. Uncompensated ionization chambers serve as the power-range monitoring system, while compensated ionization chambers are used for the intermediate power range. A compensated ionization chamber comprises two distinct chambers: one coated with boron and one uncoated. The coated chamber is sensitive to both gammas and neutrons, whereas the uncoated chamber detects only gamma rays. The detector output is the difference between the two chamber currents, which represents the neutron response of the compensated ionization chamber. The compensated ionization chamber thus cancels the current resulting from gamma-ray interactions. This compensation technique proves valuable in the intermediate power range, ensuring that the detector response is due only to the neutron flux.
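The compensation logic reduces to a current subtraction. A minimal sketch with hypothetical current values, assuming the two chambers have matched gamma responses:

```python
def compensated_current(i_coated_a, i_uncoated_a):
    """Neutron-only signal of a compensated ionization chamber: the
    boron-coated chamber responds to neutrons plus gammas, the
    uncoated chamber to gammas only; subtraction cancels the gamma
    contribution (assuming matched chamber geometries)."""
    return i_coated_a - i_uncoated_a


# Hypothetical readings: coated 8.0e-6 A (n + gamma), uncoated
# 3.0e-6 A (gamma only) -> 5.0e-6 A of neutron-induced current
i_neutron = compensated_current(8.0e-6, 3.0e-6)
```

The subtraction matters most in the intermediate range, where the gamma background would otherwise be comparable to the neutron signal.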

Several types of compensated and uncompensated ionization chambers are used in LWRs and high-temperature reactors. Thus, no significant challenge is expected in deploying the currently available technology in the advanced high-temperature reactor environment. In a fast reactor environment, ex-core moderation or larger ionization chambers are required to achieve adequate measurement sensitivity.

In MSRs, ionization chambers are expected to be compatible, and thus no problem is anticipated in deploying such detectors [59]. This compatibility follows from their use as ex-core nuclear instrumentation. In fact, in the only two MSRs built, no neutron instrumentation was employed in the core; only ex-core detectors were used. However, for the proposed advanced MSRs, employing in-core neutron detectors that can withstand their harsh environment is necessary for high-sensitivity measurements.

#### **4.5 Proportional counters**

Proportional counters are gas-filled detectors used as a source-range neutron flux monitoring system. They are used as ex-core nuclear instrumentation during the first criticality or for start-up after a very long shutdown. Their principle of operation is quite similar to that of the ionization chamber; however, the voltage applied between the cathode and anode is higher than in ionization chambers.

Some proportional counters are based on the (n, α) reaction of 10B in boron trifluoride (BF3) gas filling the detector's chamber. Boron-lined proportional counters have been used in the High-Temperature Engineering Test Reactor (HTTR) at temperatures up to 600°C, and proportional counters have served as neutron sensors in gas-cooled reactors [60]. Thus, no temperature problem is expected in high-temperature reactor environments; commercially available proportional counter technology is mature enough that R&D is not a priority for this detector type.

Proportional counters can be used to detect thermal or fast neutrons. For fast-spectrum reactors, proportional counters are less efficient because the 10B capture cross-section drops with increasing neutron energy. Larger detectors can compensate for this drop in cross-section; in addition, helium or another low-atomic-mass gas can be used in the chamber to moderate the neutrons. In molten salt reactors, proportional counters are expected to operate without problems since they are deployed outside the corrosive environment of the reactor core.

#### **4.6 On-demand nuclear instrumentation**

Some nuclear instrumentation systems operate on demand; that is, the system is not in continuous operation and may not be installed in fixed core positions [61]. One example is the Aeroball Measurement System (AMS) used in the Evolutionary Power Reactor (EPR). The system measures the neutron flux at predetermined three-dimensional positions in the reactor core. In addition, the AMS is used to calibrate SPNDs and to predict the power density distribution in the reactor core [62].

The AMS is composed of several steel balls that contain vanadium. These balls move through dedicated vertical tubes in the reactor core, driven by nitrogen gas pressure. In the core, the thermal neutron flux irradiates the aeroballs, and many of the 51V atoms capture neutrons, producing 52V, which has a half-life of 3.74 minutes. 52V then undergoes beta decay, accompanied by the emission of a gamma ray of about 1.43 MeV.

After irradiation, the aeroballs are transported to waiting positions and then to a measuring system consisting of several planar silicon detectors that measure the gamma-ray intensity from the decay of the activated atoms. From these measured intensities, software computes the relative neutron flux values in the reactor core and, thus, the power distribution [63]. Because of the short half-life of the produced 52V, this measurement process can be repeated every 15 minutes, allowing for the reuse of the aeroballs [64].
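The 15-minute cycle follows directly from the 52V half-life: after roughly four half-lives, the residual activity is only a few percent, small enough not to bias the next measurement. A quick check:

```python
V52_HALF_LIFE_MIN = 3.74  # half-life of 52V in minutes


def v52_activity_fraction(t_min):
    """Fraction of the end-of-irradiation 52V activity remaining
    after t minutes of decay."""
    return 0.5 ** (t_min / V52_HALF_LIFE_MIN)


# A 15-minute cycle is ~4 half-lives, so the previous activation has
# decayed to roughly 6% before the balls are re-irradiated
residual = v52_activity_fraction(15.0)
```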

Even though such on-demand nuclear instrumentation systems may be installed in the reactor core, they face a different operating environment from other in-core nuclear sensors and instrumentation because they are not exposed to the harsh in-core environment for the same duration. Therefore, they may be treated like ex-core nuclear sensors and instrumentation. In addition, unlike fission chambers or gamma thermometers, such systems are used mainly in certain reactor designs and are not widespread. For these reasons, on-demand nuclear instrumentation systems are not discussed further.

#### **5. Overall assessment**

The compatibility of the nuclear sensors and instrumentation with the advanced reactor environments was assessed. For in-core nuclear instrumentation, it is clear


that some of the nuclear sensor and instrumentation technologies used in current LWRs may not be suitable for the advanced reactor environment. SPNDs are expected to have issues operating in fast-spectrum, high-temperature, and corrosive environments. Gamma thermometers are not ready for use in high-temperature or corrosive environments. Fission chambers are compatible with the fast-spectrum and high-temperature environments; however, the technology is not commercially available for high temperatures, and the housing material must be modified to operate in a corrosive environment.

Overall, ex-core nuclear instrumentation is expected to face no significant issues when deployed in advanced nuclear reactors, mainly because the neutron flux, temperature, and corrosiveness outside the reactor core are far milder than in-core. Thus, the ex-core environment in advanced reactors will be essentially the same as in currently operating reactors. Ex-core nuclear instrumentation alone may be sufficient for reactor operation and safety. On the other hand, deploying in-core nuclear sensors and instrumentation may increase measurement sensitivity and, thus, overall safety. A cost/benefit analysis is required to assess the need to deploy nuclear sensors and instrumentation in the in-core advanced reactor environments and to decide whether R&D is necessary to deploy the technology.

The compatibility of the above-discussed in-core and ex-core nuclear sensors and instrumentation with various reactor environments is summarized in **Tables 3** and **4**, respectively. Since LWRs have a long operating experience, the comparison was performed with the environment in LWRs.


#### **Table 3.**

*Compatibility of the in-core nuclear instrumentation with various reactor environments.*


**Table 4.**

*Compatibility of the ex-core nuclear instrumentation with various reactor environments.*

Although in-core nuclear instrumentation is expected to function at high pressure, the high-pressure environment may still impact it. High pressure can compromise structural integrity through the mechanical stresses it imposes; over the operating time, this stress may cause deformation, cracking, or failure of structural elements. Additionally, high pressure may affect the sealing and containment of gas-filled instrumentation; instruments must be appropriately sealed to prevent pressure leakage, which could compromise their functionality. Furthermore, high pressure may influence the calibration standards used for pressure-sensitive instruments. For these reasons, the compatibility and durability of nuclear sensors and instrumentation in high-pressure reactor environments such as the SCWR must be investigated.

**Table 5** summarizes the technology gaps, commercial availability, and the R&D needs for the above-discussed in-core nuclear instrumentation in currently operating PWRs, BWRs, and generation IV reactors.

To assess the priorities in future research projects, the availability of funded research projects by the DOE NEET ASI program in advanced nuclear sensors and instrumentation was investigated [65–70]. An overview of the DOE NEET ASI program-funded projects on nuclear sensors and instrumentation is listed in **Table 6**.

It can be observed that all funded nuclear sensor and instrumentation projects targeted in-core sensors and instrumentation. This trend supports the study's conclusion that R&D priorities lie with in-core nuclear reactor sensors and instrumentation. Nearly all research projects focused on the high-temperature environment; thus, R&D projects are still required to study in-core nuclear instrumentation in fast-spectrum, high-pressure, and corrosive environments.



#### **Table 5.**

*Summary of in-core nuclear instrumentation R&D needs in current and advanced reactors.*

#### **6. Summary**

Advanced nuclear reactors (Generation IV) will have operating environments that differ from those of currently operating nuclear reactors, mainly in their higher operating temperatures and pressures, corrosive environments, and fast neutron spectra. Developing sensor and instrumentation systems capable of operating in these advanced reactors will play a significant role in their licensing and demonstration efforts.


#### **Table 6.**

*DOE NEET ASI program-funded research projects.*

It was demonstrated that in-core reactor sensors and instrumentation should be prioritized in future R&D projects. R&D is required mainly for SPNDs, gamma thermometers, and fission chambers. SPNDs must be able to operate at higher temperatures, in a fast neutron spectrum, and in a molten salt reactor environment. Gamma thermometers require R&D to work in a fast-spectrum reactor and in a molten salt environment. Fission chambers employed as in-core instrumentation require R&D to commercialize the available technology for high-temperature reactor environments, and further R&D if they are to be used in a molten salt environment. The durability and structural integrity of all this in-core instrumentation have to be further investigated in a high-pressure environment.

Ex-core nuclear instrumentation is expected to face the same operational environments as in currently operating LWRs; thus, R&D for ex-core nuclear instrumentation is not a priority. On-demand in-core nuclear instrumentation must be evaluated based on its operating time in the reactor environment. In addition, some types of nuclear instrumentation are not used in the in-core reactor environment. Employing such sensors and instrumentation in advanced reactors may increase measurement accuracy and, thus, reactor safety; however, a cost/benefit analysis is necessary for such R&D projects.


In the United States, the Department of Energy has established many research programs to fill the technological gaps and assess advanced reactors' needs for a new generation of sensors and instrumentation. Many R&D projects have been initiated and supported by the NEET ASI program in the last few years. For this reason, where R&D is required based on the defined approach, the availability of initiated research projects on the sensor and instrumentation technology was further investigated. The funded projects of the last few years have focused on in-core nuclear sensors and instrumentation. However, R&D projects for gamma thermometers at high temperatures are still required, and if SPNDs are planned for use in MSRs, R&D projects are necessary to design SPNDs compatible with the corrosive environment.

The assessment identifies the highest-priority R&D needs in nuclear sensor and instrumentation technologies. Filling these technology gaps will enable advanced reactors to follow the proven safe operational path of previously operated nuclear reactors.

#### **Author details**

Thabit Abuqudaira1, Pavel Tsvetkov2\* and Piyush Sabharwall3

1 Texas A&M University, USA

2 Department of Nuclear Engineering, Texas A&M University, College Station, Texas, United States

3 Idaho National Laboratory, Idaho, USA

\*Address all correspondence to: tsvetkov@tamu.edu

© 2023 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### **References**

[1] International Atomic Energy Agency. Power Reactor Information System. Vienna, Austria: International Atomic Energy Agency. 2022. Available from: https://pris.iaea.org/pris/ [Accessed: October 18, 2022]

[2] Generation IV International Forum. When Will Gen IV Reactors Be Built? Paris, France: Generation IV International Forum; 2022. Available from: https:// www.gen-4.org/gif/jcms/c\_41890/faq-2 [Accessed: October 19, 2022]

[3] International Atomic Energy Agency. Nuclear Power Plant Instrumentation and Control: A Guidebook. Vienna, Austria: International Atomic Energy Agency; 1984

[4] Meininger R. Three Mile Island Technical Information and Examination Program Instrumentation and Electrical Summary Report. Washington D.C., United States: Department of Energy; 1985

[5] Rempe J, Knudson D. TMI-2—A Case Study for PWR Instrumentation Performance during a Severe Accident [Online]. 2013. Available from: http://www.inl.gov

[6] Nuclear Regulatory Commission. Report to Congress: Advanced Reactor Licensing. Washington D.C., United States: United States Nuclear Regulatory Commission; 2012

[7] U.S. Department of Energy-Office of Nuclear Energy. Light Water Reactor Sustainability (LWRS) Program. Washington D.C., United States: U.S. Department of Energy-Office of Nuclear Energy; 2022. Available from: https://www.energy.gov/ne/nuclear-reactor-technologies/light-water-reactor-sustainability-lwrs-program [Accessed: October 27, 2022]

[8] U.S. Department of Energy-Office of Nuclear Energy. Nuclear Energy Enabling Technologies (NEET). Washington D.C., United States: U.S. Department of Energy-Office of Nuclear Energy; 2022. Available from: https://www.energy.gov/ne/nuclear-energy-enabling-technologies-neet [Accessed: July 24, 2022]

[9] Locatelli G, Mancini M, Todeschini N. Generation IV nuclear reactors: Current status and future prospects. Energy Policy. 2013;**61**:1503-1520. DOI: 10.1016/j. enpol.2013.06.101

[10] Lamarsh J, Baratta A. Introduction to Nuclear Engineering. 3rd ed. Addison-Wesley. Upper Saddle River, New Jersey: Prentice Hall; 2001

[11] United States Nuclear Regulatory Commission. Power Reactors. Washington D.C., United States: United States Nuclear Regulatory Commission; 2022. Available from: https://www.nrc. gov/reactors/power.html [Accessed: October 23, 2022]

[12] Rippon S. History of the PWR and its worldwide development. Energy Policy. 1984;**12**(3):259-265

[13] Subki MH. Water cooled small modular reactors (integral PWR and BWR). In: Encyclopedia of Nuclear Energy. New York, United States: Elsevier; 2021. pp. 694-710. DOI: 10.1016/b978-0-12-819725-7.00208-7

[14] Westinghouse. International Reactor Innovative and Secure (IRIS) Plant Overview. Washington D.C., United States: Westinghouse; Nuclear Regulatory Commission; 2002


[15] Ray HB. U.S. options for licensing a new commercial power plant. In: Encyclopedia of Nuclear Energy. New York, United States: Elsevier; 2021. pp. 168-174. DOI: 10.1016/ b978-0-12-409548-9.12159-1

[16] Hamon DA. Boiling water reactors. In: Encyclopedia of Nuclear Energy. New York, United States: Elsevier; 2021. pp. 214-235. DOI: 10.1016/ B978-0-12-819725-7.00027-1

[17] Wu P, Ren Y, Feng M, Shan J, Huang Y, Yang W. A review of existing SuperCritical water reactor concepts, safety analysis codes and safety characteristics. In: Progress in Nuclear Energy. Vol. 153. New York, United States: Elsevier Ltd; 2022. DOI: 10.1016/j. pnucene.2022.104409

[18] Cai J, Renault C, Gou J. Supercritical water-cooled reactors. Science and Technology of Nuclear Installations. 2014;**2014**:548672. DOI: 10.1155/2014/548672

[19] Wang J, Wang Q, Ding M. Review on Neutronic/thermal-hydraulic coupling simulation methods for nuclear reactor analysis. Annals of Nuclear Energy. 2020;**137**:107165. DOI: 10.1016/j. anucene.2019.107165

[20] Rahman MM, Dongxu J, Jahan N, Salvatores M, Zhao J. Design concepts of supercritical water-cooled reactor (SCWR) and nuclear marine vessel: A review. In: Progress in Nuclear Energy. Vol. 124. New York, United States: Elsevier Ltd; 2020. DOI: 10.1016/j. pnucene.2020.103320

[21] Tsvetkov PV, Lewis TG, Alajo AB, Ii DEA. VHTR-based systems for autonomous co-generation applications. Nuclear Engineering and Design. 2010;**240**:2451-2457. DOI: 10.1016/j.nucengdes.2010.05.053

[22] Scherr J, Tsvetkov P. Reactor design strategy to support spectral variability within a sodium-cooled fast spectrum materials testing reactor. Annals of Nuclear Energy. 2018;**113**:15-24. DOI: 10.1016/j.anucene.2017.10.049

[23] Abram T, Ion S. Generation-IV nuclear power: A review of the state of the science. Energy Policy. 2008;**36**(12):4323-4330. DOI: 10.1016/j. enpol.2008.09.059

[24] Hatala B. Gas cooled fast reactor system (GFR). In: Encyclopedia of Nuclear Energy. New York, United States: Elsevier; 2021. pp. 545-552. DOI: 10.1016/ b978-0-12-409548-9.12207-9

[25] Tsvetkov P. Gas-cooled fast reactors (GFRs). In: Handbook of Generation IV Nuclear Reactors. New York, United States: Woodhead Publishing, Elsevier; 2023. pp. 167-172. DOI: 10.1016/ B978-0-12-820588-4.00016-5

[26] Abuqudaira TM, Stogov YV. Possibilities of better utilization of MOX fuel in VVER type reactors by optimizing neutron spectrum. Journal of Physics: Conference Series. 2020;**1689**:012023. DOI: 10.1088/1742-6596/1689/1/012023

[27] Bettis ES, Schroeder RW, Cristy GA, Savage HW, Affel RG, Hemphill LF. The aircraft reactor experiment - design and construction. Nuclear Science and Engineering. 1957;**2**(6):804-825. DOI: 10.13182/nse57-a35495

[28] Prince BE, Ball SJ, Engel JR, Haubenreich PN, Kerlin TW. Zero-Power Physics Experiments on the Molten-Salt Reactor Experiment. ORNL-4233; Oak Ridge, Tennessee: Oak Ridge National Laboratory; 1968

[29] Serp J et al. The molten salt reactor (MSR) in generation IV: Overview and perspectives. Progress in Nuclear Energy. 2014;**77**:308-319. DOI: 10.1016/j.pnucene.2014.02.014

[30] NSA. Nuclear Suppliers Association. Available from: https://nuclearsuppliers.org/ [Accessed: July 19, 2022]

[31] NASA. Technology Readiness Level. Available from: https://www.nasa.gov/directorates/heo/scan/engineering/technology/technology_readiness_level [Accessed: July 19, 2022]

[32] Hashemian HM. Nuclear power plant instrumentation and control. In: Tsvetkov P, editor. Nuclear Power - Control, Reliability and Human Factors. London, United Kingdom: IntechOpen; 2011

[33] Knoll G. Radiation Detection and Measurement. 4th ed. Hoboken, New Jersey, United States: Wiley; 2010

[34] Westinghouse Electric Company LLC. Chapter 7: Instrumentation and Controls, AP1000 Design Control Document. Westinghouse Electric Company LLC. Washington D.C., United States: Nuclear Regulatory Commission; 2011

[35] Pacific Northwest National Laboratory. Technical Readiness and Gaps Analysis of Commercial Optical Materials and Measurement Systems for Advanced Small Modular Reactors - PNNL-22622, Rev. 1. Richland, Washington: Pacific Northwest National Laboratory; 2013

[36] U.S.NRC. AP1000 Design Control Document, Chapter 4. Washington D.C., United States: U.S.NRC, Nuclear Regulatory Commission; 2011

[37] Sang Y et al. Development and verification of a simulation toolkit for self-powered neutron detector. Annals of Nuclear Energy. 2021;**150**:107784. DOI: 10.1016/j.anucene.2020.107784

[38] Liu X, Wang Z, Zhang Q, Deng B, Niu Y. Current compensation for material consumption of cobalt self-powered neutron detector. Nuclear Engineering and Technology. 2020;**52**(4):863-868. DOI: 10.1016/j.net.2019.09.010

[39] Harrer JM, Beckerley JG. Nuclear Power Reactor Instrumentation Systems Handbook. Vol. 11973. Washington D.C., United States: U.S. Atomic Energy Commission, Nuclear Regulatory Commission; 1974

[40] Cui T, Yang Y, Xue H, Kuang H. A Monte-Carlo simulation method for the study of self-powered neutron detectors. In: Nuclear Instruments and Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment. Vol. 954. New York, United States: Elsevier B.V.; 2020. DOI: 10.1016/j.nima.2018.10.061

[41] U.S. Department of Energy. DOE Fundamentals Handbook Instrumentation and Control Volume 2 of 2. Vol. 2. Washington D.C., United States: U.S. Department of Energy; 1992

[42] Goodings A. Experience with high-temperature radiation detectors and cables for reactor instrumentation systems. In: Nuclear Power Plant Control and Instrumentation, Proceedings of a Symposium. Vienna: International Atomic Energy Agency; 1978. pp. 225-242

[43] Mourlevat J, Janvier D, Warren H. Industrial tests of rhodium self-powered detectors: The Golfech 2 experimentation. In: Core Monitoring for Commercial Reactors: Improvements in Systems and Methods. Workshop Proceedings, Stockholm, Sweden, October 4-5, 1999. Paris, France: OECD NEA; 2000. pp. 65-77

*Perspective Chapter: Assessment of Nuclear Sensors and Instrumentation Maturity in Advanced… DOI: http://dx.doi.org/10.5772/intechopen.113403*

[44] Goetz KC, Cetiner SM, Celik C. Development of a fast-spectrum self-powered neutron detector for molten salt experiments in the versatile test reactor. EPJ Web of Conferences. 2021;**253**:05006. DOI: 10.1051/epjconf/202125305006

[45] Verma V, Barbot L, Filliatre P, Hellesen C, Jammes C, Svärd SJ. Self powered neutron detectors as in-core detectors for sodium-cooled fast reactors. Nuclear Instruments and Methods in Physics Research. 2017;**860**:6-12. DOI: 10.1016/j.nima.2017.04.011

[46] Angelone M, Klix A, Pillon M, Batistoni P, Fischer U, Santagata A. Development of self-powered neutron detectors for neutron flux monitoring in HCLL and HCPB ITER-TBM. Fusion Engineering and Design. 2014;**89**(9-10):2194-2198. DOI: 10.1016/j.fusengdes.2014.01.077

[47] Holcomb DE, Kisner RA, Cetiner SM. Instrumentation Framework for Molten Salt Reactors [online]. 2018. Available from: www.osti.gov

[48] Birri A, Blue TE. Methodology for inferring reactor core power distribution from an optical fiber based gamma thermometer array. Progress in Nuclear Energy. 2020;**130**:103552. DOI: 10.1016/j.pnucene.2020.103552

[49] Van Nieuwenhove R, Vermeeren L. Nuclear heating measurements by gamma and neutron thermometers. EPJ Web of Conferences. 2020;**225**:04003. DOI: 10.1051/epjconf/202022504003

[50] Nuclear Regulatory Commission. Instrumentation in VHTRs for Process Heat Applications. Washington D.C., United States: Nuclear Regulatory Commission; 2010

[51] Raghavan R, Martin CL, Wirth AL, Itoh T, Goto Y, Arai R. Application of the gamma thermometer as BWR fixed in-core calibration system. In: Proceedings of the Specialists Meeting on In-core Instrumentation and Reactor Core Assessment, Mito City, Japan. Paris, France; 1996

[52] Birri A, Petrie CM, Blue TE. Parametric analysis of an optical fiber–based gamma thermometer for university research reactors using an analytic thermal model. Nuclear Technology. 2021;**207**(12):1865-1872. DOI: 10.1080/00295450.2020.1844532

[53] Theriault K. Boiling water reactors (BWRs). In: Kok K, editor. Nuclear Engineering Handbook. Boca Raton, Florida, United States: CRC Press; 2009. pp. 83-139

[54] Oak Ridge National Laboratory. HTGR Measurements and Instrumentation Systems - ORNL/TM-2012/107 [Online]. Oak Ridge, Tennessee, United States: Oak Ridge National Laboratory; 2012. Available from: http://www.osti.gov/contact.html

[55] Reilly D, Ensslin N, Smith H, Kreiner S, Unis E, Los Alamos National Laboratory (U.S.). Nuclear Regulatory Commission. Neutron detectors. In: Passive Nondestructive Assay of Nuclear Materials, US Department of Commerce. Washington D.C., United States: National Technical Information Service; 1991. pp. 379-406

[56] Coburn J, Luker SM, Parma EJ, DePriest KR. Modeling, calibration, and verification of a fission chamber for ACRR experimenters. In: EPJ Web of Conferences, EDP Sciences. Paris, France; 2016. DOI: 10.1051/epjconf/201610605001

[57] Lamphere R. Fission detectors. In: Marion J, Fowler J, editors. Fast Neutron Physics. Part 1. Techniques. New York: Interscience; 1960

[58] McGregor DS, Ohmes MF, Ortiz RE, Sabbir Ahmed ASM, Kenneth Shultis J. Micro-pocket fission detectors (MPFD) for in-core neutron flux monitoring. Nuclear Instruments and Methods in Physics Research A. 2005;**554**(1-3):494-499. DOI: 10.1016/j.nima.2005.06.086

[59] Oak Ridge National Laboratory. Instrumentation Framework for Molten Salt Reactors - ORNL/TM-2018/868 [Online]. Oak Ridge, Tennessee, United States: Oak Ridge National Laboratory; 2018. Available from: https://www.osti.gov

[60] Oak Ridge National Laboratory. Assessment of Sensor Technologies for Advanced Reactors - ORNL/TM-2016/337 R1 [Online]. Oak Ridge, Tennessee, United States: Oak Ridge National Laboratory; 2016. Available from: http://www.osti.gov/scitech/

[61] AREVA NP. U.S. EPR Nuclear Incore Instrumentation Systems Report. Paris, France: AREVA NP; 2006

[62] Dias AM, Silva FC. Determination of the power density distribution in a PWR reactor based on neutron flux measurements at fixed reactor in-core detectors. Annals of Nuclear Energy. 2016;**90**:148-156. DOI: 10.1016/j.anucene.2015.12.002

[63] Glasow PA. Aeroball system and energy-dispersive analysis: Important industrial applications of silicon detectors. Nuclear Instruments and Methods in Physics Research. 1984;**226**:17-25

[64] Konheiser J, Mueller SE, Seidl M. Study of the influence of water gaps between fuel assemblies on the activation of an aeroball measurement system (AMS). Annals of Nuclear Energy. 2020;**136**:107005. DOI: 10.1016/j.anucene.2019.107005

[65] U.S. Department of Energy. Advanced Sensors and Instrumentation Project Summaries. Washington D.C., United States: U.S. Department of Energy; 2016

[66] U.S. Department of Energy. Advanced Sensors and Instrumentation Project Summaries. Washington D.C., United States: U.S. Department of Energy; 2017

[67] U.S. Department of Energy. Advanced Sensors and Instrumentation Project Summaries. Washington D.C., United States: U.S. Department of Energy; 2018

[68] U.S. Department of Energy. Advanced Sensors and Instrumentation Project Summaries. Washington D.C., United States: U.S. Department of Energy; 2019

[69] U.S. Department of Energy. Advanced Sensors and Instrumentation Project Summaries. Washington D.C., United States: U.S. Department of Energy; 2020

[70] U.S. Department of Energy. Advanced Sensors and Instrumentation Project Summaries. Washington D.C., United States: U.S. Department of Energy; 2021

#### **Chapter 5**

## Perspective Chapter: PRA and Protective System Maintenance

*Ernie Kee and Martin Wortman*

#### **Abstract**

The processes used to manage protective system equipment failures as they relate to Probabilistic Risk Assessment (PRA) in the commercial nuclear power setting are reviewed. Efficacy of protection is governed by a maintenance policy that includes system modification, maintenance inter-arrivals as a function of time, and upset inter-arrivals as a function of time; nuclear power plant protective systems are maintained under such a policy. Observations described in this article include the impact of time-dependent activities associated with maintenance policy as they relate to endogenous and exogenous upset inter-arrival times. Methods for evaluating maintenance policy that rely on combinatorial logic, such as PRA, fault trees, or event trees, may lead to ineffective maintenance policy decision-making for protective system efficacy. Recommendations for maintaining effective protections, and connections to engineering maintenance practice and regulations, are made based on the implications of these observations. The importance of the issues described herein is that the relationship of design, maintenance, and repair policies must be properly understood and taken into account by process owners, operators, and investors, as well as regulators who specify and enforce protections in hazardous processes.

**Keywords:** PRA, protection, protective systems, maintenance efficacy, nuclear power

#### **1. Introduction**

On March 28, 1979, the United States commercial nuclear power program experienced its first major accident: the reactor core in Unit 2 at the Three Mile Island nuclear plant site in Harrisburg, Pennsylvania overheated and melted down. Even considering this accident, it can be said, based on the safety record of commercial nuclear power in the United States, that the Nuclear Regulatory Commission (NRC) has successfully produced a regulatory structure that effectively manages the risk of radioactive releases, especially from commercial nuclear power plants.<sup>1</sup> With the benefit of hindsight, this accident would help inform stakeholders to better manage risk from nuclear power plant accidents. Within the nuclear power industry, the Three Mile Island accident motivated development of the predictive modeling

<sup>1</sup> The President's Commission report "The Accident at Three Mile Island" states that the level of radioactivity released from the Three Mile Island accident was small (see Kemeny [1], p. 34).

methods and risk analytics commonly in use today. Thus, before exploring the details of reactor protection risk analytics, it is important to review key engineering insights gained following the Three Mile Island accident. Only then can we understand how these insights are either accommodated or fail to be accommodated in risk analysis methods such as PRA.

#### **1.1 The benefit of hindsight**

As described by Kemeny (see [1], p. 43) and Rogovin (see [2], vol. 1, p. 12), the core melt accident at Three Mile Island started when an operator tried to clear a plugged system that supplied water to the primary reactor heat removal system used in electricity production (see [3] for decay heat process details). The operator became involved because a valve installed to bypass the plugged system was not open ([1], pp. 47–48). Although the actions taken by the operator to clear the plugging were unsuccessful, the heat removal system was supplied with a backup protective system using a completely separate water supply. Unfortunately, the backup system's valves were inadvertently left shut, rendering it ineffective ([1], p. 47). The NRC requirements imposed through regulation, together with the design by the plant owners and investors, anticipated such a loss-of-cooling sequence, so a fourth separate protective system was supplied that would inject cooling water directly into the reactor core to keep it cool. Up to this point, the reactor system had begun heating up, causing a relief valve to open, as designed, to reduce pressure.<sup>2</sup> Unfortunately, the relief valve failed to close when the pressure dropped back to normal, a malfunction that went unrecognized by the operators. However, the fourth method provided to keep the reactor core from overheating started automatically and began cooling the reactor core. Thus far in the accident sequence, the status of the reactor core protections could be summarized as: (a) main water supply—*plugged up*, (b) bypass protection of main supply—*valve shut*, (c) third water supply system—*valve shut*, (d) reactor system relief valve—*stuck open*, (e) fourth cooling system—*working and cooling*.

Because the relief valve stayed open too long, the reactor pressure kept dropping and the water around the core began to expand as it boiled. Water from the reactor core was then pushed into a surge volume where the reactor system water level is measured. Although water was actually being continuously lost from the core through the stuck relief valve, the operators were under the impression, based on the surge volume level, that the reactor system was being filled up by the fourth (and final) cooling system, and they shut it off. At this point, the plant entered a state where it would lose the ability to prevent the reactor core from melting and releasing radioactive material. *An interesting detail is that even the malfunction of the relief valve sticking open was anticipated, and therefore a shutoff valve was provided; temperature measurements were put in place so the operators could tell whether the relief valve was stuck open and would know when the shutoff valve should be closed. The operators failed to close the valve even though the temperature measurement indicated they should* ([1], p. 46).

Although the several protections put in place would make it almost unimaginable that the reactor core could melt, the NRC required a final protective system designed to protect the public in accidents releasing radioactive material from the reactor core. This final protection was a building that could withstand a very high pressure burst and remain

<sup>2</sup> Even this overpressure protective system was backed up with separate overpressure relief valves in case the pressure had continued to rise.

effectively airtight. This building is called the "containment building" and it is the final barrier to release of radioactive material in the event of a serious accident.<sup>3</sup> Because the containment building was able to contain the radioactive materials until they could be properly managed, the public was not exposed to excessive radioactive material. The following considers what went wrong at Three Mile Island, what went right, and why?

#### *1.1.1 What went wrong?*

A complete description of the accident leading to core melt is given by Kemeny, but the main events could be summarized as: (a) the main cooling water supply was plugged up, (b) the backup system to keep the main water supply flowing was shut off, (c) the third backup cooling water supply system's valves were shut, (d) the reactor pressure relief system (itself a protection against overpressure) caused the reactor system to lose water due to the valve sticking open and, (e) the fourth backup water system was shut off due to factors such as incorrect data interpretation, unobserved data, or misunderstanding the thermodynamics of isenthalpic expansion.

Actions by the operator working on the main water supply system caused the main flow path to shut off (an "initiating event", see Sections 1.4, 3.2 and 5.3 and definition 5.2). The system that plugged up is added for chemical cleanup of the main water flow (see [1], p. 94). Although the possibility that the chemical cleanup system could clog up was anticipated and protection had been added, the protective system failed to work because its bypass valve was not open (see [1], pp. 47–48; [2], vol. 1). The backup system with its valves shut (third main cooling system) did not operate for at least 8 min after it was needed. However, this may not have directly made a difference in the core melt outcome (see [1], pp. 47, 91, 94). The reactor system relief valve opened at the right time but failed to close when the pressure had returned to normal; as a consequence, water continued to be released from the reactor system (see [1], pp. 28–29, 90–94). The fourth backup system that could have prevented the melt sequence started automatically and was working well, but it was shut off by the operators (see [1], pp. 28, 89, 91, 93–94).

#### *1.1.2 What went right?*

As the accident progressed to melt, radioactivity monitors inside the plant began to show increasing levels of radiation. The radiation levels, plus indications of increasing levels of water inside the reactor containment building, gave the operators indications that an abnormal condition was present and worsening. Other indications, although working properly, were not seen or were interpreted incorrectly by the operators, thereby allowing the sequence to progress to core melt. By properly interpreting indications and taking appropriate actions in response, the operators might have kept the event from progressing. Even though instrument indications were "pointing" them to the possibility that a dangerous event was in progress, the operators consistently failed to interpret critical indications correctly. It can be said that despite several backup systems and indications, the final and most important element that went right was that the NRC required a containment building acting as the final, in this case the fifth, barrier against radioactive material exposure to the public.

<sup>3</sup> The General Design Criteria require specific performance characteristics of containments. See, 36 FR 3256, 36 FR 3256, Feb. 20, 1971, as amended at 36 FR 12733, July 7, 1971; 41 FR 6258, Feb. 12, 1976; *43 FR 50163, Oct. 27, 1978*; 51 FR 12505, Apr. 11, 1986; *52 FR 41294, Oct. 27, 1987; 64 FR 72002, Dec. 23, 1999; 72 FR 49505, Aug. 28, 2007*.

#### *1.1.3 Why?*

The things that went wrong at Three Mile Island were mostly attributable to human errors of commission or errors of omission brought on by equipment breakdowns: (a) error of commission, the failed attempt to unplug the main water line, (b) error of omission, the failure to open the bypass valve, (c) error of omission, leaving the emergency water line valves shut, (d) error of commission, turning off the fourth protective system, (e) the reactor system relief valve stuck open and, (f) the main water supply plugged up. That is, the operators either took action that was wrong, an error of commission, or did not take action when required, an error of omission. These errors can be attributed to causes such as lack of training, lack of engineering insight, carelessness, or simply not following procedures. These kinds of errors are difficult to eliminate even when attempts are made to overcome them. For example, a second check of the required valve position would have been a good idea for the emergency valves; however, this method is not always effective.<sup>4</sup>

#### **1.2 Probability, reality, and maintenance policy**

One may be tempted to look back at the Three Mile Island accident and argue that the probability of the accident was extremely small. Acting on such temptations, that is, applying ex-ante probabilities to ex-post observations, is improper. The accident at Three Mile Island Unit 2 actually happened, rendering the ex-post probability irrelevant. Prior to the accident, some experts would have said that the probability was very small, possibly on the order of 1 chance in a million. Of the many scenarios anticipated in experts' analyses of an ex-ante probability, the exact scenario that unfolded was unanticipated, making their analyses irrelevant. Personnel misinterpreting indications, the actions taken due to the particular relief valve failure, and errors of commission were unanticipated *as a scenario*. The procedure for working on the chemical system could have required the operator to place the second protective system in service before starting the work, as noted by Kemeny on page 47 of his report. In fact, there are effectively limitless scenarios that may play out in the future. Solberg and Njå [4], in their article "Reflections on the ontological status of risk", state that only one of the many (truly infinite) possible scenarios can play out to produce a current "state of affairs":

*An important observation when we consider change or events is that when an event (a specified state of affairs) is manifested (when it happens) all other logically possible states of affairs are simultaneously excluded from manifestation. For this to be the case it would imply that there exists a whole range of possible future states of affairs, but only one of these would manifest at some point in time (the present).*

*(Solberg and Njå, 2012)*

The discussion above is meant to point out why the accident at Three Mile Island happened even though the NRC and the plant design engineers endeavored to put in place protective systems intended to prevent progression to core melt. Before the accident occurred, a concerned citizen might question the design engineer in a conversation like the following:

*Citizen, "What if the main cooling system gets plugged up?" Engineer, "We thought of that and put in a bypass valve".*

<sup>4</sup> See for example, https://www.nrc.gov/docs/ML1418/ML14188A495.pdf

*Perspective Chapter: PRA and Protective System Maintenance DOI: http://dx.doi.org/10.5772/intechopen.110049*

*Citizen, "What if the bypass valve doesn't work?"*

*Engineer, "We also thought of that possibility and added a separate system to pump water in. We also know that if this happens, the reactor system will heat up causing pressure to increase. But before you ask, we added relief valves for this possibility"*

*Citizen, "What if the relief valve doesn't work, will the system over pressurize?" Engineer, "We thought of that too and added extra safety valves that would prevent overpressure"*

*Citizen, "But even so, there is no more cooling system left" Engineer, "We put in yet another separate system to pump water in the reactor system just in case the other systems don't work"*

At this point the citizen might reasonably stop asking questions, satisfied the NRC and the plant engineers had really thought of everything that could go wrong *in the future*.

It is clear that when a substantial hazard is present in a technological system, many questions need to be asked and answered, and many scenarios considered in detail. At Three Mile Island, the particular hazard was the need to remove decay heat from the reactor system over a relatively long period of time. Even though many questions may be asked and answered, once a technological system is started, we take a leap into the unknown. No one can say for sure what future scenarios may play out. Some may lead to great loss, injury, or even death. The maximum level of harm that could result from various failures in a technological system is almost certainly known by the engineers who design it. On the other hand, unless large datasets are available, the numerical *probability value* at which such failures would occur cannot be known. Depending on the value of the technological system to the social welfare, those valuable enough to be allowed by law are most likely subject to regulation designed to protect citizens' safety and health.

#### **1.3 Protective systems**

Technological systems required by regulation are referred to as "protective systems". Such protective systems overlay technological systems operated for the purpose of maximizing profit. They differ from production systems in that they do not create products with a utilitarian purpose. Instead, protective systems function to reduce the probability that citizens would need to backstop losses from risks taken in owner and investor activities that exceed the investors' asset value. No one, not those maximizing profit, not the regulator, not the engineers, and not any independent scientific authority can know the probability or likelihood that a technological system in use will or will not cause harm with exactitude.

Protective systems add to the cost of goods and services. In so doing, they will generally balance the reduction of profit margins against the price citizens are willing to pay for the goods and services produced. In competitive markets, profit maximizers will look for opportunities to reduce costs imposed by protective systems. A subtlety created by the process of cost reduction is that profit maximizers will tend to look at all costs that go against their revenue including production costs in maintenance and operation. A reasonable question would be "does it matter to citizens if the maintenance and operational costs of the *production systems* are reduced if citizens require the regulator to make sure the *protective systems* are maintained up to standards?" The subtlety created by reducing costs in technological systems other than protective systems is that protective systems are required to protect against harms more often

than if the production systems were operating smoothly. Regarding the Three Mile Island accident, Kemeny points out that,

*Review of equipment history for the 6 months prior to the accident showed that a number of equipment items that figured in the accident had had a poor maintenance history without adequate corrective action. These included . . . the condensate polishers. (Kemeny, 1979)*

Failure of the condensate polishers at Three Mile Island triggered the sequence of events leading up to the core melting.<sup>5</sup> This "trigger" is commonly referred to as an "initiating event": the first event in a series of events that may end with unpleasant consequences. The importance of these initiating events is that the more often they occur, the more often the protective systems will be called upon to operate, and because their failure probability cannot be known, citizens should be concerned if the maintenance and operational costs of the production systems are reduced to the point that more triggers occur.

Whenever a protective system allows an initiating event to progress to harm, it is made obsolete by revision that addresses the root cause of its failure. The NRC made substantial changes to protective system requirements in the nuclear reactors it regulates following the Three Mile Island accident. No one can know the probability that a technological system with the potential for harm will, or will not, cause harm over its useful lifetime in the absence of substantial data sets where harm has occurred. This well-known principle is classically described by Cardano and Wilks [5] and more formally by Bernoulli [6].<sup>6</sup> As technological systems are operated, the protective systems that overlay them should be revised as necessary in light of new information, to the point that the technological systems are judged to be less harmful than the harms they are designed to overcome. Because protective systems add cost to the production of goods and services produced by technological systems, there is a balance, driven largely by market forces, to be struck among the cost of production, profit margin, and protection.
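The dependence on "substantial data sets" can be made concrete with a standard binomial sampling estimate. The sketch below is purely illustrative; the per-demand failure probability (1e-6) and the 10% target error are assumed values, not figures from this chapter:

```python
def trials_needed(p, rel_err):
    """Approximate number of independent demands needed before a simple
    binomial estimate of a per-demand failure probability p reaches a
    relative standard error of rel_err:
        sd(p_hat)/p = sqrt((1 - p)/(n*p)) = rel_err
        =>  n = (1 - p)/(rel_err**2 * p)
    """
    return (1.0 - p) / (rel_err ** 2 * p)

# A claimed protective-system failure probability of 1e-6 per demand would
# require on the order of 1e8 observed demands just for ~10% relative error.
n = trials_needed(1e-6, 0.10)
```

Since no protective system accumulates anywhere near that much demand history, the failure probability cannot be statistically known, which is the point being made above.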

#### **1.4 Predictive modeling for safety: critical protections**

The concept of PRA is introduced here at a very high level. In practice, PRA and other methods used to quantify probabilities and frequencies are fundamentally derived from logical descriptions of a technological system's response to an "upset", or initiating event. **Figure 1** is an example of such a logical description that envisions a technological system that "operates", *Z*, on an initiating event input, *I*, to produce "outputs", *O*. *Z* creates at least 2<sup>*n*</sup> outputs by assigning probabilities that split *I* at each of *n* devices. The splits are conditional such that if *P*(*f*) is the probability of failure of a device, one branch of the split is assigned *P*(*f*) and the other 1 − *P*(*f*).
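The branch bookkeeping described above can be sketched by direct enumeration. This is an illustrative construction, not the authors' formalism, and the per-device failure probabilities are assumed values:

```python
from itertools import product

def event_tree(probs):
    """Enumerate all 2^n branch sequences of a simple event tree.

    probs[i] is the failure probability assigned to device i. Returns a
    dict mapping each outcome tuple (True = device failed) to the
    probability of that branch sequence, i.e. the product of P(f) or
    1 - P(f) along the branch.
    """
    branches = {}
    for outcome in product([False, True], repeat=len(probs)):
        p = 1.0
        for failed, pf in zip(outcome, probs):
            p *= pf if failed else (1.0 - pf)
        branches[outcome] = p
    return branches

# Three redundant devices in parallel (1/3 logic, as in Figure 1): the
# protective function is lost only on the branch where all three fail.
probs = [0.01, 0.01, 0.01]           # illustrative per-demand failure probabilities
tree = event_tree(probs)
p_loss = tree[(True, True, True)]    # 0.01**3, about 1e-6
```

The branch probabilities sum to one by construction, which is a useful sanity check on any such tree.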

Such probability models, with various levels of sophistication, are used to characterize the efficacy of safety-critical protections.<sup>7</sup>

<sup>5</sup> A description of the condensate polishing system is given in [1]. It contains resin beads that chemically condition the water flowing through them and act like a filter susceptible to plugging up.

<sup>6</sup> The "Law of Large Numbers" was formally proved by Bernoulli.

<sup>7</sup> While there is a vast engineering literature touting non-probabilistic strategies for addressing uncertainty (fuzzy logic, interval arithmetic, Dempster-Shafer, etc.), only probability theory has survived the unrelenting scrutiny of philosophers of mathematics as a viable theory of uncertainty. We embrace probability theory as the preferred logical framework for predictive modeling in engineering practice.

#### **Figure 1.**

*A primitive event sequence logic for a protective system having three devices in parallel (1/3 logic).*

Understanding the protective system modeling discussed here necessarily assumes postgraduate familiarity with stochastic processes, distinguishing it from the treatment PRA practitioners use. Stochastic dynamics of safety-critical protections are defined in a general setting, allowing efficacy analysis to be treated in a top-down manner: general predictive modeling first, followed by exploration of the consequences inherent in the simplifying assumptions required of popular risk quantification methods such as PRA, Quantitative Risk Assessment (QRA), and Probabilistic Safety Assessment (PSA).

Models of efficacious protective system design and management cannot be informed by "probability numbers"—such numbers are unavailable as a practical matter. This is to say, the condition "safe enough" is almost never expressible as a numerical probability as argued by Hansson (see [7–9], for perspectives on safety). Hence, efficacy of protective systems should rightly be explored in terms of insights from the mathematical structure of stochastic processes serving as predictive models. To this end, the following develops models at a level of abstraction sufficient to capture stochastic structures that agree with practical engineering understandings. Most importantly, a top-down exposition leads to straightforward identification of the particularizing assumptions necessary to calibrate reactor protective system stochastic predictive models with observational data. In other words, the engineering physics that must hold to compute valid statistical estimates of risk metrics can be identified. Hansson in ([10], Section 4) sets the stage where statistical models can be calibrated and where they cannot.

Clearly, hazardous technologies and their protections are engineered systems that operate according to laws of physics, and their operational behaviors are governed by ongoing design, maintenance, and management decisions. These systems are carefully monitored over time so that information gained can direct re-designs and maintenance intended to improve productivity and safety. Of course, our uncertainty regarding how protections might perform in the future is, at best, a reflection of our historical understanding of physical behaviors, both technological and environmental. For example, we cannot forecast the future arrival of possible operational problems, the possibility of which we are *a priori* unaware.<sup>8</sup> Thus, any understanding of the efficacy of protections must connect physics with a time-dependent state of knowledge about that physics. In this regard, filtered probability spaces are indispensable when exploring engineered protective system efficacy.

<sup>8</sup> For example, one cannot assign the probability of occurrence of any failure mode that one does not yet know possibly exists ... a point seemingly overlooked by many risk analysts.

System physics is approached here from an operations perspective to establish the analytical framework within which physical features can be mathematically described as time-dependent functionals. This framework is sufficiently general to accommodate most hazardous production systems. Analytical modeling of operations is concerned with characterizing a system's temporal dynamics as the evolution of the system's "state" over time. Almost always, *operations* is concerned with predicting, up to probability law, transitions among particular sets of system states given certain observable historical behaviors. Such transition dynamics are captured as stochastic processes.

The theory of stochastic processes is mature and provides a valid framework for understanding operations; derivations and detailed explanations of the central results applied here are found in the widely accessible literature. To avoid repetition, the operation of hazardous systems and their protections is modeled here by stochastic point processes, with supporting results cited from the appropriate literature without proof.

#### **2. Construction of accident counting processes for reactor risk analysis**

Consider **Figure 2**, showing how causality is aggregated from the underlying physics of nature, through the limitations of engineering understanding of that physics, up to the operational level where the physics is collapsed to the observations *functional* or *failed* or, more succinctly, {0, 1}. Risk quantification is done on observations whereby probabilities are assigned to events. Such probabilities are assumed to live in a probability space whose events are sets of failure outcomes taking place at the level of the underlying physics shown in **Figure 2**.

#### **2.1 Physics-based modeling in a general stochastic setting**

Protective systems safeguard against future accidents. The design of protective systems necessarily relies on deterministic physical laws (physics) embedded within a stochastic framework that analytically captures uncertainty about *future* protection behaviors. This stochastic setting yields *predictive models* useful for understanding the efficacy of given protection designs. We establish the following:

**Definition 2.1** (Object). A collection of devices, subsystems, systems, and environments enclosed within a specified logical control volume.

#### **Figure 2.**

*Concept of knowledge aggregation from the underlying physics (both known and unknown) up to the operational level of observation.*


**Definition 2.2** (State). The unique numerical assignment to specific features of an object's physical condition (typically in SI units).

**Definition 2.3** (State Space). The set of all possible object states.

**Definition 2.4** (State Variable). A mapping from the domain of an object's physical condition into its state space.<sup>9</sup> Typically, state variables are indexed by time.

**Definition 2.5** (State Trajectory). The evolution of an object's state over a collection of specific time intervals.

**Definition 2.6** (Predictive Models). Probability laws on an object's state variables indexed by time. Predictive models typically include a time *t* = 0 that delineates future from past.

**Remark 1** Predictive models *do not* predict the future. Rather, they frame uncertainty about future physical behaviors in terms of the present state of knowledge.

Objects of modeling interest can be comprised of subordinate objects; that is, an object can be the logical union of other objects. An object is *atomic* if it is not composed of subordinate objects. It follows that an object's state space is defined on the union of its subordinate atomic object state spaces. The state of an object typically evolves over time. Hence, predictive modeling requires not only specifying the state variable mappings that assign state values to material and environmental conditions, but also crafting a mathematical characterization of the interaction of an object's state variables over time. The control volume defining any object connects its features with the laws of physics that govern its evolution, and it is our understanding of physics that allows us to mathematically characterize the time-dependent interaction among an object's state variables. The time-dependent behavior of an object's state is referred to as a *state trajectory*. The physical laws that govern state trajectories are, of course, specific to the object being modeled. Inasmuch as one's interests lie with establishing a mathematical framework within which predictive models for hazardous technologies can be crafted, one need only make reference to specific physics where application dictates.

**Remark 2** Valid predictive models must respect two practical engineering constraints: 1) State trajectories are never physically observable beyond the present, and 2) Engineers are not clairvoyants. Valid predictive models are strictly informed by historical information, while allowing for the acquisition of additional information as the object's future unfolds (see [11], for an interesting perspective). The evolution of information history cannot be omitted from valid predictive models.

#### *2.1.1 Life cycle, state trajectories, and the state process*

The terminology *life cycle* is used when referring to object state trajectories defined over the entire open time interval (−∞, +∞), giving a complete history of an object's state over time. In predictive modeling, an object's life cycle is rarely, if ever, identifiable with certainty at any finite time, and one must be satisfied with expressing the likelihood that an observed state trajectory belongs to a specified collection of possible life cycles.

Clearly, an object's life cycle is determined by the physics within its control volume. In the usual manner, let the domain of an object's state variables be Ω, the collection of possible physical outcomes the object can experience. Further, let F be a *σ*-algebra on Ω such that (Ω, F) forms a measurable space.

<sup>9</sup> There is no guarantee that all object features in the domain of state variables are physically observable.

Elements of F are subsets of possible physical outcomes with a defined probability measure expressing the likelihood that a given subset of measurable outcomes contains an outcome of specific interest. With *P* a set function *P* : F → [0, 1], the triple (Ω, F, *P*) forms a standard probability space (see [12] for a formal development).

Now, if *Zt* is the object's state at time *t*, where *Zt* : (Ω, F) → (*S*, B(*S*)) for *t* ∈ ℝ, then *Zt* is a measurable state variable mapping.<sup>10</sup> If *A* is a measurable subset of ℝ, then for fixed *ω* ∈ Ω, the mapping *Zt*(*ω*) : ℝ<sup>+</sup> → *S*, *t* ∈ *A*, forms a *state trajectory* under physical outcome *ω*. When *A* = ℝ, *Zt*(*ω*) becomes the object's life cycle under outcome *ω*. Now, without loss of generality, require that for all *ω* ∈ Ω and *t* ∈ ℝ, *Zt*(*ω*) be right-continuous with left-hand limits. It then follows that the collection of random variables {*Zt*; *t* ≥ 0} forms a stochastic process with state space *S*. **Z** = {*Zt*; *t* ≥ 0} is referred to as an object's *state process*.
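The càdlàg convention above lends itself to a simple computational sketch. The fragment below is purely illustrative (the class name and the state labels are hypothetical, not part of the formal development): it represents a piecewise-constant state trajectory *Zt*(*ω*) and evaluates both the right-continuous state and its left-hand limit.

```python
import bisect

class StepTrajectory:
    """A cadlag (right-continuous with left-hand limits) piecewise-constant
    trajectory: values[0] holds on [0, jump_times[0]); jump_times[i] is the
    instant the state becomes values[i + 1]."""

    def __init__(self, jump_times, values):
        assert len(values) == len(jump_times) + 1
        self.jump_times = jump_times
        self.values = values

    def state(self, t):
        # Right-continuous evaluation: at a jump time, return the post-jump value.
        return self.values[bisect.bisect_right(self.jump_times, t)]

    def left_limit(self, t):
        # Z_{t-}: the state immediately before time t.
        return self.values[bisect.bisect_left(self.jump_times, t)]

# A trajectory that starts "working", fails at t = 2.0, is restored at t = 5.0.
z = StepTrajectory([2.0, 5.0], ["working", "failed", "working"])
print(z.state(2.0), z.left_limit(2.0))  # failed working
```

The distinction between `state` and `left_limit` is exactly the distinction between *Zt* and *Zt−* used repeatedly in the transition-counting constructions later in the chapter.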

#### *2.1.2 Probability law and validation*

Predictive modeling requires framing uncertainty about an object's state through a *probability law* ℒ**<sup>Z</sup>** : B(*S*) → [0, 1] on its state process. Here, for each *n*-sequence of two-tuples **B***n* = ((*t*1, *B*1), … , (*tn*, *Bn*)) such that *ti* ∈ ℝ<sup>+</sup> and *Bi* ∈ B(*S*), *i* = 1, … , *n* < ∞,

$$\mathcal{L}\_{\mathbf{Z}}\left(\underline{B}\_n\right) \equiv P\left(\bigcap\_{i=1}^{n} Z\_{t\_i}^{-1}(B\_i)\right) = P\{\omega \in \Omega : Z\_{t\_1}(\omega) \in B\_1, \dots, Z\_{t\_n}(\omega) \in B\_n\}.\tag{1}$$

The measurability of all state variables in **Z** ensures that its probability law ℒ**<sup>Z</sup>** defines a probability measure on the measurable space (*S*, B(*S*)). Hence, uncertainty associated with any physics-based predictive model is captured on the probability space (*S*, B(*S*), ℒ**<sup>Z</sup>**).

**Definition 2.7** (Predictive Model). *For an object with state process* **Z** *defined on* (Ω, F, *P*)*, the probability space* (*S*, B(*S*), ℒ**<sup>Z</sup>**) *is the object's predictive model.*

Validity of a specific predictive model (*S*, B(*S*), ℒ**<sup>Z</sup>**) requires that it agree with both the physics underlying **Z** and historical observations of the physical behaviors of the object that **Z** represents. Without crawling deeply into the weeds of model validation, it is easily reasoned that the degree of difficulty in proving validity is greatly influenced by the cardinality of both *S* and B(*S*). Suffice it to say that large state spaces associated with *complicated* object trajectories and *large* information-bearing *σ*-algebras B(*S*) require vast amounts of observational data to prove model validity. In fact, full validation is almost always a practical impossibility, since quantifying the probability law ℒ**<sup>Z</sup>** stands among the most challenging aspects of stochastic modeling. However, careful stochastic analysis provides a direct means of identifying *invalid models*.
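The calibration burden can be made concrete with a toy computation. The sketch below estimates a single finite-dimensional coordinate of a probability law, in the sense of Eq. (1), by Monte Carlo over simulated trajectories. The two-state Markov "object" and all of its rates are invented for illustration; nothing here is specific to reactors.

```python
import random

def simulate_states(t_points, fail_rate=0.5, repair_rate=1.0, rng=random):
    """Toy two-state object (0 = working, 1 = failed) with exponential
    holding times; returns the state at each requested time, i.e., one
    finite-dimensional slice (Z_t1, ..., Z_tn) of a single trajectory."""
    t, state, out = 0.0, 0, []
    for tp in sorted(t_points):
        while True:
            rate = fail_rate if state == 0 else repair_rate
            dwell = rng.expovariate(rate)
            if t + dwell > tp:
                break  # memorylessness lets us discard the residual dwell
            t += dwell
            state = 1 - state
        t = tp
        out.append(state)
    return out

def estimate_joint(event, t_points, n=20000, seed=0):
    """Monte Carlo estimate of P(Z_t1 in B1, ..., Z_tn in Bn): one
    coordinate of the probability law, in the spirit of Eq. (1)."""
    rng = random.Random(seed)
    hits = sum(event(simulate_states(t_points, rng=rng)) for _ in range(n))
    return hits / n

# Probability the object is working at both t = 1 and t = 2:
p = estimate_joint(lambda s: s[0] == 0 and s[1] == 0, [1.0, 2.0])
```

Even this tiny model needs thousands of simulated life cycles to pin down one two-point joint probability; each additional observation time multiplies the number of joint events to calibrate, which is one face of the data burden discussed above.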

**Observation 1** A necessary condition for an object's state process to be valid is that it must be congruent with well-understood physics. Conversely, in circumstances where **Z** is not congruent with well-understood physics, it immediately follows that the object's predictive model (*S*, B(*S*), ℒ**<sup>Z</sup>**) must be invalid.

Observation 1 reveals the importance of stochastic modeling in a general setting where an object's *physics–based* behaviors are mapped into time–dependent state trajectories. While proving validity of a predictive model is rarely an achievable engineering objective, it is often straightforward to recognize invalidity of specific models.

<sup>10</sup> B(*S*) is the standard Borel *σ*-algebra generated by the object state space *S*.

The following Section 2.2 points out how popular risk modeling methodologies for protective system *operations* can easily obscure underlying physics and can result in a failure to recognize model invalidity.

#### **2.2 Stochastic modeling: production, protection and environment**

When concern lies with the efficacy of protections, **Z** is understood to model an object representing a hazardous production system and its protections, with *S* as the object state space onto which all underlying physics is mapped. Designate three subordinate objects: a production system, a protective system, and an environment.<sup>11</sup> Without loss of generality, assume that these three subordinate objects are mutually exclusive (i.e., their respective control volumes do not intersect); mutual exclusivity does *not* imply that production, protection, and environment operate independently.

The mutual exclusivity of production, protection, and environment ensures that the state space *S* is formed as the Cartesian product of the subordinate object state subspaces. Here,

$$S = S^{X} \times S^{R} \times S^{Y},\tag{2}$$

where

*S<sup>X</sup>* is the subspace of production system states,

*S<sup>R</sup>* is the subspace of protection system states, and

*S<sup>Y</sup>* is the subspace of environment states.

Take the subspaces identified in Eq. (2) each to be a manifold with boundary. By definition, any element of *S<sup>X</sup>* numerically characterizes a production state. Similarly, elements of *S<sup>R</sup>* quantify protection states, while elements of *S<sup>Y</sup>* describe an environmental state within which production and protection exist. Now,

**<sup>X</sup>** <sup>¼</sup> *Xt* f g : *<sup>t</sup>*≥<sup>0</sup> , is  the  production  state  process, where *Xt* : ð Þ! <sup>Ω</sup>, <sup>F</sup> *<sup>S</sup><sup>X</sup>*,<sup>B</sup> *SX* is  the  production  system  state at time *<sup>t</sup>*; **<sup>R</sup>** <sup>¼</sup> *Rt* f g : *<sup>t</sup>*≥<sup>0</sup> , is  the  protection  system  state process, where *Rt* : ð Þ! <sup>Ω</sup>, <sup>F</sup> *SR*,<sup>B</sup> *<sup>S</sup><sup>R</sup>* is  the  protection  system  state at time *<sup>t</sup>*; and **<sup>Y</sup>** <sup>¼</sup> *Yt* f g : *<sup>t</sup>*≥<sup>0</sup> , is  the  environment  state  process, where *Yt* : ð Þ! <sup>Ω</sup>, <sup>F</sup> *<sup>S</sup><sup>Y</sup>*,<sup>B</sup> *<sup>S</sup><sup>Y</sup>* is  the  protection  system  state at time *<sup>t</sup>:*

By construction, **Z** = (**X**, **R**, **Y**), and for all *t* ≥ 0

$$Z\_t = (X\_t, R\_t, Y\_t) : (\Omega, \mathcal{F}) \to (S, \mathcal{B}(S)) \tag{3}$$

where

$$\mathcal{B}(S) = \mathcal{B}\left(S^{X}\right) \otimes \mathcal{B}\left(S^{R}\right) \otimes \mathcal{B}\left(S^{Y}\right). \tag{4}$$

<sup>11</sup> Recall that a protective system is comprised only of elements that are mandated by regulatory oversight (i.e., its elements would not be present within the enterprise operations control volume in the absence of regulatory requirements).
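A minimal sketch of the product construction in Eqs. (2)–(4), using small invented finite subspaces in place of *S<sup>X</sup>*, *S<sup>R</sup>*, and *S<sup>Y</sup>* (all state names are hypothetical):

```python
from itertools import product

# Hypothetical finite subspaces standing in for S^X, S^R, S^Y.
S_X = {"producing", "tripped"}          # production states
S_R = {"armed", "engaged", "failed"}    # protection states
S_Y = {"calm", "storm"}                 # environment states

# The composite state space S = S^X x S^R x S^Y of Eq. (2).
S = set(product(S_X, S_R, S_Y))

# A product ("rectangle") event B_X x B_R x B_Y -- the kind of set that
# generates the product sigma-algebra of Eq. (4).
B = set(product({"producing"}, {"armed", "engaged"}, S_Y))

Z_t = ("producing", "armed", "storm")   # one composite state (X_t, R_t, Y_t)
print(len(S), Z_t in B)  # 12 True
```

In the finite case the product *σ*-algebra is just the power set of `S`, but the rectangle events above are the analogue of the generating sets in Eq. (4).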

#### *2.2.1 Dependency among production, environment, and protection*

By convention, take all trajectories of **X**, **R**, and **Y** to be càdlàg (*right-continuous with left-hand limits*). Decomposing *S* into *S<sup>X</sup>*, *S<sup>R</sup>*, and *S<sup>Y</sup>* allows framing the interaction among production, protection, and environment over time. It is understood that the temporal dynamics of these three stochastic state processes are not mutually independent. Here, the *σ*-algebras respectively generated by the processes **X**, **R**, **Y**, and **Z** are written as

$$\mathcal{F}^X \equiv \sigma(\mathbf{X}) = \lim\_{t \to \infty} \mathcal{F}^X\_t, \qquad \mathcal{F}^X\_t = \sigma\{X\_u; -\infty < u \le t\},\tag{5}$$

$$\mathcal{F}^R \equiv \sigma(\mathbf{R}) = \lim\_{t \to \infty} \mathcal{F}^R\_t, \qquad \mathcal{F}^R\_t = \sigma\{R\_u; -\infty < u \le t\},\tag{6}$$

$$\mathcal{F}^Y \equiv \sigma(\mathbf{Y}) = \lim\_{t \to \infty} \mathcal{F}^Y\_t, \qquad \mathcal{F}^Y\_t = \sigma\{Y\_u; -\infty < u \le t\},\tag{7}$$

$$\mathcal{F}^Z \equiv \sigma(\mathbf{Z}) = \lim\_{t \to \infty} \mathcal{F}^Z\_t, \qquad \mathcal{F}^Z\_t = \sigma\{Z\_u; -\infty < u \le t\}. \tag{8}$$

There can exist some *BZ* = *BX* ∪ *BR* ∪ *BY* ∈ F<sup>Z</sup>, where *BX* ∈ F<sup>X</sup>, *BR* ∈ F<sup>R</sup>, and *BY* ∈ F<sup>Y</sup>, such that

$$P(B\_X \cap B\_R \cap B\_Y) \neq P(B\_X) \cdot P(B\_R) \cdot P(B\_Y). \tag{9}$$

Dependence among **X**, **R**, and **Y** is intuitively clear: the production process **X** and the protection process **R** both depend on **Y**, the common environment within which they operate. So, there is some *B* ∈ B(*S*) such that the likelihood that production and protection are in *B* at time *t* ≥ 0 must, as a matter of physics, depend on the evolution of their common operating environment. For example, the arrival of natural disasters and the wear-out rate of equipment are physics-based environmental influences on both production and protection. Thus,

$$\begin{aligned} P(X\_t \in B | Y\_u; u \le t) &\ne P(X\_t \in B) \\ P(R\_t \in B | Y\_u; u \le t) &\ne P(R\_t \in B), \end{aligned} \tag{10}$$

and, consequently, **X** and **R** are not generally independent. However, the future of the environment process **Y** is typically uninfluenced by the histories of production and protection; we call this the Lack of Anticipation Property (LAP).

**Definition 2.8** (Lack of Anticipation Property)**.** For all *t*, *s* ≥ 0 and *B* ∈ B(*S*),

$$P(Y\_{t+s} \in B | X\_u, R\_u ; u \le t) = P(Y\_{t+s} \in B) \tag{11}$$

The LAP plays a central role in predictive modeling of safety-critical protections, and is represented schematically in **Figure 3**.
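The common-cause dependence of Eq. (10) and the LAP of Eq. (11) can be exhibited together in a toy simulation. All probabilities below are invented for illustration: production and protection failures are both more likely in a "harsh" environment, while the environment's next state ignores production and protection entirely.

```python
import random

def step(env, rng):
    """One hypothetical time step: X and R failures depend on the common
    environment (cf. Eq. (10)), while the next environment state ignores
    X and R entirely -- the Lack of Anticipation Property of Eq. (11)."""
    harsh = (env == 1)
    x_fail = rng.random() < (0.20 if harsh else 0.05)  # production failure
    r_fail = rng.random() < (0.15 if harsh else 0.02)  # protection failure
    new_env = 1 if rng.random() < 0.3 else 0           # LAP: no feedback
    return new_env, x_fail, r_fail

def failure_stats(n=50000, seed=1):
    rng = random.Random(seed)
    env, both, x_cnt, r_cnt = 0, 0, 0, 0
    for _ in range(n):
        env, x_fail, r_fail = step(env, rng)
        both += x_fail and r_fail
        x_cnt += x_fail
        r_cnt += r_fail
    return both / n, (x_cnt / n) * (r_cnt / n)

p_joint, p_if_independent = failure_stats()
# The shared environment makes joint failure more likely than independence
# would predict, even though X and R never interact directly.
```

This is exactly the inequality of Eq. (9): the joint failure frequency exceeds the product of the marginals because the environment acts as a common cause.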

#### *2.2.2 Information flow and filtrations*

Time and state dependencies among **X**, **R**, and **Y** are informed by the flow of historical information. Information is characterized in a time-indexed collection of sub-*σ*-algebras {F*t*}*t*≥0, often called a history, forming a filtration that augments the standard probability space on which state variables are defined. Here, the filtered

#### **Figure 3.**

*Schematic of dependence among production, protection, and environment trajectories.*

probability space (Ω, F, {F*t*}*t*≥0, *P*) is such that the state variables *Zt* = (*Xt*, *Rt*, *Yt*) are F*t*-measurable for all *t* ≥ 0. This filtration has the standard properties where,

**Property 1** F<sub>0</sub> contains all *P*-null sets (completeness).

**Property 2** lim*s*↓0 F*t*+*s* ≜ ∩*s*>*t* F*s* = F*t* (right-continuity).

**Property 3** F*t* ⊆ F*t*+*s* (monotonicity).

**Property 4** lim*t*→∞ F*t* ≜ ∨*t*≥0 F*t* = F (convergence).

The first property is an easily satisfied technical requirement from measure theory. The remaining three properties are normative and intuitively understandable. Property 2 indicates that the acquisition of engineering information is not necessarily *smooth*: no information may be acquired over certain time intervals, while large amounts can arrive at certain points in time. Property 3 asserts that information gained through discovery is not lost over time. Property 3 and Property 4 acknowledge that one cannot be certain that all useful modeling information will be revealed prior to the end of life cycles.

All information (engineering design, maintenance, operations, weather, management, *etc.*) associated with all possible object life cycles is contained in F. Information available at time *t* < ∞ is limited to the sub-*σ*-algebra F*t*. Note that the information represented in F*t* is not necessarily limited to F*t*<sup>Z</sup>, the *σ*-algebra generated by {*Zu*; *u* ≤ *t*} (for example, maintenance schedules are not directly represented in F*t*<sup>Z</sup>, but they are extremely useful in predictive modeling). Thus, in general, F*t*<sup>X</sup>, F*t*<sup>R</sup>, and F*t*<sup>Y</sup> are each sub-*σ*-algebras of F*t*<sup>Z</sup>, and F*t*<sup>Z</sup> ⊂ F*t*, for all *t* ≥ 0.

**Remark 3** Predictive modeling must rely on presently available information. With *t* = 0 taken as the beginning of all life cycles in Ω, the sub-*σ*-algebra F*t* contains all modeling information acquirable by time *t* ≥ 0. The filtration {F*t*}*t*≥0 contains all acquirable information flows over all possible complete life cycles. {F*t*}*t*≥0 is indispensable in predictive modeling because it enables modelers to represent the current state of knowledge. Omitting filtrations from predictive modeling implies that all possible modeling information is available at time *t* = 0; that is, one assumes that F*t* = F for all *t* ≥ 0. The filtered probability space (Ω, F, {F*t*}*t*≥0, *P*) allows modelers to characterize their uncertainty about state variable values as a conditional (on available information) probability. For example, with *t*, *s* ≥ 0, *P*(*Zt*+*s* ∈ *B* | F*t*) is the likelihood that the state of a reactor and its protection will be in condition *B* ∈ B(*S*) at time *t* + *s*, given the information available at time *t*.
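The role of conditioning information can be sketched numerically. In the toy fragment below (a hypothetical two-state chain, not a reactor model), the forecast of the state several steps ahead changes materially depending on the state observed "now", which stands in for the information carried by F*t*.

```python
import random

def evolve(state, steps, p_stay, rng):
    """Hypothetical two-state chain: keep the current state with
    probability p_stay at each step, otherwise switch."""
    for _ in range(steps):
        if rng.random() > p_stay:
            state = 1 - state
    return state

def predict(state_now, steps=3, p_stay=0.9, n=20000, seed=2):
    """Monte Carlo estimate of P(Z_{t+s} = 1 | F_t), with the time-t
    information summarized by the currently observed state."""
    rng = random.Random(seed)
    return sum(evolve(state_now, steps, p_stay, rng) for _ in range(n)) / n

p_given_failed = predict(1)   # conditioned on observing state 1 now
p_given_working = predict(0)  # conditioned on observing state 0 now
# Forecasts conditioned on different time-t observations disagree sharply,
# whereas an unconditional forecast would blend the two.
```

Dropping the filtration amounts to replacing both conditional forecasts with a single unconditional one, discarding exactly the information an operator actually has at time *t*.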

#### *2.2.3 Calibrating probability law*

In recent years, Uncertainty Quantification (UQ) has emerged as a topic of considerable interest in risk analysis and safety engineering. Generally, as the name suggests, UQ explores means to measure or judge the size of uncertainty. In the context of a nuclear reactor with regulated safeguards, operators and regulators are concerned with the efficacy of protection, and the predictive model (*S*, B(*S*), ℒ**<sup>Z</sup>**) provides them a functional relationship between life cycle physics and uncertainty. Without exception, any useful UQ metric *m* maps probability law into a real-valued *n*-vector; that is, *m* : ℒ**<sup>Z</sup>** → ℝ<sup>*n*</sup>, *n* ≥ 1. With *B* ∈ B(*S*) taken as the support of *m*, it is required that *m* be B(*B*)-measurable. This mild measurability restriction admits UQ metrics that might not require a complete characterization of the probability law ℒ**<sup>Z</sup>**. Nonetheless, all UQ metrics require that at least some part (if not all) of ℒ**<sup>Z</sup>** be calibrated. Calibrating ℒ**<sup>Z</sup>** is generally quite challenging.

Recall from the Kolmogorov Extension Theorem that specifying the probability law ℒ**<sup>Z</sup>** is equivalent to specifying all finite joint distributions on the state process **Z** ¼ *Zt* f g ; *t*≥ 0 [13]. Of course, there is generally an uncountable number of finite joint distributions on stochastic processes that evolve over continuous time. So, unless the state process **Z** exhibits very special independence properties (e.g., regeneration, stationarity, ergodicity) in combination with a small support for the UQ metric *m*, the amount of historical data required to statistically calibrate elements of ℒ**<sup>Z</sup>** is staggering.

Intuitively, crafting UQ metrics on predictive models that are useful for informing the efficacy of reactor protections boils down to:


In practice, when particularizing **Z** to a specific reactor and safety-critical protective system, choosing a fine granularity on the support of any UQ metric *m* : ℒ**<sup>Z</sup>** → ℝ<sup>*n*</sup> introduces a model calibration burden for ℒ**<sup>Z</sup>** that is typically insurmountable. Section 4 reviews a collection of very strong underlying modeling assumptions required to calibrate Core Damage Frequency (CDF), the widely used UQ metric that is integral to the popular PRA methodology. The bottom line here is that UQ can never escape the challenges of calibrating ℒ**<sup>Z</sup>**; thus, UQ metrics applied to reactor protective system efficacy analysis should be evaluated with a healthy engineering mistrust.

#### **3. Operations modeling**

*Operations* generally refers to the study of *stochastic point processes* embedded within an object's state process **Z**. It follows that in operations modeling and analysis, *time* is the only physical variable of interest. An operations point process is typically embedded within **Z** and generates an increasing sequence of random times {*Tn*; *n* ∈ ℕ} identifying the times of occurrence of a particular non-quantitative feature of **Z** that is identifiable through observable state changes. In principle, the state space *S* is covered with a partition *G* ∈ B(*S*), where *G* is an at most countable collection of sets. The elements of *G* correspond to important non-quantitative features on the state space *S*, and the random variable *Tn* marks the time of the *n*th occurrence of a state transition from one particular element of *G* to another.

In operations modeling, there are practical measurability issues that must be considered when examining transitions among elements of the partition *G* by the trajectories of the state process **Z** (which is, of course, adapted to the filtration {F*t*}*t*≥0 of the filtered probability space (Ω, F, {F*t*}*t*≥0, *P*)).

1. In principle, *Zt*<sup>−1</sup>(*G*) must be F-measurable for all *t* ≥ 0.


If *Zt*<sup>−1</sup>(*G*) ∉ F*t*, then for all *t* ≥ 0 there can exist important, and as yet undiscovered, operations events that may be observed in the future. The very practical circumstances where *Zt*<sup>−1</sup>(*G*) is not F<sub>0</sub>-measurable are explored in Section 4.

In the remainder of this section, except where otherwise noted, the typical development of operations modeling is followed, in which random quantities are defined on a standard probability space (Ω, F, *P*), free from the information flow dynamics characteristic of filtered probability spaces.

**Remark 4** Operations modeling is common in engineering practice. For example, in reliability analysis *G* = {*B*, *B<sup>c</sup>*} partitions the state space *S* such that *B* is the collection of all *reliable* states. The standard indicator mapping 1*B* : (*S*, B(*S*)) → ({0, 1}, *σ*({0, 1})) of physical states into the set {0, 1} defines a unit-less random variable on the probability space (*S*, B(*S*), ℒ**<sup>Z</sup>**). The object's availability process **A** ≜ {*At*; *t* ≥ 0}, where *At* = 1*B*(*Zt*), has trajectories that proceed only in jumps (up or down) of magnitude one.<sup>12</sup> When *Tn* is taken as the time of the *n*th downward jump of **A**, the sequence **T** ≜ {*Tn*; *n* ∈ ℕ} corresponds to a stochastic point process marking the time epochs of object state transitions from *working* to *failed*. The failure-time process **T** is clearly subordinate to the state process **Z**. *It is the point process* **T** *that characterizes object failures from an operations perspective.*
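Remark 4's construction can be sketched in a few lines. Discrete time and the state labels below are illustrative simplifications, not part of the formal development:

```python
def availability_process(states, reliable):
    """A_t = 1_B(Z_t) along one discrete trajectory, plus the epochs T_n
    of downward jumps (reliable -> unreliable transitions)."""
    A = [1 if s in reliable else 0 for s in states]
    T = [t for t in range(1, len(A)) if A[t - 1] == 1 and A[t] == 0]
    return A, T

# Hypothetical trajectory; B = {"ok", "degraded"} are the reliable states.
traj = ["ok", "ok", "degraded", "down", "down", "ok", "down"]
A, T = availability_process(traj, {"ok", "degraded"})
print(A, T)  # [1, 1, 1, 0, 0, 1, 0] [3, 6]
```

Note that the indicator collapses physically distinct states ("ok", "degraded") into a single reliable class: the availability process carries strictly less information than the state process it is built from.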

An operations point process **T** is often analyzed through its corresponding *counting* process **Q**. Here,

$$\mathbf{Q} \triangleq \{ Q\_t; t \ge 0 \}, \tag{12}$$

where,

$$Q\_t = \sum\_{n=1}^{\infty} \mathbf{1}\_{[0,t]} \left( T\_n \right) \tag{13}$$

counts the number of operations epochs appearing in the closed time interval [0, *t*]. There is an obvious one-to-one correspondence between the sequence (*Tn*(*ω*); *n* ∈ ℕ) and the trajectory (*Qt*(*ω*); *t* ≥ 0) for each *ω* ∈ Ω; a plot of one directly reveals the other. It easily follows that, for all *n* ∈ ℕ and *t* ≥ 0, {*Qt* < *n*} = {*Tn* > *t*}, which directly implies the distributional relationships

$$\begin{aligned} P(Q\_t < n) &= 1 - P(T\_n \le t) \\ E[Q\_t] &= \sum\_{n=1}^{\infty} P(T\_n \le t) . \end{aligned} \tag{14}$$

<sup>12</sup> The object's (limiting) availability is often a metric of interest and is taken as *A* ≜ lim*t*→∞ *E*[*At* | F*t*<sup>Z</sup>], when it exists.

Analysis of the operations point process **T** is approached through its corresponding counting process **Q**, which is accessible through the *martingale calculus*.
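The correspondence between **T** and **Q** is easy to check numerically. The sketch below builds a hypothetical point process from exponential gaps and verifies the duality {*Qt* ≥ *n*} = {*Tn* ≤ *t*} underlying Eq. (14):

```python
import bisect
import random

def count_by(T, t):
    """Q_t = number of epochs T_n with T_n <= t, as in Eq. (13)."""
    return bisect.bisect_right(T, t)

# A hypothetical point process: 50 epochs as cumulative exponential gaps.
rng = random.Random(3)
T, acc = [], 0.0
for _ in range(50):
    acc += rng.expovariate(1.0)
    T.append(acc)

# Duality between the point process and its counting process:
# {Q_t >= n} holds if and only if {T_n <= t}, over a grid of (n, t).
ok = all(
    (count_by(T, t) >= n) == (T[n - 1] <= t)
    for n in (1, 5, 20, 50)
    for t in (0.5, 5.0, 25.0, 100.0)
)
print(ok)  # True
```

A plot of the step function `count_by(T, t)` against `t` is exactly the staircase whose jump locations are the epochs in `T`, which is the one-to-one correspondence noted above.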

#### **3.1 Classification of states for reactor operations with protections**

Consider now an object representing a nuclear reactor, its regulated protections, and the environment within which the reactor and protections operate. The state process **Z** is defined as before, in Section 2.2.2. Operations modeling and analysis begins with partitioning the state space *S* according to *G* = {*Gp*, *Gp*<sup>*c*</sup>}, where *Gp* ∈ B(*S*) is the set of all *persistent* states and its complement *Gp*<sup>*c*</sup> is the set of *transient* states. The set *Gp* is closed under all trajectories. That is, for all *ω* ∈ Ω

$$\lim\_{t \to \infty} \sum\_{0 < u \le t} \mathbf{1}\_{G\_p^c}(Z\_u(\omega)) \cdot \mathbf{1}\_{G\_p}(Z\_{u-}(\omega)) = 0;\tag{15}$$

thus, once entering the set of all persistent states *Gp*, a trajectory *Zt*(*ω*) cannot depart the set. A state *s* ∈ *Gp* is an *absorbing* state if for all *ω* ∈ Ω

$$\lim\_{t \to \infty} \sum\_{0 < u \le t} \mathbf{1}\_{\{s\}^c}(Z\_u(\omega)) \cdot \mathbf{1}\_{\{s\}}(Z\_{u-}(\omega)) = 0.\tag{16}$$

Clearly, absorbing states are persistent.

As a practical matter, require that all trajectories *Zt*(*ω*), *ω* ∈ Ω and *t* ≥ 0, of the state process terminate with retirement.<sup>13</sup> Retirement occurs either as (1) an inconsequential cessation of production, or (2) the consequence of an accident from which the reactor cannot recover. Once the reactor trajectory enters a retirement state, production terminates forever. All retirement states are taken as *absorbing*. Thus the absorbing states, designated *R* ⊂ *S*, and the transient states, designated *R<sup>c</sup>*, partition *S*. The transient states *R<sup>c</sup>* can themselves be partitioned such that *N* contains the *normal* operating states and *D* the *distressed* operating states. Clearly, the sets {*N*, *D*, *R*} also partition *S*. The possible state transitions among the elements of this partition are shown in **Figure 4**.

#### **Figure 4.** *Partitioned state transition diagram for state process Z.*

<sup>13</sup> In sharp contrast to our analyses, PRA, QRA, and PSA require that the state space *S* be almost surely (a.s.) composed of only persistent states, none of which are absorbing. The practical consequence of this state classification requirement is explored in Section 5.
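The partition {*N*, *D*, *R*} and the absorbing character of retirement can be sketched with a toy discrete-time chain. All transition probabilities below are invented for illustration:

```python
import random

def simulate_life(max_steps=10_000, seed=4):
    """Toy trajectory over the partition {N, D, R} of Figure 4:
    normal <-> distressed, with retirement R absorbing as in Eq. (16)."""
    rng = random.Random(seed)
    state, path = "N", ["N"]
    for _ in range(max_steps):
        if state == "N":
            state = "D" if rng.random() < 0.05 else "N"
        elif state == "D":
            u = rng.random()
            state = "N" if u < 0.7 else ("D" if u < 0.9 else "R")
        else:
            break  # R is absorbing: no exit transitions exist
        path.append(state)
    return path

path = simulate_life()
# Once the trajectory enters R it never leaves: R can appear at most once,
# and only as the final element of the path.
```

The structural invariants here (at most one entry into *R*, never followed by another state) are the discrete-time analogue of the closure condition in Eqs. (15) and (16).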

#### **3.2 The initiating event counting process**

Recall that the state variables are the three-tuples *Zt* ≜ (*Xt*, *Rt*, *Yt*). Thus, when a trajectory *Xt*(*ω*), *ω* ∈ Ω and *t* ≥ 0, of the production state process leaves the normal operating states *N* ∩ *S<sup>X</sup>* and enters the distressed states *D* ∩ *S<sup>X</sup>*, an *initiating event* has occurred. Recalling that by convention all state trajectories are càdlàg, all initiating events occur at the exact instant of transition from *N* ∩ *S<sup>X</sup>* to *D* ∩ *S<sup>X</sup>*. Arrival of an initiating event indicates that the reactor protective system should engage so as to mitigate the potential harm that might arise with the reactor operating in the distressed states. In practice, initiating events are often (although not always) observable.

For all *ω*∈ Ω and *t* ≥0, define

$$dQ\_t(\omega) \triangleq \mathbf{1}\_{\{D \cap S^{X}\}}(X\_t(\omega)) \cdot \mathbf{1}\_{\{N \cap S^{X}\}}(X\_{t-}(\omega)).\tag{17}$$

Clearly, for *t* > 0, *dQt* : Ω → {0, 1} is a random variable on the measurable space of possible life cycles (Ω, F). For all *ω* ∈ Ω and *t* ≥ 0, it follows that

$$Q_t(\omega) = \int_{(0,t]} dQ_u(\omega) \triangleq \sum_{0 < u \le t} \mathbf{1}_{\{D \cap S^X\}}(X_u(\omega)) \cdot \mathbf{1}_{\{N \cap S^X\}}(X_{u-}(\omega)) \tag{18}$$

and the trajectories of the stochastic process $\mathbf{Q} = \{Q_t;\ t \ge 0\}$ so formed inherit the càdlàg property. We call $\mathbf{Q}$ the *initiating event counting process*. Here, as a practical consideration, we will require that $Q_0(\omega) = 0$ for all $\omega \in \Omega$. That is, we assume that reactor production should be in a normal operating state at time $t = 0$, the beginning of life cycle $\omega$.<sup>14</sup>
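As an illustration, the counting in Eq. (18) can be sketched in a few lines of Python. This is a minimal sketch under stated assumptions: the discrete observation epochs, the label set `{'N', 'D', 'R'}`, and the sample life cycle are hypothetical stand-ins for the partition of **Figure 4**.

```python
# Count initiating events: transitions of a (discretely observed) cadlag
# state trajectory from the normal states N into the distressed states D,
# as in Eq. (18).

def count_initiating_events(trajectory):
    """trajectory: sequence of state labels, one per observation epoch,
    with labels in {'N', 'D', 'R'} (normal, distressed, retired)."""
    q = 0
    for prev, curr in zip(trajectory, trajectory[1:]):
        # dQ = 1 exactly when X_{t-} is in N and X_t is in D
        if prev == 'N' and curr == 'D':
            q += 1
    return q

# A hypothetical life cycle: two excursions into distress, then retirement.
omega = ['N', 'N', 'D', 'N', 'N', 'D', 'D', 'R']
print(count_initiating_events(omega))  # -> 2
```

Note that the dwell inside $D$ (the repeated `'D'`) adds nothing to the count; only the crossing from $N$ into $D$ is an initiating event.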

It is a straightforward matter to construct the operations point process $\mathbf{T} = \{T_n;\ n \in \mathbb{N}\}$ which captures the arrival times of initiating events. $\mathbf{T}$ is referred to as the *initiating event process*, and for each $\omega \in \Omega$ and $n \in \mathbb{N}$,

$$T_n(\omega) \triangleq \inf_{t>0} \left\{ \int_{(0,t]} dQ_u(\omega) = n \right\} \le \infty. \tag{19}$$

The initiating event counting process $\mathbf{Q}$ plays a central role in our treatment of operations modeling and efficacy analysis of safety-critical reactor protections. Understanding the construction of $\mathbf{Q}$ provides important insights (to be discussed in subsequent sections) into the relationship between hazard, risk, and the efficacy of protections.

**Remark 5** When assuming that $Z_t^{-1}(G) \in \mathcal{F}_0$ and with $Q_t : (\Omega, \mathcal{F}^Z) \to (\mathbb{N}, \mathcal{B}(\mathbb{N}))$, by construction it is ensured that $Q_t$ is $\mathcal{F}_t$-measurable for all $t \ge 0$; hence, the initiating event process $\mathbf{Q}$ is adapted to the filtration $\{\mathcal{F}_t\}_{t \ge 0}$ which contains the history generated by the state process $\mathbf{Z}$. It is preferable to model operations processes on the

<sup>14</sup> Support for this assumption requires the state (normal) to be observable.

filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \ge 0}, P)$, because its filtration contains all available information ... not simply the natural filtration $\{\mathcal{F}_t^Q\}_{t \ge 0}$ of $\mathbf{Q}$. When PRA, QRA and PSA are used for risk analysis, they implicitly and strictly rely on the natural filtration

$$\left\{\mathcal{F}\_t^{\mathbb{Q}}\right\}\_{t\geq0} \triangleq \{\sigma(Q\_t)\}\_{t\geq0} \subset \{\mathcal{F}\_t\}\_{t\geq0} \tag{20}$$

which contains only information about the occurrence times of initiating events, while ignoring all history of the state process **Z**, maintenance activity, weather, *etc*.

#### **3.3 The accident counting process**

When safety-critical protections function properly, a distressed reactor (i.e., $X_t \in D \cap S^X$) will avoid an accident and return to the set of normal operating states in $N$. Accidents are events influencing states of nature outside the control volume within which a nuclear reactor's state process $\mathbf{Z}$ develops. Accidents are always a consequence of a protection failure that causes collateral harm.<sup>15</sup> For our purposes, when an initiating event leads to any level of collateral harm, that event becomes an epoch of an accident.<sup>16</sup> Possible time delays are allowed between the arrival of an initiating event and any eventual collateral harm.<sup>17</sup> To this end, the distressed states $D \subset S$ are partitioned into those states $C$ that impact physics outside the system control volume, causing collateral harm, and $E = D \setminus C$, those distressed states that do not cause collateral harm. Partitioning $D$ allows an operations characterization of an accident:

**Definition 3.1.** An **Epoch of accident** occurs when the trajectory $(X_t(\omega), R_t(\omega))$, $\omega \in \Omega$, is such that the production state process $\mathbf{X}$ enters $C \cap S^X \subset D \subset S$ while the state of the protection process $\mathbf{R}$ is in $D \cap S^R$.

Guided by the transition diagram of **Figure 5**, a modification of **Figure 4**, showing the partitioning of distressed states *D* into *C* and *E*, the accident counting process and the accident point process can be constructed.

**Remark 6** Note that in the interval following an initiating event indicating that the state process has moved into distress, the system state can possibly transition many times between accident states in $C$ and non-accident distressed states in $E$ before either returning to normal or being retired. In practical modeling scenarios, state transitions across the partition of distressed states might take months (or even years). Thus, the initiating event leading to distress might not escalate to an accident for quite some time. In such situations, knowing that an initiating event has just occurred does not necessarily reveal whether or not the system state is on a trajectory of accident.

Now, define

<sup>15</sup> A good system for setting the level of consequence is the International Nuclear Event Scale (INES). In this system, we would classify events of INES Level 4 or above as accidents. The NRC defines an Extraordinary Nuclear Occurrence (ENO) in 10 CFR 140.83.

<sup>16</sup> Consideration of disastrous events is from a public policy perspective and focuses on collateral economic harm to people and/or the environment outside of the security fence.

<sup>17</sup> Groundwater contamination sourced from the Savannah River Site facility continued developing over many years, with the accident being discovered long after the facility first entered into distressed states of operation.

*Perspective Chapter: PRA and Protective System Maintenance DOI: http://dx.doi.org/10.5772/intechopen.110049*

#### **Figure 5.**

*Partitioned state transition diagram for the state process Z with the distressed states* D *decomposed into* C *(catastrophic states) and* E *(non-catastrophic distressed states).*

$$\begin{split}dQ_t^C(\omega) & \triangleq Q_t^C(\omega) - Q_{t-}^C(\omega) \\ &= \mathbf{1}_{\left\{C \cap S^X\right\}}(X_t(\omega)) \cdot \mathbf{1}_{\left\{D \cap S^R\right\}}(R_t(\omega)) \cdot \mathbf{1}_{\left\{N \cap S^X\right\}}(X_{t-}(\omega)). \end{split} \tag{21}$$

Note that the time $T_m^C(\omega)$ of the $m$th arriving epoch of an accident is given by

$$T\_m^C(\boldsymbol{\omega}) \triangleq \inf\_{t>0} \left\{ \int\_{(0,t]} dQ\_u^C(\boldsymbol{\omega}) = m \right\} \leq \infty. \tag{22}$$

When $m$ is the index of the last arriving epoch of an accident before retirement, all subsequent accidents are taken by convention to occur at infinity. Clearly, the random sequence $\{T_m^C;\ m \in \mathbb{N}\}$ is a thinning of the initiating event process $\mathbf{T}$. The point process $\mathbf{T}^C \triangleq \{T_m^C;\ m \in \mathbb{N}\}$ is the *accident process*.<sup>18</sup>

With $Q_t^C$ being the number of arriving accidents in the interval $(0, t]$, it follows that for each $\omega \in \Omega$, $t \ge 0$, and $m \in \mathbb{N}$,

$$Q_t^C(\omega) = Q_{T_m^C(\omega)}^C(\omega), \quad t \in \left[T_m^C(\omega), T_{m+1}^C(\omega)\right), \tag{23}$$

and, in practice with $Q_0^C(\omega) = 0$ for all $\omega \in \Omega$,

$$\begin{split} Q_t^C(\omega) &= \int_{(0,t]} dQ_u^C(\omega) \\ &\triangleq \sum_{0 < u \le t} \mathbf{1}_{\left\{C \cap S^X\right\}}(X_u(\omega)) \cdot \mathbf{1}_{\left\{D \cap S^R\right\}}(R_u(\omega)) \cdot \mathbf{1}_{\left\{N \cap S^X\right\}}(X_{u-}(\omega)). \end{split} \tag{24}$$

$\mathbf{Q}^C \triangleq \{Q_t^C;\ t \ge 0\}$ is called the *accident counting process*. By construction, $\mathbf{Q}^C$ is adapted to the filtration of $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \ge 0}, P)$.

**Remark 7** When assuming that $Z_t^{-1}(G) \in \mathcal{F}_0$, the accident counting process $\mathbf{Q}^C$ is, clearly, subordinate to the initiating event counting process $\mathbf{Q}$. That is, each epoch of arrival

<sup>18</sup> The arrival processes **T** and **TC** play a prominent role in PRA, QRA, and PSA. However, studying their corresponding arrival counting processes in order to access important results from the martingale calculus is preferred.

in $\mathbf{Q}^C$ is also an epoch of arrival in $\mathbf{Q}$. It is important to note, however, that while both $\mathbf{Q}^C$ and $\mathbf{Q}$ are adapted to the filtration $\{\mathcal{F}_t\}_{t \ge 0}$, $\mathbf{Q}^C$ *is not* adapted to the natural filtration $\{\mathcal{F}_t^Q\}_{t \ge 0}$ of the initiating event process $\mathbf{Q}$. Thus, the history of initiating events alone contains insufficient information to construct the accident thinning, a physics-based insight often overlooked in popular quantitative risk methodologies.

Finally, as a notational convenience, define for all *ω*∈ Ω and *t*≥0

$$A_t(\omega) \triangleq \mathbf{1}_{\left\{N \cap S^R\right\}}(R_t(\omega)),\tag{25}$$

where $A_t : (S, \mathcal{B}(S)) \to (\{0, 1\}, \mathcal{B}(\{0, 1\}))$ is a random variable on the filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \ge 0}, P)$. Clearly,

$$A_t = \begin{cases} 1, & \text{protections are available at time } t \\ 0, & \text{otherwise.} \end{cases} \tag{26}$$

We call $\mathbf{A} = \{A_t;\ t \ge 0\}$ the *protection availability process*, which inherits the càdlàg property from $\mathbf{R}$. Substituting Eq. (17) and Eq. (25) into Eq. (24) gives

$$Q_t^C(\omega) = \int_{(0,t]} (1 - A_u(\omega))\, dQ_u(\omega) \tag{27}$$

for all $\omega \in \Omega$ and $t \ge 0$. And, it follows directly from Eq. (22) and Eq. (27) that for all $m \in \mathbb{N}$, $\omega \in \Omega$ and $t \ge 0$,

$$T_m^C(\omega) \triangleq \inf_{t > 0} \left\{ \int_{(0,t]} (1 - A_u(\omega))\, dQ_u(\omega) = m \right\} \le \infty. \tag{28}$$

The importance of Eq. (27) is that it gives the dynamic relationship between the number of accidents over time in terms of arriving initiating events and the reliability of protections. It should be clear from the construction of Eq. (27) from trajectories of the state process $\mathbf{Z}$ that the stream of arriving initiating events and the reliability of protections are *not* generally independent, since both random phenomena are stochastically dependent on the dynamics of the environment process $\mathbf{Y}$. Further, Eq. (22) establishes the relationship between the accident time point process $\mathbf{T}^C$ and initiating event arrivals and protection reliability where, again, stochastic dependence between initiating event arrivals and protection reliability cannot be ignored. Eqs. (22) and (27) play a central role in the developments presented in Sections 4 and 5.
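The thinning expressed by Eq. (27) says that an initiating event is counted as an accident exactly when protections are unavailable at its arrival. A minimal sketch of this bookkeeping follows; the availability function and the event times are hypothetical.

```python
# Eq. (27) in discrete form: an initiating event escalates to an accident
# exactly when protection is unavailable (A_u = 0) at its arrival time.

def count_accidents(event_times, availability):
    """event_times: arrival times of initiating events (the jumps of Q);
    availability: function t -> 0 or 1 giving the protection process A_t."""
    return sum(1 - availability(t) for t in event_times)

# Hypothetical run: protections are out of service on [4.0, 6.0).
down = lambda t: 0 if 4.0 <= t < 6.0 else 1
events = [1.2, 4.5, 5.1, 7.8]
print(count_accidents(events, down))  # -> 2
```

The sketch makes the dependence issue visible: if the same environment that produces clustered `event_times` also drives `availability` to zero, accidents concentrate in exactly those intervals, which is why the two inputs cannot be modeled independently.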

#### **4. Unknown-unknowns**

When designing nuclear reactor protections, it is impossible to foresee and design for every circumstance that might lead to an accident. Such design deficiencies are often called *unknown-unknowns*. <sup>19</sup> Unknown-unknown design deficiencies are

<sup>19</sup> The terminology *unknown-unknowns* was popularized by former United States Secretary of Defense Donald Rumsfeld.


routinely discovered, documented, and corrected. It can be shown that uncertainty about the consequences of as yet undiscovered design deficiencies cannot be quantified. Thus, Probability Quantification (PQ) methodologies without exception overlook the influence of unknown-unknowns on initiating events, protection reliability, and ultimately epochs of accidents. Overlooking unknown-unknowns can only bias predictive accident metrics such as CDF optimistically, presenting an unavoidable pitfall for quantitative methodologies including PRA, QRA and PSA that make no use of filtered probability spaces.

Unknown-unknown failure modes in protective systems will be the focus of the analytical developments that follow.<sup>20</sup> Clearly, the possibility of undiscovered failure modes in reactor protections is a matter of great concern to both operators and regulators. The NRC has established rigorous reporting and operation protocols focused on newly discovered protection design flaws.

The analytical consequences of unknown-unknowns are revealed only when the system state process $\mathbf{Z}$ is defined on the filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \ge 0}, P)$, where all predictive assertions must be addressed in the context of information currently available in the filtration $\{\mathcal{F}_t\}_{t \ge 0}$. In particular, suppose that in the design of protections there exist protection failure modes that are undiscovered at the time of deployment. These design inadequacies will only be discovered during operations and are then incorporated into the existing body of information. Thus, protection failure modes unknown at time $t = 0$ will enter into the filtration $\{\mathcal{F}_t\}_{t \ge 0}$ at some time $t > 0$ upon discovery.

By construction, as shown in Sections 2 and 3, the protection availability process $\mathbf{A}$ is adapted to the filtration $\{\mathcal{F}_t\}_{t \ge 0}$ (i.e., $A_t$ is $\mathcal{F}_t$-measurable). When, for example, examining the likelihood that protections are not in a normal condition at any time $t \ge 0$, it follows that

$$\begin{split}P(A\_t = \mathbf{0}) &= \mathbf{1} - E\left[A\_t\right] \\ &= \mathbf{1} - E\left[E\left[A\_t \mid \mathcal{F}\_t\right]\right], \end{split} \tag{29}$$

because the random variable $E[A_t \mid \mathcal{F}_t]$ is well-defined.<sup>21</sup>

In predictive modeling, it is often the case that the condition of protections at some random time, $A_\tau$, where $\tau : \Omega \to \mathbb{R}_+$, is of great interest. Here, care must be exercised because Eq. (29) does not necessarily hold when substituting $A_\tau$ for $A_t$. Let $G_f \subset S$ be the set of all states where protections are unavailable (protections can be either failed or out of service due to maintenance). Now, for $t$ taken as the present time, let $\tau$ be the time of the next arriving initiating event in the initiating event process $\mathbf{T}$. It follows that

$$\tau = T_{Q_t + 1} > t,\tag{30}$$

with $Q_t$ being the number of initiating events arriving in the interval $[0, t]$. Now consider two cases: (1) $A_t^{-1}(G_f)$ is $\mathcal{F}_0$-measurable, and (2) $A_t^{-1}(G_f)$ is *not* $\mathcal{F}_0$-measurable.

<sup>20</sup> Unknown-unknowns also bias predictive models of initiating events.

<sup>21</sup> $E[A_t \mid \mathcal{F}_t]$ represents the equivalence class of random variables satisfying the definition of conditional expectation on the probability space $(\Omega, \mathcal{F}, P)$.

#### **4.1 Case 1:** $A_t^{-1}(G_f) \in \mathcal{F}_0$

By definition, $G_f$ is the collection of all states in $S$ where protection is unavailable. Thus, any failure mode for protection must be reflected as a state $s \in G_f \subset S$. When $A_t^{-1}(G_f) \in \mathcal{F}_0$, it follows that every possible failure mode must be known at time $t = 0$, since the pre-image of $G_f$ through the random variable $A_t$ appears in $\mathcal{F}_0$. By definition of the filtration $\{\mathcal{F}_t\}_{t \ge 0}$, $\mathcal{F}_0 \subset \mathcal{F}_t$ for all $t > 0$ implies that $A_t^{-1}(G_f) \in \mathcal{F}_t$ for all $t \ge 0$. It now follows that for any $u > 0$, $A_{t+u}^{-1}(G_f) \in \mathcal{F}_t$ and, since $\tau$ is an $\mathcal{F}_t$ stopping time with $\tau > t$, $A_\tau^{-1}(G_f) \in \mathcal{F}_t$. In other words, when all possible states where protection is unavailable (including protection failure modes) are understood at time $t = 0$, then

$$\begin{split}P(A\_{\mathfrak{r}}=\mathbf{0})&=\mathbf{1}-E\left[A\_{\mathfrak{r}}\right]\\&=\mathbf{1}-E\left[E[A\_{\mathfrak{r}}|\mathcal{F}\_{\mathfrak{t}}]\right] \end{split} \tag{31}$$

gives the likelihood that the next arriving initiating event will result in an accident.

#### **4.2 Case 2:** $A_t^{-1}(G_f) \notin \mathcal{F}_0$

Suppose now that there exist protection failure modes that are unknown at time $t = 0$. This implies that $A_t^{-1}(G_f) \notin \mathcal{F}_0$. When $A_t^{-1}(G_f)$ is not $\mathcal{F}_0$-measurable, there can be no guarantee that $A_t^{-1}(G_f)$ will be $\mathcal{F}_t$-measurable for any $0 < t < \infty$. That is, for any $u > 0$, we cannot ensure that $A_{t+u}$ is $\mathcal{F}_t$-measurable. And, because $\tau > 0$, we *cannot ensure* that $A_\tau$ is $\mathcal{F}_t$-measurable even though $\tau$ is an $\mathcal{F}_t$ stopping time. In fact, there will always be some $t > 0$ where $A_\tau \notin \mathcal{F}_t$, which implies that $E[A_\tau \mid \mathcal{F}_t]$ is not well-defined. From the failure of $E[A_\tau \mid \mathcal{F}_t]$ to satisfy the definition of a random variable on $\Omega$, it can be concluded that when there are undiscovered protection failure modes, Eq. (31) cannot hold.

#### **4.3 The practical implications of undiscovered protection failure modes**

It is reasonable at this juncture to consider the extent to which protection failure modes not identified in $\mathcal{F}_0$ are problematic. Let $\tau_d$ be the time of first discovery of a heretofore undiscovered protection failure mode. Some observations that can be normatively understood are:




If there is sufficient confidence in a non-clairvoyant belief that all protection failure modes have been discovered, then all information impacting protection design will be found in the tail $\sigma$-algebra $\mathcal{T}$, where

$$\mathcal{T} \triangleq \bigcap\_{t \geq 0} \sigma(\{A\_u; u \geq t\}).\tag{32}$$

By definition, $\mathcal{T} \subset \mathcal{F}$, and it characterizes design information in the *remote future* of $\mathbf{A}$. In the remote future of protections, there can be no undiscovered failure modes.

Typically, PQ methodologies (e.g., PRA, QRA, PSA) implicitly rely on the assumption that events associated with protections are T-measurable, because this assumption guarantees the existence of

$$A = \lim_{t \to \infty} A_t \quad \text{and} \quad \overline{A} = \lim_{t \to \infty} \frac{1}{t} \int_0^t A_u(\omega)\, du, \quad \omega \in \Omega. \tag{33}$$

The expected value $E[A]$ and its statistical estimator $\overline{A}$ play essential roles in PRA, QRA and PSA, and they are only practically accessible under the assumption that all available information is in the tail $\sigma$-algebra $\mathcal{T}$. The estimator $\overline{A}$, when computed with data collected other than in $\mathcal{T}$, will overestimate the true value of $E[A]$, a dangerously optimistic bias.
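For concreteness, the time-average estimator $\overline{A}$ of Eq. (33) can be sketched from an outage log. The intervals below are hypothetical; the point is only that a finite-horizon average reflects nothing about failure modes not yet present in the data.

```python
# Time-average availability (1/t) * integral of A_u over [0, t], as in
# Eq. (33), computed from a log of outage intervals where A_u = 0.
# Intervals are assumed disjoint; the log itself is hypothetical.

def average_availability(t, outages):
    """t: observation horizon; outages: list of (start, end) intervals
    with A_u = 0, assumed disjoint."""
    downtime = sum(min(end, t) - max(start, 0.0)
                   for start, end in outages if start < t)
    return (t - downtime) / t

# 100 time units observed, two outages totaling 5 units of downtime.
print(average_availability(100.0, [(10.0, 12.0), (50.0, 53.0)]))  # -> 0.95
```

A longer horizon that later reveals an undiscovered failure mode would lengthen the outage log and pull this average down, which is exactly why a value computed outside the tail $\sigma$-algebra can be optimistic.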

#### **5. The hierarchy of modeling assumptions supporting PRA**

PRA appears, today, in regulatory language, NRC directives and even in federal legislation. PRA methodology relies on very strong modeling assumptions that are rarely (if ever) explicitly qualified in practice. Owing to the wide acceptance and application of PRA in the civilian nuclear industry, it is useful to identify and explain the assumptions underlying this popular risk analysis methodology. These modeling assumptions are best explained hierarchically so as to reveal a sequence of increasingly strong and necessary conditions leading to the computation of CDF, the central risk metric derived from PRA. These assumptions are difficult to justify in practice, and failure to satisfy any of them leads to optimistically biased estimates for CDF.

Begin again with the state process $\mathbf{Z}$ characterizing the time-dependent behavior of a reactor's production, protections, and common surrounding environment, defined on the filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \ge 0}, P)$, where the state variables are mapped onto the measurable state space $(S, \mathcal{B}(S))$. The state space $S$ is partitioned according to **Figure 5** in Section 3.

We now identify seven modeling assumptions that must be enforced in order to calibrate CDF using historical data.

**Assumption 1** There are no unknown-unknowns.

**Assumption 2** There are no absorbing states.

**Assumption 3** CDF is well-defined.

**Assumption 4** Arriving initiating events see the time average of protection availability.

**Assumption 5** Arriving initiating events form a Poisson process.

**Assumption 6** The protection availability process is stationary and ergodic.

**Assumption 7** Protection unavailability is independent of all initiating events.

Assumptions 1–7 are numbered such that each implicitly enforces all lower-numbered assumptions. Assumptions 6 and 7 are required in order to calibrate CDF with historical data. The consequences of these cumulative assumptions are examined one at a time and in order.

#### **5.1 Cumulative assumption: there are no unknown-unknowns**

As discussed in Section 4, assuming that there are no as yet undiscovered states requires that

$$Z\_t^{-1}(B) \in \mathcal{F}\_0,\tag{34}$$

for $B \in \mathcal{B}(S)$ and $t \ge 0$. Recall that, by definition of the filtration $\{\mathcal{F}_t\}_{t \ge 0}$, $\mathcal{F}_t$-measurability is also ensured. This modeling assumption cannot be verified so long as a reactor is not retired. Obviously, with the random time $\tau : \Omega \to (0, \infty)$ taken as the time of discovery of the last heretofore undiscovered state, $\tau$ is not $\mathcal{F}_t$-measurable for any $t < \infty$, which implies that there cannot be an observable condition confirming that there are no more undiscovered states. Clearly, the no unknown-unknowns assumption reflects a (believed) near-clairvoyant understanding of the technologies being modeled.

#### **5.2 Cumulative assumption: there are no absorbing states**

The absence of absorbing states implies that either all $s \in S$ are transient, or there exists at least one non-singular collection of persistent states $B_\alpha \subset S$, $\alpha \in \mathcal{A}$, such that

$$P\left(\lim_{t \to \infty} Z_t \in B_\alpha\right) > 0,\tag{35}$$

and when a state trajectory enters $B_\alpha$ it never exits, visiting each state $s \in B_\alpha$ infinitely often. Further,

$$P\left(\lim_{t \to \infty} Z_t \in \bigcup_{\alpha \in \mathcal{A}} B_\alpha\right) = 1,\tag{36}$$

which implies that each state trajectory must eventually enter a non-singular collection of persistent states that it visits infinitely often.

The PRA methodology implicitly rejects the possibility that all states in $S$ are transient, and therefore accepts that all state trajectories visit some collection of states infinitely often. Further, PRA assumes that some of the states visited infinitely often are accidents (else there would be no reason to perform PRA). It follows that, accepting the assumption that reactors are never retired (either through an accident or cessation of operations), there will be accidents *ad infinitum*.

#### **5.3 Cumulative assumption: CDF is well defined**

Accepting Assumptions 1 and 2, it is feasible to investigate CDF. Here, as before, $Q_t^C$ counts the number of core damage events a reactor suffers in the interval $[0, t]$. CDF is a frequency and thus is the limiting number of core damage events per unit time. That is,

**Definition 5.1** (Core Damage Frequency)**.**

$$\text{CDF} = \lim\_{t \to \infty} \frac{1}{t} \mathbf{Q}\_t^C \tag{37}$$

whenever convergence to a constant occurs a.s.

In practice, CDF is treated as a numerical constant, estimates of which are used to gauge the risk of suffering a core damage event. However, convergence of Eq. (37) is not guaranteed. Further, even when the limit in Eq. (37) exists, there is no guarantee that it is a constant; convergence to a random variable is a completely plausible circumstance.

The existence of CDF is determined in part by the dynamics of initiating event arrivals. The limiting arrival rate $\lambda$ of initiating events is defined as follows.

**Definition 5.2** (Initiating Event Frequency)**.**

$$
\lambda = \lim\_{t \to \infty} \frac{1}{t} Q\_t \tag{38}
$$

whenever convergence occurs a.s. In general, $\lambda$ can be a random variable, and convergence to a constant occurs only when $\lambda = E[\lambda]$.

Recall that $\mathbf{Q}^C$ and $\mathbf{Q}$ are right-continuous. It follows directly from Definitions 5.1 and 5.2 that when $\lambda$ is a constant,

$$\text{CDF} \stackrel{\text{a.s.}}{=} \lim\_{t \to \infty} \frac{Q\_t^C}{Q\_t} \frac{Q\_t}{t} = p\lambda \tag{39}$$

where the *split fraction* is

$$p \triangleq \lim\_{t \to \infty} \frac{Q\_t^C}{Q\_t}. \tag{40}$$

Thus, CDF is the product of the initiating event frequency and the proportion of initiating events that escalate to core damage. Typically, estimating $\lambda$ is straightforward. Estimating $p$ is more challenging. Note that while $p$ takes values in the interval [0, 1], it is defined as the limiting value of a ratio of random variables. However, under certain circumstances $p$ can be interpreted as the probability that an arriving initiating event will escalate to core damage. One such circumstance occurs when initiating events form an ordinary Poisson process. Then, the well-known Poisson Arrivals See Time Averages (PASTA) result applies, and $p$ can be computed as the limiting unavailability of reactor protections (see [14]). Unfortunately, the assumptions needed to justify PASTA defy practical justification. Consequently, estimating $p$ is quite difficult in practice.

There are a variety of approaches for crafting estimators of $p$ that incorporate the joint histories of initiating event arrivals, protection maintenance activity, environmental conditions, etc. Monte Carlo methods, owing to their adaptability to complex engineering models, have gained acceptance and popularity for estimating $p$ and other statistics. These methods are not, however, a panacea, because they require characterizing probability laws on subordinate stochastic processes that must be mapped into the dynamics of protection availability in order to build useful estimators. Characterizing probability laws on stochastic processes is often impractical due to the intensive data support required for all but the most stylized processes.

In order to better appreciate the manner in which CDF jointly depends on the arrival of initiating events and the efficacy of reactor protections, we will appeal to the martingale characterization of stochastic point processes (see [15]). For practical considerations, we require that initiating events occur one at a time a.s.

Since the trajectories of $\mathbf{Q}$ are nonnegative, non-decreasing, and proceed in jumps of size one a.s., clearly $E[Q_{t+u} \mid \mathcal{F}_t] \ge Q_t$ for all $t, u \ge 0$. Thus, $\mathbf{Q}$ forms a sub-martingale on the filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \ge 0}, P)$. Appealing to standard results from the martingale calculus (see [16]), it now follows from the Doob-Meyer Decomposition Theorem that

$$Q\_t = M\_t + \Lambda\_t,\tag{41}$$

where the process $\mathbf{M} \triangleq \{M_t;\ t \ge 0\}$ forms an $\mathcal{F}_t$-martingale with compensator $\{\Lambda_t;\ t \ge 0\}$. Hence,

$$E\left[M_{t+u} - M_t \mid \mathcal{F}_t\right] = 0 \tag{42}$$

for all *t*, *u*≥0, and Λ*<sup>t</sup>* is increasing a.s. and F*t*–predictable, with

$$\Lambda_t = \int_{[0,t]} \lambda_u\, du.\tag{43}$$

Here, $\lambda_t$ is well-defined whenever, for all nonnegative $\mathcal{F}_t$-predictable processes $\{C_t\}_{t \ge 0}$,

$$E\left[\int_{[0,\infty)} C_t\, dQ_t\right] = E\left[\int_{[0,\infty)} C_t \lambda_t\, dt\right] \tag{44}$$

is satisfied. **M** is called the *initiating event martingale*.

When well-defined, $\lambda_t$ is a unique Radon-Nikodym derivative defined on the usual equivalence class, with the stochastic intensity process $\{\lambda_t;\ t \ge 0\}$ adapted to $\{\mathcal{F}_t\}_{t \ge 0}$ and predictable. Informally, $\lambda_t = E[dQ_t \mid \mathcal{F}_t]$ and can be understood as the propensity for an initiating event to arrive in the next instant of time, given the history of initiating events and reactor protections.

Now, consider the martingale transform $M_t^C$ of the protection unavailability $(1 - A_t)$ with respect to $M_t$ of Eq. (41), where
$$M_t^C \overset{\text{def}}{=} \int_{[0,t]} (1 - A_u)\, dM_u = \int_{[0,t]} (1 - A_u)\, dQ_u - \int_{[0,t]} (1 - A_u)\, d\Lambda_u.$$

**Proposition 1 (Core Damage Martingale).** $\mathbf{M}^C \triangleq \{M_t^C;\ t \ge 0\}$ is a martingale whenever the stochastic intensity process $\{\lambda_t\}_{t \ge 0}$ of arriving initiating events exists.

**Proof:** Since $A_t$ is $\mathcal{F}_t$-predictable and $0 \le A_t(\omega) \le 1$ for all $\omega \in \Omega$ and $t \ge 0$, it follows that $\{M_t^C\}_{t \ge 0}$ is also an $\mathcal{F}_t$-martingale (see [17]), and noting that $Q_t^C = \int_{[0,t]} (1 - A_u)\, dQ_u$ counts the number of core damage events in the interval $[0, t]$, we have that


$$\mathbf{M}\_t^C = \mathbf{Q}\_t^C - \int\_{[0,t]} (\mathbf{1} - A\_u) d\Lambda\_u \tag{45}$$

and, substituting from Eq. (43) gives

$$\mathbf{M}\_t^\mathbf{C} = \mathbf{Q}\_t^\mathbf{C} - \boldsymbol{\Lambda}\_t^\mathbf{C}.\tag{46}$$

We refer to **MC** as the *Core Damage Martingale* and its compensator is given by

$$
\Lambda\_t^C = \int\_{[0,t]} \lambda\_u^C du = \int\_{[0,t]} (\mathbf{1} - A\_u) \, \lambda\_u \, du. \tag{47}
$$

**Remark 8** Eq. (46) stands as the most general available expression characterizing the relationship among core damage events, the arrival of initiating events, and the efficacy of reactor protections. It is important to keep in mind that, for all $t \ge 0$, $M_t^C$, $Q_t^C$, $\lambda_t^C$, $A_t$, and (in particular) $\lambda_t$ are all random variables. Hence, Eq. (46) is nontrivial and must be examined in the context of stochastic integration.

Consider now, the following proposition:

**Proposition 2 (Existence of CDF)** If

$$\lim\_{t \to \infty} \frac{M\_t^C}{t} \stackrel{\text{a.s.}}{=} \mathbf{0},\tag{48}$$

then $\lambda^C$ exists (and is possibly a random variable) and for almost all (a.a.) $\omega \in \Omega$,

$$\lambda^C(\omega) = \lim_{t \to \infty} \frac{Q_t^C(\omega)}{t} = \lim_{t \to \infty} \frac{1}{t} \int_{[0,t]} (1 - A_u(\omega)) \lambda_u(\omega)\, du \tag{49}$$

where $0 < \lambda^C(\omega) < \infty$. That is,

$$\begin{aligned} &\lim\_{t\to\infty} \frac{M\_t^C}{t} \overset{\text{a.s.}}{=} 0 \quad \text{if and only if} \\ &\lim\_{t\to\infty} \frac{\mathbf{Q}\_t^C}{t} \overset{\text{a.s.}}{=} \lambda^C \quad \text{and} \\ &\lim\_{t\to\infty} \frac{1}{t} \int\_{[0,t]} (\mathbf{1} - A\_u) \lambda\_u du \overset{\text{a.s.}}{=} \lambda^C. \end{aligned} \tag{50}$$

**Proof:** Proposition 2 is an obvious consequence of Definition 5.1 and Eq. (46).

**Remark 9** Clearly, CDF exists only if

$$\frac{M\_t^C}{t} \stackrel{\text{a.s.}}{\longrightarrow} \mathbf{0} \text{ and } \boldsymbol{\lambda}^C \stackrel{\text{a.s.}}{=} E\left[\boldsymbol{\lambda}^C\right] < \infty. \tag{51}$$

Proposition 2 reveals the challenge in estimating CDF. In the absence of observed core damage events, predictive estimates of CDF must be formulated in terms of phenomena that can be observed. To this end, analysts must rely on observations of initiating event arrival times and reactor protection performance (principally in the form of maintenance records and failure data). These observations are, of course, insufficient to capture the joint dynamics of $\{(A_t, \lambda_t);\ t \ge 0\}$ needed to directly employ the strong law relationship of Proposition 2, where,

$$\lambda^{\mathcal{C}} \stackrel{\text{a.s.}}{=} \lim\_{t \to \infty} \frac{\Lambda\_t^{\mathcal{C}}}{t} = \lim\_{t \to \infty} \frac{1}{t} \int\_{[0,t]} (\mathbf{1} - A\_u) \lambda\_u du. \tag{52}$$

Monte Carlo methods do not escape the difficulty of computing $\lambda^C$. $\mathbf{A}$ and $\{\lambda_t;\ t \ge 0\}$ are not mutually independent (even when $\mathbf{Q}$ is Poisson with rate $\lambda$), and $\lambda_t$ is not directly observable. Since this dependence requires a Monte Carlo model to rely on an accurate characterization of the probability law on the joint stochastic process $\{(A_t, \lambda_t);\ t \ge 0\}$, it is clear that the data requirements to support accurate estimation of this probability law are beyond the practical reality of reactor unit operations records.

#### **5.4 Cumulative assumption: arriving initiating events see time averages**

Important insights regarding CDF are revealed by exploring the expectation of $\mathbf{M}^C$. In particular, we are interested in the consequences of the stochastic dependence between the state of system protections $\{A_t\}_{t \ge 0}$ and the arrival of initiating events $\{Q_t\}_{t \ge 0}$, and consequently $\{\lambda_t\}_{t \ge 0}$. Consider now:

**Proposition 3 (Moment Convergence)** Suppose that

$$\lim_{t \to \infty} \frac{M_t^C}{t} \stackrel{\text{a.s.}}{=} 0. \tag{53}$$

Then,

$$\lim\_{t \to \infty} E\left[\frac{\mathbf{M}\_t^C}{t}\right] = 0 \quad \text{if and only if}$$

$$\lim\_{t \to \infty} E\left[\frac{\mathbf{Q}\_t^C}{t}\right] = E\left[\boldsymbol{\lambda}^C\right] \quad \text{and} \tag{54}$$

$$\lim\_{t \to \infty} E\left[\frac{1}{t} \int\_{[0,t]} (\mathbf{1} - A\_u) \boldsymbol{\lambda}\_u du\right] = E\left[\boldsymbol{\lambda}^C\right].$$

And, with the additional condition that $\lambda^C = E\left[\lambda^C\right] < \infty$,

$$\text{CDF} = \lim\_{t \to \infty} \frac{1}{t} \int\_{[0,t]} E[(1 - A\_u)\lambda\_u] du. \tag{55}$$

**Proof:** Recall that, under suitable integrability conditions, almost sure convergence implies convergence in expectation (see [18]). Hence, Eq. (54) follows directly from Proposition 2. Nonnegativity of the integrand in Eq. (54) allows a routine application of Tonelli's Theorem to exchange the order of expectation and integration to show Eq. (55) (see [19]). Finally, $\lambda^C = E\left[\lambda^C\right] < \infty$ implies that

$$\frac{Q\_t^C}{t} \stackrel{\text{a.s.}}{\longrightarrow} E\left[\lambda^C\right] < \infty,\tag{56}$$

thus, ensuring the existence of CDF.

Recalling the definition of covariance, it immediately follows that:


**Corollary 1** When $\lambda^C = E\left[\lambda^C\right] < \infty$, then

$$\text{CDF} \stackrel{\text{a.s.}}{=} \lim_{t \to \infty} \frac{1}{t} \int_{[0,t]} cov\left((1 - A_u), \lambda_u\right) du + \lim_{t \to \infty} \frac{1}{t} \int_{[0,t]} E[(1 - A_u)]\, E[\lambda_u]\, du. \tag{57}$$

**Proof:** Simply apply the definitions of CDF and covariance.

**Corollary 2** When $\lambda^C = E\left[\lambda^C\right] < \infty$,

$$\text{CDF} \stackrel{\text{a.s.}}{=} \lim\_{t \to \infty} \frac{1}{t} \int\_{[0,t]} E[(1 - A\_u)] E[\lambda\_u] du \tag{58}$$

if and only if $E[\lambda_t \mid \mathcal{F}_t] = E[\lambda_t]$ for all $t \ge 0$.

**Proof:** It need only be shown that $cov\left((1 - A_t), \lambda_t\right) = 0$ if and only if $E[\lambda_t \mid \mathcal{F}_t] = E[\lambda_t]$. First, assume that $E[\lambda_t \mid \mathcal{F}_t] = E[\lambda_t]$ for all $t \ge 0$. Note that

$$\begin{aligned} E[(1 - A_t)\lambda_t] &= E[E[(1 - A_t)\lambda_t \mid \mathcal{F}_t]] \\ &= E[(1 - A_t)\, E[\lambda_t \mid \mathcal{F}_t]] = E[\lambda_t]\, E[E[(1 - A_t) \mid \mathcal{F}_t]] = E[\lambda_t]\, E[(1 - A_t)]. \end{aligned} \tag{59}$$

Thus, it follows from the definition of covariance that $cov\left((1 - A_t), \lambda_t\right) = 0$. Conversely, with $cov\left((1 - A_t), \lambda_t\right) = 0$ it follows trivially that

$$E[(\mathbf{1} - \mathbf{A}\_t)\lambda\_t] = E[\lambda\_t]E[(\mathbf{1} - \mathbf{A}\_t)].\tag{60}$$
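A small numerical illustration of the covariance term in Eq. (57): if a rare "stress" regime simultaneously raises the initiating-event intensity and the chance that protections are down, the product-of-averages term of Corollary 2 badly underestimates $E[(1 - A_u)\lambda_u]$. The regime probabilities and rates below are invented purely for illustration.

```python
# Toy joint law for (A_u, lambda_u): a rare "stress" regime raises both the
# initiating-event intensity and the chance that protections are down.
# All numbers are invented for illustration.
regimes = [
    # (regime probability, lambda_u, P(A_u = 0) given the regime)
    (0.1, 2.0, 0.50),   # stressed
    (0.9, 0.2, 0.01),   # calm
]

e_joint = sum(p * lam * p_down for p, lam, p_down in regimes)   # E[(1-A)lam]
e_unavail = sum(p * p_down for p, lam, p_down in regimes)       # E[1-A]
e_lam = sum(p * lam for p, lam, p_down in regimes)              # E[lam]

print(f"E[(1-A)lam]   = {e_joint:.4f}")            # true rate term in Eq. (55)
print(f"E[1-A]*E[lam] = {e_unavail * e_lam:.4f}")  # product of averages
print(f"covariance    = {e_joint - e_unavail * e_lam:.4f}")
```

With these numbers, ignoring the covariance term understates the core damage rate by more than a factor of four, which is exactly the point of Corollary 1.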

**Corollary 3.** When $E[\lambda_t \mid \mathcal{F}_t] = E[\lambda_t] = \lambda < \infty$ and $A_t \xrightarrow{\text{a.s.}} A$, then

$$\text{CDF} \stackrel{\text{a.s.}}{=} \lambda \lim\_{t \to \infty} \frac{1}{t} \int\_{\left[0, t\right]} (\mathbf{1} - A\_u) du. \tag{61}$$

**Proof:** It follows from Eq. (58) that when $E[\lambda_t \mid \mathcal{F}_t] = E[\lambda_t] = \lambda < \infty$,

$$\text{CDF} \stackrel{\text{a.s.}}{=} \lambda \lim\_{t \to \infty} \frac{1}{t} \int\_{[0,t]} E[(1 - A\_u)] du,\tag{62}$$

and when $A_t \xrightarrow{\text{a.s.}} A$,

$$\lambda \lim\_{t \to \infty} \frac{1}{t} \int\_{\left[0,t\right]} (\mathbf{1} - A\_u) du \stackrel{\text{a.s.}}{=} \lambda \lim\_{t \to \infty} \frac{1}{t} \int\_{\left[0,t\right]} E[(\mathbf{1} - A\_u)] du,\tag{63}$$

and Eq. (61) follows.

**Remark 10** It is important to appreciate that Corollary 3 *does not* imply that the state of reactor protections $\mathbf{A}$ is independent of initiating event arrivals $\mathbf{Q}$. The condition $E[\lambda_t \mid \mathcal{F}_t] = \lambda$ is simply congruent with the lack of anticipation property of $A_t$.<sup>22</sup> The conditions establishing Corollary 3 allow for the possibility that initiating events can cause a failure of system protections (in addition to the possibility that protections were already failed immediately prior to arrival).

<sup>22</sup> Recall from Section 2 that the state of system protections $A_t$ at time $t$ does not influence the arrival times of future initiating events.

Recall Eq. (38). Perhaps the most important consequence of assuming that arriving initiating events see time averages is:

**Corollary 4.** When $E[\lambda_t \mid \mathcal{F}_t] = \lambda$, then

$$p \triangleq \lim_{t \to \infty} \frac{Q_t^C}{Q_t} = E[A]. \tag{64}$$

**Proof:** Substitute Eq. (61) into Eq. (39).

#### **5.5 Cumulative assumption: initiating events form a Poisson process**

Eq. (61) admits, as a special case, the condition that the initiating event counting process $\mathbf{Q}$ forms an ordinary Poisson process of rate $\lambda$. By the Watanabe characterization, it is well known that the stochastic intensity of a counting process is a fixed constant if and only if that counting process forms an ordinary Poisson process. Thus, when $\mathbf{Q}$ forms an ordinary Poisson process of rate $\lambda$, its intensity $\lambda_t = \lambda$ for all $t \ge 0$, where $\lambda$ is a positive-valued constant. Returning to Eq. (47), it follows that the constant $\lambda$ can be moved outside the integral, giving

$$
\Lambda\_t^C = \int\_{[0,t]} (\mathbf{1} - A\_u) \lambda\_u du = \lambda \int\_{[0,t]} (\mathbf{1} - A\_u) du. \tag{65}
$$

With the elements of the compensator of the core damage martingale a.s. constant for all $t \ge 0$ as given by Eq. (65), a special case of Corollary 3 is created. The condition $E[\lambda_t \mid \mathcal{F}_t] = \lambda$ given in Corollary 3 is more general (and more difficult to calibrate with historical data) than under the Poisson initiating event assumption (see [20]).

**Remark 11** The Poisson initiating event assumption does *not* imply independence between $\mathbf{A}$ and $\mathbf{Q}$; on the contrary, $\mathbf{A}$ and $\mathbf{Q}$ remain dependent since there is the possibility that arriving initiating events will disrupt protections and lead to an accident. Wolff gives a full development of *Poisson arrivals see time averages* (see [14]).

An obvious benefit gained when **Q** is assumed Poisson is that statistical calibration of *λ* becomes accessible using historical data. Since all Poisson processes are stationary and ergodic, the parameter *λ* is easily estimated since

$$\lambda = \lim_{t \to \infty} \frac{1}{t} Q_t(\omega) \tag{66}$$

for a.a. *ω*∈ Ω.
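Under the Poisson assumption, Eq. (66) is straightforward to exercise in simulation. The sketch below (with an arbitrary toy rate and horizon) recovers $\lambda$ from a single simulated path:

```python
import random

def poisson_rate_estimate(lam=0.7, T=50_000.0, seed=1):
    """Estimate lambda by Q_T / T along a single simulated Poisson path,
    as in Eq. (66).  Rate and horizon are arbitrary toy values."""
    rng = random.Random(seed)
    t, n = 0.0, 0
    while True:
        t += rng.expovariate(lam)   # i.i.d. exponential interarrival times
        if t > T:
            break
        n += 1
    return n / T

print(poisson_rate_estimate())      # close to the true rate 0.7
```

The estimator converges at the usual $O(1/\sqrt{T})$ rate, which is what makes $\lambda$ (unlike the joint law of protections and intensity) accessible from records.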

Assuming that $\mathbf{Q}$ is Poisson does not change Eq. (61). But all Poisson processes implicitly carry the very strong independent increments requirement, meaning here that the numbers of initiating events appearing in any collection of disjoint time intervals are mutually independent. In practice, the independent increments condition is extremely difficult to justify.

#### **5.6 Cumulative assumption: the protection process is stationary and ergodic**

Returning to Corollary 3 with $A = \lim_{t \to \infty} A_t$, it is clear that for a.a. $\omega \in \Omega$ the split fraction $p$ is given by


$$p = E[A] = \lim_{t \to \infty} \frac{1}{t} \int_{[0,t]} (1 - A_u(\omega))\, du. \tag{67}$$

Calibrating *p* with historical data, as is preferred with PRA, is generally inaccessible in the limit. However, assuming that the protection availability process **A** is both stationary and ergodic guarantees that A is measurable with respect to its tail *σ*-algebra for all *t*≥0, and thus

$$p = E[A] = E[A\_t] = P(A\_t = \mathbf{0}),\tag{68}$$

for any fixed *t*>0. Clearly, the best available estimate of the split fraction *p* along the (only) historically available *ω* is given by

$$p \approx \frac{1}{t} \int_{[0,t]} (1 - A_u(\omega))\, du. \tag{69}$$

Typically, practitioners rationalize stationarity and ergodicity of $\mathbf{A}$ by claiming knowledge of a *long history* of protection operations augmented with an extensive observed history of initiating events and equipment maintenance activity. Based on that experience, they will designate where to establish time $t = 0$ in Eq. (69). However, the approximation Eq. (69) suffers a bias that is a consequence of well-designed protections: the speed of convergence of Eq. (67) depends on observing initiating events that will cause protections to fail. Obviously, well-designed reactor protections will experience few (if any) such events. This implies that even under the assumption that $\mathbf{A}$ is stationary and ergodic, the approximation in Eq. (69) will be optimistically biased for all $t < \infty$.
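The time-average estimator of Eq. (69) can be sketched for a toy alternating-renewal availability process with exponential up and down durations (the MTBF/MTTR figures are illustrative; the chapter's model is more general):

```python
import random

def unavailability_estimate(mtbf=500.0, mttr=5.0, T=200_000.0, seed=2):
    """Time-average unavailability (1/t) * integral of (1 - A_u) du, as in
    Eq. (69), for an alternating-renewal protection process with exponential
    up (mean `mtbf`) and down (mean `mttr`) durations.  The stationary value
    is mttr / (mtbf + mttr); all figures are illustrative."""
    rng = random.Random(seed)
    t, down_total = 0.0, 0.0
    while t < T:
        t += rng.expovariate(1.0 / mtbf)       # up period
        if t >= T:
            break
        d = rng.expovariate(1.0 / mttr)        # down period
        down_total += min(d, T - t)
        t += d
    return down_total / T

est = unavailability_estimate()
print(est)    # near 5 / 505, i.e., about 0.0099
```

Because outages are rare, convergence is slow even in this idealized setting; field data, unlike a simulation, cannot simply extend the horizon, which compounds the bias just described.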

#### **5.7 Cumulative assumption: protection unavailability is independent of all initiating events**

The optimistic bias of Eq. (69) disappears under the additional assumption that $\mathbf{A}$ and $\mathbf{Q}$ are independent stochastic processes. Under this circumstance, since both the initiating event process $\mathbf{T}$ and the protection availability process $\mathbf{A}$ are adapted to the filtration $\{\mathcal{F}_t\}_{t \ge 0}$ as described in Section 2, it follows that

$$E\left[A\_{T\_n}\right] = E\left[A\_t\right] \tag{70}$$

for all $t \ge 0$ and $n \in \mathbb{N}$. Independence of $\mathbf{A}$ and $\mathbf{Q}$ is congruent with the belief that reactor protections are so robust as to never fail due to the impact of an initiating event. Or, equivalently, an initiating event will induce an accident only if it finds protections already out of service upon arrival.

Of course, the model assumption that **A** and **Q** are independent stochastic processes is very strong and hardly justifiable in practice. That said, PRA methodology relies directly on the cumulative assumptions leading to Eq. (69). The assignment of the split fraction value *p* appeals to *ad hoc* analysis (e.g., penetration factor) where little or no observed historical calibration data can be found.
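The "arrivals see time averages" property behind Eq. (70) can be checked numerically when $\mathbf{A}$ and $\mathbf{Q}$ are generated independently, as this subsection assumes; all parameters below are toy values:

```python
import random

def pasta_demo(lam=1.0, mtbf=50.0, mttr=2.0, T=100_000.0, seed=3):
    """Fraction of Poisson arrivals finding an *independent* alternating
    up/down process unavailable; by PASTA this matches the time-average
    unavailability mttr / (mtbf + mttr).  All parameters are toy values."""
    rng = random.Random(seed)
    # Down intervals of the availability process on [0, T].
    intervals, t = [], 0.0
    while t < T:
        t += rng.expovariate(1.0 / mtbf)       # up period
        start = t
        t += rng.expovariate(1.0 / mttr)       # down period
        if start < T:
            intervals.append((start, min(t, T)))
    # Independent Poisson arrival times on [0, T].
    arrivals, s = [], 0.0
    while True:
        s += rng.expovariate(lam)
        if s > T:
            break
        arrivals.append(s)
    # Count arrivals landing in a down interval (both lists are sorted).
    hits, i = 0, 0
    for a in arrivals:
        while i < len(intervals) and intervals[i][1] <= a:
            i += 1
        if i < len(intervals) and intervals[i][0] <= a:
            hits += 1
    return hits / len(arrivals)

print(pasta_demo())   # near 2 / 52, i.e., about 0.0385
```

If arrivals could preferentially trigger outages (dependence), this equality would fail, which is precisely why the independence assumption is so strong.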

#### **6. Summary: reasonableness assumptions and the consequences of making them**

Numbers such as CDF or Large Early Release Frequency (LERF) are intended to inform engineers, regulators, and other citizens on the quantitative level of risk posed by a commercial nuclear power plant. Risk in this context is measured against the Atomic Energy Act of 1954 as amended (AEA) requirement for "adequate protection" of the health and safety of the public.<sup>23</sup> Protection contemplated in the AEA is to help prevent harm to the public from uncontrolled release of radioactive material. Such harms can come from accident scenarios involving loss of reactivity control and loss of core cooling following reactor shutdown. Protective systems are put in place under regulation to minimize the likelihood of such scenarios.

Sections 3–5 review counting of consequential outcomes in hazardous processes such as commercial nuclear power plant technologies as they relate to "unknown-unknowns" (Section 4) and the assumptions required to obtain a numerical result where data are sparse (Section 5), particularly with respect to CDF. In the review, it is shown that PQ on robust protective systems, where little or no data are available for calibration and validation, will produce optimistic estimates for risk of protection breakdowns leading to disastrous consequences. The progressive introduction of *assumptions required to obtain quantitative levels of risk* and their relationship to nuclear power plant processes in current popular methods based on PQ includes: (a) there are no unknown-unknowns, (b) there are no absorbing states, (c) CDF is well-defined, (d) arriving initiating events see the time average of protection availability, (e) arriving initiating events form a Poisson process, (f) the protection availability process is stationary and ergodic, and (g) protection unavailability is independent of all initiating events. The implication of such assumptions, when adopted, is that the number obtained will underestimate the frequency of an accident by an unknowable amount.

Commercial nuclear power, as currently regulated by Western standards, is arguably the safest energy source (see again [7] on "safe" and "safety") of all energy technologies currently available. Regulatory standards, regulatory inspection and enforcement, management oversight, and engineering practice implement available engineering solutions against "unknown unknowns"; these include safety margins, defense in depth, root cause analysis, and corrective action to mitigate the consequences of "unknown unknown" events as they actually appear or are imagined to appear. Investors, regulators, design engineers, operators, and the public can best review the efficacy of protection using PQ up to the point of risk quantification. Such fundamental engineering processes as design, testing, maintenance, operation, and design revision are supported by PQ, which produces and holds out hope for categorizing breakdown scenarios, for example, by the level of support they afford. Risk in modern fission reactor technologies is best managed when the acquisition of knowledge from observations during ongoing operations is applied to protection going forward. This implies a strong organizational commitment to efficacious root cause analysis and corrective action throughout an asset's lifetime.

<sup>23</sup> The adequate protection language appears for example in Sections 182 and 189 of the Atomic Energy Act.


### **Author details**

Ernie Kee1 \* and Martin Wortman<sup>2</sup>

1 University of Illinois at Urbana-Champaign, Urbana, Illinois, USA

2 HazTechRisk.org, Galveston, Texas, USA

\*Address all correspondence to: erniekee@illinois.edu

© 2023 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### **References**

[1] Kemeny JG. The need for change, the legacy of TMI: Report of the President's Commission on the Accident at Three Mile Island, John G. Kemeny, chairman. Washington, D.C.: s.n.: for sale by the Supt. of Docs., U.S. Govt. Print. Off; 1979

[2] Rogovin M. Three Mile Island: A Report to the Commissioners and to the Public. Vol. 1250. Nuclear Regulatory Commission, Special Inquiry Group; 1980

[3] Tobias A. Decay heat. Progress in Nuclear Energy. 1980;**5**:1-93

[4] Solberg Ø, Njå O. Reflections on the ontological status of risk. Journal of Risk Research. 2012;**15**(9):1201-1215

[5] Cardano G, Wilks S. The Book on Games of Chance: "Liber de Ludo Aleae", Translated by Sydney Henry Gould. New York, NY, USA: Holt, Rinehart & Winston; 1961

[6] Bernoulli J. Ars Conjectandi, Opus Posthumum. Accedit Tractatus de Seriebus Infinitis, et Epistola Gallice scripta de Ludo Pilae Reticularis. Basileae: Impensis Thurnisiorum, Fratrum; 1713. pp. 107-286

[7] Hansson SO. Safety is an inherently inconsistent concept. Safety Science. 2012;**50**(7):1522-1527

[8] Doorn N, Hansson S. Should probabilistic design replace safety factors? Philosophy & Technology. 2011; **24**(2):151-168

[9] Möller N, Hansson SO, Peterson M. Safety is more than the antonym of risk. Journal of Applied Philosophy. 2006; **23**(4):419-432

[10] Hansson SO. The epistemology of technological risk. Techné. 2005;**9**(2): 68-80

[11] McTaggart JE. The unreality of time. Mind. 1908;**17**(68):457-474

[12] Kolmogorov AN. Foundations of the Theory of Probability: Second English Edition. New York, NY, USA: Courier Dover Publications; 2018

[13] Øksendal B. Stochastic Differential Equations: An Introduction with Applications. Berlin, Germany: Springer Science & Business Media; 2013

[14] Wolff RW. Poisson arrivals see time averages. Operations Research. 1982; **30**(2):223-231

[15] Bremaud P. Point Processes and Queues: Martingale Dynamics. New York: Springer-Verlag; 1981

[16] Çınlar E. Probability and Stochastics. New York, NY, USA: Springer; 2011

[17] Rogers LCG, Williams D. Diffusions, Markov Processes and Martingales. 9th ed. Cambridge, UK: Cambridge University Press; 2000

[18] Dudley RM. Real Analysis and Probability. Boca Raton, FL, USA: CRC Press; 2018

[19] Folland GB. Real Analysis: Modern Techniques and their Applications. Vol. 40. New York, NY, USA: John Wiley & Sons; 1999

[20] Melamed B, Whitt W. On arrivals that see time averages: A martingale approach. Journal of Applied Probability. 1990;**27**(2):376-384

#### **Chapter 6**

## Possible Applications of Modern Aqueous Homogeneous Reactors

*Ahmed Shaker*

#### **Abstract**

This chapter describes the potential of the aqueous homogeneous reactor, briefing readers on the physics and history of the subject whilst providing both current and possible future applications for this reactor technology. These reactors were some of the first nuclear reactors ever constructed, and they provided valuable information on critical mass and other nuclear physical properties of fissile solutions. The compact nature of these reactors, combined with their inherent safety characteristics, has made them attractive for the generation of medical radioisotopes and neutrons for experimentation. However, material corrosion issues and the advanced development of solid-fuelled light water reactors curtailed much interest in the technology in the 1950s. Although the operating temperatures of this type of reactor are usually low, even such low-temperature heat is useful in process and industry; such a reactor can be used for environmentally friendly district heating or the supply of process heat in industry, and could even be used to produce hydrogen. With modern advances in physics and chemistry, and disruptions in conventional energy sources, such reactors in their modern form may serve an important role: supplying various energy demands that could be met by nuclear power but may not require more advanced and costly reactor technologies.

**Keywords:** nuclear engineering, nuclear power, nuclear energy, process heat, district heating, nuclear heating, nuclear desalination, desalination, hydrogen production, clean hydrogen

#### **1. Introduction**

As we face an increasing issue of energy scarcity in light of a global energy crisis [1], nuclear energy is once again gaining attention as a way to reduce our reliance on fossil fuels and the often questionable sources that supply them [2]. Interest in nuclear energy is some of the highest it has ever been, and has seen many nations, such as Bangladesh and Egypt, build nuclear power plants for the first time, or others, like Japan, return to the construction of new reactors after a long hiatus [3].

However, research and development in nuclear energy can be said to be progressing at a slower pace relative to other technologies. Many kinds of fission reactors have been developed since the first was assembled under university football bleachers in 1942, but only a few types are in commercial use today; most using solid fuel

with some form of water as coolant and moderator. These solid-fueled reactors have performed well in their service as power generators, proving themselves through tens of thousands of reactor-hours of operational experience, but are often large, complex machines that have become increasingly difficult to finance and construct and are rather inflexible in their siting and energy output [4].

It is important to note that not every application of nuclear energy should, or even can, be handled by a select few reactor types, let alone a single one. As the uses of nuclear energy expand, the flexibility in design of nuclear reactors should also increase. For example, high-temperature reactors are being developed to replace coal-fired boilers *in situ* [5], and fast-spectrum reactors are being explored as a way to burn up plutonium and minor actinides as part of closed-cycle nuclear waste management programs [6].

Amongst the flurry of hypothetical fission reactor designs that were being considered for development in the backdrop of the Manhattan Project, there is one that holds great promise for modern applications but has been side-lined and underdeveloped. In 1944, the brilliant Enrico Fermi decided to construct the first of a set of reactors at Los Alamos that would use liquid fuel mixed homogeneously with water as the moderator (wartime secrecy stipulated that they be called "water boilers") [7]. It was a little spherical reactor, contained in a stainless steel vessel no more than a foot across, filled with a solution of uranyl sulfate and light water, reflected by beryllium and graphite, and called LOPO (for low power) as it produced almost no energy in operation.

LOPO reached criticality in May of 1944 and was instrumental in determining the critical mass of uranium solutions. As experiments with LOPO concluded, it was disassembled later that year to make way for a higher-power solution reactor: HYPO [7]. HYPO was a larger reactor with cooling provisions that allowed it to operate at a higher power of 5.5 kW and provide a stronger neutron flux for experiments. HYPO was brought critical in December of 1944 and would be the key for the neutronics measurements needed to design the nuclear fission assemblies for the first atomic weapons.

HYPO itself would be, in turn, upgraded to SUPO (**Figure 1**) in March of 1951 with improved cooling and an increased neutron flux in sustained operations up to 35 kW of power [7]. SUPO would go on to operate until 1974, and the whole water boiler program would provide critical information that would be useful for the construction of other aqueous homogeneous reactors to come.

After the success at Los Alamos, engineers at Oak Ridge set out in 1952 to construct a far more powerful solution reactor under the auspices of the Homogeneous Reactor Experiment (HRE) [8]. This reactor, the HRE-1 (**Figure 2**), was much larger than any previously built, with a power level of 1 MW thermal. The HRE-1 had an output temperature of 250°C and was also coupled to a steam turbine, demonstrating an ability to produce 140 kW of electric power from its heat (**Table 1**). The HRE-1 used a novel control system utilizing magnetically coupled neutron-absorbing plates between its core and reflector, dispensing with the need for traditional control rods.

The HRE-1 was the first solution reactor that was built with the goal of extracting power in mind and would go on to conduct experiments regarding control and power output. During such experiments, it was further confirmed that AHRs possessed a very strong negative thermal reactivity coefficient [8], owing to the effect of voiding in the solution at higher-than-designed power levels and subsequently reduced

*Possible Applications of Modern Aqueous Homogeneous Reactors DOI: http://dx.doi.org/10.5772/intechopen.111896*

#### **Figure 1.**

*SUPO reactor vessel without its graphite reflector in place. Public domain.*

moderation, an effect originally discovered during an accidental reactivity excursion with SUPO.

These aqueous homogeneous reactors (AHRs) usually have a simple construction relative to solid-fuelled reactors, often consisting of simple tank-like structures and requiring far less fuel fabrication (a very expensive and complex task in itself), as the fuel is not held in rods but kept in either a solution or a slurry [9]. This design makes a meltdown in reactors of this type highly unlikely and reduces the volume of active coolant, possibly making containment structures more compact and economical.

Today, they serve as convenient sources of neutrons and fission products and are often used to produce radioisotopes for medical and industrial uses [9], as processing of the liquid fuel solution can be done *on-line* to continuously extract target isotopes before they decay, or after batches of short reactor runs. However, there are other characteristics of these reactors that could also make them attractive for a wider range of applications.

The thermal energy of the AHR may come in useful in applications where the complexity and cost of high-temperature reactors are not needed, such as district heating and seawater desalination. With a high fuel burn-up and a simple control strategy, such reactors could be used for long periods of time, almost, if not entirely, unattended, as clean and sustainable industrial and commercial heat sources, much like a traditional fuel-fired hot water heater or boiler. All the while, the hydrogen arising from radiolysis, a problem discussed below, may actually be a blessing in an era where we are starting to use hydrogen as a fuel itself but find it difficult to produce without fossil fuels.

#### **Figure 2.**

*HRE-1 internal details. ORNL drawing D-9065A. Public domain.*


#### **Table 1.**

*HRE-1 design parameters. Adapted from Murray [8].*

#### **2. AHRs for the supply of heat**

While the limited output temperature of the AHR makes it unsuitable for power generation, this low-grade heat is still usable for other industrial and commercial uses. A plethora of applications that currently utilize process steam could possibly

be made nuclear using the AHR at a low capital cost, and replace combustion-fired boilers and heaters at many industrial sites.

The output temperatures of AHRs are limited partly by the high amounts of radiolysis encountered in operation; this, combined with heat, causes dissociation and precipitation of the solution at higher temperatures. These effects limit the output temperatures of AHRs to around 300°C, lower than the output temperature of modern PWRs and far lower than that of gas-cooled designs such as the HTR-PM, with its coolant outlet temperature of roughly 750°C [5].

However, with higher temperatures come higher pressures and stresses, requiring stronger, heavier, and more expensive structures to deal with such heat, especially those that hold water under pressure. As most AHR designs are not designed for two-phase flows (boiling) in the main reactor, this would necessitate a pressure vessel strong enough to keep the water from boiling at the reactor's operating temperature.

Such a pressure vessel could be selected for specific operating conditions and built accordingly, with lower temperature reactors requiring thinner vessel walls and lighter ancillaries compared to higher temperature reactors. This could make lower temperature reactors simpler and more economical to design and construct. Reactors could even be designed for near-atmospheric pressures if temperatures under 100°C are needed.
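To make the pressure/temperature trade-off concrete, a rough Antoine-type correlation for the saturation pressure of water shows how quickly the required vessel pressure grows with operating temperature. The constants below are commonly quoted values for roughly the 100–374°C range; treat the results as illustrative estimates, not design data.

```python
# Rough saturation pressure of water from an Antoine-type correlation:
# log10(P[mmHg]) = A - B / (C + T[degC]).  Constants below are commonly
# quoted for roughly the 100-374 degC range; treat results as illustrative
# estimates, not design data.
A, B, C = 8.14019, 1810.94, 244.485

def p_sat_bar(t_celsius: float) -> float:
    p_mmhg = 10.0 ** (A - B / (C + t_celsius))
    return p_mmhg * 1.01325 / 760.0        # mmHg -> bar

for t in (100, 150, 250, 300):
    print(f"{t:3d} degC -> ~{p_sat_bar(t):6.1f} bar saturation pressure")
```

At roughly 250°C the vessel must contain on the order of 40 bar, while below 100°C near-atmospheric construction suffices, which is the economic argument made above.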

Low-complexity reactors that have been designed for such low-temperature applications are not a novel idea. For example, the SLOWPOKE Energy System was conceived by the AECL in Canada, and was a pool-type reactor with an output temperature of no more than 100°C. The low temperature allowed for non-pressurized construction and an increased margin of safety, obviating the need for operators; extensive automation was a goal of the design [10].

In the Czech Republic, the TEPLATOR project proposes to use spent PWR fuel bundles in a tank for the express supply of district heat at 98°C, and although it is not a homogeneous design and runs on spent fuel, the low capital cost of constructing such simple reactors may allow TEPLATOR to deliver heat at prices significantly lower than with fossil fuels [11]. A purpose-built AHR could, in theory, do the same, if not more effectively.

Such heat from a low-cost and low-complexity source could enable greater use of nuclear power in seawater desalination and district heating. Contemporary multistage flash distillation (MSFD) processes can usually only work with temperatures under 120°C, while multiple effect distillation (MED) processes can only work with temperatures up to 70°C before scale formation becomes an issue [12]. Although nuclear desalination is a tried and tested concept, most implementations have used steam or hot water extracted from the turbines or condensers of power-producing reactors, increasing implementation costs. An AHR could be used solely as a heat source, and be designed for lower temperatures, potentially lowering implementation costs and making desalination a more accessible endeavor.

District heating has historically used network temperatures upwards of 200°C, but modern "fourth generation" systems are trying to bring this down to as low as 70°C [13] in a bid to capture more waste heat from various sources. As a compact and inherently safe design, AHRs could be sited very close to the built-up areas where district heating networks exist, lowering transmission losses and decreasing the need for expensive network piping. The heat could also be used to facilitate district cooling with absorption chillers, allowing such a plant to operate all year round in temperate climates by supplying customers with chilled water.

Although direct production of steam is possible within the AHR itself (much like a BWR) [8], it is impractical due to the presence of fission products in the steam. Because of this, most schemes to extract heat from AHRs would have to employ intermediate heat exchangers and/or steam generators. Such a heat exchange loop would probably be of the *hi-lo-hi* type, with a high isolation loop pressure, as seen in nuclear desalination applications to preclude the possibility of contaminating network fluids with reactor solution [14].

#### **3. AHRs for hydrogen production**

The rapidly developing hydrogen economy currently relies on natural gas reforming to produce the vast majority of the substance, but has been transitioning to use sustainable sources of energy to lessen its reliance on fossil fuels. The use of hydrogen from nuclear sources would greatly help this effort. Most other schemes to produce hydrogen with nuclear energy involve either high temperature reactors using thermochemical cycles, or the use of electrolysis from nuclear electricity [15]. Such schemes involve equipment ancillary to the reactor itself, and require diversion of energy, reducing the power output of the associated plant in cogeneration. Such capital and running costs make nuclear hydrogen production difficult to viably implement at present.

However, the AHR is able to produce it *directly* as a by-product, potentially at a lower cost than other nuclear-based options, while providing heat that could be useful for other parts of the hydrogen production process. As the uranium fuel ions in the AHR are in direct contact with the water as fission occurs, the intense energy of the fission fragments and other energetic particles causes the water to dissociate into its constituents: hydrogen and oxygen [16]. This radiation-driven process is called radiolysis, and it is responsible for the large amounts of hydrogen produced by the AHR. However, combined with the oxygen produced, this potentially explosive atmosphere constitutes a hazard inside the reactor and necessitates constant recombination and removal.
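To get a feel for the quantities involved, the radiolytic hydrogen rate scales with reactor power through the G-value (molecules produced per 100 eV absorbed). The sketch below is an order-of-magnitude estimate only; the G-value of 1 molecule per 100 eV is an assumed round figure for fission-fragment radiolysis, not a number from this chapter, and it idealizes all fission energy as being deposited in the solution.

```python
# Order-of-magnitude estimate of radiolytic hydrogen production in an AHR.
# The G-value (molecules of H2 per 100 eV absorbed) is an assumed figure:
# yields for fission-fragment radiolysis of water are of order 1 per 100 eV,
# far higher than for gamma radiolysis.

EV_PER_JOULE = 1.0 / 1.602e-19   # eV deposited per joule
AVOGADRO = 6.022e23              # molecules per mole

def h2_production_mol_per_s(thermal_power_w, g_value=1.0):
    """Moles of H2 per second, assuming all fission energy is
    absorbed in the fuel solution (an idealization)."""
    ev_per_s = thermal_power_w * EV_PER_JOULE
    molecules_per_s = ev_per_s * g_value / 100.0
    return molecules_per_s / AVOGADRO

# A 1 MW(th) solution core:
rate = h2_production_mol_per_s(1.0e6)
print(f"{rate:.3f} mol H2/s, ~{rate * 22.4 * 3600:.0f} L/h at STP")
```

Even at this modest power, the off-gas stream is substantial, which is why continuous separation and recombination equipment was integral to the early homogeneous reactor designs.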

Soluble ionic catalysts were proposed and tested to reduce this production, with copper shown to be effective in virtually stopping all production in the HRT [8]. But the hydrogen created by this reaction may be a far more valuable commodity going forward: world hydrogen demand is projected to grow 5 to 7 times larger than its 2021 level by 2050, with hydrogen making up 15–20% of all energy demand [17].

As hydrogen is known to cause embrittlement issues in many materials, and creates an explosive atmosphere, the construction of such a reactor would require the use of novel techniques for extraction of the hydrogen and mitigation of the associated corrosion. Engineers at Los Alamos used mechanical gas separators that would use swirl-vanes to centrifuge out the gas in the HRT and send it to recombiners [18]. It should be noted, however, that hydrogen production in the AHR requires moderately high temperatures, and it would be inefficient to use AHRs to solely produce hydrogen [19]. Such hydrogen producing AHRs would inevitably have to be heating or power reactors, where the production of hydrogen could be seen as a bonus product or be used as a process gas alongside its associated nuclear process heat in industrial settings.

#### **4. AHRs for the supply of power**

#### *Possible Applications of Modern Aqueous Homogeneous Reactors DOI: http://dx.doi.org/10.5772/intechopen.111896*

As demonstrated with the HRE, production of power using steam turbines is possible with the AHR. By running the core solution through a steam generator, the HRE-1 was able to produce almost 1 MW of steam at 250°C, giving its 140 kW power plant an efficiency of 14%. Later, the HRT increased this temperature to 300°C, but had issues with the solution dissociating, creating hot spots and power fluctuations. These issues were caused by phase instabilities of UO2SO4 in water at temperatures above 340°C [8], and they limit the efficiency of such a setup. Increasing the temperature beyond this also greatly increases the corrosion of reactor internals. Other experiments with different acid-based chemistries were able to increase the operating temperature to 450°C [16], but high rates of corrosion make such exotic chemistries impractical even today.
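The HRE-1 figures above can be checked with a line of arithmetic and compared against the ideal thermodynamic ceiling for its 250°C steam temperature; the 25°C heat-sink temperature used in the Carnot comparison is an assumption, not a value from the source.

```python
# Reproducing the HRE-1 efficiency figure and comparing it with the
# Carnot limit for 250 C steam (25 C sink temperature assumed).

def carnot_efficiency(t_hot_c, t_cold_c=25.0):
    """Ideal heat-engine efficiency between two temperatures in Celsius."""
    t_hot, t_cold = t_hot_c + 273.15, t_cold_c + 273.15
    return 1.0 - t_cold / t_hot

actual = 140.0 / 1000.0            # 140 kW electrical from ~1 MW of steam
ideal = carnot_efficiency(250.0)
print(f"actual {actual:.0%}, Carnot ceiling {ideal:.0%}")
```

The large gap between the achieved 14% and the ~43% ideal limit reflects the small, early-1950s turbine plant rather than the reactor itself; the real constraint on AHR power conversion is the solution-chemistry cap on steam temperature.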

Due to these corrosion issues limiting the output temperature, an AHR would not be the best choice of reactor for power production; nonetheless, it could be a simple way to implement nuclear power in mechanical or electrical applications. Its design and construction advantages could be useful in applications where the increased fuel costs from the inefficiencies of the reactor plant do not warrant a more efficient, yet more capital-intensive, type of reactor. Such applications could include powering remote towns and facilities, or powering coastal and riverine vessels as part of a nuclear propulsion scheme.

An AHR could also be coupled to a Stirling engine, eliminating the need for a traditional steam plant and associated steam generators, potentially reducing implementation costs in certain scenarios. Another option would involve using an organic Rankine cycle (ORC), as temperatures below 350°C make the use of steam Rankine cycles particularly inefficient. ORC systems use an organic working fluid, such as pentane or R134a, to achieve higher efficiencies at lower temperatures, and have seen use for capturing waste or low-temperature heat from industrial processes and renewable energies [20]. Such alternative cycles would make power production from AHRs a more efficient undertaking in the present day.

Given its placid control characteristics, an AHR power plant could be operated autonomously, needing personnel on-site only for periodic maintenance and refueling. This characteristic, combined with the potential for cogeneration with heat, makes such plants particularly attractive for arctic areas, where keeping personnel around is a challenge in itself.

If further drops in implementation costs are desired, the AHR could be used as a two-phase reactor, and include boiling in the core [16]. This would eliminate the need for steam generators, but would also expose the steam plant to fission products, contaminating it in the process. Such a design would also be problematic for the supply of heat, as it would have to be extracted from either the condenser, or off of the turbine stages, and then sent through a heat exchanger to render it safe for use.

#### **5. AHRs as radioisotope sources**

Owing to its compact size and the intense neutron flux it provides, the AHR is an excellent producer of radioisotopes. The solution chemistry of the AHR allows it to burn up uranium almost completely and simultaneously remove fission products, both poisons and potential commodities, such as Xe-135 or Mo-99, for use as industrial and commercial sources of radiation.

The extraction of Mo-99 in particular from AHRs is of great interest. Mo-99 is an essential radioisotope for the medical industry, as it is used to produce Tc-99m, a tracer used in medical imaging [21]. Mo-99, however, has a very short half-life of 66 h, which complicates its production and transportation. The use of an AHR would allow for on-line processing to continuously produce Mo-99 whenever needed.
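The transport problem follows directly from the 66 h half-life quoted above; a short, illustrative decay calculation shows how quickly a shipped batch loses value.

```python
import math

# Exponential decay of Mo-99 (66 h half-life) during shipment: how much
# of a dispatched batch remains after a given transport delay.

MO99_HALF_LIFE_H = 66.0

def fraction_remaining(hours, half_life=MO99_HALF_LIFE_H):
    """Fraction of the original Mo-99 activity left after `hours`."""
    return math.exp(-math.log(2.0) * hours / half_life)

# Two days in transit costs roughly 40% of the batch:
print(f"after 48 h: {fraction_remaining(48):.1%} remains")
```

This is why continuous, on-site or near-site production, as an AHR permits, is so attractive compared with infrequent large shipments from distant target-irradiation facilities.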

By use of chemical extraction methods, the Mo-99 condensate can be collected from the reactor loop without removing the fissile fuel. This method would also greatly reduce the waste generated when using a solid uranium target, of which the uranium is only 0.4% spent, and whose processing releases fission products and unused uranium into waste streams [22].

Other isotopes, like Sr-89, are often also difficult to produce using solid target-based systems. With the AHR, it is possible to create Sr-89 without the associated Sr-90 impurity by extracting the precursor gasses. At the ARGUS reactor at the Kurchatov Institute, Sr-89 of high purity is extracted by running the reactor for a few minutes at a time to generate the needed krypton precursors, then waiting for the short-lived Sr-90 precursor, Kr-90 (half-life of 33 s), to decay away while the relatively long-lived Sr-89 precursor, Kr-89, survives [22].

This Kr-89 is then pushed out of the reactor into a sorbent bed with the help of inert gasses, where it decays to produce Sr-89 and awaits further processing to remove other fission products. After purification, the Sr-90 content of the strontium recovered is insignificant. Although running such a reactor for the sole purpose of extracting Sr-89 would be impractical, it is something that can be done alongside the production of Mo-99 and other radioisotopes.
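The decay-timing window that makes the ARGUS scheme work can be sketched numerically. The 33 s Kr-90 half-life is taken from the text; the ~3.15 min half-life used for Kr-89 is an assumed literature value, not given in this chapter.

```python
import math

# Decay-timing window in the ARGUS Sr-89 scheme: hold the krypton until
# the Kr-90 (33 s half-life, per the text) is essentially gone while most
# of the Kr-89 survives. The ~3.15 min Kr-89 half-life is an assumed
# literature value.

def remaining(t_s, half_life_s):
    """Fraction of a nuclide left after t_s seconds of decay."""
    return math.exp(-math.log(2.0) * t_s / half_life_s)

hold = 5 * 60.0  # hold the gas for five minutes
kr90_left = remaining(hold, 33.0)
kr89_left = remaining(hold, 3.15 * 60.0)
print(f"Kr-90: {kr90_left:.2e} left, Kr-89: {kr89_left:.1%} left")
```

A hold of only a few minutes suppresses the Sr-90 precursor by roughly three orders of magnitude while a third of the Sr-89 precursor is still available, which is the essence of the high radioisotopic purity reported.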

The production of other useful isotopes, such as Xe-133 (another useful medical tracer), I-131 (a radiotherapy agent), and Cs-137 (a potent gamma emitter with many industrial uses), is also possible, and was demonstrated with ARGUS [22]. If other radioisotopes needed to be produced in solid form, an AHR could be designed with provisions for the insertion of targets. If, for example, Co-60 needed to be produced in rod form, the reactor could be designed to accommodate the irradiation of such rods within its core.

Other radioisotopes of interest are the Pu-239 and U-233 generated by transmutation in uranium- and thorium-fueled reactors. By using a breeder design with a "blanket" solution around the core, escaping neutrons could be absorbed by fertile material such as U-238 or Th-232, rendering it fissile [16]. Such a system could also be adapted, in theory, to produce H-3 (tritium) from either H-2 or Li-6 for use in many applications.

#### **6. Conclusions**

The unique design features of aqueous homogeneous reactors make them a valuable tool for expanding the uses of nuclear power. The simple design and control characteristics of AHRs could make them very useful wherever heat and/or steam is required but the complexities of solid-fuelled reactors are unwarranted and unwanted. Moreover, if hydrogen needs to be produced, the AHR is capable of doing so directly. Various radioisotopes or fertile fuels could be extracted from such reactors as a value-added product during their normal operations, or be produced by reactors exclusively designed to do so.

Industrial and commercial users could benefit from a heat source that has the potential to be far cheaper than fossil fuels, yet cause far less environmental harm. This will be especially important in an era where governments worldwide are trying to wean themselves off of fossil fuels, which supply over 50% of the world's heating needs [23]. AHRs can supply heating to entire district heating networks, or be made small enough to supply individual buildings. Cooling can also be provided with the use of absorption chillers, and if done so while generating electricity, could comprise an economical CHP (combined heat and power) or CCHP (combined cooling, heat and power) scheme.

Thermal energy from the AHR could also be used for nuclear desalination purposes, where the energy costs of the process heavily determine the price of the purified water produced. Low-cost nuclear desalination could help arid areas of the world gain access to clean water without the associated fossil fuel needs or pollution. An AHR built expressly for this purpose has the potential to be far more economical than using power reactors or combustion boilers, as its material, logistic and safety requirements are greatly relaxed.

The production of hydrogen in the AHR is something that would usually comprise a problem, but with clever extraction techniques, this annoyance may turn out to be a lucrative opportunity in light of the growth in the hydrogen economy. Of course, if the complexities arising from hydrogen production in such a reactor are found to be a concern, catalysts may be added to the solution to facilitate complete recombination of these radiolysis products. But, as very little extra equipment is required for its extraction, the AHR could easily produce hydrogen as a value-added product for sale, or for use in industrial processes as feedstock or fuel. As the use of hydrogen as a fuel grows worldwide, the AHR finds itself uniquely suitable for the production of the gas without emissions or extra costs.

The production of radioisotopes would be another value-added product that could come from heating reactors, or from reactors purposely built for it. The solution chemistry of AHRs allows for the extraction of a wide range of radioisotopes without the large volumes of waste seen with traditional target-based systems, and permits the extraction of certain radioisotopes, such as Sr-89, that would otherwise be impractical to produce with traditional systems. This same chemistry allows AHRs to achieve a high fuel burn-up, as well as a high breeding ratio, potentially allowing AHRs to operate as breeder reactors.

Although the output temperatures of AHRs are limited, they can still be used to generate power as demonstrated before (albeit with lower efficiency than higher-temperature reactors). An AHR could provide just enough electricity through a Rankine or Stirling cycle to power itself and associated facilities, or generate enough to supply other loads, such as industrial, commercial, and residential consumers of electricity. If done alongside thermal and hydrogen production, this could make the AHR a very flexible reactor type in cogeneration. This energy could also be used for propulsion at sea and on inland waterways, where such reactors may enable economically built nuclear-powered vessels, such as tug boats or coastal bulk freighters, for which a more complex reactor type would be hard to justify.

As the world increasingly becomes more conscious of the deleterious effects of fossil fuel consumption and strives to move away from fossil fuels altogether, nuclear energy will be a vital tool in facilitating its replacement. In doing so, many different designs for nuclear reactors will be needed for the different applications they will be optimal for. The AHR has the potential to economically decarbonize industries and sectors that rely on low-temperature heat, or supply clean electric power. Hydrogen created from the regular operation of such reactors may be harnessed and utilized as a fuel, or be used for other industrial processes. These reactors are also optimal for the production of radioisotopes — especially those for medical uses, and could be made compact enough to be installed near or at the facilities that need them on a regular basis. Using modern technologies, it is not beyond the realm of possibility that the AHR can be deployed for commercial use in the near future. With its inherently safe and simple features bolstered by the use of our advanced tools and knowledge, the AHR is a very promising design looking forward.

#### **Acknowledgements**

Special thanks to Bianca, whose hospitality was instrumental in the creation of this chapter. I owe her my utmost gratitude.

### **Author details**

Ahmed Shaker University of Ontario Institute of Technology, Oshawa, ON, Canada

\*Address all correspondence to: ahmed.shaker@uoit.net

© 2023 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### **References**

[1] IEA. World Energy Outlook 2022 [Internet]. 2022. Available from: https://www.iea.org/reports/world-energy-outlook-2022 [Accessed: 29 January 2023]

[2] S&P Global. Nuclear energy is growing on a global scale [Internet]. 2022. Available from: https://www.spglobal.com/esg/s1/research-analysis/nuclear-energy-is-growing-on-a-global-scale.html [Accessed: 29 January 2023]

[3] Nikkei Asia. Japan PM Kishida orders new nuclear power plant construction [Internet]. 2022. Available from: http://asia.nikkei.com/Politics/Japan-PM-Kishida-orders-new-nuclear-power-plant-construction [Accessed: 29 January 2023]

[4] Bartak J, Bruna G, Cognet G. Economics of Small Modular Reactors: Will They Make Nuclear Power More Competitive? Journal of Energy and Power Engineering. 2021;**15**:193-201. DOI: 10.17265/1934-8975/2021.06.001

[5] Xu S, Lu YHM, Mutailipu M, Yan K, Zhang Y, Qvist S. Repowering coal power in China by nuclear energy—Implementation strategy and potential. Energies. 2022;**15**(3):1072. DOI: 10.3390/en15031072

[6] Westlén D. Why Faster Is Better - On Minor Actinide Transmutation in Hard Neutron Spectra [thesis]. Stockholm: KTH; 2007

[7] Bunker Merle E. Early reactors: From Fermi's water boiler to novel power prototypes. In: Los Alamos Science No. 7. Los Alamos: Los Alamos National Laboratory; 1983. pp. 124-131

[8] Murray WR. An Account of Oak Ridge National Laboratory's Thirteen Nuclear Reactors. Oak Ridge: Oak Ridge National Laboratory; 2009. pp. 14-23

[9] Rijnsdorp S. Design of a Small Aqueous Homogeneous Reactor for Production of 99Mo [thesis]. Delft: Delft University of Technology; 2014

[10] Hilborn JW, Lynch GF. SLOWPOKE - Heating Reactors in the Urban Environment. Chalk River: Chalk River Nuclear Laboratories; 1988. pp. 1-6

[11] Hussein ASA, David M, Michael M, Radek S. Economics of reusing spent nuclear fuel by Teplator for district heating applications. International Journal of Energy Research. 2022;**46**(5):1. DOI: 10.1002/er.7521

[12] Alharthi K. Increasing the Top Brine Temperature of Multi-Effects Distillation (MED) to Boost its Performance through Controlling the Formation of Scale by Nanofiltration and Antiscalants. Thuwal: King Abdullah University of Science and Technology; 2020. p. 15

[13] Lund H, Werner S, Wiltshire R, Svendsen S, Thorsen JE, Hvelplund F, et al. 4th Generation District heating (4GDH) integrating smart thermal grids into future sustainable energy systems. Energy. 2014;**68**:5. DOI: 10.1016/j.energy.2014.02.089

[14] Bouaichaoui Y, Belkaid A, Amzert SA. Economic and safety aspects in nuclear seawater desalination. Procedia Engineering. 2012;**33**:151. DOI: 10.1016/j.proeng.2012.01.1187

[15] Non-Nuclear Products of Nuclear Energy. Issy-les-Moulineaux: OECD NEA; 2004. pp. 9-11

[16] Lane JA, Thompson WE. Homogeneous reactors and their development. In: Lane JA, MacPherson HG, Maslan F, editors. Aqueous Homogeneous Reactors. Reading: Addison-Wesley; 1958

[17] Energy Transitions Commission. Making the Hydrogen Economy Possible: Accelerating Clean Hydrogen in an Electrified Economy [Internet]. 2021. Available from: https://www.energy-transitions.org/publications/making-clean-hydrogen-possible/ [Accessed: 21 February 2023]

[18] Hafford JA. Development of the Pipe-Line Gas Separator. Oak Ridge: Oak Ridge National Laboratories; 1954

[19] Kerr W, Majumdar DP. Aqueous homogeneous reactor for hydrogen production. In: Proceedings of the Hydrogen Economy Miami Energy Conference; 18-20 March 1974; Miami Beach. New York: Plenum Press; 1975. pp. 167-181. DOI: 10.1007/978-1-4684-2607-6\_12

[20] Rettig A, Lagler M, Lamare T, Li S, Mahadea V, McCallion S, et al. Application of organic Rankine cycles. In: Proceedings of the World Engineer's Convention; 4-9 September 2011; Geneva. ZHAW: Winterthur; 2011

[21] Papagiannopoulou D. Technetium-99m radiochemistry for pharmaceutical applications. Journal of Labelled Compounds and Radiopharmaceuticals. 2017;**60**(11):502-520. DOI: 10.1002/jlcr.3531

[22] Chuvilin DY, Khvostionov VE, Markovskij DV, Pavshouk VA, Zagryadsky VA. Low-waste and proliferation-free production of medical radioisotopes in solution and moltensalt reactors. In: Rahman RA, editor. Radioactive Waste. London: IntechOpen; 2012. p. 142, 153-160. DOI: 10.5772/37158

[23] IEA. Heating - Fuels & Technologies [Internet]. 2022. Available from: https://www.iea.org/fuels-and-technologies/heating [Accessed: 21 February 2023]

Section 4 Applications

#### **Chapter 7**

## Applications of Fission

*Anna C. Hayes*

#### **Abstract**

This chapter is devoted to a discussion of applications of nuclear fission. It covers some aspects of the topics of nuclear reactors, nuclear safeguards and nonproliferation, reactor antineutrinos, and nuclear medicine. It is, however, limited in scope and the reader is encouraged to explore the many other exciting sub-areas of the applications of nuclear fission.

**Keywords:** nuclear reactors, MOX fuel, fission antineutrinos, nuclear nonproliferation, radio-nuclide therapy

#### **1. Introduction**

The two key properties of fission, the release of several neutrons and the release of a broad distribution of fission fragments, are fundamental reasons for the huge number of applications that have come since the discovery of the fission process. The importance of neutron emission mainly lies in the uses of fission chain reactions, while the radioactive nature of most of the fission fragments underlies applications from this aspect of fission. The chain reaction nature of fission can be exploited in two ways: controlled fission chain reactions used to generate nuclear energy, and uncontrolled chain reactions in explosive nuclear devices. Both of these uses of fission have been in existence since the Second World War, when many of the top scientists of the time concentrated on making these applications a reality. Although today the basic and practical principles determining our ability to use fission for nuclear energy or nuclear explosions are well understood, the initial years required a great deal of ingenuity and creativity. Equally creative was the research, mostly begun in the 1950s, that went into harnessing the products of fission for medical isotopes. Several fission fragments have ideal decay properties for use as diagnostics or in radiotherapy to treat some medical conditions, especially cancer. The decay properties of many fission fragments also make them ideal probes for nuclear nonproliferation and reactor safeguards. Another property of the decay of fission fragments is that their beta decay results in the emission of antineutrinos, and the antineutrino fluxes from nuclear reactors have been used in several worldwide collaborations to examine neutrino oscillations and physics beyond the Standard Model of particle physics.

The enormous range and diversity of applications of nuclear fission make it one of the most intriguing subatomic processes. In this chapter, I will attempt to discuss a selection of these applications, ranging from nuclear reactors to nuclear nonproliferation to neutrino physics. It is not possible to cover all of the applications of fission that have been invented by so many resourceful scientific teams. But it is my hope that the discussion of the applications presented here will encourage the reader to explore this rich field in more depth.

#### **2. Nuclear reactors**

This section discusses aspects of nuclear reactors that are important for understanding the later discussions of nonproliferation and reactor neutrino physics; in particular, it pays attention to the production of plutonium in reactors. Most modern reactors use low-enriched (2–5% 235U) uranium fuel and a thermalized neutron flux. The shape of the neutron flux depends on the initial enrichment, and in **Figure 1**, examples are shown for a pressurized water reactor (PWR) with different enrichments. These fluxes were derived from MCNP/CINDER [1] simulations. With fresh fuel, the burn is initially dominated by fissions of 235U, but as the burn proceeds, 239Pu is generated by neutron capture on 238U followed by two beta decays, **Figure 2**.
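The capture-and-decay chain just described can be illustrated with a toy numerical integration. This is a sketch with an arbitrary capture rate, not an MCNP/CINDER calculation; the 23.45 min half-life assumed for 239U is a literature value not given in this chapter, while the 2.355 d half-life of 239Np is quoted in the text.

```python
import math

# Toy model of the capture-and-decay chain of Figure 2,
#   238U(n,gamma) -> 239U -> 239Np -> 239Pu,
# integrated with an explicit Euler step at a constant capture rate of
# 1 atom/s (arbitrary normalization).

LN2 = math.log(2.0)
LAM_U239 = LN2 / (23.45 * 60.0)      # 239U decay constant, 1/s (assumed half-life)
LAM_NP239 = LN2 / (2.355 * 86400.0)  # 239Np decay constant, 1/s

def ingrowth(days, rate=1.0, dt=60.0):
    """Atom counts (239U, 239Np, 239Pu) after `days` at constant capture `rate`."""
    n_u = n_np = n_pu = 0.0
    for _ in range(int(days * 86400.0 / dt)):
        d_u = rate - LAM_U239 * n_u
        d_np = LAM_U239 * n_u - LAM_NP239 * n_np
        d_pu = LAM_NP239 * n_np
        n_u, n_np, n_pu = n_u + d_u * dt, n_np + d_np * dt, n_pu + d_pu * dt
    return n_u, n_np, n_pu

# After 30 days the intermediates have saturated at their equilibrium
# inventories and most captured atoms have already reached 239Pu:
n_u, n_np, n_pu = ingrowth(30.0)
print(f"239Pu share of all captures: {n_pu / (30.0 * 86400.0):.1%}")
```

The days-long 239Np holdup visible in this sketch is exactly why, as the Figure 2 caption notes, very high neutron fluxes can alter the 240Pu/239Pu ratio: at high flux, 239Np can capture a neutron before it decays to 239Pu.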

The in-growth rate of the different Pu isotopes depends on the reactor design and the initial enrichment of the uranium fuel. For example, in **Figure 3**, we compare the percentage of fissions induced by the different isotopes in the fuel for two PWRs that differ only in fuel enrichment, one with 2.7% and the other with 4.2% enriched uranium. The total exposure of the 4.2% enriched fuel is about twice that of the 2.7% enriched fuel, at 60 GW days per metric ton of uranium (GWd/MTU). These simulations were taken from Ref. [2] and from earlier coupled calculations with the EPRI/CELL and CINDER codes [3, 4]. Apart from the factor-of-two difference in total burn times, the two fission histories are very similar. Indeed, increasing the fuel enrichment induces a kind of "accordion" effect in which the *x*-axis is simply stretched.

#### **Figure 1.**

*The neutron flux of PWRs with different levels of 235U enrichment. The flux is graphed as flux (Φ(E)) per unit lethargy (Δu), that is, Φ(E)/Δu, where Δu = ln(E<sub>i</sub>/E<sub>i+1</sub>).*

#### **Figure 2.**

*Reactor production of 239Pu and 240Pu starts with neutron capture on 238U. The production is complicated by the capture on and beta decay of the intermediate nuclei 239U, 240U, 239Np, and 240Np. At high neutron flux, φ ≳ 10<sup>14</sup> n/cm<sup>2</sup>/sec, the 2.355-day half-life of 239Np affects the 240Pu/239Pu ratio.*

#### **Figure 3.**

*The fission history for fresh 2.7% enriched (97.3% 238U, 2.7% 235U) and 4.2% enriched (95.8% 238U, 4.2% 235U) reactors. As the fuels burn, the importance of 235U, in terms of the fraction of fissions taking place, decreases steadily. The fission histories are very similar for the two enrichments, apart from the clear difference in burn time.*

#### **2.1 Other reactor designs**

A number of alternative reactor designs to PWRs exist or have been proposed. Perhaps the most famous is the Canadian CANDU reactor [5]. CANDU designs use natural uranium (0.711% 235U, 99.28% 238U, and trace 234U) and are pressurized heavy-water (D2O) designs. Unlike conventional light-water PWRs, where the core fuel is placed in a pressure vessel, CANDU fuel bundles are contained in pressure tubes, and these pressure tubes are in turn contained in a larger unpressurized vessel (a calandria) filled with the heavy-water moderator. In addition to being capable of producing energy from natural uranium, a large advantage of the CANDU design is that refueling can be done on a continuous basis, because only a single fuel channel needs to be depressurized and refueled at a time. This is in strong contrast to a normal PWR, where the entire core must be shut down in order to open the pressure vessel and refuel the core. There are 30 CANDU reactors operating around the world, in Argentina, Canada, China, India, Pakistan, Romania, and South Korea.

Another very different reactor design is the thorium-uranium fuel design [6]. Thorium is more than three times more abundant on Earth than uranium, and considerable research has gone into Th-U reactor designs. 232Th is almost stable, with a half-life of 1.405 × 10<sup>10</sup> y, and it accounts for 99.98% of the natural thorium on Earth. The remaining 0.02% is accounted for by the very small abundance of 230Th found in parts of the ocean. In general, 232Th can only be used in breeder reactors because it does not undergo thermal fission. The breeding process is analogous to the production of 239Pu in uranium reactors, in which 232Th absorbs a thermal neutron to make 233Th, and two beta decays result in 233U, which is fissile:

$$^{232}\mathrm{Th} + n \rightarrow {}^{233}\mathrm{Th}\ (22.3\ \text{min}) \xrightarrow{\beta^{-}\ \text{decay}} {}^{233}\mathrm{Pa}\ (26.975\ \text{days}) \xrightarrow{\beta^{-}\ \text{decay}} {}^{233}\mathrm{U} \tag{1}$$

The thermal neutron capture cross section of 232Th is larger than that of 238U, so 233U is bred more efficiently in thorium reactors than plutonium is in uranium reactors. To sustain criticality, a thorium reactor normally requires a second fissile material in the core to initiate the burn and to provide the neutron flux that allows the nuclear sequence in Eq. (1) to proceed. Many possible fissile fuels have been considered for adding to thorium-dominated fuel, mostly low-enriched uranium (LEU) or reactor-grade plutonium (RGPu). LEU generally refers to uranium enriched up to 20% in 235U. Higher enrichments would also work, but proliferation becomes an issue. RGPu is generally spent reactor fuel containing about 50% 239Pu, with the remaining Pu coming from 238Pu, 240Pu, 241Pu, and 242Pu.
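One practical consequence of Eq. (1) is the roughly month-long 233Pa step: bred material must sit while the protactinium decays before it is available as fissile 233U. A small calculation from the quoted half-life illustrates the scale of this holdup.

```python
import math

# The 26.975 d half-life of 233Pa in Eq. (1) sets how long bred material
# must be held (ideally out of the neutron flux, so the 233Pa is not
# transmuted by further captures) before most of it has become 233U.

PA233_HALF_LIFE_D = 26.975

def holdup_days(conversion_fraction):
    """Days until the requested fraction of 233Pa has decayed to 233U."""
    return -PA233_HALF_LIFE_D * math.log2(1.0 - conversion_fraction)

for f in (0.90, 0.99):
    print(f"{f:.0%} conversion needs {holdup_days(f):.0f} days")
```

Months-long conversion times of this order are part of why thorium fuel cycles require a co-loaded fissile driver to sustain criticality while the 233U inventory builds up.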

In **Figure 4**, we show the burn history for the case of 80% ThO2 + 20% LEU. For this example, 235U initially dominates the fissions, until sufficient 233U has been bred from the 232Th. However, the 238U in the LEU fuel also produces 239Pu, which eventually contributes more fissions than 235U.

#### **2.2 MOX plutonium fuels**

One of the important concepts for the nuclear fuel cycle is to reuse spent reactor fuel by separating the plutonium and mixing it with depleted uranium to form a mixed oxide (MOX) fuel [7]. The plutonium is recovered through reprocessing. Depending on the nation involved in the reprocessing, the spent uranium can also be reprocessed. There are about 40 reactors in Europe (France, Germany, Belgium, and Switzerland) that are licensed to use MOX, although only about 30 European reactors are currently using MOX fuel. Japan also recycles MOX fuel in its reactors. Typically, commercial thermal reactors in countries that recycle reactor fuel are loaded with one-third MOX, although some advanced light water designs are capable of accepting 100% MOX loadings.

#### **Figure 4.**

*The fission fractions predicted for a Th-U reactor as a function of the burnup. The main role of the 232Th is to breed 233U, and 233U eventually dominates the number of fissions being produced. The LEU fuel is necessary to bring the reactor up to critical and supply enough neutrons for the breeding of 233U. The LEU fuel is made up of 2.56% 235 U and 97.44% 238U. The 239Pu comes from neutron capture on 238U, as summarized in Figure 2.*

Another concept that has been explored is to recycle weapons-grade MOX plutonium fuel. The typical isotopic difference between weapons-grade and reactor-grade MOX plutonium is summarized in **Table 1**.

There are many MOX loadings that could be considered, but in a detailed analysis we considered [8] four possible cases:


For all four cases, the Monteburns code [1] was used, which couples the Monte Carlo neutron transport code MCNP [9] to the burn code CINDER'90 [10]; the latter uses 63 neutron energy groups and tracks up to 3400 nuclides, including 638 isomers. In all cases, we assume the same reactor configuration as the H.B. Robinson Unit 2 (HBR2) PWR, which was loaded with a fuel assembly that had 2.56% enriched fresh LEU and no MOX fuel. The simulations reproduce the reported history for the HBR2 PWR using variable concentrations of burnable boron poison rods over the four burn periods and the simulations predict spent fuel isotopic inventories for the uranium and plutonium isotopes within about 5% of measurement. The equivalent burn histories predicted for our four MOX fuel scenarios are shown in **Figures 5** and **6**.

**Table 1.**

*Initial isotopics for weapons and reactor-grade MOX fuels.*

#### **Figure 5.**

*Fission fraction for 235,238U, and 239,241Pu as a function of burnup for 33.3% MOX and 66.7% LEU. Panel (a) is for RG and (b) for WG plutonium.*

#### **Figure 6.**

*Fission fraction for 235,238U, and 239,241Pu as a function of burnup for pure MOX fuels. Panel (a) is for RG and (b) is for WG plutonium. The two grades of Pu are distinguished by the relative importance of 239Pu and 241Pu.*

#### *Applications of Fission DOI: http://dx.doi.org/10.5772/intechopen.111765*

As expected, the burn histories for cores involving partial or total MOX fuel are quite distinct from fuel only containing LEU. In the cases where one-third of the core is MOX fuel, the fissions are dominated by 235U and 239Pu. The largest difference in the burn between the RG and WG plutonium fuel is the relative importance of 241Pu. A second difference is the burnup value at which 239Pu becomes a larger fraction of the fission than 235U. As is usually the case with all uranium fuel, the relative importance of 238U remains approximately constant throughout the burn. When the fuel is 100% MOX Pu, 239Pu dominates the fissions for all burnups. Burning RG fuel differs from WG fuels in the relative importance of 239Pu versus 241Pu.

#### **3. Fission reactor antineutrinos**

One very important application of fission is the use of the antineutrinos emitted in the beta decay of fission fragments. Since the discovery of the antineutrino at the Savannah River reactor by Reines and Cowan in the 1950s [11–13], neutrino physicists have taken advantage of the intense sources of low-energy antineutrinos from nuclear reactors to study the fundamental nature of neutrinos, particularly neutrino oscillations. The concept of neutrinos oscillating [14] from one flavor to another is a natural outcome of gauge theories with massive neutrinos and, in its simplest form, it is expressed as a unitary transformation relating the flavor eigenstates (νe, νμ, ντ) to the mass eigenstates (ν1, ν2, ν3).
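In the simplest two-flavor limit, the unitary mixing just described leads to the familiar survival probability P = 1 − sin²(2θ) sin²(1.267 Δm² L/E), with L in km, E in GeV, and Δm² in eV². A minimal sketch (the numerical values below are illustrative, not taken from this chapter):

```python
# Two-flavor survival probability for a reactor antineutrino.
import math

def survival_prob(sin2_2theta, dm2_ev2, L_km, E_GeV):
    """P(nu_e -> nu_e) in the two-flavor approximation."""
    arg = 1.267 * dm2_ev2 * L_km / E_GeV
    return 1.0 - sin2_2theta * math.sin(arg) ** 2

# A 4 MeV antineutrino at a ~1.6 km baseline, with sin^2(2*theta13) ~ 0.084
# and dm2 ~ 2.5e-3 eV^2 (representative values):
p = survival_prob(0.084, 2.5e-3, 1.6, 4e-3)
assert 0.9 < p < 1.0   # few-percent disappearance at this baseline
```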

Understanding the details of antineutrino fluxes from reactor is a complex and detailed subject. All of the detectable neutrinos emitted from reactors come from the beta-decay of neutron-rich fission fragments and correspond to an antineutrino spectrum ranging from 0 to 10 MeV. The antineutrino spectrum from each fissioning actinide is different and involves thousands of beta-decay branches. The study of neutrino oscillations from reactors has become increasingly important and has now reached a precision science, wherein detailed knowledge of the antineutrino spectra is required. This is especially true since the emergence of the so-called reactor neutrino anomaly [15].

The history of reactor neutrino experiments is a very rich one that spans over 60 years. These neutrino oscillation experiments mostly involved searches for variations of the emitted spectrum with the distance from the reactor. In the early experiments, performed in the 1980s–1990s, neutrino detectors were placed within 100 meters of the reactor, but no oscillations were observed. These observations were consistent with predictions based on the three-neutrino oscillation model and with the findings of other neutrino experiments. The first observation of the disappearance of antineutrinos from a reactor was made by the KamLAND [19] experiment in the years between 2003 and 2008. KamLAND detected antineutrinos from 56 Japanese reactors at an average distance of about 180 km. By 2012, KamLAND, together with the CHOOZ [16] and Palo Verde [17, 18] experiments, had set an upper limit on the neutrino mixing angle θ13 and shown that θ13 was significantly smaller than the mixing angles θ12 and θ23. Following these experiments, the reactor neutrino field moved to much larger detectors and employed both a near detector and a main far detector. The reactor neutrino experiments Daya Bay [20], RENO [21], and Double Chooz [22] determined the value of θ13 with high accuracy. The best-fit value from Daya Bay is sin<sup>2</sup>(2θ13) = 0.084 ± 0.005.

#### **3.1 The physics determining fission antineutrino spectra**

For precision studies of neutrino oscillations, accurate knowledge is needed of the fission antineutrino spectra for each actinide contributing to the reactor fuel. This issue became a focal point of reactor neutrino studies when in 2011, a reevaluation of the spectra suggested by Huber [23, 24] and Mueller [25] (HM) stated that the expected spectra from all reactors should be increased by about 5–6%. If correct, this would introduce a very serious problem for the neutrino oscillations community because it would mean a need for reassessment of all previous reactor neutrino experiments. In particular, it would imply that the short-baseline experiments were observing neutrino oscillations, but this suggestion could not be accommodated by the standard three neutrino model. New models were proposed to explain the situation, including postulating the existence of a sterile neutrino.

A related problem arose from analyses of the change in the number of antineutrinos emitted from reactors with increasing fuel burnup [26]. As summarized in **Figure 3**, as the burn proceeds in a reactor, the relative importance of 239Pu increases. The total number of antineutrinos emitted from the fission of 239Pu is less than that emitted from 235U, so as the fuel evolves the number of antineutrinos emitted decreases. Daya Bay [26] used the reactor fuel evolution to extract a cross-section ratio for 235U/239Pu, σ235/σ239 = 1.445 ± 0.097. Here σ means the average cross section over an antineutrino spectrum for the inverse beta-decay process p + ν̄e → n + e<sup>+</sup>. However, the HM [23, 25] ratio is σ235/σ239 = 1.53 ± 0.05. The combination of the reactor neutrino anomaly and the anomalous σ235/σ239 ratio prompted the nuclear physics community to reexamine the detailed physics determining fission antineutrino spectra.

There are two complementary ways to determine the expected antineutrino spectrum from a fissioning nucleus [27]: the *ab initio* summation method and the electron spectrum conversion method. In either case, the aggregate fission antineutrino spectrum is determined by summing the contributions of all β-decay branches *b<sub>ni</sub>* of all fission fragments, weighted by the fragment yields *Y<sub>n</sub>(Z<sub>n</sub>, A<sub>n</sub>)*,

$$\frac{dN}{dE_{\overline{\nu}}} = \sum_{n} Y_n(Z_n, A_n) \sum_{i} b_{ni}\left( E_0^i \right) \mathcal{S}_{\overline{\nu}}\left( E_{\overline{\nu}}, E_0^i, Z_n \right) . \tag{2}$$

Here *Y<sub>n</sub>(Z<sub>n</sub>, A<sub>n</sub>)* is the cumulative fission yield for fragment (Z<sub>n</sub>, A<sub>n</sub>). The beta-decay spectrum S for a single transition in nucleus (Z, A) with end-point energy E0 = Ee + Eν is

$$\mathcal{S}(E_e, Z, A) = \mathcal{S}_0(E_e)\, F(E_e, Z, A)\, C(E_e)\left(1 + \delta(E_e, Z, A)\right), \tag{3}$$

where S<sub>0</sub> = G<sub>F</sub><sup>2</sup> p<sub>e</sub>E<sub>e</sub>(E<sub>0</sub> − E<sub>e</sub>)<sup>2</sup>/2π<sup>3</sup>, E<sub>e</sub> (p<sub>e</sub>) is the total electron energy (momentum), F(E<sub>e</sub>, Z, A) is the Fermi function, and C(E<sub>e</sub>) is a shape factor [28] for forbidden transitions. In the case of Gamow-Teller transitions C(E<sub>e</sub>) = 1. The corrections to the spectrum, included in the term δ(E<sub>e</sub>, Z, A), arise from weak magnetism, finite-size effects in the Fermi function, and radiative corrections. Changes in the treatment of these corrections were a major source of the initial reactor neutrino anomaly.

In the summation method, one uses the nuclear databases to carry out the sum over all fission fragments and end-point energies, Eq. 2. This method suffers from the problem that the databases are somewhat incomplete. There are theoretical models for some of the missing data, but overall it would be difficult to use the summation method to determine whether an anomaly exists at the 5% level or not.
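The structure of the summation in Eq. 2 can be sketched with a toy calculation. The branches below are hypothetical stand-ins for the thousands of real database entries, and the Fermi function and the corrections of Eq. 3 are omitted for brevity:

```python
# Toy summation-method sketch: sum allowed-shape antineutrino spectra
# over a few hypothetical beta-decay branches. Real calculations sum
# thousands of branches from the nuclear databases and include the
# Fermi function and spectral corrections.
import numpy as np

ME = 0.511  # electron mass, MeV

def branch_spectrum(E_nu, E0):
    """Allowed phase-space shape p_e * E_e * E_nu^2 for one branch;
    E0 is the maximum antineutrino energy (MeV). Corrections omitted."""
    Ee = E0 + ME - E_nu  # total electron energy for this E_nu
    pe = np.sqrt(np.clip(Ee**2 - ME**2, 0.0, None))
    return np.where((E_nu > 0) & (E_nu < E0), pe * Ee * E_nu**2, 0.0)

# hypothetical (yield, end-point) pairs standing in for database entries
branches = [(0.05, 8.0), (0.30, 5.5), (0.65, 2.8)]
E = np.linspace(0.0, 10.0, 500)
total = sum(y * branch_spectrum(E, e0) for y, e0 in branches)

assert total[E > 8.0].max() == 0.0  # spectrum ends at the highest end-point
assert total.max() > 0.0
```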

The second method for determining the antineutrino spectra is to convert a measured aggregate beta-spectrum into an antineutrino spectrum. This method also comes with difficulties. Any fit to an aggregate beta-spectrum is restricted to about 25–30 (fake) end-point energies, E<sub>0</sub><sup>i</sup>. This is because the measured [29–32] aggregate beta spectra are very smooth over five orders of magnitude, **Figure 7**. Thus, a prescription is needed for the 25–30 values of "Zeff" that enter the Fermi functions, and this introduces uncertainty at the few-percent level. However, regardless of the prescription used, it can be shown [33] that the ratio σ235/σ239 is constrained to be close to 1.53 ± 0.05 if the Schreckenbach data [29] are used for the conversion.

To address this problem, Kopeikin *et al.* [34] remeasured the ratio of the beta spectra for 235U and 239Pu at a research reactor at the National Research Centre Kurchatov Institute. They found a ratio of σ235/σ239 = 1.45 ± 0.03, which is close to the average value found by Daya Bay and RENO, but 5% lower than any reasonable analysis of the Schreckenbach 235U and 239Pu aggregate beta spectra could predict. In addition, the STEREO experiment, which was performed at a reactor with highly enriched 235U fuel, showed that the anomalous results for the change in the total antineutrino flux with fuel evolution observed at Daya Bay and RENO could be explained entirely in terms of the Schreckenbach measurements having a 235U spectrum with a normalization that was 5% too high. This then translated into the HM normalization also being too high. It is worth noting that the predictions of the summation method, using the JEFF-3.1 cumulative fission yields [35] and the ENDF/B-VIII.0 decay data [36], yield a ratio σ235/σ239 = 1.445, in agreement with the more modern experiments.

#### **Figure 7.**

*The aggregate fission antineutrino spectra for 235U, 239Pu, and 241Pu, deduced from the aggregate beta spectra measured by Schreckenbach et al. [29]. Conversion of these spectra to antineutrino spectra by Huber [23] and Mueller [25] led to the so-called reactor neutrino anomaly [15] and to a predicted ratio of 235U/239Pu antineutrino inverse beta-decay cross sections of 1.53 ± 0.05, which is larger than that observed at Daya Bay [26].*

#### **4. Nuclear threat reduction and global security**

Preventing nuclear proliferation is a worldwide problem that relies on international agreements and on our ability to detect illicit nuclear activities. Undeclared production of fissile material at all stages of any such program is of most concern. These international efforts include nuclear safeguards for reactors, detecting reprocessing of spent nuclear fuel for the production of weapons plutonium, detecting trafficking of nuclear material, and developing techniques for analyzing debris in the case of a possible terrorist nuclear explosion. The techniques that can be used often depend on the standoff distance. For example, at a compliant reactor, measurements can be made on-site, while for a reprocessing facility in an unfriendly nation, the techniques used may require detection schemes that can work many kilometers from the facility. We consider here some very different situations.

#### **4.1 Monitoring the fuel isotopic content in a molten salt reactor**

The international community is paying increasing attention to molten salt reactor (MSR) technology as a promising form of clean and reliable energy. In addition to the fuel being dissolved in the salt, many MSR designs involve fuel replacement and reprocessing on a (semi-)continuous basis. These operations render the standard techniques, wherein macroscopic samples are removed from fuel rods for assaying the actinide content, useless. Thus, MSR designs raise the challenge of how the isotopic composition of the dissolved fuel will be determined as the reactor burn progresses. Actinide and fission fragment inventories for standard light water reactors are determined by assaying spent fuel rods; in an MSR the fuel is dissolved in the salt, so this standard technique cannot be used. In this sub-section, we examine a possible method of measuring inventories in-line at the reprocessing station, where the fission gases would be released. The main concept is to replace actinide *measurements* with actinide *inferences* from fission gas measurements, applicable to MSRs with continuous refueling and reprocessing operations. Fission gases are automatically released in any reprocessing operation and are easily collected for assay (**Figure 8**).

In an MSR, the salt is typically treated in-line to remove as many fission poisons as possible. One set of fission products that is relatively easy to remove is the noble gases, xenon and krypton. This constant separation of the Xe and Kr gases makes the on-site reprocessing plant an ideal station to monitor the isotopic composition of these gases. As the burn proceeds:



**Figure 8.**

*An example of a molten salt reactor, taken from Wikipedia. The fuel is dissolved in the salt. The red box indicates a proposed online operation where fission gas measurements would be made, that is, next to the reprocessing plant.*

Therefore, for example, the growth rate of stable 134Xe is simply,

$$\frac{dN_{134\text{Xe}}}{dt} = \Gamma_f\, \overline{f}_{134\text{Xe}}, \tag{4}$$

where Γ<sub>f</sub> is the fission rate and f̄<sub>A</sub> is the burn-weighted cumulative fission yield of nucleus A. **Figure 9** shows the simple linear increase in the 134Xe/135Xe ratio with the burnup for an MSR with thorium chloride fuel enriched with 10% 233U [37]. Thus, a measurement of the 134Xe/135Xe ratio can be inverted to determine the burnup of the fuel, and, when coupled with a reactor simulation, the isotopic content.
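The inversion just described amounts to solving a linear calibration for the burnup. In this sketch the slope and intercept are hypothetical placeholders for constants that would come from a reactor simulation such as Monteburns:

```python
# Invert a (nearly) linear 134Xe/135Xe ratio-vs-burnup calibration.
# SLOPE and INTERCEPT are hypothetical calibration constants.

SLOPE = 2.4       # ratio increase per GWd/tHM (assumed)
INTERCEPT = 1.0   # ratio at zero burnup (assumed)

def burnup_from_xe_ratio(ratio_134_135):
    """Map a measured 134Xe/135Xe ratio back to a burnup (GWd/tHM)."""
    return (ratio_134_135 - INTERCEPT) / SLOPE

# A ratio measured at the reprocessing station maps back to the burnup
# that produced it:
assert abs(burnup_from_xe_ratio(INTERCEPT + SLOPE * 10.0) - 10.0) < 1e-9
```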

#### **4.2 Deducing the grade of plutonium from volatile fission fragments**

The grade of plutonium is normally defined by the ratio 240Pu/239Pu; weapons-grade and reactor-grade fuel are distinguished by the former typically having a 240Pu/239Pu ratio of less than 7% and the latter a ratio of about 25%. If the reactor is running under equilibrium conditions, the 240Pu/239Pu ratio depends only on the total neutron fluence, **Figures 10** and **11**. There is no dependence on the enrichment of the uranium fuel. However, the relationship between fluence and the 240Pu/239Pu ratio has a small flux dependence, especially at high flux. This is because 239Pu is produced by the decay of 239Np, which has a half-life of 2.355 days, and equilibrates at a rate that is flux-dependent, **Figure 2**.

Deducing the neutron fluence (n/cm<sup>2</sup>) that fuel has been exposed to, in order to deduce the grade of plutonium, can thus be translated into the problem of deducing the neutron flux (n/cm<sup>2</sup>/sec) and the total irradiation time.

#### **Figure 9.**

*The increase in the ratio of 134Xe/135Xe with burnup for a thorium chloride + 10% 233U MSR. By measuring the ratio of the stable xenon isotopes to 135Xe, the fuel burnup, and hence the fuel isotopic content, can be deduced (from Monteburns simulations [1, 37]).*

#### **Figure 10.**

*The 240/239Pu ratio dependence on total neutron fluence. For high flux, the curve becomes flux dependent because of the time required for 239Np to reach equilibrium.*

#### *4.2.1 Deducing the neutron flux from Xe and Cs fission fragments*

The competition between the 9.14-hour beta-decay of xenon-135 to cesium-135 and the thermal neutron capture on xenon-135 to xenon-136, with an abnormally large cross-section of 2.6x10<sup>6</sup> barns, causes the concentrations of the fission products 136Xe and 135Cs to be sensitive functions of the neutron flux, **Figure 12**.

If the thermal neutron flux is low, the 135Xe dominantly beta-decays to 135Cs, while if the thermal flux is high, the neutron capture on 135Xe proceeds faster than the beta-decay because the capture cross section is of the order of 10<sup>6</sup> barns. In comparison, most fission products are produced directly by fission and/or by the beta-decay of other fission fragments.

**Figure 11.**

*The 240/239Pu ratio is not dependent on the fuel enrichment. In these simulations, the half-life of 239Np has arbitrarily been set to 1 hour to emphasize the (mostly) simple dependence on fluence.*

#### **Figure 12.**

*The concentrations of 136Xe and 135Cs relative to other fission products, such as 134Xe, are determined by the magnitude of the thermal neutron flux. This sensitivity arises because the rate of thermal neutron capture on 135Xe competes with the rate of decay of 135Xe to 135Cs.*

135Xe is a well-known reactor poison. Under steady-state conditions, it reaches an equilibrium value that depends on the rate of neutron capture on 135Xe (producing 136Xe) and on the concentration of 135I. Changes in the thermal flux cause short-term fluctuations in the level of xenon-135, and after about 40 to 50 hours of steady flux, the xenon-135 level settles to a new equilibrium value that reflects the new flux. Such changes in flux, as well as shutdowns and restarts of the reactor, also affect the 136Xe/134Xe and 135Cs/137Cs ratios.

#### *4.2.2 The dependence of 136Xe on the reactor thermal neutron flux*

In a reactor, 136Xe is produced by three main mechanisms: as a direct fission product, from the β-decay of 136I, and from the 135Xe(n,γ) 136Xe reaction. The first two mechanisms determine the so-called cumulative fission yield, which is about 7% per fission for both uranium and plutonium.

Under steady-state conditions, the growth rate of 136Xe is [38],

$$\begin{aligned} \dot{N}_{136} &= \overline{f}_{136} \Gamma_f + N_{135\text{Xe}}^{equilibrium} \phi \sigma \\ \dot{N}_{136} &= \overline{f}_{136} \Gamma_f \left[ 1 + \frac{\overline{f}_{135\text{Xe}}}{\overline{f}_{136\text{Xe}}} \frac{\phi \sigma}{\lambda_{135\text{Xe}} + \phi \sigma} \right] \end{aligned} \tag{5}$$

where Γ<sub>f</sub> is the fission rate, f̄<sub>A</sub> is the burn-weighted cumulative fission yield of nucleus A, ϕ is the thermal neutron flux, σ is the capture cross section on 135Xe, and N<sup>equilibrium</sup> represents the 135Xe equilibrium value.

The 136Xe production rate falls between two limits,

$$\begin{aligned} \phi \sigma \gg \lambda_{135\text{Xe}} : \dot{N}_{136} &= \overline{f}_{136} \Gamma_f \left( 1 + \frac{\overline{f}_{135\text{Xe}}}{\overline{f}_{136\text{Xe}}} \right) \\ \phi \sigma \ll \lambda_{135\text{Xe}} : \dot{N}_{136} &= \overline{f}_{136} \Gamma_f \end{aligned} \tag{6}$$

The ratio of xenon-136 to xenon-134 is then,

$$\frac{N_{136}}{N_{134}} = \frac{\overline{f}_{136\text{Xe}}}{\overline{f}_{134\text{Xe}}} \left( 1 + \frac{\overline{f}_{135\text{Xe}}}{\overline{f}_{136\text{Xe}}} \frac{\phi \sigma}{\lambda_{135\text{Xe}} + \phi \sigma} \right) \tag{7}$$

From Eq. 7, the 136Xe/134Xe ratio equilibrates on a (flux-dependent) time scale of weeks. **Figure 13** displays the situation for different values of the thermal flux.
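The equilibrium ratio in Eq. 7 can be evaluated and inverted numerically. In this sketch the cumulative yields are rough 235U thermal values, and the decay constant in the denominator is taken to be that of 135Xe, consistent with Eq. 10:

```python
# Evaluate the flux diagnostic of Eq. 7 and invert it by bisection.
# Yields are approximate 235U thermal cumulative fission yields; SIGMA
# is the 135Xe thermal capture cross section quoted in the text.
import math

F134, F135, F136 = 0.078, 0.065, 0.063   # approximate cumulative yields
SIGMA = 2.6e6 * 1e-24                    # 135Xe capture, cm^2
LAM_XE135 = math.log(2) / (9.14 * 3600)  # 135Xe decay constant, 1/s

def xe_ratio(phi):
    """Equilibrium 136Xe/134Xe ratio as a function of thermal flux."""
    x = phi * SIGMA
    return (F136 / F134) * (1.0 + (F135 / F136) * x / (LAM_XE135 + x))

def flux_from_ratio(r, lo=1e10, hi=1e16):
    """Recover the flux from a measured ratio; bisect in log space
    since the ratio is monotonic in phi."""
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if xe_ratio(mid) < r:
            lo = mid
        else:
            hi = mid
    return mid

phi_true = 5e13
assert abs(flux_from_ratio(xe_ratio(phi_true)) / phi_true - 1.0) < 1e-3
```

The ratio saturates once ϕσ ≫ λ, which is why the diagnostic loses sensitivity at high flux, as noted below for **Figure 15**.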

#### *4.2.3 The relationship between thermal flux and the 135Cs/137Cs ratio*

In this section, we derive an analytic expression relating the concentration of cesium-135 to the thermal neutron flux. Cesium-135 is produced through the decay chain 135I ➔ 135Xe ➔ 135Cs, which is complicated by the transmutation of xenon-135 *via* neutron capture. The number of cesium-135 atoms produced in a time t is given by,

**Figure 13.**

*The 136Xe/134Xe ratio equilibrates to a value that is determined by the thermal neutron flux. Thus, this ratio can be used to deduce the flux. The residual slope of the plateaus in the curves is caused by the slow evolution of the fuel composition, and hence the cumulative fission yields, from ref. [38].*

$$N_{135\text{Cs}}(t) = \lambda_{135\text{Xe}} \int_0^t dt' \left[ \frac{\frac{\overline{f}_{135\text{Xe}}}{\overline{f}_{137\text{Cs}}} \dot{N}_{137\text{Cs}} - \dot{N}_{135\text{I}} - \dot{N}_{135\text{Xe}}}{\lambda_{135\text{Xe}} + \phi \sigma} \right] \tag{8}$$

Here Ṅ<sub>A</sub> is the growth rate of nuclide A.

An analytic expression for 135Cs/137Cs ratio can be obtained by assuming that the concentration of iodine-135 and xenon-135 are at their equilibrium values and that the thermal neutron flux is constant. Under these assumptions,

$$\frac{N_{135\text{Cs}}}{N_{137\text{Cs}}} = \frac{\overline{f}_{135\text{Xe}}}{\overline{f}_{137\text{Cs}}}\, \frac{\lambda_{135\text{Xe}}}{\lambda_{135\text{Xe}} + \phi \sigma} \left[ 1 + \frac{\phi \sigma}{\lambda_{135\text{Xe}}}\, \frac{1}{\lambda_{135\text{I}}\, T_{irrad}} \right] \tag{9}$$

Here λ135I is the decay constant for iodine-135, and Tirrad is the total irradiation time in the reactor. The form of the second term, which gives rise to the term inversely proportional to the total irradiation time, arises from the proper treatment of startup transients.

#### *4.2.4 The effect of reactor shutdowns*

After a reactor shutdown, the operators must wait long enough to ensure that all the 135Xe and 135I have decayed before restarting the reactor. Failure to do so was one of the errors that occurred in the Chernobyl accident. Each time the reactor is shut down, the concentration of 135Cs is increased. For constant neutron flux operation, shutdowns add P terms identical to the second term in Eq. 9, where P is the number of shutdowns before the fuel is removed. The minimum value of P is one, and the xenon ratio becomes,

$$\frac{N_{136\text{Xe}}}{N_{134\text{Xe}}} = \frac{\overline{f}_{136\text{Xe}}}{\overline{f}_{134\text{Xe}}} \left[ 1 + \frac{\phi \sigma}{\lambda_{135\text{Xe}} + \phi \sigma} \frac{\overline{f}_{135\text{Xe}}}{\overline{f}_{136\text{Xe}}} \left( 1 - \frac{P}{\lambda_{135\text{I}} T_{irrad}^{total}} \right) \left( 1 + \frac{\lambda_{135\text{I}}}{\lambda_{135\text{Xe}} + \phi \sigma} \right) \right], \tag{10}$$

while the cesium ratio becomes,

$$\frac{N_{135\text{Cs}}}{N_{137\text{Cs}}} = \frac{\overline{f}_{135\text{Xe}}}{\overline{f}_{137\text{Cs}}}\, \frac{\lambda_{135\text{Xe}}}{\lambda_{135\text{Xe}} + \phi \sigma} \left[ 1 + \frac{\phi \sigma}{\lambda_{135\text{Xe}}}\, \frac{P}{\lambda_{135\text{I}}\, T_{irrad}^{total}} \right] \tag{11}$$

Here T<sup>total</sup><sub>irrad</sub> is the sum of all the irradiation times to which the fuel was exposed. The shutdown correction becomes significant at high flux for the cesium ratio, but not for the xenon ratio; the difference is that for xenon the correction is relative to the constant 1. A measurement of the cesium ratio in spent fuel thus contains information on both the flux and the number of shutdowns.
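The shutdown dependence of the xenon ratio can be sketched directly from Eq. 10, using its printed grouping (yields and constants are approximate assumed values):

```python
# Evaluate Eq. 10: the 136Xe/134Xe ratio after P shutdowns over a total
# irradiation time T_irrad. Yields are approximate 235U thermal values.
import math

F134, F135, F136 = 0.078, 0.065, 0.063
SIGMA = 2.6e6 * 1e-24                  # 135Xe capture, cm^2
LAM_XE = math.log(2) / (9.14 * 3600)   # 135Xe decay constant, 1/s
LAM_I = math.log(2) / (6.57 * 3600)    # 135I decay constant, 1/s

def xe_ratio(phi, P, T_irrad):
    """136Xe/134Xe for flux phi, P shutdowns, total irradiation T_irrad (s)."""
    x = phi * SIGMA
    shutdown = (1.0 - P / (LAM_I * T_irrad)) * (1.0 + LAM_I / (LAM_XE + x))
    return (F136 / F134) * (1.0 + (x / (LAM_XE + x)) * (F135 / F136) * shutdown)

T = 300 * 86400                        # ~300 days at power
r1 = xe_ratio(5e13, P=1, T_irrad=T)
r5 = xe_ratio(5e13, P=5, T_irrad=T)
assert r5 < r1   # each shutdown slightly lowers the xenon ratio
```

As the text notes, the shutdown term here is a small correction relative to the constant 1, so the xenon ratio is far less sensitive to P than the cesium ratio is.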

#### *4.2.5 Comparisons between the theoretical and experimental cesium and xenon ratios*

Maeck *et al.* measured [39] fission product isotope ratios by irradiating highly enriched uranium targets in the Advanced Test Reactor (ATR) and in the Engineering Test Reactor (ETR) at Idaho National Laboratory. By placing the targets at different distances from the reactor mid-plane, each target was exposed to a different flux, ranging between 6x10<sup>12</sup> and 1.5x10<sup>14</sup> n/cm<sup>2</sup>/sec. The ATR targets were 93% 235U and were irradiated for 100 days over a 335-day period, with several shutdowns. The ETR targets were more than 99% pure 235U; one set of targets was irradiated for 20 days and another for 180 days over 320 days, with some unspecified number of shutdowns.

**Figures 14** and **15** show the experimental data and the calculated isotope ratios. As can be seen, the 137Cs/135Cs ratio scales approximately linearly with the flux, but the reactor shutdowns cause the isotopic ratio to increase with flux at a slower rate at high fluxes. **Figure 15** shows the flux dependence of the 136Xe/134Xe ratio, which is a sensitive diagnostic at low, but not high, fluxes. The 136Xe/134Xe ratio is an ideal probe for low-flux graphite reactors, for example.

#### *4.2.6 Monitoring reactor on-off times*

To use the Cs and Xe ratios to address our main problem, namely, the plutonium grade of fuel that is being reprocessed, there remains the problem of knowing the irradiation times and the number of shutdowns. There are several techniques and the usefulness of these depends on the standoff distance from the reactor. We discuss only one method here, namely, reactor neutrino monitoring.

Several neutrino experiments have shown very successfully that measurements of the total number of neutrinos emitted from a reactor can be used to monitor reactor on-off times. Two examples are shown in **Figure 16**.

#### **Figure 14.**

*The 137Cs/135Cs ratio. The measurements are the data of Maeck et al. [39]. The deviation of the data from the one-shutdown scenario shows the strong dependence of this ratio on the number of shutdowns. The different colors used to display the data points distinguish the different reactors or irradiation times used. The open circles and triangles correspond to targets irradiated at the ATR reactor, while the blue circles are data from targets in the ETR reactor.*

**Figure 15.**

*The 136Xe/134Xe ratio. The measurements are the data of Maeck [39]. Reactor shutdowns have very little effect on this ratio. The xenon ratio is a very good diagnostic for low-flux, but not for high-flux, reactors.*

Thus, the grade of reactor fuel can be determined if:

a. The original unirradiated fuel is some form of uranium (238U + 235U) of unknown enrichment.

#### **Figure 16.**

*Antineutrino monitoring of reactors allows accurate measurements of the reactor on and off burn times. (Left) From the SONGS detector, data taken at the San Onofre reactor [40]. (Right) From the PROSPECT detector, data taken at the 85 MW HFIR reactor at Oak Ridge [41].*


This is a nice example of combining two diagnostics to gain enhanced information for nonproliferation, namely, combining reactor fission gas data and antineutrino data.

#### **5. Fission applications to nuclear medicine**

This section is intended as a short survey of the most common nuclear medical procedures that were derived from fission. In general, uses of radioisotopes and nuclear techniques in medicine fall into two main classes: (a) imaging and diagnosing structures in the body and their functioning, and (b) treating diseases. Fission isotopes contribute to both classes and many reactor facilities around the world are involved in reprocessing spent fuel to produce medical isotopes. The advances in nuclear medicine involve highly multi-disciplinary research and biology and physiology are an important aspect of these studies because of the importance of accurately targeting the organ of interest.

#### **5.1 Nuclear imaging**

The 99Mo/99mTc pair is the most used medical isotope system; 99mTc is used for imaging and can be delivered to organs or tissues when injected into the body. Nuclear imaging can detect biochemical changes in an organ (as opposed solely to changes in size) and, thus, can be used to inform decisions in treating disease. The imaging technique used is known as single photon emission computed tomography (SPECT). SPECT [42] uses gamma rays emitted in the decay of a radionuclide. The 99mTc used in SPECT comes from the 2.75-day decay of the fission product 99Mo. Most of the 99Mo is produced in reactors that use highly enriched uranium (HEU) targets, although there is considerable research on moving production to low-enriched uranium. The isomer 99mTc has a half-life of 6 hours and emits a 140 keV photon. Some of the photons reach the detector after escaping the body and can be used to create a 3-D image. This is done by taking 2-D projections at different angles. The 3-D dataset can be manipulated into slices along any axis. One of the advantages of SPECT is that it can be used to study blood flow. The temporal resolution of SPECT is limited, so only time-averaged views are possible. There has been some discussion in the literature [43, 44] about the advantages and disadvantages of SPECT versus positron emission tomography (PET). However, SPECT myocardial perfusion imaging is used almost 20 times more frequently than PET in clinical practice in the United States [44].
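The generator relation behind 99mTc delivery, i.e., the 6-hour isomer growing in from 2.75-day 99Mo, is the standard parent-daughter (Bateman) solution. A sketch, ignoring the branching fraction of 99Mo decays that feed the isomer and assuming a freshly eluted (Tc-free) column:

```python
# Parent-daughter (Bateman) activity for a 99Mo/99mTc generator.
# Branching fraction to the isomer is ignored for simplicity.
import math

LAM_MO = math.log(2) / 66.0   # 99Mo decay constant, 1/h (66 h half-life)
LAM_TC = math.log(2) / 6.0    # 99mTc decay constant, 1/h (6 h half-life)

def tc_activity(t_hours, a_mo0=1.0):
    """99mTc activity at time t after elution, for initial 99Mo activity a_mo0."""
    return (LAM_TC / (LAM_TC - LAM_MO)) * a_mo0 * (
        math.exp(-LAM_MO * t_hours) - math.exp(-LAM_TC * t_hours)
    )

# The daughter activity peaks roughly a day after elution, which is why
# generators are typically eluted on a daily schedule:
acts = [tc_activity(t) for t in range(0, 72)]
t_peak = acts.index(max(acts))
assert 20 <= t_peak <= 26
```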

#### **5.2 Radionuclide therapy**

In targeted radionuclide therapy (TRT) [45], radiopharmaceuticals are delivered to tumor cells by targeting specific characteristics of the tumor, such as antigens. In this way, a toxic level of the radionuclide can be delivered to the site. The radionuclide of choice emits charged particles with the stopping range required for effective destruction of malignant tissue; these include beta electrons, Auger electrons, and alpha particles. A commonly used fission isotope is <sup>90</sup>Y. As a high-energy beta-emitter, <sup>90</sup>Y is used for the treatment of larger tumors and is successfully used in targeted radiotherapy of the liver. Another application of TRT is in the treatment of bone cancers. A large fraction of breast and prostate cancer patients develop bone metastases, which can cause severe pain. Both <sup>153</sup>Sm and <sup>89</sup>Sr deliver high radiation doses to bone metastases and micrometastases in the bone marrow. The field of TRT is ever growing, with fission fragments playing a major role. For example, <sup>131</sup>I, which has traditionally been used to treat thyroid cancer, is now being studied for TRT in metastatic melanoma [46] (**Figure 17**).

#### **Figure 17.**

*A toxic level of radiation is delivered to a diseased site by attaching a radionuclide to a molecular carrier that binds to the site or tumor.*

### **6. Concluding remarks**

Nuclear fission has been and continues to be applied to a very broad range of needs. Many of these applications have developed into entire subfields of their own, with their own dedicated technical journals. This chapter includes a discussion of nuclear reactors, fission neutrino oscillation studies, nuclear nonproliferation, and nuclear medicine. However, there are many more applications of nuclear fission, some of which are discussed in Ref. [47]. Nuclear fission is very important for other fields, such as nuclear astrophysics, geophysics, nuclear submarines, and nuclear propulsion in space. No doubt the field of applied nuclear fission will continue to grow as a healthy scientific area.

### **Author details**

Anna C. Hayes Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM, USA

*Address all correspondence to: anna_hayes@lanl.gov

© 2023 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### **References**

[1] Trellue H. Development of Monteburns: A code that links MCNP and ORIGEN2 in an automated fashion for burnup calculations. Thesis, Los Alamos National Laboratory internal report. 1998;**LA-13514-T**:1-190

[2] Nieto MM, Hayes AC, Wilson WB, Teeter CM, Stanbro WD. Detection of antineutrinos for nonproliferation. Nuclear Science and Engineering. 2005; **149**(3):270-276. DOI: 10.13182/NSE05- A2493

[3] Wilson WB, England TR, Ozer O, Wessol DE. Actinide decay power. Transactions of the American Nuclear Society. 1979;**32**:737

[4] Wilson WB, England TR, LaBauve RJ, Mitchell JA. Calculated radionuclide inventories of high-exposure LWR fuels. Nuclear Safety. 1988;**29**:177

[5] Torgerson DF, Shalaby BA, Pang S. CANDU technology for generation III+ and IV reactors. Nuclear Engineering and Design. 2006;**236**:1565-1572

[6] Thorium fuel cycle: Potential benefits and challenges. IAEA-TECDOC-1450. Vienna: IAEA; 2005

[7] Status and Advances in MOX Fuel Technology. Technical Reports Series 415. Vienna: IAEA; 2003

[8] Hayes AC, Trellue HR, Nieto MM, Wilson WB. Antineutrino monitoring of burning mixed oxide plutonium fuels. Physical Review C. 2012;**85**:024617

[9] Kulesza JA, Adams TR, Armstrong JC, Bolding SR, Brown FB, Bull JS, et al. MCNP® Code Version 6.3.0 Theory & User Manual. LA-UR-22-30006, Rev. 1. Los Alamos, NM, USA: Los Alamos National Laboratory Tech. Rep; 2022

[10] Wilson WB, Cowell ST, England TR, Hayes AC, Moller P. A Manual for CINDER'90 Version 07.4 Codes and Data. Los Alamos, NM: Los Alamos National Laboratory, LA-UR-07-8412; 2007

[11] Reines F, Cowan CL Jr. Detection of the free neutrino. Physical Review. 1953;**92**:830

[12] Reines F, Cowan CL Jr. Free antineutrino absorption cross section. I. Measurement of the free antineutrino absorption cross section by protons. Physical Review. 1959;**113**:273

[13] The Reines-Cowan Experiments. Detecting the poltergeist. Los Alamos Science. 1997;**25**:3

[14] Pontecorvo B. Neutrino experiments and the problem of conservation of Leptonic charge. Zhurnal Eksperimental'noi i Teoreticheskoi Fiziki. 1968;**53**:1717

[15] Mention G, Fechner M, Lasserre T, Mueller TA, Lhuillier D, Cribier M, et al. The reactor antineutrino anomaly. Physical Review D. 2011;**83**:073006

[16] Apollonio M et al. Limits on neutrino oscillations from the CHOOZ experiment. Physics Letters B. 1999;**466**:415

[17] Boehm F et al. Search for neutrino oscillations at the Palo Verde nuclear reactors. Physical Review Letters. 2000; **84**:3764

[18] Boehm F et al. Results from the Palo Verde neutrino oscillation experiment. Physical Review D. 2000;**62**:072002

[19] Eguchi K et al. (KamLAND Collaboration). First results from KamLAND: Evidence for reactor antineutrino disappearance. Physical Review Letters. 2003;**90**(2):021802

[20] Daya Bay Collaboration. Observation of electron-antineutrino disappearance at Daya bay. Physical Review Letters. 2012;**108**(17):171803

[21] RENO Collaboration. Observation of electron-antineutrino disappearance at RENO. Physical Review Letters. 2012; **108**(18):191802

[22] Abe Y et al. (Double Chooz Collaboration). Improved measurements of the neutrino mixing angle θ<sub>13</sub> with the double Chooz detector. Journal of High Energy Physics. 2014;**86**(10)

[23] Huber P. Determination of antineutrino spectra from nuclear reactors. Physical Review C. 2011;**84**: 024617

[24] Huber P. Erratum. Physical Review C. 2012;**85**:029901

[25] Mueller TA et al. Improved predictions of reactor antineutrino spectra. Physical Review C. 2011;**83**: 054615

[26] An FP et al. Evolution of the reactor antineutrino flux and spectrum at Daya Bay. Physical Review Letters. 2017;**118**: 251801

[27] Hayes AC, Vogel P. Reactor Neutrino Spectra. Annual Review of Nuclear and Particle Science. 2016;**66**: 219-244

[28] Schopper HF. Weak Interactions and Nuclear Beta Decay. Amsterdam: North-Holland; 1966

[29] Schreckenbach K, Faust HR, von Feilitzsch F, Hahn AA, Hawerkamp K, Vuilleumier JL. Absolute measurement of the beta spectrum from 235-U fission as a basis for reactor antineutrino experiments. Physics Letters. 1981;**99B**: 251

[30] von Feilitzsch F, Hahn AA, Schreckenbach K. Experimental betaspectra from 239Pu and 235U thermal neutron fission products and their correlated antineutrino spectra. Physics Letters. 1982;**118B**:162

[31] Schreckenbach K, Colvin G, Gelletly W, von Feilitzsch F. Determination of the antineutrino spectrum from 235U thermal neutron fission products up to 9.5 MeV. Physics Letters. 1985;**160B**:325

[32] Hahn AA, Schreckenbach K, Gelletly W, von Feilitzsch F, Colvin G, Krusche B. Antineutrino spectra from 241Pu and 239Pu thermal neutron fission products. Physics Letters B. 1989; **218**:365

[33] Hayes AC, Jungman G, McCutchan EA, Sonzogni AA, Garvey GT, Wang XB. Analysis of the Daya bay reactor antineutrino flux changes with fuel burnup. Physical Review Letters. 2018;**120**:022503

[34] Kopeikin V, Skorokhvatov M, Titov O. Reevaluating reactor antineutrino spectra with new measurements of the ratio between 235U and 239Pu β spectra. Physical Review D. 2021;**104**:L071301

[35] Kellett MA, Bersillon O, Mills RW. The JEFF-3.1/3.1.1 radioactive decay data and fission yields sub-libraries. JEFF Report 20, NEA Report No. 6287. 2009:1-147

[36] Chadwick MB et al. The ENDF/B-VIII.0 library. Nuclear Data Sheets. 2011;**112**:2887

[37] Trellue H, Lafreniere P, Maldonado A, Hayes-Sterbentz A, Mehta V. Calculation of Fission Gases from Different Fuel Sources in Molten Salt Reactors, LANL internal report, unpublished

[38] Hayes AC, Jungman G. Determining reactor flux from xenon-136 and cesium-135 in spent fuel. Nuclear Instruments and Methods in Physics Research A. 2012;**690**:68

[39] Maeck WJ, Tromp LR, Duce FA, Emel WA. Isotope Correlation Studies Relative to High Enrichment Test Reactor Fuels, ICP-1156. Idaho: Allied Chemical Corporation; 1978

[40] Bowden N et al. Experimental results from an antineutrino detector for cooperative monitoring of nuclear reactors. Nuclear Instruments and Methods A. 2007;**572**:985-998

[41] Andriamirado M et al. Improved short-baseline neutrino oscillation search and energy spectrum measurement with the PROSPECT experiment at HFIR. Physical Review D. 2021;**103**:032001

[42] Seo Y, Mari C, Hasegawa BH. Technological development and advances in single-photon emission computed tomography/computed tomography. Seminars in Nuclear Medicine. 2008;**38**:177-198

[43] Shrestha U, Sciammarella M, Alhassen F, Yeghiazarians Y, Ellin J, Verdin E, et al. Measurement of absolute myocardial blood flow in humans using dynamic cardiac SPECT and 99mtctetrofosmin: Method and validation. Journal of Nuclear Cardiology. 2017;**24** (1):268-277. DOI: 10.1007/s12350-015- 0320-3

[44] Slomka P, Berman DS, Germano G. Myocardial blood flow from SPECT. Journal of Nuclear Cardiology. 2017;**24** (1):278-281

[45] Williams LE, DeNardo GL, Meredith RF. Targeted radionuclide therapy. Medical Physics. 2008;**35**: 3062-3068

[46] Thivat E et al. Phase I study of [131I] ICF01012, a targeted radionuclide therapy, in metastatic melanoma: MELRIV-1 protocol. BMC Cancer Actions. 2022;**22**:1. DOI: 10.1186/ s12885-022-09495-3

[47] Hayes AC. Applications of nuclear physics. Reports on Progress in Physics. 2017;**80**:026301

#### **Chapter 8**

## Basics of Neutron Imaging

*Eberhard H. Lehmann*

#### **Abstract**

Neutron imaging is established at many neutron sources around the world as a method for noninvasive investigations of samples and objects on the macroscopic scale. Similar to X-ray imaging, it provides the possibility to "look through" materials and to "see" their inner, hidden content. However, owing to the completely different interaction mechanism, neutron imaging provides very different and complementary contrast compared to X-rays, even though the image quality is often about the same. We report on the method's principles, describe the state of the art, and give an outlook on new trends and developments.

**Keywords:** neutron source, research reactor, neutron tomography, nondestructive testing, neutron detection, neutron moderation, thermal neutrons, cold neutrons

#### **1. Introduction**

The discovery of the neutron in 1932, followed by the ability to initiate nuclear fission in a chain reaction, provided the opportunity to release an enormous amount of energy for two purposes: nuclear weapons and nuclear power plants. However, there is a third aspect of using fission reactions: the generation of fields of free neutrons.

For this purpose, the construction of specialized strong neutron sources, the so-called research reactors, started just after World War II; they have been designed and built in many countries all over the world. Their principle is the fission of U-235 in a compact nuclear core with the aim of the highest possible neutron output but minimal energy production. In this case, the energy release is more of a disturbance than a benefit. In the end, the necessity to remove the resulting heat from the reactor core limits the strength of such neutron sources to about 100 MW, with a corresponding neutron flux level of about 10<sup>15</sup> cm<sup>−2</sup> s<sup>−1</sup>.

Such strong neutron sources have been used for many scientific and practical purposes up to today. Besides the training of specialists in reactor technology for power plants and the testing of nuclear fuel in compact cores with their high power density, the neutrons themselves are a target and tool of research.

In the interaction of neutrons with matter, nuclear reactions are initiated that produce new isotopes, which often do not exist in nature because of their short lifetimes. Many such isotopes can be used in medicine for diagnostics and therapy, and as gamma sources for several practical applications. On the other hand, neutrons are scattered by atomic nuclei and in this way provide important information about the investigated materials that is not accessible by other scientific methods.

Neutron imaging, the topic of this chapter, is likewise a method to study materials, operating on the macroscopic scale. As in common X-ray diagnostics, the material under investigation is illuminated by a directed beam of neutrons, and a detector behind it registers the remaining neutron intensity. By comparison with the beam intensity without the sample, the amount of missing (captured or scattered-out) neutrons can be determined and related to the attenuating properties of the investigated material.
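
The quantitative basis of this transmission measurement is the exponential attenuation law, I/I<sub>0</sub> = exp(−Σ·t), with the macroscopic cross section Σ and the material thickness t. A minimal sketch in Python; the cross-section values below are approximate, illustrative assumptions, not measured data:

```python
import math

def transmission(sigma_cm, thickness_cm):
    """Fraction of the incident beam surviving a sample of given thickness,
    following the exponential attenuation law I/I0 = exp(-Sigma * t)."""
    return math.exp(-sigma_cm * thickness_cm)

def thickness_from_transmission(sigma_cm, i_over_i0):
    """Invert the attenuation law to recover the material thickness
    from a measured transmission value."""
    return -math.log(i_over_i0) / sigma_cm

# Illustrative total macroscopic cross sections for thermal neutrons (cm^-1);
# rough textbook-order values, assumed here only for demonstration.
SIGMA = {"Al": 0.10, "Fe": 1.2, "H2O": 3.5}

t_al = transmission(SIGMA["Al"], 5.0)    # 5 cm of Al still transmits ~60%
t_h2o = transmission(SIGMA["H2O"], 1.0)  # 1 cm of water attenuates strongly
```

The strong contrast between aluminium and water in this toy calculation is exactly what makes neutron imaging attractive for moisture and hydrogen detection inside metal casings.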

In order to perform neutron imaging experiments, a dedicated and well-designed facility has to be built next to the neutron source. Such neutron imaging stations are part of the suite of experimental devices around the neutron source (up to now, mainly research reactors).

In this chapter, we will describe how the best conditions for neutron imaging are provided: from the initial fission reaction, through the tuning of a suitable neutron energy in the moderation process and the beam extraction, to the best detector design. We will show some of the best installations for neutron imaging, describe the principles of the interaction of neutrons with matter, and demonstrate with examples how neutron imaging can be used for scientific and technical purposes. In the outlook, we will sketch how neutron imaging is being developed further through methodical improvements and new neutron sources alongside the still-operating research reactors.

#### **2. Reactor-based neutron sources for neutron imaging**

As described above, none of the research reactors were built exclusively for neutron imaging. Other applications, such as isotope production, neutron scattering, or fuel development, have higher priority and importance. Therefore, not all neutron sources can perform neutron imaging at an up-to-date level. The main requirement is access to a suitable beam port, where the most useful neutrons can be extracted and a beam can be formed for the imaging process.

While the neutrons resulting from the fission reaction are very fast (energies on the order of MeV), the most useful and interesting neutrons are very slow (in the range of meV). The reasons for using slow neutrons are their specific interaction properties with matter (the neutron's wavelength is comparable to the atomic distances in solids), their high detection probability, and the high diversity in the cross-sectional data, which allows one to distinguish even individual isotopes very precisely.

In order to slow down fission neutrons over such a large energy range (nine orders of magnitude), a moderation process is necessary. It was found that elastic collisions with very light materials that do not strongly absorb neutrons enable a stepwise reduction of the neutron energy. The lower the mass of the moderator material, the fewer nuclear collisions are needed to reach the intended "thermal energy", that is, equilibrium with the thermal motion of the moderator's molecules. Hydrogen is the ideal material for such moderation. However, natural hydrogen (<sup>1</sup>H) also absorbs neutrons, which reduces the yield of slow neutrons. Much better is the hydrogen isotope deuterium (<sup>2</sup>H), the nucleus with a proton and an additional neutron. It absorbs almost no neutrons but needs somewhat more collisions to arrive at the required thermal equilibrium.
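
The number of collisions needed for thermalization can be estimated from standard slowing-down theory via the mean logarithmic energy decrement ξ per elastic collision. A small illustrative script; the formula is the textbook result, and the 2 MeV starting energy is a typical fission-neutron value assumed here:

```python
import math

def xi(A):
    """Mean logarithmic energy loss per elastic collision for a nucleus
    of mass number A (standard slowing-down theory)."""
    if A == 1:
        return 1.0  # limit of the formula below for A -> 1
    alpha = ((A - 1) / (A + 1)) ** 2
    return 1.0 + alpha * math.log(alpha) / (1.0 - alpha)

def collisions_to_thermalize(A, e_start_ev=2.0e6, e_end_ev=0.025):
    """Average number of elastic collisions needed to moderate a fission
    neutron (~2 MeV) down to thermal energy (~25 meV)."""
    return math.log(e_start_ev / e_end_ev) / xi(A)

for name, A in [("H-1", 1), ("H-2 (deuterium)", 2), ("C-12 (graphite)", 12)]:
    print(f"{name:16s} ~{collisions_to_thermalize(A):5.0f} collisions")
```

The script reproduces the qualitative statement above: hydrogen needs the fewest collisions (about 18), deuterium somewhat more (about 25), and a heavier moderator such as carbon far more.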

About 2.4 fast neutrons are emitted in each fission reaction of U-235, of which one neutron is already needed to continue the chain reaction. Of the remaining 1.4 neutrons, a part is lost by parasitic capture in the moderator or in structural materials of the reactor core. However, there are still enough neutrons remaining, forming a cloud around the reactor core. Because neutrons are neutral particles without charge, there are no methods available to guide them into a desired direction with electromagnetic fields. Therefore, neutrons can only be extracted from the source region by selecting the useful ones, which fly in the direction of a dedicated experimental facility. Neutron flux intensities of only about 10<sup>7</sup> cm<sup>−2</sup> s<sup>−1</sup> are common at imaging stations.

*DOI: http://dx.doi.org/10.5772/intechopen.110403 Basics of Neutron Imaging*

Despite the success of research reactors, spallation neutron sources have been built as replacements and complements; they are useful in particular because of their pulsing option for time-of-flight (TOF) investigations.

#### **3. Principles and options in neutron imaging**

Indeed, as soon as neutron sources became available, several attempts were made to use neutrons for imaging purposes. It was quickly found that such transmission images look similar to X-ray and gamma images, but with alternative contrast features [1, 2].

As sketched in **Figure 1**, neutron images are obtained by placing a two-dimensional area detector behind an object through which a collimated neutron beam is sent. In the beginning of such investigations, photographic film methods were applied. However, as neutrons cannot directly expose the film, a neutron-to-radiation converter has to be used. There are materials with a high absorption probability, such as Gd, Dy, In, or Au, that convert the captured neutrons into another kind of radiation, which then exposes the film.

Film techniques were used until the 1990s (and are used even today for special certified applications), before a "revolution" happened: the introduction of digital neutron imaging detection systems. In this way, the previous limitations of film were overcome, and new techniques in neutron imaging have been developed and established. **Figure 2** provides an overview of the methods available today, all based on the advantages of the digital imaging option.

#### **Figure 1.**

*Sketch of a neutron imaging experiment (not to scale), where an aperture and the collimator define the beam properties.*

#### **Figure 2.**

*Overview of neutron imaging techniques, available in several user facilities (see the list in the appendix).*

#### **Figure 3.**

*Setup of a camera-based neutron detector with its major components; the induced light emission from the neutron-sensitive scintillator screen is sent to the camera via a mirror arranged at 45° inside a light-tight box. With permission from the IAEA [3].*

In particular, the acquisition time for a "valid" neutron image is now on the order of seconds or below, enabling the sampling of high amounts of data, which can be treated accordingly and stored efficiently.


Among the various digital methods, the setup with a highly sensitive digital camera has been found to be the most flexible and efficient. The principal construction is shown in **Figure 3**. Next to the (often expensive) high-performance camera (CCD or CMOS), the neutron-sensitive scintillator screen is of key importance. It converts captured neutrons into visible light, registered pixelwise by the camera sensor. It depends on the particular application whether high spatial resolution, a high frame rate, or the highest quantitative accuracy has to be obtained in the experiments. Accordingly, the composition, layer thickness, and converter material of the scintillator have to be chosen well.

Based on digital imaging data, neutron tomography is established as a routine method, providing access to the 3D volume of an object, as shown in the example in **Figure 4**. In time sequences, a current image can be compared with a previous one, which much enhances the precision with which changes can be quantified. This is shown in

#### **Figure 4.**

*Results of radiography (middle) and tomography (right) measurements with thermal neutrons, when a bronze sculpture from Tibet (seventeenth century) has been investigated concerning its hidden organic content (wood, paper, and dry flowers).*

#### **Figure 5.**

*Using a pixelwise referencing image procedure, it is possible to visualize the moisture accumulation inside the trumpet (online during "playing").*

an example in **Figure 5**, where the moisture accumulation during trumpet playing is determined very locally and with high sensitivity for water deposits.

#### **4. Generic setup of a neutron imaging facility**

Although no two neutron imaging facilities are constructed identically, their most important components can be specified, as shown in **Figure 6**. Starting with the primary source of fast neutrons, the moderators define the neutron energy and the best point for neutron extraction. Because a high amount of gamma radiation is emitted during the fission process in addition to the neutrons, a direct view of the fission region should be avoided. It is better to look toward the peak of moderated neutrons next to the nuclear core. Remaining gamma radiation can be reduced in intensity with suitable filters.

The beam for the neutron imaging facility is formed in the collimators such that a quasi-parallel, flat neutron distribution is obtained at the end of the flight path. Because neutrons also interact with the moisture in the air, the whole neutron flight path should ideally be evacuated. A He-filled tube is also acceptable when vacuum is not possible.

To get a beam with low divergence, the aperture close to the source should be small and the distance to the detector long. Both features strongly reduce the beam intensity, and a compromise has to be found between collimation and the resulting neutron flux. The so-called L/D ratio (L = collimator length, D = aperture diameter) should be on the order of 200 or higher. A useful neutron flux level is on the order of 10<sup>6</sup> cm<sup>−2</sup> s<sup>−1</sup>.
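
The geometric blur at the detector follows directly from this collimation geometry: an object point at distance l in front of the detector is imaged with a penumbra of roughly l/(L/D). A short sketch of this relation; the numbers are illustrative assumptions:

```python
def geometric_blur_um(l_over_d, sample_to_detector_mm):
    """Geometric unsharpness (penumbra) at the detector, in micrometres:
    blur = sample-to-detector distance / (L/D)."""
    return sample_to_detector_mm / l_over_d * 1000.0

# With L/D = 350 and the sample 10 mm in front of the scintillator,
# the blur stays below the pixel size of many camera detectors.
blur = geometric_blur_um(350, 10.0)   # ~28.6 micrometres
```

The sketch makes the compromise quantitative: doubling L/D halves the blur, but (for a fixed collimator length) also cuts the usable flux, since the aperture area shrinks.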

To handle the samples in the beam, it is very useful to install remotely driven manipulators, which enable precise motions in all three directions and additionally

#### **Figure 6.**

*Generic neutron imaging facility (not to scale); the whole system is surrounded by radiation-tight shielding (mostly concrete). With permission from the IAEA [3].*


a rotation stage, mainly used in tomography experiments. The precision of the sample manipulation should be better than 0.1 mm, depending on the detailed requirements.

The detection system (see below for more details) is the most important component of an experiment, but not necessarily the most expensive. Here, the requirement to surround the whole setup with tight shielding comes in. Because not only the direct beam but also scattered radiation has to be enclosed safely during beam operation, the demands on the shielding are high. With the common flux level mentioned before, concrete walls about 1 m thick are required in all directions. In the forward beam direction, a neutron-absorbing beam catcher is additionally needed behind the sample and detector.

Many other installation components are added according to the methodical requirements on demand (see Section 7).

#### **5. Detector options**

As mentioned before, camera-based detection systems are quite common in neutron imaging today. The now-outdated film technology with a neutron-absorbing converter was used for nearly half of the twentieth century. Its advantage was high spatial resolution (about 50 μm) over relatively large fields of view (FOVs) (about 30 × 30 cm). However, the acquisition time per image is on the order of 30 minutes (exposure, handling, development, fixation, and drying) at usual flux levels. This is not competitive today. Furthermore, film methods are not linear in their response, have a low dynamic range, and cannot be used for precise quantification.

The abovementioned camera detector technology has several degrees of freedom: number of pixels, spectral range, efficiency, pixel size, dynamic range, and noise behavior. In

#### **Figure 7.**

*Detectors for neutron imaging with their approximate working range w.r.t. spatial and time resolution, covering several orders of magnitude. With permission from the IAEA [3].*

addition, the optical coupling system is also relevant for the performance. Because the number of photons from the scintillator is always limited, lenses with small f-numbers are preferred, which makes a trade-off with geometric aberrations necessary.

Some more detector options are given in **Figure 7**: flat panels based on amorphous silicon or CMOS technology, pixel sensors with absorber-doped microchannels, and intensified camera systems. Recent developments in camera sensors with the highest readout capability already allow counting the neutrons event by event [4].

With this high flexibility and variability, the operators of neutron imaging facilities (and their users) have to decide which setups are most useful for their specific study. Generally, neutron imaging is limited by the number of neutrons. More neutrons would allow higher frame rates, higher spatial resolution, or better counting statistics, but never all of these at the same time.
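
This trade-off is a direct consequence of Poisson counting statistics: the relative noise of a pixel value scales as 1/√N with the number N of detected neutrons. A small illustration; the flux and efficiency values are assumed, typical-order numbers:

```python
import math

def relative_noise(flux_cm2_s, pixel_um, efficiency, exposure_s):
    """Poisson relative error 1/sqrt(N) for one detector pixel, where N is
    the expected number of detected neutrons during the exposure."""
    area_cm2 = (pixel_um * 1e-4) ** 2
    n = flux_cm2_s * area_cm2 * efficiency * exposure_s
    return 1.0 / math.sqrt(n)

# Assumed numbers: 1e7 n/cm^2/s at the detector, 20% detection efficiency.
coarse = relative_noise(1e7, 100, 0.2, 10)  # 100 um pixels, 10 s exposure
fine = relative_noise(1e7, 10, 0.2, 10)     # 10 um pixels, same exposure
# Shrinking the pixel 10x collects 100x fewer neutrons per pixel,
# so the relative noise rises by a factor of 10.
```

Recovering the noise level of the coarse pixels at the fine pixel size would require a 100 times longer exposure, which is why spatial resolution, frame rate, and statistics cannot all be maximized simultaneously.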

#### **6. User lab function in a global context**

Large-scale neutron sources (research reactors and spallation neutron sources) have been built as "user labs" with worldwide access to beam time at high-performing neutron research instruments, mainly for neutron scattering. Because the counting statistics at such devices are often limited by the neutron beam intensity, high source power is required.

The reactor at the Institut Laue-Langevin (ILL) in Grenoble, France, can be seen as a template for this kind of neutron source facility. It has been operational since 1971, is financed by 14 European countries, and provides 40 neutron research instruments. Access to these instruments is organized and formalized through research proposals and supervised by scientific boards. Recently, a neutron imaging facility (NEXT) has also been installed, about 50 years after the reactor's start, indicating the increasing importance of neutron imaging techniques in "neutrons for society" (according to ILL marketing).

In the meantime, many other institutes have followed the ILL scheme with their sources, often nationally operated and organized. A list of currently available user lab sources (with options for neutron imaging) is given in the Appendix.

For a while, until about 2005, neutron imaging was seen as a simple technique without high scientific value. However, using modern imaging detection systems based on digital and highly efficient methods, neutron imaging is now widely considered an important tool for many scientific and industrially relevant applications in various fields where other methods are limited and often fail.

Because the experimental infrastructure for neutron imaging at a high technical level can become quite expensive, it is sensible to share these devices in the user lab scheme as well, following that for neutron scattering, including the access rules.

High-performance neutron sources are unique with respect to their specific layout and particular performance at diverse beam lines. For neutron imaging, however, thermal and cold neutrons are the most common options; only a few facilities employing fast or epithermal neutrons are in use. Unsurprisingly, the respective beam lines for neutron imaging have also been built individually. Owing to the user lab function, high flexibility in the infrastructure with respect to beam size, collimation, intensity, and spectrum is required. The facility ICON at the spallation neutron source SINQ, Paul Scherrer Institut (PSI), Switzerland, is shown in **Figure 8** as an example. The neutron spectrum at ICON is given in

#### **Figure 8.**

*Layout of the ICON neutron imaging facility for cold neutrons at PSI with the description of major installation features.*

**Figure 9** in comparison with the other two beams at SINQ used for neutron imaging, demonstrating the overall high flexibility with respect to penetration power and detection sensitivity.

An overview of the current neutron imaging options and modalities was already given in **Figure 2**; accordingly, special devices and setups need to be available. However, not all sources provide the full set of options or the same conditions. Potential applicants have to check which facility best fits their demands. A pretest at one or more facilities might be useful for developing the perfect setup for a specific request.

#### **7. Methodical features in neutron imaging**

The overview of "standard" and "advanced" methods was already given in **Figure 2**. Here, we describe particular features and their advantages for research with neutrons in more detail.

Most of the different methods are based on the transmission image sketched in **Figure 1**. Already this "radiography" setup offers degrees of freedom: larger or smaller fields of view (FOVs), depending on the sample size, and finer or coarser spatial resolution. Because camera-based detection systems are coupled to the primary sensor (the scintillator screen), the relation between FOV and pixel size can be managed by variable lens systems. The current best layout with respect to high spatial resolution [5] enables a pixel size of 2 μm with an FOV of 5 × 5 mm. On the other hand, the largest FOV is limited by the neutron beam dimensions to about 40 × 40 cm. Even larger (but flat, e.g., paintings) objects can be studied in a scanning mode, taking images at different positions by moving the sample and stitching them together.

#### **Figure 9.**

*Normalized neutron spectra at the PSI imaging facilities NEUTRA, ICON, and BOA and the slope of the interaction probability with Al, rising toward longer wavelengths (lower neutron energies). With permission from the IAEA [3].*

Neutron tomography allows one to study samples in their third dimension as well. The method works similarly to X-ray tomography in hospitals. However, neutron sources cannot be moved around the object: the scanning from different directions has to be done by rotating the sample around a vertical (or horizontal) axis. A number of "projections" are taken over an angular range of at least 180° in sequential steps. They are used for the volume reconstruction with mathematical procedures like "filtered back-projection" in the inverse Fourier space. The 3D neutron image of the investigated object is built from a voxel matrix of attenuation coefficients. With the help of visualization tools, slices through the object at arbitrary positions can be calculated and displayed. Regions with the same voxel value can be segmented and measured. The example in **Figure 4** compares a two-dimensional neutron radiography image of a Tibetan bronze sculpture with its 3D tomographic reconstruction, cut in the middle for this image.
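
The filtered back-projection step can be illustrated with a minimal, NumPy-only sketch for parallel-beam geometry, using nearest-neighbor rotation and an ideal ramp filter. This is a toy demonstration under simplifying assumptions; a real facility would use dedicated reconstruction software with proper interpolation and filter windows:

```python
import numpy as np

def rotate_nn(img, angle_rad):
    """Rotate a square image about its centre (nearest-neighbour sampling)."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.indices((n, n))
    # inverse mapping: sample the source at the back-rotated coordinates
    x = (xs - c) * np.cos(angle_rad) + (ys - c) * np.sin(angle_rad) + c
    y = -(xs - c) * np.sin(angle_rad) + (ys - c) * np.cos(angle_rad) + c
    xi = np.clip(np.round(x).astype(int), 0, n - 1)
    yi = np.clip(np.round(y).astype(int), 0, n - 1)
    out = img[yi, xi]
    out[(x < 0) | (x > n - 1) | (y < 0) | (y > n - 1)] = 0.0  # outside -> 0
    return out

def fbp(sinogram, angles):
    """Filtered back-projection with an ideal ramp filter in Fourier space."""
    n = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    recon = np.zeros((n, n))
    for proj, a in zip(filtered, angles):
        smeared = np.tile(proj, (n, 1))   # smear each projection back
        recon += rotate_nn(smeared, a)    # along its original direction
    return recon * np.pi / len(angles)

# Phantom: a small bright disk, off-centre (row 28, column 40)
n = 64
yy, xx = np.indices((n, n))
phantom = (((xx - 40) ** 2 + (yy - 28) ** 2) < 36).astype(float)

angles = np.linspace(0, np.pi, 90, endpoint=False)
sino = np.array([rotate_nn(phantom, -a).sum(axis=0) for a in angles])
recon = fbp(sino, angles)
```

The forward projection (rotate, then sum along one axis) and the back-projection (smear, then rotate back) are exact adjoints of each other here, which is why the ramp-filtered sum reproduces the disk at its original position.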

Real-time image sequences can be obtained when the detector provides high frame rates and corresponding read-out features. It is the advantage of modern detector options (see Section 5) to have such capabilities, at least at facilities with the required beam intensity.

Stroboscopic imaging is most useful for the observation of repetitive processes, such as engines. In this way, even fast motions can be investigated by synchronizing the detection system with the process itself. Snapshots with durations of milliseconds are obtained and stacked into a valid image with good statistical accuracy. A rotation rate of about 800 rpm has been studied successfully [6].
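
The stacking step can be sketched as a phase-binning procedure: each snapshot is assigned to a phase of the periodic process, and all snapshots in the same phase bin are averaged, accumulating statistics over many cycles. A minimal illustration with synthetic data; the 800 rpm period and the Poisson noise model are assumptions for the demonstration:

```python
import numpy as np

def stroboscopic_stack(frames, timestamps_s, period_s, n_bins):
    """Sort short, noisy 2D snapshots of a repetitive process into phase
    bins and average each bin, trading time resolution within one cycle
    for statistical accuracy accumulated over many cycles."""
    frames = np.asarray(frames, dtype=float)
    phases = (np.asarray(timestamps_s) % period_s) / period_s   # in [0, 1)
    bins = np.minimum((phases * n_bins).astype(int), n_bins - 1)
    stacked = np.zeros((n_bins,) + frames.shape[1:])
    counts = np.zeros(n_bins)
    for f, b in zip(frames, bins):
        stacked[b] += f
        counts[b] += 1
    return stacked / np.maximum(counts, 1)[:, None, None], counts

# Synthetic example: an 800 rpm process (period 75 ms) observed with
# 1000 short snapshots taken at arbitrary times.
rng = np.random.default_rng(0)
period = 60.0 / 800.0
times = rng.uniform(0, 100 * period, size=1000)
frames = rng.poisson(5.0, size=(1000, 8, 8))   # noisy stand-in images
avg, counts = stroboscopic_stack(frames, times, period, n_bins=10)
```

Each of the ten phase-bin images averages roughly a hundred snapshots, so its pixel noise is about ten times lower than that of a single snapshot.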

Energy-selective neutron imaging is relevant in particular for the study of crystalline solid materials like metals. Their elastic scattering behavior with neutrons is given by the Bragg condition:

$$n \cdot \lambda = 2d \cdot \sin \theta \tag{1}$$

with the neutron wavelength λ, the lattice spacing d, and the scattering angle θ. By choosing the right neutron wavelength λ (corresponding to the neutron energy via the de Broglie relation), crystallites with the same orientation θ can be visualized on the macroscale. In this way, textures and their modifications can be seen and analyzed even for large structures. The example in **Figure 10** shows the texture of an Al sample near a polycrystalline weld.

Because the initial beam at a neutron imaging facility is spread over a wide energy range (a Maxwellian distribution around the mean thermal (25 meV) or cold (3 meV) energy), such investigations are only possible with narrowed energy bands. Three techniques are in use to select neutrons of specific energies: turbines with tilted absorber blades, double-crystal settings, and choppers for time-of-flight (TOF) operation. In all cases, the number of usable neutrons is strongly reduced, and the acquisition time has to be extended accordingly.
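
The link between neutron energy, wavelength, and time of flight that these selectors exploit follows from the de Broglie relation λ = h/(m·v). A short sketch with CODATA constants:

```python
import math

H = 6.62607015e-34       # Planck constant, J s
M_N = 1.67492749804e-27  # neutron mass, kg
EV = 1.602176634e-19     # J per eV

def wavelength_angstrom(energy_mev):
    """de Broglie wavelength of a neutron with the given kinetic energy (meV)."""
    e_j = energy_mev * 1e-3 * EV
    return H / math.sqrt(2 * M_N * e_j) * 1e10

def tof_wavelength_angstrom(flight_path_m, time_s):
    """Wavelength selected by a chopper for a given time of flight."""
    v = flight_path_m / time_s
    return H / (M_N * v) * 1e10

print(wavelength_angstrom(25.0))  # thermal: ~1.8 Angstrom
print(wavelength_angstrom(3.0))   # cold:    ~5.2 Angstrom
```

This is why thermal and cold beams probe the Bragg condition of common metals so well: their wavelengths of a few Ångström are comparable to typical lattice spacings d.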

Neutron grating interferometry is based on the coherence properties of the interaction of neutrons with matter. In the understanding of neutrons as waves according to the de Broglie relation, a phase can be attributed to them in addition to their amplitude [7]. If coherent neutrons with the same phase interact with samples of suitable structure, a phase shift occurs in different regions of the sample, which can be analyzed with a grating setup [8]. With this technique, magnetic domain walls can be studied online during the magnetization process [9]. An extension of this method is the determination of the "dark-field image" of a sample with structures in the μm to nm range, where small-angle scattering happens [10].

#### **Figure 10.**

*Structural analysis by neutron imaging of a rolled Al sample near a weld using neutrons with different wavelengths; at long wavelengths (5.8 Å) and in the polychromatic beam, no texture is visible, in contrast to other wavelengths.*

Diffraction imaging is based on the detection of neutrons scattered to the side of a sample by means of detection systems arranged around it. The transmitted beam is only used as a monitor to see where the initial beam hits the sample. This method is useful mainly for samples with large grain structures and for the characterization of single crystals.

A simple setup consists of a second imaging detector arranged perpendicular to the transmission detector, at the side of the sample. During sample rotation, regions of the single crystal that fulfill condition (1) are illuminated with neutrons, and the out-scattered signal is registered [11]. As a practical case, turbine blades of high-performance engines can be studied nondestructively for the homogeneity of their structure. For samples with smaller crystallites, multiple spots are detected by a setup covering the forward and backward regions around the sample. If the sample is rotated, many diffraction patterns can be used to derive crystallite orientation and size [12].

Imaging with polarized neutrons is based on the neutron's property of carrying a magnetic moment as a particle. It is oriented antiparallel to the spin vector and has a value of about −1.9 μ<sub>N</sub>, where μ<sub>N</sub> is the nuclear magneton. Because of the spin value ½, two states (up and down) are possible. In a nonpolarized beam, the two states occur equally. However, it is possible to sort out one of the states with special absorbing filters [13], and the remaining state yields a "polarized beam."

When this beam hits samples with magnetic properties, the polarization of the neutrons can be changed. An analyzer device behind the sample can register how many polarized neutrons have had their orientation changed by the sample or by magnetic fields on their way to the detector. Therefore, imaging with polarized neutrons is very useful for investigating magnets (e.g., under superconducting conditions at low temperatures) and also magnetic fields within a certain strength range.

Data fusion with X-ray images becomes possible because common digital detection systems are able to detect both radiations with pixelwise precision. However, since neutrons and X-rays interact with matter in fundamentally different ways, two independent image data sets are produced. X-rays interact only with the electrons in the atomic shell, so the higher the atomic number (and thus the number of electrons), the higher the X-ray contrast. Neutrons, by contrast, "ignore" all electrons and interact only with the atomic nuclei, and their contrast depends on the peculiarities of the structure of the involved nuclei. Even very light isotopes like <sup>1</sup>H, <sup>10</sup>B, or <sup>6</sup>Li have a high attenuation power, since additional neutrons can readily be incorporated into these nuclei; in addition, in simple billiard-ball terms, the energy transfer is much higher when neutrons scatter from light nuclei.

The question is then how to combine the two independently obtained data sets in order to enhance features and structures. A typical case is the study of moisture migration in porous media (stone, concrete, wood, soil, etc.): here, the X-rays probe the empty structure, while the neutron image is dominated by the water contrast [13].
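A minimal sketch of such a pixelwise fusion, assuming two already-registered images and using NumPy (the arrays here are random placeholders standing in for real X-ray and neutron transmission data):

```python
import numpy as np

# Hypothetical pixelwise-registered images of the same sample:
# X-rays see the dense matrix, neutrons see the hydrogen (water) content.
rng = np.random.default_rng(0)
xray = rng.random((64, 64))
neutron = rng.random((64, 64))

def normalize(img):
    """Rescale an image to [0, 1] so both modalities are comparable."""
    img = np.asarray(img, dtype=float)
    return (img - img.min()) / (img.max() - img.min())

# Channel stacking keeps the two contrasts separate for joint analysis
# (e.g., X-ray channel = structure, neutron channel = moisture) ...
fused = np.stack([normalize(xray), normalize(neutron)], axis=-1)

# ... while a weighted sum collapses them into a single combined image.
combined = 0.5 * normalize(xray) + 0.5 * normalize(neutron)
```

In practice the two modalities must first be spatially registered (aligned and resampled to a common pixel grid); the stacking step above assumes that registration has already been done.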

#### **8. Applications in neutron imaging**

Because neutrons penetrate heavy materials, in particular metals, better than X-rays, technical applications are of high importance. This enables many industrially relevant studies, such as fuel injection in running engines, oil distribution in combustion motors, soot accumulation in particulate filters, water distribution in electrical fuel cells, and ion migration in batteries.

The quality assurance of explosives for civil applications (initiators in rockets and detonators in mining fields) can only be done with neutron imaging.

Also, the investigation of cultural heritage objects can take advantage of neutron imaging techniques, in particular when metallic samples cover some hidden organic material; see **Figure 4** [14]. The same holds for checking the corrosion status of old samples, including monitoring the success of anticorrosion treatments.

Scientific usage of neutron imaging methods is widespread and covers many different fields; this chapter can only give an impression of how powerful and important these techniques are.


#### **9. Conclusions: future trend and outlook**

Neutron imaging depends strongly on access to suitable neutron sources. This method alone does not justify the construction and operation of large-scale facilities such as spallation sources. However, in concert with other important techniques such as neutron scattering and fundamental research on the neutron itself (lifetime, electric dipole moment), neutron imaging with all its versatile features can play an important role.

In particular, the pulsed-beam option of spallation neutron sources will enable a flexible and very efficient resolution of the neutron energy through time-of-flight (TOF) operation of the beam and detection systems. The direct combination with a diffractometer is already being considered at some facilities [16].
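In TOF operation, the neutron energy follows directly from the measured flight time over a known path length, E = ½·m<sub>n</sub>·(L/t)². A small sketch with illustrative flight-path and timing values (not parameters of any particular instrument):

```python
# Time-of-flight (TOF) energy determination: at a pulsed source each
# neutron's energy follows from its flight time t over a known path L.
M_N = 1.674927e-27   # neutron mass [kg]
H = 6.62607e-34      # Planck constant [J s]
EV = 1.602177e-19    # joules per electronvolt

def tof_energy_meV(L_m, t_s):
    """Kinetic energy in meV from flight path [m] and flight time [s]."""
    v = L_m / t_s
    return 0.5 * M_N * v**2 / EV * 1e3

def tof_wavelength_A(L_m, t_s):
    """De Broglie wavelength in Angstrom: lambda = h*t / (m*L)."""
    return H * t_s / (M_N * L_m) * 1e10

# Illustrative: a neutron on a 10 m flight path arriving after 5 ms
# is a thermal neutron of about 20.9 meV (~1.98 Angstrom).
E = tof_energy_meV(10.0, 5e-3)
lam = tof_wavelength_A(10.0, 5e-3)
```

Binning the detected events by arrival time thus yields a stack of energy-resolved images from a single pulsed-beam measurement, which is the basis of the TOF imaging mentioned above.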

However, some neutron imaging experiments can also be performed at much lower beam intensities; this has been demonstrated at weak sources operating in the few-watts power range [17]. New sources based on proton or deuteron accelerators are under consideration and are being designed in countries where a renovation of research reactors is nearly impossible, mainly for political and ideological reasons [18].

Because of their uniqueness, neutrons will continue to be used for investigations in the near, and perhaps also the far, future, as long as neutron sources are available. In the meantime, further methodical progress is expected, and improvements in detection methods, such as single-event counting, are becoming available.

#### **Acknowledgements**

The author thanks all involved colleagues of his team at the Paul Scherrer Institut within the Laboratory for Neutron Scattering & Imaging for their valuable contributions to the development, implementation, and usage of all described neutron imaging techniques.

### **A. Appendix**

Neutron imaging facilities at large-scale neutron sources (Status Jan. 2023).


*Basics of Neutron Imaging DOI: http://dx.doi.org/10.5772/intechopen.110403*

### **Author details**

Eberhard H. Lehmann Paul Scherrer Institut, Switzerland

\*Address all correspondence to: eberhard.lehmann@psi.ch

© 2023 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### **References**

[1] Kallmann H. Research. 1947;**1**:254-260

[2] Peter O. Zeitschrift für Naturforschung A. 1946;**1**:557-559

[3] Lehmann E. Status and Progress in Neutron Imaging Detection Systems. IAEA-TECDOC. Vienna: IAEA; 2020. pp. 261-271

[4] Losko A. New perspectives for neutron imaging through advanced event-mode data acquisition detector. Scientific Reports. 2021;**11**(1):21360. DOI: 10.1038/s41598-021-00822-5

[5] Trtik P, Lehmann E. Progress in high-resolution neutron imaging at the Paul Scherrer Institut - the neutron microscope project. Journal of Physics: Conference Series. 2016;**746**:012004

[6] Grünzweig C. Progress in industrial applications using modern neutron imaging techniques. Physics Procedia. 2013;**43**:231-242

[7] Pfeiffer F. Milestones and basic principles of grating-based x-ray and neutron phase-contrast imaging. AIP Conference Proceedings. 2012;**1466**:2. DOI: 10.1063/1.4742261

[8] Grünzweig C et al. Design, fabrication, and characterization of diffraction gratings for neutron phase contrast imaging. Review of Scientific Instruments. 2008;**79**:053703. DOI: 10.1063/1.2930866

[9] Betz B, Grünzweig C, Lehmann E. Advances in neutron imaging with grating interferometry. Materials Evaluation. 2014;**72**:491-496

[10] Strobl M et al. Physical Review Letters. 2008;**101**:123902

[11] Peetermans S, Lehmann E. Simultaneous neutron transmission and diffraction contrast tomography as a non-destructive 3D-method for bulk single crystal quality investigations. Journal of Applied Physics. 2013;**114**:124905

[12] Samothrakitis S et al. Laue 3-dimensional neutron diffraction tomography and the FALCON instrument. Swiss Neutron News. 2022;**60**:6-18

[13] Treimer W et al. Polarized neutron imaging and three-dimensional calculation of magnetic flux trapping in bulk of superconductors. Physical Review B. 2012;**85**:184522

[14] Mannes D, Lehmann E. Neutron imaging of cultural heritage objects. In: D'Amico S, Venuti V, editors. Handbook of Cultural Heritage Analysis. Switzerland AG: Springer Nature; 2022. DOI: 10.1007/978-3-030-60016-7\_9

[15] Tengattini A et al. Neutron imaging for geomechanics: A review. Geomechanics for Energy and the Environment. 2021;**27**:100206

[16] Kockelmann W et al. IMAT–A new imaging and diffraction instrument at ISIS. Physics Procedia. 2013;**43**:100-110

[17] Lange C, Berndt N. Neutron imaging at the low flux training and research reactor AKR-2. Nuclear Instruments and Methods in Physics Research Section A. 2019;**941**:162292

[18] Brückel T, Gutberlet T, editors. Conceptual Design Report: Jülich High Brilliance Neutron Source (HBS). Jülich, Germany: Forschungszentrum Jülich; 2020

### *Edited by Pavel Tsvetkov*

This book presents the subject of nuclear fission through select chapters focusing on policy, economics, fundamentals, and applications within the nuclear fission domain. It provides an opportunity to explore contemporary and emerging frontiers. It examines nuclear fission and its benefits as a clean and reliable energy source while also discussing such topics as nuclear physics, economics, deployment and operation, and the status of sensors for nuclear energy applications.

Published in London, UK © 2024 IntechOpen © EzumeImages / iStock

Nuclear Fission - From Fundamentals to Applications
