Theory, Application, and Implementation of Monte Carlo Method in Science and Technology

libraries considers EPDL<sup>7</sup> total cross sections for photoelectric absorption and Rayleigh scattering, XCOM<sup>8</sup> cross sections for pair production, and the SUMGA<sup>9</sup> function for total atomic cross sections and Compton scattering. PENELOPE can simulate the emission of characteristic X-rays and Auger electrons resulting from vacancies produced in the K, L, M, and N shells by photoelectric absorption, Compton scattering, triplet production, and electron/positron impact. In PENELOPE 2014, the elastic collisions of electrons and positrons are simulated using numerical partial-wave cross sections for free neutral atoms from the elastic scattering of electrons and positrons by atoms (ELSEPA) program, a database distributed with ICRU Report 77 (2007) [60]. The output may be defined using Fortran subroutines, through which the AD may obtain different quantities such as the number of materials loaded, the mass density of specific materials, the characteristics of the slowing down of charged particles, the energy of the particle at the beginning of the track segment, the effective stopping power of soft energy-loss interactions, and the energy lost along the step, among others [61].

The electron gamma shower (EGS) MCCT may be found in two main versions, EGS5 and EGSnrc. Both versions of EGS are implemented in the Mortran3 language, which is a preprocessor for Fortran [64, 65]. The origins of the EGS MCCT are documented in the NRC-PIRS-0436 report [66]. EGS5 simulates the coupled transport of electrons and photons in an arbitrary geometry for particles with energies from a few keV up to several hundred GeV [64], depending on the atomic numbers of the target materials. EGSnrc<sup>10</sup> (Electron Gamma Shower from the National Research Council) is an extended and improved version of the EGS MCCT, with specific modeling implementations for electron and photon transport through matter. It includes the BEAMnrc software component, which models beams traveling through consecutive material components, ranging from a simple slab to the full treatment head of a radiotherapy linear particle accelerator (linac). EGSnrc is particularly well suited for medical physics applications (research and device development), being used for medical radiation detection, medical imaging based on X-radiation, and dosimetry for a specific volume. However, due to the flexibility of this MCCT, the AD may use it for different applications such as industrial linac beams, X-ray emitters, radiation shielding, and more. EGSnrc simulates radiation transport in homogeneous materials for photons, electrons, and positrons with energies between 1 keV and 10 GeV. It incorporates significant refinements in charged particle transport and better low-energy cross sections, and it makes it possible to define elaborate geometries and particle sources [65].

The electron transport (ETRAN) MCCT, developed by the National Bureau of Standards, transports electrons and photons through extended media. This MCCT has various versions, representing mainly refinements, embellishments, and different geometrical treatments, that share the same basic simulation algorithm based on random sampling of the paths of electrons and photons as they travel through matter. The algorithms and computational tools written at other laboratories, such

<sup>7</sup> EPDL: Photon and Electron Interaction Data is available at https://www-nds.iaea.org/epdl97.

<sup>8</sup> XCOM: Photon Cross-sections Database is available at https://www.nist.gov/pml/xcom-photon-cross-sections-database.

<sup>9</sup> For additional information about the SUMGA function, see Section B.2 in Appendix B of PENELOPE-2014: A Code System for Monte Carlo Simulation of Electron and Photon Transport at https://www.oecd-nea.org/science/docs/2015/nsc-doc2015-3.pdf.

<sup>10</sup> EGSnrc has its official page, associated with the National Research Council Canada, at https://nrc.canada.ca/en/research-development/products-services/software-applications/egsnrc-software-tool-model-radiation-transport.

3. Verification, validation, comparison, and reliability of Monte Carlo toolkits

To guarantee that an application is realistic, it is important to test it (the computational code) in different ways. There are several known ways to test a computational code and its parts; in this section, however, the focus is on the concepts applied to applications developed for MCCTs: verification, validation, comparison, and reliability.

When one is working on an application for an MCCT, it is important to understand the concepts that guarantee its internal consistency and accuracy. IEEE 1012–2016 gives a general description of software verification and validation, and IEEE 24765–2017 gives a detailed description defining these terms. Verification is defined as a "confirmation by examination and provisions of objective evidence that specified requirements have been fulfilled" (IEEE 1012–2016), and this concept was later detailed as "the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase" (IEEE 24765–2017). Validation is defined as a "confirmation by examination and provisions of objective evidence that the particular requirements for a specific intended use are fulfilled" (IEEE 1012–2016), and this concept was later detailed as "the process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements" (IEEE 24765–2017). So, one may say that a validation was performed when it answers affirmatively the question "Are we building the right product?"; on the other hand, one may affirm that one is doing a verification by answering the question "Are we building the product right?" [67].
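The two questions can be made concrete with a toy MC application. This is only a sketch: the model, the function names, and all numbers below are invented for illustration and do not come from any MCCT. Verification checks the code against a result that the implemented model itself must reproduce; validation checks the result against measured data.

```python
import math
import random

# Toy MC application: estimate photon transmission through a slab of
# thickness t (cm) for a material with attenuation coefficient mu (1/cm).
def simulate_transmission(mu, t, n, seed=0):
    rng = random.Random(seed)
    # Sample free path lengths from the exponential distribution and count
    # the photons whose first interaction lies beyond the slab.
    passed = sum(1 for _ in range(n)
                 if -math.log(1.0 - rng.random()) / mu > t)
    return passed / n

mu, t, n = 0.5, 2.0, 200_000
mc = simulate_transmission(mu, t, n)

# Verification ("are we building the product right?"): the code must
# reproduce the analytical Beer-Lambert result exp(-mu * t).
assert abs(mc - math.exp(-mu * t)) < 0.01

# Validation ("are we building the right product?"): the result must agree
# with experimental data within its uncertainty (values invented here).
x_exp, sigma_exp = 0.37, 0.01
assert abs(mc - x_exp) < 3 * sigma_exp
```

The first assertion can pass while the second fails (a correct implementation of the wrong model), which is exactly the distinction the two definitions draw.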

Monte Carlo's Core and Tests for Application Developers: Geant4 and XRMC Comparison… DOI: http://dx.doi.org/10.5772/intechopen.88893

According to [68], "Validation involves the system and acceptance testing during the test phase, whereas verification involves reviews and audits, software unit testing, and other techniques to evaluate intermediate work products such as the software requirements specification, software design description, and individual modules during earlier project phases." In MC, the AD performs verification of the developed application to guarantee that it reproduces the system (or geometry) and the general conditions as closely as possible to reality, and the AD performs validation to guarantee that the MC application (considering the geometry, materials, particles, interactions, and energy range of the particles) gives realistic results when compared statistically to experimental data, whenever a consistent amount of quantitative experimental data is available. In this context, it is fundamental to understand the setup and the experimental limitations of the instruments and measurements used in the experiments, and to take them into account in the data analyses, to explain observed differences and similarities in the results.

When experimental data are not available, it is possible to compare the MC application results to other MCCTs or to deterministic models. In this way, one is performing a comparison between models, not a validation. This comparison must be based on quantitative statistical tests. In this case, knowing and understanding the main concepts involved in the models and databases used, including their limitations and previous validations, is fundamental to explain the observed differences and similarities in the results.

A reliability evaluation is recommended when there are neither experimental data nor specific trustworthy models, nor a sufficient amount of data, to perform a validation or a comparison. IEEE 982.1–2005 provides information used as indicators of reliability, defining software reliability as "the probability that software does not cause the failure of a system for a specified time under specified conditions." In this context, software reliability represents an effective measurement of the more general concept of software quality, using derived quantities and experimental models that are partially consistent with the application of interest. It is important to know the systematic errors and to map all differences in the boundary limitations of the application and the theory involved in this comparison.

It is possible to combine validation results, comparisons between models, and software reliability to evaluate an application. Additional information about statistical tests and specific recommendations for software verification, validation, reliability, and comparison may be found in international documents. Thus, it is important to study the international standard regulations/recommendations when one wants to validate any software, including the MCCTs themselves and applications developed using them. The standard lists of active documents from the IEEE, the International Electrotechnical Commission (IEC), and the International Organization for Standardization (ISO) may be searched online.<sup>11</sup> Additional detailed information about this subject may be found in:

• IEEE 730–2014—IEEE Standard for Software Quality Assurance Processes

• IEEE 982.1–2005—IEEE Standard Dictionary of Measures of the Software Aspects of Dependability

• IEEE 1016–2009—IEEE Standard for Information Technology—Systems Design—Software Design Descriptions

• IEEE 1012–2016—IEEE Standard for System, Software, and Hardware Verification and Validation (corrigendum 1012–2016/Cor 1–2017)

• IEEE 12207–2017—ISO/IEC/IEEE International Standard—Systems and software engineering—Software life cycle processes

• IEEE 14764–2006—ISO/IEC/IEEE International Standard for Software Engineering—Software Life Cycle Processes—Maintenance

• IEEE 15026–1—Revision-2019—ISO/IEC/IEEE Approved Draft International Standard—Systems and Software Engineering—Systems and Software Assurance—Part 1: Concepts and Vocabulary

• IEEE 15026–2-2011—IEEE Standard—Adoption of ISO/IEC 15026–2:2011 Systems and Software Engineering—Systems and Software Assurance—Part 2: Assurance Case

• IEEE 15026–3-2013—IEEE Standard Adoption of ISO/IEC 15026–3—Systems and Software Engineering—Systems and Software Assurance—Part 3: System Integrity Levels

• IEEE 15026–4-2013—IEEE Standard Adoption of ISO/IEC 15026–4—Systems and Software Engineering—Systems and Software Assurance—Part 4: Assurance in the Life Cycle

• IEEE 24765–2017—ISO/IEC/IEEE International Standard—Systems and software engineering—Vocabulary

• IEEE 29119–1-2013—ISO/IEC/IEEE International Standard—Software and systems engineering—Software testing—Part 1: Concepts and definitions

• IEEE 29119–2-2013—ISO/IEC/IEEE International Standard—Software and systems engineering—Software testing—Part 2: Test processes

• IEEE 29119–3-2013—ISO/IEC/IEEE International Standard—Software and systems engineering—Software testing—Part 3: Test documentation

• IEEE 29119–4-2015—ISO/IEC/IEEE International Standard—Software and systems engineering—Software testing—Part 4: Test techniques

• IEEE 29119–5-2016—ISO/IEC/IEEE International Standard—Software and systems engineering—Software testing—Part 5: Keyword-Driven Testing

• IEC 61508–0 (2005–2101)—Functional safety of electrical/electronic/programmable electronic safety-related systems—Part 0: Functional safety

• IEC 61508–1 (2010–2104)—Functional safety of electrical/electronic/programmable electronic safety-related systems—Part 1: General requirements

<sup>11</sup> Search for the active standards was performed at https://standards.ieee.org; https://www.en-standard.eu and https://www.iso.org/about-us.html.
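A comparison between models, like a validation against experiment, must rest on a quantitative statistical test. One minimal sketch of such a test is the reduced chi-square; all data values below are invented for illustration:

```python
def reduced_chi_square(simulated, measured, sigma):
    """Reduced chi-square between a simulated and a measured data set.

    simulated, measured: sequences of values at the same points
    sigma: one-standard-deviation uncertainties of the measured values
    """
    chi2 = sum((s - m) ** 2 / u ** 2
               for s, m, u in zip(simulated, measured, sigma))
    dof = len(measured)  # no fitted parameters in a pure comparison
    return chi2 / dof

# Invented example: air-kerma values at four measurement points.
sim  = [10.1, 4.05, 2.52, 1.61]
meas = [10.0, 4.00, 2.56, 1.64]
sig  = [0.20, 0.10, 0.08, 0.06]

chi2_red = reduced_chi_square(sim, meas, sig)
```

A reduced chi-square close to 1 indicates agreement compatible with the stated uncertainties; much larger values point to disagreement (or underestimated uncertainties), and much smaller values to overestimated uncertainties.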

is presented, as well as the validation of both MCCTs using experimental data collected on three different mammographs. For the validation, the following measurements were performed: exposure (X), kerma, half-value layer (HVL), inverse square law (ISL), and backscattering (BS). Limitations, advantages, and disadvantages of using a general and a specific MCCT will also be commented on. Absolute and normalized quantities were selected because it is important to know the correction factor for the total number of photons generated per mAs per total irradiated area for each piece of equipment (this number is characteristic of each X-ray tube and changes with time), and the combination of these quantities helps to define the best approximation for this correction factor in the simulation, to get results closer to the clinical reality.

It is important to note that, for each setup, the data were collected with calibrated equipment (electrometers and ionization chambers) available at the respective institutions, and by the same person who developed the application with both MCCTs. The simulated geometries are the same as those used in the data collection. In the following, a brief description of the measurement equipment and the simulated setup is presented:

• Mammomat Inspiration [69, 70] (henceforth called Inspiration)—measurements were performed with the electrometer and ionization chamber TNT 12000 kit (Fluke) and Al 99% purity filters. SIMULATION: dry air-sensitive volume of 15 cm<sup>3</sup>; focal spot as point-source irradiating homogeneously on a circular surface of 2.08 cm of radius; spectra for acceleration voltages 25, 30, and 35 kVp; track-additional filtration combinations Mo-Mo (30 μm) and Mo-Rh (25 μm); spectra of ripple 0%; target tilt angle of 20<sup>o</sup>; and a window of 0.8 mm of beryllium (Be). The HVL calculations are based on a source-to-detector distance of 41.0 cm for different Al filtration thicknesses; and X data were collected and simulated for source-to-detector distances of 26, 40, 50, and 60 cm.

• Mammomat 3000 [71] (henceforth called M3000)—measurements were performed with the electrometer Victoreen model 660–1 (1315REV) and ionization chamber Victoreen model 660-4A (512REV). SIMULATION: dry air-sensitive volume of 4 cm<sup>3</sup>; focal spot as point-source irradiating homogeneously on a rectangular surface of (18.0 × 24.0) cm<sup>2</sup>; spectra of ripple 0%; target tilt angle of 22<sup>o</sup>; a Be window 0.8 mm thick; track-additional filtration combinations of Mo-Mo (30 μm), Mo-Rh (25 μm), and W-Rh (50 μm); and spectrum acceleration voltages of 24 up to 32 kVp, in steps of 2 kVp. The BS was calculated considering simulators of BR12 epoxy and polymethyl methacrylate, considering a source-to-detector distance of 60.0 cm and simulator thicknesses of 4, 5, 6, and 8 cm.

• Lorad MIII [72] (henceforth called Lorad)—measurements were performed with a modified Keithley electrometer (model 602) and the ionization chamber for mammography MPT SN 442. SIMULATION: dry air-sensitive volume of 6.0 cm<sup>3</sup>; focal spot as point-source irradiating homogeneously on a circular surface of 10.0 cm<sup>2</sup>; spectra for acceleration voltages from 26 to 34 kVp, in steps of 2 kVp; track-additional filtration combinations of Mo-Mo (30 μm) and Mo-Rh (25 μm); spectra of ripple 0%; target tilt angle of 16<sup>o</sup>; and a Be window 0.8 mm thick. The X measurements were performed with the compression paddle and by minimizing the BS effects by increasing the distance between the bucky and the ionization chamber.

It is important to evaluate all the available possibilities in the MCCT to get a realistic perspective of the configurations. Because of that, two modes to describe
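Two of the quantities above have simple analytical expectations that the AD can use as internal checks. The sketch below uses hypothetical helper functions and illustrative numbers, not the measured data: the ISL predicts that exposure from a point source falls as 1/d², and an HVL can be extracted assuming exponential attenuation between two filtration thicknesses.

```python
import math

def isl_scale(x_ref, d_ref, d):
    """Scale an exposure measured at distance d_ref to distance d
    using the inverse square law (point-source assumption)."""
    return x_ref * (d_ref / d) ** 2

def hvl_from_two_points(t1, x1, t2, x2):
    """Half-value layer (same units as t1, t2), assuming exponential
    attenuation between the two measured filtration thicknesses."""
    mu_eff = math.log(x1 / x2) / (t2 - t1)  # effective attenuation coefficient
    return math.log(2.0) / mu_eff

# Illustrative check: exposure measured at 26 cm, predicted at 60 cm.
x_60 = isl_scale(10.0, 26.0, 60.0)

# Illustrative HVL: transmission drops from 100% to 40% over 0.5 mm Al.
hvl_mm_al = hvl_from_two_points(0.0, 100.0, 0.5, 40.0)
```

For a polyenergetic mammography spectrum the attenuation is not exactly exponential (beam hardening), so in practice the HVL is interpolated between the two Al thicknesses that bracket 50% transmission; the exponential form above is only the simplest approximation.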


At the time of writing, two ISO documents are under development: ISO/DTR 11462–3, Guidelines for implementation of statistical process control (SPC)—Part 3: Reference data sets for SPC software validation, and ISO/NP TR 11462–4, Guidelines for implementation of statistical process control (SPC)—Part 4: Reference data sets for measurement process analysis software validation.
