*3.2.4 Determination of the absorbed dose to water (Dw)*

Method 1. Measurement performed using a graphite or water calorimeter.


**Table 1.**

*Typical history of air kerma standard traceability between two laboratories, LNMRI and BIPM.*

A calorimeter measures the quantity absorbed dose to water or to graphite according to its definition, that is, from the radiation-induced temperature increase in the medium. This evaluation is done with thermistors installed in the calorimeter body filled with high-purity water, as reported by Malcolm [16]. The calorimeter, or more precisely its core (nucleus), is placed at the reference depth in a 30 cm x 30 cm x 30 cm phantom. The measured signal is generally very low, on the order of 1 mK for an absorbed dose of 2 Gy, so its reproducibility is an important factor. Due to its complexity, it is suitable for use not in clinical settings, but in National Metrology Laboratories or research institutions (**Figure 6**).

An important parameter is the magnitude of the heat defect, that is, the fraction of the absorbed energy that is not released in the form of heat; this effect is material dependent, being more significant in graphite.

The typical temperature trace obtained during an irradiation run consists of three basic regions: the pre-irradiation drift, the radiation-induced temperature rise, and the post-irradiation drift.


#### **Figure 6.**

*Schematic diagram of the Domen-type water calorimeter, built jointly with McGill University in Canada and reported by Rosado and de Almeida [17], operated with non-circulating water at 4.0°C.*

Using a model of heat conduction in water, the onset time of this sudden temperature rise can be accurately predicted as a function of the distance between the measurement point and the source.

Specifically, for a standard of absorbed dose to water such as the calorimeter, the dose *Dw* at a point in the water at a given distance (*r*) from the thermistor is obtained from the temperature increase measured at that point (Δ*T*) through the relationship:

$$D\_w = \Delta T\_w \bullet c\_w \bullet k\_t \bullet k\_c \bullet k\_v \bullet k\_p \bullet k\_{dd} \bullet k\_{\rho} \bullet k\_{HD} \tag{5}$$

where:

Δ*Tw* = increase in the temperature of the water;

*cw* = specific heat of the water;

*kt* = transient effect on the thermistor response due to dose deposition;

*kc* = conductive transfer of heat due to the excess of heat from the glass components and temperature gradients;

*kv* = convective transfer of heat when the water temperature is different from 4°C;

*kp* = disturbance caused in the radiation field due to the presence of the core (nucleus) of the calorimeter and the thermistors, calculated by Monte Carlo simulation;

*kdd* = correction for the non-uniformity of the radiation beam;

*kρ* = variation in the density of water due to the presence of the calorimeter;

*kHD* = the heat defect, that is, the difference between the absorbed energy and the energy that appears as heat due to chemical reactions induced by radiation.
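As a minimal numerical sketch of Eq. (5), the product below uses hypothetical values for Δ*Tw* and for every correction factor (only the specific heat of water near 4°C is a physical constant); it merely illustrates that a rise of about 0.5 mK corresponds to roughly 2 Gy:

```python
# Illustrative sketch of Eq. (5). All correction factors below are
# hypothetical placeholders close to unity; only c_w is a physical constant.
C_W = 4204.0  # specific heat of water near 4 °C, in J/(kg·K)

def dose_from_temperature(delta_t_w, c_w=C_W, **k_factors):
    """D_w = ΔT_w · c_w · k_t · k_c · k_v · k_p · k_dd · k_rho · k_HD."""
    d_w = delta_t_w * c_w
    for k in k_factors.values():
        d_w *= k
    return d_w

# Hypothetical measurement: a ~0.5 mK rise yields a dose of about 2 Gy.
dw = dose_from_temperature(
    0.478e-3,  # ΔT_w in kelvin
    k_t=1.000, k_c=1.002, k_v=0.9998, k_p=1.003,
    k_dd=1.000, k_rho=1.000, k_HD=1.0035,
)
print(f"D_w = {dw:.3f} Gy")
```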

One of the advantages of the water calorimeter is that the quantity absorbed dose to water is measured directly in water, while with a graphite calorimeter a graphite-to-water conversion factor is necessary.

Method 2. Measurement performed in a graphite phantom using an ionization chamber of known volume.

In general, the measurement of the absorbed dose to water *Dw* [1] is carried out under the same reference conditions as mentioned before, as illustrated in **Figure 7**.

#### **Figure 7.**

*Parallel-plate graphite ionization chamber (graphite density 1.8 g/cm<sup>3</sup>) with 2.8 mm wall thickness, inner diameter of 45 mm and outer diameter of 50.5 mm, used by the BIPM and reported by Boutillon and Niatel [18].*

The reference conditions include a radiation field of 10 x 10 cm<sup>2</sup> in the plane of the phantom surface, SSD = 100 cm, the center of the chamber positioned at 5 g/cm<sup>2</sup> depth in graphite, a reference air temperature of 22°C, atmospheric pressure of 101.3 kPa, and humidity between 30 and 70%, according to the formalism:

$$D\_w = \frac{I}{\rho \bullet v} \bullet \frac{W\_{air}}{e} \bullet \left(\frac{\mu\_{en}}{\rho}\right)\_{w,c} \bullet \bar{s}\_{c,a} \bullet \Pi k\_j \tag{6}$$

where:

*I* = current reading corrected for the reference conditions of T and P;

*ρ* = air density;

*v* = sensitive volume of the cavity;

*Wair/e* = average energy needed to produce an ion pair in air, divided by the elementary charge; its product with the collected charge equals the energy ceded to the air mass *mair* of the reference sensitive volume;

(*μen/ρ*)*w*,*c* = ratio between the mass energy-absorption coefficients for water and graphite, as proposed by Hubbell and Seltzer [15];

*sc*,*a* = ratio of the restricted stopping powers between graphite and air, calculated based on the Spencer-Attix theory, taking into account the average energy of the secondary electron spectrum generated by the photon interactions;

Π*kj* = the product of several correction factors:

*kh* = correction for the reference humidity;

*ks* = loss by ionic recombination;

*km* = radial non-uniformity of the beam in the chamber plane;

(*d/do*) = correction for the deviation between the nominal and actual distance;

*f* = graphite to water conversion factor.
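Eq. (6) can be sketched numerically as below; the chamber current, air density, cavity volume, ratios and correction factors are all hypothetical placeholders, and only *Wair/e* = 33.97 J/C is a physical constant. With the current in amperes, the result is a dose rate in Gy/s:

```python
W_AIR_OVER_E = 33.97  # average energy per ion pair in dry air, J/C

def dose_rate_graphite_chamber(current, rho_air, volume,
                               mu_en_ratio, s_c_a, k_factors):
    """Eq. (6): D_w = I/(rho·v) · (W_air/e) · (mu_en/rho)_{w,c} · s_{c,a} · Πk_j."""
    prod_k = 1.0
    for k in k_factors:
        prod_k *= k
    return current / (rho_air * volume) * W_AIR_OVER_E * mu_en_ratio * s_c_a * prod_k

# Hypothetical chamber and beam (SI units throughout):
rate = dose_rate_graphite_chamber(
    current=1.0e-10,            # A, corrected to reference T and P
    rho_air=1.205,              # kg/m^3 at 22 °C and 101.3 kPa
    volume=6.8e-7,              # m^3 (a 0.68 cm^3 cavity)
    mu_en_ratio=1.112,          # (mu_en/rho) water-to-graphite, placeholder
    s_c_a=1.001,                # Spencer-Attix graphite/air ratio, placeholder
    k_factors=[0.997, 1.0015],  # e.g. k_h and k_s, placeholders
)
print(f"dose rate ≈ {rate * 1e3:.2f} mGy/s")
```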

Fricke dosimetry consists of measuring, by spectrophotometry, the conversion of the ferrous ions present in the solution into ferric ions due to the ionizing radiation. The Fricke dosimeter consists of a 96% water solution; therefore its attenuation of radiation is very similar to that of water, and it can be used in the dose range of 5 Gy–400 Gy with dose rates of up to 10<sup>6</sup> Gy/s.

The quantity determined by the Fricke chemical dosimetry system is the absorbed dose to the Fricke solution (*DF*), as defined in Eq. (7) and described in the literature by [19, 20].

$$G\left(Fe^{3+}\right) = \frac{\Delta OD}{D\_F \bullet L \bullet \rho \bullet \varepsilon} \tag{7}$$

Where:

Δ*OD* = difference between the absorbance of the irradiated solution and the control solution, corrected for the temperature during irradiation and reading measured at 304 nm;

*G(Fe3+)* = chemical yield of the reaction for the gamma radiation beam;

*L* = optical pathlength of the cuvette, where the solution is placed during the readings by the spectrophotometer;

*ρ* = density of the Fricke solution;

*ε* = molar absorptivity coefficient, also called the molar extinction coefficient.

To determine the quantity of interest, *Dw* in water, it is necessary to use the correction factors defined in Eq. (8), as proposed by [21] and expanded by [19]:

$$D\_w = D\_F \bullet f\_{w,F} \bullet P\_{wall} \bullet f\_{avg} \tag{8}$$

Where:

*DF* = absorbed dose to the Fricke solution;

*fw*,*F* = factor that converts the absorbed dose to the Fricke solution into the absorbed dose to water.

*Pwall*= factor that corrects disturbances caused by the PMMA walls of the holders containing the solution.

*favg* = factor that corrects for the inhomogeneity of the dose deposited in the Fricke solution along the radial and vertical axes.
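The Fricke chain of Eqs. (7) and (8) can be sketched as follows; all input values are hypothetical placeholders of plausible order of magnitude (the values used for *G(Fe3+)*, *ε* and *ρ* are not recommended reference data), expressed in SI units:

```python
def fricke_dose(delta_od, g_fe3, length, rho, epsilon):
    """Invert Eq. (7): D_F = ΔOD / (G(Fe3+) · L · rho · epsilon)."""
    return delta_od / (g_fe3 * length * rho * epsilon)

def dose_to_water(d_f, f_w_f, p_wall, f_avg):
    """Eq. (8): D_w = D_F · f_{w,F} · P_wall · f_avg."""
    return d_f * f_w_f * p_wall * f_avg

# Hypothetical measurement (SI units):
d_f = fricke_dose(
    delta_od=0.030,   # absorbance difference at 304 nm
    g_fe3=1.61e-6,    # chemical yield, mol/(kg·Gy), order of magnitude only
    length=0.01,      # 1 cm optical path of the cuvette, in m
    rho=1024.0,       # density of the Fricke solution, kg/m^3
    epsilon=216.4,    # molar absorptivity at 304 nm, m^2/mol, approximate
)
d_w = dose_to_water(d_f, f_w_f=1.003, p_wall=1.001, f_avg=1.000)
print(f"D_F ≈ {d_f:.2f} Gy, D_w ≈ {d_w:.2f} Gy")  # within the 5–400 Gy range
```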

This method requires laboratories with several parameters under control, such as temperature, dust, cleanliness, laminar flow hoods, Milli-Q water production, glassware, quartz cuvettes, a high-resolution double-beam spectrophotometer with filters for its QA, and high-purity chemicals. For this reason, its use is restricted to laboratories and not suited to clinical environments.

### **4. Reference dosimetry**

It refers to the measurement of the absorbed dose to water with an ionization chamber in the beam of the user's institution. The reference conditions used in the calibration laboratory must be reproduced, and the influence quantities (T, P, U) are measured at the time of data acquisition and corrected for accordingly.

Step 1: Calibration of the user's chamber at a National Laboratory or an SSDL according to the formalism of [3].

$$N\_{D,w,Q} = \frac{{}^{lab}D\_{w,Q}}{{}^{lab}M\_{w,Q}} \tag{9}$$

where:

*ND*,*w*,*<sup>Q</sup>* = calibration coefficient provided by the SSDL or PSDL to the user;

*labDw*,*<sup>Q</sup>* = absorbed dose to water determined at the SSDL by the standard instrument under reference conditions, that is, SSD = 100 cm, radiation field 10 x 10 cm<sup>2</sup> and the chamber centered at a depth of 5 cm in water;

*labMw*,*<sup>Q</sup>* = reading of the user's chamber, called the reference chamber, performed in the same beam and under the same conditions at the SSDL or PSDL.
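Eq. (9) amounts to a simple ratio. The sketch below uses hypothetical numbers: a 2 Gy delivered dose and a charge reading of a typical order of magnitude for a Farmer-type chamber:

```python
def calibration_coefficient(lab_dose_gy, lab_reading_c):
    """Eq. (9): N_{D,w,Q} = labD_{w,Q} / labM_{w,Q}, here in Gy/C."""
    return lab_dose_gy / lab_reading_c

# Hypothetical calibration run at the SSDL:
n_dw = calibration_coefficient(lab_dose_gy=2.000, lab_reading_c=37.10e-9)
print(f"N_D,w,Q ≈ {n_dw:.3e} Gy/C")
```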

Step 2. With the calibration coefficient *ND*,*w*,*Q*, measurements are performed at the user's institution with its reference chamber to obtain the absorbed dose to water in a beam of the same quality as at the SSDL, under the reference conditions SSD = 100 cm, radiation field 10 x 10 cm<sup>2</sup> and depth of 5 cm in water, according to Eq. (10):

$${}^{u}D\_{w,Q} = {}^{u}M\_{w,Q} \bullet N\_{D,w,Q} \tag{10}$$

where:

*uDw*,*<sup>Q</sup>* = dose measured in the user's beam under reference conditions;

*uMw*,*<sup>Q</sup>* = average reading of the reference chamber in the user's beam;

*ND*,*w*,*<sup>Q</sup>* = calibration coefficient provided to the user for a given beam quality by the Calibration Laboratory, in general gamma rays of 60Co.

As the calibration coefficient is normally defined for a 60Co gamma-ray beam, if the user has a different beam (e.g., photons of 6, 10, or 15 MV), a *kQ* factor, well described by Andreo et al. [6], should be used to adjust the detector's response to this new beam quality according to Eq. (11):

$${}^{u}D\_{w,Q} = {}^{u}M\_{w,Q} \bullet N\_{D,w,Q} \bullet k\_Q \tag{11}$$

where *uDw*,*<sup>Q</sup>*, *uMw*,*<sup>Q</sup>* and *ND*,*w*,*<sup>Q</sup>* are as defined in Eq. (10), and:

*kQ* = factor that adjusts the value measured in the user's beam quality, defined from the ratio between readings taken in the water phantom at depths of 20 cm and 10 cm, with a 10 x 10 cm<sup>2</sup> radiation field defined at the plane of the chamber, in the same geometry; that is, according to the definition of the TPR20,10 as shown in **Figure 8**.
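Eqs. (10) and (11) then reduce to multiplications. In this sketch the reading, the calibration coefficient and the *kQ* value are all hypothetical (a *kQ* near 0.99 is merely a typical order for a 6 MV Farmer-type measurement, not tabulated data):

```python
def user_dose(m_reading_c, n_dw_gy_per_c, k_q=1.0):
    """Eqs. (10)/(11): uD_{w,Q} = uM_{w,Q} · N_{D,w,Q} · kQ; kQ = 1 at 60Co."""
    return m_reading_c * n_dw_gy_per_c * k_q

# Hypothetical readings with N_D,w,Q = 5.39e7 Gy/C:
d_cobalt = user_dose(37.0e-9, 5.39e7)          # Eq. (10), 60Co beam
d_6mv = user_dose(37.0e-9, 5.39e7, k_q=0.992)  # Eq. (11), 6 MV beam
print(f"60Co: {d_cobalt:.3f} Gy, 6 MV: {d_6mv:.3f} Gy")
```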

The numerical value of this factor varies with the materials used in the chambers; the beam quality is expressed by the TPR20,10 ratio, which empirically represents the variation in the interaction and absorption behavior of each material due to their different cross sections. The typical behavior of *kQ* values as a function of the beam quality, defined by the TPR20,10, is shown in **Figure 9**.

#### **Figure 8.**

*Geometry that should be used for measurement of the beam quality Q, to obtain the kQ factor from the TPR20,10 ratio, for a source-chamber distance (SCD) of 100 cm, a 10 x 10 cm<sup>2</sup> field and measurements at depths of 10 and 20 g/cm<sup>2</sup> of water, as recommended by TRS#398 [1].*

#### **Figure 9.**

*Typical behavior of* kQ *values as a function of the beam quality, defined by the TPR20,10.*

The graph clearly shows the dependence of the *kQ* value on the chamber type, in this case for photons of different energies, using Farmer-type cylindrical chambers from various manufacturers built with different materials (TRS#398 [1]).

The measurement system that best suits this application at the user level is the ionization chamber; in this case there is no need to know its volume, as the calibration coefficient accounts for the chamber's response and not its real volume.

The TPR20,10 can also be estimated from the Percentage Depth Dose measurements using the empirical relationship, according to Eq. (12):

$$TPR\_{20,10} = 1.2661 \bullet PDD\_{20,10} - 0.0595 \tag{12}$$

where,

*TPR*20,10 = ratio of ionization measurements at 20 cm and 10 cm depth in water for a constant source-to-chamber distance and a 10 x 10 cm<sup>2</sup> field at the plane of the detector.

*PDD*20,10 = ratio between the percentage depth dose values measured at 20 and 10 cm depth for a 10 x 10 cm<sup>2</sup> field at a source-to-surface distance of 100 cm.
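Eq. (12) can be evaluated directly; in the sketch below the input PDD20,10 value is hypothetical, while the coefficients are those of Eq. (12):

```python
def tpr_20_10(pdd_20_10):
    """Eq. (12): TPR20,10 = 1.2661 · PDD20,10 - 0.0595 (PDD as a fraction)."""
    return 1.2661 * pdd_20_10 - 0.0595

# Hypothetical beam with PDD20,10 = 0.58:
print(round(tpr_20_10(0.58), 4))  # → 0.6748
```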
