
*Artificial Intelligence - Applications in Medicine and Biology*

**2.1 Patient diagnosis, assessment, and consultation**



techniques could compensate for human limitations in handling a large amount of flowing information efficiently, where simple errors can make the difference between life and death. They would also improve the quality of patient care by moving precision medicine in radiation oncology toward practical application. In this section, we go over each part of the radiation oncology workflow (**Figure 1**), presenting studies that have been conducted with machine learning models. The radiation oncology workflow proceeds from patient diagnosis and assessment, to treatment simulation, to treatment planning, to quality assurance and treatment delivery, to treatment outcome and follow-up.

The radiation oncology process begins at the first consultation, during which the radiation oncologist and patient meet to discuss the clinical situation and determine a treatment strategy [14]. The stage that precedes patient assessment and consultation is patient diagnosis, in which a patient's cancer is identified on medical images and then pathologically confirmed. Machine learning toolkits such as computer-aided detection/diagnosis have been introduced for identifying and classifying cancer subtypes (staging), e.g., classifying lesion candidates as abnormal or normal (identifying and marking suspicious areas in an image), as lesions or non-lesions (helping radiologists decide whether a patient should have a biopsy), or as malignant or benign (reporting the likelihood that a lesion is malignant). Machine learning plays a crucial role in computer-aided detection/diagnosis toolkits, and it could provide a "second opinion" to the physician in diagnostic radiology decision-making.

### *2.1.1 Computer-aided detection*

Computer-aided detection (CADe) has been defined as detection made by a physician/radiologist who takes into account the computer output as a "second opinion" [2]. CADe has been an active research area in medical imaging [2]. Its task is a classification problem, in which the ML classifier must determine "optimal" boundaries for separating classes in the multidimensional feature space. It focuses on a detection task, e.g., localization of lesions in medical images, with the possibility of providing the likelihood of detection.

Several investigators [15–18] have developed ML-based models for the detection of cancer, e.g., lung nodules [15] in thoracic computed tomography (CT) using a massive-training artificial neural network (ANN), breast microcalcifications and masses [16] in mammography using a convolutional neural network (CNN), and prostate cancer [17] and brain lesions [18] on magnetic resonance imaging (MRI) data using deep learning. Chan et al. [16] achieved very good accuracy, an area under the receiver operating characteristic curve (AUC) of 0.90, in the automatic detection of clustered breast microcalcifications on mammograms. Suzuki et al. [15] reported improved accuracy in the detection of lung nodules in low-dose CT images. Zhu et al. [17] reported an average detection rate of 89.90% for prostate cancer on MR images, with a clear indication that the high-level features learned by the deep learning method can outperform handcrafted features in detecting prostate cancer regions. The results of Rezaei et al. [18] demonstrated the superior ability of the deep learning approach in brain lesion detection.
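
None of the cited detection models is reproduced here; as a minimal, hedged illustration of the AUC metric these studies report, consider a toy lesion/non-lesion classifier on synthetic features (all feature names and data are hypothetical):

```python
# Minimal sketch (not any cited model): scoring a lesion/non-lesion
# classifier with the area under the ROC curve (AUC), the metric
# reported in the detection studies above. All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 400
# Hypothetical handcrafted features (e.g., size, contrast, circularity).
X = rng.normal(size=(n, 3))
# Synthetic "lesion present" label correlated with the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]   # per-candidate lesion likelihood
auc = roc_auc_score(y_te, scores)
print(f"AUC = {auc:.2f}")                # 1.0 = perfect, 0.5 = chance
```

An AUC such as the 0.90 reported by Chan et al. [16] would be computed the same way, only with real image-derived features and expert-confirmed labels.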

Overall, the use of computer-aided detection systems as a "second opinion" tool for identifying lesion regions in images would contribute significantly to improving diagnostic performance. For example, it would help avoid missed cancer regions, increase the sensitivity and specificity of detection (increased accuracy), and diminish inter- and intraobserver variability.


### *2.1.2 Computer-aided diagnosis*

Computer-aided diagnosis (CADx) is a computerized procedure that provides a "second objective opinion" to assist medical image interpretation and diagnosis [19]. Similar to CADe, its task is a classification problem. CADx focuses on a diagnosis (characterization) task, e.g., automatically classifying a tumor or lesion as malignant or benign, with the possibility of providing the likelihood of diagnosis.

Numerous studies [19–22] have demonstrated the application of CADx tools for diagnosing lung [19–21] and breast [19, 22] lesions. Cheng et al. [19] investigated the capability of deep learning for the diagnosis of breast lesions in ultrasound (US) images and pulmonary nodules in CT scans. Their results showed that deep-learning-based CADx can achieve better differentiation performance than the comparison methods across different modalities and diseases. **Figure 4** illustrates several cases of breast lesions and pulmonary nodules in US and CT images, respectively, differentiated with deep-learning-based CADx [19]. Feng et al. [20] and Beig et al. [21] studied the classification of lung lesions on endobronchoscopic images [20] with logistic regressions, and the distinction of non-small cell lung cancer (NSCLC) adenocarcinomas from granulomas on non-contrast CT [21] using a support vector machine (SVM) and a neural network (NN). The reported results indicated an accuracy of 86% in distinguishing lung cancer types, e.g., adenocarcinoma and squamous cell carcinoma [20]. Remarkably, the reported results [21] for distinguishing NSCLC adenocarcinomas from granulomas on non-contrast CT images showed that the developed CADx systems outperformed the radiologist readers. Joo et al. [22] developed a CADx system using an ANN for diagnosing breast nodule malignancy in US images. Their results demonstrated the potential to increase the specificity of US for the characterization of breast lesions.

Overall, a computer-aided diagnosis tool used as a "second opinion" system could significantly enhance radiologists' performance by reducing the rate of misdiagnosed malignant cases, which in turn decreases the number of false-positive cases sent for surgical biopsy. Also, with CADx, diagnosis can be performed on multimodality medical images in a non-invasive (no biopsy), fast (fast scanning), and low-cost (no additional examination cost) way.

#### **Figure 4.**

*Computer-aided diagnosis of lung nodules and breast lesions with deep learning. The cases shown may be hard to differentiate for a person without a medical background and for a junior medical doctor (reproduced from [19]).*

### *2.1.3 Assessment and consultation*

During the patient assessment phase, the radiation oncologist and patient meet to discuss the clinical situation. Circumstances such as the risks and benefits of treatment and the patient's goals of care determine the treatment strategy [14]. Information useful for assessing the potential benefit of treatment is acquired, e.g., tumor stage, prior and current therapies, margin status if post-resection, ability to tolerate multimodality therapy, and overall performance status [14]. Parameters that affect the potential risk and tolerability of treatment are balanced, e.g., patient age, comorbidities, functional status, the proximity between the tumor and critical normal tissues, and the ability to cooperate with motion management [14]. All of these represent valuable features that can be used to build predictive models of treatment outcome and toxicity. These models can then be used to inform physicians and patients, manage expectations, and guide trade-offs between risks and benefits [14].

Machine learning models [23–26] such as logistic regressions, decision trees, random forests, gradient boosting, and support vector machines are suitable for this purpose. Logistic regressions and decision trees are similarly effective [23, 24] when the goal is to assist physicians and patients in reaching the best decision, striking a balance between interpretability of the results and accurate predictions. If accuracy is favored over interpretability, methods [25, 26] such as random forests, gradient boosting, and SVMs with kernels perform better and consistently win most modeling competitions [14].
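
The trade-off above can be illustrated with a hedged sketch on synthetic "clinical" features (all variable names and effect sizes are hypothetical): a logistic regression with directly readable coefficients versus a random forest that can capture interactions:

```python
# Sketch of the interpretability/accuracy trade-off described above,
# on synthetic "clinical" features. Not a clinical model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 600
age = rng.normal(65, 10, n)            # hypothetical feature
stage = rng.integers(1, 5, n)          # hypothetical tumor stage 1-4
performance = rng.integers(0, 3, n)    # hypothetical performance status
X = np.column_stack([age, stage, performance])
# Synthetic toxicity outcome with an interaction a linear model misses.
risk = 0.04 * (age - 65) + 0.5 * ((stage >= 3) & (performance >= 2))
y = (risk + rng.normal(scale=0.5, size=n) > 0).astype(int)

logit = LogisticRegression(max_iter=1000)
forest = RandomForestClassifier(n_estimators=200, random_state=0)
auc_logit = cross_val_score(logit, X, y, cv=5, scoring="roc_auc").mean()
auc_forest = cross_val_score(forest, X, y, cv=5, scoring="roc_auc").mean()
print(f"logistic regression AUC: {auc_logit:.2f}")  # coefficients stay readable
print(f"random forest AUC:       {auc_forest:.2f}")
```

The logistic model's fitted coefficients can be shown to a physician as per-feature effects, while the forest trades that readability for the ability to exploit feature interactions.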

Overall, delivering models that could help in these scenarios requires standardized nomenclature and standards for collecting these heterogeneous patient clinical data, both of which remain a challenge in radiation oncology.

## **2.2 Treatment simulation**

Once a physician and patient have decided to proceed with radiation therapy, the physician places detailed orders for a simulation, which is then scheduled. The order for simulation includes details about immobilization, scan range, treatment site, and other specifics necessary to complete the procedure appropriately [14]. Patient preparation for simulation could include fiducial placement, fasting or bladder/rectal filling instructions, or kidney function testing for intravenous (IV) contrast. Special instructions are given for patients who have a cardiac device or are pregnant, and lift help or a translator is requested if necessary [14]. The treatment simulation process typically includes patient setup and immobilization, three- or four-dimensional computed tomography (3DCT or 4DCT) image data acquisition, and image reconstruction/segmentation. Machine learning algorithms could play an essential role in this sequence, improving simulation quality and hence the treatment outcome.

### *2.2.1 3D/4DCT image acquisition*

Three-dimensional CT anatomical image information for the patient is acquired during the simulation on a dedicated CT scanner ("CT simulator") to be used later for treatment planning. A good CT simulation is critical to the success of all subsequent processes and to achieving an accurate, high-quality, robust, and deliverable plan for a patient. It can prevent a repeated CT simulation due to insufficient scan range, suboptimal immobilization, non-optimal bladder/rectal filling, artifacts, lack of breath-hold reproducibility, and so on [14]. 4DCT scanning is used increasingly in radiotherapy departments to track the motion of tumors in relation to the patient's respiratory cycle. It monitors the patient's breathing and can either acquire CT images at a certain point in the breathing cycle or acquire them over the whole cycle. These CT data are then used to generate an internal target volume (ITV) that encompasses the motion of the clinical target volume (CTV), or maximum intensity projection (MIP) scans to aid in the definition of an ITV [2]. 4DCT imaging is necessary for the successful implementation of stereotactic ablative radiotherapy (SBRT), e.g., for early-stage NSCLC.
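
In the simplest voxel-wise view, the MIP and ITV constructions above reduce to a maximum and a union over the respiratory phases. A toy 2D sketch with synthetic phase images follows (clinical systems operate on full 3D volumes and contoured structures):

```python
# Toy voxel-wise sketch of MIP and ITV construction from 4DCT phases.
# Synthetic 2D data; real systems operate on 3D CT volumes.
import numpy as np

rng = np.random.default_rng(1)
shape = (64, 64)
n_phases = 10

yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
phases, ctv_masks = [], []
for p in range(n_phases):
    # Circular "CTV" drifting up/down with a sinusoidal respiratory cycle.
    cy = 32 + int(6 * np.sin(2 * np.pi * p / n_phases))
    mask = (yy - cy) ** 2 + (xx - 32) ** 2 <= 5 ** 2
    image = rng.normal(0.0, 0.05, shape)
    image[mask] = 1.0                      # bright tumor on a noisy background
    phases.append(image)
    ctv_masks.append(mask)

phases = np.stack(phases)
ctv_masks = np.stack(ctv_masks)

mip = phases.max(axis=0)                   # maximum intensity projection
itv = ctv_masks.any(axis=0)                # ITV = union of the CTV over phases

print("single-phase CTV voxels:", int(ctv_masks[0].sum()))
print("ITV voxels (>= CTV):    ", int(itv.sum()))
```

The ITV is necessarily at least as large as any single-phase CTV, which is exactly why it encompasses the tumor's motion envelope.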


*Radiation Oncology in the Era of Big Data and Machine Learning for Precision Medicine*

Few works [27–30] have been carried out using ML-based methods for this purpose. For instance, Fayad et al. [27] demonstrated an ML method based on principal component analysis (PCA) to develop a global respiratory motion model capable of relating external patient surface motion to internal structure motion without the need for a patient-specific 4DCT acquisition. Its findings look promising, but future work assessing the model more extensively is needed. Another study, by Steiner et al. [28], investigated an ML-based model using correlations and linear regressions to quantify whether 4DCT or 4D cone-beam CT (4DCBCT) represents the actual motion range during treatment, using Calypso (Varian Medical Systems Inc., Palo Alto, CA, USA) motion signals as the "ground truth." The study found that 4DCT and 4DCBCT under-predict intra-fraction lung target motion during radiotherapy. A third study, by Dick et al. [29], examined an ANN model for fiducial-less tracking in the radiotherapy of liver tumors by tracking the lung-diaphragm border. The findings showed that the diaphragm and tracking volumes are closely related, and the method has shown the potential to replace fiducial markers in clinical application. Finally, a study by Johansson et al. [30] investigated an ML-based PCA model for reconstructing breathing-compensated images showing the phases of gastrointestinal (GI) motion. Its results indicated that GI 4D MRIs could help define internal target volumes for treatment planning or support GI motion tracking during irradiation.
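
In the spirit of such surrogate-driven motion models (though not the actual method of Fayad et al. [27]), the linear coupling between an external surface signal and internal target motion can be captured by the first principal component of the joint signal. Everything below is synthetic:

```python
# Toy PCA-flavored motion model: learn the linear coupling between an
# external surrogate (surface marker) and internal target motion, then
# predict internal motion from the surrogate alone. Signals are synthetic.
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 30, 600)                  # 30 s of breathing, 20 Hz
external = np.sin(2 * np.pi * t / 4)         # surface motion (a.u.)
internal = 8.0 * external + 2.0 + rng.normal(scale=0.3, size=t.size)  # mm

# The first principal component of the joint (external, internal) signal
# captures the shared respiratory mode; its direction is the coupling.
data = np.column_stack([external, internal])
mean = data.mean(axis=0)
_, _, vt = np.linalg.svd(data - mean, full_matrices=False)
pc1 = vt[0]

def predict_internal(ext):
    """Project an external sample onto PC1 and read off internal motion."""
    score = (ext - mean[0]) / pc1[0]
    return mean[1] + score * pc1[1]

err = np.abs(predict_internal(external) - internal)
print(f"mean |prediction error|: {err.mean():.2f} mm")
```

Real models of this kind must also handle phase lags and irregular breathing, which a single linear component cannot.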

Overall, the discussed ML-based methods in the simulation area have shown the potential for improved accuracy of patient CT simulation. Machine learning for 3D/4DCT image acquisition and simulation is an area where the community has focused little effort. Thus, within simulation there are many questions that could be answered or optimized with ML algorithms to aid decision-making and overall workflow efficiency.

### *2.2.2 Image reconstruction*

Here, we explore the power of machine-learning-based methods for image reconstruction in the radiation oncology procedure. We present two application examples in which ML has been utilized: estimating CT from MRI images and reconstructing a 7 Tesla (7 T)-like MR image from a 3 T MR image.

The first application supports reconstructing one image modality from another, e.g., a CT image from an MR image. Clinical implementation of an MRI-only treatment planning radiotherapy approach requires a method to derive or reconstruct a synthetic CT image from an MR image. CT currently supports the radiation oncology treatment planning workflow for dose calculations. However, the CT imaging modality has some limitations in comparison with other modalities such as MRI: (a) CT images provide poor soft-tissue contrast compared to MRI scans, which offer superior visualization of anatomical structures and tumors, and (b) CT exposes the patient to ionizing radiation during imaging, which may cause side effects, whereas MRI is much safer and does not involve ionizing radiation.

Numerous studies [31–34] have demonstrated ML-based approaches to map MR images to CT images, such as a deep learning (fully convolutional CNN) model [31], a boosting-based sampling (RUSBoost) algorithm [32], a random forest with an auto-context model [33], and a U-net CNN model [34]. The experimental results of Nie et al. [31] showed that the deep learning method is accurate and robust for predicting a CT image from an MRI image. **Figure 5** shows a synthetic CT image estimated from MRI data with deep learning, alongside the "ground truth" [31]. The developed deep learning model outperformed the other state-of-the-art methods under comparison. Bayisa et al. [32] proposed a boosting-based approach that outperformed existing model-based methods in CT estimation quality on brain and bone tissues.
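
The cited models are deep networks trained on registered MR/CT pairs. As a far simpler stand-in that conveys the patch-regression idea behind several of these methods (it is not the method of [31] or [33]; images are synthetic phantoms), a random forest can regress a CT-like intensity from a small MR patch:

```python
# Simplified stand-in for synthetic-CT estimation: regress a CT-like
# value from a 3x3 "MR" patch with a random forest. Synthetic data only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)

def make_pair(shape=(48, 48)):
    """Synthetic 'MR' image and a deterministic 'CT' derived from it."""
    mr = rng.normal(size=shape)
    mr = (mr + np.roll(mr, 1, 0) + np.roll(mr, 1, 1)) / 3   # light smoothing
    ct = 1000.0 * np.tanh(mr) + 40.0                        # nonlinear mapping
    return mr, ct

def patches(mr, r=1):
    """Flattened 3x3 MR patches for every interior pixel."""
    h, w = mr.shape
    return np.array([mr[y - r:y + r + 1, x - r:x + r + 1].ravel()
                     for y in range(r, h - r) for x in range(r, w - r)])

mr_train, ct_train = make_pair()
mr_test, ct_test = make_pair()

X = patches(mr_train)
y = ct_train[1:-1, 1:-1].ravel()                 # CT value at each patch center
model = RandomForestRegressor(n_estimators=30, random_state=0).fit(X, y)

pred = model.predict(patches(mr_test))
mae = np.abs(pred - ct_test[1:-1, 1:-1].ravel()).mean()
print(f"mean absolute error of synthetic CT: {mae:.1f} (HU-like units)")
```

Deep models replace the hand-cut patches with learned convolutional features and predict whole image blocks at once, but the training signal (paired MR/CT intensities) is the same.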

*DOI: http://dx.doi.org/10.5772/intechopen.84629*




The experimental results of Huynh et al. [33] showed that a structured random forest with an auto-context model can accurately predict CT images in various scenarios and outperformed two state-of-the-art methods. Chen et al. [34] investigated the feasibility of a deep CNN for MRI-based synthetic CT generation. In a gamma analysis of their results against the "ground truth" CT image, the 1%/1 mm gamma pass rate was over 98.03%. Regarding dosimetric accuracy, the discrepancy in dose-volume histogram (DVH) parameters was less than 0.87%, and the maximum point-dose discrepancy within the planning target volume (PTV) was less than 1.01% with respect to the prescription for prostate intensity-modulated radiotherapy (IMRT) planning.
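
The 1%/1 mm gamma analysis mentioned above combines a dose-difference criterion with a distance-to-agreement criterion. A hedged one-dimensional sketch of the computation follows (real QA tools evaluate 2D/3D dose grids; profiles here are synthetic Gaussians):

```python
# 1D illustration of gamma analysis: compare an evaluated dose profile
# against a reference with combined 1% dose / 1 mm distance criteria.
import numpy as np

def gamma_1d(ref_dose, eval_dose, x, dose_tol=0.01, dist_tol=1.0):
    """Gamma index per reference point; gamma <= 1 counts as a pass."""
    ref_norm = ref_dose.max()
    gammas = []
    for xi, di in zip(x, ref_dose):
        dist2 = ((x - xi) / dist_tol) ** 2                    # mm term
        dose2 = ((eval_dose - di) / (dose_tol * ref_norm)) ** 2  # % term
        gammas.append(np.sqrt(np.min(dist2 + dose2)))
    return np.array(gammas)

x = np.linspace(0, 100, 501)              # positions in mm (0.2 mm grid)
ref = np.exp(-((x - 50) / 20) ** 2)       # reference dose profile
eval_d = np.exp(-((x - 50.3) / 20) ** 2)  # evaluated profile, 0.3 mm shift

gamma = gamma_1d(ref, eval_d, x)
pass_rate = 100.0 * (gamma <= 1.0).mean()
print(f"1%/1 mm gamma pass rate: {pass_rate:.1f}%")
```

A small spatial shift well inside the 1 mm distance-to-agreement therefore still yields a high pass rate, which is the point of the combined criterion.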

Overall, the presented findings clearly demonstrate the potential of the discussed methods to generate synthetic CT images to support the MR-only workflow of radiotherapy treatment planning and image guidance.

The second application supports reconstructing a high-quality image modality from a lower-quality one, e.g., a 7 T-like MR image from a 3 T MR image. Advanced ultra-high-field 7 T scanners provide MR images with higher resolution and better tissue contrast than routine 3 T MRI scanners. However, 7 T MRI scanners are currently more expensive, less available in clinical centers, and subject to stricter safety restrictions because of their extremely high magnetic field strength. As a result, generating/reconstructing a 7 T-like MR image from a 3 T MR image with ML-based approaches would address these concerns as well as facilitate early disease diagnosis.

Researchers [35–38] have developed ML-based models to generate a 7 T-like MR image from a 3 T MR image. Approaches based on a deep learning CNN [35], hierarchical reconstruction based on group sparsity in a novel multi-level canonical correlation analysis (CCA) space [36], and random forests with sparse representation [37, 38] have been investigated to map 3 T MR images to 7 T-like MR images. The visual and numerical results of Bahrami et al. [35] showed that the deep learning method outperformed the comparison methods. **Figure 6** presents the reconstruction of a 7 T-like MR image from a 3 T MR image with deep learning. A second study [36] by the same authors showed that hierarchical reconstruction based on group sparsity outperformed previous methods and resulted in higher accuracy in the segmentation of brain structures, compared to segmentation of 3 T MR images. Other studies by Bahrami et al. [37, 38], using a random forest regression model and a group sparse representation, showed that the predicted 7 T-like MR images best matched the "ground truth" 7 T MR images, compared to other methods. Moreover, an experiment on brain tissue segmentation showed that the predicted 7 T-like MR images led to the highest segmentation accuracy, compared to segmentation of 3 T MR images.

Overall, the predicted 7 T-like MR images have demonstrated better spatial resolution compared to 3 T MR images. Moreover, delineation of critical structures, i.e., brain tissue structures, on 7 T-like MR images showed better accuracy compared to segmentation of 3 T MR images. In addition, such high-quality 7 T-like MR images could better support disease diagnosis and intervention.

#### **Figure 5.**

*Synthetic CT image from MRI data. MR image (left), CT estimated from the MR image (middle) with deep learning, and "ground truth" CT (right) for the same subject (reproduced from [31]).*


#### **Figure 6.**

*Reconstruction of a 7 T-like MR image from a 3 T MR image. 3 T MR image (left), reconstructed 7 T-like MR image (middle) using deep learning, and 7 T MR "ground truth" image (right) of the same subject, each with the same selected zoomed area. The 7 T MR image shows clearly better anatomical details and tissue contrast compared to the 3 T MR image (reproduced from [35]).*

### *2.2.3 Image registration/fusion*

Image registration in radiotherapy is the process of rigidly aligning images, which allows some changes between images to be detected easily. However, such an alignment does not model changes from, e.g., organ deformation, patient weight loss, or tumor shrinkage. Such changes can be taken into account using deformable image registration (DIR), a method for finding the mapping between points in one image and the corresponding points in another image. DIR has the prospect of being widely integrated into many different steps of the radiotherapy process: the tasks of planning, delivery, and evaluation of radiotherapy can all be improved by taking organ deformation into account. The use of image registration in image-guided radiotherapy (IGRT) can be split into intra-patient (*inter- and intra-fractional*) and inter-patient registration. Intra-patient registration matches images of a single patient, e.g., *inter-fractional registration* (improving patient positioning and evaluating organ motion relative to bones) and *intra-fractional registration* (online tracking of organ movement). In contrast, inter-patient registration matches images from different patients (e.g., an "average" of images acquired from a number of patients, allowing information to be transferred from the atlas to the newly acquired image). The process of combining information from two images after they have been registered is called data fusion. A particular use of data transfer between images is the propagation of contours from the planning image or an atlas to a newly acquired image [39, 40]. Although many image registration methods have been proposed, challenges remain for DIR in complex situations, e.g., large anatomical changes and dynamic appearance changes.
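
The contour-propagation step above can be sketched as warping a planning-image mask with a deformation vector field (DVF). The DVF below is synthetic, whereas real DIR algorithms estimate it from the two images:

```python
# Toy contour propagation: warp a planning-image organ mask into a new
# image using a synthetic deformation vector field (DVF), with
# nearest-neighbor sampling. Real DIR estimates the DVF from image data.
import numpy as np

shape = (64, 64)
yy, xx = np.mgrid[0:shape[0], 0:shape[1]]

# Planning-image contour: a disc-shaped organ mask.
mask = (yy - 32) ** 2 + (xx - 32) ** 2 <= 10 ** 2

# Synthetic DVF: a ~3-voxel shift in y plus a smooth local bulge.
dvf_y = 3.0 + 1.5 * np.sin(xx / 10.0)
dvf_x = np.zeros(shape)

# Pull-back warping: the propagated mask at (y, x) samples the planning
# mask at (y - dvf_y, x - dvf_x), rounded to the nearest voxel.
src_y = np.clip(np.rint(yy - dvf_y).astype(int), 0, shape[0] - 1)
src_x = np.clip(np.rint(xx - dvf_x).astype(int), 0, shape[1] - 1)
propagated = mask[src_y, src_x]

shift = propagated.nonzero()[0].mean() - mask.nonzero()[0].mean()
print("planning-mask voxels:", int(mask.sum()))
print("propagated voxels:   ", int(propagated.sum()))
print(f"centroid shift (rows): {shift:.2f}")
```

Clinical tools additionally interpolate sub-voxel positions, regularize the DVF for smoothness and invertibility, and validate the warped contours before use.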

image could better help disease diagnosis and intervention.

*details and tissue contrast compared to 3 T MR image (reproduced from [35]).*

*2.2.3 Image registration/fusion*

**Figure 6.**

*DOI: http://dx.doi.org/10.5772/intechopen.84629*

*Radiation Oncology in the Era of Big Data and Machine Learning for Precision Medicine DOI: http://dx.doi.org/10.5772/intechopen.84629*

#### **Figure 6.**

*Artificial Intelligence - Applications in Medicine and Biology*

flow of radiotherapy treatment planning and image guidance.

Huynh et al. [33] experimental results showed that a structured random forest and auto-context based model can accurately predict CT images in various scenarios, and also outperformed two state-of-the-art methods. Chen et al. [34] investigated the feasibility of a deep CNN for MRI-based synthetic CT generation. The gamma analysis of their results with "ground truth" CT image for 1%/1 mm gamma pass rates was over 98.03%. The dosimetric accuracy on the dose-volume histogram (DVH) parameters discrepancy was less than 0.87% and the maximum point dose discrepancy within PTV (planning target volume) was less than 1.01% respect to the prescription on prostate intensity modulated radiotherapy (IMRT) planning. Overall, the presented findings have obviously demonstrated the potential of the discussed methods to generate synthetic CT images to support the MR-only work-

The second application supports reconstructing a high-quality image modality from a lower-quality one, e.g., a 7 T-like MR image from a 3 T MR image. Advanced ultra-high-field 7 T scanners provide MR images with higher resolution and better tissue contrast than routine 3 T MRI scanners. However, 7 T MRI scanners are currently more expensive, less available in clinical centers, and subject to stricter safety restrictions because of their extremely high magnetic field strength. As a result, generating/reconstructing a 7 T-like MR image from a 3 T MR image with ML-based approaches would resolve these concerns as well as facilitate early disease diagnosis.

**Figure 5.**
*Synthetic CT image from MRI data. MR image (left), estimated CT from the MR (middle) with deep learning, and "ground truth" CT image (right) for the same subject (reproduced from [31]).*

Researchers [35–38] have developed ML-based models to generate a 7 T-like MR image from a 3 T MR image. Approaches based on a deep convolutional neural network (CNN) [35], hierarchical reconstruction based on group sparsity in a novel multi-level canonical correlation analysis (CCA) space [36], and random forest and sparse representation [37, 38] have been investigated to map 3 T MR images to 7 T-like MR images. The visual and numerical results of Bahrami et al. [35] showed that the deep learning method outperformed the comparison methods. **Figure 6** presents the reconstruction of a 7 T-like MR image from a 3 T MR image with deep learning. A second study [36] by the same author showed that hierarchical reconstruction based on group sparsity outperformed previous methods and yielded higher accuracy in the segmentation of brain structures, compared to segmentation of 3 T MR images. Other studies by Bahrami et al. [37, 38], using a random forest regression model and a group sparse representation, showed that the predicted 7 T-like MR images best matched the "ground truth" 7 T MR images, compared to other methods. Moreover, experiments on brain tissue segmentation showed that the predicted 7 T-like MR images led to the highest segmentation accuracy, compared to segmentation of 3 T MR images.

**Figure 6.**
*Reconstruction of 7 T-like MR image from 3 T MR image. 3 T MR image (left), reconstructed 7 T-like MR image (middle) using deep learning, and 7 T MR "ground truth" image (right) of the same subject, each with the same selected zoomed area. The 7 T MR image shows clearly better anatomical details and tissue contrast than the 3 T MR image (reproduced from [35]).*

Overall, the predicted 7 T-like MR images have demonstrated better spatial resolution than 3 T MR images. Moreover, delineation of critical structures, i.e., brain tissue structures, on 7 T-like MR images showed better accuracy than segmentation of 3 T MR images. In addition, such high-quality 7 T-like MR images could better support disease diagnosis and intervention.
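The patch-wise mapping idea behind these reconstruction methods can be sketched with plain numpy. The example below is illustrative only: it uses a synthetic phantom, simulates the "3 T" input by blurring plus noise, and replaces the random-forest/CNN regressors of [35, 37] with a simple least-squares regression from each 5 × 5 low-quality patch to the corresponding high-quality center voxel.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "7 T" target: a smooth phantom with a sharp bright inset.
yy, xx = np.mgrid[0:64, 0:64]
hi = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 300.0)
hi[20:30, 36:46] += 0.5  # a sharp "lesion"

def box_blur(img, k=1):
    """(2k+1) x (2k+1) mean filter - crude stand-in for lower image quality."""
    p = np.pad(img, k, mode="edge")
    out = np.zeros_like(img)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out += p[k + dy:k + dy + img.shape[0], k + dx:k + dx + img.shape[1]]
    return out / (2 * k + 1) ** 2

lo = box_blur(hi, 1) + 0.01 * rng.standard_normal(hi.shape)  # simulated "3 T"

# Collect 5x5 low-quality patches and the matching high-quality centre voxels.
r = 2
feats, targets = [], []
for y in range(r, 64 - r):
    for x in range(r, 64 - r):
        feats.append(lo[y - r:y + r + 1, x - r:x + r + 1].ravel())
        targets.append(hi[y, x])
A = np.column_stack([np.array(feats), np.ones(len(feats))])  # add bias term
w, *_ = np.linalg.lstsq(A, np.array(targets), rcond=None)

pred = A @ w                               # reconstructed "7 T-like" voxels
truth = np.array(targets)
centre = lo[r:64 - r, r:64 - r].ravel()    # the raw "3 T" voxel values
mse_lo = float(np.mean((centre - truth) ** 2))
mse_rec = float(np.mean((pred - truth) ** 2))
```

Because the least-squares fit can always fall back on copying the patch center, its training error cannot exceed that of the raw input, and in practice the learned filter partially undoes the blur, so `mse_rec` falls below `mse_lo`.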

#### *2.2.3 Image registration/fusion*

Image registration in radiotherapy is the process of aligning images. Rigid alignment allows some changes between images to be detected easily; however, it does not model changes from, e.g., organ deformation, patient weight loss, or tumor shrinkage. Such changes can be taken into account using deformable image registration (DIR), a method for finding the mapping between points in one image and the corresponding points in another image. DIR has the prospect of being widely integrated into many different steps of the radiotherapy process: the tasks of planning, delivery, and evaluation of radiotherapy can all be improved by taking organ deformation into account. Use of image registration in image-guided radiotherapy (IGRT) can be split into intra-patient (*inter- and intra-fractional*) and inter-patient registration. Intra-patient registration matches images of a single patient, e.g., *inter-fractional registration* (i.e., improving patient positioning and evaluating organ motion relative to bones) and *intra-fractional registration* (i.e., online tracking of organ movement). In contrast, inter-patient registration matches images from different patients (e.g., against an "average" atlas built from images acquired from a number of patients, thereby allowing information to be transferred from the atlas to the newly acquired image). The process of combining information from two images after these have been registered is called data fusion. A particular use of data transfer between images is the propagation of contours from the planning image or an atlas to a newly acquired image [39, 40]. Although many image registration methods have been proposed, there are still challenges for DIR in complex situations, e.g., large anatomical changes and dynamic appearance changes.
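For intuition, intra-patient rigid registration can be reduced to its simplest case, recovering a pure translation, which phase correlation solves in closed form. The sketch below uses two hypothetical synthetic slices (a Gaussian "organ" and a circularly shifted copy) rather than real patient data.

```python
import numpy as np

# Two toy single-slice images: the "moving" image is the "fixed" one
# shifted by a rigid translation of (5, -3) voxels.
yy, xx = np.mgrid[0:64, 0:64]
fixed = np.exp(-((yy - 24) ** 2 + (xx - 30) ** 2) / 32.0)  # smooth "organ"
moving = np.roll(fixed, (5, -3), axis=(0, 1))

# Phase correlation: the normalised cross-power spectrum inverse-transforms
# to a sharp peak at the translation between the two images.
F = np.fft.fft2(fixed)
M = np.fft.fft2(moving)
cross_power = M * np.conj(F)
cross_power /= np.abs(cross_power) + 1e-12
corr = np.fft.ifft2(cross_power).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
dy, dx = int(dy), int(dx)
# Unwrap the circular peak indices to signed shifts.
dy = dy - 64 if dy > 32 else dy
dx = dx - 64 if dx > 32 else dx
print((dy, dx))  # → (5, -3)
```

Real IGRT registration adds rotation, scaling, and (for DIR) a full displacement field on top of this, but the same "find the transform that maximizes image agreement" principle applies.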

Advancement in computer vision and deep learning could provide solutions to overcome these challenges of conventional rigid/deformable image registration.

Various machine learning-based methods [41–47] for image registration have been proposed to not only align the anatomical structures but also alleviate appearance differences. Hu et al. [41] proposed a regression forest-based method for registration of two arbitrary MR images; the learning-based method achieved higher registration accuracy than counterpart registration methods. Zagoruyko et al. [42] proposed a general similarity function for comparing image patches, a task underlying many computer vision problems; their results showed that a CNN-based model can significantly outperform other state-of-the-art methods. Jiang et al. [43] employed a discriminative local derivative pattern method to achieve fast and robust multimodal image registration; the proposed method achieved superior accuracy in multimodal image registration and also indicated potential for clinical US-guided intervention. Neylon et al. [44] developed a deep neural network for automated quantification of DIR performance; correlations between the network-predicted error and the "ground truth" for the planning target volume (PTV) and the organs at risk (OARs) were consistently greater than 0.90. Wu et al. [45, 46] developed an NN-based registration quality evaluator and a deep learning-based image registration framework, respectively, to improve registration robustness: the quality evaluator [45] showed potential for use in a 2D/3D rigid image registration system to improve overall robustness, and the new registration framework [46] consistently produced more accurate registration results than the state-of-the-art. Kearney et al. [47] developed a deep unsupervised learning strategy for CBCT-to-CT deformable image registration; their results indicated that the deep learning method outperformed rigid registration, intensity-corrected demons, and landmark-guided deformable image registration on all evaluation metrics.
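The demons algorithm mentioned above as a comparison baseline can be illustrated in one dimension with a few lines of numpy. This is a toy single-scale sketch on synthetic profiles, not a clinical implementation: the displacement update is Thirion's classical demons force, and a box filter stands in for proper Gaussian regularization.

```python
import numpy as np

x = np.arange(100, dtype=float)
fixed = np.exp(-((x - 50) ** 2) / 50.0)    # reference intensity profile
moving = np.exp(-((x - 53) ** 2) / 50.0)   # same profile, shifted by +3

u = np.zeros_like(x)          # displacement field: sample moving at x + u
kernel = np.ones(7) / 7.0     # box smoothing as a crude regulariser
grad_f = np.gradient(fixed)

def ssd(a, b):
    return float(np.sum((a - b) ** 2))

ssd_before = ssd(moving, fixed)
for _ in range(200):
    warped = np.interp(x + u, x, moving)
    diff = warped - fixed
    # Demons force: intensity mismatch projected onto the fixed-image
    # gradient, with the classical normalisation in the denominator.
    u -= diff * grad_f / (grad_f ** 2 + diff ** 2 + 1e-9)
    u = np.convolve(u, kernel, mode="same")  # keep the field smooth
ssd_after = ssd(np.interp(x + u, x, moving), fixed)
```

After the iterations, the displacement field settles near the true +3-voxel shift inside the profile's support, and the sum-of-squared-differences drops sharply. Learning-based DIR methods such as [46, 47] effectively replace this hand-crafted iterative update with a trained network prediction.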

Overall, most of the machine learning-based methods discussed here for image registration have shown superior accuracy in multimodal image registration. Hence, improved rigid/deformable image registration in radiation oncology appears clinically feasible.

#### *2.2.4 Image segmentation/auto-contouring*

Volume definition is a prerequisite for meaningful 3D treatment planning and for accurate dose reporting. International Commission on Radiation Units and Measurements (ICRU) Reports No. 50, 62, 71, and 83 [48] define and describe target volumes (e.g., the planning target volume) and critical structure/normal tissue (organ at risk) volumes that aid in the treatment planning process and provide a basis for comparison of treatment outcomes. An organ at risk (OAR) is an organ whose sensitivity to radiation is such that the dose received from a treatment plan may be significant compared with its tolerance, and it may need to be delineated to evaluate the dose it receives [49]. Multimodal diagnostic images, e.g., CT, MRI, US, positron emission tomography (PET)/CT, etc., can be used through image fusion to help delineate tumor and OAR structures on CT slices acquired during the patient's treatment simulation. The delineation (auto-contouring) process has subsequently come to be performed via automated or semi-automated analytical model-based software commercially available for clinical use (e.g., atlas-based models). These software tools perform reasonably well for delineation of critical organs/OARs but are not yet ready for tumor/target contouring, which remains a challenging task. State-of-the-art machine learning algorithms may play an effective role here for both tasks.
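Whether a contour comes from an atlas tool or an ML model, its agreement with a manual "ground truth" delineation is commonly scored with the Dice similarity coefficient. A minimal sketch, using hypothetical toy binary masks rather than real contours:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks (1.0 = identical)."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy example: an auto-contour offset by two voxels from the manual one.
manual = np.zeros((32, 32), dtype=bool)
manual[8:24, 8:24] = True
auto = np.zeros((32, 32), dtype=bool)
auto[10:26, 8:24] = True

print(round(dice(manual, auto), 3))  # → 0.875
```

Dice is insensitive to where a disagreement occurs, so in practice it is often reported alongside surface-distance metrics such as the Hausdorff distance.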

*Radiation Oncology in the Era of Big Data and Machine Learning for Precision Medicine*
*DOI: http://dx.doi.org/10.5772/intechopen.84629*

**Figure 7.**
*Whole glioma brain tumor segmentation on MRI (BRATS'2017 dataset [50, 51]). (a) T2-FLAIR MRI, (b) manual "ground truth" glioma segmentation by an experienced board-certified radiation oncologist, (c) machine learning (SVM model) glioma segmentation [52], and (d) overlap of both manual and ML segmentation annotations; for four different subjects.*

Several ML-based methods [52–58] have been reported for tumor/target segmentation/auto-contouring, e.g., brain [52–55], prostate [56], rectum [57], sclerosis lesion [58], etc. The reported results showed that deep learning [54, 55] and ensemble learning [50, 53] methods were the winning algorithms over other ML-based methods in the brain tumor segmentation competitions [50]. A method by Osman [52] based on SVM for glioma brain tumor segmentation showed robust, consistent performance on the training and new "unseen" testing data, and its reported accuracy on multi-institution datasets was reasonably acceptable. **Figure 7** shows whole glioma brain tumor segmentation on MRI (BRATS'2017 dataset [50, 51]) with an SVM model [52]. For organ segmentation, deep learning algorithms [57, 59, 60] have shown superior performance over other state-of-the-art segmentation methods and commercially available software for segmentation of, e.g., the rectum [57], parotid [59], etc.

Overall, tumor/target segmentation/auto-contouring using ML-based methods remains challenging, in part because of the limited availability of big data of multimodal images with "ground truth" annotations for training these models. Recent advances in computer vision, specifically around deep learning [61], are particularly well suited for segmentation, and deep learning has shown superiority over other machine learning algorithms for tumor and organ segmentation tasks.
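The SVM-based segmentation approach discussed above can be reduced to its core idea: classify each voxel as tumor or normal from per-voxel image features. The sketch below is not the method of [52]; it trains scikit-learn's `SVC` on entirely simulated two-channel "intensity" features, assuming scikit-learn is available.

```python
import numpy as np
from sklearn.svm import SVC  # assumes scikit-learn is installed

rng = np.random.default_rng(0)

# Simulated per-voxel features (two synthetic "MRI channels"); tumour
# voxels are brighter on the second channel. Entirely toy data.
n = 400
tumour = rng.normal(loc=(1.0, 2.0), scale=0.3, size=(n, 2))
normal = rng.normal(loc=(1.0, 1.0), scale=0.3, size=(n, 2))
X = np.vstack([tumour, normal])
y = np.array([1] * n + [0] * n)  # 1 = tumour voxel, 0 = normal voxel

# Shuffle and hold out a test set, then fit a kernel SVM voxel classifier.
idx = rng.permutation(2 * n)
train, test = idx[:600], idx[600:]
clf = SVC(kernel="rbf", C=1.0).fit(X[train], y[train])
acc = clf.score(X[test], y[test])
```

Real pipelines build far richer per-voxel features (multi-sequence intensities, texture, spatial priors) and post-process the voxel labels into a coherent contour, but the classify-then-assemble structure is the same.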
