**3. CT diagnosis algorithms**

Medical imaging is a useful supplement to reverse transcription polymerase chain reaction (RT-PCR) testing for the confirmation of COVID-19. Researchers have found that CT images of COVID-19 patients exhibit typical imaging characteristics. During the last year, studies have shown that typical chest CT patterns of COVID-19 viral pneumonia include multifocal bilateral peripheral ground-glass areas associated with subsegmental patchy consolidations, mostly subpleural, and predominantly involving lower lung lobes and posterior segments [28–32]. In more detail, chest CT images of COVID-19 patients could be evaluated using the following characteristics [33–40]:




GGO, which is defined as hazy increased lung attenuation with preservation of bronchial and vascular margins [41], is the most common early finding of COVID-19 on chest CT. Besides GGO, bilateral patchy shadowing is one of the most common radiologic findings on chest CT [42]. In another study of 51 COVID-19 patients, Song et al. [43] found that disease progression can be determined by lesions with consolidation. Multiple lesions and a crazy-paving pattern are also common in COVID-19 patients. However, chest CT diagnosis that depends on radiologists' visual assessment suffers from several problems [44]. For example, a chest CT contains hundreds of slices, which take a long time to read. Moreover, the chest CT images of some COVID-19 patients share similar manifestations with other types of pneumonia, which adds extra challenges for inexperienced radiologists, considering that COVID-19 is a new lung disease.

During the last year, AI and ML techniques have played a very important role in COVID-19 diagnosis in applications utilizing CT imaging. The aim of these techniques has always been to extract the distinguishing features of COVID-19 presented in the different types of images. In this section, we review and thoroughly discuss the major works that have addressed AI and ML in COVID-19 diagnosis using CT imagery. AI and deep learning methods have shown great ability to address the aforementioned problems by detecting this disease and distinguishing it from community-acquired pneumonia (CAP) and other non-pneumonic lung diseases using chest CT. We explore important studies performed by academic and research communities from numerous disciplines that focus on detecting, quantifying, and tracking Coronavirus, and we study how they differentiate Coronavirus patients from those who do not have the disease.

Li et al. [45] developed a 3D deep learning framework for the detection of COVID-19, referred to as COVID-19 detection neural network (COVNet). The proposed CNN uses ResNet-50 as the backbone, which takes a series of CT slices as input and generates features for the corresponding slices. In more detail, it extracts visual features from volumetric chest CT scans in both 2D local and 3D global representations. The extracted features from all slices are then combined by a max-pooling operation. CAP and other non-pneumonia CT scans were included to test the robustness of the proposed model. The final feature map is fed to a fully connected layer and a softmax activation function to generate a probability score for each type (COVID-19, CAP, and non-pneumonia) and produce a classification prediction. The CT scans were performed on scanners from different manufacturers with standard imaging protocols. Each volumetric scan contains up to 1094 CT slices, with slice thickness varying from 0.5 mm to 3 mm. The reconstruction matrix is 512 × 512 pixels, with in-plane pixel spatial resolution from 0.29 × 0.29 mm² to 0.98 × 0.98 mm². The CT scans are preprocessed and the lung region is extracted as the region of interest (ROI) using a U-Net based segmentation method. Then, the image is passed to COVNet for prediction, as shown in **Figure 1**.
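The slice-level feature extraction followed by max-pooling aggregation can be sketched in a few lines. The following is a minimal PyTorch illustration of the COVNet idea, not the authors' implementation; the class name, input shape, and the use of torchvision's ImageNet-pretrained ResNet-50 weights are our assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class CovNetSketch(nn.Module):
    """Per-slice ResNet-50 features, max-pooled across slices (COVNet-style)."""
    def __init__(self, num_classes=3):
        super().__init__()
        backbone = resnet50(weights="IMAGENET1K_V1")
        # Drop the ImageNet classification head; keep the 2048-d feature extractor.
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])
        self.fc = nn.Linear(2048, num_classes)

    def forward(self, volume):
        # volume: (batch, slices, 3, H, W) -- each CT slice replicated to 3 channels
        b, s, c, h, w = volume.shape
        feats = self.encoder(volume.view(b * s, c, h, w)).view(b, s, -1)
        pooled, _ = feats.max(dim=1)   # combine slice features by max-pooling
        return self.fc(pooled)         # logits for COVID-19 / CAP / non-pneumonia

logits = CovNetSketch()(torch.randn(1, 40, 3, 224, 224))
probs = torch.softmax(logits, dim=1)
```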

The authors tested the system on datasets collected from six hospitals between August 2016 and February 2020. The collected datasets consisted of 4356 chest CT scans from 3322 patients. Diagnostic performance was assessed by the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity. The COVID-19 cases were confirmed as positive by RT-PCR and were obtained from Dec 31, 2019 to Feb 17, 2020. The most common symptoms were fever (81%) and cough (66%). Moreover, the patients were 49±15 years old, and there were slightly more male patients than female (1838 vs. 1484). CT scans with multiple reconstruction kernels at the same imaging session or acquired at multiple time points were included. The final dataset consisted of 1296 (30%) scans for COVID-19, 1735 (40%) for CAP, and 1325 (30%) for non-pneumonia.

For each patient, one or multiple CT scans at several time points during the course of the disease were acquired (the average number of CT scans per patient was 1.8, ranging from 1 to 6). The per-scan sensitivity and specificity for detecting COVID-19 in the independent test set were 114 of 127 (90% [95% confidence interval: 83%, 94%]) and 294 of 307 (96% [95% confidence interval: 93%, 98%]), respectively, with an AUC of 0.96. The details of their tests are given in **Table 1**.
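These per-scan metrics are straightforward to reproduce from predictions and labels. Below is a small sketch using scikit-learn; the arrays are made-up toy data, not the study's results.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# Hypothetical per-scan labels (1 = COVID-19) and model probabilities.
y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0])
y_prob = np.array([0.92, 0.81, 0.10, 0.67, 0.35, 0.04, 0.88, 0.51])
y_pred = (y_prob >= 0.5).astype(int)   # 0.5 decision threshold (an assumption)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # e.g. 114/127 reported in [45]
specificity = tn / (tn + fp)   # e.g. 294/307 reported in [45]
auc = roc_auc_score(y_true, y_prob)
print(f"sens={sensitivity:.2f} spec={specificity:.2f} AUC={auc:.2f}")
```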

In another study, a weakly-supervised deep learning-based software system was developed by Zheng et al. [46] using 3D CT volumes to detect COVID-19. The authors searched unenhanced chest CT scans of patients with suspected COVID-19 from the picture archiving and communication system of the radiology department (Union Hospital, Tongji Medical College, Huazhong University of Science and Technology).

#### **Figure 1.**

*COVID-19 detection neural network (COVNet) architecture [45].*


#### **Table 1.**

*The performance of COVNet as per [45].*


A total of 540 patients (age, 42.5 ± 16.1 years; range, 3–81 years) were enrolled in the study, including 313 patients (age, 50.7 ± 14.7 years; range, 8–81 years) with clinically diagnosed COVID-19 (COVID-positive group) and 227 patients (age, 31.2 ± 10.0 years; range, 3–69 years) without COVID-19 (COVID-negative group). As shown in **Figure 2**, the system takes a CT volume and its 3D lung mask as input, where the 3D lung mask is generated by a pre-trained U-Net. The proposed system is divided into three stages. The first stage consists of a 3D convolution with a kernel size of 5 × 7 × 7, a batchnorm layer, and a pooling layer. The second stage is composed of two 3D residual blocks. In each residual block, a 3D feature map is fed into both a 3D convolution with a batchnorm layer and a shortcut connection containing a 3D convolution. The third stage is a progressive classifier, which contains three 3D convolution layers and a fully-connected layer with the softmax activation function. As described in **Figure 3**, a U-Net is trained for lung region segmentation on the labeled training set using ground-truth lung masks generated by an unsupervised learning method. The pre-trained U-Net is then applied to all CT volumes to obtain the lung masks. The lung mask is concatenated with the CT volume and serves as the input of the system. The authors used spatial and temporal global pooling layers to handle the weakly-supervised COVID-19 detection problem.
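The three-stage layout maps naturally onto a few 3D modules. The PyTorch sketch below mirrors that structure under stated assumptions: only the 5 × 7 × 7 stem kernel, the two residual blocks, and the three-convolution classifier come from the description above; channel widths, padding, and pooling sizes are ours.

```python
import torch
import torch.nn as nn

class Residual3D(nn.Module):
    """3D residual block: main 3D conv + batchnorm, plus a 3D-conv shortcut."""
    def __init__(self, ch):
        super().__init__()
        self.main = nn.Sequential(nn.Conv3d(ch, ch, 3, padding=1), nn.BatchNorm3d(ch))
        self.shortcut = nn.Conv3d(ch, ch, 1)
    def forward(self, x):
        return torch.relu(self.main(x) + self.shortcut(x))

class DeCoVNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Stage 1: 3D conv (kernel 5x7x7) + batchnorm + pooling.
        self.stem = nn.Sequential(
            nn.Conv3d(2, 16, kernel_size=(5, 7, 7), padding=(2, 3, 3)),  # CT + lung mask = 2 channels
            nn.BatchNorm3d(16), nn.ReLU(), nn.MaxPool3d(2))
        # Stage 2: two 3D residual blocks.
        self.blocks = nn.Sequential(Residual3D(16), Residual3D(16))
        # Stage 3: progressive classifier ending in a fully-connected softmax head.
        self.classifier = nn.Sequential(
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(32, 2))
    def forward(self, x):
        return self.classifier(self.blocks(self.stem(x)))  # logits: positive/negative

out = DeCoVNetSketch()(torch.randn(1, 2, 32, 64, 64))
```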

Furthermore, Gozes et al. [47] presented a system that exploits 2D and 3D deep learning models. **Figure 4** shows a block diagram of the developed system. The system comprises several components and analyzes the CT case at two distinct levels: *Subsystem A and Subsystem B.* Subsystem A provides a 3D analysis of the case volume for nodules and focal opacities using existing, previously developed algorithms, while Subsystem B provides a newly developed 2D analysis of each slice of the case to detect and localize larger-sized diffuse opacities, including ground-glass infiltrates, which have been clinically described as representative of Coronavirus.

**Figure 2.**

*The architecture proposed in [46]. The network takes a CT volume with its 3D lung mask as the input and outputs the probabilities of COVID-19 positive/negative.*

**Figure 3.** *Training and testing procedures [46].*

**Figure 4.** *System block diagram [47].*

As argued by the authors, working in the 2D space has several advantages for deep learning-based algorithms in limited-data scenarios. These include an increase in the number of training samples (with many slices per single case), the ability to use pre-trained networks that are common in 2D space, and easier annotation for segmentation purposes.

For Subsystem A, the authors used commercial off-the-shelf software that detects nodules and small opacities within a 3D lung volume. This software was developed as a solution for lung pathology detection and provides quantitative measurements (including volumetric measurements, axial measurements, calcification detection, and texture characterization). For Subsystem B, the first step is the *lung crop stage*, where the lung region of interest is extracted using a lung segmentation module. In the following step, Coronavirus-related abnormalities are detected using ResNet-50, a 2D CNN architecture that consists of 50 layers. In the classification stage, the authors calculated the ratio of positive detected slices out of the total slices of the lung (*positive ratio*). A positive case-decision is made if the positive ratio exceeds a pre-defined threshold. The system was tested on 157 patients from China and the U.S. The sensitivity and the specificity of the system were 98.2% and 92.2%, respectively. **Figure 5** shows a patient case visualization.
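The case-level decision rule is simple enough to state in a few lines. The sketch below illustrates it with arbitrary thresholds; the paper's actual per-slice classifier outputs and threshold values are not public, so both are assumptions.

```python
import numpy as np

def case_decision(slice_probs, slice_threshold=0.5, positive_ratio_threshold=0.3):
    """Case-level decision from per-slice Coronavirus probabilities (Subsystem B style).

    Both thresholds are illustrative assumptions, not the values used in [47].
    """
    positive_slices = np.asarray(slice_probs) >= slice_threshold
    positive_ratio = positive_slices.mean()   # positive slices / total lung slices
    return positive_ratio, positive_ratio >= positive_ratio_threshold

ratio, is_positive = case_decision([0.9, 0.8, 0.2, 0.1, 0.7, 0.6])
print(f"positive ratio = {ratio:.2f}, case positive = {is_positive}")
```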

The authors also proposed a *Corona score*, which is a volumetric measurement of the opacity burden. The Corona score is computed by a volumetric summation of the network-activation maps. The system output enables quantitative measurements for smaller opacities (volume, diameter) and visualization of the larger opacities in a slice-based "heat map" or a 3D volume display. The authors claim that the score is robust to slice thickness and pixel spacing as it incorporates pixel volume. For patient-specific monitoring of disease progression, they suggested the *Relative Corona score*, in which the Corona score is normalized by the score computed at the first time point. The suggested Corona score thus measures the progression of disease over time. An example of such an implementation is shown in **Figure 6**, which demonstrates tracking over time of a specific opacity in a Coronavirus patient (red box). In this example, a patient was imaged at multiple time points, where the first CT scan was obtained a few days after the first signs of the virus (fever, cough). This case involves multiple opacities and shows an overview of the patient's recovery process with its corresponding Corona score over time.
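A minimal sketch of such a score follows, assuming per-pixel activation maps and known voxel geometry; the exact summation and normalization in [47] are not published, so this is only an illustration of the idea.

```python
import numpy as np

def corona_score(activation_maps, pixel_spacing_mm, slice_thickness_mm):
    """Volumetric summation of network-activation maps, weighted by voxel volume.

    activation_maps: (slices, H, W) array of per-pixel opacity activations in [0, 1].
    """
    voxel_volume_mm3 = pixel_spacing_mm[0] * pixel_spacing_mm[1] * slice_thickness_mm
    return activation_maps.sum() * voxel_volume_mm3

def relative_corona_score(score_now, score_first_timepoint):
    """Normalize by the first time point for patient-specific monitoring."""
    return score_now / score_first_timepoint

# Toy activation maps: mostly zero, with a few sparse activations.
maps = np.random.rand(40, 512, 512) * (np.random.rand(40, 512, 512) > 0.95)
print(corona_score(maps, pixel_spacing_mm=(0.7, 0.7), slice_thickness_mm=1.5))
```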

Barstugan et al. [48] presented a classification system consisting of five different feature extraction methods followed by a support vector machine (SVM). The feature extraction methods were Gray-Level Co-occurrence Matrix (GLCM), Local Directional Patterns (LDP), Gray-Level Run Length Matrix (GLRLM), Gray-Level Size Zone Matrix (GLSZM), and Discrete Wavelet Transform (DWT).

#### **Figure 5.**

*Patient case visualization. Left: Coronal view; right: Automatically generated 3D volume map of focal opacities (green) and larger diffuse opacities (red) [47].*

**Figure 6.** *Multi time point tracking of disease progression [47].*

To test the proposed system, four different datasets were formed by taking patches of size 16 × 16, 32 × 32, 48 × 48, and 64 × 64 from 150 CT images belonging to 53 infected cases, obtained from the Società Italiana di Radiologia Medica e Interventistica. The samples of the datasets were labeled as Coronavirus/non-Coronavirus (infected/non-infected). **Table 2** shows the four different subsets created from patch regions. The authors implemented 2-fold, 5-fold, and 10-fold cross-validation during the classification process. Sensitivity, specificity, accuracy, precision, and F-score metrics were used to evaluate the classification performance. **Figure 7** shows patch regions and patch samples from the four different subsets.
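The texture-feature-plus-SVM pipeline can be sketched compactly. The snippet below shows only the GLCM branch with k-fold cross-validation on synthetic stand-in patches; the chosen GLCM properties, distances, and angles are our assumptions, and the other four feature extractors of [48] would plug in analogously.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.model_selection import cross_validate
from sklearn.svm import SVC

def glcm_features(patch):
    """GLCM texture features for one grayscale patch (one of the five
    extraction methods in [48]; LDP, GLRLM, GLSZM and DWT are analogous)."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, p).ravel()
                      for p in ("contrast", "homogeneity", "energy", "correlation")])

# Hypothetical stand-in data: 32x32 patches labeled infected / non-infected.
patches = np.random.randint(0, 256, size=(100, 32, 32), dtype=np.uint8)
labels = np.random.randint(0, 2, size=100)
X = np.array([glcm_features(p) for p in patches])

for k in (2, 5, 10):  # the 2-, 5- and 10-fold cross-validations used in [48]
    scores = cross_validate(SVC(kernel="rbf"), X, labels, cv=k, scoring="accuracy")
    print(k, scores["test_score"].mean())
```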

Caruso et al. [49] investigated chest CT features of patients with COVID-19 in Rome, Italy, and compared the diagnostic performance of CT with that of RT-PCR. All chest CT examinations were performed with patients in the supine position on a 128-slice CT scanner. Radiologists with thoracic imaging experience, reading in consensus, evaluated the images using a clinically available dedicated application (Thoracic VCAR, GE Medical Systems), defining patients as having positive CT findings when a diagnosis of viral pneumonia was reported. The study comprised 158 participants; fever was observed in 97 (61%), while cough and dyspnea were observed in 88 (56%) and 52 (33%), respectively. Of these patients, 62 (39%) had positive RT-PCR results and 102 (64%) had positive CT findings. Sensitivity, specificity, and accuracy of CT for COVID-19 pneumonia were 97% (60 of 62 participants), 56% (54 of 96 participants), and 72% (114 of 158 participants), respectively.

**Table 3** details the CT features in participants with COVID-19 infection confirmed with RT-PCR, as reported in [49]. The results presented in [49] agree with the study performed by Salehi et al. [50] of 919 patients, despite some differences.


#### **Table 2.**

*Four different subsets created from patch regions [48].*

#### **Figure 7.**

*Patch regions and patch samples from the four different subsets. Sample images for infected and non-infected situations for all subsets are shown as well [48].*

However, the population in [50] differs from the population examined in [49]. Also, Chung et al. [51] analyzed a small population consisting of 21 patients and found a very low frequency of crazy-paving pattern compared with [49] (19% vs. 39%).

Furthermore, Xu et al. [52] established a model to distinguish COVID-19 from influenza-A viral pneumonia (IAVP) and healthy cases using pulmonary CT images. The authors discussed that RT-PCR detection of viral RNA from sputum or nasopharyngeal swabs has a relatively low positive rate in the early stage. They argued that the manifestations of COVID-19 as seen through CT imaging show individual characteristics that differ from those of other types of viral pneumonia such as IAVP. The suggested model consists of multiple CNNs, where the candidate infection regions are segmented out from the pulmonary CT image set. These separated images are then categorized into the COVID-19, IAVP, and irrelevant-to-infection groups, together with the corresponding confidence scores, using a location-attention classification model. Finally, the infection type and overall confidence score for each CT case are calculated using the Noisy-OR Bayesian function.

**Figure 8** shows the whole process. As described in the figure, the CT images are first preprocessed to extract the effective pulmonary regions. Then, a 3D CNN segmentation model is used to segment multiple candidate image cubes. After that, an image classification model is used to classify all the image patches into three kinds: COVID-19, IAVP, and irrelevant to infection.



#### **Table 3.**

*CT features in participants with COVID-19 infection confirmed with RT-PCR [49].*

#### **Figure 8.**

*The process flow chart of [52].*

Image patches from the same group "vote" for the type and confidence score of their candidate region as a whole. Finally, the Noisy-OR Bayesian function is used to calculate the overall analysis report for one CT sample. It is worth mentioning that the model uses a V-Net as the backbone feature extraction part. The authors further discussed how the variable 3D structures of the lesion regions can degrade the results. For example, when the border between a healthy region and an infected one becomes blurred and indistinct, it is difficult to label pixel-level masks for lesion regions of pneumonia. As such, the model uses the RPN structure [52] to capture the region of interest with 3D bounding boxes instead of pixel-level segmented masks.
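The Noisy-OR aggregation itself is a one-liner: assuming the per-candidate confidences are independent, the case-level confidence is one minus the product of the per-candidate complements. A sketch with made-up scores:

```python
import numpy as np

def noisy_or(confidences):
    """Noisy-OR aggregation: the case is positive unless every candidate
    region independently fails to indicate infection."""
    confidences = np.asarray(confidences, dtype=float)
    return 1.0 - np.prod(1.0 - confidences)

# Hypothetical per-candidate COVID-19 confidence scores from the
# location-attention classifier for one CT case.
region_scores = [0.62, 0.35, 0.80]
print(f"overall COVID-19 confidence: {noisy_or(region_scores):.3f}")  # 0.951
```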

To evaluate the system, two classification models were used, as shown in **Figure 9**. The first was the ResNet model; the second was designed based on the first network structure by concatenating the location-attention mechanism into the fully connected layer to improve the overall accuracy rate. This mechanism was added to the first fully connected layer to enhance the influence of the location factor on the whole network. The output of the convolution layer was flattened to a 256-dimensional feature vector and then converted into a 16-dimensional feature vector using a fully connected network. The overall accuracy rate was 86.7% in terms of all the CT cases taken together.

Belfiore et al. [53] described their experience with Thoracic VCAR, a useful tool for radiologists in COVID-19 diagnosis. Thoracic VCAR offers quantitative measurements of lung involvement. Further, it can generate a clear, fast, and concise report that communicates vital medical information to referring physicians. In the post-processing phase, the software can recognize ground glass, differentiate it from consolidation, and quantify both as a percentage with respect to the healthy parenchyma. This information is useful for evaluating disease regression or progression in response to drug therapy, as well as for evaluating the effectiveness of pronation maneuvers for alveolar recruitment in ICU patients. The authors in [53] discussed the importance of such a high-resolution CT (HRCT) technique in investigating patients with suspected COVID-19 pneumonia. They argued that HRCT is a very accurate technique for identifying pathognomonic findings of interstitial pneumonia, such as ground-glass areas, crazy paving, nodules, and consolidations, whether mono- or bilateral, patchy or multifocal, with central and/or peripheral distribution, declivous or non-declivous. As per the discussion, during follow-up, HRCT examination can quantify the course of the disease and evaluate the effectiveness of the experimental trial and the patient's prognosis.

In [54], Mei et al. also used AI algorithms to integrate chest CT findings with clinical symptoms, exposure history, and laboratory testing to rapidly diagnose patients who are positive for COVID-19. Among a total of 905 patients tested by real-time RT-PCR, 419 (46.3%) tested positive for SARS-CoV-2.

**Figure 9.** *The network structure of ResNet-18-based classification model [52].*


In this study, the dataset included patients aged from 1 to 91 years (with a mean of 40.7 years and a standard deviation of 6.5 years), of whom 488 were men and 417 were women. All scans were acquired using a standard chest CT protocol, reconstructed using multiple kernels, and displayed with a lung window. Clinical information included travel and exposure history, leukocyte counts (including absolute neutrophil number, percentage neutrophils, absolute lymphocyte number, and percentage lymphocytes), symptomatology (presence of fever, cough, and sputum), patient age, and patient sex. More specifically, the authors developed a CNN to learn the imaging characteristics of patients on the initial CT scan. They used multilayer perceptron classifiers to classify patients with COVID-19 according to the radiological data and clinical information.
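This joint idea, an imaging score fused with tabular clinical variables through a small neural classifier, can be sketched as follows. Everything here (feature set, layer sizes, synthetic data) is an illustrative assumption; only the combination of a CNN-derived CT probability with clinical features follows [54].

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in cohort; none of these values come from the study.
rng = np.random.default_rng(0)
n = 200
cnn_ct_prob = rng.random(n)                    # imaging score from the CNN
age = rng.integers(1, 91, n)
fever, cough, exposure = rng.integers(0, 2, (3, n))
wbc = rng.normal(7.0, 2.0, n)                  # white blood cell count (x10^9/L)

X = np.column_stack([cnn_ct_prob, age, fever, cough, exposure, wbc])
y = (cnn_ct_prob + 0.3 * exposure + rng.normal(0, 0.2, n) > 0.8).astype(int)

# Multilayer perceptron over imaging + clinical features (layer sizes assumed).
joint_model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
joint_model.fit(X, y)
print(joint_model.predict_proba(X[:3]))        # per-patient SARS-CoV-2 probabilities
```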

Of the 134 positive cases in the test set, 90 were correctly categorized by both the joint model and the senior thoracic radiologist, and 33 were classified differently. Of those 33 patients, 23 were correctly classified as positive by the joint model but were misclassified by the senior thoracic radiologist, while ten were classified as negative by the joint model but correctly diagnosed by the senior thoracic radiologist. Eleven patients were misclassified by both the joint model and the senior thoracic radiologist. Of the 145 patients negative for COVID-19 in the test set, 113 were correctly classified by both the joint model and the senior thoracic radiologist. The remaining 32 were not correctly classified by both: seven were correctly classified as negative by the joint model but diagnosed as positive by the senior thoracic radiologist, 23 were classified as positive by the joint model but correctly diagnosed as negative by the senior thoracic radiologist, and two were misclassified by both. As discussed in [54], the patient's age, exposure to SARS-CoV-2, presence of fever, cough, cough with sputum, and white blood cell counts are significant features associated with SARS-CoV-2 status. However, it should be pointed out that difficulties in model training were encountered due to the limited sample size.

Moreover, Fei et al. [55] developed a deep learning-based system for automatic segmentation of the lung and infection sites using chest CT. Likewise, Xiaowei et al. [56] distinguished COVID-19 pneumonia and influenza-A viral pneumonia from healthy cases. Further, Shuai et al. [57] developed a system to extract graphical features in order to provide a clinical diagnosis before pathogenic testing and thus save critical time. Also, Zheng et al. [58] developed a model for automatic detection using 3D CT volumes. Bai et al. [59] established and evaluated an AI system for differentiating COVID-19 and other pneumonia on chest CT and assessed radiologist performance. As they discussed, distinguishing COVID-19 from normal lung or other lung diseases, such as cancer, on chest CT may be straightforward. However, a major difficulty in controlling the current pandemic is discerning the subtle radiologic differences between COVID-19 and pneumonia of other origins. A total of 521 patients with positive RT-PCR results for COVID-19 and abnormal chest CT findings were retrospectively identified from 10 hospitals. A total of 665 patients with non-COVID-19 pneumonia and definite evidence of pneumonia on chest CT were retrospectively selected from three hospitals.

Further, the authors performed data augmentation dynamically during training, including flips, scaling, rotations, random brightness and contrast manipulations, random noise, and blurring. Training was performed for 20 epochs, where each epoch was defined as 16000 slices. A classification model was trained to distinguish between slices with and without pneumonia-like findings (both COVID-19 and non-COVID-19). In more technical detail, the EfficientNet B4 architecture was used for the pneumonia classification task. Each slice was stacked to three channels as input to EfficientNet, which used weights pretrained on ImageNet. EfficientNets with dense top fully connected layers were used: four fully connected layers of 256, 128, 64, and 32 neurons, respectively, followed by a fully connected layer with 16 neurons with batch normalization and a classification layer with sigmoid activation added at the end of EfficientNet. The slices were then pooled using a two-layer fully connected neural network to make predictions at the patient level. **Figure 10** shows the proposed classification neural network model, while **Figure 11** demonstrates the model's flowchart.
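A PyTorch sketch of the slice-level model follows the dense-head sizes quoted above. The 1792-dimensional EfficientNet-B4 feature width is standard, but the patient-level pooling shown here (mean and max of slice scores into a small two-layer head) is a simplification we assume for illustration, not the paper's exact pooling.

```python
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b4

class SliceClassifier(nn.Module):
    """EfficientNet-B4 slice model with the dense head described in [59]."""
    def __init__(self):
        super().__init__()
        base = efficientnet_b4(weights="IMAGENET1K_V1")
        base.classifier = nn.Identity()            # keep the 1792-d features
        self.base = base
        self.head = nn.Sequential(
            nn.Linear(1792, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 16), nn.BatchNorm1d(16), nn.ReLU(),
            nn.Linear(16, 1), nn.Sigmoid())        # per-slice probability
    def forward(self, slices):                     # slices: (n_slices, 3, H, W)
        return self.head(self.base(slices))

# Patient-level pooling: a small two-layer network over pooled slice scores
# (an assumed simplification of the paper's patient-level aggregation).
patient_head = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())
slice_probs = SliceClassifier()(torch.randn(5, 3, 380, 380))
pooled = torch.stack([slice_probs.mean(), slice_probs.max()]).unsqueeze(0)
print(patient_head(pooled))                        # patient-level COVID-19 probability
```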

Kumar et al. [60] proposed a framework that collects a large amount of data from various hospitals and trains a deep learning model over a decentralized network using the most recent information related to COVID-19 patients, based on CT slices. The authors suggested integrating blockchain and federated-learning technology, which allows the collection of data from different hospitals without data leakage; a step that adds the necessary privacy to the model. They employed Google's Inception V3 network for feature extraction and tested various learning models (VGG, DenseNet, AlexNet, MobileNet, ResNet, and Capsule Network) in order to recognize the patterns in lung screenings.

**Figure 10.** *Classification neural network model proposed by [59].*

#### **Figure 11.**

*The flowchart showing the AI model used to distinguish COVID-19 from non–COVID-19 pneumonia. (PR AUC = precision recall area under curve, ROC AUC = receiver operator characteristics area under the curve) [59].*


They found that the Capsule Network achieved the best performance compared to the other learning models. **Figure 12** shows the model suggested in [60].

The Capsule Network contains four layers: i) a convolutional layer, ii) a hidden layer, iii) a PrimaryCaps layer, and iv) a DigitCaps layer. Capsules are built from the input features of the lower layer, and each layer of the Capsule Network contains many capsules. A capsule acts as a group of neurons whose activation vector represents the instantiation parameters of an entity; during training, the length of each capsule's vector is computed to re-score the corresponding feature part. Capsule networks tend to describe an image at a component level and associate a vector with each component, where the probability of the existence of a component is represented by the vector's length.

In federated learning, the hospitals keep their data private and share only the weights and gradients, while blockchain technology is used to distribute the data securely among the hospitals. Federated learning was proposed by McMahan et al. [61] to learn a shared model while protecting data privacy. In this context, federated learning is used to secure the data and aggregate the parameters from multiple organizations. As argued by the authors, since the volume of data is large, placing it directly on the blockchain, with its limited storage space, would be very expensive and resource-intensive; as such, special data handling is needed, and each hospital stores a transaction in the block to verify ownership. The transaction includes the data type and size. It is noteworthy that federated learning does not affect accuracy; it adds privacy to the data sharing. Some selected 3D samples from the dataset are shown in **Figure 13**. The authors claimed that the system's sensitivity is 0.96 and its precision is 0.83; however, its specificity was not very attractive.
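The aggregation step at the heart of this setup is federated averaging (FedAvg) [61]: each hospital trains locally and ships only its weights, and the coordinator returns their sample-weighted average. A minimal sketch, in which the per-hospital sample counts and weight shapes are invented for illustration:

```python
import numpy as np

def federated_average(local_weights, sample_counts):
    """FedAvg [61]: hospitals share only model weights; the server returns the
    sample-weighted average. Raw data never leave the hospital."""
    total = sum(sample_counts)
    return [
        sum(w[i] * (n / total) for w, n in zip(local_weights, sample_counts))
        for i in range(len(local_weights[0]))
    ]

# Hypothetical round: three hospitals, each with one weight matrix + bias.
hospitals = [[np.random.randn(4, 2), np.random.randn(2)] for _ in range(3)]
global_model = federated_average(hospitals, sample_counts=[120, 300, 80])
print(global_model[0].shape, global_model[1].shape)   # (4, 2) (2,)
```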

A simple 2D deep learning framework was developed in [62] to diagnose COVID-19 pneumonia from a single chest CT image using transfer learning. For training and testing, the authors collected 3993 chest CT images of patients with COVID-19 pneumonia, other pneumonia, and non-pneumonia diseases. These CT images were split into a training set and a testing set at a ratio of 8:2. After a simple preprocessing stage, three channels (256 × 256 × 3 pixels) were arranged in the input layer and fed into the pretrained model layers. In the pretrained model layers, the authors included one of four models (VGG16, ResNet-50, Inception-v3, and Xception). Each model comprises two parts: a convolutional base and a classifier. The convolutional base is composed of a stack of convolutional and pooling layers that generate features from the images, while the role of the classifier is to categorize the image based on the extracted features. The activations from the pretrained model layers were fed into the additional layers, where they were first flattened and connected to two fully connected layers: one of 32 nodes and the other of three nodes. Subsequently, the activations from the second fully connected layer were fed into a softmax layer, which provided the probability for each of the three classes (COVID-19, other pneumonia, and non-pneumonia).
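In Keras terms, this pretrained-base-plus-small-head design looks roughly like the sketch below, shown here with VGG16; freezing the base and the choice of optimizer are our assumptions, not details from [62].

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Pretrained convolutional base (one of the four candidates in [62]).
base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(256, 256, 3))
base.trainable = False                       # reuse ImageNet features as-is (assumed)

model = models.Sequential([
    base,                                    # convolutional base: feature extraction
    layers.Flatten(),                        # additional layers: flatten ...
    layers.Dense(32, activation="relu"),     # ... 32-node fully connected layer ...
    layers.Dense(3, activation="softmax"),   # ... 3 classes: COVID-19 / other / none
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```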

**Figure 12.** *COVID-19 model suggested by [60].*

**Figure 13.** *Selected samples from [60].*

However, the study has several limitations. First, the testing dataset was obtained from the same sources as the training dataset, which may raise issues of generalizability and overfitting of the models. Indeed, the authors mentioned that the detection accuracy decreased when datasets from other published papers were used.

Song et al. [63] first extracted the main regions of the lungs and filled the blanks in the lung segmentation with the lung itself to avoid noise caused by different lung contours. Then, they extracted the top-K details in the CT images and obtained image-level predictions. Finally, the image-level predictions were combined to attain patient-level diagnoses. On the testing set, the model achieved an AUC of 0.95 and a sensitivity of 0.96. In [64], Jin et al. built a method to accelerate diagnosis. The model was trained using only 312 images, yet it achieved performance comparable to that of experienced radiologists. On 1255 independent testing cases, the proposed deep-learning model achieved an accuracy of 94.98%, an AUC of 97.91%, a sensitivity of 94.06%, and a specificity of 95.47%.

Zheng et al. [65] used a U-Net to segment the lung area automatically, and then used a 3D ResNet for classification. As they discussed, infectious areas can be distributed across many locations in the lungs, and automatic infectious-area detection may not guarantee very high precision; consequently, using the whole lung for classification is more convenient in practice. In [66], 3506 patients (468 with COVID-19, 1551 with CAP, and 1303 with non-pneumonia) were used to train and test another deep-learning model. The authors first used a U-Net to extract the whole lung region as an ROI. Afterwards, a 2D ResNet-50 was used for classifying COVID-19. Since each CT scan includes multiple 2D image slices, the features in the last layer of ResNet-50 were max-pooled and combined for prediction. The model achieved an AUC of 0.96 in classifying COVID-19 from CAP and other pneumonia. Moreover, Shi et al. [67] included 1658 patients with COVID-19 and 1027 patients with CAP for classification. They first used VB-Net to segment the infected areas, the bilateral lungs, 5 lung lobes, and 18 pulmonary areas. Then, hand-crafted features such as location-specific features, infection size, and radiomic features were extracted, and the least absolute shrinkage and selection operator (LASSO) was used for feature selection. The method reached a sensitivity of 0.9, a specificity of 0.8, and an accuracy of 0.88.
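The LASSO step in [67] is a standard sparse selection over the hand-crafted feature matrix: features whose coefficients shrink to zero are dropped. The snippet below sketches the idea on synthetic data with scikit-learn; the paper's exact setup (e.g., the regularization path and any logistic variant) is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in: 50 hand-crafted features (location, infection size,
# radiomics) per patient, of which only two are actually informative.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 50))
y = X[:, 3] - 0.5 * X[:, 17] + rng.normal(scale=0.1, size=200)

X_std = StandardScaler().fit_transform(X)    # LASSO is scale-sensitive
lasso = LassoCV(cv=5).fit(X_std, y)          # cross-validated penalty strength
selected = np.flatnonzero(lasso.coef_)       # features with non-zero coefficients
print("selected feature indices:", selected) # ideally {3, 17}
```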

Further, Dong et al. [68] reviewed the use of various imaging characteristics and computing models that have been applied in the management of COVID-19. Specifically, they quantitatively analyzed the use of imaging data for detection and treatment by means of CT, positron emission tomography-CT (PET/CT), lung ultrasound, and magnetic resonance imaging (MRI).


PET is a sensitive but invasive imaging method that plays an important role in evaluating inflammatory and infectious pulmonary diseases, monitoring disease progression and treatment effect, and improving patient management. It is worth mentioning that lung ultrasound is a non-invasive, radiation-free, and portable imaging method that allows for an initial bedside screening of low-risk patients, diagnosis of suspected cases in the emergency room setting, prognostic stratification, and monitoring of changes in pneumonia [69, 70].

Also, Jin et al. [71] presented their experience in building and deploying an AI system that analyzes CT images and detects COVID-19 pneumonia features. They obtained the image samples from five different hospitals with 11 different models of CT equipment to increase the model's generalization ability. The combined "segmentation-classification" model pipeline can highlight the lesion regions in addition to producing the screening result. The pipeline is divided into two stages: 3D segmentation and classification. It leverages a model library that contains different segmentation models, such as FCN-8s, U-Net, V-Net, and 3D U-Net++, as well as classification models such as the dual path network (DPN-92), Inception-v3, ResNet-50, and Attention ResNet-50. As for the training set, in addition to the positive cases, the authors assembled a set of negative images of inflammatory and neoplastic pulmonary diseases, such as lobar pneumonia, lobular pneumonia, and old lesions. Their aim was to enable the model to learn different COVID-19 features from various resources. Using 1136 training cases (723 positive for COVID-19), they were able to achieve a sensitivity of 0.974 and a specificity of 0.922 on the test set. Further, the system achieved an AUC of 0.991. According to the authors, the system is in use in 16 hospitals and has a daily capacity of over 1300 screenings. Similarly, Jin et al. [72] performed an extensive statistical analysis on CT images of patients diagnosed with COVID-19.

They evaluated the system on a large dataset with more than 10000 CT volumes from COVID-19, influenza-A/B, non-viral CAP, and non-pneumonia subjects. **Figure 14** shows the workflow of the suggested system. The system consists of five key parts: (1) a lung segmentation network, (2) a slice diagnosis network, (3) a COVID-19 infectious slice locating network, (4) a visualization module for interpreting the vital regions, and (5) an image phenotype analysis module for explaining the features. The CT volumes were divided into different cohorts. The authors claimed that the system achieved an AUC of 97.81% on a test set of 3199 scans.

Jin et al. [73] drafted a guideline according to the guideline methodology and general rules of the WHO in relation to CT imaging. This guideline covers the epidemiological characteristics, disease screening, diagnosis, treatment, and nosocomial infection prevention. In this regard, the authors discussed that the imaging findings vary with the patient's age, immunity status, disease stage at the time of scanning, underlying diseases, and drug interventions. The imaging features of lesions show: (1) dominant distribution (mainly subpleural, along the bronchial vascular bundles); (2) quantity (often three or more lesions, occasionally single or double lesions); (3) shape (patchy, large block, nodular, lumpy, honeycomb-like or grid-like, cord-like, etc.); (4) density (mostly uneven, a paving-stone-like change mixed with ground-glass density and interlobular septal thickening, consolidation and thickened bronchial walls, etc.); and (5) concomitant signs variations (air bronchogram, rare pleural effusion and mediastinal lymph node enlargement, etc.).

In addition, Chen et al. [74] constructed a deep learning-based system for detecting COVID-19 pneumonia from high-resolution CT. For model development and validation, 46096 anonymized images from 106 admitted patients, including 51 patients with laboratory-confirmed COVID-19 pneumonia and 55 control patients with other diseases, were retrospectively collected and processed at Renmin Hospital of Wuhan University.

**Figure 14.** *The workflow of the AI system suggested in [72].*

Twenty-seven consecutive patients who underwent CT scans were prospectively collected to evaluate and compare the efficiency of radiologists in diagnosing COVID-19 pneumonia with that of the model. The authors first filtered the images, selecting 35355 images that were split into training and testing datasets. In more detail, the authors implemented UNet++, a well-known architecture for medical image segmentation. They trained UNet++ to extract valid areas in CT images using 289 randomly selected CT images and tested it on another 600 randomly selected CT images. The training images were labeled with the smallest rectangle containing all valid areas. With the raw CT scan images taken as the input and the expert-labeled map as the output, UNet++ was trained in an image-to-image manner. The model successfully extracted the valid areas in the 600 images from the testing set with an accuracy of 100%. Based on the system's performance, the authors constructed a cloud-based platform to provide worldwide assistance for detecting COVID-19 pneumonia [75].

In [76], Vinod and Prabaharan elaborated a methodology that helps identify COVID-19-infected people among normal individuals using CT scans. The image diagnosis tool utilizes a decision tree classifier to find Coronavirus-infected persons, and its performance was analyzed in terms of precision, recall, and F1 score. Moreover, Gieraerts et al. [77] hypothesized that the use of semi-automated AI may allow for more accurate patient detection. They assessed COVID-19 patients who underwent chest CT using both conventional visual and AI-based quantification of lung injury. They also studied the impact of chest CT variability in determining the potential response to novel antiviral therapies. In their study, 250 consecutive patients with clinical suspicion of COVID-19 pneumonia were tested with both RT-PCR and CT within a 2-hour interval of hospital admission.


Epidemiological, demographic, clinical, and laboratory data at admission were obtained from the electronic patient management system.

In Zhang et al. [78], 4695 manually annotated CT slices were used to train segmentation into seven classes: background, lung field, consolidation, ground-glass opacity, pulmonary fibrosis, interstitial thickening, and pleural effusion. After a comparison between different semantic segmentation approaches, the authors selected DeepLabv3 as the segmentation backbone. The diagnostic system was based on a neural network fed with the lung-lesion maps. The results showed a COVID-19 diagnostic accuracy of 92.49% when tested on 260 subjects. In Bai et al. [79], a direct classification of COVID-19-specific pneumonia versus other etiologies was performed using an EfficientNet B5 network followed by a two-layer fully connected network to pool the information from multiple slices and provide a patient-level diagnosis. This system yielded 96% accuracy on a testing set of 119 subjects, compared to an average accuracy of 85% for six radiologists.

Also, Ying et al. [80] used 2D slices including lung regions segmented by OpenCV. Fifteen slices of complete lungs were derived from each 3D chest CT image, and each 2D slice was used as the input to the system. A pretrained ResNet-50 was used, and a Feature Pyramid Network (FPN) was added to extract the top-K details from each image. An attention module was coupled to learn the important details. Chest CT images from 88 patients with COVID-19, 101 patients with bacterial pneumonia, and 86 healthy persons were used. The model achieved an accuracy of 86% for pneumonia classification (COVID-19 or bacterial pneumonia) and an accuracy of 94% for pneumonia diagnosis (COVID-19 or healthy). Wang et al. [81] used 1065 chest CT scan images of COVID-19 patients to build a classifier using InceptionNet. They reported an accuracy of 89.5%, a specificity of 0.88, and a sensitivity of 0.87. In [82], different deep learning approaches (VGG16, InceptionResNetV2, ResNet50, VGG19, MobilenetV2, and NasNetMobile) were modified and tested on 400 CT scan images. The results showed that NasNetMobile outperformed all other models in terms of accuracy (81.5%–95.2%). On the other hand, Mucahid et al. [83] used classical feature extraction techniques for COVID-19 detection, implementing gray-level co-occurrence matrices (GLCM), local directional pattern (LDP), gray-level run length matrix (GLRLM), and discrete wavelet transform (DWT). They reported an accuracy of 99.68% in the best configuration settings.

Modegh et al. [84] proposed a system to distinguish healthy people, patients with COVID-19, and patients with other pneumonia diseases from axial lung CT-scan images. The general workflow for the proposed model is shown in **Figure 15**. The Ground Glass Opacity Axial (GGOA) CT-scan images are preprocessed and the lobes of the lungs are detected and extracted from the axial slices. The images of the left and right lobes of all the slices are then fed into two deep CNNs, one for calculating the probability of being diseased versus healthy, and the other for calculating the probability of diagnosis to be COVID-19 versus other diseases. In addition, the system detects the infected areas in the lung images. At the end, the probabilities assigned to the lobes are aggregated to make a final decision.

**Figure 16** shows the model used for calculating the probability of each slice lobe being infected. The model was evaluated on a dataset of 3359 samples from 6 different medical centers and achieved sensitivities of 97.8% and 98.2%, and specificities of 87% and 81%, in distinguishing normal cases from diseased ones and COVID-19 from other diseases, respectively. The authors in [85] examined the generalizability of deep learning models, given heterogeneous factors in training datasets such as patient demographics and pre-existing clinical conditions. The examination was done by evaluating classification models trained to identify COVID-19-positive patients on 3D CT datasets from different countries: UT Southwestern (UTSW), the CC-CCII dataset (China), COVID-CTset (Iran), and MosMedData (Russia).

#### **Figure 16.**

*The deep model used for calculating the probability of each slice lobe [84].*

The data were divided into two classes: COVID-19-positive and COVID-19-negative patients.


The models trained on a single dataset achieved accuracy/AUC values of 0.87/0.826 (UTSW), 0.97/0.988 (CC-CCII), and 0.86/0.873 (COVID-CTset) when evaluated on their own dataset.

In addition, Shah et al. [86] developed a deep learning network (CTnet-10) for COVID-19 classification. The model is fed an input image of size 128 × 128 × 3, which passes through two convolutional layers with output dimensions of 126 × 126 × 32 and 124 × 124 × 32, respectively. It then passes through a max-pooling layer of dimension 62 × 62 × 32, followed by two convolutional layers of dimensions 60 × 60 × 32 and 58 × 58 × 32, respectively. Next, it passes through a pooling layer of dimension 29 × 29 × 32, a flatten layer of 26912 neurons, and a dense layer of 256 neurons with dropout. Finally, a dense layer with a single neuron classifies the CT scan image as COVID-19 positive or negative. The system achieved an accuracy of 82.1%. The CTnet-10 model architecture is shown in **Figure 17**.
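Those layer dimensions pin the architecture down almost completely; a Keras sketch that reproduces them is below. The 3 × 3 valid-padding kernels are inferred from the quoted output shapes, while the dropout rate and activations are assumptions.

```python
from tensorflow.keras import layers, models

# A sketch of CTnet-10 following the layer dimensions quoted above.
ctnet10 = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(32, 3, activation="relu"),   # -> 126 x 126 x 32
    layers.Conv2D(32, 3, activation="relu"),   # -> 124 x 124 x 32
    layers.MaxPooling2D(2),                    # -> 62 x 62 x 32
    layers.Conv2D(32, 3, activation="relu"),   # -> 60 x 60 x 32
    layers.Conv2D(32, 3, activation="relu"),   # -> 58 x 58 x 32
    layers.MaxPooling2D(2),                    # -> 29 x 29 x 32
    layers.Flatten(),                          # -> 26912 neurons
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),                       # dropout rate is an assumption
    layers.Dense(1, activation="sigmoid"),     # COVID-19 positive vs. negative
])
ctnet10.summary()
```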

VB-Net, a deep learning network, was developed by Shan et al. [87] to quantify longitudinal changes in the follow-up CT scans of COVID-19 patients and to explore the quantitative lesion distribution. VB-Net is a modified 3D CNN that consists of two paths. The first is a contracting path, including down-sampling and convolution operations, to extract global image features; the second is an expansive path, including up-sampling and convolution operations, to integrate fine-grained image features. Compared with V-Net, VB-Net is much faster. The system not only performs auto-contouring of infection regions, but also accurately estimates their shapes, volumes, and percentage of infection (POI) in CT scans. In addition, it measures the severity of COVID-19 and the distribution of infection within the lung. The accurate segmentation provides the quantitative information necessary to track disease progression and analyze longitudinal changes of COVID-19 during the entire treatment period.

**Figure 17.** *CTnet-10 model architecture [86].*

**Figure 18.** *The human-in-the-loop workflow [87].*

After segmentation, various metrics are computed to quantify the infection, including the volume of infection in the whole lung and the volumes of infection in each lobe and each bronchopulmonary segment.
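Both the POI and the Dice similarity coefficient used to evaluate this system reduce to simple mask arithmetic. The sketch below computes them on toy binary volumes standing in for VB-Net outputs and manual delineations.

```python
import numpy as np

def dice_coefficient(pred_mask, gt_mask):
    """Dice similarity between automatic and manual segmentations."""
    intersection = np.logical_and(pred_mask, gt_mask).sum()
    return 2.0 * intersection / (pred_mask.sum() + gt_mask.sum())

def percentage_of_infection(infection_mask, lung_mask):
    """POI: infected volume as a percentage of the (whole-lung or per-lobe) volume."""
    return 100.0 * np.logical_and(infection_mask, lung_mask).sum() / lung_mask.sum()

# Toy volumes standing in for VB-Net outputs and manual delineations.
lung = np.zeros((16, 64, 64), bool); lung[:, 8:56, 8:56] = True
auto = np.zeros_like(lung); auto[4:9, 20:30, 20:30] = True
manual = np.zeros_like(lung); manual[4:10, 20:30, 20:30] = True

print(f"Dice = {dice_coefficient(auto, manual):.3f}")
print(f"POI  = {percentage_of_infection(auto, lung):.2f}%")
```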

The system was trained using 249 COVID-19 patients and validated using 300 new COVID-19 patients. To accelerate the manual delineation of CT images for training, a human-in-the-loop (HITL) strategy (shown in **Figure 18**) was adopted to assist radiologists in refining the automatic annotation of each case. To evaluate the performance of the system, the Dice similarity coefficient, the differences in volume, and the POI were calculated between the automatic and the manual segmentation results on the validation set.

#### **Figure 19.**

*The proposed pipeline for quantifying COVID-19 infection [87].*

#### **Figure 20.**

*Typical infection segmentation results of CT scans for three COVID-19 patients. Rows 1–3: Early, progressive and severe stages. Columns 1–3: CT image, CT images overlaid with segmentation, and 3D surface rendering of segmented infections [87].*


segmentation results on the validation set. The system yielded a Dice similarity coefficient of 91.6% and a mean POI estimation error of 0.3% for the whole lung on the validation dataset. Moreover, the proposed HITL strategy reduced the delineation time to 4 minutes after 3 iterations of model updating, compared to the cases of fully manual delineation that often take 1 to 5 hours. **Figure 19** shows the pipeline for quantifying COVID-19 infection, whereas **Figure 20** shows typical infection segmentation results of CT scans for three COVID-19 patients.
