**1. Introduction**

The global pandemic caused by Coronavirus disease 2019 (COVID-19) continues to affect more than 200 countries and territories, evoking significant apprehension within the international community [1, 2]. This crisis has caused profound human losses and economic damage, reshaping lives in many countries through the implementation of lockdown measures. To curb the dissemination of COVID-19 and reduce its associated mortality, early detection of the disease is critical. Effective and prompt screening and testing are pivotal for the proper management of individuals afflicted by COVID-19 [3, 4]. The evolution of advanced techniques has led to increasingly efficient screening technologies aimed at attenuating the transmission of COVID-19.

#### **1.1 COVID-19 screening**

The goal of a COVID-19 screening test is to identify potential cases of the disease in individuals who are asymptomatic. This proactive approach aims to mitigate the spread of COVID-19 by detecting infections early, allowing for timely and effective treatment. During the initial stages of the COVID-19 pandemic, the primary method for detecting viral infections was the Reverse Transcription-Polymerase Chain Reaction (RT-PCR) test [5]. However, the effectiveness of the RT-PCR assay has been questioned due to its limited sensitivity [6], which can be attributed to factors such as sample preparation and quality control issues [7]. Furthermore, the limited sensitivity of current nucleic acid tests necessitates repeated testing for a significant portion of suspected patients to achieve a reliable diagnosis. This underscores the need for a complementary tool capable of providing lung-imaging information. Such a tool would serve as an invaluable resource for medical professionals, aiding them in improving the accuracy of COVID-19 diagnoses.
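The impact of limited sensitivity on repeated testing can be quantified with a simple probability argument. Under the simplifying assumption that repeated tests on the same patient are independent, the probability that at least one of *n* tests detects a true infection is 1 − (1 − *s*)^*n* for a test of sensitivity *s*. The helper below is illustrative only (the independence assumption rarely holds exactly in practice):

```python
def detection_probability(sensitivity: float, n_tests: int) -> float:
    """Probability that at least one of n independent tests of the given
    sensitivity returns a true positive for an infected patient."""
    return 1.0 - (1.0 - sensitivity) ** n_tests

# e.g. a test with 70% sensitivity applied twice:
p2 = detection_probability(0.7, 2)
```

With 70% sensitivity, two independent tests raise the detection probability to 91%, which illustrates why repeated testing of suspected patients is often required.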

Chest X-rays and thoracic computed tomography (CT) scans are readily available imaging tools that offer significant support to medical practitioners in diagnosing lung-related ailments [8–10]. The application of artificial intelligence (AI) in enhancing image analysis of chest X-rays and thoracic CT data has garnered substantial interest, particularly in the development of effective COVID-19 screening techniques. AI, a burgeoning technology in the realm of medical imaging, has played a dynamic role in the battle against COVID-19 [11]. This is in contrast to traditional imaging processes that heavily rely on human interpretation, as AI offers imaging solutions that are safer, more accurate, and more efficient. Notably, the utilization of deep learning for data representation has exhibited remarkable success in image processing [12]. Convolutional neural networks (CNNs) [13–18] have effectively tackled the challenge of representing digital images, particularly on extensive datasets such as the ImageNet dataset [19]. These advances demonstrate the potential of deep learning in transforming biomedical image analysis.

The potency of deep learning methods, such as Convolutional Neural Networks (CNNs), has been prominently demonstrated in the classification of COVID-19 cases. Ghoshal et al. [20] introduce a Bayesian Convolutional Neural Network designed to estimate diagnostic uncertainty in COVID-19 predictions. This approach incorporates 70 lung X-ray images from COVID-19 patients sourced from an online COVID-19 dataset [21], as well as non-COVID-19 images from Kaggle's Chest X-Ray Images (Pneumonia), where Bayesian inference is employed to enhance detection accuracy. Narin et al. [10] focus on COVID-19 infection detection using X-ray images, employing a comparative analysis of three deep learning models: ResNet50, InceptionV3, and Inception-ResNetV2. The evaluation results indicate that the ResNet50 model surpasses the performance of the other two models. Zhang et al. [22] similarly harness the ResNet architecture for COVID-19 classification using X-ray images. An anomaly score is estimated to optimize the COVID-19 score, which in turn is used for classification. Wang et al. [23] introduce COVID-Net, a framework tailored for the detection of COVID-19 cases through X-ray images. Primarily, most ongoing studies employ X-ray images for discriminating between COVID-19 cases and other instances of pneumonia and healthy subjects. However, the limited quantity of available COVID-19 images raises concerns regarding the methods' robustness and generalizability, urging further investigation.

*Effective Screening and Face Mask Detection for COVID Spread Mitigation Using Deep… DOI: http://dx.doi.org/10.5772/intechopen.113176*
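The classifiers surveyed above share a common transfer-learning pattern: a pre-trained backbone (e.g. ResNet50) is frozen as a feature extractor, and only a small task-specific head is trained on the scarce COVID-19 data. The sketch below illustrates that pattern in numpy alone; the backbone features are stood in by synthetic two-class data, and the "head" is a logistic-regression layer trained by gradient descent. It is a conceptual illustration, not any of the cited authors' implementations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for pooled features from a frozen pre-trained backbone
# (real pipelines would use e.g. ResNet50's 2048-d output).
n, d = 200, 64
X = np.vstack([rng.normal(0.0, 1.0, (n // 2, d)),   # non-COVID-19 features
               rng.normal(1.0, 1.0, (n // 2, d))])  # COVID-19 features
y = np.array([0] * (n // 2) + [1] * (n // 2))

# Train only the logistic-regression "head"; the backbone stays frozen.
w, b, lr = np.zeros(d), 0.0, 0.1
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid probabilities
    g = p - y                               # cross-entropy gradient
    w -= lr * (X.T @ g) / n
    b -= lr * g.mean()

acc = ((1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5) == y).mean()
```

Training only the small head is what makes these approaches feasible with limited labeled COVID-19 images, since the backbone's millions of parameters were already fit on a large generic dataset such as ImageNet.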

Furthermore, it is of utmost importance to delineate the areas affected by COVID-19 infection, as this yields comprehensive insights crucial for accurate diagnosis. Semantic segmentation plays a pivotal role in identifying and quantifying COVID-19 by recognizing regions and associated patterns. This technique enables the assessment of regions of interest (ROIs) encompassing lung structures, lobes, bronchopulmonary segments, and infected regions or lesions within chest X-ray or CT images. Extracting handcrafted or learned features for diagnosis and other applications becomes feasible through the use of segmented regions. The advancement of deep learning has significantly propelled the evolution of semantic image segmentation. In the context of CT scans, the networks employed for COVID-19 include established models like U-Net [24–26], UNet++ [27], and VB-Net [28] to segment ROIs. Furthermore, segmentation approaches for COVID-19 can be categorized into two main groups: those oriented toward lung regions and those aimed at lung lesions. The former group focuses on distinguishing lung regions, encompassing entire lungs and lung lobes, from surrounding (background) regions in CT or X-ray images [29, 30]. For instance, Jin et al. [29] utilize UNet++ to detect the entire lung region. The latter group seeks to isolate lung lesions (or artifacts such as metal and motion) from lung regions [31, 32]. In addition to screening techniques, physical solutions for mitigating the spread of COVID-19 also hold efficacy.
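Segmentation quality for the lung and lesion ROIs discussed above is conventionally measured by region overlap, most often the Dice coefficient (the same quantity that the Dice loss used by V-Net optimizes). A minimal implementation for binary masks:

```python
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks:
    2|A∩B| / (|A| + |B|); eps guards against empty masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

A Dice score of 1 indicates a perfect match between the predicted and ground-truth regions; disjoint regions score 0.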

#### **1.2 Face mask detection**

An effective physical measure to counter the spread of COVID-19 is the utilization of face masks in public settings [33]. **Figure 1** illustrates the varying transmission risks between an infected individual and an uninfected person. When an infected person does not wear a mask, the risk of transmitting the virus to an uninfected person is substantially high, as depicted in the first row. This risk diminishes to a moderate level if either of them wears a face mask (depicted in the second row). The lowest risk of infection occurs when both individuals are wearing masks [34], as depicted in the third row. Thus, wearing face masks effectively reduces the spread of COVID-19. However, ensuring universal adherence to mask-wearing mandates poses challenges. AI-driven face mask detection [35] has emerged as a technique to identify compliance with face mask requirements and can serve as a reminder for those not wearing masks. Developing a face mask detection system from scratch presents challenges due to the scarcity of labeled images. As a result, deep transfer learning [36] offers a promising solution to this predicament. This technique involves adapting pre-trained models to the task of face mask detection. The complete process of constructing face mask detection models through deep transfer learning is delineated in **Figure 2**.

#### **Figure 1.**

*Different risks of transmission between an infected person (left column) and an uninfected person (right column).*

**Figure 2.**

*The flow of building a face mask detector.*

During the model training phase, a limited set of annotated images is loaded as training data. These images are then used to fine-tune pre-trained deep learning models, ultimately creating a robust face mask detector. In the testing phase, datasets with labeled ground truth are loaded. The face mask detector is applied to this data, and its performance is evaluated using predefined metrics. Subsequent sections delve into comprehensive details regarding COVID-19 screening methodologies and face mask detection on edge devices, all supported by illustrative case studies.
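The "predefined metrics" of the testing phase are typically derived from the confusion matrix of the detector's predictions against the labeled ground truth. A minimal sketch for a binary mask/no-mask classifier (labels and names here are illustrative, not tied to any particular dataset):

```python
import numpy as np

def evaluate(y_true, y_pred):
    """Confusion-matrix metrics for a binary classifier
    (label 1 = mask worn, label 0 = no mask)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }
```

Precision and recall matter here beyond raw accuracy: a reminder system with low precision nags compliant mask wearers, while low recall lets violations go unnoticed.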

The structure of this chapter is as follows: In Section 2, we provide an overview of previous research on AI-driven COVID-19 screening techniques and face mask detection. Moving to Section 3, we delve into specific case studies, presenting associated outcomes that highlight effective screening and face mask detection strategies. These endeavors leverage deep learning and edge devices to mitigate the spread of COVID-19. Finally, Section 4 encapsulates the conclusions and outlines avenues for future research.

### **2. Related work**

#### **2.1 COVID-19 screening** *via* **AI-enhanced image processing**

The advancement of AI-driven image processing has played a pivotal role in significantly advancing COVID-19 screening techniques, encompassing both COVID-19 classification and COVID-19 segmentation. Considerable attention has been directed toward the classification of COVID-19 cases versus non-COVID-19 cases, with a focus on employing deep learning models. These models aim to effectively differentiate between COVID-19 patients and non-COVID-19 subjects, wherein the latter group includes individuals with common pneumonia and those without pneumonia. The diagram depicted in **Figure 3** illustrates the process of COVID-19 classification on a chest X-ray image. The COVID-19 classifier receives a chest X-ray image as input and provides an output that classifies the image as either indicating COVID-19 or non-COVID-19 status.

#### **Figure 3.**

*A diagram of COVID-19 classification on chest X-ray images.*

Considerable research efforts have been directed toward COVID-19 classification. Chen et al. [27] pursued COVID-19 classification using segmented lesion patterns extracted *via* UNet++. Their dataset included diverse patient cases, such as COVID-19 patients, viral pneumonia patients, and non-pneumonia patients. Given the visual similarity between common pneumonias, particularly viral pneumonia, and COVID-19, distinguishing these conditions becomes crucial for effective clinical screening. To address this, a 2D CNN model was proposed, employing manually delineated region patches for classification between COVID-19 and typical viral pneumonia. Additionally, Wang et al. [37] combined segmentation information with a proposed 2D CNN model to classify COVID-19 cases by considering handcrafted features of relative infection distance from the lung's edge. Xu et al. [38] utilized candidate infection regions segmented by V-Net. They combined these region patches with handcrafted features representing the distance from the edge of the region to perform COVID-19 classification using a ResNet-18 model. Zheng et al. [24] employed U-Net for lung segmentation and utilized 3D CNNs to predict COVID-19 probabilities based on the segmented features. Their dataset consisted solely of chest CT images of COVID-19 and non-COVID-19 cases. Similarly, Jin et al. [29] introduced a UNet++ based segmentation model to identify lesions and a ResNet50-based classification model for diagnosis. Their larger dataset encompassed chest CT images of 1136 cases, including 723 COVID-19 positives and 413 COVID-19 negatives. In another work, Jin et al. [39] employed a 2D Deeplab v1 model for lung segmentation and a 2D ResNet152 model for slice-based identification of positive COVID-19 cases. In summary, ongoing efforts in COVID-19 classification primarily focus on learning from significant volumes of medical images. 
However, the application of these techniques is hindered by the considerable data requirements for building effective classifiers.
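A recurring step in the segment-then-classify pipelines above (e.g. Chen et al. [27], Xu et al. [38]) is extracting a region patch around each segmented lesion to feed the downstream classifier. A minimal sketch of that cropping step, assuming a binary lesion mask aligned with the image (illustrative only, not the cited authors' code):

```python
import numpy as np

def crop_region(image: np.ndarray, mask: np.ndarray, pad: int = 2) -> np.ndarray:
    """Crop the padded bounding box of a binary lesion mask from an image,
    yielding the region patch a classifier would receive."""
    ys, xs = np.nonzero(mask)
    y0, y1 = max(ys.min() - pad, 0), min(ys.max() + pad + 1, image.shape[0])
    x0, x1 = max(xs.min() - pad, 0), min(xs.max() + pad + 1, image.shape[1])
    return image[y0:y1, x0:x1]
```

In the cited works, such patches are then combined with handcrafted features (e.g. the lesion's relative distance from the lung edge) before classification.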

The segmentation of COVID-19 cases is achieved through image semantic segmentation techniques bolstered by deep learning models such as U-Net and V-Net [40, 41]. An illustrative instance of image semantic segmentation on a chest X-ray image is presented in **Figure 4**. This approach enables the precise identification and isolation of COVID-19-affected regions within medical images.

**Figure 4.** *An example of image semantic segmentation on a chest X-ray image.*

The U-Net architecture is a powerful tool for segmenting both lung regions and lung lesions, facilitating the construction of effective image segmentation models [31]. U-Net, developed using a fully convolutional network [42], features a distinctive U-shaped design encompassing two symmetric paths: an encoding path and a decoding path. The layers at the corresponding levels in these two paths are interconnected through shortcut connections, fostering the acquisition of improved visual semantics and intricate contextual details. Zhou et al. [43] introduced UNet++, which inserts a nested convolutional structure between the encoding and decoding paths, further enhancing segmentation performance. In a similar vein, Milletari et al. [44] developed the V-Net, employing residual blocks as fundamental convolutional units and optimizing the network using a Dice loss. Furthermore, Shan et al. [28] devised VB-Net, enhancing segmentation efficiency by incorporating convolutional blocks with bottleneck blocks. Numerous variations of the U-Net architecture and its derivatives have been explored, yielding promising segmentation outcomes in COVID-19 diagnosis [27]. Advanced attention mechanisms are incorporated to identify the most discriminant features within deep learning models. Oktay et al. [45] introduced the Attention U-Net, capable of capturing intricate structures in medical images, rendering it suitable for segmenting lesions and lung nodules in COVID-19 applications. The integration of COVID-19 classification and segmentation empowers the implementation of multifaceted screening techniques across different levels.
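The essence of U-Net's shortcut connections can be shown at the level of tensor shapes: features saved on the encoding path are concatenated channel-wise with the upsampled features on the decoding path at the same resolution. A toy numpy sketch of one encoder/decoder level (real U-Nets interleave learned convolutions at every step; this only traces the data flow):

```python
import numpy as np

def pool2(x):
    """2x2 max-pooling: the downsampling step of the encoding path."""
    h, w = x.shape[0] // 2, x.shape[1] // 2
    return x[:h * 2, :w * 2].reshape(h, 2, w, 2).max(axis=(1, 3))

def up2(x):
    """Nearest-neighbour 2x upsampling: the decoding path's counterpart."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

x = np.arange(16.0).reshape(4, 4)  # feature map at one encoder level
skip = x                           # saved for the shortcut connection
down = pool2(x)                    # encoder: 4x4 -> 2x2
up = up2(down)                     # decoder: 2x2 -> 4x4
fused = np.stack([up, skip])       # channel-wise concatenation, as in U-Net
```

The concatenated tensor gives the decoder access to both coarse semantics (from `up`) and fine spatial detail (from `skip`), which is precisely why U-Net recovers sharp lesion boundaries.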

#### **2.2 Face mask detection**

Face mask detection has been a subject of extensive research, and these efforts can be broadly categorized into two classes. The first approach treats it as an object detection task, where the goal is to localize the face mask area using bounding boxes. For instance, Jiang et al. [46] proposed a one-stage face mask detector that employs a pre-trained ResNet for transfer learning and a feature pyramid network to extract semantic information. They introduced a novel context attention mechanism to enhance the detection of mask features. Similarly, Loey et al. [47] presented an object detection process using a combination of ResNet and YOLO V2. Another approach views face mask detection as an image classification problem [35]. Researchers in this category employed various convolutional neural networks (CNNs) such as MobileNet [48], Inception V3 [49], VGG-16 [50], and ResNet [51]. MobileNet, designed for edge devices, operates at high speed due to its smaller model size and complexity. Inception V3 utilizes factorizing convolutions to maintain robustness while reducing connections. VGG-16 explores the impact of depth on accuracy in large-scale image classification tasks.
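MobileNet's suitability for edge devices comes largely from replacing standard convolutions with depthwise separable ones, which cuts parameter counts dramatically. The arithmetic can be checked directly (a back-of-the-envelope sketch; it counts weights only, ignoring biases and batch-norm parameters):

```python
def standard_conv_params(k: int, c_in: int, c_out: int) -> int:
    """Weights in a standard k x k convolution: k*k*c_in per output channel."""
    return k * k * c_in * c_out

def depthwise_separable_params(k: int, c_in: int, c_out: int) -> int:
    """Weights in a depthwise separable convolution:
    one k x k filter per input channel, then a 1x1 pointwise mixing layer."""
    return k * k * c_in + c_in * c_out

# A typical 3x3 layer mapping 32 -> 64 channels:
std = standard_conv_params(3, 32, 64)          # 18,432 weights
dws = depthwise_separable_params(3, 32, 64)    # 2,336 weights
```

For this layer the separable form needs roughly 8x fewer weights, which is why MobileNet-style classifiers fit comfortably in the memory and latency budgets of edge hardware.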

In addition to training models from scratch, some researchers use pre-trained face detectors to extract faces, and then apply mask detection classification models on the detected faces [52, 53]. For example, Lippert et al. utilize OpenCV's pre-trained face detector and a VGG-16-based classifier for face mask detection. The deployment of machine learning models on resource-constrained devices has gained popularity, as executing models locally is often preferable to sending data to the cloud due to issues such as limited bandwidth and privacy concerns.

In summary, face mask detection has been tackled through two main avenues: object detection using bounding boxes and image classification using various CNN architectures. These diverse approaches aim to enhance the accuracy and efficiency of detecting face mask presence and adherence.

