**9. Artificial intelligence (AI) for lesion detection**

Presently, existing CAD systems are positioned as complementary tools that prompt radiologists to further evaluate certain images that need attention. CAD has a limitation, however: it does not detect all potential lesions, and relying on it alone would restrict the radiologist to the areas the CAD system has identified. It is therefore imperative that the radiologist evaluates the complete image. In turn, the CAD system can help detect lesions that the radiologist might have missed [27–41].

**10. The heart of computer simulation: working of the artificial intelligence (AI) system**

It is complicated and difficult for computers to decipher radiologic images. To understand an image, the CAD system breaks the problem into multiple parts and works through a step-by-step process to decide whether a specific area on a radiologic image looks suspicious. It is therefore important that radiologists have a basic understanding of this process, so they can comprehend why the output of the CAD may be off from the usual, even if it is because of human error.

#### **10.1 Preprocessing**

Most AI systems start from an input or baseline dataset and preprocess the data before it undergoes further changes in the scanning software. A series of calibrations is performed to refine the data, including resampling and removing noise from the image. The basic purpose of this step is to ensure that the existing dataset evolves. Because the AI system works on knowledge from previous datasets, which ultimately reduce to binary data (0s and 1s), basic changes or aberrancies in the data can be readily pointed out for the doctor or radiologist to review. These aberrancies can then be studied by the doctors (**Figure 2**) [3].
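
As a rough illustration of this calibration step, the sketch below resamples a synthetic CBCT volume to isotropic voxels and suppresses noise. It is only a minimal sketch: the voxel spacings, target resolution, filter size, and the use of NumPy/SciPy are assumptions for illustration, not details taken from the systems described here.

```python
# Minimal preprocessing sketch: resample a CBCT volume to isotropic voxels and
# suppress noise. The voxel spacing, target resolution, and filter size are
# illustrative assumptions, not values taken from any particular scanner.
import numpy as np
from scipy import ndimage


def preprocess(volume: np.ndarray, spacing=(0.5, 0.3, 0.3), target=0.3) -> np.ndarray:
    """Resample `volume` (z, y, x) to isotropic `target` spacing, then denoise."""
    zoom_factors = [s / target for s in spacing]              # e.g. (1.67, 1.0, 1.0)
    resampled = ndimage.zoom(volume, zoom_factors, order=1)   # linear interpolation
    return ndimage.median_filter(resampled, size=3)           # remove speckle noise


# Toy usage: a synthetic 3D "scan" stands in for real CBCT data.
volume = np.random.default_rng(0).normal(size=(40, 64, 64))
print(preprocess(volume).shape)   # roughly (67, 64, 64) after resampling
```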

#### **10.2 Demarcation**

The second step is segmentation, or demarcation, of the normal structures in the dataset. It includes the demarcation and subsequent categorization of anatomic regions. This is the most difficult step in the process, the most studied, and it largely determines the accuracy of the AI system. Whereas human readers differentiate structures and artifacts from prior knowledge, the AI system depends heavily on its datasets: the larger the dataset, the more refined the algorithms become (**Figure 3**) [28].
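
A minimal sketch of such demarcation is shown below, using Otsu's threshold and connected-component labeling on a synthetic slice as a stand-in for the far richer anatomic segmentation real systems perform; the image, threshold method, and libraries are illustrative assumptions only.

```python
# Minimal demarcation sketch: separate dense structures (e.g. bone, teeth) from
# background with Otsu's threshold and label the connected regions. The
# synthetic slice below is not real CBCT data.
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

rng = np.random.default_rng(1)
slice_img = rng.normal(loc=0.2, scale=0.05, size=(128, 128))
slice_img[40:70, 30:90] = 0.80    # synthetic "mandible" region
slice_img[20:30, 50:60] = 0.85    # synthetic "tooth" region

threshold = threshold_otsu(slice_img)        # data-driven intensity cutoff
mask = slice_img > threshold                 # demarcate the dense structures
labels, num_regions = ndimage.label(mask)    # categorize into separate regions
print(f"threshold={threshold:.2f}, regions found={num_regions}")
```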

#### **10.3 Detection of the aberrancy**

The next major step is detection of the aberrancy: the system identifies locations that deviate from its subsets of normal data. These locations are called *candidates*, and they can be polyps, tumors, calcifications, or areas of dysplasia [3]. This step is sensitivity driven: the AI system must ensure that sensitivity crosses a certain threshold, whereas specificity can remain low. It is important to understand that these aberrancies are not necessarily anomalies; they remain subject to the doctor's acumen to decipher and diagnose (**Figure 4**).
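
The sketch below illustrates why this stage runs at a deliberately low decision threshold: on synthetic suspicion scores (all values and labels are made up for illustration), lowering the threshold raises sensitivity at the cost of specificity.

```python
# Sensitivity-driven detection sketch: a deliberately low decision threshold
# keeps sensitivity high at the cost of specificity. Scores and ground-truth
# labels are synthetic, purely to illustrate the trade-off.
import numpy as np

rng = np.random.default_rng(2)
truly_abnormal = rng.random(1000) < 0.05               # ~5% of locations abnormal
scores = np.where(truly_abnormal,
                  rng.normal(0.7, 0.15, 1000),         # abnormal sites score higher
                  rng.normal(0.3, 0.15, 1000))

for threshold in (0.60, 0.35):                         # strict vs. sensitivity-driven
    flagged = scores >= threshold
    sensitivity = (flagged & truly_abnormal).sum() / truly_abnormal.sum()
    specificity = (~flagged & ~truly_abnormal).sum() / (~truly_abnormal).sum()
    print(f"threshold={threshold:.2f}  sensitivity={sensitivity:.2f}  "
          f"specificity={specificity:.2f}")
```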


#### **Figure 2.**

*(a) The crude CBCT reconstruction image as captured from the patient and stored in the system. (b) Enhanced image after preprocessing with the color defects fixed. (c) Enhanced image after preprocessing with the noise defects fixed.*

#### **10.4 Scrutinizing the aberrancy**

As mentioned previously, the extracted data have to be highly sensitive for the next stage, which is scrutinizing the aberrancy. In this step, each area is closely analyzed to rule out normal variations. This is done using the *vector space paradigm*: each aberrancy from the normal is assigned features that can be computed. Each candidate has features with their own mean values and standard deviations, and the border gradients are described accordingly. This is paramount because each aberrancy pointed out in the previous step is now represented by a vector. These vectors can be represented mathematically in a space, which is essential for the machine's learning of patterns, the basis of AI systems. Such pattern analysis and machine learning can be used in any system to quantify the data and convert it into the computer's language [29–32].
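
As a minimal illustration of this vector space paradigm, the sketch below reduces a synthetic candidate region to a small feature vector (mean intensity, standard deviation, and mean border gradient); the particular features and data are assumptions for illustration, not the exact feature set of the systems described here.

```python
# Vector space paradigm sketch: each candidate region is reduced to a feature
# vector (mean intensity, standard deviation, mean border gradient). The chosen
# features and the synthetic image are assumptions for illustration only.
import numpy as np
from scipy import ndimage


def candidate_features(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Return a feature vector for the candidate marked by the boolean `mask`."""
    values = image[mask]
    gradient = np.hypot(*np.gradient(image))        # gradient magnitude
    border = mask ^ ndimage.binary_erosion(mask)     # boundary of the candidate
    return np.array([values.mean(), values.std(), gradient[border].mean()])


# Toy usage: a bright blob on a noisy background stands in for one candidate.
rng = np.random.default_rng(3)
image = rng.normal(0.2, 0.05, size=(64, 64))
yy, xx = np.mgrid[:64, :64]
mask = (yy - 32) ** 2 + (xx - 32) ** 2 < 10 ** 2
image[mask] += 0.6
print(candidate_features(image, mask))   # one point in the feature space
```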

#### **Figure 3.**

*CBCT panoramic reconstruction with demarcation of the anatomic structure showing the lower border of the mandible.*

#### **Figure 4.**

*CBCT panoramic reconstruction of a case of cherubism showing aberrancy from the normal data set.*


#### **10.5 Stratification**

The learned patterns and recognitions are classified and stratified in the space where the normal and abnormal candidates exist. The next step is training on the classified dataset, and consistency is the big deciding factor here: normal candidates are consistently classified and stratified into one subset, while abnormal candidates are consistently classified into the other. This trains the AI system to form an opinion about the classified and stratified data. The training is done with the help of a person with prior knowledge, who feeds the data to provide a reference standard with correct information. For example, the doctor can point out the location of cherubism on the CBCT panoramic reconstruction view from prior knowledge of its location. Data such as age can also be a deciding factor; it is fed into the system by the doctor and further aids the diagnostic process [29–32].

This is a very complicated step, because some basic aberrancies that are not disease may occur in the same locations, and some diseases may not appear in their classical locations. Therefore, data enrichment and stratification should be done on a regular basis.
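
A minimal sketch of this kind of training is given below: synthetic candidate feature vectors, labeled against a hypothetical doctor-provided reference standard and augmented with patient age, train a simple logistic-regression classifier. The model choice, features, and data are illustrative assumptions only.

```python
# Stratification and training sketch: labeled feature vectors separate normal
# from abnormal candidates, with patient age appended as an extra feature.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n = 200
# Columns: mean intensity, standard deviation, border gradient, patient age.
normal = np.column_stack([rng.normal(0.3, 0.05, n), rng.normal(0.05, 0.01, n),
                          rng.normal(0.1, 0.03, n), rng.integers(5, 80, n)])
abnormal = np.column_stack([rng.normal(0.7, 0.10, n), rng.normal(0.15, 0.03, n),
                            rng.normal(0.4, 0.10, n), rng.integers(5, 20, n)])
X = np.vstack([normal, abnormal])
y = np.concatenate([np.zeros(n), np.ones(n)])          # 0 = normal, 1 = abnormal

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)
new_candidate = np.array([[0.65, 0.12, 0.35, 12.0]])   # hypothetical young patient
print(model.predict_proba(new_candidate)[0, 1])         # degree of suspicion
```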


#### **10.6 Output of AI**

The AI system, as discussed before, learns and adapts from the quality of the dataset, so more exposure to data is needed to refine the results. AI researchers are therefore continually learning and experimenting with the classification schemes.

The process comes to completion when a degree of suspicion is assigned to each candidate in the stratum or group. A threshold decided by the doctor is the parameter used to test the degree of suspicion, and candidates whose degree of suspicion crosses the threshold are demarcated with identifiers such as circles or arrows. The AI system learns from the threshold and the final outcomes and enriches its data and data subsets; the results of previous learning feed the computer's learning curve. For example, in the CBCT shown in **Figure 4**, if features of cherubism are detected and the doctor diagnoses the case as cherubism, the result is saved in the system for future reference. If a CBCT for a case of a swollen angle of the jaw presents similar data, the computer uses its previous knowledge to decide whether the indicated region is considered a true or false positive [3].
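
A minimal sketch of this output stage is shown below, assuming hypothetical data structures: each candidate carries a degree of suspicion, candidates crossing the doctor-chosen threshold are flagged for review, and the doctor's confirmed diagnosis is stored back for future cases.

```python
# Output sketch: flag candidates whose degree of suspicion crosses the doctor's
# threshold, then store the confirmed diagnosis so later, similar cases can
# draw on it. The data structures and example values here are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Candidate:
    location: tuple      # (z, y, x) position in the volume
    suspicion: float     # degree of suspicion from the classifier


@dataclass
class KnowledgeBase:
    confirmed: list = field(default_factory=list)

    def record_outcome(self, candidate: "Candidate", diagnosis: str) -> None:
        # Enrich the dataset with the doctor's final diagnosis for future cases.
        self.confirmed.append((candidate, diagnosis))


candidates = [Candidate((12, 40, 33), 0.91), Candidate((30, 10, 52), 0.22)]
doctor_threshold = 0.5
flagged = [c for c in candidates if c.suspicion >= doctor_threshold]
print("flag for review:", flagged)           # shown as circles/arrows in a viewer

kb = KnowledgeBase()
kb.record_outcome(flagged[0], "cherubism")   # doctor confirms the diagnosis
print("stored outcomes:", len(kb.confirmed))
```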

**11. Clinical applications**

There is a wide range of applications of AI and CAD in various fields of medicine. These applications have received premarket approval (PMA), which encompasses devices shown to pose serious levels of risk to users. In these cases, the FDA guidelines suggest that newer devices could be safer and more effective in the near future.

In the last decade, there has been a considerable increase in their accuracy as systems for diagnostic help [25, 26]. AI systems in mammography have been successful in detecting and differentiating mass lesions and micro-calcifications [42]. For micro-calcifications, AI systems show high performance and sensitivity, which doctors use extensively to make an educated decision during diagnosis. Data from multiple views, and the combination of different modalities such as ultrasonography (US), magnetic resonance (MR) imaging, and digital breast tomosynthesis, have been shown to be very promising in enhancing the diagnosis and enriching the data for future diagnoses [34].

**12. Computer simulation and the future of diagnostic expertise**

In 1996, the American Journal of Roentgenology published a report of three cases of diagnostic errors in radiology. After assessing the clinicians' defense of their decisions, the author concluded that the radiologists missed the diagnosis because they did not think of the lesion, not because they did not know of it. This is popularly known as the *Aunt Minnie effect*: if a woman in a picture looks like Aunt Minnie, she must be Aunt Minnie [43]. Improving patient care means looking for ways to minimize diagnostic errors, an umbrella term that covers factors as varied as personal or social bias, heuristics, and even failure of perception [44]. Debiasing programs work, but they may take a lifetime to refine, even for doctors who are willing to accept error on their part [44]. In the present world, however, where software changes by the day, we do not have that luxury of time.

