**5. Results and discussion**

The experiments were executed on an Intel Core i3-2330M processor @ 2.20 GHz with 4 GB of RAM, running 64-bit Windows 7 Ultimate; the research implementation was written in MATLAB R2013a. The proposed methodology was tested on the PH2 dataset [10], which consists of 200 8-bit RGB dermoscopic images of melanocytic lesions with a resolution of 768×560 pixels. The dermoscopic images were acquired at the Dermatology Service of Hospital Pedro Hispano, Portugal, under the same conditions through the Tuebinger Mole Analyzer system using a magnification of 20×. The strength of the proposed algorithm is the detection and removal of thin/thick and light/dark hair from dermoscopic images while preserving the texture pattern, shape, and colors of the skin lesion. Furthermore, if a dermoscopic image contains no hair, the algorithm preserves its features. **Figure 9** depicts a sample of results: five input images appear in the first row and, connected by downward arrows representing the intermediate processing steps discussed earlier, their corresponding output images appear in the last row.
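The chapter's block-based detection and inpainting pipeline is not reproduced here, but the general idea of flagging dark hair strands on the luminance (Y) channel and repairing the flagged pixels can be illustrated compactly. The following is a minimal sketch, assuming a generic morphological bottom-hat detector with a horizontal line structuring element and closing-based repair; the function names, structuring element, and threshold are illustrative choices, not the authors' exact method.

```python
import numpy as np

def grayscale_closing_1d(img, length):
    """Grayscale closing (dilation followed by erosion) with a
    horizontal line structuring element of the given odd length."""
    r = length // 2
    # dilation: sliding-window maximum along each row
    pad = np.pad(img, ((0, 0), (r, r)), mode="edge")
    windows = np.stack([pad[:, i:i + img.shape[1]] for i in range(length)], axis=0)
    dil = windows.max(axis=0)
    # erosion: sliding-window minimum along each row
    pad = np.pad(dil, ((0, 0), (r, r)), mode="edge")
    windows = np.stack([pad[:, i:i + img.shape[1]] for i in range(length)], axis=0)
    return windows.min(axis=0)

def remove_hair(y, se_length=5, thresh=30):
    """Detect dark hair strands as the morphological bottom-hat of the
    luminance channel and repair the flagged pixels with the closed
    image (a crude stand-in for proper inpainting)."""
    closed = grayscale_closing_1d(y.astype(float), se_length)
    hair_mask = (closed - y) > thresh           # bottom-hat thresholding
    repaired = np.where(hair_mask, closed, y)   # replace only hair pixels
    return repaired.astype(y.dtype), hair_mask
```

On a hair-free image the bottom-hat response stays below the threshold, so the mask is empty and the image passes through unchanged, mirroring the feature-preservation behavior described above.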

The performance of the hair detection and inpainting operation was evaluated using statistical analysis based on the metrics of sensitivity, specificity, and diagnostic accuracy. Our proposed algorithm reports a true positive rate (sensitivity) of 97.36%, a false positive rate (fall-out) of 4.25%, and a true negative rate (specificity) of 95.75%. The diagnostic accuracy achieved is high, at 95.78%. To estimate the accuracy of the proposed algorithm and to quantify the automatic hair detection error, quantitative evaluations were performed using three statistical metrics: sensitivity or true detection rate (TDR), specificity or true negative rate (TNR), and diagnostic accuracy (DA). TDR measures the rate of pixels classified as hair by both the automatic algorithm and the medical expert, and TNR measures the rate of pixels classified as non-hair by both the automatic segmentation and the medical expert. These metrics are calculated using Eqs. (2)–(5) as follows:

$$\text{Sensitivity (TDR)} = \frac{TP}{TP + FN} \times 100\tag{2}$$

$$\text{Specificity (TNR)} = \frac{TN}{TN + FP} \times 100\tag{3}$$

$$\text{Fall-out (FPR)} = \frac{FP}{FP + TN} \times 100\tag{4}$$

**Figure 9.** *Sample of results.*

*An Efficient Block-Based Algorithm for Hair Removal in Dermoscopic Images DOI: http://dx.doi.org/10.5772/intechopen.80024*

$$\text{Diagnostic accuracy (DA)} = \frac{TP + TN}{TP + FN + FP + TN} \times 100\tag{5}$$

where TP, FP, FN, and TN stand for the number of true positive, false positive, false negative, and true negative, respectively. The quantitative results of the proposed algorithm are summarized in **Table 6**. They were calculated as follows:



• False positive (FP): count of the remaining white pixels.
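To make Eqs. (2)–(5) concrete, the sketch below computes all four rates from a predicted hair mask and an expert ground-truth mask. The function name and returned dictionary layout are illustrative choices, not part of the original MATLAB implementation.

```python
import numpy as np

def hair_detection_metrics(pred, truth):
    """Compare an automatically segmented hair mask against an
    expert-annotated ground-truth mask (both boolean arrays of the
    same shape) and return the metrics of Eqs. (2)-(5) as percentages."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.sum(pred & truth)    # hair in both masks
    tn = np.sum(~pred & ~truth)  # background in both masks
    fp = np.sum(pred & ~truth)   # detected hair absent from ground truth
    fn = np.sum(~pred & truth)   # ground-truth hair the algorithm missed
    return {
        "TDR": 100.0 * tp / (tp + fn),                   # sensitivity, Eq. (2)
        "TNR": 100.0 * tn / (tn + fp),                   # specificity, Eq. (3)
        "FPR": 100.0 * fp / (fp + tn),                   # fall-out,    Eq. (4)
        "DA":  100.0 * (tp + tn) / (tp + tn + fp + fn),  # accuracy,    Eq. (5)
    }
```

Note that TNR and FPR are complementary (they sum to 100%), which matches the reported specificity of 95.75% and fall-out of 4.25%.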

#### **Table 6.**

*Performance evaluation (confusion matrix).*

**Figure 10.**

*False negative calculation. (a, e) Y-channel. (b, f) Repaired Y-channel (Y″). (c, g) Differences between (a, b) and (e, f) illustrated by red dots. (d, h) Y-channel with false negative pixels represented by red dots.*

**Figure 11.**

*True positive calculation. (a, c) Hair segmented binary image. (b, d) Truly classified hair pixels.*

#### **Figure 12.**

*Results of complement operation performed on binarized images.*


#### **Table 7.**

*Comparison of the hair detection algorithms.*

Unfortunately, we could not find a common database that can be shared with other researchers, and no related work has used the PH2 dataset [10] against which the proposed algorithm could be compared directly. However, **Table 7** compares the proposed hair detection algorithm with some other methods.
