**3. Results**

We used color images from various individuals under diverse conditions of luminosity, skin color, background, and presence of shadows. These images were obtained from the public websites BurnVictims [17], howzak [18], ISCN [19], Healthline [14], MedicalNewsToday [15], Brouhard [16], and UrgentCare [20]. The images were studied and tagged with the assistance of a plastic surgeon with extensive experience in

**Figure 3.** *Classification of image patches.*

treating burnt patients. In general, first-degree burns were superficial (epidermal) lesions that were red and dry. Second-degree burns were white or yellow, mostly wet, and involved blisters. Third-degree burns were bloody, leathery lesions revealing subcutaneous structures and were red, brown, or black. The collection includes 20 color images showing 33 superficial burns, 20 images with 26 partial-thickness burns, and 20 images with 56 third-degree burns. These images were characterized by a diversity of backgrounds. **Figure 4** shows four cases of burns of first, second, and third degrees. The image formats were JPG and PNG. The image resolution ranged from 12.13 pixels/in to 89.55 pixels/in. The variation in image resolution was due to the variation in distance between the camera and the subject. The dataset's average image resolution was 54.67 pixels/in.

Image segmentation was performed using six dictionaries, one for each class: *healthy skin*, *first-degree-burnt skin*, *second-degree-burnt skin*, *third-degree-burnt skin*, *shadowed skin*, and *background*. Each dictionary contained 3000 normalized atoms (unit vectors) with 54 features per atom. Color and texture were used as features for atom extraction, and patches were obtained from images at various locations. **Figure 5** shows six sets of patches, each corresponding to a different class. Patches were used for the extraction of dictionary atoms. Color and texture played a significant role as discriminative features. Identification of shadowed skin is not necessary for clinical analysis; however, this dictionary was helpful to discard such regions; otherwise, shadowed skin might be classified as a first- or third-degree burn. Shadowed skin usually occurs around finger joints and between fingers. The introduction of a dictionary for shadowed skin allowed a reduction in the number of false positives.
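An untrained dictionary of the kind described above can be sketched as a matrix whose columns are unit-normalized patch feature vectors. The function below is a minimal illustration, not the authors' implementation; the feature extractor is assumed to already produce 54 color/texture values per patch.

```python
import numpy as np

def build_dictionary(patch_features, n_atoms=3000, seed=0):
    """Build an 'untrained' dictionary by sampling patch feature vectors
    and normalizing each to a unit-norm atom (one column of D).

    patch_features: array of shape (n_patches, n_features), e.g. 54
    color/texture features per patch, as in the chapter.
    Returns D with shape (n_features, n_atoms)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(patch_features), size=n_atoms,
                     replace=len(patch_features) < n_atoms)
    atoms = patch_features[idx].astype(float)
    norms = np.linalg.norm(atoms, axis=1, keepdims=True)
    atoms = atoms / np.maximum(norms, 1e-12)  # unit vectors
    return atoms.T  # (features, atoms)

# Stand-in features; one such dictionary would be built per class
demo_features = np.random.default_rng(1).random((5000, 54))
D = build_dictionary(demo_features, n_atoms=3000)
```

In this layout each of the six class dictionaries is a 54 × 3000 matrix, matching the feature and atom counts reported in the text.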

Two sets of experiments were conducted, using either dictionaries that were directly built or learned dictionaries that were trained with *K-singular value decomposition* (K-SVD). *Untrained dictionaries* required collecting atoms from multiple images to generate each dictionary *D*, a matrix in which the number of rows equals the number of features (54) and the number of columns equals the number of atoms

**Figure 4.** *Examples of images with first-, second-, and third-degree burns.*

*Detection and Classification of Burnt Skin on Images with Sparse Representation of Image… DOI: http://dx.doi.org/10.5772/intechopen.105162*

**Figure 5.** *Collections of patches from first-, second-, and third-degree burns, healthy skin, shadowed skin, and background.*

**Figure 6.** *Three examples of detection and classification of burnt skin. First-degree burn on dark skin (upper left panel), first- and second-degree burns (upper right panel), third-degree burn (lower panel).*

(3000 atoms). An additional set of 500 observations *X* was necessary to train each learned dictionary with K-SVD. These observations were collected from various images at various positions. Feature values were mostly positive, while atom entries after K-SVD were both positive and negative. During dictionary learning, K-SVD was applied to generate a dictionary for each of the six classes.
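A single K-SVD iteration at the sparsity level that later proved best (*L* = 1) can be sketched as follows. With one active atom per observation, the sparse-coding stage reduces to picking the best-correlated atom, and the atom update is a rank-1 SVD of the observations assigned to it. This is a simplified sketch under those assumptions, not the chapter's exact training procedure; dimensions (54 features, 500 observations) follow the text, while the atom count here is reduced for illustration.

```python
import numpy as np

def ksvd_step(D, X):
    """One K-SVD iteration with sparsity L = 1 (each observation is
    approximated by a single atom). D: (features, atoms) with unit-norm
    columns; X: (features, n_obs). Returns the updated D."""
    # Sparse coding (L = 1): pick the atom with the largest |correlation|.
    corr = D.T @ X                        # (atoms, n_obs)
    assign = np.abs(corr).argmax(axis=0)  # atom index per observation
    # Atom update: rank-1 SVD of the observations using each atom.
    for k in range(D.shape[1]):
        users = np.flatnonzero(assign == k)
        if users.size == 0:
            continue
        # With L = 1, no other atom contributes to these observations,
        # so the error matrix for atom k is just X restricted to them.
        E = X[:, users]
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, k] = U[:, 0]  # new unit-norm atom
    return D

rng = np.random.default_rng(0)
X = rng.normal(size=(54, 500))   # 500 training observations, as in the text
D = rng.normal(size=(54, 40))    # illustrative atom count
D /= np.linalg.norm(D, axis=0)
D = ksvd_step(D, X)
```

In practice the sparse-coding and update stages are alternated for several iterations; SVD-updated atoms naturally come out with mixed-sign entries, consistent with the observation above that trained atoms contained both positive and negative values.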

**Figure 6** shows the results of skin burn segmentation and classification. The images under analysis show regions with first-degree and second-degree burns. Second-degree burns affect deeper skin layers (epidermis and dermis). Image patches detected as healthy skin or background are painted black. A second-degree burn is

usually surrounded by a first-degree burn. First-degree burns are painted red, and second-degree burns are painted green. The image under analysis presents shadowed skin around the fingers, which might be incorrectly classified as burns. Thus, extraction of connected components was used to find the largest objects and eliminate the smallest ones. An object was suppressed if its size was less than one fifth the size of the largest object. Feature vectors were extracted from 23 × 23 patches. The first two rows in each panel show the segmentation results obtained with untrained dictionaries. The third row of each panel shows the results of segmentation based on trained dictionaries. The best results were obtained with untrained dictionaries. The second and third columns of each panel show the results before and after extraction of connected components, respectively. After all the patches were classified, each image patch was re-classified by analyzing the classes of the nearest patches and taking the majority vote. The results of re-classification depended on the dimensions of the patch to be classified. The third panel presents a challenging case, since this image is characterized by the presence of shadowed skin. Furthermore, the toenails are painted red, so these regions ended up being classified as third-degree burns (false positives) before extraction of connected components. Third-degree burns destroy the epidermis and dermis, and their physical appearance is typically red. For this case too, the best results were obtained with untrained dictionaries.
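The connected-component suppression step described above can be illustrated with `scipy.ndimage`. This is a minimal sketch assuming a binary mask of burn-classified patches; the one-fifth size ratio follows the text.

```python
import numpy as np
from scipy import ndimage

def suppress_small_components(mask, ratio=0.2):
    """Keep only connected components whose area is at least `ratio`
    times the area of the largest component (the text removes objects
    smaller than one fifth of the largest)."""
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.flatnonzero(sizes >= ratio * sizes.max()) + 1
    return np.isin(labels, keep)

# Toy mask: one large burn region and one tiny false positive
m = np.zeros((10, 10), dtype=bool)
m[1:6, 1:6] = True   # 25-pixel component (kept)
m[8, 8] = True       # 1-pixel component, below 25 * 0.2 (suppressed)
clean = suppress_small_components(m)
```

Applied per class mask, this removes isolated misclassified patches (such as the red toenails) while preserving the true burn regions.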

**Table 1** shows the number of true positives, false positives, and false negatives for each segmentation experiment in **Figure 6** and for each burn degree. When optimal experimental settings (second row) were used for this image, the accuracy and sensitivity were both 100%.

The classification performance of the proposed method for identification and classification is shown in **Table 2**, in which *Sensitivity* = *TP*/(*TP* + *FN*) and *Precision* = *TP*/(*TP* + *FP*) values are presented for each burn degree. The detection performance is shown in the last column, where *TP* accounts for all the detected burns (all degrees), *FN* accounts for all the undetected burns (all degrees), and *FP* accounts for all the incorrectly detected burns (all degrees). Performance metrics were obtained using 4-fold cross-validation, in which 75% of the images (15 images) were randomly selected to build dictionaries and the other 25% (5 images) were used to test burn identification and classification. This process was repeated four times so that all images were tested. Sets of 3000 patches were obtained from the 15 images to build dictionaries. The performance metrics were obtained using two strategies for building dictionaries: direct extraction of atoms from images and trained dictionaries. Various sparsity factor values (*L* = 1, 2, 3) were tried during segmentation, and *L* = 1 provided the best performance. The patch-labeling size was 1 in some cases and 3 in others. The threshold 0 < *T* < 1 for suppression of the smallest connected components was experimentally established as 0.25 < *T* < 0.3. Since the purpose of this research is to assist physicians in the identification and classification of burnt skin, the segmentation process was aimed at burn regions. According to **Figure 6**, the segmentation process identified multiple burns, some of which were not true burns, i.e., false positives. *False positives* corresponded to healthy skin, shadowed skin, or background. False positives were characterized by a considerably smaller absolute area than *TP*; this is why extraction of connected components was used to detect and suppress the smallest components.
The results of suppression of the smallest connected components are shown in the rightmost column in panels of **Figure 6**. The introduction of three classes, shadowed skin, healthy skin, and background, reduced the prevalence of false positives while improving the skin burn degree classification performance.
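The two per-degree metrics reported in **Table 2** follow directly from the TP/FP/FN counts. The snippet below restates the formulas with hypothetical counts (not the chapter's data):

```python
def sensitivity(tp, fn):
    # Sensitivity (recall) = TP / (TP + FN)
    return tp / (tp + fn)

def precision(tp, fp):
    # Precision = TP / (TP + FP)
    return tp / (tp + fp)

# Hypothetical counts for one burn degree, for illustration only
tp, fp, fn = 30, 5, 3
sens = sensitivity(tp, fn)   # 30 / 33
prec = precision(tp, fp)     # 30 / 35
```

For the overall detection column, the same formulas are applied after pooling TP, FP, and FN across all three burn degrees.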


