**2. Methodology**

An overview of the proposed method is shown in **Figure 1** [13]. Each image was segmented into two regions: no burns and burns. Burns were classified as first-degree, second-degree, or third-degree burns. Image segmentation was performed by classifying each image patch. Each image patch (black square in **Figure 1**) was classified by (1) extracting a feature vector (red square in **Figure 1**), and (2) classifying the feature vector according to its closest reconstruction over six dictionaries. The dimensions of the image patch to be classified were smaller than the dimensions of the patch used for feature extraction. The extracted features were color and texture. The class assigned to a patch depended on the dictionary that best reconstructed the extracted feature vector, where the best sparse reconstruction corresponded to the minimal Euclidean distance between the feature vector and its reconstruction.

To obtain color features, three histograms, $P_R$, $P_G$, and $P_B$, were generated, one from each RGB channel. Five statistical attributes were computed for each channel: the *average value*

$\mu = \sum_{I=0}^{255} I\,P(I)$, the *variance* $\sigma^2 = \sum_{I=0}^{255} (I-\mu)^2\,P(I)$, the *skewness* $m_3 = \sum_{I=0}^{255} (I-\mu)^3\,P(I)$, the *kurtosis* $m_4 = \sum_{I=0}^{255} (I-\mu)^4\,P(I)$, and the *entropy* $H = -\sum_{I=0}^{255} P(I)\log P(I)$. To conduct texture analysis, a color image was transformed into a grayscale image using the luminosity model, a weighted average of the RGB channels: $I = 0.21R + 0.72G + 0.07B$. According to the luminosity model, the green channel contributes the most, which agrees with the fact that green is the dominant color in skin. Color was extracted from three channels within each patch: the green
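The five histogram statistics above can be sketched in numpy as follows; `channel_statistics` is a hypothetical helper name, and the 256-bin histogram assumes 8-bit intensities:

```python
import numpy as np

def channel_statistics(channel):
    """Compute the five statistical attributes of one color channel
    from its 256-bin normalized intensity histogram P(I)."""
    hist, _ = np.histogram(channel, bins=256, range=(0, 256))
    p = hist / hist.sum()                 # normalized histogram P(I)
    i = np.arange(256)
    mu = np.sum(i * p)                    # average value
    var = np.sum((i - mu) ** 2 * p)       # variance
    m3 = np.sum((i - mu) ** 3 * p)        # skewness
    m4 = np.sum((i - mu) ** 4 * p)        # kurtosis
    nz = p > 0                            # skip empty bins to avoid log(0)
    h = -np.sum(p[nz] * np.log2(p[nz]))   # entropy
    return mu, var, m3, m4, h
```

For a constant patch the central moments and entropy all vanish, which is a quick sanity check on the implementation.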

**Figure 1.** *Skin burn detection and classification model.*

channel, the blue channel, and a linear combination of the three RGB channels (the luminosity model), as shown in **Figure 2**. The luminosity model was used to convert the color information to a single grayscale value for texture analysis.
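The luminosity conversion is a one-line weighted average; this sketch assumes the image is stored as an `(H, W, 3)` RGB array:

```python
import numpy as np

def to_grayscale(rgb):
    """Luminosity model: I = 0.21*R + 0.72*G + 0.07*B."""
    weights = np.array([0.21, 0.72, 0.07])
    return rgb @ weights   # (H, W, 3) -> (H, W)
```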

*Textural features* convey statistical information about the relative positions of the pixel intensity values within the region of interest. They are obtained from the gray level co-occurrence matrix (GLCM); both concepts were introduced by Robert Haralick [14–16]. The GLCM specifies the distribution of pairs of pixel intensities according to (1) the distance between the two pixels and (2) the angle of the line segment that joins them. There are four possible angles: 0° (horizontal), 45° (diagonal), 90° (vertical), and 135° (antidiagonal). The GLCM entry for a pair of pixel intensities $(I_m, I_n)$ at distance $d$ and angle $\varphi$ is defined as

$$P(I_m, I_n, d, \varphi) = \frac{\text{Number of pairs } (I_m, I_n) \text{ at distance } d \text{ and angle } \varphi}{\text{Total number of pairs}} \tag{1}$$

where $\{P(I_m, I_n, d, \varphi);\ m = 1, 2, \ldots, N;\ n = 1, 2, \ldots, N\}$ is a second-order histogram defined as the probability of occurrence of a pair of pixel intensities in terms of their relative position and angle, and $N$ is the number of pixel intensities. Seven textural features were computed from the GLCM: (1) the angular second moment $ASM = \sum_{m=1}^{N}\sum_{n=1}^{N} P(I_m, I_n)^2$, (2) the contrast $CON = \sum_{\ell=1}^{N} \ell^2 \sum_{m=1}^{N}\sum_{n=1,\,|I_m - I_n| = \ell}^{N} P(I_m, I_n)$, (3) the inverse difference moment $IDF = \sum_{m=1}^{N}\sum_{n=1}^{N} \frac{P(I_m, I_n)}{1 + (I_m - I_n)^2}$, (4) the correlation $Corr = \frac{\sum_{m=1}^{N}\sum_{n=1}^{N} I_m I_n P(I_m, I_n) - \mu^2}{\sigma^2}$, (5) the co-occurrence matrix variance $Var = \sum_{m=1}^{N}\sum_{n=1}^{N} (I_m - \mu)^2 P(I_m, I_n)$, (6) the difference

**Figure 2.** *Image channels and luminosity model.*

*Detection and Classification of Burnt Skin on Images with Sparse Representation of Image… DOI: http://dx.doi.org/10.5772/intechopen.105162*

average $DA = \sum_{m=1}^{N}\sum_{\ell=1}^{N} I_m\,P(I_m, I_m - \ell)$, and (7) the co-occurrence matrix entropy $H = -\sum_{m=1}^{N}\sum_{n=1}^{N} P(I_m, I_n) \log P(I_m, I_n)$.
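A minimal sketch of the GLCM construction and a subset of the Haralick features is given below. The function names are hypothetical, `levels` assumes intensities have been quantized to a small number of gray levels, and only five of the seven features (ASM, contrast, inverse difference moment, variance, entropy) are shown for brevity:

```python
import numpy as np

def glcm(gray, d=1, angle=0, levels=8):
    """Normalized gray level co-occurrence matrix, Eq. (1), for the
    pixel-pair offset given by distance d and angle (degrees)."""
    offsets = {0: (0, d), 45: (-d, d), 90: (-d, 0), 135: (-d, -d)}
    dr, dc = offsets[angle]
    h, w = gray.shape
    P = np.zeros((levels, levels))
    for r in range(h):
        for c in range(w):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < h and 0 <= c2 < w:
                P[gray[r, c], gray[r2, c2]] += 1
    return P / P.sum()   # probability of each intensity pair

def haralick_features(P):
    """Five of the seven textural features computed from a GLCM."""
    n = P.shape[0]
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    asm = np.sum(P ** 2)                    # angular second moment
    con = np.sum((i - j) ** 2 * P)          # contrast (sum over |i-j| = l)
    idf = np.sum(P / (1 + (i - j) ** 2))    # inverse difference moment
    mu = np.sum(i * P)
    var = np.sum((i - mu) ** 2 * P)         # co-occurrence matrix variance
    nz = P > 0
    h = -np.sum(P[nz] * np.log2(P[nz]))     # co-occurrence matrix entropy
    return asm, con, idf, var, h
```

Note that the contrast is computed here as $\sum_{m,n}(I_m-I_n)^2 P(I_m,I_n)$, which is algebraically identical to grouping the pairs by $\ell = |I_m - I_n|$ as in the formula above.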

In this work, the GLCM was computed for three values of the angle $\varphi$: 0° (horizontal), 45° (diagonal), and 90° (vertical). The GLCM of pixel intensity pairs $(I_m, I_n)$ was generated at each of three distance values, $d = 1, 2, 3$. Thus, nine GLCMs were generated for each image patch, and seven texture-based features were computed from each GLCM. Thus, 45 texture-based features were used to build each feature vector. The total number of entries in an image patch's feature vector was thus nine color features and 405 texture-based features.
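The assembly of the per-patch texture vector over the 3 angles × 3 distances (nine GLCMs) can be sketched as below; `texture_vector` is a hypothetical helper, and only two features per GLCM (ASM and contrast) are concatenated as an illustration rather than the full set described above:

```python
import numpy as np

def texture_vector(gray, levels=8):
    """Concatenate GLCM features over 3 angles x 3 distances (9 GLCMs)."""
    offsets = {0: lambda d: (0, d), 45: lambda d: (-d, d), 90: lambda d: (-d, 0)}
    h, w = gray.shape
    feats = []
    for angle in (0, 45, 90):          # horizontal, diagonal, vertical
        for d in (1, 2, 3):            # three pixel distances
            dr, dc = offsets[angle](d)
            P = np.zeros((levels, levels))
            for r in range(h):
                for c in range(w):
                    r2, c2 = r + dr, c + dc
                    if 0 <= r2 < h and 0 <= c2 < w:
                        P[gray[r, c], gray[r2, c2]] += 1
            P /= P.sum()
            i, j = np.meshgrid(np.arange(levels), np.arange(levels),
                               indexing="ij")
            # two of the seven features per GLCM, as an illustration
            feats += [np.sum(P ** 2), np.sum((i - j) ** 2 * P)]
    return np.array(feats)
```

With all seven features per GLCM the loop above would yield the 9 × 7 concatenation described in the text.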

The classification of a feature vector $x$ extracted from an image patch consisted of (1) sparsely reconstructing the feature vector over a dictionary $D_i$ for each class (healthy skin, first-degree-burnt skin, second-degree-burnt skin, third-degree-burnt skin, shadowed skin, background), followed by (2) determining the dictionary, and hence the class, that gave the sparse reconstruction $s_i$ closest to the feature vector $x$ under analysis. The closest sparse reconstruction corresponds to the smallest distance between the feature vector and its approximation according to the minimization problem $class = \arg\min_i \lVert x - s_i \rVert_2$, where $s_i$ is the sparse reconstruction of $x$ with dictionary $D_i$, as shown in **Figure 3**.
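The classification rule can be sketched as follows. The chapter does not state which sparse-coding algorithm was used, so this sketch assumes orthogonal matching pursuit (a common choice) implemented minimally in numpy; `omp`, `classify`, and the sparsity level `k` are illustrative names and parameters:

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit (assumed solver): sparse code of x
    over dictionary D (columns are atoms), at most k nonzero coefficients."""
    residual, support, coef = x.copy(), [], np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        if j in support:
            break                                    # no new atom helps
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    s = np.zeros(D.shape[1])
    s[support] = coef
    return s

def classify(x, dictionaries, k=3):
    """class = argmin_i || x - D_i @ omp(D_i, x) ||_2"""
    errors = [np.linalg.norm(x - D @ omp(D, x, k)) for D in dictionaries]
    return int(np.argmin(errors))
```

Here `dictionaries` would hold the six class dictionaries $D_i$; the patch is assigned to whichever class reconstructs its feature vector with the smallest Euclidean error.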
