
Several problems must be considered in the development of a face recognition system, such as illumination changes, facial expressions, and partial occlusions, because these kinds of changes can harm the accuracy of a face recognition system [6]. Changes in lighting conditions have received significant attention [6], and many systems have been proposed in recent years to reduce these problems [6]. Some of these systems are based on image processing techniques such as histogram equalization [6, 7] and contrast-limited adaptive histogram equalization (CLAHE) [8]. Another way to address illumination changes is the development of high-performance methods such as the eigenphases approach [7–11]. Methods based on frequency transforms are also useful, such as the discrete cosine transform [12–14], discrete Gabor transform [15–17], discrete wavelet transform [18–21], and discrete Haar transform [22]. Additional methods that could be applied are the eigenfaces [23, 24], which use principal component analysis (PCA) [25, 26], modular PCA-based face recognition methods [27], the Fisherfaces approach [28], and the Laplacianfaces [29].

The local binary pattern (LBP) operator [30] has recently been used in several applications. Its principal advantages are good computational performance and robustness to gray-level changes in the images. Because of that, LBP can be applied for image characterization in several pattern recognition tasks [31]. This algorithm can be used for face characterization because face images contain many small patterns that can be characterized using the LBP [30]. Several LBP variations have been proposed, such as the holistic LBP histogram (hLBPH) [30], the spatially enhanced LBP histogram (eLBPH) [32], the holistic LBP image algorithm (hLBPI) [32], and the decimated image window binary pattern (WBP) [33]. All of these algorithms are based on the original LBP algorithm, but the computational complexity of the hLBPI and WBP is lower than that of the others, while also providing a good performance, as shown in this chapter.

In recent years, interest in face recognition schemes has increased because of their potential implementation in mobile devices, which generally have limited computational power. Hence, this chapter presents a comparison of the texture descriptors hLBPI and WBP. Finally, some classification methods, such as SVM, the Euclidean distance, and the cosine distance, are used to perform the recognition. In this chapter, these algorithms were evaluated under different illumination and facial expression changes.

The remainder of this chapter is organized as follows: Section 2 presents the description of the evaluated system. Section 3 presents the evaluation results. Finally, Section 4 provides the conclusions of this research.



112 From Natural to Artificial Intelligence - Algorithms and Applications



2. Evaluated system
Figure 1 shows the block diagram of the evaluated face recognition system. Firstly, the system receives the face image under analysis, which is then fed into an interpolation stage (any other interpolation method can also be applied). Next, the texture descriptor algorithm is applied to characterize the image. Finally, the feature matrix is fed into the classification stage.

2.1. Texture descriptors

In this chapter, two texture descriptors were used: the hLBPI and the WBP. The hLBPI algorithm is based on the original LBP method introduced by Ojala et al. [34]. This algorithm uses masks of 3 × 3 pixels. In each neighborhood, as shown in Figure 2a, all neighbors are compared with the central pixel: each pixel is labeled with a 0 if its value is smaller than that of the central pixel; otherwise, it is labeled with a 1 (Figure 2b). Next, the label of each pixel is multiplied by 2<sup>p</sup>, where p is the position of each pixel in the neighborhood, from 0 to 7 (Figure 2c). Finally, all values are added to obtain the label that replaces the central pixel, as shown in Figure 2d. This algorithm can produce up to 256 different values for the central pixel. These steps are applied over the whole image to obtain the LBP matrix.
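The thresholding-and-weighting steps above can be sketched as follows. This is a minimal NumPy sketch, not the authors' implementation: the function name and the counter-clockwise neighbor ordering are illustrative conventions, and ties with the central pixel are labeled 1, as in the description above.

```python
import numpy as np

def lbp_matrix(image):
    """Basic 3x3 LBP: each neighbor is thresholded against the central
    pixel and weighted by 2**p (p = 0..7 in a fixed neighbor order)."""
    img = np.asarray(image, dtype=np.int32)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.int32)
    # Central pixels of every 3x3 neighborhood (borders are skipped).
    center = img[1:h - 1, 1:w - 1]
    # Neighbor offsets; the exact ordering is a convention, any fixed
    # order works as long as it is used consistently.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for p, (dr, dc) in enumerate(offsets):
        neighbor = img[1 + dr:h - 1 + dr, 1 + dc:w - 1 + dc]
        # Label is 1 when the neighbor is not smaller than the center.
        out += (neighbor >= center).astype(np.int32) << p
    return out
```

Applied to a whole image, this yields the LBP matrix used by the hLBPI descriptor.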

Figure 1. (a) Block diagram of the evaluated face recognition scheme, (b) illustration of the evaluated face recognition scheme.

Figure 2. Example of implementation of LBP in a neighborhood of 3 × 3 pixels.

After obtaining the LBP matrix, an L-dimensional feature vector can be estimated, where L is the total number of training images of N × M pixels; the LBP estimation requires 8NM additions and 8NM comparisons. The LBP matrix is then arranged in a column vector, X, of size NM, which is multiplied by the matrix Φ, of size L × NM, obtained from the PCA analysis. Thus, the estimation of the feature vector, given by Y = ΦX, requires NML additions and NML multiplications. The feature vector estimation then requires (24 + L)NM additions, (16 + L)NM multiplications, and 8NM comparisons. Assuming that all operations represent the same computational cost, the hLBPI algorithm requires approximately (48 + 2L)NM operations. On the other hand, the WBP is an algorithm that reduces the computational complexity of the original hLBPI without a significant loss of recognition accuracy. Firstly, the image is reduced with bicubic interpolation using a factor of 9. Then the image is divided into l × m non-overlapping windows of 3 × 3 pixels, such that the size of the original image (M × N) can be represented as 3l × 3m. Then, the WBP is defined as follows:

$$\text{WBP}(j,k) = \sum\_{p=0}^{P-1} s\left(I\_{j,k}(x,y) - g\_c\right) 2^p \tag{1}$$


Face Recognition Based on Texture Descriptors http://dx.doi.org/10.5772/intechopen.76722 115




Figure 3. WBP example.

Figure 4. Computational complexity of different algorithms.

where x = 3j + r, y = 3k + q, r = 1, 2, …, 8, q = 1, 2, …, 8, j = 1, 2, …, M/3, k = 1, 2, …, N/3; I<sub>j,k</sub>(x, y) represents the (j, k)-th block of 3 × 3 pixels of the downsampled input image, g<sub>c</sub> is the central pixel of the same block, and s(I<sub>j,k</sub>(x, y) − g<sub>c</sub>) = 0 if I<sub>j,k</sub>(x, y) < g<sub>c</sub>, and 1 otherwise. Finally, the label of each pixel is multiplied by 2<sup>p</sup>, where p is the position of each pixel in the neighborhood, from 0 to 7. Next, the feature matrix obtained from Eq. (1) is rearranged in a vector of size MN/9, which is fed into the classification stage. The main advantage of this algorithm is that the face image can be characterized by small non-overlapping windows of 3 × 3 pixels instead of the overlapping windows used by the original hLBPI.

A WBP example is shown in Figure 3. First, the original image is divided into windows of 3 × 3 pixels (Figure 3a). Figure 3b shows the result of the comparison of neighboring pixels, and Figure 3c shows the result of its respective conversion to decimal values. The matrix resulting from the sum of the decimal values (the WBP image) is shown in Figure 3d; its size is reduced to 1/9 of the original image. This matrix is then fed into the classifier for training or recognition. The computational complexity of this algorithm includes the estimation of the LBP coefficients using non-overlapping blocks of 3 × 3 pixels, which requires 8NM/81 additions and 8NM/81 comparisons. Thus, assuming that the operations have similar complexity, the proposed algorithm requires 16NM/81 operations.
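The WBP pipeline can be sketched in the same style. This is a simplified illustration, not the authors' code: plain 3× subsampling stands in for the bicubic interpolation described above, the row-major neighbor ordering is an assumed convention, and `wbp_features` is a hypothetical name.

```python
import numpy as np

def wbp_features(image):
    """Sketch of the WBP descriptor: downsample the image, then apply
    the LBP thresholding on non-overlapping 3x3 windows (one code per
    window)."""
    img = np.asarray(image, dtype=np.int32)
    small = img[::3, ::3]            # stand-in for bicubic reduction
    h = (small.shape[0] // 3) * 3    # crop to a multiple of 3
    w = (small.shape[1] // 3) * 3
    weights = 1 << np.arange(8)      # 2**p for p = 0..7
    codes = []
    for r in range(0, h, 3):         # non-overlapping 3x3 windows
        for c in range(0, w, 3):
            block = small[r:r + 3, c:c + 3]
            gc = block[1, 1]                      # central pixel
            bits = np.delete(block.flatten(), 4)  # the 8 neighbors
            codes.append(int(np.sum((bits >= gc) * weights)))
    return np.array(codes)
```

Because the windows do not overlap, each downsampled pixel is touched only once, which is where the complexity reduction over hLBPI comes from.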

A comparison of the computational complexity of other recently proposed methods for feature vector estimation during the testing operation is shown in Figure 4.
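As a rough sanity check of the two counts, the formulas above can be evaluated for concrete sizes. The values of N, M, and L below are illustrative assumptions, not figures taken from the chapter.

```python
# Operation counts from the complexity analysis above, for assumed sizes.
N, M, L = 120, 120, 100           # image size and training-set size (assumed)

hlbpi_ops = (48 + 2 * L) * N * M  # hLBPI: approximately (48 + 2L)NM operations
wbp_ops = 16 * N * M // 81        # WBP:   approximately 16NM/81 operations

print(f"hLBPI: {hlbpi_ops:,} operations")
print(f"WBP:   {wbp_ops:,} operations")
print(f"ratio: {hlbpi_ops / wbp_ops:.0f}x")
```

Note that the hLBPI count grows with the number of training images L, while the WBP count does not, so the gap widens as the gallery grows.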

2.2. Classification stage


After obtaining the feature vectors, the next step is the classification stage, which performs the identification or verification task. In this chapter, during the training stage, K-means was used to obtain a template by averaging the training images of each class. During the identification task, three classifiers are used: SVM, the Euclidean distance, and the cosine distance, classifying the image under analysis as belonging to the class with the smallest distance or, in the SVM case, the largest probability. During the verification task, the system validates the identity of the person under analysis with a given threshold; these results are only obtained with the SVM. In this chapter, two different distances were evaluated: the Euclidean distance, given by

$$d\_{st} = \left(\mathbf{x}\_s - \mathbf{y}\_t\right) \left(\mathbf{x}\_s - \mathbf{y}\_t\right)^T, \tag{2}$$
and the cosine distance, given by

$$d\_{st} = 1 - \frac{\mathbf{x}\_s \mathbf{y}\_t^T}{\sqrt{\left(\mathbf{x}\_s \mathbf{x}\_s^T\right)\left(\mathbf{y}\_t \mathbf{y}\_t^T\right)}}, \tag{3}$$

where x<sub>s</sub> is the estimated feature vector of the image under analysis and y<sub>t</sub> is the center of the t-th class.
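The two distance classifiers used in this chapter (Euclidean and cosine, each comparing a feature vector against one template per class) can be sketched as follows; the function names are illustrative.

```python
import numpy as np

def euclidean_sq(xs, yt):
    """Squared Euclidean distance between a feature vector and a class
    template, i.e. the row-vector product (xs - yt)(xs - yt)^T."""
    d = np.asarray(xs, float) - np.asarray(yt, float)
    return float(d @ d)

def cosine_distance(xs, yt):
    """Cosine distance: 1 minus the normalized inner product."""
    xs, yt = np.asarray(xs, float), np.asarray(yt, float)
    return float(1.0 - (xs @ yt) / np.sqrt((xs @ xs) * (yt @ yt)))

def classify(x, templates):
    """Assign x to the class whose template has the smallest distance.
    `templates` maps each class label to its (averaged) template."""
    return min(templates, key=lambda c: euclidean_sq(x, templates[c]))
```

Swapping `euclidean_sq` for `cosine_distance` in `classify` gives the cosine-distance classifier; the SVM case differs only in that the class with the largest probability is chosen instead.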

3. Evaluation results

Two image sets were used for the evaluation: the AR(A) set, which contains images with illumination and facial expression changes, and the AR(B) set, which has 30 images with partial occlusion using sunglasses and also illumination changes. Figure 6 shows some examples of these images.

3.1. Identification

In real-world applications, the number of training images and the recognition accuracy are strongly related: the more images used for training, the better the recognition accuracy. However, in real applications the number of training images is limited. Figure 7 shows the performance of hLBPI (Figure 7a) and WBP (Figure 7b) with different numbers of training images using three different classifiers.

Figure 8 shows the recognition performance of the texture descriptors hLBPI and WBP compared with other classical methods, all of them using the AR(A) set and seven training images for each person.

Another important evaluation is the identification ranking, where rank(n) denotes the probability that the image under analysis belongs to one of the n classes with the highest probability. For example, a rank of 10 is the probability that the image belongs to one of the 10 most likely persons.
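A rank-n evaluation of this kind can be sketched as follows, assuming one distance score per class for each test image; the function names are illustrative.

```python
import numpy as np

def rank_n_hit(distances, true_class, n):
    """True when the correct class index is among the n classes with
    the smallest distances (a rank-n identification hit)."""
    order = np.argsort(distances)   # class indices sorted best-first
    return bool(true_class in order[:n])

def cumulative_match(dist_matrix, labels, n):
    """Fraction of test samples whose true class appears in the top n:
    one point of the cumulative match characteristic (CMC) curve."""
    hits = [rank_n_hit(row, lab, n)
            for row, lab in zip(dist_matrix, labels)]
    return sum(hits) / len(hits)
```

Evaluating `cumulative_match` for n = 1, 2, … traces the ranking curves of the kind reported below.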

In all cases, the training was done using seven images per person belonging to the AR(A) set, while the recognition system was tested with images that were not used for training, from the AR(A) and AR(B) sets, respectively.
3.2. Verification

In the case of verification, the percentage of error is divided into two: false acceptance and false rejection. False acceptance occurs when an individual claims an identity that is not their own and

Figures 9 and 10 present the ranking evaluation with the set AR(A) and set AR(B).

Figure 6. Examples of (a) images from the AR(A) set. (b) Images from the AR(B) set.







