**Abstract**

Face images with partially-occluded areas pose severe problems for face recognition systems. Linear regression classification (LRC) is a simple and powerful approach for face recognition, but it cannot perform well under occlusion. By segmenting the face image into small subfaces, called modules, the LRC system can achieve some improvement by selecting the best non-occluded module for face classification. However, the recognition performance still deteriorates because only a single module, a small portion of the face image, is used. The performance can be further enhanced if we properly identify the occluded modules and utilize as many non-occluded modules as possible. In this chapter, we first analyze the texture histogram (TH) of each module and then use the TH difference to measure its occlusion tendency. Based on the TH difference, we suggest a general concept of weighted module face recognition to solve the occlusion problem, and propose the weighted module linear regression classification method, called WMLRC-TH, for partially-occluded face recognition. To evaluate its performance, the proposed WMLRC-TH method, tested on the AR and FRGC2.0 face databases with several synthesized occlusions, is compared to well-known face recognition methods and other robust face recognition methods. Experimental results show that the proposed method achieves the best performance for recognizing occluded faces. Due to its simplicity in both training and testing phases, a face recognition system based on the WMLRC-TH method is realized on Android phones for fast recognition of occluded faces.

**Keywords:** face recognition, linear regression classification, occlusion tendency, weighted module face recognition, texture histogram difference

### **1. Introduction**

With the progress of computer vision and machine learning, person identification and verification for security have become practical and play an important role in modern smart living applications. Face recognition for security control has received much attention recently. With only a camera, face recognition becomes a simple and direct way to achieve reliable identification and verification. Numerous algorithms [1–4] have made important contributions to face recognition. These approaches can be divided into two categories, holistic-based and modular-based methods [5]. The holistic-based approaches include principal component analysis (PCA) [6] and linear discriminant analysis (LDA) [6]. John *et al.* [7] exploited sparse representation to improve the recognition rate. On the other hand, the modular-based approaches provide more valuable features for face recognition. Naseem *et al.* [8] proposed linear regression classification (LRC) and its modular design to solve occlusion problems. During the training stage, the LRC algorithm forms, for each identity, the regression surface that best fits the data distribution. In the testing stage, the unlabeled face image vector is used to recognize the person by finding the shortest projection distance to the regression surface of each identity.

In practical applications, especially during the COVID-19 pandemic, face recognition may encounter diverse challenges such as low resolution, illumination change, facial expression, and partial occlusion by facial accessories, face masks, and scarves. For these challenges, several improved LRC algorithms have been proposed in the literature, such as the PCA-based LRC [9], unitary LRC [10], class-specific kernel LRC [11], and sparse representation classifier [12] methods.

**Figure 1** shows some examples of partially-occluded faces, where the faces are partially covered by facial accessories or other obstacles. Partial occlusion is one of the practical and difficult problems in face recognition [13, 14]. Thus, some recent research has focused on partially-occluded faces [15–19]. Most approaches divide the testing images into several modules to prevent the occluded area from influencing the recognition results. The module LRC (MLRC) proposed in [8] is an effective way to solve occlusion issues in face recognition. The MLRC treats each module as equally important; however, the occluded modules should be excluded, or their contribution to the final classification reduced. Thus, using regression parameters (RP), our early-version weighted module linear regression classification (WMLRC-RP) [20] was first proposed to enhance the MLRC method. To secure access to ATM machines, occluded face detection methods have been proposed [21, 22]. By using a support vector machine (SVM) to detect the occluded modules, LDA face recognition can be performed on the non-occluded part [23]. By using the structured dictionary learning (SDL) method, SRC face recognition can separate occluded face images [24]. Robust face recognition algorithms based on various features, such as the Huber loss [25], local binary features [26], and topology preserving structure matching (TPSM) [27], have been proposed to solve occlusion problems successfully. It is noted that the state-of-the-art CNN approaches [28–31], which need to be pre-trained on all face images, are not directly applicable to partially-occluded face recognition. Besides, since the CNN approaches usually need to pre-train the model, instant enrollment of new members becomes impractical.


**Figure 1.** *Examples of partially-occluded situations.*

*Weighted Module Linear Regression Classifications for Partially-Occluded Face Recognition DOI: http://dx.doi.org/10.5772/intechopen.100621*

In this chapter, we propose a better occlusion tendency detection that uses the texture histogram (TH) to distinguish occluded from non-occluded modules. With the TH occlusion tendency, the WMLRC-TH method is then proposed. The rest of the chapter is organized as follows. In Section 2, we first review the basic LRC concept and introduce the module LRC design for face recognition. Section 3 then analyzes texture histograms, discusses how to use them to determine the occlusion tendency of a module, and presents the proposed WMLRC-TH method. In Section 4, the proposed WMLRC-TH method, tested on the AR [32] and FRGC2.0 [33] face databases with synthesized occlusions, is compared to well-known and robust face recognition methods. The realization of the partially-occluded face recognition system based on the WMLRC-TH method on mobile platforms is also addressed. Finally, conclusions are drawn in Section 5.

#### **2. Module linear regression classification for face recognition**

For face recognition, we assume that all training and testing facial images are preprocessed by face detection, face cropping, and possibly face alignment. Assume that we have *C* subjects, which are characterized by training facial images used for identity recognition. For the *C* identities, the *i*th identity has *N* training facial images, all of size $p \times q$ pixels. Each pixel can be further represented with *K* color channels. Thus, the *k*th channel data of the *i*th identity is represented by $v_{i,j,k} \in R^{p \times q}$, $i = 1, 2, \ldots, C$, $j = 1, 2, \ldots, N$, and $k = 1, 2, \ldots, K$, where *K* = 3 for most RGB face images.

For the *k*th channel, we can cascade its *q* column vectors into a larger column vector, $\mathbf{x}_{i,j,k} \in R^{pq \times 1}$. For the *i*th class identity, its *N* training vectors, $\mathbf{x}_{i,j,k}$, $j = 1, 2, \ldots, N$, are grouped into the *i*th class vector space as

$$\mathbf{X}\_{i,k} = [\mathbf{x}\_{i,1,k}, \mathbf{x}\_{i,2,k}, \dots, \mathbf{x}\_{i,N,k}] \in R^{pq \times N}, \quad i = 1, 2, \dots, C. \tag{1}$$

If $v_{i,j,k}$ is reformed by linearly combining the channels into one grayscale channel, we set *K* = 1 or simply drop the subscript *k*, writing $\mathbf{x}_{i,j} \in R^{pq \times 1}$ and $\mathbf{X}_i \in R^{pq \times N}$ for simplicity. For face recognition, the linear regression classification (LRC) and module LRC are briefly reviewed as follows.
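As a concrete sketch of this data arrangement, the stacking in Eq. (1) can be written as follows (a minimal illustration assuming NumPy; `build_class_matrix` is a hypothetical helper name, not part of the original formulation):

```python
import numpy as np

def build_class_matrix(images):
    """Stack the N training images (each p x q) of one identity into
    the pq x N class matrix X_i of Eq. (1), cascading the q columns of
    each image into a single column vector."""
    return np.column_stack([img.flatten(order="F") for img in images])

# Toy example: N = 3 grayscale images of size p = 4, q = 5.
rng = np.random.default_rng(0)
imgs = [rng.random((4, 5)) for _ in range(3)]
X_i = build_class_matrix(imgs)
print(X_i.shape)  # (20, 3)
```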

#### **2.1 Linear regression classification**

Let $\mathbf{u}_k \in R^{p \times q}$ be the *k*th channel of an unlabeled query image, which is formed as a column vector $\mathbf{y}_k \in R^{pq \times 1}$. If the query data vector $\mathbf{y}_k$ belongs to the *i*th class, the prediction with the linear combination of the *i*th class vector space can be written as

$$\mathbf{y}\_k = \mathbf{X}\_{i,k}\boldsymbol{\beta}\_{i,k}, \quad i = 1, 2, \dots, C,\tag{2}$$

where $\boldsymbol{\beta}_{i,k} \in R^{N \times 1}$ is the regression parameter vector of the *i*th class. The optimal regression parameter vector can be solved by least squares optimization and expressed in matrix operations as

$$\boldsymbol{\beta}\_{i,k} = \left(\mathbf{X}\_{i,k}^T \mathbf{X}\_{i,k}\right)^{-1} \mathbf{X}\_{i,k}^T \mathbf{y}\_k, \quad i = 1, 2, \dots, C. \tag{3}$$

With the optimized regression parameter vector, $\boldsymbol{\beta}_{i,k}$, the predicted response vector $\tilde{\mathbf{y}}_{i,k}$ of the *k*th channel for the *i*th class can be formed as

*Digital Image Processing Applications*

$$\tilde{\mathbf{y}}\_{i,k} = \mathbf{X}\_{i,k} \boldsymbol{\beta}\_{i,k}, \quad i = 1, 2, \dots, C. \tag{4}$$

For all *C* classes, we first compute the *k*th-channel predicted response vectors. To construct the recognition system, by combining Eqs. (3) and (4), we can store the $pq \times pq$ projection matrix of the *i*th identity as

$$\mathbf{H}\_{i,k} = \mathbf{X}\_{i,k} \left(\mathbf{X}\_{i,k}^T \mathbf{X}\_{i,k}\right)^{-1} \mathbf{X}\_{i,k}^T, \quad k = 1, 2, \dots, K. \tag{5}$$

The construction of the *i*th projection matrix can be treated as the training process of the *i*th identity. For the trained parameters of all *C* identities, we need to store $CKp^2q^2$ coefficients for recognition. The computation of the predicted response vector then becomes the projection process
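The training-side computation of Eq. (5) can be sketched as follows (a minimal illustration assuming NumPy; in practice a pseudo-inverse is safer when $X^T X$ is ill-conditioned):

```python
import numpy as np

def projection_matrix(X):
    """Class-specific hat matrix H = X (X^T X)^{-1} X^T of Eq. (5);
    multiplying H by a query vector projects it onto the column
    space spanned by the class's training vectors."""
    return X @ np.linalg.inv(X.T @ X) @ X.T

rng = np.random.default_rng(1)
X = rng.random((20, 3))            # pq = 20, N = 3
H = projection_matrix(X)
# A projector is idempotent and symmetric, and leaves X unchanged.
print(np.allclose(H @ H, H), np.allclose(H, H.T), np.allclose(H @ X, X))
```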

$$
\tilde{\mathbf{y}}\_{i,k} = \mathbf{H}\_{i,k} \mathbf{y}\_k, \quad k = \mathbf{1}, 2, \dots, K,\tag{6}
$$

which takes about $Kp^2q^2$ multiplications. Testing against all *C* identities, i.e., one identity recognition process, needs $CKp^2q^2$ multiplications. To determine the best class, the LRC method chooses the identity with the minimum prediction error over all channels. Thus, the classified identity $i^*$ is determined by minimizing the L2-norm distance between the predicted response vector and the query data vector as

$$i^\*\_{\text{LRC}} = \arg\min\_i \left( d\_i \right), \quad i = 1, 2, \dots, C, \tag{7}$$

where

$$d\_i = \sum\_{k=1}^{K} \left\| \tilde{\mathbf{y}}\_{i,k} - \mathbf{y}\_k \right\|\_2, \quad i = 1, 2, \dots, C,\tag{8}$$

for *K-*channel classifications.
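The full LRC decision rule of Eqs. (2)–(8) can be sketched for a single channel as follows (an illustration assuming NumPy; `np.linalg.lstsq` solves the least-squares step of Eq. (3) without forming the inverse explicitly):

```python
import numpy as np

def lrc_classify(y, class_matrices):
    """LRC decision rule of Eqs. (7)-(8) for a single channel:
    pick the class whose regression surface predicts y with the
    smallest L2 residual."""
    distances = []
    for X in class_matrices:
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # Eq. (3)
        y_hat = X @ beta                               # Eq. (4)
        distances.append(np.linalg.norm(y - y_hat))    # Eq. (8)
    return int(np.argmin(distances))

# Toy check: a query drawn from class 1's column space is classified as 1.
rng = np.random.default_rng(2)
Xs = [rng.random((20, 3)) for _ in range(4)]           # C = 4 classes
y = Xs[1] @ np.array([0.5, 0.2, 0.3])                  # lies on class 1's surface
print(lrc_classify(y, Xs))  # 1
```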

#### **2.2 Module linear regression classification (MLRC)**

For real applications, if a query sample is partially corrupted or occluded, the LRC algorithm cannot handle these situations well, since the discernible information is reduced. The module approach, which works on small and clean subfaces, can efficiently overcome this problem.

For the module LRC (MLRC), each training image $v_{i,j,k} \in R^{p \times q}$ is segmented into *M* non-overlapping partitions. **Figure 2** shows an example of segmentation of facial images with *M* = 16. As in the LRC approach, the *m*th module is formed as a column vector $\mathbf{x}^m_{i,j,k}$, $i = 1, 2, \ldots, C$, $j = 1, 2, \ldots, N$, $m = 1, 2, \ldots, M$. As in Subsection 2.1,

**Figure 2.** *Segmentation of face images with* M *= 16 modules.*

*Weighted Module Linear Regression Classifications for Partially-Occluded Face Recognition DOI: http://dx.doi.org/10.5772/intechopen.100621*

in the training phase, the column vectors of all training image related to each class are grouped accordingly. Hence, for the *i* th class, we could obtain *M* groups of the *k*th channel data spaces as

$$\mathbf{X}\_{i,k}^{m} = \begin{bmatrix} \mathbf{x}\_{i,1,k}^{m}, \mathbf{x}\_{i,2,k}^{m}, \dots, \mathbf{x}\_{i,N,k}^{m} \end{bmatrix} \in R^{(pq/M) \times N} \tag{9}$$

for $i = 1, 2, \ldots, C$, $k = 1, 2, \ldots, K$, and $m = 1, 2, \ldots, M$.

For the *k*th channel of the query image, $\mathbf{u}_k$, as with the training images, we first segment it into *M* modules and form them as column vectors $\mathbf{y}_k^m$ for $m = 1, 2, \ldots, M$. If the *m*th module of the query image, $\mathbf{y}_k^m$, is assumed to lie on the *m*th module of the *i*th class, the least squares prediction can be expressed in terms of the training images of the *m*th module of the *i*th class as

$$\mathbf{y}\_k^m = \mathbf{X}\_{i,k}^m \boldsymbol{\beta}\_{i,k}^m, \quad i = 1, 2, \dots, C, \quad m = 1, 2, \dots, M,\tag{10}$$

where the optimal regression parameter vector is given as

$$\boldsymbol{\beta}\_{i,k}^{m} = \left( \left( \mathbf{X}\_{i,k}^{m} \right)^{\mathrm{T}} \mathbf{X}\_{i,k}^{m} \right)^{-1} \left( \mathbf{X}\_{i,k}^{m} \right)^{\mathrm{T}} \mathbf{y}\_{k}^{m} = \mathbf{R}\_{i,k}^{m} \mathbf{y}\_{k}^{m}, \tag{11}$$

and the regression matrix is defined as

$$\mathbf{R}\_{i,k}^{m} = \left( \left( \mathbf{X}\_{i,k}^{m} \right)^{\mathrm{T}} \mathbf{X}\_{i,k}^{m} \right)^{-1} \left( \mathbf{X}\_{i,k}^{m} \right)^{\mathrm{T}}.\tag{12}$$

Similar to Eq. (5), the corresponding module response vectors $\tilde{\mathbf{y}}^m_{i,k}$ can be predicted as

$$\tilde{\mathbf{y}}\_{i,k}^{m} = \mathbf{X}\_{i,k}^{m} \boldsymbol{\beta}\_{i,k}^{m} = \mathbf{H}\_{i,k}^{m} \mathbf{y}\_{k}^{m}, \tag{13}$$

where $\mathbf{H}^m_{i,k}$ is the $(pq/M) \times (pq/M)$ projection matrix of the *m*th module of the *i*th identity:

$$\mathbf{H}\_{i,k}^{m} = \mathbf{X}\_{i,k}^{m} \left( \left( \mathbf{X}\_{i,k}^{m} \right)^{\mathrm{T}} \mathbf{X}\_{i,k}^{m} \right)^{-1} \left( \mathbf{X}\_{i,k}^{m} \right)^{\mathrm{T}}, \quad k = 1, 2, \ldots, K. \tag{14}$$

The projection matrices $\mathbf{H}^m_{i,k}$, for $m = 1, 2, \ldots, M$ and $k = 1, 2, \ldots, K$, can be treated as the training parameters of the *i*th identity. For the trained parameters of the *M* modules of all *C* identities, we need to store $CKp^2q^2/M$ coefficients for the recognition process. For testing all *C* identities, we can directly use Eq. (13) to compute the projection result. Thus, the MLRC recognition process, which computes all $\tilde{\mathbf{y}}^m_{i,k}$ for *M*-module, *K*-channel $p \times q$ facial images using Eq. (13), needs $CKp^2q^2/M$ multiplications. The computation of the MLRC is thus less than that of the LRC method, which needs $CKp^2q^2$ multiplications.

With the above module optimization, each module is processed by the LRC computation individually. Without knowledge of which modules are occluded, the minimum distance can be used as the distance metric, which implicitly assumes that the occluded modules will be automatically excluded. The MLRC with the min-min distance measure is expressed as

$$i^\*\_{\text{MLRC}} = \arg\min\_i \left( \min\_m d\_i^m \right), \tag{15}$$

where

$$d\_i^m = \sum\_{k=1}^K \left\| \tilde{\mathbf{y}}\_{i,k}^m - \mathbf{y}\_k^m \right\|\_2, \quad i = 1, 2, \dots, C; \quad m = 1, 2, \dots, M. \tag{16}$$

for *K*-channel classifications.
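The min-min rule of Eqs. (15)–(16) can be sketched as follows (single channel, assuming NumPy; the toy data simulates one occluded module, which the inner minimum over modules ignores):

```python
import numpy as np

def mlrc_classify(y_modules, class_module_matrices):
    """MLRC min-min rule of Eqs. (15)-(16): for each class keep only
    its best-fitting module distance, then pick the class with the
    smallest such distance. class_module_matrices[i][m] is X_i^m."""
    best = []
    for mats in class_module_matrices:
        d = []
        for y_m, X_m in zip(y_modules, mats):
            beta, *_ = np.linalg.lstsq(X_m, y_m, rcond=None)  # Eq. (11)
            d.append(np.linalg.norm(y_m - X_m @ beta))        # Eq. (16)
        best.append(min(d))                                   # min over m
    return int(np.argmin(best))                               # min over i

# Toy data: C = 3 classes, M = 4 modules, module dimension 10, N = 3.
rng = np.random.default_rng(4)
classes = [[rng.random((10, 3)) for _ in range(4)] for _ in range(3)]
coef = np.array([0.4, 0.3, 0.3])
y_mods = [X_m @ coef for X_m in classes[2]]   # query drawn from class 2
y_mods[0] = rng.random(10)                    # module 0 "occluded"
print(mlrc_classify(y_mods, classes))  # 2
```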

### **3. Weighted MLRC by detection of occluded modules**

The module linear regression classification (MLRC) shows better recognition performance under occlusion. However, the MLRC with the min-min distance measure essentially uses only the single best module for classification. Thus, we can further improve its classification performance if we properly gather more reliable non-occluded modules. As shown in **Figure 2**, we should fully utilize all top 8 clear modules (#1, #5, #9, #13, #2, #6, #10, #14) and moderately adopt the 4 partially-occluded modules (#3, #7, #11, #15) for face recognition.

Thus, in our earlier work, we suggested an occlusion detection measure that can be used to infer the occlusion tendency of a module. For the linear regression classification method, occlusion detection can be obtained from the regression parameters [20], under the assumption that the regression parameters of the clean modules have large variations between in-class and out-of-class faces. A module with small variations of its regression parameters has a high probability of being occluded. However, the regression parameter (RP) concept is only effective for dark, low-texture occluding objects. Moreover, if the face recognition algorithm is not a linear regression approach, extra computation is needed to obtain the regression parameters. In this chapter, we suggest a simple detection method to find the occlusion tendency of a module. Since each module has its particular location, the texture distribution of the module can easily be identified by its texture histogram.

#### **3.1 Occlusion detection by texture histogram**

To detect the occluded modules, instead of regression parameters, we can perform occlusion detection in the texture domain. First, the *D*-bin texture histogram of a grayscale image $I(x, y) \in R^{p \times q}$ is expressed by

$$\mathbf{h}\_{I(x,y)} = \begin{bmatrix} t\_1, \dots, t\_i, \dots, t\_D \end{bmatrix}^T \tag{17}$$

where each of the *D* bins has width $b = 256/D$, and the count $t_i$ of gray levels $I(x, y)$ falling into the *i*th bin is collected as

$$t\_i = \sum\_{y=1}^p \sum\_{x=1}^q q\_i(I(x, y)) \tag{18}$$

where $q_i(p)$ quantizes the gray level *p* into the *i*th bin as

$$q\_i(p) = \begin{cases} 1 & \text{if } (i-1) \cdot b \le p < i \cdot b \\\\ 0 & \text{otherwise} \end{cases}. \tag{19}$$
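Eqs. (17)–(19) amount to a fixed-width histogram over the 256 gray levels; a minimal sketch assuming NumPy and 8-bit input:

```python
import numpy as np

def texture_histogram(module, D=64):
    """D-bin texture histogram of Eqs. (17)-(19): each 8-bit gray level
    falls into the bin of width b = 256/D that contains it."""
    b = 256 // D
    idx = np.clip(module.astype(int) // b, 0, D - 1)
    return np.bincount(idx.ravel(), minlength=D)

# A flat 10 x 10 patch of gray level 128 puts all 100 counts in bin 32.
patch = np.full((10, 10), 128, dtype=np.uint8)
h = texture_histogram(patch)
print(h.sum(), h[32])  # 100 100
```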

To find the *D*-bin texture histograms of the modules of the *i*th class among the *N* training samples, we first obtain the expected module $\overline{\mathbf{v}}^m_{i,k}$ as

$$\overline{\mathbf{v}}\_{i,k}^{m} = \frac{1}{N} \sum\_{j=1}^{N} \mathbf{v}\_{i,j,k}^{m}. \tag{20}$$


where $\mathbf{v}^m_{i,j,k}$ is the *m*th module of the *k*th channel of $v_{i,j,k} \in R^{p \times q}$. Let the *k*th channel $\mathbf{u}_k$ of the query image $u \in R^{p \times q}$ be segmented into *M* partitions $\mathbf{u}_k^m$. To evaluate the occlusion tendency of the modules, the distance $\overline{\gamma}_i^m$ between the *D*-bin texture histograms of the *m*th module of the query image, $\mathbf{u}_k^m$, and the averaged module $\overline{\mathbf{v}}^m_{i,k}$, summed over all *K* channels, becomes

$$\overline{\gamma}\_i^m = \sum\_{k=1}^K \left\| \mathbf{h}\_{\mathbf{u}\_k^m} - \mathbf{h}\_{\overline{\mathbf{v}}\_{i,k}^m} \right\|\_2. \tag{21}$$

It is obvious that the modules of the query image with larger $\overline{\gamma}_i^m$ have a higher occlusion tendency, due to their dissimilar intensity distributions. On the AR face database, we tested 200 scarf images. With respect to 100 identities, **Figure 3** shows the 200×100 overlapped curves of $\overline{\gamma}_i^m$ for all 16 modules. The results show that $\overline{\gamma}_i^m$ in the occluded modules with *m* = 4, 8, 12, 16 is larger than in the remaining ones. Thus, the texture histogram difference can be used to determine the occlusion tendency.
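The per-module scoring of Eq. (21) can be sketched for a single channel as follows (an illustration assuming NumPy; `occlusion_tendency` is a hypothetical helper name, and the toy "occlusion" is a flat dark patch):

```python
import numpy as np

def occlusion_tendency(query_modules, mean_modules, D=64):
    """THD score of Eq. (21) per module (single channel): L2 distance
    between the query module's D-bin histogram and the class-averaged
    module's histogram. Larger scores suggest occlusion."""
    def hist(m):
        b = 256 // D
        idx = np.clip(m.astype(int) // b, 0, D - 1)
        return np.bincount(idx.ravel(), minlength=D)
    return np.array([np.linalg.norm(hist(q) - hist(v))
                     for q, v in zip(query_modules, mean_modules)])

# Module 1 is "occluded" by a flat dark patch; its THD stands out.
rng = np.random.default_rng(3)
means = [rng.integers(80, 180, (10, 10)) for _ in range(2)]
query = [means[0].copy(), np.zeros((10, 10), dtype=np.uint8)]
scores = occlusion_tendency(query, means)
print(scores[1] > scores[0])  # True
```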

Conceptually, the texture histogram difference (THD) of a module is a meaningful measure tied to the module's location on the face. For example, modules #5 and #6 are the forehead and left-center-eye modules, respectively. The average texture histogram of module #5 has a flat texture distribution of pure skin color, while that of module #6 is unevenly distributed with textures of the eyeball and nose regions. These two modules have distinct and characteristic texture histograms. If the modules are occluded by a mask, their texture histograms become very different from the original ones. For any face recognition algorithm, the classification criterion can generally be expressed by

$$i^\*\_{\text{face}} = \arg\min\_i \left( f\_i(\mathbf{y}) \right), \quad i = 1, 2, \dots, C, \tag{22}$$

where *fi*(*y*) denotes the *i* th class score function of the face recognition algorithm. If we divide the face region into *M* modules, the adaptive weighted module face recognition algorithm generally becomes

**Figure 3.** *Plots of $\overline{\gamma}_i^m$ for all 16 modules of 200×100 scarf query images in the AR face database.*

$$i^\*\_{\text{face-TH}} = \arg\min\_i \left( \sum\_{m=1}^M g\left(\overline{\gamma}\_i^m\right) f\_i\left(\mathbf{y}^m\right) \right), \quad i = 1, 2, \dots, C,\tag{23}$$

by using the averaged texture histogram difference stated in Eq. (21). In Eq. (23), $g(\overline{\gamma}_i^m)$ is a function of the THD parameters that should be properly designed to achieve the best face recognition performance. Without loss of generality, we apply the THD parameters to the adaptive weighted MLRC method in the following subsection.

#### **3.2 Weighted MLRC method by texture histogram**

Using the texture histogram difference (THD), we suggest a weighted MLRC method, called WMLRC-TH, for robust face recognition. To minimize the measured errors, occluded modules should clearly be given smaller weights in the WMLRC, so that we can remedy the drawback of the MLRC and achieve better recognition performance. As observed in **Figure 3**, the texture histogram distance $\overline{\gamma}_i^m$ is highly correlated with the occlusion tendency of the module; in other words, an occluded module has a larger $\overline{\gamma}_i^m$ than a normal one. Thus, we define the texture histogram (TH) weight for the *m*th module to be

$$w\_{\text{TH}}^{m} = g\left(\overline{\gamma}\_{i}^{m}\right) = \left(\overline{\gamma}\_{i}^{m}\right)^{-1}.\tag{24}$$

With Eq. (24), the TH weight is larger for a smaller texture histogram difference. Thus, $w^m_{TH}$ for an occluded module is smaller than for a normal one. As in the module LRC, the response vector $\tilde{\mathbf{y}}^m_i$ is predicted in terms of $\mathbf{X}^m_i$ as

$$\tilde{\mathbf{y}}\_i^m = \mathbf{X}\_i^m \boldsymbol{\beta}\_i^m. \tag{25}$$

By using the THD weight defined in Eq. (24), the WMLRC-TH decision with the weighted minimum rule can be expressed as

$$i^\*\_{\text{TH}} = \arg\min\_i \left( \sum\_m \left\| \tilde{\mathbf{y}}\_i^m - \mathbf{y}^m \right\|\_2 \cdot \frac{w\_{TH}^m}{\sum\_m w\_{TH}^m} \right). \tag{26}$$
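The weighting in Eqs. (24) and (26) can be sketched per class as follows (a minimal single-class illustration assuming NumPy; the arg-min over classes is applied outside, and `wmlrc_th_score` is a hypothetical helper name):

```python
import numpy as np

def wmlrc_th_score(distances, thd):
    """Weighted class score inside Eq. (26): module residuals are
    combined with normalized inverse-THD weights (Eq. (24)), so modules
    flagged as occluded (large THD) contribute less."""
    w = 1.0 / np.asarray(thd, dtype=float)     # Eq. (24): w = 1 / THD
    w /= w.sum()                               # normalization in Eq. (26)
    return float(np.dot(np.asarray(distances, dtype=float), w))

# Module 0 has a large residual and a large THD (likely occluded):
# the weighted score discounts it relative to a plain average.
score = wmlrc_th_score([5.0, 1.0, 1.0], [10.0, 1.0, 1.0])
print(score < np.mean([5.0, 1.0, 1.0]))  # True
```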

#### **4. Experimental results**

In our experiments, the recognition performance of different face recognition algorithms is used to validate the proposed methods. The AR and FRGC 2.0 face databases are used, with occlusions added synthetically. We compare the recognition performance of the proposed WMLRC methods to those of the PCA, LDA, LRC, MLRC, SRC, locality preserving projection (LPP) [34], and neighborhood preserving embedding (NPE) [35] methods. All experiments are carried out on a personal computer equipped with an Intel Core2 Q9400 CPU and 4 GB of RAM. The testing environment is Microsoft Visual Studio 2013 with OpenCV.

#### **4.1 Experiments on AR database**

The famous face recognition algorithms and recent robust face recognition methods [23, 24, 27] all report recognition performance on the AR database, which contains more than 100 people's color face images. For the experiments, the cropped facial images are normalized to 40 × 40 pixels. The first experiment tests all the algorithms on the AR database [32] under sunglasses, scarf, colored-block, black, white, and texture occlusions overlaid onto normal faces. The images also include several facial variations, such as illumination change, expression, and facial disguises. We choose a subset of 100 subjects with 50 men and 50 women. For each subject, 3 normal images are used as the training set, as shown in **Figure 4**, while 2 images with sunglasses and 2 images with a scarf are used as the testing set, as shown in **Figure 5**.

Before the experiments, we should first decide suitable settings for the proposed WMLRC-TH method. For the module design, the images are divided into 16 modules, i.e., *M* = 16. For the WMLRC-RP, the parameters *α*, *ε*, and *γ* are fixed to 2, 0.3, and 0.7, respectively. The number of bins in the texture histogram is *D* = 64 for the WMLRC-TH. For the MLRC, we further modify the min-min criterion stated in Eq. (15) into a min-min-4 criterion, the MLRC-4, which sums the distances of the best 4 modules as

$$i^\*\_{\text{MLRC-4}} = \arg\min\_i \left(\sum\_{m \in \mathcal{M}\_4(i)} d\_i^m\right),\tag{27}$$

where $\mathcal{M}_4(i)$ denotes the four modules with the smallest distances $d_i^m$ for class *i*.
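A minimal sketch of the MLRC-4 class score, assuming NumPy (`mlrc4_distance` is a hypothetical helper name):

```python
import numpy as np

def mlrc4_distance(module_distances):
    """MLRC-4 criterion of Eq. (27): score a class by the sum of its
    four smallest module distances, so the decision rests on the four
    best-matching (presumably non-occluded) modules instead of one."""
    return float(np.sort(np.asarray(module_distances))[:4].sum())

d = [9.0, 1.0, 2.0, 8.0, 3.0, 4.0, 7.0, 6.0]
print(mlrc4_distance(d))  # 10.0
```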

Using the original AR database, **Table 1** shows the comparison with standard face recognition algorithms, while **Table 2** lists the reported results of recent robust face recognition methods. From **Table 1**, the proposed WMLRC-TH methods, with over 95% accuracy, outperform the other methods. The LDA with SVM method [23] achieves a 91.5% recognition rate; the SRC with SDL method [24] achieves up to 92% on sunglasses plus scarf face images; and the topology preserving structure matching (TPSM) [27] achieves 91.7% recognition accuracy. From **Table 2**, the proposed WMLRC-TH outperforms the WMLRC-RP [20], LDA-SVM [23], SRC-SDL [24], and TPSM [27] methods for recognizing occluded face images on the AR database.

To further analyze the detailed performance of the algorithms under different occlusion conditions, synthesized occlusions with various occlusion levels are overlaid onto the normal faces, as shown in **Figures 6**–**9**. We still keep all the face

**Figure 4.**

*Normal faces as the training set on AR database.*

**Figure 5.** *Sunglasses and scarf occluded faces on AR database.*


#### **Table 1.**

*Recognition performances (%) achieved by different methods for sunglasses and scarf images on AR database.*


#### **Table 2.**

*Recognition performances (%) achieved by the recently-proposed robust face recognition algorithms on AR database.*

#### **Figure 6.**

*Added colored occlusion with occlusion ratios on AR database.*

#### **Figure 7.**

*Black partial occlusion with various occlusion ratios on AR database.*

recognition algorithms with the same settings; note that the synthesized occlusion images are used only for testing.

**Tables 3**–**6** show the recognition performance of all the face recognition methods for all synthesized occlusion cases, where WMLRC-TH1 and WMLRC-TH3 denote the WMLRC-TH performed on gray (*K* = 1) and RGB color (*K* = 3) images, respectively. The experimental results show that the proposed WMLRC-TH with gray and color images outperforms the other methods, achieving over


#### **Figure 8.**

*White partial occlusion with various occlusion ratios on AR database.*

#### **Figure 9.**

*Different occlusion (a) - (e) color wall patterns with 50% occlusion ratio on AR database.*


#### **Table 3.**

*Recognition performances (%) of different methods with varying occlusion ratios of dark partial occlusion on AR database.*

95% accuracy. The MLRC and MLRC-4 perform badly under dark occlusion because low pixel values produce a low prediction error, which leads to erroneous recognition. All results show that the proposed WMLRC-TH methods achieve the best recognition performance. The WMLRC-TH benefits slightly from RGB color information, since WMLRC-TH (*K* = 3) exhibits better recognition performance than WMLRC-TH (*K* = 1).

#### **4.2 Experiments on FRGC 2.0 database**

The second experiment tests all the methods on the FRGC2.0 database [33, 36] under gray texture block, dark, white, and texture occlusions overlapped



#### **Table 4.**

*Recognition performances (%) of different methods with varying occlusion ratios of colored block occlusion on AR database.*


#### **Table 5.**

*Recognition performances (%) of different methods with varying levels of white partial occlusion on AR database.*

onto normal faces. The FRGC2.0 database consists of more than 4000 people's front-view grayscale images, including neutral expressions and slight facial variations. We choose a subset of 100 subjects with 50 men and 50 women. For each subject, two neutral-expression images are used as the training set, as shown in **Figure 10**. In addition, synthesized occlusions with various occlusion levels overlaid onto the normal faces, as shown in **Figures 11**–**14**, are used for testing. As to the experimental settings, the facial images are also normalized to 40 × 40 pixels and divided into 16 modules.

**Tables 7**–**10** illustrate the recognition performances of all the methods in all test cases. On FRGC 2.0 database, the proposed WMLRC-TH methods also achieve



#### **Table 6.**

*Recognition performances (%) of different methods with different occlusion textures of 50% occlusion on AR database.*


#### **Figure 10.**

*Normal faces as the training set on FRGC2.0 database.*

**Figure 11.**

*Added texture block with various occlusion ratios on FRGC 2.0 database.*

**Figure 12.**

*Dark partial occlusion with various occlusion ratios on FRGC 2.0 database.*

better performance than the other approaches. From **Table 8**, the MLRC and MLRC-4 also perform badly under dark occlusion. In most cases, the WMLRC-TH performs better than the WMLRC-RP. For flat and bright occlusions, as shown in **Table 9**, the WMLRC-RP achieves better performance than the WMLRC-TH. Since the

#### **Figure 13.**

*White partial occlusion with various occlusion ratios on FRGC 2.0 database.*

#### **Figure 14.**

*Different occlusion (a) - (e) gray wall patterns with 50% occlusion on FRGC 2.0 database.*


#### **Table 7.**

*Recognition performances (%) of different methods with varying occlusion ratios of random block occlusion on FRGC 2.0 database.*

sunglasses are the most common occlusion in daily life, the WMLRC-RP is still useful in real applications involving dark occlusions.

From the above experiments, the proposed WMLRC-TH method achieves the most accurate recognition. For light-colored occlusions, the WMLRC-RP performs better than the WMLRC-TH; for the other cases, the WMLRC-TH achieves the best recognition performance. The simulations show that the texture histogram feature can effectively detect occluded modules. Besides, we observe that the WMLRC-TH in the RGB domain performs better than in the grayscale domain, because grayscale images discard useful color information.

#### **4.3 Android based system implementation**

Recently, the Android operating system has successfully supported smartphones and tablets thanks to its computing power, storage capacity, and handy



#### **Table 8.**

*Recognition performances (%) of different methods with varying occlusion ratios of dark occlusion on FRGC 2.0 database.*


#### **Table 9.**

*Recognition performances (%) of different methods with varying occlusion ratios of white partial occlusion on FRGC 2.0 database.*

functions for image capture and image processing. Thus, we realize the proposed WMLRC-TH method as a robust face recognition system on Android mobile devices, since it performs robustly in general situations, especially on color images.

In general, the face recognition system is divided into a training phase and a testing phase. As shown in **Figure 15**, after capturing face images of a new identity, the user first selects the training phase. The user is asked to enter the name of the identity, and the system then stores the face images with that name in the database. In the testing phase, the user captures a face image and then identifies it. The testing phase consists of three steps; **Figure 16** shows its flow chart. The first step is face detection, which rapidly captures the human face when the person is located within a proper distance from the mobile phone camera. The second step is the pre-processing stage, which removes noisy

#### *Digital Image Processing Applications*


#### **Table 10.**

*Recognition performances (%) of different methods with different occlusion texture of 50% occlusion on FRGC 2.0 database.*

#### **Figure 15.**

*Proposed Android face recognition system.*

#### **Figure 16.**

*Flow chart of the testing phase on Android phones.*


#### **Figure 17.**

*User interfaces for registration and face recognition. (a) Registration and setup database (b) recognition and show name.*

pixels, such as the background and human hair, from the face images. The third step is face recognition, which recognizes the face according to the previously collected face database.
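The noise-removal step above can be sketched with an elliptical mask that zeroes out corner pixels (background and hair). The NumPy illustration below is a minimal sketch, not the system's actual implementation: the 32×32 output size, the nearest-neighbour resize, and the mask geometry are assumptions, and the input is assumed to be a face region already cropped by the detector.

```python
import numpy as np

def preprocess_face(face, size=(32, 32)):
    """Pre-processing sketch: resize the cropped face, then zero out
    corner pixels (background/hair) with an elliptical mask."""
    h, w = face.shape
    # Nearest-neighbour resize using index arithmetic (stdlib-free resize)
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    small = face[rows][:, cols]
    # Ellipse inscribed in the output rectangle: keep inside, zero outside
    yy, xx = np.mgrid[0:size[0], 0:size[1]]
    cy, cx = (size[0] - 1) / 2, (size[1] - 1) / 2
    ellipse = ((yy - cy) / cy) ** 2 + ((xx - cx) / cx) ** 2 <= 1.0
    return np.where(ellipse, small, 0)

face = np.full((64, 48), 100, dtype=np.uint8)  # a dummy cropped face
out = preprocess_face(face)
print(out.shape, out[0, 0], out[16, 16])  # → (32, 32) 0 100
```

The corners (outside the ellipse) are forced to zero while the central face region is preserved, so the later classification is not disturbed by background or hair pixels.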

Considering the limited computation resources of mobile phones, we surveyed several face detection methods [37–41]. Trading off accuracy against computation, we finally selected the LBP feature [37] for the face detector, where the LBP operator is given as:

$$LBP\left(x_c, y_c\right) = \sum\_{p=0}^{P-1} 2^p\, s\left(i_p - i_c\right), \quad s(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases} \tag{28}$$

where $i_c$ is the intensity of the central pixel $(x_c, y_c)$ in the local image, $i_p$ is the intensity of the *p*-th neighbor pixel, and $P$ is the total number of neighbor pixels. After face detection, the pre-processing stage performs cropping, grayscale transform, elliptical masking, and resizing. Note that face detection and pre-processing should be performed identically in both the training and testing phases. Finally, the face image vector is classified by the proposed WMLRC-TH algorithm. **Figure 17** shows the user interfaces for face registration and face recognition of the proposed system.
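The LBP operator of Eq. (28) can be checked with a few lines of code. The sketch below is an illustration only, not the detector's actual implementation; in particular, the clockwise neighbour ordering starting at the top-left pixel is an assumption (any fixed ordering yields a valid LBP code).

```python
import numpy as np

def lbp_pixel(img, xc, yc):
    """LBP code of Eq. (28): compare the 8 neighbours of (xc, yc)
    with the centre intensity i_c and pack the sign bits into a byte."""
    ic = int(img[yc, xc])
    # Clockwise 8-neighbourhood starting at the top-left pixel (assumed order)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for p, (dy, dx) in enumerate(offsets):
        ip = int(img[yc + dy, xc + dx])
        code += (1 << p) * (1 if ip - ic >= 0 else 0)  # 2^p * s(i_p - i_c)
    return code

patch = np.array([[10, 20, 30],
                  [40, 50, 60],
                  [70, 80, 90]], dtype=np.uint8)
print(lbp_pixel(patch, 1, 1))  # → 120
```

Here only the four neighbours not smaller than the centre value 50 (bits 3–6) contribute, giving $8 + 16 + 32 + 64 = 120$.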

The proposed Android face recognition system is mainly developed with the OpenCV SDK and the Android NDK (Native Development Kit). The Android NDK is used to overcome the limitations of Java, such as memory management and performance, by programming directly against the Android native interface to support native development in C/C++.

#### **5. Conclusion**

In this chapter, we first reviewed the linear regression classification algorithm as the basis of face recognition. If we divide the face region into several subfaces, called modules, any face recognition algorithm becomes a module face recognition algorithm, which avoids the serious degradation of recognition performance and helps solve the occlusion problem. We proposed the texture histogram difference of the module to detect its occlusion tendency in the input face image. The concept of the texture histogram difference can be used with any face recognition algorithm that adopts the module design. Using the texture histogram (TH) concept, the weighted module linear regression classification (WMLRC-TH) method for partially-occluded face recognition is finally proposed. The proposed WMLRC-TH method with adaptive TH weights can effectively overcome the shortcomings of the original LRC and MLRC algorithms. The experimental results show that the proposed WMLRC-TH method performs better than the existing linear regression classification methods and contemporary approaches such as SRC under various

occluded situations. Since the WMLRC-TH method requires only a small computation cost in both training and testing, we implemented the proposed method, following a face detector with LBP features, on smart phones. Even when people wear masks, we can easily train the system and successfully recognize the identity with only a smart phone.
