**4.2. Training and classification using extreme learning machine (ELM)**

In the image classification stage, we employ the Extreme Learning Machine (ELM) [13, 14] to train and classify the data. ELM is a supervised learning algorithm for single-hidden-layer feed-forward networks (SLFNs). The main idea of ELM is to fix the number of hidden-layer neurons, assign the input weights and hidden-layer biases randomly, and then compute the output-layer weights directly by the least-squares method. The entire learning process is completed in one pass, without iteration, so its learning speed is very fast. Based on extensive experimental experience, we set the number of hidden-layer nodes to 2000.
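As a rough illustration of this training procedure, here is a minimal NumPy sketch of an ELM. The tanh activation, Gaussian initialization, and the toy data and small hidden-layer size are our assumptions for the example, not the chapter's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, T, n_hidden=2000):
    """Train a single-hidden-layer ELM: random input weights and biases,
    output weights solved in closed form by least squares."""
    n_features = X.shape[1]
    W = rng.normal(size=(n_features, n_hidden))  # random input weights (never trained)
    b = rng.normal(size=n_hidden)                # random hidden-layer biases
    H = np.tanh(X @ W + b)                       # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                 # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: 3-class one-hot targets on synthetic data (small sizes for illustration).
X = rng.normal(size=(30, 10))
y = rng.integers(0, 3, size=30)
T = np.eye(3)[y]
W, b, beta = elm_train(X, T, n_hidden=50)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
```

Because the hidden layer is random and only `beta` is solved for, there is no iterative weight update, which is the source of ELM's speed.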

#### **4.3. The experimental results**

#### *4.3.1. Experimental results on AR face database*

In this section, we compare the LCGS algorithm with the traditional LGS, SLGS, and MOW-SLGS algorithms in terms of recognition rate on the AR face database. One hundred people in the database, half men and half women, are selected as our experimental data. The training set consists of unoccluded face images, seven images per person, for a total of 700 images. The test set is divided into three parts. The occlusion in test set 1 is a scarf, three images per person, for a total of 300 images. The occlusion in test set 2 is sunglasses, three images per person, for a total of 300 images. The occlusion in test set 3 is a mixture of scarves and sunglasses, with six images each, for a total of 300 images. The recognition rates are shown in **Table 1**.

**Table 1.** Recognition rates of different algorithms on the AR database.

From **Table 1**, we can see that the recognition rate achieved by the LCGS algorithm is higher than that of the other algorithms on all three test sets. In particular, when the occlusion in the test set is a scarf, the recognition rates of the LGS, SLGS, and MOW-SLGS algorithms are 83.67, 88.47, and 89.67%, respectively, while the recognition rate of the LCGS algorithm is 91.27%, which demonstrates the advantage of our proposed method on images occluded by a scarf. We also find that the recognition rate on test set 1 is the lowest among the three test sets. This is because the occlusion caused by the scarf accounts for almost 20% of the image, which has a great effect on the feature extraction of the whole image.

**Figure 6.** Examples of AR face database.

62 Machine Learning and Biometrics

#### *4.3.2. Experimental results on ORL face database*

**Figure 7.** Examples of ORL face database.

**4.1. Dimensionality reduction using principal component analysis (PCA)**

The LCGS algorithm is used to extract the features of a face image, and the dimension of the resulting feature matrix is usually very large, which makes it difficult for the classifier to train and test on directly. Therefore, we adopt the widely used Principal Component Analysis (PCA) [11, 12] to reduce the dimension after feature extraction. For the implementation of PCA, we set the principal component contribution rate to 0.95.
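A minimal NumPy sketch of PCA with a 0.95 contribution-rate (cumulative explained-variance) threshold, computed via the SVD of the centered data; the data matrix here is synthetic and only the 0.95 threshold comes from the text:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in for a feature matrix: 100 samples, 20 features
# with decaying variance so a few components dominate.
X = rng.normal(size=(100, 20)) * np.linspace(10, 1, 20)

Xc = X - X.mean(axis=0)                          # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
var_ratio = S**2 / np.sum(S**2)                  # per-component contribution rate
# Smallest k whose cumulative contribution rate reaches 0.95.
k = int(np.searchsorted(np.cumsum(var_ratio), 0.95)) + 1
X_reduced = Xc @ Vt[:k].T                        # project onto the first k components
```

The same behavior is available in scikit-learn via `PCA(n_components=0.95)`, which also interprets a fractional `n_components` as a cumulative explained-variance target.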

We performed a simulated occlusion experiment on the ORL face database, comparing our proposed algorithm, LCGS, with LGS, SLGS, and MOW-SLGS. We used a baboon picture to randomly occlude the original face images, setting the occlusion area to 10%, 20%, 30%, 40%, and 50% of the original image. We set up three training sets. The first group selects six images per person, a total of 240 images. The second group selects seven images per person, a total of 280 images. The third group selects eight images per person, a total of 320 images. Correspondingly, the test set is also divided into three groups. Test set 1 contains four images per person, a total of 160 images. Test set 2 contains three images per person, a total of 120 images. Test set 3 contains two images per person, a total of 80 images. The results are shown in **Table 2**, **Figure 8**, and **Table 3**.
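The occlusion protocol can be sketched as follows: paste a square patch from an occluder image so that it covers a given fraction of the face image. The square shape, the random placement rule, and the noise stand-in for the baboon picture are our assumptions; the 112 × 92 size is the standard ORL image size:

```python
import numpy as np

rng = np.random.default_rng(2)

def occlude(face, occluder, area_ratio):
    """Cover `area_ratio` of `face` with a square block cut from `occluder`."""
    h, w = face.shape
    side = int(round(np.sqrt(area_ratio * h * w)))  # square patch with the target area
    top = rng.integers(0, h - side + 1)             # random patch position
    left = rng.integers(0, w - side + 1)
    out = face.copy()
    out[top:top + side, left:left + side] = occluder[:side, :side]
    return out

face = rng.integers(0, 256, size=(112, 92))    # ORL images are 112 x 92 pixels
baboon = rng.integers(0, 256, size=(112, 112)) # stand-in for the baboon picture
occluded = occlude(face, baboon, 0.30)         # 30% occlusion, one of the tested levels
```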


**Table 2.** Recognition rates of different algorithms on test set 1 of the ORL database.

**Figure 8.** Recognition rates of different algorithms on test set 2 of the ORL database.


**Table 3.** Recognition rates of different algorithms on test set 3 of the ORL database.

From **Table 2**, **Figure 8**, and **Table 3**, we can clearly see that the recognition rate of the LCGS algorithm is higher than that of the conventional algorithms. In test set 1, when the occlusion area is 30%, the recognition rates of the LGS, SLGS, and MOW-SLGS algorithms are 45.87%, 53.00%, and 55.87%, respectively, while the recognition rate of the LCGS algorithm is 60.19%.

#### *4.3.3. 10-fold cross-validation*

To further verify the accuracy of the algorithm, we also conducted 10-fold cross-validation. We chose the ORL database and randomly masked only the data used for testing. The comparison algorithms are LGS, SLGS, and MOW-SLGS. The results are shown in **Table 4**.

| Occlusion area (%) | LGS | SLGS | MOW-SLGS | LCGS |
|---|---|---|---|---|
| 10 | 0.6900 | 0.7150 | 0.7875 | 0.7825 |
| 20 | 0.6850 | 0.6900 | 0.7475 | 0.7575 |
| 30 | 0.5650 | 0.6400 | 0.6875 | 0.6925 |
| 40 | 0.4900 | 0.5300 | 0.5525 | 0.6175 |
| 50 | 0.3625 | 0.4675 | 0.4275 | 0.5200 |

**Table 4.** 10-fold cross-validation experiment results for different algorithms.

From the experimental results in **Table 4**, we can see that the LCGS algorithm performs better than the other algorithms. When the occlusion area is 40%, the recognition rates of the LGS, SLGS, and MOW-SLGS algorithms are 49.00%, 53.00%, and 55.25%, respectively, while the recognition rate of the LCGS algorithm is 61.75%.

#### *4.3.4. Comparison of the processing time*

During the experiments, we compared the processing time required for the same image by the LGS, SLGS, MOW-SLGS, and our proposed LCGS algorithms. The results are shown in **Table 5**.

| Algorithm | LGS | SLGS | MOW-SLGS | LCGS |
|---|---|---|---|---|
| Processing time (seconds) | 0.2886 | 0.2895 | 1.0093 | 0.4493 |

**Table 5.** The processing time of different algorithms.

From **Table 5**, we can see that the processing time required for one image by the LCGS algorithm is 0.4493 seconds. Although this is higher than the time required by LGS and SLGS, it is significantly less than that of MOW-SLGS. The reason is that the MOW-SLGS algorithm calculates feature values in the four directions around the target pixel, on both sides of the pixel in each direction, so a total of eight sets of feature values are calculated. The proposed LCGS algorithm calculates only four sets of feature values while making full use of the surrounding pixels. Therefore, the processing time of MOW-SLGS is about twice that of LCGS.

**5. Conclusion**

In this chapter, we proposed the LCGS algorithm and applied it to face recognition with occlusion. LCGS makes full use of the texture features of the surrounding pixels in the 3 × 3 neighborhood, making up for the shortcoming of the LGS and MOW-SLGS algorithms, which do not use the information around the target pixel sufficiently. The characteristics of the target pixel encoded by the LCGS algorithm are easier to recognize, and it therefore improves the recognition rate under occlusion. Through experiments on the AR and ORL databases, we demonstrated that the LCGS algorithm is superior to the traditional algorithms in the recognition rate of occluded face images, while its time consumption is lower than that of the MOW-SLGS algorithm.

**Acknowledgements**

This work was supported by the National Natural Science Foundation of China under Grant No. 61502338 and No. 61502339, the 2015 key projects of Tianjin science and technology

Face Recognition with Facial Occlusion Based on Local Cycle Graph Structure Operator
http://dx.doi.org/10.5772/intechopen.78597
65
