**4.1.2 Occlusion detection using saliency detection**

To find the occlusion regions on the face, we adopt a saliency detection method. First, the original face image is converted to gray level and normalized to *I*(*x*,*y*) using histogram equalization. Then *I*(*x*,*y*) is reconstructed to *R*(*x*,*y*) using RPCA. The residual image *D*(*x*,*y*) between the reconstructed image *R*(*x*,*y*) and *I*(*x*,*y*) is obtained by:

$$D(x, y) = R(x, y) - I(x, y) \tag{3}$$

Then the residual image *D*(*x*,*y*) is passed to a saliency detector to find local regions of high complexity, which are hypothesized to be occlusions on the face. The measure of local saliency is defined as:

$$H(D, R_x) = -\sum_i P_{D,R_x}(d_i) \log_2 P_{D,R_x}(d_i) \tag{4}$$

where $P_{D,R_x}(d_i)$ is the probability of the descriptor (here, the difference image) *D* taking the value $d_i$ in the local region $R_x$. We apply saliency detection to the residual image over a wide range of scales and set a threshold on $H(D, R_x)$. The region with the largest $H(D, R_x)$ value above the threshold is taken as the occlusion region. If all regions have $H(D, R_x)$ below the threshold, we presume that no occlusion exists. Note that we select only one occlusion region per saliency-detection pass, even if multiple regions have saliency values above the threshold.

**4.1.3 Occlusion region reconstruction**

Detailed information is most important for facial expression recognition. To avoid introducing erroneous information into non-occluded regions through face reconstruction, we reconstruct only the occlusion region rather than the whole face. To obtain the new face image *P*(*x*,*y*), pixel values in the detected occlusion region are replaced by those of the RPCA-reconstructed face. Thus the erroneous information in the occlusion region is shielded, while the other regions of the face remain unchanged. To further decrease the impact of occlusion on facial expression recognition, we perform occlusion region reconstruction for several iterations, until the difference between the reconstructed faces of two consecutive iterations falls below a threshold. The new face image $P^t(x, y)$ in iteration *t* can be obtained by:

$$R^t(x, y) = \begin{cases} \mathrm{RPCA}\big(I(x, y)\big), & t = 1 \\ \mathrm{RPCA}\big(P^{t-1}(x, y)\big), & t > 1 \end{cases} \tag{5}$$

where RPCA designates the RPCA procedure and *t* is the iteration index.

$$P^t(x, y) = \begin{cases} R^t(x, y), & (x, y) \in R_{\mathrm{occlusion}} \\ I(x, y), & \text{otherwise} \end{cases} \tag{6}$$

where *I*(*x*,*y*) is the normalized image, $R^t(x, y)$ is the reconstructed image using RPCA in iteration *t*, and $R_{\mathrm{occlusion}}$ denotes the occlusion region.

**4.1.4 AdaBoost classification**

We employ Haar-like features for feature extraction and implement multiple one-against-rest two-class AdaBoost classifiers for robust facial expression recognition. In this algorithm, multiple two-class classifiers are constructed from weak features selected to discriminate one class from the rest. This avoids a difficulty of the traditional multi-class AdaBoost algorithm, in which weak features that discriminate among several classes at once are hard to select. The proposed algorithms were trained and tested on our Facial Expression Database. This database consists of 57 university students aged 19 to 27 and includes videos with hand and glasses occlusion while the subjects display various facial expressions. We also randomly add occlusions to the faces to generate occluded test images. The experimental results are listed in Table 1.
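The one-against-rest scheme just described can be sketched with scikit-learn's stock `AdaBoostClassifier` standing in for the authors' implementation; the random features below are placeholders for Haar-like responses, not the paper's actual pipeline:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def train_one_vs_rest(features, labels, n_classes, n_weak=50):
    """Train one two-class AdaBoost classifier per expression class.

    Classifier c separates class c (label +1) from all others (-1),
    so each booster only needs weak features for a binary split."""
    models = []
    for c in range(n_classes):
        y = np.where(labels == c, 1, -1)
        clf = AdaBoostClassifier(n_estimators=n_weak)
        clf.fit(features, y)
        models.append(clf)
    return models

def predict_one_vs_rest(models, features):
    """Assign each sample to the class whose binary classifier is most confident."""
    # decision_function gives a signed confidence for the +1 (one-vs-rest) class
    scores = np.stack([m.decision_function(features) for m in models], axis=1)
    return scores.argmax(axis=1)

# Toy usage: random "features" stand in for Haar-like responses
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))
y = np.arange(60) % 3          # three balanced dummy expression classes
models = train_one_vs_rest(X, y, n_classes=3)
pred = predict_one_vs_rest(models, X)
```

Taking the arg-max over the binary decision scores is one common way to resolve ties among the per-class classifiers; the text does not specify how the authors combine them.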


Table 1. Facial Recognition Results.
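For concreteness, the entropy-based local saliency measure used for occlusion detection (equation 4) can be sketched as a sliding-window scan over the residual image; the window size, stride, bin count, and threshold below are illustrative assumptions, not values from the text:

```python
import numpy as np

def local_entropy(D, top, left, size, n_bins=16):
    """Entropy of residual values in a square window R_x (eq. 4)."""
    patch = D[top:top + size, left:left + size]
    hist, _ = np.histogram(patch, bins=n_bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                          # drop empty bins to avoid log2(0)
    return float(-(p * np.log2(p)).sum())

def detect_occlusion(D, size=8, stride=4, threshold=2.5):
    """Scan the residual image and return the single most salient
    window above the threshold as (top, left, size), or None."""
    best, best_h = None, threshold
    h, w = D.shape
    for top in range(0, h - size + 1, stride):
        for left in range(0, w - size + 1, stride):
            e = local_entropy(D, top, left, size)
            if e > best_h:
                best, best_h = (top, left, size), e
    return best

# Toy residual: flat background (zero entropy) with one noisy patch
rng = np.random.default_rng(2)
D = np.full((32, 32), 0.05)
D[8:20, 8:20] = rng.random((12, 12))
region = detect_occlusion(D)
```

Flat windows collapse into a single histogram bin and score zero entropy, so only the noisy (high-complexity) patch exceeds the threshold, mirroring the paper's "one region per pass" rule.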

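Equations (5) and (6) amount to a short fixed-point loop. The sketch below uses a plain low-rank PCA projection as a stand-in for RPCA, with a synthetic face space and occlusion mask; all of it is illustrative rather than the authors' actual procedure:

```python
import numpy as np

def pca_reconstruct(img, basis, mean):
    """Project an image onto a low-rank basis and back (stand-in for RPCA)."""
    coeff = basis.T @ (img.ravel() - mean)
    return (basis @ coeff + mean).reshape(img.shape)

def reconstruct_occlusion(I, mask, basis, mean, eps=1e-3, max_iter=20):
    """Iterate eqs (5)-(6): replace only the occluded pixels (mask==True)
    with the low-rank reconstruction until successive iterates converge."""
    P = I.copy()
    for _ in range(max_iter):
        R = pca_reconstruct(P, basis, mean)   # eq (5)
        P_new = np.where(mask, R, I)          # eq (6)
        if np.abs(P_new - P).max() < eps:     # stopping criterion
            return P_new
        P = P_new
    return P

# Toy usage: a rank-1 "face space" learned from five flat 4x4 images
rng = np.random.default_rng(1)
faces = np.outer(rng.random(5), np.ones(16)).reshape(5, 4, 4)
flat = faces.reshape(5, -1)
mean = flat.mean(axis=0)
_, _, vt = np.linalg.svd(flat - mean, full_matrices=False)
basis = vt[:1].T                              # (16, 1) leading component
I = faces[0].copy()
mask = np.zeros((4, 4), bool)
mask[1:3, 1:3] = True
I[mask] = 1.0                                 # synthetic occlusion
P = reconstruct_occlusion(I, mask, basis, mean)
```

Because equation (6) pins the non-occluded pixels to *I*(*x*,*y*) at every iteration, only the masked region evolves, which is what lets the loop converge to a stable composite.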