The objective function with this choice of $S_{ij}$ incurs a heavy penalty in feature space if neighboring points $\phi(x_i)$ and $\phi(x_j)$ are mapped far apart, i.e., if $(y_i - y_j)$ is large. Suppose a set of data and a weighted graph $G = (V, E)$ is constructed from the data points, where points that are close to each other are linked by an edge. A map of the graph to a line is chosen to minimize the objective function of KLPP in Equation (24) under the appropriate constraints. Suppose $a$ represents the transformation vector, whereas the $i$th column vector of $X$ is symbolized by $x_i$. By a few algebraic steps, the objective function in feature space can be reduced as follows [Mauridhi et al., 2010]:

$$
\begin{aligned}
\frac{1}{2}\sum_{ij}\big(a^{T}\phi(x_i)-a^{T}\phi(x_j)\big)^{2}S_{ij}
&=\frac{1}{2}\sum_{ij}\big(a^{T}\phi(x_i)\phi(x_i)^{T}a-2\,a^{T}\phi(x_i)\phi(x_j)^{T}a+a^{T}\phi(x_j)\phi(x_j)^{T}a\big)S_{ij}\\
&=\sum_{i}a^{T}\phi(x_i)D_{ii}\,\phi(x_i)^{T}a-\sum_{ij}a^{T}\phi(x_i)S_{ij}\,\phi(x_j)^{T}a\\
&=a^{T}\phi(X)(D-S)\phi(X)^{T}a=a^{T}\phi(X)\,L\,\phi(X)^{T}a
\end{aligned}
\tag{26}
$$

In this case, $\phi(X)=[\phi(x_1),\phi(x_2),\ldots,\phi(x_M)]$, $D_{ii}=\sum_{j}S_{ij}$, and $L=D-S$ represent the *Laplacian* matrices in feature space, known as *Laplacianlips* when implemented in smiling stage classification. The minimum of the objective function in feature space is given by the minimum eigenvalue solution of the generalized eigenvalue problem in feature space:

$$
\phi(X)\,L\,\phi(X)^{T}a=\lambda\,\phi(X)\,D\,\phi(X)^{T}a
\tag{27}
$$

Eigenvalues and eigenvectors in feature space can be calculated using Equation (27). The most to the least dominant features are obtained by sorting the eigenvalues in decreasing order, followed by sorting the corresponding eigenvectors in feature space accordingly.
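As a computational illustration, the sketch below solves Equation (27) with the standard kernel trick: expanding the transformation vector as $a=\phi(X)\alpha$ and left-multiplying by $\phi(X)^{T}$ turns Equation (27) into the computable generalized eigenproblem $KLK\alpha=\lambda KDK\alpha$, where $K$ is the kernel Gram matrix. The RBF kernel, the binary k-NN affinity `S`, and all names (`klpp_features`, `gamma`, `knn`) are illustrative assumptions, not the chapter's exact choices.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def klpp_features(X, gamma=1e-3, knn=5, n_dims=60):
    """Sketch of KLPP: solve K L K alpha = lambda K D K alpha.

    X: (M, p) matrix whose rows are vectorized lip images.
    Assumes an RBF kernel and a symmetrized binary k-NN affinity S;
    the chapter may define the kernel and S differently.
    """
    M = X.shape[0]
    sq = cdist(X, X, "sqeuclidean")
    K = np.exp(-gamma * sq)                      # kernel Gram matrix

    # Binary k-NN adjacency as the weight matrix S (skip self at column 0).
    idx = np.argsort(sq, axis=1)[:, 1:knn + 1]
    S = np.zeros((M, M))
    rows = np.repeat(np.arange(M), knn)
    S[rows, idx.ravel()] = 1.0
    S = np.maximum(S, S.T)                       # symmetrize

    D = np.diag(S.sum(axis=1))                   # D_ii = sum_j S_ij
    L = D - S                                    # graph Laplacian, L = D - S

    # Generalized eigenproblem; a small ridge keeps K D K positive definite.
    A = K @ L @ K
    B = K @ D @ K + 1e-6 * np.eye(M)
    w, V = eigh(A, B)

    # Per the chapter's "maximum value selection", sort decreasingly.
    order = np.argsort(w)[::-1]
    alphas = V[:, order[:n_dims]]
    return K @ alphas                            # projected features
```

With a linear kernel ($K = X^{T}X$) this reduces to ordinary LPP, which is one way to read KLPP as the kernel extension of the linear method.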

**7. Experimental results of smile stage classification based on the maximum value selection of kernel linear preserving projection**

To evaluate **the Maximum Value Selection of Kernel Linear Preserving Projection Method**, an experiment was conducted on 30 persons. Each person contributed 3 patterns: smiling patterns I, III, and IV; smiling pattern II was not used. The original image size was 640×640 pixels, and every face image was resized to 50×50 pixels (Figure 14). Before the feature extraction process, each face image was manually cropped to the oral area using the spatial coordinates [5.90816 34.0714 39.3877 15.1020] [Mauridhi et al., 2010]. This was done to simplify the calculation process; the cropped data were used for both the training and testing sets, and the cropping reduced the face data size to 40×16 pixels, as seen in Figure 15.

Fig. 14. Original Sample of Smiling Pattern
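For concreteness, a minimal preprocessing sketch is given below. It assumes the four crop values are a MATLAB-style `imcrop` rectangle `[xmin ymin width height]`; with rounding and inclusive endpoints this yields exactly the 40×16-pixel oral region described above. The grayscale conversion and the name `preprocess` are illustrative assumptions, not the chapter's stated pipeline.

```python
import numpy as np
from PIL import Image

# Oral-area crop rectangle from the text, read as a MATLAB-style
# imcrop rect [xmin, ymin, width, height] (an assumption).
RECT = (5.90816, 34.0714, 39.3877, 15.1020)

def preprocess(path):
    # Resize the 640x640 face image to 50x50 (grayscale assumed).
    img = Image.open(path).convert("L").resize((50, 50))
    x, y, w, h = RECT
    left, top = round(x), round(y)
    # Inclusive endpoints: round(39.3877)+1 = 40, round(15.1020)+1 = 16.
    box = (left, top, left + round(w) + 1, top + round(h) + 1)
    lips = img.crop(box)                      # 40x16 oral region
    return np.asarray(lips, float).ravel()    # 640-dimensional vector
```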

 


Fig. 15. Cropping Result Sample of Smiling Pattern

Experiments were run under 3 scenarios. In the first scenario, the first 2/3 of the data (20 of 30 persons) became the training set and the remaining 10 persons were used as the testing set. In the second scenario, the first 10 persons (10 of 30) were used as the testing set and the last 20 persons (20 of 30) as the training set. In the last scenario, the first and last 10 persons (20 of 30) formed the training set and the middle 10 persons (10 of 30) the testing set. In other words, the testing set was rotated without overlap, so every person served as testing data exactly once. Since each person contributes three smiling patterns, the training and testing sets contained 20×3 = 60 and 10×3 = 30 images, respectively. In this experiment, 60 dimensions were used. Similarity was measured with angular separation and Canberra; Equations (17) and (18) define these similarity measures [Mauridhi et al., 2010], and Equation (19) gives the classification rate percentage. A sketch of both measures follows below. The classification results for the 1st, 2nd, and 3rd scenarios can be seen in Figures 16, 17, and 18, respectively [Mauridhi et al., 2010].
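Equations (17)–(19) are not reproduced in this section, so the sketch below uses the textbook forms of the two measures — angular separation as the cosine between feature vectors and the Canberra distance as a normalized absolute difference — together with a nearest-neighbor decision rule; the decision rule and all function names are assumptions on top of the text.

```python
import numpy as np

# Scenario splits over the 30 persons (indices 0-29), as described above:
#   1st: train 0-19,           test 20-29
#   2nd: train 10-29,          test 0-9
#   3rd: train 0-9 and 20-29,  test 10-19

def angular_separation(x, y):
    # Cosine of the angle between feature vectors (higher = more similar).
    return (x @ y) / np.sqrt((x @ x) * (y @ y))

def canberra(x, y):
    # Canberra distance (lower = more similar); 0/0 terms count as 0.
    denom = np.abs(x) + np.abs(y)
    with np.errstate(invalid="ignore"):
        return np.nansum(np.abs(x - y) / denom)

def classification_rate(train_F, train_y, test_F, test_y,
                        measure=angular_separation, larger_is_similar=True):
    # Nearest-neighbor classification rate in percent (cf. Equation (19)).
    correct = 0
    for f, label in zip(test_F, test_y):
        scores = np.array([measure(f, g) for g in train_F])
        best = scores.argmax() if larger_is_similar else scores.argmin()
        correct += int(train_y[best] == label)
    return 100.0 * correct / len(test_y)
```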

Fig. 16. Smile Stage Classification Recognition Rate Based on the Maximum Value Selection of Kernel Linear Preserving Projection Method Using 1st Scenario

Fig. 17. Smile Stage Classification Recognition Rate Based on the Maximum Value Selection of Kernel Linear Preserving Projection Method Using 2nd Scenario

Fig. 18. Smile Stage Classification Recognition Rate Based on the Maximum Value Selection of Kernel Linear Preserving Projection Method Using 3rd Scenario

The 1st, 2nd, and 3rd scenarios showed a similar trend, as seen in Figures 16, 17, and 18: the recognition rate increased significantly from the 1st to the 10th dimension, whereas beyond 11 dimensions it only fluctuated slightly. In the 1st scenario, the maximum and the average recognition rates were identical at 93.33%. In the 2nd scenario, the maximum recognition rate was 90%, obtained with the Canberra similarity measure. In the 3rd scenario, the maximum recognition rate was 100%, obtained with angular separation. Averaged over the three scenarios, the recognition rate was 93.33% for both the Angular Separation and the Canberra similarity measures [Mauridhi et al., 2010], as seen in Table 6.


| Similarity Methods | 1st Scenario | 2nd Scenario | 3rd Scenario | Average |
| --- | --- | --- | --- | --- |
| Angular Separation | 93.33 | 86.67 | 100 | 93.33 |
| Canberra | 93.33 | 90.00 | 96.67 | 93.33 |
| Maximum | 93.33 | 90.00 | 100 | 94.44 |
| Average | 93.33 | 88.34 | 98.34 | 93.33 |

Table 6. The Smile Stage Classification Recognition Rate (maximum recognition rate per scenario, in %) using the Maximum Feature Value Selection Method of a Non-linear Kernel Function based on Kernel Linear Preserving Projection
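As a reading note on Table 6, the Maximum and Average rows appear to be the column-wise maximum and mean over the two similarity measures, and the Average column the row-wise mean over the three scenarios; for example, for the Maximum row:

$$
\frac{93.33 + 90.00 + 100}{3} \approx 94.44\%
$$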

The experimental results of **the Maximum Value Selection of Kernel Linear Preserving Projection Method** have been compared with two baselines: Two-Dimensional Principal Component Analysis (2D-PCA) with a Support Vector Machine (SVM) as its classifier [Rima et al., 2010], and the combination of Principal Component Analysis (PCA) + Linear Discriminant Analysis (LDA) with an SVM classifier [Gunawan et al., 2009], as seen in Figure 19.


Fig. 19. The Comparison Results of Recognition Rate for Smile Stage Classification




**8. Conclusion**

Both the maximum non-linear feature selection of Kernel Principal Component Analysis and the maximum value selection of Kernel Linear Preserving Projection extract local feature structure, which is more important than global structure in feature space. The maximum non-linear feature selection of Kernel Principal Component Analysis has outperformed PCA, LDA/QR, and LPP/QR for face recognition on the ORL and the YALE face databases, whereas the maximum value selection of Kernel Linear Preserving Projection, as an extension of Kernel Principal Component Analysis, has outperformed the 2D-PCA+SVM and the PCA+LDA+SVM methods for smile stage classification.



