**6. The maximum value selection of kernel linear preserving projection as an extension of kernel principal component analysis**

Kernel Principal Component Analysis, as an appearance-based method in feature space, yields the global structure that characterizes an object. Besides the global structure, the local structure is also important. Kernel Linear Preserving Projection, also known as KLPP, is a method used to preserve the intrinsic geometry and local structure of the data in feature space [Cai et al., 2005; Cai et al., 2006; Kokiopoulou, 2004; Mauridhi et al., 2010]. The objective of LPP in feature space is written in the following equation [Mauridhi et al., 2010]

$$\min \sum_{ij} \left(\phi(y_i) - \phi(y_j)\right)^2 \phi(S_{ij}) \tag{24}$$

In this case the value of *S<sub>ij</sub>* in feature space can be defined as

$$\phi(S_{ij}) = \begin{cases} e^{-\frac{\left\|\phi(x_i) - \phi(x_j)\right\|^{2}}{t}} & \left\|\phi(x_i) - \phi(x_j)\right\| < \varepsilon\\ 0 & \text{otherwise} \end{cases} \tag{25}$$

where ε > 0 is sufficiently small compared to the local neighborhood radius. Minimizing the objective function ensures closeness between points located in the same class.


changed the size into 50×50 pixels (Figure 14). Before the feature extraction process, each face image was manually cropped at the oral area to produce the spatial coordinates [5.90816 34.0714 39.3877 15.1020] [Mauridhi et al., 2010]. This was done to simplify the calculation process. The cropped data were used for both the training and testing sets. This process reduced the face data size to 40×16 pixels, as seen in Figure 15.
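The cropping step above can be sketched as follows. The meaning of the four coordinate values is not stated in the text; here they are *assumed* to be [x, y, width, height], with the origin floored and the size rounded up so the result matches the reported 40×16 crop:

```python
import math
import numpy as np

def crop_oral_region(face_img, coords=(5.90816, 34.0714, 39.3877, 15.1020)):
    """Crop the oral region from a 50x50 face image.

    `coords` is assumed to mean [x, y, width, height]; the origin is
    floored and the size rounded up so the crop matches the 40x16 size
    reported in the text. This interpretation is an assumption.
    """
    x, y, w, h = coords
    x, y = math.floor(x), math.floor(y)          # x=5, y=34
    w, h = math.ceil(w), math.ceil(h)            # w=40, h=16
    return face_img[y:y + h, x:x + w]

face = np.random.rand(50, 50)   # stand-in for a 50x50 grayscale face image
mouth = crop_oral_region(face)
print(mouth.shape)              # (16, 40), i.e. 40x16 pixels
```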

The experiments used three scenarios. In the first scenario, the first 2/3 of the data (20 of 30 persons) became the training set and the rest (10 persons) were used as the testing set. In the second scenario, the first 10 persons (10 of 30) were used as the testing set and the last 20 persons (20 of 30) were used as the training set. In the last scenario, the first and the last 10 persons (20 of 30) were used as the training set and the middle 10 persons (10 of 30) were used as the testing set. The data were thus rotated without overlap, so each person appeared once in the testing set. Due to smiling pattern III, the numbers of training and testing data were 20×3 = 60 and 10×3 = 30 images respectively. In this experiment, 60 dimensions were used. To measure similarity, the angular separation and Canberra measures were used; Equations (17) and (18) give the similarity measures for angular separation and Canberra respectively [Mauridhi et al., 2010]. To obtain the classification rate percentage, Equation (19) was used. The classification results for the 1st, 2nd, and 3rd scenarios can be seen in Figures 16, 17, and 18 respectively.

The 1st, 2nd, and 3rd scenarios showed a similar trend, as seen in Figures 16, 17, and 18. The recognition rate increased significantly from the 1st to the 10th dimension, whereas with more than 11 dimensions it fluctuated only slightly. In the 1st scenario, the maximum and the average recognition rates did not differ, both being 93.33%. In the 2nd scenario, the maximum recognition rate was 90% when the Canberra similarity measure was used. In the 3rd scenario, the maximum recognition rate was 100% when angular separation was used. The maximum recognition rate was 93.33% for both the angular separation and Canberra similarity measures [Mauridhi et al., 2010], as seen in Table 6.
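Equations (17) and (18) are not reproduced here; the following is a minimal sketch of the two similarity measures under their standard definitions (angular separation as cosine similarity, Canberra as the usual element-wise distance), together with a hypothetical 1-NN classifier over projected feature vectors:

```python
import numpy as np

def angular_separation(u, v):
    """Cosine of the angle between feature vectors (larger = more similar)."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def canberra(u, v):
    """Canberra distance (smaller = more similar); terms with a zero
    denominator are skipped, following the usual convention."""
    denom = np.abs(u) + np.abs(v)
    mask = denom > 0
    return np.sum(np.abs(u - v)[mask] / denom[mask])

def classify(test_vec, train_vecs, train_labels, measure="angular"):
    """Hypothetical 1-NN classification over projected training features."""
    if measure == "angular":
        scores = [angular_separation(test_vec, t) for t in train_vecs]
        return train_labels[int(np.argmax(scores))]   # most similar wins
    scores = [canberra(test_vec, t) for t in train_vecs]
    return train_labels[int(np.argmin(scores))]        # smallest distance wins
```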

Fig. 14. Original Sample of Smiling Pattern

Fig. 15. Cropping Result Sample of Smiling Pattern


If neighboring points φ(*x<sub>i</sub>*) and φ(*x<sub>j</sub>*) are mapped far apart in feature space, i.e., if (φ(*y<sub>i</sub>*) − φ(*y<sub>j</sub>*))² is large, then φ(*S<sub>ij</sub>*) incurs a heavy penalty in feature space. Suppose a set of data is given and a weighted graph *G* = (*V*, *E*) is constructed from the data points, where points that are close to each other are linked by an edge. A map of the graph to a line is chosen to minimize the objective function of KLPP in Equation (24) under appropriate constraints. Let *a* represent the transformation vector, and let the *i*th column vector of *X* be denoted by *x<sub>i</sub>*. Through simple algebraic steps, the objective function in feature space can be reduced as in the following equation [Mauridhi et al., 2010]

$$\begin{aligned}
\frac{1}{2}\sum_{ij}\left(\phi(y_i)-\phi(y_j)\right)^{2}\phi(S_{ij})
&= \frac{1}{2}\sum_{ij}\left(a^{T}\phi(x_i)-a^{T}\phi(x_j)\right)^{2}\phi(S_{ij})\\
&= \frac{1}{2}\sum_{ij}\left(a^{T}\phi(x_i)\phi(x_i)^{T}a - 2\,a^{T}\phi(x_i)\phi(x_j)^{T}a + a^{T}\phi(x_j)\phi(x_j)^{T}a\right)\phi(S_{ij})\\
&= \sum_{i} a^{T}\phi(x_i)\,\phi(D_{ii})\,\phi(x_i)^{T}a - \sum_{ij} a^{T}\phi(x_i)\,\phi(S_{ij})\,\phi(x_j)^{T}a\\
&= a^{T}\phi(X)\phi(D)\phi(X)^{T}a - a^{T}\phi(X)\phi(S)\phi(X)^{T}a\\
&= a^{T}\phi(X)\left(\phi(D)-\phi(S)\right)\phi(X)^{T}a\\
&= a^{T}\phi(X)\phi(L)\phi(X)^{T}a
\end{aligned}\tag{26}$$

In this case, φ(*X*) = [φ(*x<sub>1</sub>*), φ(*x<sub>2</sub>*), …, φ(*x<sub>M</sub>*)], φ(*D<sub>ii</sub>*) = Σ<sub>*j*</sub> φ(*S<sub>ij</sub>*), and φ(*L*) = φ(*D*) − φ(*S*) represents the *Laplacian* matrix in feature space, known as *Laplacianlips* when implemented in smiling stage classification. The minimum of the objective function in feature space is given by the minimum eigenvalue solution of the following equation
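The construction of φ(*S*), φ(*D*), and the Laplacianlips matrix φ(*L*) can be sketched as follows, assuming the mapped samples are available explicitly as rows of an array `Phi` (in a full kernel formulation the pairwise distances would instead be computed from the kernel matrix); `t` and `eps` are the hyperparameters of Equation (25):

```python
import numpy as np

def laplacian_lips(Phi, t=1.0, eps=1.0):
    """Build the weight matrix S, degree matrix D and Laplacian L = D - S
    from mapped samples Phi (one row per phi(x_i)), following Eq. (25).
    `t` and `eps` are assumed hyperparameters."""
    # Pairwise squared Euclidean distances in feature space.
    sq = np.sum((Phi[:, None, :] - Phi[None, :, :]) ** 2, axis=2)
    # Heat-kernel weights only inside the eps-neighborhood, 0 elsewhere.
    S = np.where(np.sqrt(sq) < eps, np.exp(-sq / t), 0.0)
    D = np.diag(S.sum(axis=1))   # D_ii = sum_j S_ij
    L = D - S                    # Laplacian in feature space
    return S, D, L
```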

$$\begin{aligned} \phi(X)(\phi(D) - \phi(S))\phi(X)^{T}\phi(w) &= \lambda\,\phi(X)\phi(D)\phi(X)^{T}\phi(w) \\ \phi(X)\phi(L)\phi(X)^{T}\phi(w) &= \lambda\,\phi(X)\phi(D)\phi(X)^{T}\phi(w) \end{aligned} \tag{27}$$

The eigenvalues and eigenvectors in feature space can be calculated using Equation (27). The most to the least dominant features are obtained by sorting the eigenvalues in decreasing order and arranging the corresponding eigenvectors in feature space accordingly.
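A sketch of this step, assuming the mapped data is available explicitly as rows of `Phi` (a true kernel method would work with the kernel matrix instead). `scipy.linalg.eigh` solves the generalized symmetric eigenproblem of Equation (27); the small ridge term added to the right-hand-side matrix is a numerical-stability assumption, not part of the original formulation:

```python
import numpy as np
from scipy.linalg import eigh

def klpp_projection(Phi, S, D, n_components=10):
    """Solve X L X^T w = lambda X D X^T w (Eq. 27) for the mapped data
    and return eigenvalues and projection vectors ordered from most to
    least dominant, following the text's decreasing-order convention."""
    X = Phi.T                               # columns are phi(x_i)
    L = D - S
    A = X @ L @ X.T
    B = X @ D @ X.T + 1e-9 * np.eye(X.shape[0])  # ridge for stability (assumption)
    eigvals, eigvecs = eigh(A, B)           # generalized symmetric eigenproblem
    order = np.argsort(eigvals)[::-1]       # sort eigenvalues decreasingly
    return eigvals[order], eigvecs[:, order[:n_components]]
```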
