*Multimodal Biometrics for Person Authentication DOI: http://dx.doi.org/10.5772/intechopen.85003*

*Security and Privacy From a Legal, Ethical, and Technical Perspective*

**Figure 8.** *2D functions and 2D Gabor filter.*

The Gabor filter $Gab_{\omega,\theta}(x,y)$ can be decomposed into a real part

$$\mathcal{R}\{Gab_{\omega,\theta}(x,y)\} = \frac{1}{2\pi\sigma^2}\, G_{\theta}(x,y)\, \mathcal{R}\{S_{\omega,\theta}(x,y)\}$$

and an imaginary part

$$\mathcal{I}\{Gab_{\omega,\theta}(x,y)\} = \frac{1}{2\pi\sigma^2}\, G_{\theta}(x,y)\, \mathcal{I}\{S_{\omega,\theta}(x,y)\}$$

(for $\sigma_x = \sigma_y = \sigma$). Gabor response images are obtained by the convolution of multiscale and multi-orientation Gabor filters $Gab_{\omega,\theta}(x,y)$ with the image $f(x,y)$:

$$G_{\omega,\theta}(x,y) = f(x,y) * Gab_{\omega,\theta}(x,y) = Mag_{\omega,\theta}(x,y)\, e^{i\, Ph_{\omega,\theta}(x,y)} \tag{3}$$

where

$$Mag_{\omega,\theta}(x,y) = \sqrt{\mathcal{R}\{G_{\omega,\theta}(x,y)\}^2 + \mathcal{I}\{G_{\omega,\theta}(x,y)\}^2},$$

$$Ph_{\omega,\theta}(x,y) = \arctan\frac{\mathcal{I}\{G_{\omega,\theta}(x,y)\}}{\mathcal{R}\{G_{\omega,\theta}(x,y)\}},$$

and $*$ is the convolution operator.

The Gabor filter responses for a palm print image and a dorsal vein image are shown in **Figures 9** and **10**, respectively.

**Figure 9.** *Imaginary part of the Gabor filter responses of a palm print image.*

**3.2 Periocular feature extraction by LBP**

The periocular area contains the iris, eyes, eyelids, eyelashes, and partially eyebrows. The *LBP* method can be used to describe the texture of the periocular
area, and the feature vectors contain LBP features. The operator of local binary patterns (*LBP*) was proposed by Ojala [33] as a texture descriptor.

*LBP* divides the image into non-overlapping blocks of the same size. Local image features are calculated for each block separately. For a set of pixels belonging to a given block, the *LBP* values are calculated and then a histogram is created. The feature vectors (histograms) of each block are combined to form a global vector of features of the entire image.
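The block-wise construction of the global feature vector can be sketched as follows; a minimal numpy example, where the block size and number of histogram bins are chosen purely for illustration:

```python
import numpy as np

def blockwise_histograms(lbp_image, block, bins=256):
    # split the LBP code image into non-overlapping blocks of equal size,
    # build one histogram per block, and concatenate them into one vector
    h, w = lbp_image.shape
    feats = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            patch = lbp_image[i:i + block, j:j + block]
            feats.append(np.bincount(patch.ravel(), minlength=bins))
    return np.concatenate(feats)

codes = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in LBP codes
vec = blockwise_histograms(codes, block=16)
print(vec.shape)   # 16 blocks x 256 bins = (4096,)
```

Each block contributes one local histogram, so the global descriptor length is (number of blocks) × (number of bins).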

*LBP* analyzes a local neighborhood consisting of *P* points *g<sub>p</sub>* located on a circle of radius *R* around a center point *g<sub>c</sub>*, and checks whether each *g<sub>p</sub>* value is greater than or less than the *g<sub>c</sub>* value.

The *LBP* value of the *g<sub>c</sub>* point is specified as follows:

$$LBP_{P,R} = \sum_{p=0}^{P-1} s\left(g_p - g_c\right) 2^p \tag{4}$$

where *g<sub>p</sub>* and *g<sub>c</sub>* are the luminance values of the neighborhood point and the center point, respectively, and *s*(*x*) = 1 for *x* ≥ 0 and 0 otherwise.

The idea of this operator is presented in **Figure 11**.
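A minimal numpy sketch of Eq. (4) for *P* = 8, *R* = 1; note it approximates the radius-1 circle by the 8-connected grid neighbors, a common simplification (exact circular sampling uses interpolation):

```python
import numpy as np

def lbp_8_1(img):
    # LBP code image for P = 8, R = 1, using the 8-connected neighbors
    # as an integer-grid approximation of the radius-1 circle
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    gc = img[1:-1, 1:-1]                           # center points g_c
    # neighbor offsets in circular order; neighbor p gets weight 2^p
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for p, (dy, dx) in enumerate(offsets):
        gp = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (gp >= gc).astype(np.uint8) << p    # s(g_p - g_c) * 2^p
    return out

img = np.arange(25, dtype=np.uint8).reshape(5, 5)  # toy 5 x 5 image
codes = lbp_8_1(img)                               # one code per interior pixel
hist = np.bincount(codes.ravel(), minlength=256)   # 256-bin descriptor
print(codes.shape, int(hist.sum()))
```

On this monotonically increasing toy image every interior pixel gets the same code, since exactly the right and lower neighbors exceed the center.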

For an image of size *M* × *N*, the image descriptor is a histogram created from the *LBP* values, as given in Eq. (5).

**Figure 10.** *Imaginary part of the Gabor filter responses of a dorsal vein image.*

**Figure 11.** *The basic idea of LBP approach.*

hand vein database [35]. We choose 20 subjects with 10 images per subject at random. From the 10 images, 5 are used for training and 5 for testing.


Fusion at the level of feature vectors is difficult to achieve in practice because combining fundamentally different feature vectors can produce a resulting feature vector of very large dimensionality. In feature-level fusion, the processing chain of each individual modality generates a feature vector, and the fusion process combines these feature vectors into one vector.

For dorsal vein images and for palm print images, we perform the same image processing operations so that the feature vectors have the same sizes. As a result of the convolution of multiscale and multi-orientation Gabor filters with the input image, we get the Gabor response images. The feature vector has a very large size of (*M* × *N* × *k* × *l*), where *M* × *N* is the image size, *k* is the number of scales, and *l* is the number of orientations. In our case, for both dorsal vein images and palm print images, we get a feature vector containing 150 × 150 × 3 × 6 = 405,000 items. The images subjected to the Gabor filtration are rescaled with a scale factor of 0.1, which yields a feature vector of size 1 × 4050. For periocular images, the feature vector has a size of 36 × 59 = 2124.
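The size bookkeeping above can be reproduced with a small numpy sketch of a Gabor filter bank; the (ω, σ) pairs and the 31 × 31 kernel size are illustrative assumptions, and the scale-factor-0.1 rescaling is approximated here by simple subsampling:

```python
import numpy as np

def gabor_kernel(size, omega, theta, sigma):
    # complex 2D Gabor: Gaussian envelope times a complex sinusoid
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    gauss = np.exp(-(x**2 + y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return gauss * np.exp(1j * omega * xr)

def fft_conv_same(img, k):
    # 'same'-size linear convolution via zero-padded FFTs
    H, W = img.shape
    kh, kw = k.shape
    F = np.fft.fft2(img, s=(H + kh - 1, W + kw - 1))
    K = np.fft.fft2(k, s=(H + kh - 1, W + kw - 1))
    full = np.fft.ifft2(F * K)
    return full[kh // 2:kh // 2 + H, kw // 2:kw // 2 + W]

def gabor_feature_vector(img, scales, orientations, factor=0.1):
    # the raw response stack has M*N*k*l values (405,000 for 150x150, 3, 6);
    # subsampling by `factor` stands in for the scale-0.1 rescaling
    step = int(round(1 / factor))
    feats = []
    for omega, sigma in scales:
        for theta in orientations:
            resp = fft_conv_same(img, gabor_kernel(31, omega, theta, sigma))
            feats.append(np.abs(resp)[::step, ::step].ravel())
    return np.concatenate(feats)

img = np.random.rand(150, 150)                    # stand-in 150 x 150 image
scales = [(0.5, 2.0), (0.25, 4.0), (0.125, 8.0)]  # assumed (omega, sigma) pairs
orientations = [i * np.pi / 6 for i in range(6)]  # 6 orientations
vec = gabor_feature_vector(img, scales, orientations)
print(vec.shape)   # 3 scales x 6 orientations x (15 x 15) = (4050,)
```

Subsampling each 150 × 150 magnitude response to 15 × 15 gives 18 × 225 = 4050 elements, matching the vector size quoted in the text.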

Next, we reduce the dimensionality of these vectors using the PCA method (**Figure 14** and **Table 4**) [5]. Separated features are normalized using zero mean and unit variance as $\hat{x} = (x - \mu)/\sigma$, where $\mu$ and $\sigma$ are the mean and the standard deviation of the feature.

**Figure 14.** *Steps to image processing using PCA.*

| PCA algorithm | |
|---|---|
| Organizing the training set of images | $T = \{G_1, G_2, \cdots, G_q\}$, where $q$ is the number of images in the training set |
| Calculating the average of the set $T$ | $\Psi = \frac{1}{q}\sum_{i=1}^{q} G_i$ |
| Calculating $\Phi_i$ | $\Phi_i = G_i - \Psi$ |
| Calculating the covariance matrix $C$ | $C = \frac{1}{q}\sum_{i=1}^{q} \Phi_i \Phi_i^{t} = AA^{t}$ |
| The eigenvectors and corresponding eigenvalues are computed | $C v_i = \lambda_i v_i,\ i = 1, \cdots, q$; the eigenvectors and their corresponding eigenvalues are paired and ordered from high to low |
| Approximated image is calculated as | $G = vw + \Psi$ for $v = (v_1, v_2, \cdots, v_k)$ |

**Table 4.** *PCA algorithm.*
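The steps of the PCA algorithm can be sketched in numpy as follows; this uses the standard small-covariance trick (eigenvectors of the q × q matrix $\Phi\Phi^{t}$ mapped back to image space), with random data standing in for the training images:

```python
import numpy as np

def pca_train(T, k):
    # T: q x d matrix, one flattened (normalized) image per row
    q = T.shape[0]
    psi = T.mean(axis=0)                 # average of the set T
    Phi = T - psi                        # Phi_i = G_i - psi
    # eigen-decompose the small q x q matrix instead of the d x d covariance
    vals, vecs = np.linalg.eigh(Phi @ Phi.T / q)
    order = np.argsort(vals)[::-1]       # pair and order eigenvalues high to low
    V = Phi.T @ vecs[:, order[:k]]       # map back to image space
    V /= np.linalg.norm(V, axis=0)       # unit-length eigenvectors
    return psi, V

def pca_project(G, psi, V):
    return (G - psi) @ V                 # feature weights w

def pca_reconstruct(w, psi, V):
    return V @ w + psi                   # approximated image G = v w + psi

rng = np.random.default_rng(0)
T = rng.random((20, 4050))               # e.g. 20 training vectors of length 4050
psi, V = pca_train(T, k=5)
w = pca_project(T[0], psi, V)
approx = pca_reconstruct(w, psi, V)
print(w.shape, approx.shape)             # (5,) (4050,)
```

Keeping only the top *k* eigenvectors turns each 4050-element Gabor vector into a *k*-element weight vector, which is what makes the subsequent fusion and matching tractable.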

**Figure 12.** *The original image (a) and image as a result of the LBP operator (b).*

**Figure 13.** *The LBP<sub>P,R</sub> histogram (a), histograms of the n blocks (b), and the LBP<sup>u2</sup><sub>P,R</sub> histogram (c).*

$$H(k) = \sum\_{i=1}^{M} \sum\_{j=1}^{N} f(LBP\_{P,R}(i,j), k); k \in [0, K] \tag{5}$$

$$f(x, y) = \begin{cases} 1, & x = y \\ 0, & \text{otherwise} \end{cases}$$

where *k* is one *LBP* pattern and *K* is the maximal *LBP* pattern value (the number of bins of the histogram).

Using the *LBP* operator, we obtain 2<sup>*P*</sup> different output values corresponding to the 2<sup>*P*</sup> different binary patterns created by the *P* neighboring pixels. Certain binary patterns contain more information than others, so we can consider only this subset of *LBP* values. The patterns of this subset are called uniform patterns. Thus we have the standard *LBP<sub>P,R</sub>* operator and the uniform *LBP<sup>u2</sup><sub>P,R</sub>* operator.

Typically, the image is divided into *n* blocks, and the histograms of the blocks are concatenated into a feature vector [34].

In the case of the *LBP<sub>P,R</sub>* operator with *P* = 8, the histogram contains 256 bins. In the case of the *LBP<sup>u2</sup><sub>P,R</sub>* operator, the histogram contains 59 bins (**Figures 12** and **13**).
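The 59-bin count for the uniform operator with *P* = 8 can be checked by enumerating the patterns with at most two 0/1 transitions along the circle:

```python
def transitions(pattern, P=8):
    # count 0/1 transitions in the circular P-bit pattern
    bits = [(pattern >> i) & 1 for i in range(P)]
    return sum(bits[i] != bits[(i + 1) % P] for i in range(P))

# uniform patterns are those with at most two transitions
uniform = [p for p in range(256) if transitions(p) <= 2]
print(len(uniform))   # 58 uniform patterns; plus one bin for all the rest = 59
```

All 198 non-uniform patterns share a single "other" bin, which is why the uniform histogram is so much more compact than the full 256-bin one.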
