**3. The proposed multi-biometric system**

The multi-biometric system (dorsal vein + periocular + palm print) is presented in **Figure 5**.

*Multimodal Biometrics for Person Authentication DOI: http://dx.doi.org/10.5772/intechopen.85003*

**Figure 5.**
*Considered multi-biometric system architecture.*

In our proposed method, the first stage is a preprocessing block, including noise elimination, ROI detection and normalization, and contrast normalization. For all three

modalities, noise elimination for an image *f*(*x*, *y*) is performed using a median filtering (2D MF) operation formulated as [21]:

$$\hat{f}(x, y) = \operatorname{median}_{A_1} f(x, y) = \operatorname{median}\,[f(x + r, y + s)] \tag{1}$$

where $A_1$ is the *MF* window.
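The 2D MF operation of Eq. (1) can be sketched in NumPy as below; the function name, the odd window size, and the reflect-padding at the image borders are our assumptions, since the chapter does not specify boundary handling:

```python
import numpy as np

def median_filter_2d(f, k=3):
    """Eq. (1): replace each pixel f(x, y) with the median over the
    k-by-k window A1 centered on it. Borders are reflect-padded
    (an assumption; the chapter leaves boundary treatment unspecified)."""
    r = k // 2
    padded = np.pad(f, r, mode="reflect")
    out = np.empty_like(f)
    rows, cols = f.shape
    for y in range(rows):
        for x in range(cols):
            # the A1 window around (x, y) in the padded image
            out[y, x] = np.median(padded[y:y + k, x:x + k])
    return out
```

A single outlier pixel is replaced by the median of its neighborhood, which is why median filtering suppresses impulse (salt-and-pepper) noise while preserving edges better than linear smoothing.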

The next step in the preprocessing phase is ROI detection and normalization (**Figure 6**). This operation is quite different for dorsal vein, palm print, and periocular images. For dorsal vein images, we use the distance transform to detect the center of the dorsal image and build a square ROI around these center coordinates [22, 23]. The ROI design for palm print images is based on hand-specific points (the finger valleys) and two angles [24]. The periocular region is detected based on the center


of the iris. Using a conventional iris detection algorithm, we determine the center of the iris and its diameter. The periocular area is a rectangle centered at the iris center [25, 26].
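For the dorsal vein ROI, the center detection via the distance transform can be sketched as follows. This is only an illustration under our own assumptions: a binary hand mask is taken as already available, the helper name and the square half-size parameter are ours, and taking the maximum of the Euclidean distance transform as the center is one plausible reading of the method in [22, 23]:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def square_roi_center(mask, half):
    """Find the pixel deepest inside the foreground (the maximum of the
    Euclidean distance transform) and return square ROI bounds around it."""
    dist = distance_transform_edt(mask)   # distance to the nearest background pixel
    cy, cx = np.unravel_index(np.argmax(dist), dist.shape)
    return cy - half, cy + half, cx - half, cx + half
```

The deepest point of the mask is robust to small contour noise, which makes it a natural anchor for a square ROI on the back of the hand.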

After ROI detection, we perform image size normalization and apply contrast normalization using the CLAHE algorithm. The image is divided into non-overlapping areas of equal size, and the histogram of each region is calculated. Next, a cutoff threshold for the histograms is obtained, and each histogram is processed in such a way that its height does not exceed the cutoff threshold [21].
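The clipping step of CLAHE can be sketched for a single tile histogram as below. This is a one-pass sketch under our own simplification: the clipped excess is spread uniformly over all bins, whereas production implementations re-clip iteratively so that no bin ends above the limit:

```python
import numpy as np

def clip_histogram(hist, clip_limit):
    """CLAHE clipping for one tile: cut each bin at clip_limit and
    redistribute the removed excess uniformly over all bins, so the
    total pixel count of the tile is preserved."""
    hist = hist.astype(float)
    excess = np.maximum(hist - clip_limit, 0.0).sum()
    clipped = np.minimum(hist, clip_limit)
    return clipped + excess / hist.size
```

The per-tile equalization mapping is then built from the clipped histogram, and neighboring tile mappings are blended by bilinear interpolation to avoid block artifacts.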

Sample input images after the normalization operations and after applying the CLAHE algorithm are shown in **Figure 7**. The subsequent processing blocks perform feature extraction, feature selection, fusion, and classification.

### **3.1 Gabor feature extraction**

Biologically inspired vision models include receptive fields, which form the primary stage of early visual processing in mammalian vision systems. Gabor functions are widely used in image feature analysis because they are similar to the receptive field profiles of simple cells in the mammalian cortex. These fields are modeled using Gabor filters [27].

Imitating mammalian vision systems (or some of their components) in object recognition systems increases their efficiency and plausibility. Object recognition systems inspired by the biological approach use filter banks, in particular Gabor filters (**Figure 8**) [28–32].

The 2D Gabor filter family can be represented as expressed in Eq. (2):

$$Gab_{\omega,\theta}(x, y) = \frac{1}{2\pi\sigma_x\sigma_y}\, G_{\theta}(x, y)\, S_{\omega,\theta}(x, y) \tag{2}$$

$$\text{where } G_{\theta}(x, y) = e^{-\left(\frac{(x\cos\theta + y\sin\theta)^2}{2\sigma_x^2} + \frac{(-x\sin\theta + y\cos\theta)^2}{2\sigma_y^2}\right)} \text{ and } S_{\omega,\theta}(x, y) = e^{i(\omega x\cos\theta + \omega y\sin\theta)} - e^{-\frac{\omega^2\sigma^2}{2}}.$$
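Eq. (2) can be sketched directly in NumPy as below. The kernel size is our own choice, and the unsubscripted $\sigma$ in the DC-compensation term of $S_{\omega,\theta}$ is assumed here to equal $\sigma_x$:

```python
import numpy as np

def gabor_kernel(omega, theta, sigma_x, sigma_y, size=31):
    """Complex Gabor filter Gab_{omega,theta}(x, y) of Eq. (2): a rotated
    Gaussian envelope G_theta times the zero-DC carrier S_{omega,theta}."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr ** 2 / (2 * sigma_x ** 2) + yr ** 2 / (2 * sigma_y ** 2)))
    # carrier minus its DC component (sigma assumed equal to sigma_x)
    s = (np.exp(1j * omega * (x * np.cos(theta) + y * np.sin(theta)))
         - np.exp(-omega ** 2 * sigma_x ** 2 / 2))
    return g * s / (2 * np.pi * sigma_x * sigma_y)
```

Convolving an image with a bank of such kernels over several frequencies $\omega$ (scales) and orientations $\theta$ yields the multiscale, multi-orientation responses used below.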

**Figure 6.** *ROI area for dorsal vein images (a), palm print images (b), and periocular images (c).*

**Figure 7.** *Images after normalization (size 150 × 150 pixels) and after applying the CLAHE algorithm.*


*Security and Privacy From a Legal, Ethical, and Technical Perspective*


The $Gab_{\omega,\theta}(x, y)$ can be decomposed into a real part $\Re\{Gab_{\omega,\theta}(x, y)\} = \frac{1}{2\pi\sigma^2}\, G_{\theta}(x, y)\, \Re\{S_{\omega,\theta}(x, y)\}$ and an imaginary part $\Im\{Gab_{\omega,\theta}(x, y)\} = \frac{1}{2\pi\sigma^2}\, G_{\theta}(x, y)\, \Im\{S_{\omega,\theta}(x, y)\}$ (for $\sigma_x = \sigma_y = \sigma$).

Gabor response images are obtained by convolving the multiscale and multi-orientation Gabor filters $Gab_{\omega,\theta}(x, y)$ with the image *f*(*x*, *y*):

$$\begin{aligned} G_{\omega,\theta}(x, y) &= f(x, y) * Gab_{\omega,\theta}(x, y) = Mg_{\omega,\theta}(x, y)\, e^{i\, Ph_{\omega,\theta}(x, y)}, \\ Mg_{\omega,\theta}(x, y) &= \sqrt{\Re\{Gab_{\omega,\theta}(x, y)\}^2 + \Im\{Gab_{\omega,\theta}(x, y)\}^2}, \\ Ph_{\omega,\theta}(x, y) &= \arctan\frac{\Im\{Gab_{\omega,\theta}(x, y)\}}{\Re\{Gab_{\omega,\theta}(x, y)\}} \end{aligned} \tag{3}$$

where ∗ is the convolution operator.

The Gabor filter responses for a palm print image and a dorsal vein image are shown in **Figures 9** and **10**, respectively.

**Figure 10.**
*Imaginary part of the Gabor filter responses of a dorsal vein image.*

area, and the feature vectors contain *LBP* features. The operator of local binary patterns (*LBP*) was proposed by Ojala [33] as a texture descriptor.

*LBP* divides the image into non-overlapping blocks of the same size. Local image features are calculated for each block separately. For the set of pixels belonging to a given block, the *LBP* values are calculated and then a histogram is created. The feature vectors (histograms) of the blocks are combined to form a global vector of features of the entire image.

*LBP* analyzes the local neighborhood consisting of $g_p$ points located on a circle with radius *R* surrounding the center point $g_c$, and checks whether the values of the $g_p$ points are greater or smaller than the value of the $g_c$ point. The idea of this operator is presented in **Figure 11**.

**Figure 11.**
*The basic idea of the LBP approach.*

The *LBP* value of the $g_c$ point is specified as follows:

$$LBP_{P,R} = \sum_{p=0}^{P-1} S(g_p - g_c)\, 2^p \tag{4}$$

where $g_p$ and $g_c$ are the luminance values of the neighborhood and center point, respectively, and $S(z)$ is the unit step function ($S(z) = 1$ for $z \geq 0$ and $0$ otherwise).

For an image of size $M \times N$, the image descriptor is a histogram created from the *LBP* values:
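Eq. (4) can be sketched for a single pixel as follows. Nearest-pixel sampling of the circular neighbors is our simplification; Ojala's operator normally samples off-grid neighbors with bilinear interpolation:

```python
import numpy as np

def lbp_value(img, cy, cx, P=8, R=1):
    """LBP_{P,R} code of Eq. (4) for the pixel at (cy, cx): threshold the P
    circular neighbors g_p at the center value g_c and weight by 2^p."""
    gc = img[cy, cx]
    code = 0
    for p in range(P):
        angle = 2.0 * np.pi * p / P
        # neighbor g_p on the circle of radius R, rounded to the nearest pixel
        ny = cy + int(round(R * np.sin(angle)))
        nx = cx + int(round(R * np.cos(angle)))
        code += int(img[ny, nx] >= gc) << p   # S(g_p - g_c) * 2^p
    return code
```

Per-block histograms of these codes, concatenated over all blocks, form the global feature vector described above.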
