*Security and Privacy From a Legal, Ethical, and Technical Perspective*

*Multimodal Biometrics for Person Authentication DOI: http://dx.doi.org/10.5772/intechopen.85003*

• Rank level fusion. The classifier determines the rank of each registered biometric identity; a high rank is a good indicator of a good match (**Figure 4b**).

**2.3 Related work**

The fusion of biometric modalities at the different levels of a multi-biometric system has been studied extensively in the literature (**Table 3**). Nevertheless, fusion at the level of feature vectors is discussed relatively rarely. Fusion at this level integrates the feature vectors obtained from several sources of information. Because feature vectors carry richer information about the input biometric data than the scores or decisions derived from them, fusion at the feature-vector level can be expected to give better authentication results. However, fusion at this level is difficult to implement in practice because (i) the feature sets of different modalities may be incompatible, (ii) concatenating two feature vectors may produce a vector of very high dimensionality, and (iii) a complex matching system is required.
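The difficulties listed above can be made concrete with a small sketch of feature-level fusion. This is an illustration, not code from the chapter: the modality vectors are made up, and min-max scaling stands in for whatever normalization a real system would use.

```python
import numpy as np

def minmax_normalize(v):
    """Scale one modality's feature vector to [0, 1] so that modalities
    with incompatible numeric ranges become comparable (difficulty i)."""
    lo, hi = v.min(), v.max()
    return (v - lo) / (hi - lo) if hi > lo else np.zeros_like(v)

def fuse_features(vectors):
    """Feature-level fusion: normalize each vector, then concatenate.
    The fused dimensionality is the sum of the parts (difficulty ii)."""
    return np.concatenate([minmax_normalize(np.asarray(v, dtype=float))
                           for v in vectors])

# made-up feature vectors for three modalities
face = np.array([120.0, 80.0, 200.0, 15.0])   # 4 features
palm = np.array([0.2, 0.9, 0.4])              # 3 features
iris = np.array([5.0, 3.0, 8.0, 1.0, 6.0])    # 5 features

fused = fuse_features([face, palm, iris])
print(fused.shape)   # (12,)
```

In a real system the fused vector would typically be passed through feature selection or PCA before matching, which is why a more complex matcher (difficulty iii) is usually needed.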

| Biometric traits | Fusion level | Description of the implementation method | Ref. |
| --- | --- | --- | --- |
| Fingerprint and face | Feature | In [5], it was proposed to extract face and fingerprint characteristics that are invariant to rotation and scaling using Zernike moments (ZM). On the basis of ZM, the fusion of facial and fingerprint features is realized, and an RBF network implements the decision-making process. The accuracy rate is 96.55%; the authentication test results are FAR = 4.95% and FRR = 1.12% | [5] |
| Fingerprint and face | Score | In [6], the authors presented a score-level fusion technique using SIFT features for the face and minutiae features for the fingerprint. Results: FAR = 1.98%, FRR = 3.18%, and accuracy = 97.41% | [6] |
| Fingerprint, finger knuckle print, and finger vein | Feature | Multi-set canonical correlation analysis (MCCA) is used to fuse multiple feature sets. The MCCA-based feature achieves a recognition performance of 91.3% | [1] |
| Fingerprint, finger vein, and finger shape | Feature | With the help of a unified Gabor filter, fingerprint codes and finger vein codes are generated. Feature extraction is carried out using a supervised local canonical correlation analysis (SLPCCA), and finally an NN classifier is used | [7] |
| Fingerprint and iris | Score | In [8], the authors propose a frequency-based approach to generate a unified homogeneous template for the fingerprint and iris features. The scores generated from these templates are fused using the sum rule | [8] |
| Palm print and hand shape | | | [9] |
| Palm print and iris | Feature | In the system described in [10], texture parameters are extracted using Gabor filters. Fusion of the palm print and iris features is based on wavelets, and the decision is obtained using a kNN classifier. Recognition accuracy is 99.2% and FRR = 1.6% | [10] |
| Palm print and iris | Feature | In [11], the fusion method for the phase information of the iris and palm print utilizes a band-limited image product (BLIP). EER = 2.3900e-04 | [11] |
| Finger knuckle and palm print | Feature | The feature extraction method for the palm print is monogenic binary coding; for inner knuckle print recognition, two algorithms, the ridgelet transform and SIFT, are proposed. The extracted feature vectors are classified using SVM | [12] |
| Face and hand geometry | Feature | The 2D DCT is used to extract discriminant face features, which are concatenated with hand geometry features. The resulting feature vector is classified using SVM | [13] |
| Palm print and face | Feature | PCA is used to extract features of the palm and face images. The fusion technique concatenates the feature vectors of the face and palm modalities into one fused vector, and feature selection is performed | [14] |
| Face and gait | Feature | The method is based on learning face and gait features in image transform spaces. Two methods are considered, PCA and LDA | [15] |
| Face and gait | Feature | Information from the face image and the gait image is combined at the feature level. Facial features are obtained using principal component analysis (PCA), and the gait energy image (GEI) is processed using multiple discriminant analysis (MDA). The recognition rate is 95.53% with a 4.47% EER | [16] |
| Face and iris | Feature | Paper [17] presents the extraction of iris features based on 2D Gabor filters and of facial features using the PCA method | [17] |
| Face and iris | Score | A multi-biometric system using dual iris, visible, and thermal face traits is considered. Ordinal measures and local binary patterns (LBP) are proposed to extract features from the iris and face regions, respectively; 1D Log-Gabor filters and the Complex Gabor Jet Descriptor (CGJD) are used to extract the feature vectors. The authors proposed a score-level fusion algorithm | [18] |
| Face and ear | Score | Fusion at the match-score level is proposed in [19]. The authors use Dempster-Shafer decision theory for each modality | [19] |
| Ocular: iris and conjunctival vascular | Score | In [4], the authors presented fusion of iris and conjunctival vascular information. A weighted fusion method is proposed for each modality; the fusion resulted in an EER of 2.83% | [4] |
| Face, ear, and signature | Rank | In [20], the PCA and Fisher's linear discriminant (FLD) methods are proposed in a face, ear, and signature multimodal biometric system. Local features are extracted from the face, ear, and signature data and matched using the Euclidean distance. The system uses rank-level fusion | [20] |

**Table 3.**
*Summary of works on multimodal biometric systems.*
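Several of the entries above (e.g., [6, 8, 18]) fuse at the score level, most simply with the sum rule. The sketch below is illustrative only: the matcher scores are invented, and min-max normalization is one common choice applied before summing.

```python
import numpy as np

def minmax(scores):
    """Min-max normalize raw matcher scores to [0, 1]; needed because
    each matcher produces scores on its own scale."""
    s = np.asarray(scores, dtype=float)
    lo, hi = s.min(), s.max()
    return (s - lo) / (hi - lo) if hi > lo else np.zeros_like(s)

def sum_rule(*score_lists):
    """Score-level fusion with the unweighted sum rule: normalize each
    matcher's scores, then add them candidate-by-candidate."""
    return np.sum([minmax(s) for s in score_lists], axis=0)

# invented similarity scores for four enrolled identities
fingerprint_scores = [23.0, 58.0, 31.0, 12.0]
iris_scores        = [0.61, 0.95, 0.40, 0.20]

fused = sum_rule(fingerprint_scores, iris_scores)
best = int(np.argmax(fused))
print(best)   # 1
```

A weighted sum (as in [4]) differs only in multiplying each normalized score list by a per-modality weight before adding.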
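Rank-level fusion, used by [20] in the table, can be illustrated with the Borda count; the identities and per-matcher rankings below are invented for the example.

```python
def borda_fusion(rankings):
    """Rank-level fusion via Borda count: each matcher ranks the N
    enrolled identities (best first); an identity receives N - position
    points from each matcher, and the points are summed across matchers."""
    n = len(rankings[0])
    points = {}
    for ranking in rankings:
        for pos, identity in enumerate(ranking):
            points[identity] = points.get(identity, 0) + (n - pos)
    # highest total points = best fused rank
    return sorted(points, key=points.get, reverse=True)

# invented per-matcher rankings of four enrolled identities
face_rank = ["B", "A", "C", "D"]
ear_rank  = ["A", "B", "D", "C"]
sig_rank  = ["B", "D", "A", "C"]

fused_order = borda_fusion([face_rank, ear_rank, sig_rank])
print(fused_order[0])   # B
```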

**3. The proposed multi-biometric system**

The multi-biometric system (dorsal vein + periocular + palm print) is presented in **Figure 5**.

In our proposed method, the first block is preprocessing, which includes noise elimination, ROI detection and normalization, and contrast normalization. For all three

