1. Introduction

Biometrics is a field of science concerned with identifying a person by means of their physiological and behavioral features [1, 2]; it is the automatic recognition of people based on these characteristics. During recognition, users are assigned to predefined classes: the essential features of an object are extracted and then used to classify it.

Biometric systems in general perform two tasks: identification and verification (recognition) of people (Figure 1). The process of verification (recognition) boils down to distinguishing a specific person from a limited number of people whose biometric data are known. Identification consists of determining the feature vector of the person being subjected to the identification process and trying to match this vector against the feature vectors stored in a database of records concerning people. As a result, we get a list of the most similar individuals in the database. Identification is much more difficult [3, 4].
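The two tasks can be sketched as a 1:1 comparison and a 1:N search over stored feature vectors. The following minimal illustration is an assumption-laden sketch, not part of the survey: the Euclidean distance, the acceptance threshold, and the toy database are all chosen here for demonstration.

```python
import numpy as np

def verify(probe, template, threshold=0.5):
    """1:1 verification: accept if the probe is close to the claimed template."""
    return float(np.linalg.norm(probe - template)) <= threshold

def identify(probe, database, top_k=3):
    """1:N identification: rank every enrolled feature vector by distance
    and return the most similar identities."""
    dists = {name: float(np.linalg.norm(probe - vec))
             for name, vec in database.items()}
    return sorted(dists, key=dists.get)[:top_k]

# Hypothetical enrolled database of 2D feature vectors
db = {"alice": np.array([0.0, 0.0]),
      "bob": np.array([1.0, 1.0]),
      "carol": np.array([5.0, 5.0])}
probe = np.array([0.9, 1.1])
```

With real biometric data the feature vectors would come from the extraction pipeline described below, and the distance measure and threshold would be tuned per modality.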

Images play an important role in the identification process of people. Image processing and recognition are fields that use complex signal and image processing algorithms.

The image in digital form is stored as a two-dimensional array. Formally

$$D = \{ (x, y) \mid x \in M,\ y \in N \} \tag{1}$$


A Survey on Methods of Image Processing and Recognition for Personal Identification

http://dx.doi.org/10.5772/intechopen.76116


Figure 2. Schematic diagram of the image processing and recognition system for personal identification.

and

$$F = \{ f(x, y) \mid (x, y) \in D \text{ and } f(x, y) \in \{0, 1, \cdots, G - 1\} \} \tag{2}$$

Figure 1. Identification and verification process.

where $M = \{1, 2, \cdots, m\}$, $N = \{1, 2, \cdots, n\}$, and $G - 1$ is the maximum gray/color value of each resolution cell.
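In NumPy terms (an illustration, not from the chapter), Eqs. (1)–(2) simply describe an $m \times n$ integer array whose entries are bounded by $G - 1$; the sizes and $G = 256$ below are assumed values.

```python
import numpy as np

m, n, G = 4, 5, 256                   # domain sizes and number of gray levels
f = np.zeros((m, n), dtype=np.uint8)  # f(x, y) from Eq. (2), all pixels black
f[1, 2] = G - 1                       # the brightest admissible value, G - 1
```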

The components of an image processing system are presented in Figure 2.

The processing generally comprises the steps of acquiring an image, selecting the desired color space, improving image quality, image segmentation, and feature extraction for the recognition. The recognition process involves several stages: feature extraction and dimensionality reduction, which selects the best set of features and rejects irrelevant ones. The resultant feature vector is the basis for classification.

The image is usually obtained using a CCD camera or an NIR camera. It can be a color image (three color components) or a grayscale image. Usually, the color space (24-bit RGB) is converted to an 8-bit grayscale space.

Below, some of the steps of the image processing system shown in Figure 2 are explained in more detail [5].

Image processing operations can be divided into (Figure 3):

• Processing of single points of the image.

• Operations that use pixel group processing.

Figure 3. Image processing operations.

Machine Learning and Biometrics

The first group includes operations related to histogram modification, while the second group includes operations related to edge detection and various types of image filtration.

Transforming the brightness scale of image elements enables:

• Extending the brightness range when it does not cover the entire scale available for the image (the effect of increased contrast)

• Emphasizing certain brightness ranges and suppressing others

• Modifying the brightness of image elements to obtain a uniform frequency of occurrence of the brightness levels

In practice, the transformation T can be a logarithmic transformation, an exponential transformation, etc. (Figure 4).

If $h_g$ represents the number of pixels in an image with intensity $g$, i.e., $f(x, y) = g$, then the probability density function is defined as $\mathrm{prob}(f(x, y) = g) = \frac{h_g}{MN}$ for $g = 0, 1, \cdots, G - 1$, and the cumulative distribution function is defined as $c(f(x, y) = g) = \sum_{j=0}^{g} \mathrm{prob}(f(x, y) = j)$ for $g = 0, 1, \cdots, G - 1$.

The gray levels are modified as [5, 6]

$$\overline{g} = (\max - \min) \cdot c(f(x, y) = g) + \min \tag{3}$$

where max and min are, respectively, the maximum and minimum values of the image gray level [6] (Figures 5 and 6).
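Equation (3) can be sketched directly in NumPy; this is a minimal illustration in which the number of levels `G` and the lookup-table approach are assumptions. The histogram gives prob, its running sum gives $c$, and Eq. (3) maps every input level $g$ to a stretched output level.

```python
import numpy as np

def equalize(img, G=256):
    """Histogram equalization via Eq. (3): g_bar = (max - min) * c(g) + min."""
    h, _ = np.histogram(img, bins=G, range=(0, G))  # h[g] = h_g, count of level g
    prob = h / img.size                             # prob(f(x, y) = g) = h_g / (M N)
    c = np.cumsum(prob)                             # cumulative distribution c(g)
    lut = (int(img.max()) - int(img.min())) * c + int(img.min())  # Eq. (3) per level
    return lut[img].astype(np.uint8)
```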

Figure 4. The original fingerprint image (a), the result of logarithmic transformation (b), and the exponential transformation (c).

Figure 5. Histogram of the original fingerprint image (a) and histogram-enhanced images after logarithmic transformation (b), exponential transformation (c), equalization (d), and CLAHE (contrast limited adaptive histogram equalization) (e).

Figure 6. The original fingerprint image (a), enhanced image (b), and stretched image (c).

One of the methods of eliminating noise (of the "salt and pepper" type) and other image distortions is median filtering (MF). Median filtering is a nonlinear operation, and this fact complicates the mathematical analysis of its properties. It is implemented by moving a window (the mask) along the lines of the digital image and replacing the value of the middle window element with the median value of the elements inside the window. MF preserves sharp changes in brightness while remaining highly efficient at eliminating impulsive noise [5] (Figure 7).

The 2D MF for an image $f(x, y)$ is defined as

$$\widehat{f}(x, y) = \operatorname{median}_{A_1} f(x, y) = \operatorname{median}[f(x + r, y + s)] \tag{4}$$

where $A_1$ is the MF window.
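A direct (unoptimized) sketch of Eq. (4), with a $k \times k$ window standing in for $A_1$; the edge-replication border policy is an assumption made for the example.

```python
import numpy as np

def median_filter(img, k=3):
    """2D MF of Eq. (4): slide a k-by-k window A1 over the image and replace
    the central element with the median of the window."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")   # replicate borders
    out = np.empty_like(img)
    for x in range(img.shape[0]):
        for y in range(img.shape[1]):
            out[x, y] = np.median(padded[x:x + k, y:y + k])
    return out
```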

Figure 7. Median filtration: original image (a), image with noise (b), image with "salt and pepper" noise (c); (d), (e), and (f): images after MF.

Edges carry useful information about object boundaries which can be used for further analysis. Edge detectors can be grouped into two classes: (a) local techniques, which use operators on local image neighborhoods, and (b) global techniques.

The gradient estimate is computed as

$$\widehat{f} = \left[ \left( f_x \right)^2 + \left( f_y \right)^2 \right]^{\frac{1}{2}} \tag{5}$$

and can be expressed by (Table 1)

$$\widehat{f} = \left[ \left( w_1^t f^4 \right)^2 + \left( w_2^t f^4 \right)^2 \right]^{\frac{1}{2}} \tag{6}$$

or

$$\widehat{f} = \left[ \left( w_1^t f^8 \right)^2 + \left( w_2^t f^8 \right)^2 \right]^{\frac{1}{2}} \tag{7}$$

where $f^4$ and $f^8$ are the neighborhood pixels.

| Edge detector operator | Partial derivatives $f_x$ and $f_y$ along the $x$ and $y$ axes | Weight vectors (kernels) | Window |
|---|---|---|---|
| Differential | $f_x = f(x, y) - f(x, y+1)$; $f_y = f(x+1, y) - f(x, y)$ | $w_1 = \begin{bmatrix} 1 & -1 \end{bmatrix}$; $w_2 = \begin{bmatrix} -1 \\ 1 \end{bmatrix}$ | $2 \times 2$ |
| Roberts | $f_x = f(x, y) - f(x+1, y+1)$; $f_y = f(x+1, y) - f(x, y+1)$ | $w_1 = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$; $w_2 = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}$ | $2 \times 2$ |
| Max. difference | $f = \max[f(x, y), f(x+1, y), f(x, y+1), f(x+1, y+1)] - \min[f(x, y), f(x+1, y), f(x, y+1), f(x+1, y+1)]$ | there is no kernel | $2 \times 2$ |
| Prewitt | $f_x = [f(x+1, y-1) + f(x+1, y) + f(x+1, y+1)] - [f(x-1, y-1) + f(x-1, y) + f(x-1, y+1)]$; $f_y = [f(x-1, y-1) + f(x, y-1) + f(x+1, y-1)] - [f(x-1, y+1) + f(x, y+1) + f(x+1, y+1)]$ | $w_1 = \begin{bmatrix} -1 & -1 & -1 \\ 0 & 0 & 0 \\ 1 & 1 & 1 \end{bmatrix}$; $w_2 = \begin{bmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{bmatrix}$ | $3 \times 3$ |
| Sobel | $f_x = [f(x+1, y-1) + 2f(x+1, y) + f(x+1, y+1)] - [f(x-1, y-1) + 2f(x-1, y) + f(x-1, y+1)]$; $f_y = [f(x-1, y-1) + 2f(x, y-1) + f(x+1, y-1)] - [f(x-1, y+1) + 2f(x, y+1) + f(x+1, y+1)]$ | $w_1 = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}$; $w_2 = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}$ | $3 \times 3$ |

Table 1. Differential gradient operators.
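As a sketch of Eq. (5) with the Sobel masks from Table 1 (the explicit loop, the axis convention for $w_1$/$w_2$, and the border handling are choices made here for clarity, not prescribed by the chapter):

```python
import numpy as np

# Sobel kernels: W1 estimates f_x (row derivative), W2 estimates f_y (column derivative)
W1 = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])
W2 = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])

def gradient_magnitude(img):
    """Eq. (5): f_hat = sqrt(f_x^2 + f_y^2), with f_x, f_y from the Sobel masks."""
    img = img.astype(float)
    h, w = img.shape
    out = np.zeros((h, w))
    for x in range(1, h - 1):            # skip the 1-pixel border
        for y in range(1, w - 1):
            patch = img[x - 1:x + 2, y - 1:y + 2]
            fx = np.sum(W1 * patch)
            fy = np.sum(W2 * patch)
            out[x, y] = np.hypot(fx, fy)
    return out
```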

Another popular operator, not shown in Table 1, is the Canny edge detector, implemented in accordance with Figure 8 [7].

Examples of applications of edge detection operators are shown in Figure 9.

Let $f(x, y)$ be a function of the brightness of the analyzed image; $X$ a finite subset of the plane on which the function $f(x, y)$ is specified; $S = \{S_1, S_2, \cdots, S_K\}$ the division of $X$ into $K$ nonempty subsets $S_i$, $i = 1, 2, \cdots, K$; and Reg the rule specified on the set $S$, assuming the value true if and only if any pair of points from each subset $S_i$ corresponds to a certain homogeneity criterion.



2. Feature extraction

Methods for feature extraction on biometric traits can be categorized into geometrical analysis and textural analysis (Table 2).

The texture image can be seen as an image area containing repetitive pixel intensity patterns arranged in a certain structural manner. The concept of texture has no formal mathematical definition, but there are a number of methods for extracting texture features that can be roughly divided into model-based (fractal and stochastic), statistical, and signal-processing methods.

Methods using signal processing algorithms (in the frequency domain and/or space-frequency domain) are widely used in transform-based texture analysis, e.g., the Fourier transform, Gabor transform, Riesz transform, Radon transform, and wavelet transform.

| Biometric physiological modality | Geometrical features | Texture features |
|---|---|---|
| Fingerprint | Minutiae singular points: bifurcation points, ending points, delta points; triangulation methods; crossing number | Analysis of texture pattern composed with ridges and valleys; local line binary pattern; co-occurrence matrix; Gabor filtering |
| Palmprint | Principal lines; line edge map | Gabor filtering; Riesz transform; wavelet, curvelet; Radon transform |
| Palmar friction ridges | Shape-oriented features | Co-occurrence matrix; Gabor filtering |
| Finger knuckle print | Shape-oriented features: lines, curves, contours; wrinkles | Gabor filtering |
| Hand geometry | Shape-oriented features; finger length and width; 2D and 3D shape descriptors | — |
| Face | Spatial relationship among eyes, lips, nose, chin | LBP; histogram of oriented gradients; SIFT (scale-invariant feature transform) |
| Periocular | Geometry of eyelids, eye folds, eye corners | LBP |
| Iris | — | Phase-based method |
| Ear | Force field transformation | Wavelets |
| Retina | Minutiae singular points; crossing number | — |
| Vein (hand vein, finger vein, forearm vein) | Spatial distribution of minutiae points | Curvelet |

Table 2. Biometric feature extraction methods.
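As a concrete instance of one texture descriptor from Table 2, the following sketch computes a basic $3 \times 3$ LBP code; the clockwise bit ordering starting at the top-left neighbor and the $\geq$ comparison are conventions chosen here, not fixed by the chapter.

```python
import numpy as np

def lbp_code(img, x, y):
    """Basic 3x3 LBP: threshold the 8 neighbors of (x, y) against the center
    pixel and read them out as an 8-bit code."""
    c = img[x, y]
    neighbors = [img[x - 1, y - 1], img[x - 1, y], img[x - 1, y + 1],
                 img[x,     y + 1], img[x + 1, y + 1], img[x + 1, y],
                 img[x + 1, y - 1], img[x,     y - 1]]
    return sum((1 << i) for i, n in enumerate(neighbors) if n >= c)
```

A full LBP feature vector is then the histogram of these codes over the image (or over blocks of it).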

Figure 8. Canny edge detector.

Figure 9. Original image after edge detector operator: Roberts (a), Prewitt (b), Sobel (c), and Canny (d).

The segmentation of the image $f(x, y)$ according to the Reg rule is the division $S = \{S_1, S_2, \cdots, S_K\}$ corresponding to the conditions as follows:

$$\begin{aligned} \text{a. } & \bigcup_{i=1}^{K} S_i = X;\\ \text{b. } & S_i \cap S_j = \emptyset \qquad \forall\, i \neq j;\\ \text{c. } & \mathrm{Reg}(S_i) = \mathit{true} \qquad \forall\, i;\\ \text{d. } & \mathrm{Reg}(S_i \cup S_j) = \mathit{false} \qquad \forall\, i \neq j. \end{aligned} \tag{8}$$

The Reg rule specifies a certain homogeneity criterion and depends on the function $f(x, y)$. We consider segmentation as

$$\mathrm{Seg}: f(x, y) \to s_{i,j} \tag{9a}$$

$$s_{i,j} = \lambda_i \qquad \text{for} \quad (x, y) \in S_i, \quad i = 1, 2, \cdots, K \tag{9b}$$

where $f(x, y)$ and $s_{i,j}$ are functions that define the input image and the segmented image, respectively, while $\lambda_i$ is the label (name) of the area $S_i$.
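A minimal segmentation in the sense of Eqs. (8)–(9) can be sketched as follows, where (purely as an assumption for illustration) the homogeneity rule Reg is "all pixels of the region lie on the same side of a threshold t", $K = 2$, and the labels are $\lambda_1 = 0$, $\lambda_2 = 1$.

```python
import numpy as np

def segment_by_threshold(img, t):
    """Label image s of Eq. (9b): s[x, y] = 0 on S1 (below t), 1 on S2."""
    return (img >= t).astype(int)

def reg_holds(region_pixels, t):
    """Homogeneity rule Reg: true iff all pixels fall on one side of t."""
    return bool((region_pixels < t).all() or (region_pixels >= t).all())
```

Conditions (a)–(d) of Eq. (8) then hold by construction: the two regions cover the image, are disjoint, are each homogeneous, and their union is not.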
