**2.4 Key problems in device design**

As discussed above, the main components of palmprint acquisition devices are cameras and light sources. Therefore, the problems we need to consider when designing new devices are as follows:

1. The resolution of the imaging sensor

2. The focal length of the lens


Many previous works have studied the light sources [15–17]. Generally, the basic goal is to avoid overexposure and underexposure. Image noise increases under low-illumination conditions. Although many new deep learning-based denoising techniques have been proposed [18], the most effective solution for palmprint imaging is to develop active light sources that provide suitable illumination conditions. In this work, we only focus on the first four problems. We developed three palm image capture devices to test the performance of different hardware frameworks (as shown in **Figure 2**). We denote them as *devicea*, *deviceb*, and *devicec*. Among them, *devicea* and *deviceb* are touch-based devices. *devicea* is designed to generate high-quality palmprint images. The device contains an ultra-high-definition imaging sensor (about 500 M pixels) and a distortion-free lens. A long working distance is used to further guarantee the image quality. During the capture process, the user's palm is placed on the device to avoid motion blur. *deviceb* is designed to generate high-distortion palmprint images. It contains a high-definition imaging sensor (about 120 M pixels) and an ultrawide lens. The working distance is very short (about 2 cm). *devicec* is a touchless device; it is designed to capture high- and low-definition images in touchless scenarios. It has two cameras, one high-definition (120 M pixels) and the other low-definition (30 M pixels); both are equipped with distortion-free lenses. We used the different devices to collect palm images from the same palm; the captured images are shown in **Figure 2(d)**–**(f)**. We can see that the 500 M pixel camera can capture clear ridges and valleys of the palmprint, the 120 M pixel camera can capture most of the ridges and valleys, and the 30 M pixel camera can only capture the principal lines and coarse-grained skin textures. For touchless applications, the distance between the palm and the camera is not stable. Distance variations may decrease the palm image PPI and cause defocus-blur. In practice, it is very hard to guarantee the quality of the captured images. Hence, what we want to know is which level of image sharpness is sufficient for palmprint identification.

**Figure 2.**

*Different palmprint acquisition devices and the palm images generated by them. (a) The touch-based device with a 500 M pixel imaging sensor and a long imaging distance. (b) The touch-based device with a 120 M pixel imaging sensor and a very short imaging distance. (c) The multicamera touchless device with 120 M and 30 M pixel imaging sensors and a long imaging distance. (d) The palm image captured by (a) and the corresponding enlarged local regions. (e) The palm image captured by (b) and the corresponding enlarged local regions. (f) The palm images captured by (c) and the corresponding enlarged local regions.*

### **3. System design based on palm image sharpness**

#### **3.1 Palm distance and recognition performance**

The imaging model is shown in **Figure 3**. Let *lp* and *wp* denote the statistical length and width of the palm, respectively. Let *Zmin* and *Zmax* denote the minimum and maximum distances the palm can reach in the field of view (FOV). If the hand is to be captured completely, we need *l* ≥ *lp* and *w* ≥ *wp*, where *l* and *w* are the corresponding sizes of the camera's FOV (as shown in **Figure 3**). Then *Zmin* can be estimated by

$$Z\_{\min} = \max\left(\frac{l\_p/2}{\tan \theta\_u}, \frac{w\_p/2}{\tan \theta\_v}\right) \tag{1}$$


where *θu* and *θv* are the half angles of the FOV along the *u* and *v* directions, respectively. As shown in **Figure 3**, in the generated image, *pw* (in units of pixels) is the palm width and *rw* (in units of pixels) is the length of the tangent line formed by the two finger valley key points. We introduce *rw* here because most region of interest (ROI) localization methods utilize those two key points [1]. The PPI is calculated by

$$ppi = p\_w / w\_p \tag{2}$$

in which *wp* is the fixed real palm size. Based on the triangle geometry constraints defined in the pin-hole imaging model [19], we have

$$p\_w/f = w\_p/z\tag{3}$$

where *f* is the focal length (in units of pixels), which is related to the pixel size of the imaging sensor and the focal length of the lens, and *z* is the distance between the palm and the camera's optical center.

**Figure 3.**

*Imaging model and related notations.*

So *pw* changes with the palm distance. Eq. (3) shows the constraints among the image palm width *pw*, the equivalent focal length *f*, the palm distance *z*, and the real palm width *wp*. According to Eqs. (2) and (3), we have

$$z = f / ppi \tag{4}$$

Hence,


$$Z\_{\max} = f / ppi\_{\min} \tag{5}$$

where *ppimin* is the minimum PPI required for palmprint recognition. So, what we need to know is the relation between image PPI and the system equal error rate (EER). Here, EER is an index of the system's recognition performance; lower is better. In the data collection process, it is very difficult to ask users to place and hold their hands at the designed target distances, so we use a public database to conduct simulation experiments on the relationship between EER and PPI. In this section, the COEP database [20] is selected because it was collected in a highly constrained environment. Its images were captured by a single-lens reflex (SLR) camera, so they have a high signal-to-noise ratio (SNR) and very low distortion. During capturing, the user's palm is placed stably on a backboard, and the image resolution is sufficient to record the palmprint ridges and valleys. We therefore take the images in COEP as the ground truth, that is, as images captured with proper focus and sufficient PPI. The images are then resized to generate palm images with different PPI. The mean PPI of a database is defined as

$$\overline{ppi} = \frac{1}{N} \sum\_{i=1}^{N} ppi\_i \tag{6}$$

where *N* is the image number of the dataset and *ppii* is the *ppi* value of the *i*-th palm image. However, in practice the captured image may contain radial and tangential distortions. The distortion parameters of the imaging model can be estimated by camera calibration [19], and based on this model the captured image can be undistorted. Undistortion, however, also introduces blur into the undistorted image. Taking this into consideration, we select four different kinds of lenses for testing: long-focus, standard, wide-angle, and ultrawide-angle lenses (as shown in **Figure 4**). We use them to capture checkerboard images from different views. After camera calibration, we obtain the corresponding intrinsic parameters, which are listed in **Table 2**. *fu* and *fv* are the focal lengths along the *u* and *v* directions, respectively; *θu* and *θv* are the half angles of the FOV along the *u* and *v* directions, respectively; *k1*, *k2*, and *k3* are radial distortion coefficients; and *p1* and *p2* are tangential distortion factors. As shown in **Figure 5**, the images in COEP are first distorted by the four distortion parameter sets and then undistorted by coordinate mapping and pixel interpolation based on the distortion model. The obtained images are further resized to generate palm images with different PPI. According to [21], the average palm width is 84 mm for males and 74 mm for females. In [22], the average palm width is 84.18 ± 6.81 mm for Germans and 82.38 ± 11.82 mm for Chinese, and most of their subjects are male. Since palm width varies with gender, age, and race, it depends on the specific application scenario. For simplicity, we set *wp* ≈ 80 mm (3.15 inches) and *lp* ≈ 110 mm (4.33 inches) in our work. The original image size of COEP is 1600 × 1200. In order to remove the background area, the images are cropped to 1280 × 960. In this experiment, we generate 10 datasets in total by image resizing; detailed statistics are listed in **Table 3**. For each palm image, using the ROI localization method proposed in [1], we can detect the tangent line of the two finger valleys and then obtain *rw*. *pw* can also be detected based on the relative coordinate system of the palm. Given a dataset, the mean *pw* and mean *rw* are defined as

$$\overline{p\_w} = \frac{1}{N} \sum\_{i=1}^{N} p\_w^i \tag{7}$$

$$\overline{r\_w} = \frac{1}{N} \sum\_{i=1}^{N} r\_w^i \tag{8}$$

where *N* is the image number of the dataset, and *p<sup>i</sup><sub>w</sub>* and *r<sup>i</sup><sub>w</sub>* are the *pw* and *rw* values of the *i*-th palm image. Here, *pw* is selected as the index to measure the resolution of the palm image. Sample images and the corresponding enlarged local patches of the generated datasets are shown in **Figure 5**. **Table 4** lists the EERs and thresholds obtained by CompCode on the different datasets. Here, *eav* is an index for sharpness assessment [23]. It should be noted that the sharpness level (*eav*) obtained here does not take defocus-blur into consideration; that will be studied further in the next subsection. The distribution curves of *pw* and the corresponding EER and *eav* are shown in **Figure 6**. From it, we can see that the effect of undistortion on image sharpness is not obvious. Among the four lenses (as shown in **Figure 4**), the long-focus lens obtains the highest sharpness and the wide-angle lens the lowest. As to the ultrawide-angle lens, many newly designed lenses have improved optical models that generate large distortions only in the boundary regions and small distortions in the center region. In this experiment, the wide-angle lens gains more distortion than the ultrawide-angle lens; this depends on the specific optical model the manufacturer used. Generally, the palm is placed at the center of the image, so the differences between the four lenses are not large. Although the long-focus lens can provide high-sharpness palm images, in real-world scenarios the wide-angle lens is recommended because its wide FOV provides a better user experience during image capturing. As shown in **Figure 6**, the EERs increase drastically when *pw* is less than 130 pixels. So when selecting the imaging sensor and determining the working distance, we should at least guarantee that the palm width in the final palm image is larger than 130 pixels; 300 pixels is recommended according to **Figure 6**.
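To make the geometry concrete, the sketch below evaluates Eqs. (1)–(5) numerically. It is an illustrative calculation, not code from the chapter; the palm dimensions (*wp* ≈ 80 mm, *lp* ≈ 110 mm) are the values adopted above, the focal length and half FOV angles are taken from the wide-angle lens in Table 2, and the 130-pixel minimum palm width follows the recommendation derived from Figure 6.

```python
import math

# Illustrative evaluation of Eqs. (1)-(5); values assumed from the text:
# wp = 80 mm, lp = 110 mm, wide-angle lens in Table 2 (f ~ 435.57 px,
# half FOV angles 72.6 deg x 57.7 deg), minimum palm width 130 px.
wp_mm, lp_mm = 80.0, 110.0
theta_u, theta_v = math.radians(72.6), math.radians(57.7)
f_px = 435.57

# Eq. (1): closest distance at which the whole hand still fits in the FOV.
z_min_mm = max((lp_mm / 2) / math.tan(theta_u),
               (wp_mm / 2) / math.tan(theta_v))

# Eq. (3): palm width in pixels at distance z, and Eq. (2): the resulting PPI.
def palm_width_px(z_mm):
    return f_px * wp_mm / z_mm

def ppi(z_mm):
    return palm_width_px(z_mm) / (wp_mm / 25.4)   # wp converted to inches

# Eq. (5), rewritten with pw_min = ppi_min * wp: farthest usable distance.
pw_min_px = 130.0
z_max_mm = f_px * wp_mm / pw_min_px

print(f"Z_min = {z_min_mm:.0f} mm, Z_max = {z_max_mm:.0f} mm")
print(f"At z = 150 mm: pw = {palm_width_px(150):.0f} px, ppi = {ppi(150):.0f}")
```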

| **Palm region size** | 1280 × 960 | 1120 × 840 | 960 × 720 | 800 × 600 | 640 × 480 | 480 × 360 | 320 × 240 | 240 × 180 | 160 × 120 | 80 × 60 |
|---|---|---|---|---|---|---|---|---|---|---|
| *rw* | 304.8 | 266.7 | 228.6 | 190.5 | 152.4 | 114.3 | 76.2 | 57.2 | 38.1 | 19.1 |
| *pw* | 524.8 | 459.2 | 393.6 | 328.0 | 262.4 | 196.8 | 131.2 | 98.4 | 65.6 | 32.8 |
| *ppi* | 166.6 | 145.8 | 125.0 | 104.1 | 83.3 | 62.5 | 41.7 | 31.2 | 20.8 | 10.4 |

**Table 3.**

*Palm region size, palm width, and corresponding ppi.*
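As a small illustration of how such resized datasets could be produced (a sketch only; the chapter does not specify the implementation, and OpenCV is assumed here), each 1280 × 960 crop is simply resized to the target sizes of Table 3:

```python
import cv2

# Hypothetical sketch: resize a 1280x960 COEP crop to the sizes in Table 3,
# so that pw and ppi shrink proportionally with the image width.
TARGET_SIZES = [(1280, 960), (1120, 840), (960, 720), (800, 600), (640, 480),
                (480, 360), (320, 240), (240, 180), (160, 120), (80, 60)]

def scaled_versions(img):
    # INTER_AREA is a reasonable choice for downsampling.
    return {size: cv2.resize(img, size, interpolation=cv2.INTER_AREA)
            for size in TARGET_SIZES}
```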

**Figure 4.**

*Images captured by different lenses. (a) The imaging device and the different kinds of lenses. (b) An image captured by the long-focus lens. (c) An image captured by the standard lens. (d) An image captured by the ultrawide-angle lens.*


| **Lens** | *fu* | *fv* | *θu* | *θv* | *k1* | *k2* | *k3* | *p1* | *p2* |
|---|---|---|---|---|---|---|---|---|---|
| Long-focus | 3507.05 | 3497.24 | 10.4° | 7.9° | −0.37 | −1.36 | — | −0.0018 | −0.0000 |
| Standard | 706.96 | 707.29 | 48.7° | 37.5° | 0.13 | −0.51 | — | 0.0055 | −0.0001 |
| Wide-angle | 435.57 | 436.10 | 72.6° | 57.7° | −0.41 | 0.14 | — | 0.0014 | 0.0006 |
| Ultrawide | 217.19 | 217.99 | 111.7° | 95.5° | 0.05 | −0.07 | 0.0105 | −0.0002 | −0.0018 |

**Table 2.**

*The calibrated parameters of different camera lenses.*


**Figure 5.** *Images obtained at different distances (PPI) using different distortion models.*
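The distort-then-undistort simulation shown in Figure 5 can be sketched as follows. This is not the authors' code: it assumes OpenCV's radial/tangential distortion model, uses the wide-angle parameters from Table 2, and places the principal point at the image center (a simplification, since Table 2 does not list it); the file name is a placeholder.

```python
import cv2
import numpy as np

# Minimal sketch of the distortion simulation and undistortion described above.
img = cv2.imread("coep_palm.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file name
h, w = img.shape

fu, fv = 435.57, 436.10                       # wide-angle focal lengths (px), Table 2
K = np.array([[fu, 0, w / 2],
              [0, fv, h / 2],
              [0, 0, 1]], dtype=np.float64)   # principal point assumed at image center
dist = np.array([-0.41, 0.14, 0.0014, 0.0006, 0.0], dtype=np.float64)  # k1 k2 p1 p2 k3

# Synthesize lens distortion: for every pixel of the distorted output, find the
# ideal (pinhole) pixel it corresponds to, then sample the clean image there.
uv = np.stack(np.meshgrid(np.arange(w), np.arange(h)), axis=-1).astype(np.float32)
ideal = cv2.undistortPoints(uv.reshape(-1, 1, 2), K, dist, P=K).reshape(h, w, 2)
distorted = cv2.remap(img, ideal[..., 0], ideal[..., 1], cv2.INTER_LINEAR)

# Undistort again via coordinate mapping and pixel interpolation, as in Figure 5.
undistorted = cv2.undistort(distorted, K, dist)
```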






**Figure 6.**

*The relationship between recognition performance, image sharpness, and palm width (in units of pixels).*

| *pw* | Long-focus EER (%) | *eav* | Standard EER (%) | *eav* | Wide-angle EER (%) | *eav* | Ultrawide EER (%) | *eav* |
|---|---|---|---|---|---|---|---|---|
| 524.8 | 1.445 | 29.0 | 1.539 | 28.6 | 1.508 | 28.1 | 1.634 | 28.4 |
| 459.2 | 1.477 | 26.5 | 1.634 | 26.3 | 1.571 | 25.9 | 1.602 | 26.1 |
| 393.6 | 1.445 | 26.1 | 1.619 | 25.9 | 1.553 | 25.5 | 1.634 | 25.8 |
| 328.0 | 1.414 | 25.4 | 1.571 | 25.3 | 1.550 | 25.1 | 1.631 | 25.3 |
| 262.4 | 1.414 | 23.7 | 1.602 | 23.6 | 1.508 | 23.4 | 1.539 | 23.6 |
| 196.8 | 1.477 | 23.9 | 1.571 | 23.5 | 1.539 | 23.1 | 1.602 | 23.2 |
| 131.2 | 1.508 | 20.2 | 1.783 | 20.0 | 1.634 | 19.7 | 1.627 | 19.8 |
| 98.4 | 1.571 | 18.4 | 1.759 | 18.3 | 1.728 | 18.1 | 1.728 | 18.2 |
| 65.6 | 2.177 | 14.8 | 2.136 | 14.7 | 2.325 | 14.6 | 2.262 | 14.7 |
| 32.8 | 6.346 | 9.9 | 6.313 | 9.8 | 6.274 | 9.8 | 6.535 | 9.8 |

**Table 4.**

*The EERs obtained for different palm widths using different lens models.*
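As a reminder of how EER values such as those in Table 4 are obtained, the sketch below computes the EER from genuine and impostor matching scores. It is illustrative only (the chapter uses CompCode matching) and assumes higher scores mean greater similarity.

```python
import numpy as np

# Illustrative EER computation from genuine/impostor score lists (higher = more similar).
def compute_eer(genuine, impostor):
    genuine, impostor = np.asarray(genuine, float), np.asarray(impostor, float)
    scores = np.concatenate([genuine, impostor])
    labels = np.concatenate([np.ones(genuine.size), np.zeros(impostor.size)])
    far, frr, thresholds = [], [], np.unique(scores)
    for t in thresholds:
        accept = scores >= t
        far.append(np.mean(accept[labels == 0]))    # impostors falsely accepted
        frr.append(np.mean(~accept[labels == 1]))   # genuines falsely rejected
    far, frr = np.array(far), np.array(frr)
    i = np.argmin(np.abs(far - frr))                # point where FAR and FRR cross
    return (far[i] + frr[i]) / 2, thresholds[i]     # EER and the operating threshold
```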

#### **3.2 Image sharpness range and recognition performance**

In the above subsection, based on the imaging model and the capture device, we studied the relationship between palm distance, PPI, and EER. However, the hardware and the parameters of the imaging model are not always available in practice. Besides FOV, depth of focus (DOF) should also be considered, since defocus-blur will also affect the final accuracy. DOF is highly related to the specific application. Our previous work [23] shows that the accuracy of palmprint recognition is related to image sharpness. Here, what we want to know is in which sharpness range the palmprint recognition accuracy is acceptable.

In this section, we try to analyze the palmprint image sharpness based on the Gaussian scale space [24]. The transform function is defined as


$$L(x, y) = I(x, y) \ast G(\sigma) \tag{9}$$

where (*x*, *y*) are the pixel coordinates and *σ* is the scale coordinate. *G(σ)* is the Gaussian filter used to smooth the input image, and *σ* is its standard deviation. *I* is the initial image, and *L* is the smoothed image. Images in the scale space therefore have different sharpness levels. As shown in **Figure 7**, the scale space function tries to generate all the potential palmprint images that may be captured in practice. In order to achieve scale invariance, SIFT [24] utilizes all the information in the scale space. The method proposed in [25] is adopted here to conduct SIFT-based palmprint verification, in which each palmprint ROI image is matched against all the other images in the database. After SIFT feature extraction and matching, the random sample consensus (RANSAC) algorithm is used to remove outliers. A matching between two images captured from the same palm is a genuine matching, and a matching between two images captured from different palms is an impostor matching. The number of matches is used as the matching score. A Gaussian image pyramid is a sampled subset of the Gaussian scale space. We wonder whether all the image layers in the Gaussian image pyramid contribute equally to the final matches. In this experiment, once two key points from two intra-class images are matched, their scales are recorded. The resulting statistics of *σ* are shown in **Figure 8**. From them, we can see that the contributions of different scales are not the same; most of the distinctive local patterns exist only at some specific scales, and the other layers are not discriminative. So the captured palm ROI image should not fall into those useless scale ranges. In fact, the palmprint shows different patterns at different scales. When the image is captured clearly, the palmprint consists of principal lines, wrinkles, ridges, valleys, and some minutiae points. As *σ* increases, the palmprint ROI image tends to show spot patterns; the fine-grained ridges and valleys are smoothed and reduced to large-scale textures, as can be seen in **Figure 1**. Different patterns have different discriminative capacities; as a result, the recognition performance changes with the image sharpness. In practice, the scale index *σ* corresponds to the palm distance. Once the palm moves out of the DOF of the system, the generated image suffers from defocus-blur, and the recognition performance changes.
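A rough sketch of this matching pipeline is given below. It is an approximation of the procedure in [25] rather than a faithful reimplementation: it assumes OpenCV's SIFT, Lowe's ratio test, and a homography model for the RANSAC outlier removal, and it records the matched key point sizes as a proxy for their scales.

```python
import cv2
import numpy as np

sift = cv2.SIFT_create()

def match_and_record_scales(roi_a, roi_b, ratio=0.75):
    # Detect SIFT key points and descriptors on both ROI images.
    kp_a, des_a = sift.detectAndCompute(roi_a, None)
    kp_b, des_b = sift.detectAndCompute(roi_b, None)
    if des_a is None or des_b is None:
        return 0, []
    # Lowe's ratio test on 2-nearest-neighbour matches.
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_a, des_b, k=2)
    good = [p[0] for p in knn if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    if len(good) < 4:
        return len(good), []
    # RANSAC (here with a homography model) to remove outlier correspondences.
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    _, mask = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 5.0)
    if mask is None:
        return 0, []
    inliers = [m for m, keep in zip(good, mask.ravel()) if keep]
    # The key point size is proportional to the scale (sigma) it was detected at.
    scales = [kp_a[m.queryIdx].size for m in inliers]
    return len(inliers), scales      # the number of matches serves as the matching score
```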

In order to analyze the recognition performance variations, we utilize the Gaussian image pyramid to generate palmprint images at different scales.

**Figure 7.** *The palmprint Gaussian scale space.*


**Figure 8.**

*Scale contributions for key point matching: (a) obtained from COEP, (b) obtained from IITD, (c) obtained from KTU, (d) obtained from GPDS.*

For a given dataset, all the ROI images in it are filtered with Gaussian filter banks, and 20 scaled datasets are generated. The *σ* used in this experiment is defined as

$$\sigma = \sigma\_0 \cdot 2^{o + s/S} \tag{10}$$

$$k = 2^{1/S} \tag{11}$$


$$id = (o - o\_{\min}) \cdot S + s \tag{12}$$

where *σ0* is the base standard deviation; *k* is the step factor for increasing and decreasing *σ*; *S* is the number of intervals in each octave; *o* and *s* are the octave and interval indices, respectively; and *id* is the image layer ID in the Gaussian scale space. *omin* is the minimum octave index; if *omin* < 0, it can generate a *σ* smaller than *σ0*. Here, *σ0* = 1.6 ∗ *k*, which is the default setting in VLFeat [26]. In this experiment, *omin* = −2, *smin* = 0, and *S* = 4, so the range of *σ* is from 0.476 to 5.709, which covers the range used in [27]. So, given one dataset, we can generate 20 datasets according to different scales. The mean EAV (*eav*) is utilized to quantify the sharpness level of each generated dataset. **Figure 9** shows the distributions of *eav* and the scale index *σ* on different publicly available palmprint databases. It shows that the sharpness level decreases almost linearly with *id* in the Gaussian scale space when *id* is smaller than 10 (*σ* = 2.3). Of course, the specific parameters of the curves are not the same on different databases; they are related to each database's initial sharpness level *eav*.
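A compact sketch of this scale sampling, under the settings quoted above (*σ0* = 1.6·*k*, *omin* = −2, *S* = 4), might look as follows; the dataset-smoothing step is an assumption about the implementation, since the text only states that Gaussian filter banks are used.

```python
import cv2

# Sketch of Eqs. (10)-(12) with the settings quoted above.
S = 4
k = 2 ** (1.0 / S)            # Eq. (11)
SIGMA0 = 1.6 * k
O_MIN = -2

def layer_sigma(o, s):
    return SIGMA0 * 2 ** (o + s / S)       # Eq. (10)

def layer_id(o, s):
    return (o - O_MIN) * S + s             # Eq. (12)

def scaled_dataset(roi_images, o, s):
    # Smooth every ROI image to one sharpness level (one simulated palm distance).
    sigma = layer_sigma(o, s)
    return [cv2.GaussianBlur(img, (0, 0), sigma) for img in roi_images]
```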

The work reported in [27] shows that there exists a relationship between the recognition performance and the image sharpness. In their work, a sharpness adjustment technique is developed to improve the system EER. Different sharpness indices are tested, and EAV performs best. However, only one touch-based palmprint database is tested in their study.


**Figure 9.** *The curves of eav and the corresponding scale indices on different databases.*


**Figure 10.** *The curves of EER and eav on different databases obtained by different recognition algorithms. (a) The EER is obtained by Competitive Code. (b) The EER is obtained by OLOF. (c) The EER is obtained by RLOC.*

In order to ensure the idea is applicable to different databases, devices, and algorithms, we utilize CompCode [28], OLOF [29], and RLOC [30] to further test the recognition accuracy variations on the generated datasets. In this experiment, different databases are used, including GPDS [31], IITD [32], KTU [33], and TJU [34]. **Figure 10** shows the curves of EER and the corresponding *eav*. From it, we can see that the trend of GPDS is not the same as that of the other databases. This is because GPDS is a difficult database that contains large illumination variations and localization errors; hence, its recognition accuracy is affected more by other factors. According to **Figure 10**, in order to guarantee the system's discriminative capacity, *eav* should be larger than 10.

### **4. Conclusions**

When designing a touchless palmprint recognition system, FOV and DOF are two key problems of palmprint imaging. FOV is related to image PPI, and DOF is related to image blur.

**Figure 11.**

*The framework of this chapter.*

**Figure 11** shows the main idea and framework of our work. In this chapter, we first studied the image PPI required for palmprint identification. Based on it, the minimum and maximum palm distances within the FOV are determined; this also provides a reference for selecting the image sensor resolution. Then, image blur is taken into consideration: different datasets are generated by the Gaussian scale space function, and the EER variation curves are obtained with different features on different databases. During image collection, when the palm moves out of the DOF, the sharpness of the captured image changes, so *eav* can serve as an index of whether the palm is placed correctly within the DOF.

Based on the findings of this research, when designing new systems, the palm width in the captured image should be larger than 300 pixels, and in any case not smaller than 130 pixels. After the system is deployed, when the user presents his/her hand, the *eav* of the ROI image should be larger than 10. A more precise *eav* threshold should be obtained from the training dataset of the real system, because other factors, such as the auto-exposure-control and auto-white-balance-control functions of the imaging sensor, may affect the final EER distributions; the major trends, however, are similar. The main contribution of this work is providing key references for system design based on image sharpness.

**Author details**

Xu Liang<sup>1,3</sup>†, Zhaoqun Li<sup>2,3</sup>†, Jinyang Yang<sup>1</sup> and David Zhang<sup>1,3,4</sup>\*

1 Harbin Institute of Technology, Shenzhen, China

2 The Chinese University of Hong Kong, Shenzhen, China

3 Shenzhen Institute of Artificial Intelligence and Robotics for Society, Shenzhen, China

4 School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, China

\*Address all correspondence to: davidzhang@cuhk.edu.cn; csdzhang@comp.polyu.edu.hk

† These authors contributed equally.

© 2020 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## **Acknowledgements**

This work is supported in part by the NSFC under grant 61332011, in part by the Shenzhen Fundamental Research under grants JCYJ20180306172023949 and JCYJ20170412170438636, and in part by the Shenzhen Institute of Artificial Intelligence and Robotics for Society.

