**Basic Principles and Trends in Hand Geometry and Hand Shape Biometrics**

Miroslav Bača, Petra Grd and Tomislav Fotak

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/51912

## **1. Introduction**


Researchers in the field of biometrics have found that the human hand, especially the palm, contains characteristics that can be used for personal identification. These characteristics mainly include the thickness of the palm area and the width, thickness and length of the fingers. A large number of commercial systems use these characteristics in various applications.

Hand geometry biometrics is not a new technique. It was first mentioned in the early 1970s, and it is older than palm print recognition, which is part of dactyloscopy. The first known use was for security checks on Wall Street.

Hand geometry is based on the structure of the palm and fingers, including the width of the fingers at different places, the length of the fingers, the thickness of the palm area, etc. Although these measurements are not very distinctive among people, hand geometry can be very useful for identity verification, i.e. personal authentication. A special task is to combine several non-distinctive characteristics in order to achieve better identification results. This technique is widely accepted, and verification involves simple data processing. These properties make hand geometry an ideal candidate for research and development of new acquisition, preprocessing and verification techniques.

Anthropologists believe that humans survived and developed to today's state (Homo sapiens) thanks to highly developed brains and opposable thumbs. The flexible, easily moved human hand enables us to catch and throw various things, but also to make and use various kinds of tools in everyday life. Today, the human hand is not used just for those purposes; it can also serve as a personal identifier, i.e. it can be used for personal identification.

Even the ancient Egyptians used personal characteristics to identify people. Since then, technology has made great improvements in the recognition process, and modern scanners based on hand geometry now use infrared light and microprocessors to achieve the best possible comparison of hand geometry patterns.

During the last century several technologies using hand geometry were developed, ranging from electromechanical devices to electronic scanners. The history of those devices begins in 1971, when the US Patent Office patented a device for measuring hand characteristics and capturing features for comparison and identity verification [1-3]. Another important event in hand geometry history came in the mid-1980s, when Sidlauskas patented a device for hand scanning and founded Recognition Systems Inc. of Campbell, California [4]. The absolute peak for this biometric characteristic came in 1996, during the Olympic Games in Atlanta, when it was used for access control in the Olympic village [5].

The human hand contains enough anatomical characteristics to provide a mechanism for personal identification, but it is not considered unique enough to provide a mechanism for complete personal identification. Hand geometry is time sensitive, and the shape of the hand can change with illness, aging or weight change. The technique actually relies on the fact that every person's hand is formed differently and will not change drastically in the near future.

When a hand is placed on the scanner, the device usually takes a three-dimensional image of the hand. The shape and length of the fingers are measured, as well as the wrist. The device compares the information taken from the hand scanner against patterns already stored in the database. After the identification data are confirmed, one usually gains access to a secured place. This process has to be quick and effective; the whole procedure takes less than five seconds. Today, hand scanners are well accepted in offices, factories and other business environments.

Based on the data used for personal identification, technologies for reading the human hand can be divided into three categories:

**•** Palm technology,

**•** Hand vein technology,

**•** Hand geometry and hand shape technology.

The first category is considered the classic approach in hand biometrics. As mentioned earlier, it is part of dactyloscopy, so the methods used here are similar to those used for fingerprints. The size, shape and flow of the papillae are measured, and minutiae are the main features in the identification process. Image preprocessing and normalization in this category yields a binary image containing the papillae and their distances. Because of the different lighting when taking an image, the palm can be divided into five areas [6], although, strictly medically speaking, if we consider the muscles it has only three. The areas of the palm are: lower palm, middle palm, upper palm, thenar (thumb part) and hypothenar (little finger part). The location of these areas can be seen in Figure 1.

**Figure 1.** Palm areas according to [6]

The second category uses a similar approach for capturing the hand image, but instead of an ordinary camera or scanner it uses specialized devices containing scanners with infrared light, or some other technology that can retrieve an image of the veins under the human skin. Hand vein biometrics has been gaining popularity in recent years, and it is likely to become one of the main biometric characteristics of the future. The contactless approach to capturing the structure of human veins gives promising results in this field.

The third category is the primary interest of this chapter. Therefore, it will be explained later in the text.

A hand image taken with a digital camera, usually with the hand placed on a semitransparent base, is later processed to extract the hand shape (this is usually known as preprocessing: transforming the image data into a form suitable for the system in which it is used). It includes extracting small hand curves that can be parts of one bigger curve representing the hand shape. Using those curves and their characteristics, one can define the hand features that will be used in the authentication or identification system being built.

The first part of this chapter gives an introduction to hand geometry and hand shape, along with a description of two different systems for hand geometry. After that, the acquisition of characteristics and different extraction techniques are described. The next section gives an overview of new trends in hand geometry. At the end, the technology and its advantages and disadvantages are described.

## **2. Hand geometry and hand shape**

Every human hand is unique. In 2002, thirty global features of hand geometry were defined [7]. These features are very exact, but can be represented as global features of contactless 2D hand geometry. The measures that the authors defined are shown in Figure 2.

**Figure 2.** Hand geometry features according to [7]

The features which the authors defined in their works, shown in Figure 2, are the following:

**1.** Thumb length

**2.** Index finger length

**3.** Middle finger length

**4.** Ring finger length

**5.** Pinkie length

**6.** Thumb width

**7.** Index finger width

**8.** Middle finger width

**9.** Ring finger width

**10.** Pinkie width

**11.** Thumb circle radius

**12.** Index circle radius lower

**13.** Index circle radius upper

**14.** Middle circle radius lower

**15.** Middle circle radius upper

**16.** Ring circle radius lower

**17.** Ring circle radius upper

**18.** Pinkie circle radius lower

**19.** Pinkie circle radius upper

**20.** Thumb perimeter

**21.** Index finger perimeter

**22.** Middle finger perimeter

**23.** Ring finger perimeter

**24.** Pinkie perimeter

**25.** Thumb area

**26.** Index finger area

**27.** Middle finger area

**28.** Ring finger area

**29.** Pinkie area

**30.** Largest inscribed circle radius

Those features became the typical features in systems that use hand geometry in the identification or authentication process.

Many hand geometry systems have peg-guided hand placement: the user has to place his or her hand according to pegs on the device surface, and the image of the hand is captured using an ordinary digital camera [8]. The length of the fingers, their width, thickness, curvature and the relative location of all mentioned features differ among people. Hand geometry scanners usually use an ordinary CCD camera, sometimes with infrared light and reflectors, for image capturing. This type of scanner does not take palm details such as papillae into consideration. It is not interested in fingerprints, palm life lines or other ridges, colors or even scars on the hand surface. In combination with reflectors and mirrors, the optical device can provide two hand images, one from the top and another from the bottom side of the hand.
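Once a binary silhouette of the hand is available, several of the measurements above reduce to simple pixel statistics. The following is a rough, self-contained illustration of that idea (the helper name and the particular measurements are hypothetical, not the pipeline of any system cited here):

```python
import numpy as np

def hand_mask_features(mask):
    # A few coarse geometry measurements from a binary hand silhouette:
    # area (pixel count), bounding-box height/width, and the widest row.
    # Real systems measure each finger individually; this only sketches the idea.
    mask = np.asarray(mask, bool)
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return {"area": 0, "height": 0, "width": 0, "max_row_width": 0}
    return {
        "area": int(mask.sum()),
        "height": int(ys.max() - ys.min() + 1),
        "width": int(xs.max() - xs.min() + 1),
        "max_row_width": int(mask.sum(axis=1).max()),
    }
```

Finger-level features (lengths, widths at several heights, circle radii) would be computed the same way, restricted to the silhouette of each segmented finger.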


Besides digital cameras, document scanners are also commonly used for capturing the hand image. While systems that use a digital camera place the hand on a semitransparent base to achieve better contrast, document scanners rely only on their own scanning technology, and the process is somewhat slower than with a digital camera.

**Figure 3.** How hand geometry scanners work

As shown in Figure 3, these devices use a 28 cm optical path between the camera and the surface on which the hand is placed. The reflective optical path minimizes the space needed to build such a device. The device measures a hand a couple of times to get a representative sample that will be compared against all others. Using an application defined for the given purpose, the processor converts these measurements into a biometric pattern. This process is simply called sampling.

#### **2.1. Systems with pegs**

Peg-based hand geometry systems use pegs on the device board to guide the placement of the hand on the device. During the sampling process, the scanner prompts a person to put his or her hand on the board several times. The board is highly reflective and projects the image of a shaded palm, while the pegs that protrude from the surface plate hold the fingers in the position necessary for acquiring a sample. In this way, these systems allow better measuring compared to systems without pegs, because the hand is fixed to the surface and cannot be shifted. The advantage of this system over systems with no pegs is the predefined basis for measuring the characteristics, while the biggest disadvantage is that the pegs can deform, to a certain extent, the appearance of the hand, so measurements are not very precise, which leads to suboptimal results. It has to be mentioned that varying finger positions can cause variations in the features measured along the fixed axes.

A system that uses pegs was developed by Jain et al. [9]. Their system captured images in 8-bit grayscale at 640x480 pixels. The authors also developed a GUI which helped users place the hand on the surface. The palm and fingers were measured through fourteen intersection points; the system provided support through control points and helped in defining the intersection points. Two different techniques were used to account for skin color differences, lighting and noise, which are relevant for the eigenvector calculation. The researchers found no big differences in system characteristics when using either of the proposed techniques. They acquired 500 images from 50 people. The system operated in two phases: acquisition and verification. In the first phase, a new user was added to the database or an existing user was updated. Five images of the same hand were captured; the user had to remove his or her hand from the device surface before every scan and place it again according to the pegs. The acquired images were used to obtain the eigenvector, a process that includes calculating the arithmetic mean of the eigenvalues. The verification phase is the process of comparing a currently acquired hand image with the one already in the database: two hand images were acquired, the 'mean' eigenvector was calculated, and this vector was compared with the vector stored in the database for the user the system was trying to verify.

Let *F* = (*f*<sub>1</sub>, *f*<sub>2</sub>, …, *f*<sub>*d*</sub>) be the *d*-dimensional eigenvector stored in the database and *Y* = (*y*<sub>1</sub>, *y*<sub>2</sub>, …, *y*<sub>*d*</sub>) the *d*-dimensional eigenvector of the hand being verified. Verification has a positive result if the distance between *F* and *Y* is smaller than a defined threshold. For calculating the distance, the authors used the absolute, weighted absolute, Euclidean and weighted Euclidean distances, with the corresponding formulas:

**•** Absolute distance:


$$\sum\_{j=1}^{d} |y\_j - f\_j| < \alpha \tag{1}$$

**•** Weighted absolute distance:

$$\sum\_{j=1}^{d} \frac{|y\_j - f\_j|}{\sigma\_j} < \omega\_{\alpha} \tag{2}$$

**•** Euclidean distance:

$$\sqrt{\sum\_{j=1}^{d} (y\_j - f\_j)^2} < \varepsilon \tag{3}$$

**•** Weighted Euclidean distance:

$$\sqrt{\sum\_{j=1}^{d} \frac{(y\_j - f\_j)^2}{\sigma\_j^2}} < \omega \tag{4}$$

Where:

**•** *σ<sub>j</sub>*<sup>2</sup> is the feature variance of the *j*th feature, and

**•** *α*, *ω<sub>α</sub>*, *ε* and *ω* are the thresholds for the respective distance metrics.
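As a concrete sketch of the verification step, the four distance metrics in Eqs. (1)-(4) and the threshold test can be written as follows. This is an illustrative NumPy implementation of the notation above, not code from [9]; the function names and the `enroll` helper (the arithmetic-mean template) are assumptions.

```python
import numpy as np

def absolute_distance(y, f):
    # Eq. (1): sum of absolute feature differences
    return np.sum(np.abs(y - f))

def weighted_absolute_distance(y, f, sigma):
    # Eq. (2): absolute differences weighted by each feature's std. deviation
    return np.sum(np.abs(y - f) / sigma)

def euclidean_distance(y, f):
    # Eq. (3)
    return np.sqrt(np.sum((y - f) ** 2))

def weighted_euclidean_distance(y, f, sigma):
    # Eq. (4): squared differences weighted by the feature variance sigma^2
    return np.sqrt(np.sum((y - f) ** 2 / sigma ** 2))

def enroll(samples):
    # Enrollment template: arithmetic mean of several feature vectors
    return np.mean(np.asarray(samples, float), axis=0)

def verify(y, f, threshold, metric=euclidean_distance, **kw):
    # Positive result if the distance between Y and F is below the threshold
    return bool(metric(np.asarray(y, float), np.asarray(f, float), **kw) < threshold)
```

For the weighted metrics, `sigma` holds the per-feature standard deviations estimated from the enrollment samples.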

Another research effort in the field of hand geometry and hand shape was made in 2000. Sanchez-Reillo and associates developed a system which takes 640x640 pixel images in JPEG format. The surface on which the hand had to be placed had 6 guiding pegs. They used 200 images from 20 persons; the subjects differed in gender, affiliation and personal habits. Before the features were extracted, all images were transformed into binary form using the following formula [10]:

$$I\_{BW} = f\left\{ (I\_R + I\_G) - I\_B \right\} \tag{5}$$

Where:

**•** *I<sub>BW</sub>* is the resulting binary image,

**•** *I<sub>R</sub>*, *I<sub>G</sub>*, *I<sub>B</sub>* are the values of the red, green and blue channels respectively, and

**•** *f* is a contrast stretching function.
Sometimes a digital image does not use its whole contrast range. By stretching the lightness values over the allowed range, the contrast of the image is increased, which allows better extraction of the hand from the image background. Every 'false' pixel is later (if necessary) removed from the image using a threshold value. To avoid deviations, the image is scaled to a fixed size. Two pegs are used to locate the hand. Afterwards, using a *Sobel edge detector*, the system can extract the hand shape. The final result of this process is an image containing the hand shape and a side-view image containing the pegs in predefined positions. The first image is used for extracting palm and finger features. The authors of this paper extracted 31 features to construct the eigenvector. They also defined the deviation as the distance between the middle point of the finger and the middle point of the line between two fingers, and the height at which the finger was measured. Euclidean distance, Hamming distance and a Gaussian Mixture Model (GMM) were used to measure the similarity of eigenvectors. This paper was the first to present hand geometry based identification with satisfying results. Each subject (user) had from 3 to 5 templates stored in the database, each template containing from 9 to 25 features. The GMM gave the best results in all tested cases. With 5 stored templates, the GMM-based system achieved 96% identification accuracy and 97% verification accuracy.
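To make the preprocessing concrete, here is a minimal sketch of Eq. (5) followed by thresholding. The chapter does not specify the contrast stretching function *f*; a simple min-max stretch is assumed here, and all names are illustrative rather than taken from [10].

```python
import numpy as np

def contrast_stretch(img):
    # Min-max contrast stretching: map the image's own value range onto [0, 255].
    # (One common choice for f; the chapter does not name a specific function.)
    lo, hi = float(img.min()), float(img.max())
    if hi == lo:
        return np.zeros_like(img, dtype=np.uint8)
    return ((img - lo) * 255.0 / (hi - lo)).astype(np.uint8)

def binarize_hand(rgb, threshold=128):
    # Eq. (5): combine the channels as (R + G) - B, stretch the contrast,
    # then keep pixels above a threshold as the hand region.
    r, g, b = (rgb[..., i].astype(np.int32) for i in range(3))
    combined = np.clip((r + g) - b, 0, None)  # clipping negatives is a choice made here
    return contrast_stretch(combined) >= threshold
```

Skin pixels score high on (R + G) - B while a bluish background scores low, which is what makes this channel combination useful for separating the hand.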

The techniques used in hand geometry biometrics are relatively simple and easy to use [11]. Hand geometry systems tend to be among the most acceptable biometric modalities, especially compared to fingerprints or iris [12]. Despite this, it has to be mentioned that this biometric technique has some serious disadvantages, and low recognition rate is probably the biggest. Most researchers believe that hand geometry by itself cannot satisfy the needs of modern biometric security devices [8].

#### **2.2. Systems without pegs**

**•** *<sup>α</sup>*, *ωα*, , *<sup>ω</sup>* are thresholds for each respective distance metrics

**•** *IR*, *IG*, *I <sup>B</sup>* are values of red, green and blue channel respectively, and

96% identification accuracy, and 97% verification accuracy.

not satisfy needs of modern biometric security devices [8].

following formula [10]:

84 New Trends and Developments in Biometrics

**•** *I BW* is resulting binary image,

**•** is contrast stretching function

Where:

Techniques that are used in hand geometry biometrics are relatively simple and easy to use [11]. Hand geometry systems have a tendency to become the most acceptable biometric characteristic, especially compared to fingerprints or iris [12]. Beside this fact, it has to be mentioned that this biometric technique has some serious disadvantages, and a low recognition rate is probably one of the biggest. Most researchers believe that hand geometry alone cannot guarantee reliable identification.

Another research attempt in the field of hand geometry and hand shape was made in 2000. Sanchez-Reillo and associates developed a system which takes 640x640 pixel images in the JPEG format. The surface on which the hand had to be placed had 6 guiding pegs. They used 200 images from 20 persons of different genders, affiliations and personal habits. Before the features were extracted, all images were transformed into binary form using the equation:

$$I\_{BW} = I\_{R} + I\_{G} - I\_{B} \tag{5}$$

Sometimes a digital image does not use its whole contrast range. By stretching the lightness values over the allowed range, the contrast of the image is increased, which allows better extraction of the hand from the image background. Every 'false' pixel is later (if necessary) removed from the image using a threshold value. To avoid deviations, the image is scaled to a fixed size. Two pegs are used to locate the hand. Afterwards, using a *Sobel edge detector*, the system extracts the hand shape. The final result of this process is an image containing the hand shape and a side-view image containing the pegs in predefined positions. The first image is used for extracting palm and finger features. The authors extracted 31 features to construct an eigenvector. They also defined deviation as the distance between the middle point of the finger and the middle point of the line between two fingers, and the height at which the finger was measured. Euclidean distance, Hamming distance and the Gaussian Mixture Model (GMM) were used to compare eigenvectors. This paper was the first to present hand geometry based identification with satisfying results. Each subject (user) had from 3 to 5 templates stored in the database, each template containing from 9 to 25 features. The GMM gave the best results in all tested cases, with 5 stored templates per user.
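Equation (5) followed by thresholding can be sketched as follows; the pixel values and the threshold of 128 are illustrative assumptions, not taken from the paper:

```python
def to_binary(rgb_pixels, threshold=128):
    """Binarize an RGB image (nested lists of (r, g, b) tuples) using
    I_BW = I_R + I_G - I_B from equation (5), then a fixed threshold.
    The threshold value of 128 is an illustrative assumption."""
    binary = []
    for row in rgb_pixels:
        out_row = []
        for r, g, b in row:
            i_bw = r + g - b          # equation (5)
            out_row.append(1 if i_bw >= threshold else 0)
        binary.append(out_row)
    return binary

# A 2x2 toy image: bright skin-like pixels vs. dark background pixels.
img = [[(200, 150, 40), (10, 10, 80)],
       [(180, 140, 60), (5, 20, 90)]]
print(to_binary(img))  # [[1, 0], [1, 0]]
```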

As an alternative to systems that used pegs to measure hand geometry features, researchers started to explore hand shape as a new biometric characteristic. Researchers in [13] extracted 353 hand shape images from 53 persons. The number of images per person varied from 2 to 15. Pegs were used to place the hand in the right position. They were removed before the comparison and covered with the background color. The hand shape was extracted using hand segmentation. During the finger extraction a set of points is produced, as shown in Figure 4.

**Figure 4.** Hand shapes of the same hand extracted, overlaid and aligned [13]

Five fingers were extracted from the hand shape and analyzed separately. To automate the whole process, fingers of the same hand were aligned according to the set of all defined points. This alignment is also shown in Figure 4. The mean distance between two corresponding points was defined as the Mean Alignment Error (MAE), which was used to quantify matching results. A positive match is found if the MAE falls within a predefined range of values. This kind of system achieves a False Acceptance Rate (FAR) of about 2% and a False Rejection Rate (FRR) of about 1.5%, which is comparable to professional and commercial hand geometry systems. The drawback of this approach was a larger data storage requirement, because a few hundred points needed to be stored for just one hand shape. The authors used a randomly created set of 3992 image pairs to create a set of interclass distances. Using that set of distances, it is possible to calculate a distribution and, with a very high degree of certainty, determine which user is genuine and which is not.
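The MAE computation can be sketched as follows, assuming the two contours are already aligned and resampled to the same number of points (the coordinates are invented for illustration):

```python
import math

def mean_alignment_error(points_a, points_b):
    """Mean distance between corresponding contour points of two hand
    shapes (the MAE used in [13] to quantify matching). Assumes the
    shapes are already aligned and sampled with the same point count."""
    assert len(points_a) == len(points_b)
    total = sum(math.dist(p, q) for p, q in zip(points_a, points_b))
    return total / len(points_a)

shape1 = [(0, 0), (1, 0), (2, 1)]
shape2 = [(0, 1), (1, 1), (2, 2)]   # same shape shifted up by 1
print(mean_alignment_error(shape1, shape2))  # 1.0
```

A match would then be declared when the MAE falls below a chosen threshold.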

This was just one way of using hand shape for personal verification. Beside that approach, one can use a palm area size approach. Some researchers, like Lay [14], conducted research based on the palm area size. The hand image was acquired by projecting a lattice pattern onto the top side of the hand. The image was captured in the lattice frame and it represented the curvature of the hand. An example of this approach can be seen in Figure 5. The author acquired a hundred images (the number of persons is not known) of size 512x512 pixels, from which he extracted templates of size 128x128 pixels.


**Figure 5.** Capturing lattice pattern of the hand

The curvature lattice image was transformed into a binary image. This system did not use pegs for hand placement, but the hand still could not be moved freely: it had to be in the right position for the verification process, so the system prompted the user to place his hand as close as possible to the way it was placed during the registration phase. The acquired binary image was coded as a quadtree with seven levels. Those trees were used in the matching process for calculating the similarity of the presented hands, where a smaller Root Mean Square (RMS) value indicates better similarity between images. The author claims to have achieved 99.04% verification accuracy, with FAR = FRR = 0.48%.
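A minimal sketch of the quadtree coding idea: a uniform region becomes a leaf, a mixed region splits into four quadrants. The toy 4x4 image and the leaf encoding are illustrative assumptions; the actual system coded full binary images on seven levels:

```python
def quadtree(img, depth=7):
    """Encode a square binary image as a quadtree: a region that is all
    0s or all 1s becomes a leaf, otherwise it splits into 4 quadrants.
    Leaves store the fraction of 1-pixels in the region."""
    flat = [v for row in img for v in row]
    if depth == 0 or len(set(flat)) == 1:
        return sum(flat) / len(flat)          # leaf: fraction of 1s
    n = len(img) // 2
    return [quadtree([row[:n] for row in img[:n]], depth - 1),   # NW
            quadtree([row[n:] for row in img[:n]], depth - 1),   # NE
            quadtree([row[:n] for row in img[n:]], depth - 1),   # SW
            quadtree([row[n:] for row in img[n:]], depth - 1)]   # SE

img = [[1, 1, 0, 0],
       [1, 1, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 1, 0]]
print(quadtree(img))  # [1.0, 0.0, 0.0, [0.0, 0.0, 1.0, 0.0]]
```

Two hands could then be compared by walking both trees and accumulating an RMS-style difference over corresponding leaves.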

One can notice that each of the two described systems is capable of competing with commercial hand geometry systems. The main problem in those systems is the relatively small number of subjects (from 25 to 55) and images (from 100 to 350). This leaves open the question of the systems' behavior with a larger number of subjects.

An important element that researchers did not consider in this field is aging, i.e. the change of the hand over time. The hand image is time sensitive, and it is an open question whether the stored input images of the hand need to be replaced from time to time in order to maintain a good recognition rate and reliable hand feature extraction.

Systems without pegs are more tolerant when it comes to placing a hand on the device used for image acquiring.

## **3. Hand characteristics acquisition**


Hand image acquisition is a very simple process, especially when it is done in a system without pegs. A hand acquisition system with pegs consists of a light source, a camera, mirrors and a flat surface (with 5 pegs on it). The user puts his hand (palm facing down) on the surface. The pegs guide hand placement so that the hand is in the correct position for the system being built. A mirror projects a side-view image of the hand to the camera. In this way, the system can obtain a hand image and extract biometric features from the acquired image. The user is registered to the database along with the eigenvector of his hand (from now on we call this the "eigen-hand"). The acquired image is compared to the already existing images in the database and, if necessary, a new eigen-hand is calculated. A simple way of acquiring the image was presented in [9], where the image was taken in 8-bit grayscale at a size of 640x480 pixels.
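The enrollment-and-comparison loop described above can be sketched as follows. This is a toy stand-in: the averaged template replaces the chapter's eigen-hand computation, and the feature values and threshold are invented for illustration:

```python
import math

class HandTemplateDB:
    """Toy enrollment/verification store. Each user's template is the
    mean of his enrolled feature vectors (a simplified stand-in for the
    'eigen-hand'); verification thresholds the Euclidean distance.
    The threshold value of 5.0 is an illustrative assumption."""

    def __init__(self, threshold=5.0):
        self.templates = {}
        self.threshold = threshold

    def enroll(self, user, feature_vectors):
        # Average the enrolled vectors component-wise into one template.
        dim = len(feature_vectors[0])
        self.templates[user] = [
            sum(v[i] for v in feature_vectors) / len(feature_vectors)
            for i in range(dim)]

    def verify(self, user, features):
        # Accept if the new sample is close enough to the stored template.
        return math.dist(self.templates[user], features) <= self.threshold

db = HandTemplateDB()
db.enroll("alice", [[62, 55, 80], [60, 57, 82]])   # lengths/widths in px
print(db.verify("alice", [61, 56, 81]))  # True
print(db.verify("alice", [40, 70, 95]))  # False
```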

The quality of sampling has an effect on the number of falsely rejected templates, especially in the beginning of the system's usage. Sampling depends on a large number of factors. For instance, different heights of the biometric device change the relative position of the body and hand, which leads to a different hand shape and differently calculated features. Acquiring the hand image at one height and verifying it at another can cause the system to reject a legitimate user. Besides that, not knowing how the device works can have a great impact on the system and make it complicated to work with. To reduce this complication in the verification phase, users can practice placing their hand correctly on the device's surface (no matter whether it uses pegs or not). When a human is born, their hands are almost symmetrical. As we get older our hands change, mainly because of environmental factors. Most people become left- or right-handed, leading that hand to become a little bigger than the other one. Young people's hands change much more than the hands of older people. These processes require that hand geometry and hand shape devices are capable of following those changes and learning how to update every change that happens to a person's hand.

Identification systems based on hand geometry use geometric differences between human hands. Typical features include the length and width of the fingers, the position of the palm and fingers, the thickness of the hand, etc. There are no systems that take non-geometric features (e.g. skin color) into consideration. The pegs that some scanners use are also helpful in determining the axes needed for feature extraction. An example is shown in Figure 6, where the hand was represented as a vector of measurement results and 16 characteristic features were extracted:


**Figure 6.** Axes on which hand features are extracted and extracted features [9]

#### **3.1. Extraction techniques**

Ross [15] presented two techniques for feature extraction: The Parameter Estimation Techni‐ que and The Windowing Technique.

In the Parameter Estimation Technique peg-based acquisition system was used. This ap‐ proach is called intensity based approach. The other presented technique used fixed win‐ dows size and determined points whose intensity was changed along the axes. These techniques will be presented later in the chapter.

Third technique that will be presented here was described in [16]. Since this technique does not have its name we will call it F&K technique which describes hand image through mini‐ mum spanning trees.

### *3.1.1. The parameter estimation technique*

**3.** F3 – index finger length on the second phalange

**4.** F4 – middle finger length on the third phalange

**5.** F5 – middle finger length on the second phalange

**6.** F6 – ring finger width on the third phalange

**7.** F7 – ring finger width on the second phalange

**8.** F8 – little finger width on the third phalange

**13.** F13 – palm width based on the four fingers

**15.** F15 – thickness of the fingers on the first phalange

**16.** F16 – thickness of the fingers on the second phalange

**Figure 6.** Axes on which hand features are extracted and extracted features [9]

Ross [15] presented two techniques for feature extraction: The Parameter Estimation Techni‐

In the Parameter Estimation Technique peg-based acquisition system was used. This ap‐ proach is called intensity based approach. The other presented technique used fixed win‐ dows size and determined points whose intensity was changed along the axes. These

**14.** F14 – palm width in the thumb area

**9.** F9 – index finger length

**11.** F11 – ring finger length

**12.** F12 – little finger length

**3.1. Extraction techniques**

que and The Windowing Technique.

techniques will be presented later in the chapter.

**10.** F10 – middle finger length

88 New Trends and Developments in Biometrics

In order to offset the effects of background lighting, color of the skin, and noise, the following approach was devised to compute the various feature values. A sequence of pixels along a measurement axis will have an ideal gray scale profile as shown in Figure 7.

**Figure 7.** The gray scale profile of pixels along a measurement axis [15]

The total number of pixels considered is referred to as *Len*; *Ps* and *Pe* refer to the end points within which the object to be measured is located, and *A1*, *A2* and *B* are the gray scale values.

The actual gray scale profile tends to be spiky, as shown in Figure 7 (right image). The first step the author presented was to model the profile. Let the pixels along a measurement axis be numbered from 1 to *Len*, and let *X* = (*x*<sub>1</sub>, *x*<sub>2</sub>, …, *x*<sub>*Len*</sub>) be the gray values of the pixels along that axis. The following assumptions about the profile were made:

**1.** The observed profile (Figure 7, right) is obtained from the ideal profile (Figure 7, left) by the addition of Gaussian noise to each of the pixels in the latter. Thus, for example, the gray level of a pixel lying between *Ps* and *Pe* was assumed to be drawn from the distribution:

$$G\left(x \mid B, \sigma\_{B}^{2}\right) = \frac{1}{\sqrt{2\pi\sigma\_{B}^{2}}} \exp\left\{-\frac{1}{2\sigma\_{B}^{2}}(x - B)^{2}\right\} \tag{6}$$

where *σ*<sub>*B*</sub><sup>2</sup> is the variance of *x* in the interval *R*, *Ps* < *R* ≤ *Pe*.

**2.** The gray level of an arbitrary pixel along a particular axis is independent of the gray level of other pixels in the line.

Operating under these assumptions, the author could write the joint distribution of all the pixel values along a particular axis as:

$$P(X \mid \Theta) = \left[\prod\_{j=1}^{Ps} \frac{1}{\sqrt{2\pi\sigma\_{A1}^{2}}} \exp\left\{-\frac{1}{2\sigma\_{A1}^{2}}(x\_{j} - A1)^{2}\right\}\right] \left[\prod\_{j=Ps+1}^{Pe} \frac{1}{\sqrt{2\pi\sigma\_{B}^{2}}} \exp\left\{-\frac{1}{2\sigma\_{B}^{2}}(x\_{j} - B)^{2}\right\}\right] \left[\prod\_{j=Pe+1}^{Len} \frac{1}{\sqrt{2\pi\sigma\_{A2}^{2}}} \exp\left\{-\frac{1}{2\sigma\_{A2}^{2}}(x\_{j} - A2)^{2}\right\}\right] \tag{7}$$

*Maxvalω<sup>i</sup>* = max

*Maxindexω<sup>i</sup>* = *arg* max

*Minvalω<sup>i</sup>* = min

*Minindexω<sup>i</sup>* = *arg* min

was the maximum. This indicated a sharp change in the gray scale of the profile.

*Ps* and *Pe* could then be obtained by locating the position *Wi*

**Figure 8.** Hand shape and the characteristic hand points defined in [16]

*3.1.3. F&K technique*

shown in the Figure 8.

Figure 9.

*j*∈*ω<sup>i</sup>*

*j*∈*ω<sup>i</sup>*

Fotak and Karlovčec [16] presented a different method of feature extraction. They decided to use mathematical graphs on the two-dimensional hand image. Hand image was normalized by using basic morphological operators and edge detection. They created a binary image from the image captured with an ordinary document scanner. On the binary image the pixel values were analyzed to define the location of characteristic points. They extracted 31 points,

For the hand placement on y-axis a referential point on the top of the middle finger was used. The location of that point was determined by using the horizontal line y1. Using that line, authors defined 6 points that represents the characteristic points of index, mid‐ dle and ring finger. Using lines y2 and y3 they extracted enough characteristic points for four fingers. Thumb has to be processed in the different manner. To achieve that the right-most point of the thumb had to be identified. Using two vertical lines they found the edges of the thumb. By analyzing points on those lines and their midpoints the top of the thumb could be extracted. Example of the thumb top extracting is shown in the

*j*∈*ω<sup>i</sup>*

*j*∈*ω<sup>i</sup>*

*<sup>G</sup>*( *<sup>j</sup>*) (9)

Basic Principles and Trends in Hand Geometry and Hand Shape Biometrics

*<sup>G</sup>*( *<sup>j</sup>*) (10)

*<sup>G</sup>*( *<sup>j</sup>*) (11)

*<sup>G</sup>*( *<sup>j</sup>*) (12)

where (*Maxvalω<sup>i</sup>* - *Minvalω<sup>i</sup>* )

http://dx.doi.org/10.5772/51912

91

where *Θ* = (*Ps*, *Pe*, *A*1, *A*2, *B*, *σ*<sub>*A*1</sub><sup>2</sup>, *σ*<sub>*A*2</sub><sup>2</sup>, *σ*<sub>*B*</sub><sup>2</sup>), and *σ*<sub>*A*1</sub><sup>2</sup>, *σ*<sub>*A*2</sub><sup>2</sup> and *σ*<sub>*B*</sub><sup>2</sup> are the variances of *x* in the three intervals [1, *Ps*], [*Ps* + 1, *Pe*] and [*Pe* + 1, *Len*] respectively.

The goal now is to estimate *Ps* and *Pe* using the observed pixel values along the chosen axis; the author used the Maximum Likelihood Estimate (MLE).

By taking the logarithm of both sides of (7), one can obtain the likelihood function:

$$L(\Theta) = \frac{1}{\sigma\_{A1}^{2}} \sum\_{j=1}^{Ps} (x\_{j} - A1)^{2} + \frac{1}{\sigma\_{B}^{2}} \sum\_{j=Ps+1}^{Pe} (x\_{j} - B)^{2} + \frac{1}{\sigma\_{A2}^{2}} \sum\_{j=Pe+1}^{Len} (x\_{j} - A2)^{2} + Ps \log \sigma\_{A1}^{2} + (Pe - Ps) \log \sigma\_{B}^{2} + (Len - Pe) \log \sigma\_{A2}^{2} \tag{8}$$

The parameters could then be estimated iteratively [15].

The initial estimates of *A*1, *σ*<sub>*A*1</sub><sup>2</sup>, *A*2, *σ*<sub>*A*2</sub><sup>2</sup>, *B* and *σ*<sub>*B*</sub><sup>2</sup> were obtained as follows:

**•** *A*1 and *σ*<sub>*A*1</sub><sup>2</sup> were estimated using the gray values of the first *NA*1 pixels along the axis.

**•** *A*2 and *σ*<sub>*A*2</sub><sup>2</sup> were estimated using the gray values of the pixels from (*Len* - *NA*2) to *Len*.

**•** *B* and *σ*<sub>*B*</sub><sup>2</sup> were estimated using the gray values of the pixels between (*Len* / 2 - *NB*) and (*Len* / 2 + *NB*).

**•** The values of *NA*1, *NA*2 and *NB* were fixed for the system, and the initial values of *Ps* and *Pe* were set to *Len* / 2 - 10 and *Len* / 2 + 10 respectively.
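A minimal sketch of estimating *Ps* and *Pe* by minimizing (8): for each candidate pair, each segment's mean and variance are fitted and the cost accumulated. It uses an exhaustive search instead of the iterative scheme in [15], and the toy profile values are invented for illustration:

```python
import math

def estimate_boundaries(x):
    """Brute-force version of the MLE in (8): try every (Ps, Pe) pair,
    fit segment means/variances, and keep the pair minimizing L.
    The chapter estimates the parameters iteratively; exhaustive search
    is used here only to keep the sketch short."""
    def seg_cost(seg):
        n = len(seg)
        mean = sum(seg) / n
        ss = sum((v - mean) ** 2 for v in seg)
        var = max(ss / n, 1e-6)              # guard against log(0)
        return ss / var + n * math.log(var)  # segment's terms of (8)

    best = None
    for ps in range(1, len(x) - 1):
        for pe in range(ps + 1, len(x)):
            cost = seg_cost(x[:ps]) + seg_cost(x[ps:pe]) + seg_cost(x[pe:])
            if best is None or cost < best[0]:
                best = (cost, ps, pe)
    return best[1], best[2]

# Background ~200, object ~40, background ~200, with slight noise.
profile = [201, 199, 200, 41, 39, 40, 42, 198, 202, 200]
print(estimate_boundaries(profile))  # (3, 7)
```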


#### *3.1.2. The windowing technique*

This technique was developed to locate the end points *Ps* and *Pe* from the gray scale profile in Figure 7. A heuristic method was adopted to locate these points. A window of length *wlen* was moved over the profile, one pixel at a time, starting from the left-most pixel.

Let *W*<sub>*i*</sub>, 0 ≤ *i* ≤ *N*, refer to the sequence of pixels covered by the window after the *i*th move, with *W*<sub>*N*</sub> indicating the final position. For each position *W*<sub>*i*</sub>, the author computed four values, *Maxvalω*<sub>*i*</sub>, *Maxindexω*<sub>*i*</sub>, *Minvalω*<sub>*i*</sub> and *Minindexω*<sub>*i*</sub>, as:


$$\text{Maxval}\omega\_{i} = \max\_{j \in \omega\_{i}} G(j) \tag{9}$$

$$\text{Maxindex}\omega\_{i} = \arg\max\_{j \in \omega\_{i}} G(j) \tag{10}$$

$$\text{Minval}\omega\_{i} = \min\_{j \in \omega\_{i}} G(j) \tag{11}$$

$$\text{Minindex}\omega\_{i} = \arg\min\_{j \in \omega\_{i}} G(j) \tag{12}$$

*Ps* and *Pe* could then be obtained by locating the position *W*<sub>*i*</sub> where (*Maxvalω*<sub>*i*</sub> - *Minvalω*<sub>*i*</sub>) was maximal, which indicated a sharp change in the gray scale of the profile.
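The windowing search can be sketched as follows; the profile values and window length are illustrative assumptions, and for brevity the sketch locates only the single sharpest transition rather than both end points:

```python
def locate_edge(profile, wlen=3):
    """Windowing technique sketch: slide a window of length wlen over
    the gray scale profile and, at each position, record the max and
    min values (equations (9)-(12)). The window with the largest
    (Maxval - Minval) marks a sharp gray level change."""
    best_range, best_pos = -1, None
    for i in range(len(profile) - wlen + 1):
        window = profile[i:i + wlen]
        maxval, minval = max(window), min(window)
        if maxval - minval > best_range:
            best_range, best_pos = maxval - minval, i
    return best_pos, best_range

# Bright background dropping to a dark object around index 4.
profile = [200, 201, 199, 200, 45, 40, 42, 41]
print(locate_edge(profile))  # (3, 160)
```

The real system would repeat this to find both *Ps* and *Pe*, i.e. the two strongest transitions along the axis.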

#### *3.1.3. F&K technique*


Fotak and Karlovčec [16] presented a different method of feature extraction, using mathematical graphs on the two-dimensional hand image. The hand image was normalized using basic morphological operators and edge detection. They created a binary image from the image captured with an ordinary document scanner. On the binary image, the pixel values were analyzed to define the locations of characteristic points. They extracted 31 points, shown in Figure 8.

**Figure 8.** Hand shape and the characteristic hand points defined in [16]

For the hand placement on the y-axis, a referential point on the top of the middle finger was used. The location of that point was determined using the horizontal line y1. Using that line, the authors defined 6 points that represent the characteristic points of the index, middle and ring fingers. Using lines y2 and y3 they extracted enough characteristic points for four fingers. The thumb has to be processed in a different manner: first the right-most point of the thumb is identified, then two vertical lines are used to find the edges of the thumb. By analyzing the points on those lines and their midpoints, the top of the thumb can be extracted. An example of extracting the top of the thumb is shown in Figure 9.

**Figure 9.** Extracting characteristic points of the thumb

In order to get enough information for their process, each hand had to be scanned four times. For each characteristic point the authors constructed a complete graph. An example of the characteristic points from four scans and the corresponding complete graph of one point are shown in Figure 10 and Figure 11 respectively.


**Figure 10.** Characteristic points of the four scanning of the hand

**Figure 11.** The complete graph of one characteristic point

The number of edges in a complete graph is well known. In order to construct a minimum spanning tree, this graph needs to be weighted; the weights are the Euclidean distances between the two graph vertices connected by an edge. Finally, Prim's algorithm was used to construct the minimum spanning tree of one characteristic point. The same procedure was performed for each of the 31 points. An example of the minimum spanning tree of one characteristic point, and all minimum spanning trees together, are shown in Figure 12 and Figure 13 respectively.

**Figure 12.** Minimum spanning tree of the graph from Figure 11

**Figure 13.** All minimum spanning trees of one user
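Prim's construction over one characteristic point's captured positions might look like this sketch; the four coordinates are invented for illustration, and the real system repeats this for all 31 points:

```python
import math

def prim_mst(points):
    """Prim's algorithm on the complete graph whose vertices are the
    captured positions of one characteristic point and whose edge
    weights are Euclidean distances, as in the F&K technique [16]."""
    in_tree = {0}
    edges = []
    while len(in_tree) < len(points):
        # Cheapest edge leaving the current tree.
        w, u, v = min(
            (math.dist(points[u], points[v]), u, v)
            for u in in_tree
            for v in range(len(points)) if v not in in_tree)
        edges.append((u, v, w))
        in_tree.add(v)
    return edges

# Four scans of the same characteristic point (slightly displaced).
scans = [(10.0, 10.0), (10.5, 10.2), (11.0, 9.8), (13.0, 10.1)]
for u, v, w in prim_mst(scans):
    print(u, v, round(w, 3))
```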


The verification process is performed by comparing each point's minimum spanning tree with the location of the currently captured corresponding point. The results of the system, FAR = 1.21% and FRR = 7.75%, are very promising for future development.

## **4. New trends in hand geometry and hand shape biometrics**

So far we have described the basics of hand geometry biometrics. In this section we mention some new trends and new research in this field. Reading this section requires a good understanding of hand geometry biometrics and of the extraction and verification methods mentioned earlier. We will not describe everything in detail, but rather mention some achievements produced in the last few years.

Hand geometry has been contact-based from its beginnings and still is in almost all commercial systems. Since it has evolved over the last 30 years, one can categorize this field as in [17]:

- constrained and contact-based systems,
- unconstrained and contact-based systems,
- unconstrained and contact-less systems.

While the first category requires a flat platform and pegs or pins to restrict the hand's degrees of freedom, the second one is peg- and pin-free, although it still requires a platform on which to place the hand (e.g. a scanner). The main papers of this category were described earlier in this chapter.

The second category gives users more freedom in the process of image acquisition. This step is considered an evolution forward from constrained contact-based systems. Some newer works in this field are [18] and [19]. In [18] the authors presented a method based on three keys. The system used a Natural Reference System (NRS) defined on the hand's layout; therefore, neither a fixed hand pose nor a pre-fixed position was required in the registration process. Hand features were obtained through the polar representation of the hand's contour. Their system uses both the right and the left hand, which allowed them to consider distance measures for direct and crossed hands. The authors of the second paper [19] used 15 geometric features to analyze the effect of changing the image resolution on a biometric system based on hand geometry. The images were reduced from an initial 120 dpi down to 24 dpi. They used two databases, one acquiring the image of the hand from underneath, whereas the second acquired the image from above the hand. Accordingly, they used two classifiers: a multiclass support vector machine (multiclass SVM) and a neural network with error-correcting output codes.
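The resolution reduction studied in [19] can be imitated offline by pooling a binary hand mask by an integer factor (120 dpi to 24 dpi corresponds to a factor of 5). This majority-pooling stand-in is our illustration only; the authors worked with re-acquired or resampled grayscale images, not necessarily with this scheme.

```python
def downsample(mask, factor):
    """Reduce a binary hand mask by `factor` using majority pooling --
    an illustrative stand-in for acquiring the image at a lower dpi."""
    h, w = len(mask), len(mask[0])
    out = []
    for i in range(0, h - h % factor, factor):
        row = []
        for j in range(0, w - w % factor, factor):
            block = [mask[a][b] for a in range(i, i + factor)
                                for b in range(j, j + factor)]
            # pixel is "hand" if at least half of the pooled block is
            row.append(1 if sum(block) * 2 >= len(block) else 0)
        out.append(row)
    return out

mask = [[1] * 10 for _ in range(10)]   # toy 10x10 all-"hand" mask
small = downsample(mask, 5)            # 120 dpi -> 24 dpi is a factor of 5
assert small == [[1, 1], [1, 1]]
```

Geometric features (finger widths, lengths) would then be re-measured on the reduced mask to test how much discriminative power survives.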


There are many different verification approaches in contact-based hand geometry systems. So far, GMMs and SVMs give the best results, but they are still far from satisfactory for commercial use.

Due to user acceptability, contact-less biometrics is becoming more important. In this approach, neither pegs nor a platform are required for hand image acquisition. Papers in this field are relatively new compared to those on the contact-based approach, so we present just the new trends in contact-less hand geometry biometrics.

The most widely used verification methods in this approach are *k-Nearest Neighbor (k-NN)* and SVM. These methods are also the most competitive in the existing literature.
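A minimal sketch of k-NN identification over hand-geometry feature vectors follows. The gallery, the three-element feature vectors, and the user names are hypothetical; real systems use far longer feature vectors and tuned distance thresholds.

```python
import math
from collections import Counter

def knn_identify(gallery, probe, k=3):
    """Identify `probe` by majority vote among its k nearest gallery
    templates under Euclidean distance (the k-NN scheme mentioned above)."""
    neighbours = sorted(gallery, key=lambda item: math.dist(item[1], probe))[:k]
    votes = Counter(label for label, _ in neighbours)
    return votes.most_common(1)[0][0]

# Hypothetical gallery of (user id, hand-geometry feature vector) pairs
gallery = [("alice", [60.0, 75.0, 12.0]), ("alice", [60.5, 74.8, 12.2]),
           ("bob",   [55.0, 80.0, 10.5]), ("bob",   [55.3, 79.6, 10.8])]
assert knn_identify(gallery, [60.2, 75.1, 12.1]) == "alice"
```

For verification rather than identification, the same distance would simply be compared against a per-user acceptance threshold.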

In the last few years, the literature on this problem has been growing rapidly. SVM is the most commonly used verification and identification method. The authors in [20] acquired hand images with a static video camera. Using a decision tree they segmented the hand, and then measured the local feature points extracted along the fingers and wrists. Identification was based on matching the geometry measurements of a query image against a database of recorded measurements using SVM. Another use of SVM can be found in [21]. The authors also presented a biometric identification system based on geometrical features of the human hand. The right-hand images were acquired using a classic webcam. Depending on illumination, binary images were constructed and the geometrical features (30-40 finger widths) were obtained from them. SVM was used as a verifier. Kumar and Zhang used SVM in their hybrid recognition system, which uses feature-level fusion of hand shape and palm texture [22]. They extracted features from a single image acquired with a digital camera. Their results proved that only a small subset of hand features is necessary in practice for building an accurate model for identification. The comparison and combination of the proposed features was evaluated on diverse classification schemes: naïve Bayes (normal, estimated, multinomial), decision trees (C4.5, LMT), k-NN, SVM, and FFN.
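Feature-level fusion, as used by Kumar and Zhang [22], can be sketched as normalizing each modality's features to a common range and concatenating the results. The min-max normalization and the toy vectors below are our assumptions for illustration; the paper's exact normalization is not given in this chapter.

```python
def minmax_normalize(v, lo, hi):
    """Scale each feature to [0, 1] given per-feature ranges, so that
    hand-shape and palm-texture features become comparable before fusion."""
    return [(x - l) / (h - l) for x, l, h in zip(v, lo, hi)]

def fuse(shape_vec, texture_vec):
    """Feature-level fusion: concatenate the normalized feature vectors."""
    return shape_vec + texture_vec

# Hypothetical two hand-shape features (mm) and three palm-texture features
shape   = minmax_normalize([60.0, 75.0], [40.0, 60.0], [80.0, 90.0])
texture = minmax_normalize([0.31, 0.77, 0.12], [0.0] * 3, [1.0] * 3)
fused   = fuse(shape, texture)
assert len(fused) == 5
```

The fused vector would then be fed to any of the classifiers listed above (SVM, k-NN, naïve Bayes, ...).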

A hybrid system fusing the palmprint and hand geometry of a human hand based on morphology was presented in [23]. The authors utilized image morphology and the concept of the Voronoi diagram to cut the image of the front of the whole palm into several irregular blocks in accordance with the hand geometry. Statistical characteristics of the gray levels in the blocks were employed as characteristic values. In the recognition phase, SVM was used.

Besides SVM, which is the most competitive method in contact-less hand geometry verification and identification, the literature contains other very promising methods such as neural networks [24], a new feature called 'SurfaceCode' [25], and template distance matching [17].

The mentioned methods are not the only ones, but they have the smallest Equal Error Rates and are therefore the most promising for the future development of contact-less hand geometry biometric systems.

## **5. The hand recognition technology**


Hand features, described earlier in the chapter, are used in devices for personal verification and identification. One of the leading commercial companies in this field is *Schlage*. In their devices, a CCD digital camera is used to acquire a hand image with a size of 32,000 pixels. One of their devices is shown in Figure 14.

**Figure 14.** Schlage HandPunch 4000 [26]

The system presented in Figure 14 consists of a light source, a camera, mirrors and a flat surface with five pegs. The user places the hand facing down on a flat plate, on which the five pins serve as a control mechanism for the proper accommodation of the user's right hand. The device is connected to a computer through an application which shows a live image of the top side of the hand as well as a side view of the hand. The GUI helps in image acquisition, while the mirror in the device is used to obtain the side view of the hand. This gives a partially three-dimensional image of the hand. The device captures two hand images. After the user places a hand on the device, the hand is captured. The location and the size of the image are determined by segmentation of the light reflected from the dark mask. The second image is captured with the same camera, but using the mirror, in order to measure the hand thickness. Because it uses only a binary image and the reflected background, the system is not capable of capturing scars, pores or tattoos. On the other hand, big rings, bandages or gloves can have a great impact on the image, which could lead to a false rejection of the hand.

The captured hand silhouette is used to calculate the length, width and thickness of four fingers (the thumb is not included). The system makes 90 measurements, which are stored in a 9-byte template. For template matching, the Euclidean distance is used. The acquisition procedure takes 30 seconds to complete, during which the user has to place the hand on the device four times. An internal processor generates the template as the mean of all readings taken during this process. An image captured with this device can be seen in Figure 15.
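This enrollment-and-matching scheme can be sketched as follows: average several readings into a mean template, then accept a probe whose Euclidean distance to the template falls below a threshold. The five-element vectors and the threshold value are made-up illustrations, not the device's actual 90 measurements or decision rule.

```python
import math

def enroll(readings):
    """Average several feature readings into a mean template
    (the device described above averages four placements)."""
    n = len(readings[0])
    return [sum(r[i] for r in readings) / len(readings) for i in range(n)]

def verify(template, probe, threshold=5.0):
    """Accept the probe if its Euclidean distance to the stored
    template is below a (hypothetical) threshold."""
    return math.dist(template, probe) < threshold

# Made-up 5-element feature vectors standing in for the 90 measurements
readings = [[60.1, 58.9, 75.2, 70.0, 12.1],
            [60.4, 59.2, 75.0, 69.7, 12.3],
            [59.8, 58.7, 75.5, 70.2, 12.0],
            [60.2, 59.0, 75.1, 69.9, 12.2]]
template = enroll(readings)
assert verify(template, [60.0, 59.0, 75.2, 70.0, 12.1])        # genuine probe
assert not verify(template, [80.0, 40.0, 90.0, 50.0, 20.0])    # impostor probe
```

Averaging several placements smooths out placement noise, which is why the device requires four readings at enrollment.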

**Figure 15.** Hand silhouette captured with the Schlage device

## **6. Conclusion**

Hand recognition biometrics is probably the most developed and applicable biometric technique that has found its application in many organizations. This is due to its user friendliness. Moreover, hand recognition is a simple technique which is very easy to use and does not require much memory space. Hand geometry is invariant to environmental impacts and has an acceptable privacy violation level. For image capturing one can use classic CCD cameras, which are easy to use (it is easy to obtain a hand image) and have a low price.

The biggest disadvantages of hand geometry lie in the following facts. The size of the hand restricts biometric systems to a smaller number of applications. From a hundred randomly chosen persons, at least two will have similar hand geometry. A hand injury can potentially have a great impact on the recognition system. Measurements have to be done several times, since in the acquisition process one cannot always obtain all the information needed.

It is obvious that this technique is easy to forge by finding the most appropriate hand (one has to find a hand that is "close enough"). The technology based on the hand image is the most common in modern biometric systems.

In this chapter we presented the basics of hand geometry and hand shape biometrics. Researchers in the field of biometrics found that the human hand, especially the human palm, contains some characteristics that can be used for personal identification. These characteristics mainly include the thickness of the palm area and the width, thickness and length of the fingers. Hand recognition biometrics is probably the most developed and applicable biometric technique that has found its application in many organizations.

## **Author details**

Miroslav Bača\*, Petra Grd and Tomislav Fotak

\*Address all correspondence to: miroslav.baca@foi.hr

Centre for biometrics, Faculty of Organization and Informatics, Varaždin, Croatia

## **References**

[1] Ernst, R. H. Hand ID System. US Patent; (1971).

[2] Jacoby, O. H., Giordano, A. J., & Fioretti, W. H. Personal Identification Apparatus. US Patent; (1971).

[3] Lay, H. C. Hand Shape Recognition. US Patent; (1971).

[4] Sidlauskas, D. P. 3D Hand Profile Identification Apparatus. US Patent 4736203; (1988).

[5] Van Tilborg, H. C. E., & Jajodia, S., editors. Encyclopedia of Cryptography and Security, 2nd Ed. New York: Springer Science + Business Media, LLC; (2011).

[6] Fotak, T. Razvoj biometrijskih tehnika [Development of biometric techniques]. BSc thesis. University of Zagreb, Faculty of Organization and Informatics; (2008).

[7] Bulatov, Y., Jambawalikar, S., Kumar, P., & Sethia, S. Hand Recognition System Using Geometric Classifiers. DIMACS Workshop on Computational Geometry (14-15 November 2002), Piscataway, NJ; (2002).

[8] Jain, A., Bolle, R., & Pankanti, S., editors. Biometrics: Personal Identification in Networked Society. Norwell: Kluwer Academic Publishers; (1999).

[9] Jain, A., Ross, A., & Pankanti, S. A prototype hand geometry-based verification system. AVBPA: proceedings of the 2nd International Conference on Audio- and Video-based Biometric Person Authentication, Washington DC; (1999).

[10] Sanchez-Reillo, R., Sanchez-Avila, C., & Gonzales-Marcos, A. Biometric Identification through Hand Geometry Measurements. *IEEE Transactions on Pattern Analysis and Machine Intelligence* (2000), 1168-1171.

[11] Jain, A., Hong, L., & Prabhakar, S. Biometrics: promising frontiers for the emerging identification market. Communications of the ACM (2000), 91-98.

[12] Holmes, J. P., Wright, L. J., & Maxwell, R. L. A performance evaluation of biometric identification devices. Technical Report SAND91-0276, Sandia National Laboratories; (1990).



[13] Jain, A., & Duta, N. Deformable matching of hand shapes for verification. Proceedings of the IEEE International Conference on Image Processing, Kobe, Japan; (1999).

[14] Lay, H. C. Hand shape recognition. Optics and Laser Technology (2000).

[15] Ross, A. A Prototype Hand Geometry-based Verification System. MS Project Report; (1999).

[16] Fotak, T., & Karlovčec, M. Personal authentication using minimum spanning trees on twodimensional hand image. Varaždin: FOI; (2009).

[17] De Santos Sierra, A., Sanchez-Avila, C., Bailador del Pozo, G., & Guerra-Casanova, J. Unconstrained and Contactless Hand Geometry Biometrics. Sensors (2011), 11, 10143-10164.

[18] Adan, M., Adan, A., Vasquez, A. S., & Torres, R. Biometric verification/identification based on hands natural layout. *Image and Vision Computing* (2008), 26(4), 451-465.

[19] Ferrer, M. A., Fabregas, J., Faundez, M., Alonso, J. B., & Travieso, C. M. Proceedings of the 43rd Annual International Carnahan Conference on Security Technology, Zurich; (2009).

[20] Jiang, X., Xu, W., Sweeney, L., Li, Y., Gross, R., & Yurovsky, D. New directions in contact free hand recognition. Proceedings of the IEEE International Conference on Image Processing, San Antonio, TX; (2007).

[21] Ferrer, M. A., Alonso, J. B., & Travieso, C. M. Comparing infrared and visible illumination for contactless hand based biometric scheme. Proceedings of the 42nd Annual IEEE International Carnahan Conference on Security Technology, Prague; (2008).

[22] Kumar, A., & Zhang, D. Personal recognition using hand shape and texture. *IEEE Transactions on Image Processing* (2006).

[23] Wang, W. C., Chen, W. S., & Shih, S. W. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Taipei; (2009).

[24] Rahman, A., Anwar, F., & Azad, S. A Simple and Effective Technique for Human Verification with Hand Geometry. Proceedings of the International Conference on Computer and Communication Engineering, Kuala Lumpur; (2008).

[25] Kanhangad, V., Kumar, A., & Zhang, D. Human Hand Identification with 3D Hand Pose Variations. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, San Francisco, CA; (2010).

[26] Schlage. HandPunch 4000: Biometrics. http://w3.securitytechnologies.com/products/biometrics/time\_attendance/HandPunch/Pages/details.aspx?InfoID=18 (accessed 20 May 2012).

**Chapter 5**

**Genetic & Evolutionary Biometrics**

Aniesha Alford, Joseph Shelton, Joshua Adams, Derrick LeFlore, Michael Payne, Jonathan Turner, Vincent McLean, Robert Benson, Gerry Dozier, Kelvin Bryant and John Kelly

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/51386

© 2012 Alford et al.; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

**1. Introduction**

Genetic & Evolutionary Computation (GEC) is the field of study devoted to the design, development, and analysis of problem solvers based on natural selection [1-4] and has been successfully applied to a wide range of complex, real-world optimization problems in the areas of robotics [5], scheduling [6], music generation [7], aircraft design [1], and cyber security [8-11], just to name a few. Genetic and Evolutionary Computations (referred to as GECs) differ from most traditional problem solvers in that they are stochastic methods that evolve a population of candidate solutions (CSs) rather than just operating on a single CS. Due to the evolutionary nature of GECs, they are able to discover a wide variety of novel solutions to a particular problem at hand – solutions that radically differ from those developed by traditional problem solvers [3,12,13].

GECs are general-purpose problem solvers [1,2,4]. Because of this fact and their ability to hybridize well with traditional problem solvers [1], a number of new subfields have emerged. In the field of Evolutionary Robotics [5,14], GECs are used in path planning [15], robot behavior design [16], and robot gait design [17]. In the field of Evolutionary Design [1,18], GECs are being used to evolve lunar habitats [19], emoticons [20], and music [7,21]. GECs have also been used successfully in a wide variety of scheduling applications [22,23], which in turn has spawned a subfield known as Evolutionary Scheduling [6,24].

Currently we are seeing the emergence of a new and exciting field of study devoted towards the design, development, analysis, and application of GECs to problems within the area of biometrics [25-29]. We refer to this new subfield of study as Genetic and Evolutionary Biometrics (GEB) [25-27,31]. In this chapter, we will provide a brief history of GEB as well as