Preface

Chapter 6 **Empirical Wavelet Transform-based Detection of Anomalies in ULF Geomagnetic Signals Associated to Seismic Events with a Fuzzy Logic-based System for Automatic Diagnosis 111** Omar Chavez Alegria, Martin Valtierra-Rodriguez, Juan P. Amezquita-Sanchez, Jesus Roberto Millan-Almaraz, Luis Mario Rodriguez, Alejandro Mungaray Moctezuma, Aurelio Dominguez-Gonzalez and Jose Antonio Cruz-Abeyro

> Wavelets are excellent signal-processing tools that enable the analysis, at several timescales, of the local properties of complex signals that present nonstationary zones. They have a large number of applications in several fields of science and engineering, and their penetration into the scientific community has been very fast. This book presents some interesting real-world applications of wavelet theory.

> In Chapter 1, we present the progressive–regressive strategy for biometrical authentication. Chapter 2 deals with a resolution enhancement-based image compression technique using singular value decomposition and wavelet transforms. In Chapter 3, we treat the adaptive wavelet packet transform. Chapter 4 describes the scaling-factor threshold estimator in different color models using a discrete wavelet transform for steganographic algorithms. Chapter 5 explains the wavelet-based analysis of MCSA for fault detection in electrical machines. The book ends with Chapter 6, which discusses the empirical wavelet transform-based detection of anomalies in ULF geomagnetic signals associated with seismic activity, with a fuzzy logic-based system for automatic diagnosis.

> The book is intended for both students and researchers interested in the fascinating field of wavelet theory and its real-world applications.

> > **Dumitru Baleanu** Cankaya University, Turkey

#### **Chapter 1**

## **Progressive-Regressive Strategy for Biometrical Authentication**

Tilendra Shishir Sinha, Raj Kumar Patra, Rohit Raja, Devanshu Chakravarty and Ravi Prakash Dubey

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/61786

#### **Abstract**

This chapter thoroughly investigates the use of the progressive–regressive strategy for biometrical authentication through human gait and face images. A considerable number of features were extracted and relevant parameters computed for this investigation, and a vast number of datasets were developed. The datasets consist of features and computed parameters extracted from human gait and face images of various subjects of different ages. Soft-computing techniques, the discrete wavelet transform (DWT), principal component analysis and the forward–backward dynamic programming method were applied for the best-fit selection of parameters and the complete matching process. The paretic and non-paretic characteristics were classified through the naïve Bayes classification theorem. Both classification and recognition were carried out in parallel with test and trained datasets, and the whole process of investigation was carried out through an algorithm developed in this chapter. The success rate of biometrical authentication is 89%.

**Keywords:** Lifting scheme of discrete wavelet transform (LSDWT), inverse-lifting scheme of discrete wavelet transform (ILSDWT), soft-computing technique, unidirectional temporary associative memory technique (UTAM), forward–backward dynamic programming, principal component analysis

#### **1. Introduction**

This chapter attempts to explain the process of biometrical authentication by considering human gait and face images. The authentication process has been carried out in parallel with the test data and the trained data, which consists of a variety of human gait and face images taken of subjects of different ages.

© 2015 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This chapter is separated into three parts: the first deals with the general mathematical problem of formulating a model for biometrical authentication; the second provides a methodology for corpus formation using human gait and face images; and the third presents case studies for biometrical authentication, along with conclusions and applications. In the first part, the frames of images were mathematically analysed and normalised. They were categorised into odd and even components, thus validating the process of splitting the frames using the lifting scheme of discrete wavelet transform (LSDWT). The detail and coarser components were estimated, and these calculated values were then used to validate the process of merging the frames through the inverse-lifting scheme of discrete wavelet transform (ILSDWT). A considerable number of parameters were estimated using statistical, digital and morphological image-processing methods.

The next part of this chapter presents the experimental formation of two different corpuses: firstly a noise-free artificial gait model and secondly a noise-free artificial face model. The facts and figures of the above-mentioned corpuses were reached and discussed at considerable length using the LSDWT, the ILSDWT, soft-computing-based techniques, the forward–backward dynamic programming of neural networks, the unidirectional temporary associative memory (UTAM) technique of neural networks, and fuzzy and genetic algorithms.

In the third part of this chapter, two different case studies are considered for proper biometrical authentication and analysis. The analysis has been carried out in both progressive and regressive modes – the progressive mode of analysis meaning in an incremental way and the regressive in a decremental way. This chapter presents two case studies of a progressive and regressive nature.
In one case, the step length of the gait has been considered – in pixels from each frame, whereby the subject is moving from left to right – and in the other, the face step angle – measured in degrees from each frame, whereby the subject's face is analysed from the side-view, parallel to the x-axis and switched by five degrees. Before the analysis of the above case studies was carried out, an appropriate and desired analysis was carried out at the acquisition, enhancement, segmentation and pre-processing stages.

In the acquisition stage, the original image was captured through a high-density web camera or a digital camera at random – meaning the image was captured blindly using the image-warping technique. The image-warping technique is the combination of image registration and rectification. When the image of any subject has been captured blindly, the desired region of interest, along with the object of interest, is detected and selected. Image data is registered and rectified for the selected region, and object, of interest. 2D-transformation techniques such as translation, scaling and shearing were applied for registration and rectification. After the rectification of the selected image data, proper and enhanced images were restored. Hence the background and foreground of the images were distinguished using proper image-subtraction methods. In this chapter, the foreground part of the image (ROI and OOI) is discussed. This part was used for further processing, such as obtaining silhouette images using proper segmentation techniques, distinguishing the upper part (human face) and the lower part (human gait) of the object, and computing the connected components of the upper and lower parts of the object. Considering these two portions of the object, relevant features were extracted, resulting in two knowledge-based models or corpuses.
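The 2D registration transforms named above (translation, scaling and shearing) can be sketched in a few lines of Python. The particular factors and the order in which the transforms are composed are illustrative assumptions, not parameters from the chapter:

```python
# 2D registration transforms on pixel coordinates (x, y). The shear,
# scale and translation factors below, and the order in which they are
# composed, are illustrative choices, not parameters from the chapter.

def translate(p, dx, dy):
    x, y = p
    return (x + dx, y + dy)

def scale(p, sx, sy):
    x, y = p
    return (x * sx, y * sy)

def shear(p, shx, shy):
    # Shear along x by shx*y and along y by shy*x.
    x, y = p
    return (x + shx * y, y + shy * x)

def register(p):
    # Compose the three transforms: shear, then scale, then translate.
    return translate(scale(shear(p, 0.5, 0.0), 2.0, 2.0), 10.0, 5.0)

for corner in [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]:
    print(corner, "->", register(corner))
```

In practice each transform would be written as a 3×3 homogeneous matrix so a whole image can be warped in one pass, but the point-wise form above shows the same geometry.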

For the biometrical authentication, a test image of the subject was captured. After proper enhancement, registration, rectification and segmentation of the test images, relevant features were extracted and stored in a template. Using the LSDWT, the upper and lower portions of the test image were separated, forming two sub-templates: the Slave Human Gait Model (SHGM) and the Slave Human Face Model (SHFM). The data stored in these two sub-templates was used to restore the original image after the merging mechanism employing the ILSDWT. The latter technique was applied to verify the data that had been separated earlier by the LSDWT method. After proper verification, authentication was carried out using soft-computing techniques and their hybrid approaches. Neuro-genetic and neuro-fuzzy approaches were applied as hybrid methods. Other methods used for further processing were Fisher's linear discriminant analysis (FLDA), the discrete cosine transform, the DWT and principal component analysis (PCA).
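The split-then-merge verification idea described above can be illustrated with a minimal, one-level Haar-style lifting sketch. This is a generic textbook lifting step, not the authors' exact LSDWT/ILSDWT, and the sample signal is arbitrary:

```python
# One level of a Haar-style lifting scheme. "lift" splits a signal into
# even- and odd-indexed samples, predicts each odd sample from its even
# neighbour (the residual is the detail component) and updates the evens
# into the coarse component. "unlift" reverses the steps exactly, which
# mirrors the merge-based verification (ILSDWT) described above. This is
# a generic textbook lifting step, not the authors' exact transform.

def lift(signal):
    even = signal[0::2]
    odd = signal[1::2]
    detail = [o - e for o, e in zip(odd, even)]         # predict step
    coarse = [e + d / 2 for e, d in zip(even, detail)]  # update step
    return coarse, detail

def unlift(coarse, detail):
    even = [c - d / 2 for c, d in zip(coarse, detail)]  # undo update
    odd = [e + d for e, d in zip(even, detail)]         # undo predict
    merged = []
    for e, o in zip(even, odd):                         # interleave (merge)
        merged += [e, o]
    return merged

x = [9, 7, 3, 5, 6, 10, 2, 6]           # arbitrary even-length sample signal
coarse, detail = lift(x)
assert unlift(coarse, detail) == x      # perfect reconstruction
print(coarse, detail)
```

Because every lifting step is invertible by construction, the merge exactly restores the original samples, which is what makes it usable as a verification pass.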

This chapter discusses an algorithm developed for the formation of noise-free corpuses using relevant geometrical parameters, which aid the authentication of a subject using human gait and face images. The complexity of the developed algorithm is also discussed using a case study on changes in the subject's getup. The trained data is matched with the test data for the best fit, which involves the application of forward–backward dynamic programming (FBDP), fuzzy set rules and genetic algorithms (GA).

### **2. Modelling for biometrical authentication**


#### **2.1. Brief literature survey on human gait as a biometrical trait**

The analysis of human walking movements, or gait, has been an ongoing area of research since the advent of the still camera in 1896. Since that time, many researchers have investigated the dynamics of human gait in order to fully understand and describe the complicated process of upright bipedal motion, as suggested by Boyd et al. [1], Nixon et al. [2], Murray et al. [3] and Ben-Abdelkader et al. [4].

Considerable research has been carried out exploiting the analysis of this motion, including clinical gait analysis, used for rehabilitation purposes, and biometric gait analysis, used for automatic person identification.

In 2002, Ben-Abdelkader et al. [4] proposed a parametric method to automatically identify people in low-resolution videos by estimating the height and stride parameters of their gait. Later, in 2004, Ben-Abdelkader et al. [5] proposed a method of interpreting gait as synchronised, integrated movements of hundreds of muscles and joints in the body. Kale and his colleagues [6] carried out work on an appearance-based approach to gait recognition; in their work, the width of the outer contour of the binarized silhouette of a walking person is chosen as the basic image feature. Huang et al. [7] proposed an approach in 1998 which recognised people by their gait from a sequence of images, using a statistical approach that combined eigenspace transformation with canonical-space transformation for the feature transformation of spatial templates. In 1997, Cunado et al. [8] proposed evidence-gathering techniques, developed as a model of the motion of human thighs that provides an automatic gait signature. In 2002, Phillips et al. [9] proposed a baseline algorithm for the challenge of identifying humans using gait analysis. In the same year, Phillips [10] and his colleagues worked on the baseline algorithm, investigating problematic variations in gait identification such as viewpoint, footwear and walking surface. In 2008, Jack M. Wang et al. [11] introduced Gaussian process dynamic models for non-linear time-series analysis to develop models of human pose and motion captured from high-dimensional data. The model proposed by Wang et al. [11] comprises a low-dimensional latent space with associated dynamics, as well as a map from the latent space to an observation space. In the same year, Sina Samangooei and M. S. Nixon [12] proposed a set of semantic traits discernible by humans at a distance, outlining their psychological validity. In 2011, Imed Bouchrika et al. [13] investigated the translation of gait biometrics for forensic use. They used ankle, knee and hip locations to derive a measure of match for image sequences of walking subjects; the match was achieved by instantaneous posture matching, which determines the difference between the positions of a set of human vertices. In 2011, Sinha et al. [14] proposed a technique for the detection of abnormal footprints using human gait images. In 2012, Chen Wang et al. [15] proposed a strategy for the formation of a human gait corpus using a temporal-information-preserving gait template. Until now, much research has been dedicated to the recognition of individuals by finding foot problems through human gait images; however, very little work has investigated progressive–regressive strategies for biometric authentication using human gait images. With the progressive–regressive strategy in mind, the extraction of features and their recognition for biometric authentication using DWT and soft-computing techniques are discussed thoroughly here using human gait images. In this chapter, biometric authentication through human gait is the topic of the first case study, and the human face from the side-view that of the second. The literature survey on the human face is given in Section 2.2 of this chapter.

#### **2.2. Brief literature survey on human face as biometrical trait**

Research on the human face has been carried out for six decades. From the literature, it has been observed that this work was completed not only by researchers from the field of engineering and technology but also from the field of medical sciences. Automatic face recognition is one of the prime components of any biometrical study and has progressed gradually over the past sixty years. A thorough review report was written, and the issues for further research in this area were investigated.

During the past few decades, a considerable amount of research into human face recognition has dealt with the immensely challenging variability in head pose, lighting intensity and direction, facial expression and aging. A great deal of progress has been made by many researchers in improving the performance of human face recognition, and a number of human face recognition algorithms based on two-dimensional intensity images have been developed during this time. Turk et al. [16] suggested principal component analysis (PCA) as the best method for describing the distribution of human face images within the entire image space; the resulting vectors define the subspace of a human face image, called the face space. Kirby et al. [17] developed an extension of PCA, namely modular eigen-spaces. Hu Y. Jiang et al. [18] proposed a strategy using one neutral frontal-view image of the human face, creating synthetic images under different poses and expressions to aid recognition. A similar idea, but a very new approach, was proposed by Lee et al. [19], who presented a combination of an edge model and a colour region model for human face recognition after synthesising the image using a 3D model. In the same year, Michal Valstar et al. [20] attempted to measure a large range of facial behaviour by recognising facial action units (generally atomic facial signals) that produce expressions; the proposed system performs action-unit recognition using temporal templates as input data. Jolly D. Shah et al. [21] presented a multiple-human-face detection method based on skin colour information and a *lines of separability* face model, together with a recognition method based on principal component analysis and an artificial neural network. The face detection method uses a YCbCr colour model and sigma control limits for variation in its colour components. In 2007, Richa Singh et al. [22] described a human face mosaicking scheme that generates a composite human face image during registration or training, based on the evidence provided by the frontal-view and semi-profile human face images of an individual. In this scheme, the side-view profile images are aligned with the frontal image using a hierarchical registration algorithm, which exploits neighbourhood properties to determine the transformation relating the two images. In 2008, Edward Kao et al. [23] addressed the process of automatically tracking people in video sequences, which is currently receiving a great deal of interest within the computer vision research community. In this work, they contrasted the performance of the popular mean-shift algorithm's gradient-descent-based search strategy with a more advanced swarm intelligence technique and proposed a particle swarm optimisation algorithm to replace the gradient-descent search. They also combined the swarm-based search strategy with a probabilistic data association filter state estimator to perform the track association and maintenance stages. In the same year, Xiaozheng Zhang et al. [24] presented a novel appearance-based approach to face recognition using frontal and side-views of human face images to tackle pose variation; this has great potential in forensic and security applications involving police mugshot databases. In 2011, Li Cheng et al. [25] examined the problem of segmenting foreground objects in live videos by formulating background subtraction as the minimisation of a penalised instantaneous risk, yielding local online discriminative algorithms that can quickly adapt to temporal changes. In the same year, Hossian et al. [26] surveyed several important research works published in this area and proposed new technology to identify a person using multimodal physiological and behavioural biometrics. In 2013, Tilendra Shishir Sinha et al.
[27] continued this research using human gait and human face images for the recognition of behavioural and physiological traits of the subject. This research adopted a vast amount of logical concepts from soft-computing techniques for the recognition of those behavioural and physiological traits. Geometrical features are defined as functions of one or more qualities of objects that are capable of distinguishing objects from each other. Generally, the human face image feature vector has been built from geometric parameters of moment, shape, switching and texture features. These parameters are reasonably robust to varying conditions and are capable of describing the quality of subjects. The two basic feature-extraction techniques are the geometric and holistic approaches, as suggested by J-H, Na et al. [28]. The geometric approach selects individual features and characteristics of the human face image based on geometrical relational parameters. The holistic approach selects complete features and characteristics of the human face image based on calculations of principal component analysis, Fisher's linear discriminant analysis, independent component analysis, soft-computing techniques and the forward–backward dynamic programming method.

As per the methods proposed by Mohammad et al. [29] and Heng et al. [30], both approaches have been applied because of some acceptable benefits to the research work with respect to fast recognition. The main advantage of using such methods is the calculation over features with reduced dimensionality, obtained by projecting the original data onto the basis vectors. As a matter of fact, at the initial switching of the frame of the human-face image, considering the side-view of the face of the subject, the neuron fires and hence the human-face muscle activates. Stefanos et al. [31] analysed this further in 2013 by considering frame-by-frame data of the human face. These frames of data have been fed as input for the computation of additional parameters in steps: first the real-valued, second the neutral, third the normalized and finally the optimized and normalized parameter is computed.

From the literature discussed so far, it has been observed that very few researchers have adopted DWT, soft-computing tools and their hybrid approaches for the recognition of a human face from the side-view (parallel to the image plane). It has also been found that, over the last six decades, research in automatic face recognition has been carried out intensively worldwide in the field of biometrical studies, and it can be summarized by the following changes:

**•** From maximum likelihood to discriminative approaches (genetic algorithm methods).

**•** From template-matching approaches to knowledge-based approaches.

**•** From distance-based to likelihood-based methods.

**•** From no commercial biometrical applications to commercial biometrical applications.

The literature has also evidenced that knowledge-based models still play a vital role in biometrical research work. There remains scope for automatic human-face recognition using the innovative approach of DWT, soft-computing tools and their hybrid approaches. Thorough mathematical formulations are presented in Section 2.3 of this chapter, considering both human gait and human face images.

#### **2.3. Mathematical formulation for biometrical authentication**

Biometrical authentication on human gait and human face has been investigated using the progressive–regressive strategy and the implementation of DWT and soft-computing techniques. The soft-computing techniques involve artificial neural networks, genetic algorithms and fuzzy set theory. For the computation of human gait and human face features, the firing concepts of artificial neural networks have been incorporated. From the literature and through the experimental setup, it has been found that a neuron fires through a sigmoid function when its output exceeds its threshold value. Each neuron has input and output characteristics and performs a computation of the form given in equation (1):

$$O_i = f\left(S_i\right) \text{ and } S_i = W^T X \tag{1}$$

where $X = (x_1, x_2, x_3, \dots, x_m)$ is the vector input to the neuron, $W$ is the weight matrix, with $w_{ij}$ the weight (connection strength) of the connection between the $j$th element of the input vector and the $i$th neuron, $W^T$ is the transpose of the weight matrix, $f(\cdot)$ is an activation or nonlinear function (usually a sigmoid), $O_i$ is the output of the $i$th neuron and $S_i$ is the weighted sum of the inputs.

**Figure 1.** A simple artificial neuron.

The real power comes when single neurons are combined into a multi-layer structure called an artificial neural network. The neuron has a set of nodes, called synapses, which connect it to the inputs, the output or other neurons. A linear combiner is a function that takes all inputs and produces a single value. Let the input sequence be {X1,X2,...,XN} and the synaptic weights be {W1,W2,W3,....,WN}; the output of the linear combiner, Y, then yields equation (2):

$$Y = \sum\_{i=1}^{N} X\_i W\_i \tag{2}$$

An activation function takes any input from minus infinity to infinity and squeezes it into the range –1 to +1 or the interval 0 to 1. Usually the activation function is taken to be a sigmoid function, which is given in equation (3):

$$f(Y) = \frac{1}{1 + e^{-Y}} \tag{3}$$

The threshold defines the internal activity of the neuron, which is fixed to –1. In general, for the neuron to fire or activate, the sum should be greater than the threshold value. This has been analysed further by considering frame by frame data of human walking.
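As an illustration, the single-neuron computation of equations (1)–(3) can be sketched as follows; the function name, the default threshold of –1 (as stated above) and the sample inputs are illustrative assumptions, not from the chapter.

```python
import math

def neuron_output(inputs, weights, threshold=-1.0):
    """Single neuron: linear combiner (eq. 2) followed by a sigmoid
    activation (eq. 3); the neuron 'fires' when the output exceeds
    the threshold."""
    s = sum(x * w for x, w in zip(inputs, weights))  # Y = sum(Xi * Wi)
    out = 1.0 / (1.0 + math.exp(-s))                 # f(Y) = 1 / (1 + e^-Y)
    fired = out > threshold
    return out, fired

# Example with arbitrary inputs and weights.
out, fired = neuron_output([0.5, -0.2, 0.8], [0.4, 0.1, 0.7])
```

Because the sigmoid output is always positive, any threshold of –1 is always exceeded, so a single neuron of this form always fires; the threshold only becomes selective for larger values.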

The human gait images have been fed as input for the computation of more additional parameters. The additional parameters are firstly the real-valued, secondly the neutral, thirdly the normalised and finally the optimised and normalised parameters.

Mathematically, this is discussed below:

Consider that 'Z' number of frames have been read, as FRAME1, FRAME2, FRAME3, FRAME4, ..., FRAMEZ-2, FRAMEZ-1, FRAMEZ. The whole process has been carried out adhering to the following guidelines:

**•** Read the first frame from left to right.

**•** Extract the step-length parameter, P1L.

**•** Similarly read the first frame from right to left.

**•** Similarly extract the step-length parameter, P1R.

**•** Compute an average step-length parameter F1avg = (P1L + P1R) / 2.

where

**•** P1L signifies the step length of the first frame in the left-to-right direction,

**•** P1R signifies the step length of the first frame in the right-to-left direction, and

**•** F1avg signifies the average step length of the first frame over both directions.

Repeat the above process for the next frames. Hence it yields the average step-length measures as real-valued parameters, such as: F2avg = (P2L + P2R) / 2, F3avg = (P3L + P3R) / 2, F4avg = (P4L + P4R) / 2, F5avg = (P5L + P5R) / 2, ..., F(Z-1)avg = (P(Z-1)L + P(Z-1)R) / 2,

$$\mathbf{F}\_{\text{Zavg}} = \left(\mathbf{P}\_{\text{ZL}} + \mathbf{P}\_{\text{ZR}}\right) / \mathbf{2} \tag{4}$$

Next, to compute the neutral parameter, consider the 'even' and 'odd' frames separately. Let 'Nodd' and 'Neven' be the number of odd and even frames respectively. Hence, the neutral parameters yield:

$$\mathbf{F}\_{\text{Oddavg}} = \left(\mathbf{F}\_{\text{1avg}} + \mathbf{F}\_{\text{3avg}} + \dots + \mathbf{F}\_{\text{(2Z-1)avg}}\right) / \mathbf{N}\_{\text{odd}} \tag{5}$$

$$\mathbf{F}\_{\text{Evenavg}} = \left(\mathbf{F}\_{\text{2avg}} + \mathbf{F}\_{\text{4avg}} + \dots + \mathbf{F}\_{\text{(2Z)avg}}\right) / \mathbf{N}\_{\text{even}} \tag{6}$$

The normalised parameters for each frame have been computed further. For the 'odd' frames the solution yields:

$$\begin{aligned} \mathbf{F}\_{\text{Norm1}} &= \frac{\mathbf{F}\_{\text{1avg}} - \mathbf{F}\_{\text{Oddavg}}}{\mathbf{F}\_{\text{Oddavg}}}, \mathbf{F}\_{\text{Norm3}} = \frac{\mathbf{F}\_{\text{3avg}} - \mathbf{F}\_{\text{Oddavg}}}{\mathbf{F}\_{\text{Oddavg}}}, \mathbf{F}\_{\text{Norm5}} = \frac{\mathbf{F}\_{\text{5avg}} - \mathbf{F}\_{\text{Oddavg}}}{\mathbf{F}\_{\text{Oddavg}}}, \dots, \\ \mathbf{F}\_{\text{Norm(2Z-1)}} &= \frac{\mathbf{F}\_{\text{(2Z-1)avg}} - \mathbf{F}\_{\text{Oddavg}}}{\mathbf{F}\_{\text{Oddavg}}} \end{aligned} \tag{7}$$

Similarly for the 'even' frames:
$$\begin{aligned} \mathbf{F}\_{\text{Norm2}} &= \frac{\mathbf{F}\_{\text{2avg}} - \mathbf{F}\_{\text{Evenavg}}}{\mathbf{F}\_{\text{Evenavg}}}, \mathbf{F}\_{\text{Norm4}} = \frac{\mathbf{F}\_{\text{4avg}} - \mathbf{F}\_{\text{Evenavg}}}{\mathbf{F}\_{\text{Evenavg}}}, \mathbf{F}\_{\text{Norm6}} = \frac{\mathbf{F}\_{\text{6avg}} - \mathbf{F}\_{\text{Evenavg}}}{\mathbf{F}\_{\text{Evenavg}}}, \dots, \\ \mathbf{F}\_{\text{Norm(2Z)}} &= \frac{\mathbf{F}\_{\text{(2Z)avg}} - \mathbf{F}\_{\text{Evenavg}}}{\mathbf{F}\_{\text{Evenavg}}} \end{aligned} \tag{8}$$

Further computing the average neutral and normalised parameters (NNP) for 'odd' and 'even' components, the solution yields to:

$$\mathbf{F}\_{\text{NormOddavg}} = \left(\mathbf{F}\_{\text{Norm1}} + \mathbf{F}\_{\text{Norm3}} + \dots + \mathbf{F}\_{\text{Norm(2Z-1)}}\right) / \mathbf{N}\_{\text{odd}} \tag{9}$$

$$\mathbf{F}\_{\text{NormEvenavg}} = \left(\mathbf{F}\_{\text{Norm2}} + \mathbf{F}\_{\text{Norm4}} + \dots + \mathbf{F}\_{\text{Norm(2Z)}}\right) / \mathbf{N}\_{\text{even}} \tag{10}$$

The next step is to compute the average NNP for each frame of the dataset, the solution yields to:

$$\begin{aligned} \mathbf{F}\_{\text{NNP1}} &= \frac{\mathbf{F}\_{\text{Norm1}} - \mathbf{F}\_{\text{NormOddavg}}}{\mathbf{F}\_{\text{NormOddavg}}}, \mathbf{F}\_{\text{NNP3}} = \frac{\mathbf{F}\_{\text{Norm3}} - \mathbf{F}\_{\text{NormOddavg}}}{\mathbf{F}\_{\text{NormOddavg}}} \\ \mathbf{F}\_{\text{NNP5}} &= \frac{\mathbf{F}\_{\text{Norm5}} - \mathbf{F}\_{\text{NormOddavg}}}{\mathbf{F}\_{\text{NormOddavg}}}, \dots, \mathbf{F}\_{\text{NNP(2Z-1)}} = \frac{\mathbf{F}\_{\text{Norm(2Z-1)}} - \mathbf{F}\_{\text{NormOddavg}}}{\mathbf{F}\_{\text{NormOddavg}}} \end{aligned} \tag{11}$$

Similarly for 'even' frames:

$$\begin{aligned} \mathbf{F}\_{\text{NNP2}} &= \frac{\mathbf{F}\_{\text{Norm2}} - \mathbf{F}\_{\text{NormEvenavg}}}{\mathbf{F}\_{\text{NormEvenavg}}}, \mathbf{F}\_{\text{NNP4}} = \frac{\mathbf{F}\_{\text{Norm4}} - \mathbf{F}\_{\text{NormEvenavg}}}{\mathbf{F}\_{\text{NormEvenavg}}} \\ \mathbf{F}\_{\text{NNP6}} &= \frac{\mathbf{F}\_{\text{Norm6}} - \mathbf{F}\_{\text{NormEvenavg}}}{\mathbf{F}\_{\text{NormEvenavg}}}, \dots, \mathbf{F}\_{\text{NNP(2Z)}} = \frac{\mathbf{F}\_{\text{Norm(2Z)}} - \mathbf{F}\_{\text{NormEvenavg}}}{\mathbf{F}\_{\text{NormEvenavg}}} \end{aligned} \tag{12}$$
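The chain of computations in equations (4)–(8) can be sketched as follows; the function and variable names are illustrative assumptions, with odd/even frames taken in 1-based order. The NNP stage of equations (9)–(12) repeats the same averaging-and-normalising pattern on the outputs.

```python
def gait_parameters(step_lengths_lr, step_lengths_rl):
    """Real-valued, neutral and normalised gait parameters.
    step_lengths_lr / step_lengths_rl: per-frame step lengths PiL, PiR."""
    # Eq. (4): real-valued parameter per frame, Fiavg = (PiL + PiR) / 2.
    f_avg = [(pl + pr) / 2 for pl, pr in zip(step_lengths_lr, step_lengths_rl)]
    # Eqs. (5)-(6): neutral parameters over odd / even frames (1-based).
    odd, even = f_avg[0::2], f_avg[1::2]
    f_oddavg = sum(odd) / len(odd)
    f_evenavg = sum(even) / len(even)
    # Eqs. (7)-(8): normalised parameters, deviation relative to the mean.
    norm_odd = [(f - f_oddavg) / f_oddavg for f in odd]
    norm_even = [(f - f_evenavg) / f_evenavg for f in even]
    return f_avg, (f_oddavg, f_evenavg), (norm_odd, norm_even)
```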

In general, the feature vectors are of high dimensionality. In the present work, for better results during the classification and recognition process, the dimensions of these feature vectors have been reduced using the forward–backward dynamic programming method. To illustrate this method mathematically, the following initial conditions were set:


Assume two distinguished human gait walking patterns, say x(ti) and x(tj), are defined, each with its own time base, ti and tj. Also assume that the beginning and end of the walking pattern are known, denoted as (tis, tif) and (tjs, tjf) respectively. If both patterns are sampled at the same rate, then both patterns begin at sample i = j = 1 without any loss of generality. Thus the mapping function, i = j · (I / J), is linearly related. As human gait patterns appear non-linear, non-linear time warping functions are calculated under several assumptions. Let the warping function, w(k), be defined as a sequence of points: c(1), c(2), ....., c(k), where c(k) = (i(k), j(k)) is the matching of the point i(k) on the first time base with the point j(k) on the second time base.

Further illustration has shown that human gait can be distinguished into five possible directions:

**•** Horizontally left-to-right movement and horizontally right-to-left movement (parallel to the x-axis).

**•** Vertically (parallel to the y-axis).

**•** Diagonally left to right and diagonally right to left (45 degrees to the x-axis).

**•** Diagonally right to left and diagonally left to right (135 degrees to the x-axis).

**•** Circularly clockwise direction and circularly anti-clockwise direction.

Setting the initial conditions, let the search window be restricted to the limit:

**•** Limit the area over which the search has to be performed.

**•** Searching must be performed using constraints for the computation of the best dynamic characteristics of human gait.

From Figure 2, the warping, w(k), only allows us to compare the appropriate parts of x(ti) with those of x(tj). Setting the monotonic and continuity conditions on the warping function restricts the relations between four consecutive warping points, c(k), c(k-1), c(k+1) and c(kk), where kk signifies +/- or -/+.

Thus from Figure 2, there are eight ways to get to the point c(i,j), which have been given in equations (13), (14), (15) and (16) below:

$$\mathbf{c(k)} = \mathbf{c(i,j)}\tag{13}$$

$$\mathbf{c}\left(\mathbf{k}-\mathbf{1}\right) = \begin{cases} \left(i(k), j(k)-1\right) \\ \left(i(k)-1, j(k)-1\right) \\ \left(i(k)-1, j(k)\right) \end{cases} \tag{14}$$

#### Progressive-Regressive Strategy for Biometrical Authentication http://dx.doi.org/10.5772/61786 11

$$\mathbf{c}\left(\mathbf{k}+\mathbf{1}\right) = \begin{cases} \left(i(k), j(k)+1\right) \\ \left(i(k)+1, j(k)+1\right) \\ \left(i(k)+1, j(k)\right) \end{cases} \tag{15}$$

$$\mathbf{c}\left(\mathbf{kk}\right) = \begin{cases} \left(i(k)-1, j(k)+1\right) \\ \left(i(k)+1, j(k)-1\right) \end{cases} \tag{16}$$

And the boundary condition for circular movements yields to:

$$\mathbf{c}\left(\mathbf{K}\right) = \left(\mathbf{I},\ \mathbf{J}\right) \tag{17}$$

By the boundary condition, the matching of the beginning and the end of the walking pattern and the tracing of the optimal route for normal walking have been analysed using the forward-backward dynamic programming method. To formulate this method for the tracing of the best match, the walking patterns have been represented at each point by their feature vectors, βi(k) and βj(k), where βi(k) denotes the feature vector of the walking pattern x(ti) and βj(k) denotes the feature vector of the walking pattern x(tj). The distance between the two feature vectors is defined by:

$$\mathbf{d}\left(\mathbf{c}\left(\mathbf{k}\right)\right) = \mathbf{d}\left(\mathbf{i}\left(\mathbf{k}\right), \mathbf{j}\left(\mathbf{k}\right)\right) = \begin{vmatrix} \boldsymbol{\beta}\_i(\boldsymbol{k}) & - & \boldsymbol{\beta}\_j(\boldsymbol{k}) \end{vmatrix} \tag{18}$$

The warping function is then assessed, so that the performance index D(x(ti ),x(tj )) gets minimised. The performance index is the normalised average weighted distance, which has been related as:

$$\mathbf{D}\left(\mathbf{x}\left(\mathbf{t}\_{i}\right),\mathbf{x}\left(\mathbf{t}\_{j}\right)\right) = \underset{\mathbf{w}}{\text{Min}} \left[\frac{\sum\_{k=1}^{K} d(\mathbf{c}(k))\rho(k)}{\sum\_{k=1}^{K} \rho(k)}\right] \tag{19}$$

where ρ(k) are the weights, whose sum yields I + J; thus equation (19) results in:

$$\mathbf{D}\left(\mathbf{x}\left(\mathbf{t}\_{i}\right),\mathbf{x}\left(\mathbf{t}\_{j}\right)\right) = \frac{1}{I+J}\underset{\mathbf{w}}{\text{Min}}\left[\sum\_{k=1}^{K}d(\mathbf{c}(k))\rho(k)\right] \tag{20}$$

On substituting the values of equations (13), (14), (15) and (16) in equation (20), each point in the search window has been attached with information for an optimal match up to its destination point (I, J). This approach to searching is said to be a forward technique of dynamic programming. After scanning is terminated, the construction of an optimal match is carried out by going backward from the (I, J) point to the (0, 0) or (1, 1) point. This approach is said to be a backward technique of dynamic programming, and the reversal process is a forward technique of dynamic programming. The combination of this two-way searching technique results in the forward-backward dynamic programming searching method. For an optimal solution, a minimum number of divergence values must be found. Thus, to compute the divergence values for an optimal solution, let the probability of getting a feature vector, β, given that it belongs to some class wi, be p(β/wi); similarly for the class wj, it is p(β/wj). The sum of the average logarithmic ratio between these two conditional probabilities yields information concerning the separability between the two classes and shows that there is no loss to the concept. This gives the divergence values of the features. Thus the mathematical formulation yields:
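Under the predecessor moves of equation (14) and the distance of equation (18), the forward pass of this search can be sketched as follows for scalar features per frame. This is a minimal sketch: `dtw_distance` is an assumed name, uniform weights are used so the normalisation reduces to dividing by I + J as in equation (20), and the backward trace from (I, J) is omitted.

```python
def dtw_distance(beta_i, beta_j):
    """Forward pass of the warping search (eqs. 18-20): dynamic
    programming over the I x J grid, accumulating d(c(k)) along the
    three predecessor moves of eq. (14), normalised by I + J."""
    I, J = len(beta_i), len(beta_j)
    INF = float("inf")
    D = [[INF] * J for _ in range(I)]
    D[0][0] = abs(beta_i[0] - beta_j[0])  # d(c(1)), eq. (18)
    for i in range(I):
        for j in range(J):
            if i == 0 and j == 0:
                continue
            d = abs(beta_i[i] - beta_j[j])
            # Predecessors allowed by the monotonic/continuity conditions.
            prev = min(
                D[i - 1][j] if i > 0 else INF,          # (i(k)-1, j(k))
                D[i][j - 1] if j > 0 else INF,          # (i(k), j(k)-1)
                D[i - 1][j - 1] if i and j else INF,    # (i(k)-1, j(k)-1)
            )
            D[i][j] = d + prev
    return D[I - 1][J - 1] / (I + J)  # eq. (20) normalisation
```

Identical patterns give a distance of zero; the backward reconstruction of the optimal path would follow the stored minima from (I, J) back to (1, 1).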

$$\mathbf{D}\_{i,j} = \left(\boldsymbol{\mu}\_{i} - \boldsymbol{\mu}\_{j}\right)\left(\boldsymbol{\mu}\_{i} - \boldsymbol{\mu}\_{j}\right)^{\mathrm{T}} \boldsymbol{\Sigma}^{-1} \tag{21}$$

where μi and μj denote the class means (expectations) and Σ denotes the covariance.

From equation (21), divergence values have been calculated for up to 19 feature vectors.
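For the special case of a diagonal covariance, equation (21) reduces to a sum of squared mean differences weighted by the inverse variances, which can be sketched as below; the function name and arguments are illustrative assumptions, not the chapter's implementation.

```python
def divergence(mu_i, mu_j, var):
    """Divergence of eq. (21) for the diagonal-covariance case:
    D = sum over features of (mu_i - mu_j)^2 / sigma^2."""
    return sum((a - b) ** 2 / v for a, b, v in zip(mu_i, mu_j, var))
```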

The mathematical analysis for the detection of behavioural trait through human gait image has been formulated using two features: step length and walking speed.

Let the source be 'S' and the destination be 'D'. Also assume that normally this distance is achieved in 'T' steps, so 'T' frames or samples of images are required. Consider the first frame, with the left foot (FL) at the back and the right foot (FR) at the front, and the (x,y) coordinates for the first frame such that FL(x1,y1) and FR(x2,y2). Thus, applying the Manhattan distance measure, the step length has been computed and it yields to:

$$step\text{-}length = \left| \mathbf{x}\_2 - \mathbf{x}\_1 \right| + \left| y\_2 - y\_1 \right| \tag{22}$$

Normally, Tact steps are required to reach the destination. From equation (22), T1 has to be calculated for the first frame. Similarly, for the 'nth' frame, Tn has to be calculated. Thus the total steps calculated are:

$$\mathbf{T\_{calc}} = \mathbf{T\_1} + \mathbf{T\_2} + \mathbf{T\_3} + ... \dots + \mathbf{T\_n} \tag{23}$$

Thus walking speed or walking rate has been calculated and it yields to:

$$walking\text{-}speed = \begin{cases} norm, & \text{if } T\_{act} = T\_{calc} \\ fast, & \text{if } T\_{act} < T\_{calc} \\ slow, & \text{if } T\_{act} > T\_{calc} \end{cases} \tag{24}$$
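Equations (22)–(24) can be sketched directly; the function names and foot-coordinate tuples below are illustrative assumptions.

```python
def step_length(fl, fr):
    """Manhattan step length (eq. 22) between the left-foot (x1, y1)
    and right-foot (x2, y2) coordinates of a frame."""
    (x1, y1), (x2, y2) = fl, fr
    return abs(x2 - x1) + abs(y2 - y1)

def walking_speed(t_act, t_calc):
    """Walking-speed label of eq. (24) from the actual and the
    calculated total step counts."""
    if t_act == t_calc:
        return "norm"
    return "fast" if t_act < t_calc else "slow"
```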

Two measures, one of accuracy and the other of precision, have been derived to assess the performance of the overall system, which has been formulated as:

$$\text{accuracy} = \frac{\text{Correctly recognized features}}{\text{Total number of features}} \tag{25}$$

$$\text{Precision} = \frac{\text{TPD}}{\text{TPD} + \text{FPD}} \tag{26}$$

where TPD = true positive detection and FPD = false positive detection.

Further analysis has been carried out for the classification of behavioural traits with two target classes (normal and abnormal). It has been further illustrated that the corpus developed in the present work has various states, each of which corresponds to a segmental feature vector. In one state, the segmental feature vector is characterised by nineteen parameters. Considering only three parameters of the step length, namely distance, mean and standard deviation, the model yields to the equation:

$$\text{AHGM}\_1 = \left( \mathbf{D}\_{s1}, \boldsymbol{\mu}\_{s1}, \boldsymbol{\sigma}\_{s1} \right) \tag{27}$$

where AHGM1 means an artificial human gait model of the first feature vector, Ds1 means the distance, μs1 means the mean and σs1 means the standard deviation based on step length. Let wnorm and wabnorm be the two target classes representing 'normal behaviour' and 'abnormal behaviour' respectively. The clusters of features have been estimated by taking the probability distribution of these features. This has been achieved by employing Bayes' decision theorem. Let P(wi) be the probabilities of the classes, such that i = 1, 2, ..., M, and let p(β/wi) be the conditional probability density. Assume a test human gait image represented by the features, β. Then the conditional probability p(wj/β), that β belongs to the jth class, is given by Bayes' rule as:

$$P\left(w\_{j} / \beta\right) = \frac{p\left(\beta / w\_{j}\right)P\left(w\_{j}\right)}{p\left(\beta\right)} \tag{28}$$

So, for the class j = 1 to 2 the probability density function p(β), yields:

$$p\left(\boldsymbol{\beta}\right) = \sum\_{j=1}^{2} p(\boldsymbol{\beta} / \boldsymbol{w}\_{j}) P(\boldsymbol{w}\_{j}) \tag{29}$$

Equation (28) gives a posteriori probability in terms of a priori probability P(wj ). Hence it is quite logical to classify the signal, β, as follows:

If P(wpositive/β) > P(wnegative/β), then the decision yields β Є wpositive, meaning 'positive biometric authentication'; else the decision yields β Є wnegative, meaning 'negative biometric authentication'. If P(wpositive/β) = P(wnegative/β), then it remains undecided, or there may be a 50% chance of being right when making a decision. In this situation, further analysis was conducted using the fuzzy c-means clustering technique.
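The two-class Bayes rule of equations (28)–(29) and the decision rule above can be sketched as follows; the function name, the equal default priors and the likelihood arguments are assumptions for illustration.

```python
def bayes_decision(likelihood_pos, likelihood_neg,
                   prior_pos=0.5, prior_neg=0.5):
    """Posterior for each class via eqs. (28)-(29), then the chapter's
    decision rule; 'undecided' when the posteriors are equal."""
    # Eq. (29): evidence p(beta) as the sum over both classes.
    evidence = likelihood_pos * prior_pos + likelihood_neg * prior_neg
    # Eq. (28): posteriors.
    post_pos = likelihood_pos * prior_pos / evidence
    post_neg = likelihood_neg * prior_neg / evidence
    if post_pos > post_neg:
        return "positive biometric authentication"
    if post_pos < post_neg:
        return "negative biometric authentication"
    return "undecided"
```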

Similarly, the relevant physiological traits have to be extracted from the frontal human face images, and template matching has to be employed for the recognition of behavioural traits. Little work has been completed in the area of human face recognition by extracting features from the side-view of the human face. When frontal images are tested for recognition with minimum orientation of the face or the image boundaries, the performance of the recognition system degrades. A path between pixels 'pix1' and 'pixn' is a sequence of pixels pix1, pix2, pix3, ....., pixn-1, pixn, such that pixk is adjacent to pixk+1, for 1 ≤ k < n. Thus a connected component has to be obtained from the path, defined from a set of pixels, which in turn depends upon the adjacent positions of the pixels in that path. In order to compute the orientation using the reducing strategy, the phase angle must initially be calculated for the original image.

Let Ik be the side-view of an image with orientation 'k'. If k = 90, then I90 is the image with the actual side-view. Let the real and imaginary components of this oriented image be Rk and Ak. For an orientation k:

$$\left| I\_k \right| = \left[ \mathbf{R}\_k^2 + \mathbf{A}\_k^2 \right]^{1/2} \tag{30}$$

For the k = 90° orientation:

$$\left| I\_{90} \right| = \left[ \mathbf{R}\_{90}^{2} + \mathbf{A}\_{90}^{2} \right]^{1/2} \tag{31}$$

Thus the phase angle of an image with k = 90 orientations is:

$$\phi\_k = \tan^{-1} \left[ \frac{A\_k}{R\_k} \right] \tag{32}$$

If k = k-5, (applying the reducing strategy), equation (32) yields:

$$\phi\_{k-5} = \tan^{-1} \left[ \frac{A\_{k-5}}{R\_{k-5}} \right] \tag{33}$$

There will be a lot of variation in the output between equations (32) and (33). Hence these must be normalised by applying the logarithm to both equations:

$$\varphi\_{\text{normalized}} = \log\left(1 + \left(\phi\_k - \phi\_{k-5}\right)\right) \tag{34}$$

Taking the covariance of (34) yields the perfect orientation between the two side-views of the images, that is, I90 and I85:

$$\mathbf{I}\_{\text{perfect-orientation}} = \mathbf{Cov} \left( \phi\_{\text{normalised}} \right) \tag{35}$$
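The per-orientation quantities of equations (30), (32) and (34) can be sketched as follows. The function names are assumptions; `math.atan2` is used as a numerically robust stand-in for the tan⁻¹(A/R) of equation (32).

```python
import math

def magnitude(re, im):
    """|I_k| of eq. (30) from the real and imaginary components."""
    return (re ** 2 + im ** 2) ** 0.5

def phase(re, im):
    """Phase angle of eq. (32); atan2 handles the re == 0 case that
    a plain atan(im / re) would not."""
    return math.atan2(im, re)

def normalised_phase_diff(phi_k, phi_k5):
    """Normalised difference of eq. (34): log(1 + (phi_k - phi_{k-5}))."""
    return math.log(1 + (phi_k - phi_k5))
```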

The distances between the connected components have to be computed using the Euclidean distance method. A perfect matching has to be undertaken with best-fit measures using the genetic algorithm. If the matching fails, then the orientation is reduced further by 5°, that is, k = k−5, and the process is repeated till k = 45°. The combination of this two-way searching technique results in the forward–backward dynamic programming searching method. For an optimal solution, a minimum number of divergence values results. Thus, to compute the divergence values for an optimal solution, let the probability of getting a feature vector, β, given that it belongs to some class wi, be p(β/wi); similarly for the class wj, it is p(β/wj). The sum of the average logarithmic ratio between the two conditional probabilities results in information concerning the separability between the two classes. It has also been discovered that there is no loss to the concept. This gives the divergence values of the features. Thus the mathematical formulation yields:

$$\mathbf{D}\_{i,j} = \left(\boldsymbol{\mu}\_{i} - \boldsymbol{\mu}\_{j}\right)\left(\boldsymbol{\mu}\_{i} - \boldsymbol{\mu}\_{j}\right)^{\mathrm{T}} \boldsymbol{\Sigma}^{-1} \tag{36}$$

where μi and μj denote the class means (expectations) and Σ denotes the covariance.

From equation (36), divergence values have been calculated for up to nineteen feature vectors. These divergence values have been categorised into basic metrics: *true positive (TP), true negative (TN), false positive (FP)* and *false negative (FN)*. These metric values are useful for further analysis. In the present work, five assessments have been analysed: the *false positive rate (FPR),* the *false negative rate (FNR), sensitivity (SV), specificity (SC)* and *accuracy (AC)*. The assessment, *false positive rate (FPR)*, is the segmentation of the object on a test image resulting in incomplete correct data. Mathematically, it yields to:

$$FPR = \frac{FP}{FP + TN} \tag{37}$$

The assessment, *false negative rate (FNR)*, is the segmentation of the object of interest of a test image resulting in completely incorrect data.

Mathematically, it yields to:


$$FNR = \frac{FN}{FN + TP} \tag{38}$$

The *sensitivity (SV)* assessment involves positive values of the object of interest on the test image being proportioned properly and being recognised with full capacity.

Mathematically, it yields to:

$$\text{Sensitivity} = \frac{\text{Number of true positives}}{\text{Number of true positives} + \text{Number of false negatives}} \times 100\tag{39}$$

The *specificity (SC)* assessment involves negative values of the object of interest being proportioned properly and being recognised with full capacity.

Mathematically, it yields to:

$$Specificity = \frac{\text{Number of true negatives}}{\text{Number of true negatives} + \text{Number of false positives}} \times 100\tag{40}$$

The *accuracy (AC)* assessment involves the measured and weighted values of the object of interest being classified properly resulting in linearity.

Mathematically, it yields to:

$$Accuracy = \frac{TP + TN}{TP + FP + TN + FN} \times 100\tag{41}$$
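The five assessments of equations (37)–(41), together with the precision of equation (26), follow directly from the basic metrics; the sketch below uses an assumed function name and returns them in one dictionary.

```python
def assessments(tp, tn, fp, fn):
    """FPR, FNR, sensitivity, specificity, accuracy (eqs. 37-41) and
    precision (eq. 26) from the counts TP, TN, FP, FN."""
    return {
        "FPR": fp / (fp + tn),                              # eq. (37)
        "FNR": fn / (fn + tp),                              # eq. (38)
        "sensitivity": 100 * tp / (tp + fn),                # eq. (39)
        "specificity": 100 * tn / (tn + fp),                # eq. (40)
        "accuracy": 100 * (tp + tn) / (tp + fp + tn + fn),  # eq. (41)
        "precision": tp / (tp + fp),                        # eq. (26)
    }
```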

#### **3. Solution methodology for the progressive–regressive strategy**

The investigation of biometrical authentication on human gait and human face has been undertaken using the progressive-regressive strategy, implementing the wavelet transform and soft-computing techniques. Initially, the analysis was carried out using the wavelet transform; then soft-computing techniques like the artificial neural network, the genetic algorithm and fuzzy set theory were applied. In this chapter, the solution methodology for biometrical authentication is carried out based on the progressive–regressive strategy. The algorithms for the formation of the corpus (for both human gait and human face), the progressive strategy, the regressive strategy and the final authentication of biometrics are given below.

#### **a. Algorithm for the formation of corpus using human face images:**

**1.** Read the front-view of human face image and convert it into a grayscale image.




**2.** Perform filtering for the removal of noise from the grayscale image.

**3.** Employ morphological components for obtaining the thinning and thickening image.

**4.** Crop the image after the extraction of features and relevant parameters.

**5.** Employ the Radon transform normalisation technique for computing muscle activation.

**6.** Set the angle values for the parallel projection of data, say theta1, theta2, theta3 and theta4.

**7.** Set four counter values for parallel projections as 12, 18, 36 and 90.

**8.** Employ the inverse Radon transform method for the regeneration of cropped images with parallel projections.

#### **b. Algorithm for the progressive strategy for biometrical authentication:**

#### **c. Algorithm for the regressive strategy for biometrical authentication:**

**1.** Read the side-view of the human face image (test image) and convert it into a grayscale image.

**2.** Initialise a regressive-switching angle, say theta\_reg = 90.

**3.** Perform filtering for the removal of noise from the image and select a region of interest and also an object of interest.

**4.** Perform morphological image processing for the thinning and thickening of the objects.

**5.** Crop the image and extract features along with relevant parameters.

**6.** Employ statistical methods of computing, such as cross-correlation and autocorrelation with deviation of neighbouring pixels, using 4-pair and 8-pair concepts of pixel pairing.

**7.** Employ the fuzzy c-means clustering method for the computation of behavioural patterns. Hence compute the mean of the clusters.

**8.** Compute the distance measure of the extracted features of the test image and the parameters that are stored in the corpus (formed through trained images).

**9.** Compare the patterns for the best fit using forward-backward dynamic programming of artificial neural network and validate the whole process using genetic algorithm. If the best-fit testing fails, then increment the regressive-switching angle theta\_reg by five degrees and repeat Step 3. For fast processing, decrement by ten degrees.

**10.** Perform classification and characterisation using a support vector machine and hence a decision is made for recognition.

**11.** Compute the divergence values of metrics. Hence plot the results.

#### **d. Algorithm for the validation of biometric authentication:**

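The regressive search described above (initialise theta\_reg, measure the distance to the corpus, and widen the angle until the best-fit test passes) can be sketched as a simple loop. This is only an illustrative skeleton: `extract`, `threshold` and the Euclidean distance stand in for the chapter's feature extraction, genetic-algorithm validation and distance measure, which are not specified at code level.

```python
import math

def euclidean(a, b):
    # distance between a test-image feature vector and a corpus entry
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def regressive_match(extract, corpus, threshold, theta_start=90, step=5):
    """Widen the switching angle until the nearest-corpus distance
    passes the best-fit test; return (angle, distance) or None."""
    theta = theta_start
    while 0 <= theta <= 180:
        features = extract(theta)
        d = min(euclidean(features, ref) for ref in corpus)
        if d <= threshold:          # best-fit test passed
            return theta, d
        theta += step               # increment theta_reg and repeat
    return None                     # no fit found in the angular range

# toy usage with a hypothetical angle-dependent feature extractor
corpus = [[1.0, 2.0], [1.1, 2.1]]
extract = lambda theta: [theta / 100.0, 2.0]
print(regressive_match(extract, corpus, threshold=0.2))
```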

## **4. Experimental results and discussions of case studies**

First a human gait image is captured through eight digital cameras placed at a known fixed distance and fed as input for the investigation. The image is then enhanced and the loss-less compression technique, discrete cosine transform, is applied for the removal of distortions. Further, it is segmented for contour detection and the relevant physiological features are extracted. All the features of the human gait image are stored in a corpus called the automatic human gait model. Relevant biometrical features in covariant mode (wearing no footwear) are extracted in the initial investigation. The relevant physiological features, that is, step length and knee-to-ankle distance, are also extracted.

After the extraction of relevant features, a limited number of parameters are utilised for the formation of a corpus. For this, the Radon transform method and its inverse mechanism are applied, and the relevant output of the algorithm is shown in Figure 2, Figure 3 and Figure 4.
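The parallel-projection step can be sketched with a rotate-and-sum stand-in for the Radon transform (the chapter does not give an implementation; `radon_profile` and the toy image are our assumptions):

```python
import numpy as np
from scipy.ndimage import rotate

def radon_profile(image, angles):
    """Parallel-beam projections: rotate the image by each angle and
    sum down the columns (a minimal stand-in for the Radon transform)."""
    return np.stack([
        rotate(image, -float(a), reshape=False, order=1).sum(axis=0)
        for a in angles
    ])

# a vertical bar: its 0-degree profile is narrow, the 90-degree one is wide
img = np.zeros((32, 32))
img[4:28, 14:18] = 1.0
profiles = radon_profile(img, [0, 90])
```

This mirrors the observation below that the 90-degree projection of an elongated structure has a wider profile than the 0-degree one.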

From Figure 2 and Figure 3, it is observed that the projection at 90 degrees has a wider profile than at a 0 degree projection. This means the energy and intensity value of muscle activation and contraction appears at a maximum when the angle for parallel projection of extracted data is at 90 degrees. The behavioural pattern matching of test datasets stored in a corpus called automatic human gait model (AHGM) for one subject is shown in Figure 4.

From Figure 4, it can be observed that three traits or moods of behaviour are analysed: over act, normal act and under act. The behaviour is normal when there are no perturbations in the behavioural characteristic curve. When a large number of perturbations is present in the behavioural characteristic curve, it is under act behaviour. When a smaller number of perturbations is present, it is over act behaviour. This is further illustrated for both trained and test datasets in Figure 5.

This is also illustrated for both trained and test datasets, considering two different subjects shown in Figure 6.

This is also illustrated for both trained and test datasets, considering both the frames of walking cycle, that is odd and even cycle, which is shown in Figure 7.

**Figure 2.** Muscle activation and contraction with projection count 36 and 90.

**Figure 3.** Muscle activation and contraction with projection count 12 and 18.

The clusters of features for the detection of the behavioural pattern or trait of human gait are plotted using the fuzzy c-means clustering method, and the result is shown in Figure 8.
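A minimal fuzzy c-means, assuming the standard membership and centre updates (the chapter does not state its parameter settings; the fuzzifier `m = 2` and the toy data are our choices):

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means: returns cluster centres and the membership
    matrix U (one row per sample; each row sums to 1)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m                                    # fuzzified memberships
        centres = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted cluster means
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))                 # standard FCM update
        U = inv / inv.sum(axis=1, keepdims=True)
    return centres, U

# toy 1-D data: two well-separated groups
X = np.array([[0.0], [0.1], [0.2], [10.0], [10.1], [10.2]])
centres, U = fuzzy_c_means(X, 2)
print(sorted(centres.ravel().tolist()))  # two centres, near 0.1 and 10.1
```

A behavioural pattern could then be summarised by the mean of each cluster, as in step 7 of the regressive algorithm.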

A boundary is formed, as shown in Figure 9, for the detection of gait code using UTAM technique of artificial neural network.

The overall behavioural pattern for trained and test datasets is shown in Figure 10.

Progressive-Regressive Strategy for Biometrical Authentication. http://dx.doi.org/10.5772/61786

**Figure 4.** Behaviour pattern matching of test data set using AHGM of a subject gait (ten-second walk) with over act, normal act and under act moods.

**Figure 5.** Behavioural pattern matching of the same subject gait (one-second walk) with over act, normal act and under act moods.

Similarly, for human face images, an observation is made for different distance measures. The distance measures, along with the number of wrinkles or edges, the texture of the human face and the normal behavioural pattern, are calculated and depicted in Table 1.


From Table 1 it is observed that the number of wrinkles or edges (NOW) ranges between 2 and 3. These values are extracted for male and female subjects whose age lies between 35 and 50.

**Figure 6.** Behavioural pattern matching of two different subjects' gait (one-second walk) with over act, normal act and under act moods.

**Figure 7.** Behavioural pattern of odd and even frames of the same female subject gait (ten-second walk) with over act, normal act and under act moods.

The method applied for the extraction of wrinkles or edges is morphological processing of the digital image through the Canny and Sobel edge operators. Similarly, the value of the texture of the human face is calculated as unity. The texture is calculated using statistical methods of computation through cross-correlation and auto-correlation. The deviation of neighbouring pixels for texture analysis is also computed using 4-pair and 8-pair computations of neighbouring pixels. The final conclusion on the texture calculation is made

**Figure 8.** Clusters of features for the detection of the behavioural pattern using fuzzy c-means algorithm of a subject gait (ten-second walk).

**Figure 9.** Boundary for gait code detection using UTAM and best fit detection using genetic algorithm of a subject gait (ten-second walk).


using the forward-backward dynamic programming method of soft-computing techniques. Further, from Table 1 it is observed that the normal behavioural pattern (NBP) ranges from 10 to 15. These values are extracted using the fuzzy c-means clustering method with a mean value of the clusters for the normal behavioural pattern. The graphical representation of the parameters extracted from the frontal part of the human face image is shown in Figure 11.

**Figure 10.** Overall behavioural pattern of a subject gait (ten-second walk) for both trained and test datasets.


**Table 1.** Distance measures of parameters and features of the human face of ten subjects.

For recognition of the human face, the side-view image is considered as a test data sample. Initially, a side-view of the human face which is parallel to the x-axis with zero degree orientation is fed as a test data sample. Then preprocessing techniques of digital image processing are applied and a result is obtained. The techniques applied are: the loss-less compression technique; DWT for obtaining the detail and coarser components of the switching pattern; statistical methods for the computation of the mean covariance of the transformed vectors; and principal component analysis for the computation of eigenvectors and eigenvalues. The results obtained from the side-view human face test image are shown in Figure 12, Figure 13 and Figure 14, along with brief discussions and observations.

**Figure 11.** Graphical representation of the parameters extracted from the human face image.

| Data source | FHW | ELD | LCD | ECD | END | NOW | TOF | NBP |
|---|---|---|---|---|---|---|---|---|
| Img1 | 40.26 | 50.23 | 6.07 | 56.30 | 46.96 | 2.00 | 1.00 | 10.00 |
| Img2 | 40.23 | 50.26 | 6.05 | 56.31 | 46.95 | 3.00 | 1.00 | 11.00 |
| Img3 | 40.05 | 50.29 | 6.00 | 56.29 | 46.98 | 2.00 | 1.00 | 10.50 |
| Img4 | 40.15 | 50.19 | 6.06 | 56.25 | 46.99 | 2.00 | 1.00 | 12.00 |
| Img5 | 40.28 | 50.18 | 6.10 | 56.28 | 47.02 | 2.00 | 1.00 | 12.05 |
| Img6 | 40.27 | 50.16 | 6.09 | 56.25 | 47.01 | 3.00 | 1.00 | 13.50 |
| Img7 | 40.24 | 50.21 | 6.08 | 56.29 | 47.03 | 2.00 | 1.00 | 12.50 |
| Img8 | 40.12 | 50.22 | 6.01 | 56.23 | 47.04 | 2.00 | 1.00 | 13.60 |
| Img9 | 40.09 | 50.27 | 6.04 | 56.31 | 46.94 | 2.00 | 1.00 | 12.06 |
| Img10 | 40.19 | 50.09 | 6.03 | 56.12 | 46.89 | 3.00 | 1.00 | 13.75 |


**Figure 12.** The loss-less compression of a human face test image captured from a side-view.

After performing the loss-less compression on the test image, further calculations are carried out such as coarser and detail components of the switching pattern using DWT and the results are plotted in Figure 13.

Figure 13 shows the calculation of detail and coarser components, which are then utilised for further calculation of odd and even components of the human face image. In the present work, this is achieved by employing the lifting and inverse-lifting schemes of DWT. Further analysis and outcomes are plotted in Figure 14.
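The lifting and inverse-lifting idea can be illustrated with the Haar wavelet, whose predict/update steps are the simplest case (a sketch; the chapter does not state which wavelet it uses):

```python
import numpy as np

def haar_lift(x):
    """One level of the Haar lifting scheme on an even-length signal:
    split into even/odd samples, predict, then update."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even           # predict: detail component
    coarse = even + detail / 2.0  # update: coarser component (pair averages)
    return coarse, detail

def haar_unlift(coarse, detail):
    """Inverse lifting: undo the update and predict steps exactly."""
    even = coarse - detail / 2.0
    odd = detail + even
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out

x = np.arange(1.0, 9.0)  # toy signal 1..8
c, d = haar_lift(x)
print(c)  # coarser component: pairwise averages
print(d)  # detail component: pairwise differences
```

For a 2-D face image the same even/odd split would be applied along rows and then columns.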

**Figure 13.** DWT of the test image captured from the side-view of the human face.

**Figure 14.** Principal component analysis of the test image captured from the side-view of a human face.

From Figure 14, it is observed that the transformed eigenvectors and their corresponding eigenvalues are extracted for analysing the switching pattern. The switching angle is gradually increased. The initial analysis is carried out with a five degree progressive displacement; later the same is done with a ten degree increment. The comparison of the progressive switching patterns for odd multiples of frames of human face images is computed and shown in Figure 15.
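The eigenvector/eigenvalue extraction can be sketched as a standard PCA on mean-centred feature vectors (an illustration, not the chapter's exact pipeline; the toy data are our assumption):

```python
import numpy as np

def pca(X):
    """Eigen decomposition of the covariance matrix of mean-centred
    feature vectors; eigenpairs returned in descending eigenvalue order."""
    Xc = X - X.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(vals)[::-1]
    return vals[order], vecs[:, order]

# toy features: the second coordinate is almost a multiple of the first,
# so one eigenvalue should dominate
rng = np.random.default_rng(0)
t = rng.normal(size=200)
X = np.column_stack([t, 3.0 * t + 0.1 * rng.normal(size=200)])
vals, vecs = pca(X)
print(vals)  # dominant first eigenvalue, small second one
```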

**Figure 15.** Comparison of the switching pattern for odd frames of the test image.

**Figure 16.** Normal distribution of progressive switching pattern for odd frames of the test image.


From Figure 15 it is observed that for the first frame, with a five degree orientation, the extracted parameters are matched; however, the fitness test failed. Hence further analysis is carried out to achieve the best fit. This is done using progressive switching of the human face with a ten degree displacement. Finally, it is found that most parameters follow the normal pattern of the trained dataset stored in the corpus. Hence the best-fit measures are carried out and further analysis for classification and recognition is performed using the genetic algorithm of soft-computing techniques. Further, the normal and cumulative distributions of the progressive switching patterns of the test image are shown in Figure 16 and Figure 17.

**Figure 17.** Cumulative distribution of progressive switching pattern for odd frames of the test image.

The classification and characterisation process of the progressive switching pattern of the human face test image captured from the side-view is carried out using the support vector machine of artificial neural network. The results found are very remarkable and the plotting is shown in Figure 18.
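The SVM classification step can be illustrated with a linear SVM on two toy pattern clusters (the data and the kernel choice are our assumptions, not the chapter's):

```python
import numpy as np
from sklearn import svm

# two linearly separable clusters of hypothetical switching-pattern features
X = np.array([[1.0, 1.0], [1.2, 0.9], [8.0, 8.2], [7.9, 8.1]])
y = [0, 0, 1, 1]

clf = svm.SVC(kernel="linear")  # linear support vector machine
clf.fit(X, y)
print(clf.predict([[1.1, 1.0], [8.1, 8.0]]))  # -> [0 1]
```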

**Figure 18.** Classification of progressive switching pattern using the support vector machine of artificial neural network.


The divergence values of metrics for the test image captured from the side-view of the human face are shown in Table 2 and plotted in Figure 19.

**Table 2.** Divergence values of metrics for the human face captured from the side-view of ten subjects.

**Figure 19.** Graphical representation of divergence values of metrics of the test image of the human face, captured from the side-view for ten different subjects.

#### **5. Conclusion and further scope**


The results gained so far must be further analysed using the fan-beam projection method in order to obtain more accurate values for biometric authentication through human gait and human face images. The volume of the corpus must also be increased and further analysis carried out with the developed algorithm. Statistical and high-end computing measures must also be taken using known algorithms from the literature. The analysis will also be based on performance measures with an optimal number of parameters for the validation of biometric authentication.

Furthermore, a substantial set of results has been obtained and analysed, with an acceptable recognition rate for the human face captured from the side-view. There is still scope for carrying the research work further on the obtained results. The algorithm developed for the recognition of the human face from the side-view must be analysed further for its complexity under worst-case conditions. To achieve such goals, high-end computing measures have to be carried out using advanced mathematical formulations and known algorithms from the literature. The performance measures with an optimal number of parameters required for the recognition of the human face from the side-view must also be analysed.

## **Author details**

Tilendra Shishir Sinha1\*, Raj Kumar Patra2, Rohit Raja2, Devanshu Chakravarty2 and Ravi Prakash Dubey3

\*Address all correspondence to: tssinha1968@gmail.com

1 School of Engineering and Information Technology, MATS University, Raipur, Chhattisgarh State, India

2 Computer Science and Engineering Department, Dr. C.V. Raman University, Bilaspur, Chhattisgarh State, India

3 Dr. C. V. Raman University, Bilaspur, Chhattisgarh State, India

#### **References**


[1] Boyd J. E., and Little J. J., *Biometric gait recognition*, Springer-Verlag Berlin Heidelberg, LNCS 3161, pp 19-42, 2005.

[2] Nixon M. S., Carter J. N., Cunado D., Huang P. S., and Stevenage S. V., *Automatic gait recognition*, Eds. Biometrics: personal identification in networked society, pp 231-250, 1999.

[3] Murray M. P., Drought A. B., and Kory R. C., *Walking patterns of normal men*, Journal of bone and joint surgery, vol. 46-A(2), pp 335-360, 1964.

[4] Ben-Abdelkader C., Cutler R., and Davis L., *Gait recognition using image self-similarity*, EURASIP Journal on applied signal processing, vol. 4, pp 572-585, 2004.

[5] Ben-Abdelkader C., Cutler R., and Davis L., *Person identification using automatic height and stride estimation*, In 16th International conference on pattern recognition, Quebec, Canada, pp 377-380, August, 2002.


[18] Hu Y., Jiang D., Yan S., Zhang H., *Automatic 3D reconstruction for face recognition*, in the proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition, pp. 843-850, 2000.

[19] Lee C.H., Park S.W., Chang W., Park J.W., *Improving the performance of multi-class SVMs in the face recognition with nearest neighbour rule*, in the proceedings of the IEEE International Conference on Tools with Artificial Intelligence, pp. 411-415, 2003.

[20] Michal Valstar, Ioannis Patras and Maja Pantic, *Facial action unit recognition using temporal templates*, in the proceedings of the IEEE International Workshop on Robot and Human Interactive Communication, Okayama, Japan, September 20-22, 2004.

[21] Jolly D. Shah and S.H. Patil, *Biometric authentication based on detection and recognition of multiple face in image*, in the proceedings of IC-3, 2008.

[22] Richa Singh, Mayank Vatsa, Arun Ross and Afzel Noore, *A mosaicing scheme for pose-invariant face recognition*, IEEE Transactions on Systems, Man and Cybernetics, vol. 37, no. 5, 2007.

[23] Edward Kao, Peter VanMaasdam, John Sheppard, *Image-based tracking with particle swarms and probabilistic data association*, in IEEE Swarm Intelligence Symposium, St. Louis, MO, USA, September 21-23, 2008.

[24] Xiaozheng Zhang, Yongsheng Gao, Maylor K. H. Leung, *Recognizing rotated faces from frontal and side views: an approach toward effective use of mugshot databases*, IEEE Transactions on Information Forensics and Security, vol. 3, no. 4, 2008.

[25] Li Cheng, Minglun Gong, Dale Schuurmans and Terry Caelli, *Real-time discriminative background subtraction*, IEEE Transactions on Image Processing, vol. 20, no. 5, 2011.

[26] Hossain S.M.E. and Chetty G., *Human identity verification by using physiology and behavioral biometric traits*, International Journal of Bioscience and Bioinformatics, vol. 1, no. 3, 2011.

[27] Tilendra Shishir Sinha, Devanshu Chakravarty, Raj Kumar Patra and Rohit Raja, *Modelling and simulation for the recognition of physiological and behavioral traits through human gait and face images*, book chapter in "Discrete Wavelet Transforms – a compendium of new approaches and recent applications", InTech Open, ISBN 978-953-51-0940-2, pp. 95-125, 2013, edited by Dr. Awad Kh. Al-Asmari. DOI: 10.5772/52565.

[28] Na J.-H., Park M.-S., and Choi J.-Y., *Pre-clustered principal component analysis for fast training of new face databases*, International Conference on Control, Automation and Systems, pp. 1144-1149, 2007.

[29] Mohammad Said El-Bashir, *Face recognition using multi-classifier*, Applied sciences journal, vol. 6, no. 45, pp. 2235-2244, 2012.

[30] Heng Fui Liau and Dino Isa, *New illumination compensation method for face recognition*, International Journal of Computer and Network Security, vol. 2, no. 45, pp. 5-12, 2010.

[31] Stefanos Zafeiriou, Gary A. Atkinson, Mark F. Hansen, William A. P. Smith, Vasileios Argyriou, Maria Petrou, Melvyn L. Smith and Lyndon N. Smith, *Face recognition and verification using photometric stereo: the photo-face database and a comprehensive evaluation*, IEEE Transactions on Information Forensics and Security, vol. 8, no. 1, 2013.
