**3. Vision sensor-based gait analysis**

To avoid attaching sensors to the tested subjects, we propose a vision sensor-based gait analysis method. This method is composed of four parts: input video frame decomposition, pre-processing, feature extraction and gait analysis. The input video captures the side view of the subjects while they perform a 6-min brisk walking test. Then, the input video is decomposed into individual frames for further processing.

In the pre-processing part, there are three components to extract the silhouette of the tested subject: background subtraction, shadow removal and connected component labelling (CCL). Background subtraction subtracts the background image from the current video frame to capture the moving object. Because the human shadow cannot be removed by background subtraction, we adopt Jamie and Richard's method [17] to solve the shadow problem. Some noise still cannot be removed, so we use the connected component labelling method to reduce the noise and keep the largest object.

In the feature extraction part, there are two steps to obtain the desired features: segmentation and feature extraction. In the segmentation part, we first find the centre of gravity (COG) and use the COG point to perform body segmentation and extract the subject's legs. Then, we obtain features such as pace distance and pace velocity in the feature extraction part.

In the gait analysis part, we divide the subjects into two groups, Bad and Good, according to the proposed respiratory index formula. We use a support vector machine (SVM) to perform classification and an adaptive network-based fuzzy inference system (ANFIS) to perform prediction.

#### **3.1. Pre-processing**

#### *3.1.1. Background subtraction*

We take the first frame of the input video as the background image. The background subtraction method is shown in Eq. (1). The components *x* and *y* are the pixel location. The factor *t* is the current frame number. *I* represents the RGB value of the pixel located at (*x*, *y*) and *F* is the subtraction result. In our experiment, we set *Th* to 15

$$F = \left| I\left(\mathbf{x}, \mathbf{y}, t\right) - B\left(\mathbf{x}, \mathbf{y}, 1\right) \right| > Th \tag{1}$$
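For illustration, a minimal Python sketch of Eq. (1) follows. The chapter does not specify how the three RGB channels are combined for the comparison, so this sketch assumes a pixel is foreground when any channel differs by more than *Th*:

```python
import numpy as np

def background_subtraction(frame, background, th=15):
    """Foreground mask per Eq. (1): |I(x, y, t) - B(x, y, 1)| > Th.

    frame/background: HxWx3 uint8 RGB images. The per-channel rule is an
    assumption; the text leaves the channel combination unspecified.
    """
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff.max(axis=2) > th).astype(np.uint8) * 255  # 255 = foreground
```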

#### *3.1.2. Shadow removal*

After subtracting the background image, some interferences still exist. The result of background subtraction is shown in **Figure 1**. The human shadow is treated as foreground, and we need to remove it. We adopt Jamie and Richard's [17] method to solve the shadow problem. The method is divided into two parts: (1) brightness distortion and (2) chromatic distortion.

**Figure 1.** (a) Input frame and (b) background subtraction result.


*Ii* is the ith pixel of the input frame, which can be represented in RGB space by the vector *Ii* = [IR(*i*), IG(*i*), IB(*i*)], as shown in **Figure 2**. *Ei* is the ith pixel of the background image, which can be represented as *Ei* = [ER(*i*), EG(*i*), EB(*i*)]. The lengths of these vectors are the intensities of the ith pixel. The projection of *Ii* onto *Ei* is denoted as *αi Ei*. We call *αi* the brightness distortion and can solve for it by Eq. (2)

**Figure 2.** Colour representation in RGB space.

$$\alpha_i = \arg\min_{\alpha_i} \left\| I_i - \alpha_i E_i \right\|^2 \tag{2}$$

$$\alpha_i < \tau_{BD} : \text{Foreground} \tag{3}$$

There is a threshold *τ*BD. We take those pixels whose *αi* values are smaller than the threshold *τ*BD as foreground, as expressed in Eq. (3). In our experiment, we set the threshold *τ*BD at 0.7.

In the chromatic distortion part, we calculate the distance in RGB space between *Ii* and *Ei*. **Figure 2** shows this distance as the line CD*i*, and we compute its value by Eq. (4)

$$CD_i = \left\| I_i - \alpha_i E_i \right\| \tag{4}$$

In the same way, we also set a threshold *τ*CD to determine whether a pixel is background or foreground, as expressed in Eqs. (5) and (6). Those pixels whose CD*i* values are greater than the threshold *τ*CD are viewed as foreground, and those with smaller values as background. In our experiment, we set the threshold *τ*CD at 10.

$$CD_i > \tau_{CD} : \text{Foreground} \tag{5}$$

$$\text{Otherwise} : \text{Background} \tag{6}$$

After finishing these two parts above, we combine these two images and the background subtraction result together to have a result without shadow.
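A compact sketch of this shadow test follows. It uses the closed-form least-squares solution of Eq. (2), *αi* = (*Ii* · *Ei*)/‖*Ei*‖², and flags a pixel as foreground when either Eq. (3) or Eq. (5) holds; treating the combination as a union is our assumption, since the chapter does not state how the partial results are merged:

```python
import numpy as np

def shadow_free_foreground(frame, background, tau_bd=0.7, tau_cd=10.0):
    """Brightness/chromatic distortion test in the spirit of [17].

    tau_bd and tau_cd follow the thresholds given in the text (0.7, 10).
    """
    I = frame.reshape(-1, 3).astype(np.float64)
    E = background.reshape(-1, 3).astype(np.float64)
    # Eq. (2): alpha_i minimizing ||I_i - alpha_i*E_i||^2 has a closed form
    alpha = (I * E).sum(axis=1) / np.maximum((E * E).sum(axis=1), 1e-9)
    # Eq. (4): chromatic distortion is the length of the residual vector
    cd = np.linalg.norm(I - alpha[:, None] * E, axis=1)
    # Eqs. (3), (5), (6): foreground by brightness OR chromatic distortion
    fg = (alpha < tau_bd) | (cd > tau_cd)
    return fg.reshape(frame.shape[:2])
```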

#### *3.1.3. Connected component labelling*

Connected component labelling is used to detect the connected components and label them. Each component receives its own label number. **Figure 3** shows an example: **Figure 3(a)** shows three different regions and **(b)** shows the labels of the regions. We keep the largest group as our result; in this case, the region with label 3 is kept and the other regions are discarded.

**Figure 3.** (a) Three disconnected components and (b) labelling image.

Though we have an image without shadow, some noise remains. We apply the connected component labelling method to obtain an image without this noise. **Figure 4** shows the flow of pre-processing.

**Figure 4.** The flow of pre-processing.
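As a sketch of the largest-component step, using OpenCV's connected-components routine (the chapter does not name a particular implementation):

```python
import cv2
import numpy as np

def keep_largest_component(mask):
    """Label connected components and keep only the largest one (CCL)."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    if n <= 1:                     # no foreground component found
        return np.zeros_like(mask)
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])  # row 0 = background
    return np.where(labels == largest, 255, 0).astype(np.uint8)
```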

#### **3.2. Feature extraction**


After the pre-processing, we obtain the complete target silhouette. In this section, we separate the legs from the extracted silhouette by segmentation. Then, we extract gait features such as pace distance and pace velocity in the feature extraction part.

#### *3.2.1. Segmentation*

In the segmentation part, we need to find the centre of gravity first. Then, we use the extracted silhouette to obtain the contour by edge detection in order to build the distance map (DM). We can separate the legs from the human silhouette using the DM.

We find the centre of gravity of the extracted human silhouette by Eq. (7). After finding the COG of the whole body (COG*x*, COG*y*), we apply edge detection to the human silhouette to extract the human contour, as shown in **Figure 5**. Then, we build a DM by computing the Euclidean distance between (COG*x*, COG*y*) and the extracted human contour, as shown in **Figure 6(b)**. We compute the distance map by Eq. (8), where (*xi*, *yi*) is the location of the ith pixel of the extracted human contour.

$$(COG\_x, COG\_y) = (\frac{\sum\_{i=1}^{N} Body\_{xi}}{N}, \frac{\sum\_{i=1}^{N} Body\_{yi}}{N})\tag{7}$$

$$DM = \sqrt{\left(COG\_x - \mathbf{x}\_i\right)^2 + \left(COG\_y - \mathbf{y}\_i\right)^2} \tag{8}$$

**Figure 5.** (a) Silhouette image and (b) the contour image of (a).
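A sketch of Eqs. (7) and (8) is given below. The chapter does not name the edge detector, so Canny is assumed here:

```python
import cv2
import numpy as np

def cog_and_distance_map(silhouette):
    """COG by Eq. (7) and distance map by Eq. (8) for a binary silhouette."""
    ys, xs = np.nonzero(silhouette)            # coordinates of all body pixels
    cog_x, cog_y = xs.mean(), ys.mean()        # Eq. (7)
    contour = cv2.Canny(silhouette, 100, 200)  # edge detection (assumed: Canny)
    cys, cxs = np.nonzero(contour)
    dm = np.sqrt((cog_x - cxs) ** 2 + (cog_y - cys) ** 2)  # Eq. (8)
    return (cog_x, cog_y), dm
```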

We find three points, Nlegs, Nb2(l) and Nb2(r), from the DM, as shown in **Figure 6**. Connecting Nb2(l) and Nb2(r) divides the human body into top and bottom parts. Connecting (COG*x*, COG*y*) and Nlegs separates the legs into leg(l) and leg(r), as shown in **Figure 7**.

**Figure 6.** (a) Finding COG(*x*, *y*) in silhouette. (b) Distance map of COG(*x*, *y*).

**Figure 7.** Separated legs.
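The chapter does not detail how the three points are located on the DM, so the sketch below assumes they are already available and only illustrates the two dividing lines, with Nb2(l) left of Nb2(r) and Nlegs below the COG in image coordinates:

```python
import numpy as np

def separate_legs(silhouette, cog, n_legs, nb2_l, nb2_r):
    """Split the lower body into leg(l) and leg(r) using two dividing lines."""
    h, w = silhouette.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    (x1, y1), (x2, y2) = nb2_l, nb2_r
    # Pixels below the Nb2(l)-Nb2(r) line form the lower body (y grows down)
    below = (x2 - x1) * (ys - y1) - (y2 - y1) * (xs - x1) > 0
    (cx, cy), (nx, ny) = cog, n_legs
    # The COG-Nlegs line splits the lower body into a left and a right side
    left = (nx - cx) * (ys - cy) - (ny - cy) * (xs - cx) > 0
    body = silhouette > 0
    return body & below & left, body & below & ~left     # leg(l), leg(r)
```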

#### *3.2.2. Gait features*

In this part, we extract gait features such as pace distance, pace time and pace velocity. **Figure 8** shows a pace cycle model, from which we extract pace distance (*Dp*), pace time (*Tp*) and pace velocity (*Vp*) using Eq. (9). The distance value of the pace model comes from the horizontal distance between leg(l) and leg(r), such as D1. When the feet are closest together, we take that distance as D2. D1 and D2 are the longest and shortest distance values, respectively. T1 and T2 are the start and end frame numbers of the step, respectively, so we multiply their difference by the frame time to get the actual *Tp*. The frame time is 1/30 s per frame in our experiment. Then, *Vp* comes from dividing *Dp* by *Tp*.

$$\begin{cases} D_p = D1 - D2 \\ T_p = (T2 - T1) \times FrameTime \end{cases} \tag{9}$$

**Figure 8.** The pace cycle model.
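Eq. (9) translates directly into code; the 1/30 s frame time is the value stated above:

```python
def pace_features(d1, d2, t1, t2, frame_time=1.0 / 30.0):
    """Eq. (9): pace distance, pace time and pace velocity for one step.

    d1/d2: longest/shortest horizontal leg distances of the step;
    t1/t2: start/end frame numbers; frame_time: seconds per frame.
    """
    dp = d1 - d2
    tp = (t2 - t1) * frame_time
    vp = dp / tp
    return dp, tp, vp
```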

#### **3.3. Gait analysis**


To verify that a correlation exists between pulmonary spirometry and our system, we perform classification and prediction experiments. The subjects are divided into two groups, namely Bad and Good, according to the parameters from the pulmonary spirometer, which serve as our classification standard. We use a support vector machine for classification and an adaptive network-based fuzzy inference system for prediction. Here, we introduce the two tools: SVM and ANFIS.

#### *3.3.1. Support vector machine*

SVM comes from Vapnik's statistical learning theory [18]; it is a machine-learning method that is a powerful tool for learning from data and solving classification problems [18]. In a two-group classification problem such as ours (Bad/Good), the target is to find the *Hyperplane* between the two data groups. SVM finds the *Hyperplane* by looking for the maximum margin between the two groups. The main idea of SVM is to transform the data into a higher dimension and then construct a *Hyperplane* between the two classes in the transformed space. The data vectors nearest to the constructed boundary in the transformed space are called the support vectors; they contain the information about the *Hyperplane*. **Figure 9** shows the concept of the SVM.

**Figure 9.** Example of two-group problem showing optimal *Hyperplane* (dotted line).
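As a minimal sketch of the two-group classification with scikit-learn (the kernel, parameters and feature values below are illustrative assumptions, not taken from the chapter):

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical gait feature vectors (e.g. mean pace distance, mean velocity)
X_train = np.array([[0.4, 0.8], [0.5, 0.9], [0.9, 1.5], [1.0, 1.6]])
y_train = np.array([0, 0, 1, 1])            # 0 = Bad, 1 = Good

clf = SVC(kernel="rbf", C=1.0)              # kernel and C are assumptions
clf.fit(X_train, y_train)
print(clf.predict([[0.95, 1.4]]))           # expected: [1]
```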

#### *3.3.2. Adaptive network-based fuzzy inference system*

ANFIS was presented by Jang in 1993 [19]. An adaptive network-based fuzzy inference system can construct an input-output mapping based on human knowledge through a hybrid-learning algorithm. The fuzzy inference system is realized as an adaptive network: ANFIS uses a five-layer feedforward neural network to construct the inference system.

**Figure 10.** ANFIS structure with two inputs and four rules.

The input space is mapped to a given membership function (MF). Through the membership function, each input becomes a degree between 0 and 1. With different membership functions and different numbers of membership functions, the results differ. **Figure 10** shows the ANFIS structure with two inputs and four rules.

The study [20] explains the function of each layer. In Layer(1), the outputs are the membership degrees of the inputs, given by Eqs. (10) and (11)

$$O_{1,i} = \mu_{A_i}(x), \quad i = 1, 2 \tag{10}$$

$$O_{1,i} = \mu_{B_{i-2}}(y), \quad i = 3, 4 \tag{11}$$

where *x* and *y* are the inputs to node *i*.


Layer(2) involves fuzzy operations. ANFIS fuzzifies the inputs by using the AND operation. The nodes labelled Π perform simple multiplication. Equations (12) and (13) show the output of Layer(2)

$$O_{2,i} = w_i = \mu_{A_1}(x) \times \mu_{B_i}(y), \quad i = 1, 2 \tag{12}$$

$$O_{2,i} = w_i = \mu_{A_2}(x) \times \mu_{B_{i-2}}(y), \quad i = 3, 4 \tag{13}$$

In Layer(3), the label *N* indicates normalization. This layer can be represented by Eq. (14)

$$O_{3,i} = \bar{w}_i = \frac{w_i}{w_1 + w_2 + w_3 + w_4}, \quad i = 1, 2, 3, 4 \tag{14}$$

Layer(4) computes the product of the normalized firing strength and a first-order polynomial of the inputs, as shown in Eq. (15). The parameters *pi*, *qi* and *ri* are determined during the training process

$$O_{4,i} = \bar{w}_i f_i = \bar{w}_i (p_i x + q_i y + r_i), \quad i = 1, 2, 3, 4 \tag{15}$$

Layer(5) computes the sum of all incoming signals, as in Eq. (16)

$$O_{5,i} = \sum_i \bar{w}_i f_i = \frac{\sum_i w_i f_i}{\sum_i w_i} \tag{16}$$
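The five layers can be sketched as a single forward pass for the two-input, four-rule network of **Figure 10**. The Gaussian membership functions and the parameter values are assumptions for illustration; in ANFIS, these parameters are fitted by the hybrid-learning algorithm rather than fixed by hand:

```python
import numpy as np

def gaussian(v, c, s):
    """Gaussian membership function with centre c and width s (assumed MF)."""
    return np.exp(-((v - c) ** 2) / (2 * s ** 2))

def anfis_forward(x, y, mf, consequents):
    # Layer 1: membership degrees, Eqs. (10)-(11)
    a1, a2 = gaussian(x, *mf["A1"]), gaussian(x, *mf["A2"])
    b1, b2 = gaussian(y, *mf["B1"]), gaussian(y, *mf["B2"])
    # Layer 2: rule firing strengths by multiplication, Eqs. (12)-(13)
    w = np.array([a1 * b1, a1 * b2, a2 * b1, a2 * b2])
    w_bar = w / w.sum()                                   # Layer 3: Eq. (14)
    # Layer 4: first-order consequents f_i = p_i*x + q_i*y + r_i, Eq. (15)
    f = np.array([p * x + q * y + r for p, q, r in consequents])
    return float((w_bar * f).sum())                       # Layer 5: Eq. (16)

mf = {"A1": (0.0, 1.0), "A2": (1.0, 1.0), "B1": (0.0, 1.0), "B2": (1.0, 1.0)}
consequents = [(1.0, 0.5, 0.0)] * 4         # (p_i, q_i, r_i), illustrative
print(anfis_forward(0.3, 0.7, mf, consequents))
```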

#### **4. Proposed gait features**

We propose some gait features from *Dp* and *Vp* and denote the *i*th step's *Dp* and *Vp* as *Dpi* and *Vpi*. The mean distance and mean velocity are denoted as *D̄p* and *V̄p*, respectively. In addition, we divide all steps into *S* sections depending on the step counts in order to reveal the variation of the subject's movement during the 6-min brisk walking test. **Figure 11** shows an example in which we divide the *Dp* of all steps into six sections, with *μDi* and *σDi* computed for each section.

**Figure 11.** The six sections of *Dp*.

The mean of the distances in section *i* is denoted *μDi* and the mean of the velocities in section *i* is denoted *μVi*. The variance of the distances in section *i* is denoted *σDi* and the variance of the velocities in section *i* is denoted *σVi*. These parameters are listed in Eq. (17)

$$\begin{cases}
N: \text{total steps}, \quad n = \dfrac{N}{S} \\
\bar{D}_p = \dfrac{1}{N} \sum_{i=1}^{N} D_{pi}, \quad \bar{V}_p = \dfrac{1}{N} \sum_{i=1}^{N} V_{pi} \\
\mu_{D_i} = \dfrac{1}{n} \sum_{j=(i-1)n+1}^{i \cdot n} D_{pj}, \quad \mu_{V_i} = \dfrac{1}{n} \sum_{j=(i-1)n+1}^{i \cdot n} V_{pj} \\
\sigma_{D_i} = \sqrt{\dfrac{1}{n} \sum_{j=(i-1)n+1}^{i \cdot n} \left(D_{pj} - \mu_{D_i}\right)^2}, \quad \sigma_{V_i} = \sqrt{\dfrac{1}{n} \sum_{j=(i-1)n+1}^{i \cdot n} \left(V_{pj} - \mu_{V_i}\right)^2}
\end{cases} \tag{17}$$

From *σDi* and *σVi*, we can calculate the mean of the distance variances and the mean of the velocity variances, denoted as *σD*(1,*S*) and *σV*(1,*S*). The means over the first two sections are denoted as *σD*(1,2) and *σV*(1,2), and the means over the last two sections as *σD*(*S*−1,*S*) and *σV*(*S*−1,*S*), respectively. In addition, we calculate the distance variance ratio and the velocity variance ratio, denoted as *γD* and *γV*. The distance variance ratio is defined as *σD*(1,*S*) multiplied by the result of dividing *σD*(*S*−1,*S*) by *σD*(1,2); the velocity variance ratio is defined analogously with the velocity terms. **Figure 12** shows the regions of these parameters, which are listed in Eq. (18)


$$\begin{cases}
\sigma_{D(1,S)} = \dfrac{1}{S} \sum_{i=1}^{S} \sigma_{D_i}, \quad \sigma_{V(1,S)} = \dfrac{1}{S} \sum_{i=1}^{S} \sigma_{V_i} \\
\sigma_{D(1,2)} = \dfrac{1}{2} \sum_{i=1}^{2} \sigma_{D_i}, \quad \sigma_{V(1,2)} = \dfrac{1}{2} \sum_{i=1}^{2} \sigma_{V_i} \\
\sigma_{D(S-1,S)} = \dfrac{1}{2} \sum_{i=S-1}^{S} \sigma_{D_i}, \quad \sigma_{V(S-1,S)} = \dfrac{1}{2} \sum_{i=S-1}^{S} \sigma_{V_i} \\
\gamma_D = \dfrac{\sigma_{D(1,S)} \times \sigma_{D(S-1,S)}}{\sigma_{D(1,2)}}, \quad \gamma_V = \dfrac{\sigma_{V(1,S)} \times \sigma_{V(S-1,S)}}{\sigma_{V(1,2)}}
\end{cases} \tag{18}$$

**Figure 12.** The variance mean of whole sections (purple region) and first two sections (orange region) and last two sections (green region).
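Eqs. (17) and (18) can be sketched for one series (the same code applies to *Dp* and *Vp*); note that the σ of Eq. (17) is computed as a standard deviation, matching the square root in the formula:

```python
import numpy as np

def section_features(series, s=6):
    """Per-section means and deviations (Eq. 17) and variance ratio (Eq. 18).

    series: per-step values of Dp (or Vp); s: number of sections.
    np.array_split also tolerates lengths not divisible by s, which is an
    implementation convenience rather than something stated in the text.
    """
    sections = np.array_split(np.asarray(series, dtype=float), s)
    mu = np.array([sec.mean() for sec in sections])     # mu_Di / mu_Vi
    sigma = np.array([sec.std() for sec in sections])   # sigma_Di / sigma_Vi
    sigma_1s = sigma.mean()         # sigma_(1,S): mean over all sections
    sigma_12 = sigma[:2].mean()     # sigma_(1,2): first two sections
    sigma_s1s = sigma[-2:].mean()   # sigma_(S-1,S): last two sections
    gamma = sigma_1s * sigma_s1s / sigma_12             # Eq. (18)
    return mu, sigma, gamma
```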

#### **5. Clinical experiment environment**

#### **5.1. Experiment set-up and flow**


The experiments are run at Shuang-Ho Hospital in New Taipei, Taiwan. We film the side view of the subjects while they perform the 6-min brisk walking test. We set up a green curtain to exclude interference, such as the movement of other people, from our experiment. We film the walking subjects with a Nikon P330 digital camera.

Firstly, the therapists record the subjects' profiles, including height, weight and age. Secondly, using a pulmonary spirometer, we obtain the subjects' respiratory parameters such as FEV1 and FVC. Thirdly, before the experiment starts, the therapist helps each subject wear a pulse oximeter on the index finger, which measures blood oxygen and pulse. Fourthly, the subjects take a 2-min break so that the pulse oximeter can record the oxygen and pulse under resting conditions. Fifthly, when the walking test begins, the subjects walk along the trail as fast as possible while we film their side view. Sixthly, after the 6-min walking test, the subjects use the pulmonary spirometer again to measure their post-exercise respiratory parameters.

#### **5.2. Data collection**

We run the experiments from September 2014 to July 2015. In the experiments, there are 60 subjects aged between 24 and 91 years. Among these 60 subjects, there are 48 men and 12 women.

There are two rooms: the subjects walk from the right room to the left one, then turn around and walk back into the right room. When the subjects reach the other side of the trail, they need to turn around and continue walking along the trail. As they approach the border, they decrease their walking speed so that they can turn around easily. To avoid recording those slowdown steps, we abandon them and keep only the normal steps. Taking **Figure 13** as an example, there are six steps in this walking trail. We consider only steps 1 and 6 as normal steps and abandon steps 2–5.

Depending on the respiratory index that comes from Eq. (19), the subjects are divided into three levels: level 1 (the worst respiratory function), level 2 (poor respiratory function) and level 3 (normal respiratory function). **Table 1** shows the respiratory index used to classify the three levels. We denote the respiratory index as *REX*; a smaller *REX* represents worse respiratory function

**Figure 13.** (a) Walking trail before turnaround. (b) Walking trail after turnaround.


$$REX = \text{postFEV1} \times \frac{\text{postFEV1}}{\text{preFEV1}} \times \frac{\text{postIC}}{\text{preIC}} \times \frac{\text{postFVC}}{\text{preFVC}} \times \text{post}\frac{\text{FEV1}}{\text{FVC}} \tag{19}$$


**Table 1.** The respiratory index (*REX*) used to classify the three levels.
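Under this reading of Eq. (19), with postFEV1 as the main item adjusted by the three pre/post ratios and the post-exercise FEV1/FVC term (as the next paragraph describes), *REX* can be computed as:

```python
def respiratory_index(pre_fev1, post_fev1, pre_ic, post_ic, pre_fvc, post_fvc):
    """REX per Eq. (19); the exact grouping of the terms follows the
    textual description and is our reading of the formula."""
    return (post_fev1
            * (post_fev1 / pre_fev1)
            * (post_ic / pre_ic)
            * (post_fvc / pre_fvc)
            * (post_fev1 / post_fvc))    # post FEV1/FVC term
```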

The main item of the *REX* formula is postFEV1; the other items are used to adjust it. The three ratios postFEV1/preFEV1, postIC/preIC and postFVC/preFVC are greater than one in subjects with normal respiratory function but smaller than one in subjects with poor respiratory function. The value of post FEV1/FVC is lower than 0.75 in those subjects who have poor respiratory function. **Figure 14** shows the lung capacity changes of the respiratory factors.

**Figure 14.** Lung capacity changes [21].

The FEV1 is the volume that has been exhaled at the end of the first second of forced expiration. The FVC is the forced vital capacity, which is used for the determination of the vital capacity from a maximally forced expiratory effort. The IC is the inspiratory capacity, the sum of the inspiratory reserve volume and the tidal volume. The FEV1/FVC ratio is used for the diagnosis of obstructive and restrictive lung disease.
