**3. Wavelet background**

In this chapter, we apply wavelet decomposition using several mother wavelets, such as Daubechies. The discrete wavelet transform (DWT) uses a set of basis functions to decompose a signal *x*(*n*) into two resulting signals: a detailed signal and an approximated signal. The first basis function is the scaling function, called the basic dilation function; the second is the main wavelet function. This decomposition is defined by the equation used in [39, 40] and represented as follows:

$$x(n) = \sum_{k \in Z} a_{2^J}(k)\,\phi^{*}_{J,k}(n) + \sum_{j=1}^{J} \sum_{k \in Z} d_{2^j}(k)\,\psi^{*}_{j,k}(n) \tag{1}$$

where *j* is the scale that represents the dilation index, *k* represents the index in time, *J* is the decomposition level, and ∗ denotes complex conjugation. The wavelet and scaling functions are defined as

$$\phi_{j,k}(n) = 2^{-j/2}\,\phi\!\left(2^{-j} n - k\right) \tag{2}$$

$$\psi_{j,k}(n) = 2^{-j/2}\,\psi\!\left(2^{-j} n - k\right) \tag{3}$$

In ϕ*j*,*k*(*n*) and ψ*j*,*k*(*n*), *j* controls the dilation or compression of the scaling and wavelet functions, and *k* controls the translation in time. The functions ϕ*j*,*k*(*n*) and ψ*j*,*k*(*n*) have the essential properties of low-pass and band-pass filters, respectively. The approximation *a*20(*n*) obtained at scale *j* = 0 is equivalent to the original signal *x*(*n*). The signal *a*2*ʲ*(*n*) at lower resolutions represents a smoothed version of *a*2*ʲ*⁻¹(*k*), and the detailed signals *d*2*ʲ*(*n*) are given by the difference between the approximate signals *a*2*ʲ*(*n*) and *a*2*ʲ*⁻¹(*k*). The approximate signals *a*2*ʲ*(*n*) and the detailed signals *d*2*ʲ*(*n*) are given by the following equations:

$$a_{2^j}(n) = \sum_{k} h\!\left(k - 2^j n\right)\, a_{2^{j-1}}(k) \tag{4}$$

$$d_{2^j}(n) = \sum_{k} g\!\left(k - 2^j n\right)\, a_{2^{j-1}}(k) \tag{5}$$

where *h* and *g* represent the coefficients of the discrete low-pass and high-pass filters associated with the scaling function and the wavelet function, respectively. Given that each level of wavelet decomposition generates coefficients of length less than the original signal, it is important to clarify that for the use of the approximation and detail coefficients, it was necessary to perform an interpolation process to adjust the size of the coefficients to the size of the original signal.
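The filtering of Eqs. (4) and (5) and the interpolation step can be sketched in a few lines. This is a minimal illustration, not the chapter's implementation: it uses the Haar wavelet (the simplest Daubechies member, db1), so `h = [1/√2, 1/√2]` and `g = [1/√2, −1/√2]`, and linear interpolation to stretch the half-length coefficients back to the original signal length.

```python
import numpy as np

def haar_dwt_level(a_prev):
    """One decomposition level with Haar filters:
    approximation = low-pass (Eq. 4), detail = high-pass (Eq. 5)."""
    s = np.sqrt(2.0)
    even, odd = a_prev[0::2], a_prev[1::2]
    approx = (even + odd) / s   # smoothed version of the input
    detail = (even - odd) / s   # difference information lost by smoothing
    return approx, detail

def stretch(coeffs, length):
    """Linearly interpolate coefficients back to the original signal length,
    as the chapter does to align them with x(n)."""
    x_old = np.linspace(0.0, 1.0, num=len(coeffs))
    x_new = np.linspace(0.0, 1.0, num=length)
    return np.interp(x_new, x_old, coeffs)

# Toy gait-like signal (hypothetical): a slow oscillation plus noise.
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 4 * np.pi, 64)) + 0.1 * rng.standard_normal(64)

a1, d1 = haar_dwt_level(x)      # level-1 coefficients, half the length of x
a1_full = stretch(a1, len(x))   # resized to the length of x

# Haar is orthogonal, so one level reconstructs the signal exactly:
x_rec = np.empty_like(x)
x_rec[0::2] = (a1 + d1) / np.sqrt(2.0)
x_rec[1::2] = (a1 - d1) / np.sqrt(2.0)
```

Deeper levels simply reapply `haar_dwt_level` to the approximation, which is how the multilevel decomposition of Eq. (1) is built in practice.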

**4. Capture device**

Based on criteria provided by clinical experts, the space selected to record the gait signals with the Microsoft Kinect was a corridor 1.5 m wide by 4 m long. Each volunteer walked through the selected space three times. The Kinect represents the joints of a basic human shape with 20 points; three of these points were used (the ankle, the wrist, and the spine base) because they occupy the same positions as in the standard anthropometric model used in the benchmark data [24, 26]. To obtain the distance between the Kinect and the subject, we use our eMotion Capture software, which provides the distance to each joint in meters. In the preliminary review [26], we obtained results suggesting that the ankle trajectory is accurate for gait tracking. The clinical space settings are shown in **Figure 1**. The acceptable capture area was restricted to a distance of 1.5–3.5 m from the camera, which made it possible to record at least one full gait cycle during each walking test.

**Figure 1.** *Graphic interface from eMotion Capture software and acceptable capture area.*
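Restricting recordings to the acceptable capture area amounts to discarding skeleton frames whose reference joint falls outside the 1.5–3.5 m band. A minimal sketch follows; the frame layout and joint name are assumptions for illustration, not the eMotion Capture API:

```python
# Hypothetical frame format: one (x, y, z) tuple per tracked joint, in meters,
# with z the distance along the Kinect's optical axis.
MIN_RANGE_M, MAX_RANGE_M = 1.5, 3.5

def in_capture_area(frame, key_joint="SpineBase"):
    """Keep a frame only if the reference joint lies inside the 1.5-3.5 m band."""
    x, y, z = frame[key_joint]
    return MIN_RANGE_M <= z <= MAX_RANGE_M

frames = [
    {"SpineBase": (0.1, 0.9, 1.2)},   # too close to the sensor
    {"SpineBase": (0.0, 0.9, 2.4)},   # inside the acceptable area
    {"SpineBase": (0.2, 0.8, 3.9)},   # too far
]
valid = [f for f in frames if in_capture_area(f)]
```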

**Figure 2.** *Signals obtained from Kinect. The first image shows the spine base movement, the second shows the movement of the left and right ankles, and the third shows the movement of the left and right wrists.*

This software allows us to obtain a representation of the distance between the person and the Kinect, for each joint of interest, at each instant of time. **Figure 2** shows a representation of the movement of the base of the spine, the left and right ankles, and the left and right wrists, respectively.
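Extracting a distance-versus-time curve per joint, as plotted in Figure 2, can be sketched as follows. The joint names and frame layout are assumptions for illustration (the Kinect SDK tracks 20 such points); this is not the eMotion Capture code:

```python
import math

# Hypothetical recording: a list of skeleton frames, each mapping joint names
# to (x, y, z) positions in meters relative to the sensor.
JOINTS_OF_INTEREST = ["SpineBase", "AnkleLeft", "AnkleRight",
                      "WristLeft", "WristRight"]

def distance_to_sensor(position):
    """Euclidean distance from the Kinect (at the origin) to a joint."""
    x, y, z = position
    return math.sqrt(x * x + y * y + z * z)

def joint_series(frames, joint):
    """Distance-versus-time series for one joint across all frames."""
    return [distance_to_sensor(f[joint]) for f in frames]

# Three synthetic frames of a subject moving away from the sensor.
frames = [
    {j: (0.0, 1.0, 2.0 + 0.1 * t) for j in JOINTS_OF_INTEREST}
    for t in range(3)
]
series = {j: joint_series(frames, j) for j in JOINTS_OF_INTEREST}
```

Each series in `series` is one curve of the kind shown in Figure 2, ready to be passed to the wavelet decomposition of Section 3.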
