when the principal components above the noise floor are only held. The number of the principal components above the noise floor is the optimal embedding dimension. Besides, the new coordinate system corresponding to the principal axes can eliminate the noise floor to reduce the noise from the data. However, the truncated position of the principal components depends on the signal-to-noise ratio, especially for the measurement noise. The principal components of a chaotic time series based on the SVD spectrum are more easily subject to the measurement noise, so that the embedding dimension estimation is directly affected. For smaller noise, there are more principal components above the noise floor; for larger noise, the number of the corresponding principal components is reduced. Here, the above calculation accuracy is 2.2204e-016, which does not consider the numerical calculation error.

#### *3.5.3. Influence of sampling interval on the principal component spectrum of chaotic time series[49]*

The Lorenz chaotic system is considered to give the state variable *x* in order to study the influence of the sampling interval on the principal component spectrum. The principal component spectrums slant and have no floor for the chaotic time series *x* with τ=0.005 (see Fig. 16a). When τ=0.1 (see Fig. 16b), the principal component spectrums are basically similar.

**Figure 16.** The principal component analysis of Gaussian noise and Lorenz chaos time series by different sampling intervals based on SVD, *d*=3 : 2 : 23, abscissa is *d*, ordinate is log(σ_i/tr(σ)): a. the principal components of Lorenz chaos time series, τ=0.005; b. the principal components of Lorenz chaos time series, τ=0.1; c. the principal components of Lorenz chaos time series, τ=1; d. the principal components of noise, *n*=10000.

## **3.6. Symplectic principal component analysis**

The symplectic geometry is a kind of phase space geometry. Its nature is nonlinear. It can describe the system structure, especially nonlinear structure, very well. It has been used to study various nonlinear dynamical systems[50-52] since Feng Kang[53] proposed a symplectic algorithm for solving symplectic differential equations. However, from the view of data analysis, few studies have employed symplectic geometry theory to explore the dynamics of a system. Our previous works have proposed the estimation of the embedding dimension based on symplectic geometry from a time series[49, 54-56]. Subsequently, Niu et al. have used our method to evaluate sprinters' surface EMG signals[57]. Xie et al.[58] have proposed a kind of symplectic geometry spectra based on our work. Furthermore, we have shown that SPCA can well represent chaotic time series and reduce noise in chaotic data[59, 60].

In SPCA, a fundamental step is to build the multidimensional structure (attractor) in symplectic geometry space. Here, in terms of Takens' embedding theorem, we first construct an attractor in phase space, i.e. the trajectory matrix *X*, from a time series. That is, for measured data (the observable of the system under study) *x*1, *x*2, ..., *xn* recorded with sampling interval *ts*, the corresponding *d*-dimension reconstruction attractor *Xm*×*<sup>d</sup>* can be given (refer to Eq. 40). Then we describe the symplectic principal component analysis (SPCA) based on symplectic geometry theory and give its corresponding algorithm.
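The construction of the trajectory matrix can be sketched as follows; the helper name, the delay parameter, and the sine test series are illustrative choices of ours (not from the chapter), and with delay = 1 the row count m = n − d + 1 matches the convention used later in Section 3.7.1.

```python
import numpy as np

def trajectory_matrix(x, d, delay=1):
    """Build the d-dimensional trajectory matrix X (m x d) from a scalar
    time series x by delay embedding. With delay=1 the number of rows is
    m = n - d + 1."""
    x = np.asarray(x, dtype=float)
    m = len(x) - (d - 1) * delay
    if m <= 0:
        raise ValueError("time series too short for this embedding")
    return np.column_stack([x[i * delay: i * delay + m] for i in range(d)])

# Example: embed a short sine series with d = 8
x = np.sin(0.05 * np.arange(1000))
X = trajectory_matrix(x, d=8)
print(X.shape)  # (993, 8)
```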

## **3.7. Symplectic principal component method**

SPCA is a kind of PCA approach based on symplectic geometry. Its idea is to map the investigated complex system into symplectic space and elucidate the dominant features underlying the measured data. The first few larger components capture the main relationship between the variables in symplectic space. The remaining components are composed of the less important components or noise in the measured data. The geometry used in symplectic space is called symplectic geometry. Different from Euclidean geometry, symplectic geometry is an even-dimensional geometry with a special symplectic structure. It depends on a bilinear antisymmetric nonsingular cross product, the symplectic cross product:


$$[\mathbf{x}, \mathbf{y}] = \langle \mathbf{x}, J\mathbf{y} \rangle \tag{53}$$


where

$$J = J_{2n} = \begin{bmatrix} 0 & +I_n \\ -I_n & 0 \end{bmatrix} \tag{54}$$

When *n* = 1, **x** = [*x*1, *x*2], **y** = [*y*1, *y*2],

$$J = \begin{bmatrix} \mathbf{0} & \mathbf{1} \\ -\mathbf{1} & \mathbf{0} \end{bmatrix} \tag{55}$$

$$\begin{aligned} [\mathbf{x}, \mathbf{y}] &= \begin{bmatrix} x_1 & x_2 \end{bmatrix} J \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{vmatrix} x_1 & y_1 \\ x_2 & y_2 \end{vmatrix} = x_1 y_2 - x_2 y_1 \end{aligned} \tag{56}$$
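For *n* = 1 the symplectic cross product is just the signed area of Eq. 56, which a two-line check makes concrete (the vectors below are arbitrary illustrative values):

```python
import numpy as np

# [x, y] = x J y' reduces to the signed area x1*y2 - x2*y1 (Eq. 56)
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
x = np.array([2.0, 3.0])
y = np.array([5.0, 7.0])

print(x @ J @ y)                   # -1.0
print(x[0] * y[1] - x[1] * y[0])   # -1.0, the same signed area
```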

The measure of symplectic space is an area scale. In symplectic space, the length of an arbitrary vector always equals zero and has no significance, but there is the concept of an orthogonal cross-course. In symplectic geometry, the symplectic transform is a nonlinear transform in essence, which is also called a canonical transform; since it has measure-preserving characteristics, it can keep the natural properties of the original data unchanged. It is fit for nonlinear dynamics systems.

The symplectic principal components are given by a symplectic similarity transform. It is similar to SVD-based PCA. The corresponding eigenvalues can be obtained by the symplectic *QR* method. Here, we first construct the autocorrelation matrix *Ad*×*<sup>d</sup>* of the trajectory matrix *Xm*×*<sup>d</sup>*. Then the matrix *A* can be transformed into a Hamilton matrix *M* in symplectic space.

**Definition 1** Let *S* be a matrix; if $JSJ^{-1} = S^{-*}$, then *S* is a symplectic matrix.

**Definition 2** Let *H* be a matrix; if $JHJ^{-1} = -H^{*}$, then *H* is a Hamilton matrix.

**Theorem 1** Any *d*×*d* matrix can be made into a Hamilton matrix. Let a matrix be *A*; then $\begin{bmatrix} A & 0 \\ 0 & -A^{T} \end{bmatrix}$ is a Hamilton matrix. (Proof refers to appendix A)

**Theorem 2** A Hamilton matrix *M* remains a Hamilton matrix under a symplectic similarity transform. (Proof refers to appendix A)

**Theorem 3** Let $M \in C^{2d \times 2d}$ be a Hamilton matrix; then $e^{M}$ is a symplectic matrix.

**Theorem 4** Let $S \in C^{2d \times 2d}$ be a symplectic matrix; then *S* = *QR*, where *Q* is a symplectic unitary matrix and *R* is an upper triangular matrix.

**Theorem 5** The product of symplectic matrices is also a symplectic matrix. (Proof refers to appendix A)

**Theorem 6** Suppose the Householder matrix *H* is:


$$H = H(k, \omega) = \begin{pmatrix} P & 0 \\ 0 & P \end{pmatrix} \tag{57}$$

where $P = I_n - 2\dfrac{\omega\omega^{*}}{\omega^{*}\omega}$, $\omega = (0, \cdots, 0; \omega_k, \cdots, \omega_d)^{T} \neq 0$,

so *H* is a symplectic unitary matrix, where $\omega^{*}$ is the conjugate transposition of $\omega$. (Proof refers to appendix A)
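A short numerical check, with illustrative dimensions and random matrices of our own choosing, of the properties stated in Definition 2, Theorem 1 and Theorem 6 (real case, so the conjugate transpose reduces to the transpose):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
A = rng.standard_normal((d, d))

# J for the 2d-dimensional space (Eq. 54)
J = np.block([[np.zeros((d, d)), np.eye(d)],
              [-np.eye(d), np.zeros((d, d))]])

# Theorem 1: M = [[A, 0], [0, -A^T]] is a Hamilton matrix,
# i.e. J M J^{-1} = -M^T (Definition 2, real case).
M = np.block([[A, np.zeros((d, d))],
              [np.zeros((d, d)), -A.T]])
print(np.allclose(J @ M @ np.linalg.inv(J), -M.T))  # True

# Theorem 6: H = diag(P, P) with a Householder reflector P built from an
# omega vector with leading zeros is symplectic (and orthogonal): H^T J H = J.
w = np.zeros(d)
w[1:] = rng.standard_normal(d - 1)
P = np.eye(d) - 2.0 * np.outer(w, w) / (w @ w)
H = np.block([[P, np.zeros((d, d))], [np.zeros((d, d)), P]])
print(np.allclose(H.T @ J @ H, J))                   # True
```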

For a Hamilton matrix *M*, its eigenvalues can be given by a symplectic similarity transform, and the primary 2*d*-dimension space can be transformed into a *d*-dimension space to resolve them[17-19], as follows:

i. Let $N = M^{2}$,


$$M^{\,2} = \begin{bmatrix} A^T & G \\ F & -A \end{bmatrix}^2 \tag{58}$$

ii. Construct a symplectic matrix *Q*,

$$Q^{T} N Q = \begin{bmatrix} B & R \\ 0 & B^{T} \end{bmatrix} \tag{59}$$

where *B* is an upper Hessenberg matrix (*bij*=0, *i*>*j*+1). The matrix *Q* may be a symplectic Householder matrix *H*. If the matrix *M* is a real symmetric matrix, *M* can be considered as *N*. Then one can get an upper Hessenberg matrix (refer to Eq. 13), namely,

$$\begin{aligned} HMH' &= \begin{pmatrix} P & 0 \\ 0 & P \end{pmatrix} \begin{pmatrix} A & 0 \\ 0 & -A' \end{pmatrix} \begin{pmatrix} P & 0 \\ 0 & P \end{pmatrix} \\ &= \begin{pmatrix} PAP' & 0 \\ 0 & -PA'P' \end{pmatrix} \\ &= \begin{pmatrix} B & 0 \\ 0 & -B' \end{pmatrix} \end{aligned} \tag{60}$$

where *H* is the symplectic Householder matrix.

iii. Calculate the eigenvalues $\lambda(B) = \{\mu_1, \mu_2, \cdots, \mu_d\}$ by using the symplectic *QR* decomposition method; if *M* is a real symmetric matrix, the eigenvalues of *A* are equal to those of *B*:

$$\mu = \lambda(B) = \lambda(A) \tag{61}$$

$$\lambda(A) = \sigma^{2}(X) \tag{62}$$

iv. These eigenvalues $\boldsymbol{\mu} = \{\mu_1, \mu_2, \cdots, \mu_d\}$ are sorted in descending order, that is

$$\mu_1 > \mu_2 > \cdots > \mu_k > \mu_{k+1} \ge \cdots \ge \mu_d \tag{63}$$

Thus the calculation in the 2*d*-dimension space is transformed into that in the *d*-dimension space. The *μ* values are the symplectic principal component spectrum of *A* with the relevant symplectic orthonormal bases. In the so-called noise floor, the values $\mu_i$, *i* = *k*+1, ..., *d*, reflect the noise level in the data[49, 55]. The corresponding matrix *Q* denotes the symplectic eigenvectors of *A*.
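The following check (illustrative data and variable names of our own) shows the simplification available when *A* is real symmetric: the spectrum *μ* can be obtained from an ordinary symmetric eigendecomposition, and it agrees with the squared singular values of the centred trajectory matrix, as in Eq. 62:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 8))        # an m x d trajectory matrix (illustrative)
Xc = X - X.mean(axis=0)                  # centred data

# A = Xc' Xc is real symmetric, so mu = lambda(A) sorted in descending
# order (Eqs. 61 and 63).
A = Xc.T @ Xc
mu = np.sort(np.linalg.eigvalsh(A))[::-1]

# Eq. 62: lambda(A) equals the squared singular values of the centred X.
sv = np.linalg.svd(Xc, compute_uv=False)
print(np.allclose(mu, sv ** 2))          # True
```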

#### *3.7.1. Proposed algorithm of symplectic principal component method*

For measured data *x*1, *x*2, ..., *xn*, our proposed algorithm consists of the following steps:

1. Reconstruct the attractor *Xm*×*<sup>d</sup>* from the measured time series, where *d* is the embedding dimension of the matrix *X*, and *m* = *n*-*d*+1. Here, *d* should be larger than the dimension of the system in terms of Takens' embedding theorem.
2. Remove the mean values *Xmean* of each row of the matrix *X*.
3. Build the real *d*×*d* symmetric matrix *A*, that is,

$$A = (X - X_{mean})'(X - X_{mean}) \tag{64}$$

4. Calculate the symplectic principal components of the matrix *A* by *QR* decomposition, and choose the Householder matrix *H* instead of the transform matrix *Q*. It is easy to prove that *H* is a symplectic unitary matrix (proof refers to appendix A) and that *H* can be constructed from a real matrix (refer to appendix B).
5. Construct the corresponding principal eigenvalue matrix *W* according to the number *k* of the chosen symplectic principal components of the matrix *A*, where *W* ⊆ *Q*. That is, when *k*=*d*, *W*=*Q*; otherwise *W*⊂*Q*. In use, *k* can be chosen according to Eq. 63.
6. Get the transformed coefficients *S* = {*S*1, *S*2, …, *Sm*}, where

$$S_i = W'X_i, \quad i = 1, \cdots, m \tag{65}$$

7. Reestimate the *Xs* from *S*,

$$X_{si} = W S_i \tag{66}$$

Then the reestimation data *xs*1, *xs*2, ..., *xsm* can be given.
8. For a noisy time series, the first estimation of the data is usually not good. Here, one can go back to step (6) and let *Xi* = *Xs* in Eq. (65) to do steps (6) and (7) again. Generally, the second estimated data will be better than the first estimated data.

Besides, it is necessary to note that for a clean time series, step (8) need not be performed. A minimal code sketch of these steps is given below.
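The sketch below is one way to realize steps (1)–(8) in Python/NumPy; it is illustrative rather than the chapter's reference implementation. Because *A* is real symmetric, the symplectic *QR* route is replaced here by an ordinary symmetric eigendecomposition (the text notes the two spectra coincide in this case); centring by column means, adding the mean back after Eq. 66, reading the re-estimated series off the first column of *Xs*, and recomputing *W* on the once-denoised matrix for step (8) are our own simplifying choices.

```python
import numpy as np

def spca_estimate(x, d=8, k=3, passes=1):
    """Minimal sketch of the SPCA algorithm of Section 3.7.1.

    (1) trajectory matrix X (m x d, m = n-d+1); (2) remove the mean;
    (3) A = Xc' Xc (Eq. 64); (4) eigen-decompose A (stand-in for the
    symplectic QR step, valid for real symmetric A); (5) keep the k leading
    eigenvectors W; (6) S_i = W' X_i (Eq. 65); (7) X_s = W S_i (Eq. 66);
    (8) optionally repeat once for noisy data.
    """
    x = np.asarray(x, dtype=float)
    m = len(x) - d + 1
    X = np.column_stack([x[i:i + m] for i in range(d)])      # step 1
    for _ in range(max(1, passes)):                          # step 8 repeats 2-7
        mean = X.mean(axis=0)
        Xc = X - mean                                        # step 2 (column means)
        A = Xc.T @ Xc                                        # step 3, Eq. 64
        evals, evecs = np.linalg.eigh(A)                     # step 4
        order = np.argsort(evals)[::-1]
        W = evecs[:, order[:k]]                              # step 5
        S = Xc @ W                                           # step 6, Eq. 65
        X = S @ W.T + mean                                   # step 7, Eq. 66 (+ mean)
    return X[:, 0]                                           # re-estimated series

# Example: denoise a noisy sine wave (a stand-in for the chaotic Lorenz data)
t = np.arange(2000) * 0.01
x_noisy = np.sin(2 * np.pi * t) + 0.3 * np.random.default_rng(2).standard_normal(t.size)
x_hat = spca_estimate(x_noisy, d=8, k=1, passes=2)
print(x_hat.shape)   # (1993,)
```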

#### *3.7.2. Performance evaluation*

SPCA, like PCA, can not only represent the original data by capturing the relationship between the variables, but also reduce the contribution of errors in the original data. Here, the performance analysis of SPCA is studied from the two views, i.e. representation of chaotic signals and noise reduction in chaotic signals.

#### *Representation of chaotic signals*


We first show that for a clean chaotic time series, SPCA can perfectly reconstruct the original data in a high-dimensional space. We first embed the original time series into a phase space. Considering that the dimension of the Lorenz system (see Eq. 30) is 3, *d* of the matrix *A* is chosen as 8 in our SPCA analysis. To quantify the difference between the original data and the SPCA-filtered data, we employ the root-mean-square error (RMSE) as a measure:

$$RMSE = \sqrt{\frac{1}{N} \sum\_{i=1}^{N} [\mathbf{x}(i) - \hat{\mathbf{x}}(i)]^2} \tag{67}$$

where *x*(*i*) and *x*ˆ(*i*) are the original data and estimated data, respectively.
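Eq. 67 translates directly into a short helper; the function name and test values are our own.

```python
import numpy as np

def rmse(x, x_hat):
    """Root-mean-square error between original and estimated data (Eq. 67)."""
    x = np.asarray(x, dtype=float)
    x_hat = np.asarray(x_hat, dtype=float)
    return np.sqrt(np.mean((x - x_hat) ** 2))

print(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 3.3]))  # about 0.1732
```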

When *k* = *d*, the RMSE values are lower than 10<sup>-14</sup> (see Figure 17). In Figure 17, the original data are generated by Eq. 30. The estimated data are obtained by SPCA with *k*=*d*. The results show that the SPCA method is better than the PCA. Since the real systems are usually unknown, it is necessary to study the effect of sampling time, data length, and noise on the SPCA approach. From Figures 17 and 18, we can see that the sampling time and data length have less effect on the SPCA method in the noise-free case.

**Figure 17.** (Color online) RMSE vs. Sampling time curves for the SPCA and PCA.

**Figure 18.** (Color online) RMSE vs. data length curves for the SPCA and PCA.

For analyzing noisy data, we use the percentage of principal components (PCs) to study the occupancy rate of each PC in order to reduce noise. The percentage of PCs is defined by

$$P\_i = \frac{\mu\_i}{\sum\_{i=1}^d \mu\_i} \times 100\% \tag{68}$$


where *d* is the embedding dimension and $\mu_i$ is the *i*-th principal component value. From Figure 19, we find that the first largest symplectic principal component (SPC) of the SPCA is a little larger than that of the PCA. It possesses almost all of the proportion of the symplectic principal components. This shows that it is feasible for the SPCA to study the principal component analysis of time series.
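A minimal numerical form of Eq. 68, with illustrative spectrum values and our own function name, may help when reproducing curves like those in Figure 19:

```python
import numpy as np

def pc_percentages(mu):
    """Occupancy rate of each principal component (Eq. 68)."""
    mu = np.asarray(mu, dtype=float)
    return 100.0 * mu / mu.sum()

print(pc_percentages([9.0, 0.6, 0.3, 0.1]))  # [90.  6.  3.  1.]
```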

**Figure 19.** (Color online) The percentage of principal components for the SPCA and PCA.

Next, we study the reduced space spanned by a few largest symplectic principal components (SPCs) to estimate the chaotic Lorenz time series (see Fig. 20). In Figure 20, the data *x* is given with a sampling time of 0.01 from the chaotic Lorenz system. The estimated data is calculated from the first three largest SPCs. The average error and standard deviation between the original data and the estimated data are -6.55e-16 and 1.03e-2, respectively. The estimated data is very close to the original data not only in the time domain (see Figure 20a) but also in phase space (see Figure 20b). We further explore the effect of sampling time for different numbers of PCs. When the PCs number *k* =1 and *k* =7, respectively, the SPCA and PCA give the change of RMSE values with the sampling time in Figure 21. We can see that the RMSE values of the SPCA are smaller than those of the PCA. The sampling time has less impact on the SPCA than the PCA. In the case of *k* = 7, the data length also has less effect on the SPCA than the PCA (see Fig. 22).

**Figure 20.** (Colour online) Chaotic signal reconstructed by the proposed SPCA algorithm with *k*=3, where (a) the time series of the original Lorenz data *x* without noise and the estimated data; (b) phase diagrams with L =11 for the original Lorenz data *x* without noise and the estimated data. The sampling time *ts* = 0.01.

Compared with PCA, the results of SPCA are better in the above figures. We can see that the SPCA method keeps the essential dynamical character of the primary time series generated by chaotic continuous systems. These results indicate that the SPCA can reflect intrinsic nonlinear characteristics of the original time series. Moreover, the SPCA can elucidate the dominant features underlying the observed data. This will help to retrieve dominant patterns from the noisy data. For this, we study the feasibility of the proposed algorithm to reduce noise by using the noisy chaotic Lorenz data.

**Figure 21.** The RMSE values vs. the sampling time for the SPCA and PCA, where (a) the PCs number *k* =7; (b) *k* =1.

**Figure 22.** The RMSE vs. the data length for the SPCA and PCA, where k =7. The sampling time is 0.1.

#### *Noise reduction in chaotic signals*


For the noisy Lorenz data *x*, the phase diagrams of the noisy and clean data are given in Figure 23a and 23b. The clean data is the noise-free chaotic Lorenz data *x* (see Eq. 30).


The noisy data is the chaotic Lorenz data *x* with Gaussian white noise of zero mean and unit variance (see Eq. 30). The sampling time is 0.01. The time delay *L* is 11 in Figure 23. It is obvious that the noise is very strong. The first denoised data is obtained in terms of the proposed SPCA algorithm (see Figure 23c-f). Here, we first build an attractor *X* with an embedding dimension of 8. Then the transform matrix *W* is constructed with *k*=1. The first denoised data is generated by Eqs. (65) and (66). In Figure 23c, the first denoised data is compared with the noisy Lorenz data *x* in the time domain. Figure 23d shows the corresponding phase diagram of the first denoised data. Compared with Fig. 23a, the first denoised data can basically give the structure of the original system. In order to obtain better results, the noise in this denoised data is reduced again by step (8). We can see that after the second noise reduction, the results are greatly improved in Fig. 23e and 23f, respectively. The curves of the second denoised data are better than those of the first denoised data, whether in the time domain or in phase space, by contrast with Fig. 23c and 23d. Figure 23g shows the first denoised result given by the PCA technique. Following our algorithm, the first denoised data is dealt with again by the PCA (see Figure 23h). Some of the noise has been further reduced, but the curve of PCA is not better than that of SPCA in Figure 23e. The reason is that the PCA is indeed a linear method. When nonlinear structures have to be considered, it can be misleading, especially in the case of a large sampling time (see Figure 24). The program code used for the PCA comes from the TISEAN tools (http://www.mpipks-dresden.mpg.de/~tisean).

**Figure 23.** The noise reduction analysis of the proposed SPCA algorithm and PCA for the noisy Lorenz time series, where L=11.

Figure 24 shows the variation of the correlation dimension *D*2 with the embedding dimension *d* at a sampling time of 0.1 for the clean, noisy, and denoised Lorenz data. We can observe that for the clean and SPCA-denoised data, the curves tend to become smooth in the vicinity of 2. For the noisy data, the curve is constantly increasing and has no platform. For the PCA-denoised data, the curve is also increasing and tends to a platform near 2. However, this platform is smaller than that of SPCA, so the PCA is less effective than the SPCA algorithm. This indicates that it is difficult for the PCA to describe the nonlinear structure of a system, because the correlation dimension *D*2 manifests nonlinear properties of chaotic systems. Here, the correlation dimension *D*2 is estimated by the Grassberger-Procaccia algorithm[33, 40].

**Figure 24.** (Color online) *D*2 vs. embedding dimension *d*.


### *3.7.3. Estimation of embedding dimension based on symplectic geometry*

In terms of Eq. 63, the values of μ*i*, *i*=*k*+1, ..., *d*, are far smaller than μ*<sup>k</sup>*. These values form a noise floor. Therefore, the embedding dimension of the reconstruction system can be determined by the noise floor. Here, the noise and nonlinear time series are used to investigate the feasibility of the embedding dimension estimation based on symplectic geometry.
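As an illustration of this criterion, the sketch below computes the normalised spectrum log(μ_i/tr(μ)) for a given *d* and counts the components above the floor. The real-symmetric shortcut via an ordinary eigendecomposition follows Section 3.7, while the flattening threshold `flat_tol`, the function names, and the logistic-map test series are our own illustrative choices (the chapter reads the floor position off the plots).

```python
import numpy as np

def sg_spectrum(x, d):
    """Normalised spectrum log(mu_i / tr(mu)) of the trajectory matrix for one d."""
    x = np.asarray(x, dtype=float)
    m = len(x) - d + 1
    X = np.column_stack([x[i:i + m] for i in range(d)])
    Xc = X - X.mean(axis=0)
    mu = np.sort(np.linalg.eigvalsh(Xc.T @ Xc))[::-1]
    mu = np.clip(mu, np.finfo(float).tiny, None)   # guard against round-off
    return np.log(mu / mu.sum())

def embedding_dimension(x, d=23, flat_tol=0.05):
    """Count the spectrum values above the noise floor; the floor is taken to
    start where successive values stop dropping (an illustrative heuristic)."""
    s = sg_spectrum(x, d)
    drops = -np.diff(s)                      # decrease from one value to the next
    flat = np.where(drops < flat_tol)[0]     # steps that have flattened out
    return int(max(1, flat[0])) if flat.size else d

# Logistic-map series (the chapter's test signal); the estimate depends on flat_tol.
x = np.empty(1000)
x[0] = 0.4
for i in range(999):
    x[i + 1] = 4.0 * x[i] * (1.0 - x[i])
print(embedding_dimension(x, d=23))
```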


For noise (which is generally regarded as Gauss white noise with mean value 0 and variance 1 in practical systems), symplectic geometry spectrums of this noise give the even distribution of its total energy (see Fig. 25a). From this figure, we can see that the symplectic geometry spectrums of noise can reflect the characteristic of noise very well when *N*=1000. This shows SG method can reflect noise level in the condition of short data length. For the time series of state variable *x* in Logistic chaos system without noise interference, the symplectic geometry spectrums (see Fig. 25b) are slant in the beginning then turn into plane area with the increase of index *i*. In other words, the distribution of total energy on the different axes is obviously different and with increasing the embedding dimension, the slants of symplectic geometry spectrums transit into noise floor. So one can determine embedding dimension from the number of symplectic geometry spectrums over noise floor, in which its determining criterion is similar to that in [37]. From Fig. 25b, the embedding dimension of Logistic chaotic time series can be estimated at 4 because the symplectic geometry spectrums begin to turn into noise floor at index 5. In a similar way, for Lorenz chaos time series without noise, when sampling interval τ=0.005, the embedding dimension can be estimated at 6 (see Fig. 25c).

Comparison of the results of our method (see Fig. 25b and 25c) with the results of the SVD method (see Fig. 25d and 25e) shows that in the SG method, the position of the noise floor is determined by the intrinsic dynamical structure of the nonlinear dynamic system rather than by the numerical accuracy of the input data and the computation precision, whereas in the SVD method the noise floor is determined the other way round[8, 37, 61].

In a word, the numerical experiments show that for nonlinear dynamic systems, the SG method can give the appropriate embedding dimension from their time series but the SVD method cannot. So the SG method is fit to deal with nonlinear systems.

#### *3.7.4. Robustness of the embedding dimension estimation based on symplectic geometry*

It is well known that most recent methods for estimating the embedding dimension are more or less subjective, or are affected by changes of the data length, noise, time lag, or sampling time, etc. Here, it is necessary to study the robustness of the SG method.

#### *The effect of data length*

In order to avoid the effect of the characteristics of the nonlinear system, this paper only considers and uses the noise to analyze the effect of data length. For Gauss white noise with mean value 0 and variance 1, when *N*=1000, the SG method can give better results than the SVD method (see Fig. 26a) because the total energy is distributed equably (see Fig. 25a).


**Figure 25.** The study of embedding dimension based on the symplectic geometry algorithm, *N*=1000, *d*=3, 8, 13, 18, 23, abscissa is *d*, ordinate is log(μ_i/tr(μ)): a. the symplectic geometry spectrums of Gauss white noise with mean value 0 and variance 1; b. the symplectic geometry spectrums of Logistic chaotic series with no noise; c. the symplectic geometry spectrums of Lorenz chaotic series with no noise, τ=0.005; d. the SVD principal components of Logistic chaotic series with no noise; e. the SVD principal components of Lorenz chaotic series with no noise, τ=0.005.

And yet when *N* is rather large, e.g. *N*=10000, the SVD method can just give results (see Fig. 26b) similar to Fig. 25a. These results show that the SG method is more robust to changes of the data length than the SVD method. Thus the SG method is better suited to the analysis of short time series.


**Figure 26.** The analysis of SVD principal components of noise with different data lengths, *d*=3, 8, 13, 18, 23, abscissa is *d*, ordinate is log(μ_i/tr(μ)).

**Figure 27.** The study of symplectic geometry spectrum analysis with different noises, *N*=1000, *d*=3, 8, 13, 18, 23, abscissa is *d*, ordinate is log(μ_i/tr(μ)): a. the symplectic geometry spectrums of Logistic chaos series with interior noise, σ<sup>2</sup>=0.001; b. the symplectic geometry spectrums of Logistic chaos series with exterior noise, σ<sup>2</sup>=0.001; c. the symplectic geometry spectrums of Logistic chaos series with exterior noise, σ<sup>2</sup>=0.8; d. the symplectic geometry spectrums of Lorenz chaos series with exterior noise, σ<sup>2</sup>=0.01.

#### *The effect of noise*

At present, there are many estimators of the appropriate embedding dimension, but it has gradually been realized that such estimators are useful only for low-dimensional noise-free systems; such systems, however, hardly seem to occur in real life. Therefore, this paper studies the robustness of the SG method under noise. A signal obtained from a real system is always contaminated by noise (inner noise and/or outer noise). Although contaminated by inner and/or outer noise, the embedding dimension of the Logistic system can always be identified as 4 by using the SG method, because the noise floor begins at the embedding dimension 5 (see Fig. 27a and 27b). This shows that either inner noise or outer noise has little impact on the symplectic geometry spectrums. On a further increase of the noise, the position of the noise floor is obviously raised (see Figure 27c), but the appropriate embedding dimension 2 can still be obtained. In a similar way, for the Lorenz chaos time series without and with noise, when the sampling interval τ=0.005, the embedding dimension is 6 without noise and 3 with noise, respectively (see Fig. 25c and Fig. 27d). These results show that the SG method is useful for the Lorenz system with noise, too. Meanwhile, we find that the SG method can obtain results similar to the nonlinear high singular spectrum algorithm[62]. Thus, it further shows that the SG method can reflect intrinsic nonlinear characteristics of the raw data.

**Figure 28.** The symplectic geometry spectrum analysis of Lorenz chaos series by different sampling intervals, *N*=1000, *d*=3, 8, 13, 18, 23, abscissa is *d*, ordinate is log(μ_i/tr(μ)): a. the symplectic geometry spectrums, τ=0.1; b. the principal components based on SVD, τ=0.1; c. the symplectic geometry spectrums; d. the symplectic geometry spectrums, τ=1.

#### *The effect of sampling interval*


For the changes of the sampling interval from τ=0.005 to τ=0.1, this paper finds that the embedding dimension can be estimated at 6 from the corresponding symplectic geometry spectrums of the Lorenz chaos time series (see Fig. 25c and Fig. 28a), although the position of the noise floor is constantly driven up. However, in the same condition, the SVD method cannot give the appropriate embedding dimension (see Fig. 25e and 28b); these results are similar to those of the literature[61]. Besides, no matter whether the sampling interval is over-sampling or under-sampling, the SG method can always give an appropriate embedding dimension *d* of the Lorenz chaos time series (see Fig. 28c and 28d), because the correlation dimension *m* of the Lorenz system is 2.07 and, in general, if *d*>*m*, *d* is viable.

### *3.7.5. Analysis of the surface EMG signal based on symplectic geometry*

For the action surface EMG signal (ASEMG) collected from a normal person, the SVD method cannot give its appropriate embedding dimension (see Fig. 29a). The method based on correlation theory can do it but costs much computation time. Here, the SG method can quickly obtain its embedding dimension. Figure 29b shows the symplectic geometry spectrums of the action surface EMG signal. The embedding dimension can be chosen as 6, which is the same as that of the correlation dimension analysis[3]. This further shows that the SG method has strong practicability for small sets of experimental data.

**Figure 29.** The analysis of action surface EMG signal, *d*=3, 8, 13, 18, 23, abscissa is *d*, ordinate is log(μ_i/tr(μ)).

## **4.1. Self-affine fractal[63-65]**

**Definition 1** Let the mapping $S: R^{n} \to R^{n}$. S is defined by

$$S(x) = T(x) + b \tag{69}$$

where *T* is a linear transformation on *R<sup>n</sup>* and *b* is a vector in *R<sup>n</sup>*. Thus, S is a combination of a translation, rotation, dilation and, perhaps, a reflection, called an affine mapping. Unlike similarities, affine mappings contract with differing ratios in different directions.

**Theorem 1** Consider the iterated function system given by the contractions $\{S_1, \cdots, S_m\}$ on $D \subset R^{n}$, so that

$$\left| S_i(x) - S_i(y) \right| \le C_i \left| x - y \right|, \quad \forall x, y \in D \tag{70}$$

with *C<sub>i</sub>* < 1 for each *i*. Then there is a unique attractor *F*, i.e. a non-empty compact set, such that

$$F = \bigcup_{i=1}^{m} S_i(F) \tag{71}$$

Moreover, if we define a transformation S on the class *φ* of non-empty compact sets by

$$S(E) = \bigcup_{i=1}^{m} S_i(E) \tag{72}$$

for *E*∈*φ*, and write *S<sup>k</sup>* for the *k*th iterate of S (so $S^{0}(E) = E$ and $S^{k}(E) = S(S^{k-1}(E))$ for *k* ≥ 1), then

$$F = \bigcap_{k=1}^{\infty} S^{k}(E) \tag{73}$$

for every set *E*∈*φ* such that *Si*(*E*) ⊂ *E* for all *i*.

If an IFS consists of affine contractions $\{S_1, \cdots, S_m\}$ on *R<sup>n</sup>*, the attractor *F* guaranteed by Theorem 1 is termed a self-affine set.

## **4.2. Spectrum analysis[65, 66]**

Let a time series be *x*(*t*), *t* ∈ [0, *T*]. Its spectrum is given by

$$X(f, T) = \int_{0}^{T} x(t)\, e^{2\pi i f t}\, dt \tag{74}$$

where *f* is frequency. The power spectrum of *f* is defined by

$$S(f) = \frac{1}{T} \left| X(f, T) \right|^{2} \tag{75}$$

Since self-affine time series have a power-law dependence of the power-spectral density function on frequency, self-affine time series exhibit long-range persistence. For practical data, one can use the relationship of power spectrum and frequency to determine if the data has the self-affine fractal characteristic.
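A minimal sketch of Eqs. 74 and 75 and of the power-law check described above; the discrete FFT approximation, the function names, and the random-walk test series are our own illustrative choices.

```python
import numpy as np

def power_spectrum(x, dt=1.0):
    """One-sided periodogram S(f) = |X(f, T)|^2 / T (Eqs. 74-75)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    T = n * dt
    X = np.fft.rfft(x) * dt                   # discrete approximation of Eq. 74
    f = np.fft.rfftfreq(n, d=dt)
    return f[1:], (np.abs(X[1:]) ** 2) / T    # drop the zero-frequency bin

def spectral_exponent(x, dt=1.0):
    """Slope beta of log S(f) vs log f; an approximately linear fit with
    S(f) ~ f^(-beta) indicates self-affine, long-range persistent data."""
    f, S = power_spectrum(x, dt)
    slope, _ = np.polyfit(np.log(f), np.log(S), 1)
    return -slope

# Example: a random-walk series has beta close to 2 (illustrative check)
rng = np.random.default_rng(3)
walk = np.cumsum(rng.standard_normal(4096))
print(spectral_exponent(walk))
```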
