**Nonlinear Analysis of Surface EMG Signals**

Min Lei and Guang Meng

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/49986

**1. Introduction** 

The aim of this chapter is to elucidate the essential nature of the surface electromyogram (SEMG) and to explore the potential use of nonlinear analysis as a tool in clinical and biomechanical applications. The technical tools include nonlinear time series tests, time series analysis based on chaos theory, and multifractal analysis.

In Section 2, we discuss two methods of nonlinear time series testing: the surrogate data test and the Volterra-Wiener-Korenberg (VWK) model test. In theory, both methods detect the nonlinearity of the data indirectly. The surrogate data method is applied to the SEMG; the result shows that the SEMG contains deterministic nonlinear components. We also introduce the VWK model test and compare it with the surrogate data method. The VWK method can detect the nonlinearity of the SEMG during muscle fatigue.

In Section 3, we describe time series analysis based on chaos theory. The definition of chaos and the characteristics of chaotic systems are discussed, and the embedding theory underlying attractor reconstruction is investigated for dynamical systems. In view of the fractal structure of a chaotic attractor, the correlation dimension is used to test the chaotic characteristics of the SEMG during arm movements; the largest Lyapunov exponent is also studied. We then investigate the influence of measurement noise, internal noise and sampling interval on the principal components of chaotic time series, and present symplectic principal component analysis. We illustrate the feasibility of this method and give the embedding dimension of the action surface EMG signal.
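A reader unfamiliar with the correlation-dimension test may find a concrete sketch helpful. The following is a minimal illustration of delay-coordinate reconstruction and the Grassberger-Procaccia correlation sum; it is our own illustration, not the chapter's implementation, and the function names and parameter choices are ours.

```python
import numpy as np

def delay_embed(x, m, tau):
    """Reconstruct an m-dimensional attractor from the scalar series x
    using delay vectors (x_t, x_{t+tau}, ..., x_{t+(m-1)*tau})."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

def correlation_sum(y, r):
    """Grassberger-Procaccia correlation sum C(r): the fraction of
    distinct point pairs of the reconstructed attractor closer than r."""
    diff = y[:, None, :] - y[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    iu = np.triu_indices(len(y), k=1)   # distinct pairs only
    return np.mean(dist[iu] < r)
```

For small *r*, C(*r*) scales as *r*<sup>D2</sup>, so the correlation dimension D2 is estimated from the slope of log C(*r*) versus log *r*. For a noise-free limit cycle (e.g. a sampled sine wave) the slope approaches 1, whereas a stochastic signal fills the embedding space and yields a much larger slope.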

In Section 4, the definition and nature of self-affine fractals are described. The relationship between the power spectrum and frequency is used to calculate the self-affine fractal dimension of a time series such as the SEMG. The multifractal dimension of the SEMG is then given.
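As a concrete illustration of the power-spectrum route, the following sketch estimates a self-affine fractal dimension from the log-log slope of the spectrum. It assumes the standard power-law relation P(f) ∝ f<sup>−β</sup> with D = (5 − β)/2 for a self-affine profile (equivalently, β = 2H + 1 and D = 2 − H for fractional-Brownian-motion-like signals); the function name is ours.

```python
import numpy as np

def spectral_fractal_dimension(x, fs=1.0):
    """Estimate the self-affine fractal dimension of a 1-D signal from
    the log-log slope of its power spectrum, assuming P(f) ~ f**(-beta)
    and D = (5 - beta) / 2 for a self-affine profile."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                                  # remove the DC component
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)[1:]   # drop f = 0
    power = np.abs(np.fft.rfft(x))[1:] ** 2
    slope, _ = np.polyfit(np.log(freqs), np.log(power), 1)
    beta = -slope                                     # fitted slope is -beta
    return (5.0 - beta) / 2.0
```

Under this relation, white noise (β ≈ 0) gives D ≈ 2.5 and ordinary Brownian motion (β = 2) gives D = 1.5.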

The conclusion and future research are given in Section 5. It should be noted that this chapter is the result of many years of work. The new methods presented here build on a broad and strong foundation of nonlinear time series analysis and chaotic dynamical systems theory.


*2.1.1. Null hypotheses and algorithms[1]* 

The null hypotheses usually specify certain properties of the original data that reflect structural characteristics of the dynamical system, such as the mean and variance, and possibly also the Fourier power spectrum. Different null hypotheses describe different specific classes of dynamical system. According to the corresponding null hypothesis, surrogate data can be generated so as to test for the corresponding dynamical system class.

**Null hypothesis 1** The observed data is produced by independent and identically distributed (IID) random variables.

For this hypothesis, the corresponding surrogate data can be generated by shuffling the time order of the original time series, so that the surrogate has the same mean, variance and amplitude distribution as the original data, while any temporal correlations of the original data are destroyed. Scheinkman and LeBaron[8] applied this hypothesis to analyze stock market returns. Breeden and Packard also used this hypothesis to demonstrate that a time series of quasar data which was sampled nonuniformly in time has some dynamical structure[9].

The algorithm for this null hypothesis is as follows: one first creates *N* Gaussian random numbers, where *N* is the length of the original data *x*. Then the original data *x* is permuted according to the rank order of the random numbers to generate the surrogate data.

**Null hypothesis 2** The observed data is produced by the Ornstein-Uhlenbeck process.

The surrogate data generated by the Ornstein-Uhlenbeck process is a sequence that has the simplest time correlation. The Ornstein-Uhlenbeck process can be given as follows:

$$x_t = a_0 + a_1 x_{t-1} + \sigma e_t \tag{1}$$

where $e_t$ is a Gaussian random variable with zero mean and unit variance. The coefficients $a_0$, $a_1$, and $\sigma$ work together to determine the mean $\mu$, variance $\gamma$, and autocorrelation time of the time series $x_t$. In this case, its autocorrelation function has an exponential form:

$$A(\tau) = \frac{\langle (x_t - \mu)(x_{t-\tau} - \mu) \rangle}{\gamma} = e^{-\lambda\tau}, \qquad \lambda = -\log a_1 \tag{2}$$

where $\langle \cdot \rangle$ denotes an average over time $t$.

In order to generate surrogate data consistent with this hypothesis, the algorithm is that one first calculates the mean $\mu$, variance $\gamma$, and autocorrelation $A(1)$ (in Eq. (2)) from the original data $x$. Then the coefficients in Eq. (1) can be estimated as $a_1 = A(1)$, $a_0 = (1 - a_1)\mu$, and $\sigma^2 = \gamma(1 - a_1^2)$. The Gaussian $e_t$ can be generated by a pseudorandom number generator. Finally, the surrogate data can be produced by iterating Eq. (1).

**Null hypothesis 3** The observed data is produced by a linear autocorrelated Gaussian process with the mean and variance of the original time series.

This hypothesis has usually been used to test whether the original time series contains nonlinear components. It can be described by using a linear autoregressive (AR) model.
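The two generation recipes of this section — rank-order shuffling for null hypothesis 1, and fitting and iterating the Ornstein-Uhlenbeck model of Eq. (1) for null hypothesis 2 — can be sketched as follows. This is a minimal NumPy sketch for illustration; the function names are ours.

```python
import numpy as np

def shuffle_surrogate(x, rng=None):
    """Null hypothesis 1: permute x by the rank order of Gaussian random
    numbers, preserving the amplitude distribution (hence the mean and
    variance) while destroying all temporal correlations."""
    rng = np.random.default_rng() if rng is None else rng
    order = np.argsort(rng.standard_normal(len(x)))
    return np.asarray(x)[order]

def ou_surrogate(x, rng=None):
    """Null hypothesis 2: estimate a0, a1, sigma of Eq. (1) from the mean,
    variance and lag-1 autocorrelation of x (assumes |a1| < 1), then
    iterate x_t = a0 + a1*x_{t-1} + sigma*e_t with fresh Gaussian e_t."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    mu, gamma = x.mean(), x.var()
    a1 = np.mean((x[1:] - mu) * (x[:-1] - mu)) / gamma   # A(1) of Eq. (2)
    a0 = (1.0 - a1) * mu
    sigma = np.sqrt(gamma * (1.0 - a1 ** 2))
    s = np.empty_like(x)
    s[0] = mu                         # start the iteration at the mean
    for t in range(1, len(x)):
        s[t] = a0 + a1 * s[t - 1] + sigma * rng.standard_normal()
    return s
```

A surrogate test then computes a discriminating statistic (for example, the correlation dimension of Section 3) on the original series and on an ensemble of such surrogates; a significant difference rejects the corresponding null hypothesis.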