

to the analysis of nonlinear dynamics. A valid phase space is any vector space in which the state of the dynamical system can be unequivocally defined at any point [21]. The most common way of reconstructing the full dynamics of the system from scalar time measurements is based on the embedding theorem [21], which justifies the transformation of a time series into an *m*-dimensional multivariate time series. This is done by associating to each *m* successive samples, spaced a certain number *τ* of samples apart, a point in the phase space.
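
The delay-embedding construction just described can be sketched as follows; the function name and example series are illustrative, not from the chapter:

```python
# Time-delay embedding: associate to each m successive samples, spaced
# tau samples apart, one point of the reconstructed phase space.

def embed(x, m, tau):
    """Return the m-dimensional delay embedding of the scalar series x."""
    n_points = len(x) - (m - 1) * tau
    return [tuple(x[i + j * tau] for j in range(m)) for i in range(n_points)]

series = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
points = embed(series, m=3, tau=2)  # 4 three-dimensional points
```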

Several methods and algorithms are currently available to characterize a reconstructed phase space. Thus, two features widely used to emphasize the geometrical properties of the attractor are the *correlation dimension* (CD) and the *correlation entropy* (CorEn). The CD is a measure of the dimensionality of the attractor, i.e., of the organization of points in the phase space. Although there are several algorithms for its estimation, the CD can be computed by first calculating the correlation sum of the time series, which is defined as the fraction of pairs of points in the phase space that are closer than a certain threshold *r* [21]. Then, the CD is defined as the slope of the line fitting the log-log plot of the correlation sum as a function of the threshold. On the other hand, the CorEn is a measure of how fast the distance between two initially nearby states in the phase space grows with time. This can be envisaged by taking a point in the reconstructed phase space, which corresponds to a segment of the time series. Another point in the phase space located close to the first one refers to a different segment of the time series. Thus, the CorEn measures how fast these time segments lose their resemblance when both segments are lengthened.
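
As a sketch of the first step of this estimation, a naive correlation sum (here the fraction of point pairs closer than *r*, using the Euclidean distance; both choices are assumptions) could look like:

```python
import math

def correlation_sum(points, r):
    """Fraction of point pairs whose distance is below the threshold r."""
    n = len(points)
    close = sum(1 for i in range(n) for j in range(i + 1, n)
                if math.dist(points[i], points[j]) < r)
    return 2.0 * close / (n * (n - 1))

# The CD is then estimated as the slope of a least-squares line fitted to
# the (log r, log C(r)) pairs obtained for a range of thresholds r.
```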

*Lyapunov exponents* (LEs) are also habitually found in the literature to characterize the dynamics of trajectories in the phase space. Precisely, these exponents quantify the exponential divergence or convergence of initially close phase space trajectories, and hence the amount of instability or predictability of the process. An *m*-dimensional dynamical system has *m* exponents, but in most applications it is sufficient to compute only the largest LE (LLE), which can be obtained as follows. First, a starting point is selected in the reconstructed phase space and all the points closer to it than a predetermined distance, *ε*, are found. Then, the average value of the distances between the trajectory of the initial point and the trajectories of the neighboring points is calculated as the system evolves. The slope of the line obtained by plotting the logarithms of these average values versus time gives the LLE. To remove the dependence of the calculated values on the starting point, the procedure is repeated for different starting points and the LLE is taken as the average.
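
A minimal sketch of this procedure for a single starting point (the slope would be averaged over many starting points, as the text indicates; names and the toy neighbourhood search are illustrative):

```python
import math

def divergence_curve(points, ref, eps, horizon):
    """Log of the mean distance between the trajectory of points[ref] and
    the trajectories of its initial eps-neighbours, for `horizon` steps.
    The slope of this curve versus the step index estimates the LLE."""
    neighbours = [j for j in range(len(points) - horizon)
                  if j != ref and math.dist(points[j], points[ref]) < eps]
    curve = []
    for k in range(horizon):
        mean_d = sum(math.dist(points[ref + k], points[j + k])
                     for j in neighbours) / len(neighbours)
        curve.append(math.log(mean_d))
    return curve
```

On a toy trajectory whose neighbouring orbits separate exponentially, consecutive points of this curve differ by a constant, which is the exponent being estimated.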

**3.3. Information content quantification**

Symbolic time series analysis involves the transformation of the original time series into a series of discrete symbols that are processed to extract useful information about the state of the system generating the process [20]. The first step of symbolic time series analysis is, hence, the transformation of the time series into a symbolic/binary sequence using a context-dependent symbolization procedure. After symbolization, the next step is the construction of words from the symbol series by collecting groups of symbols together in temporal order. This process typically involves the definition of a finite word-length template that is moved along the symbol series one step at a time, each step revealing a new sequence.

Quantitative measures of word sequence frequencies include statistics of words (word frequency or transition probabilities between words) and information-theoretic measures based on the ShEn of the word distribution.
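
The whole pipeline (symbolization, word construction, word-frequency entropy) can be sketched as follows; the mean-value symbolization threshold and the word length of 3 are illustrative choices, not prescribed by the chapter:

```python
import math
from collections import Counter

def symbolic_entropy(x, word_len=3):
    """Binarise x around its mean, slide a word_len template along the
    symbol sequence, and return the ShEn (in bits) of the word frequencies."""
    mean = sum(x) / len(x)
    symbols = ''.join('1' if v > mean else '0' for v in x)
    words = [symbols[i:i + word_len] for i in range(len(symbols) - word_len + 1)]
    freqs = Counter(words)
    total = len(words)
    return -sum(c / total * math.log2(c / total) for c in freqs.values())
```

A constant series produces a single word and zero entropy, while richer word repertoires drive the value up.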






**3.4. Irregularity quantification**

*Approximate entropy* (ApEn) provides a measure of the degree of irregularity or randomness within a series of data. ApEn assigns a non-negative number to a sequence or time series, with larger values corresponding to greater process randomness or serial irregularity, and smaller values corresponding to more instances of recognizable features or patterns in the data [21]. ApEn measures the logarithmic likelihood that runs of patterns that are close (within a tolerance window *r*) for *m* contiguous observations remain close (within the same tolerance *r*) on the next incremental comparison. The input parameters *m* and *r* must be fixed to calculate ApEn. The method can be applied to relatively short time series, but the amount of data points has an influence on the value of ApEn. This is due to the fact that the algorithm counts each sequence as matching itself to avoid the occurrence of ln(0) in the calculations. The *sample entropy* (SampEn) algorithm excludes self-matches in the analysis and is less dependent on the length of the data series [22].
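
A direct, quadratic-time sketch of SampEn along these lines, using the conventional Chebyshev distance between templates (the parameter defaults are illustrative):

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """SampEn: negative log of the ratio between the number of template
    matches of length m + 1 and of length m, excluding self-matches."""
    def matches(length):
        templates = [x[i:i + length] for i in range(len(x) - m)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    count += 1
        return count
    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float('inf')
```

A perfectly repetitive series yields a SampEn of zero, while an irregular one yields larger values.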

On the other hand, the *multiscale entropy* (MSE) has been developed as a more robust measure of the regularity of physiological time series, which typically exhibit structure over multiple time scales [23]. For its computation, the original time series is divided into non-overlapping windows and the sample mean inside each window is calculated, this set of sample means constituting a new, coarse-grained time series. Repeating the process with window lengths growing from 1 to a certain length *N* gives a set of *N* coarse-grained time series. The MSE is obtained by computing an entropy measure (SampEn is suggested) for each of these series and displaying it as a function of the number of data points inside the window (i.e., of the scale).
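
The coarse-graining step and the resulting entropy profile can be sketched as below; `entropy_fn` stands for whatever entropy measure is chosen (SampEn in the original proposal), and all names are illustrative:

```python
def coarse_grain(x, scale):
    """Replace each non-overlapping window of length `scale` by its mean."""
    n = len(x) // scale
    return [sum(x[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def mse_profile(x, entropy_fn, max_scale):
    """Entropy of the coarse-grained series at scales 1..max_scale."""
    return [entropy_fn(coarse_grain(x, s)) for s in range(1, max_scale + 1)]
```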

Another index that can be used to quantify the regularity of a time series is the *conditional entropy* (CE) [24]. Computed for a time series, this index measures the amount of information carried by its most recent sample which is not explained by the knowledge of a predetermined conditioning vector containing information about the past of the observed multivariate process. The CE can be expressed as the difference between the ShEn calculated for the time series divided into *L*- and (*L* − 1)-sample-length patterns. Thus, this index measures the amount of information obtained when the pattern length is augmented from *L* − 1 to *L*. If a process is periodic (i.e., perfectly predictable) and has been observed for a sufficient time, it will be possible to predict the next samples. Therefore, there will be no increase of information by increasing the pattern length and the CE will go to zero after a certain *L*. Nonetheless, this algorithm requires a corrective term to estimate the CE accurately. The correction is intended to counteract the bias toward a reduction of the CE which occurs when increasing the size of the conditioning vectors, and which depends strongly on the length of the time series [24].
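
On a quantised (symbolic) series, the uncorrected CE can be sketched as this ShEn difference; the corrective term of [24] is omitted here for brevity:

```python
import math
from collections import Counter

def pattern_entropy(symbols, L):
    """ShEn of the distribution of L-sample patterns of the series."""
    patterns = [tuple(symbols[i:i + L]) for i in range(len(symbols) - L + 1)]
    total = len(patterns)
    return -sum(c / total * math.log(c / total)
                for c in Counter(patterns).values())

def conditional_entropy(symbols, L):
    """Information gained when the pattern length grows from L-1 to L."""
    return pattern_entropy(symbols, L) - pattern_entropy(symbols, L - 1)
```

For a periodic series the CE tends to zero as *L* grows, reflecting perfect predictability.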


http://dx.doi.org/10.5772/53407

**4. Atrial activity analysis**



Although the mechanisms of AF are still unclear, several studies have demonstrated that this arrhythmia is associated with the propagation, throughout the atrial tissue, of multiple activation wavelets, resulting in complex, ever-changing patterns of electrical activity [5]. As a consequence, the morphology of the registered *f* waves during AF changes constantly both in time and space, showing different levels of organization, according to a definition of organization as repetitive wave morphologies in the AF signals [19]. Given that different morphologies reflect different activation patterns, such as slow conduction, wave collision, and conduction blocks [26], AF organization analysis plays an important role in understanding the mechanisms responsible for its induction and maintenance. In addition, the analysis of the degree of complexity characterizing the shape of the activation waves could provide useful information to improve AF treatment, which is still unsatisfactory, and contribute to taking appropriate decisions on its management [27].


Since a rigorous definition of organization does not exist, a variety of nonlinear indices have been applied to the AA signal extracted from both surface ECG recordings and intraatrial EGMs to quantify AF pattern dynamics and morphology. In the next subsections, the state of the art in AF organization estimation using nonlinear methods is summarized.

**4.1. Surface organization assessment**

From a clinical point of view, the assessment of AF organization from the standard surface ECG is very interesting, because it can be easily and cheaply obtained [10]. Previous works have shown that structural changes in surface *f* waves reflect variations in intraatrial activity organization [28, 29]. Thus, it has been observed that ECGs acquired during intraatrial organized rhythms present *f* waves with well-defined and repetitive morphology, whereas ECGs recorded during highly disorganized AA with fragmented activations contain surface *f* waves with very dissimilar morphologies [30]. Taking advantage of this finding, several nonlinear indices have been applied to single-lead ECG recordings to estimate the amount of repetitive patterns existing in their extracted AA signal. Leads V1 and II have been most often selected for this purpose, because the atrial signal is largest in these recordings [10]. The first method proposed to estimate non-invasively the temporal organization of AF is based on the application of SampEn to the fundamental waveform of the AA signal, which has been named the main atrial wave (MAW) in the literature [4]. Note that SampEn computation directly from the AA has also been investigated, but an unsuccessful AF organization assessment has been reported by several authors [4]. The presence of ventricular residua and other nuisance signals, together with the sensitivity of SampEn to noise, have been considered the main reasons for this poor result [4]. In contrast, the MAW-SampEn strategy has proven able to reliably reflect the intraatrial fibrillatory activity dynamics [29] and has been validated by successfully predicting a variety of AF organization-dependent events. In this respect, the method has shown a high diagnostic accuracy in predicting paroxysmal AF termination, presenting more regular *f* waves for terminating than for non-terminating episodes [4]. This result is in agreement with the decrease in the number of reentries prior to sinus rhythm (SR) restoration observed in previous invasive studies, where AF termination was achieved by using different therapies [6]. In a similar way, according to the invasive observation that self-sustained AF is associated

It is interesting to remark that a slightly modified version of the CE, the *cross-CE* (CCE), is able to assess the coupling degree between two time series [24]. Synchronization occurs when the interactive dynamics between two signals are repetitive. Along this line, this index computes the amount of information included in the most recent sample of one time series when the past *L*-sample-length pattern of the other series is given. Given that the CCE suffers from the same limitation as the CE, it has to be corrected in the same way.

Finally, another measure proposed to estimate coupling between time series is the *causal entropy* (CauEn) [25]. This index is an asymmetric, time-adaptive, event-based measure of the regularity of the phase- or time-lag with which point *i* fires after point *j*. It is calculated from two components: a non-parametric, time-adaptive estimate of the probability density of the spike time lag between two points *i* and *j* such that *i* follows *j* (and, independently, of the distribution of *j* following *i*), and a cost function estimating the spread and stability of the distribution. Although a variety of alternatives exists to compute this metric, the CauEn can be easily estimated by choosing an event-normalized histogram as the time-adaptive density estimator and the ShEn as the cost function [25].

**3.5. Geometric structure quantification**

A *recurrence plot* (RP) is a visual representation of all the possible distances between the points constituting the phase space of a time series [21]. Whenever the distance between two points is below a certain threshold, there is a recurrence in the dynamics, i.e., the dynamical system has visited a certain area of the phase space multiple times. From this transformation, well suited for the study of short non-stationary signals, many geometric features can be extracted. In this sense, there are four main elements characterizing a RP: isolated points (reflecting stochasticity in the signal), diagonal lines (an index of determinism) and horizontal/vertical lines (reflecting local stationarity in the signal). The combination of these elements creates large-scale and small-scale patterns from which it is possible to compute several features, mainly based on counting the number of points within each element.
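
A minimal sketch of a RP and of its simplest derived feature, the recurrence rate (the fraction of recurrent pairs); the threshold value and the Euclidean distance are assumptions:

```python
import math

def recurrence_matrix(points, r):
    """Binary matrix: 1 where two phase-space points are closer than r."""
    n = len(points)
    return [[1 if math.dist(points[i], points[j]) < r else 0
             for j in range(n)] for i in range(n)]

def recurrence_rate(rp):
    """Fraction of recurrent points in the plot."""
    n = len(rp)
    return sum(map(sum, rp)) / (n * n)
```

Counting points along the diagonal and horizontal/vertical structures of this matrix yields the determinism- and stationarity-related features mentioned above.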

On the other hand, the *Poincaré plot* (PP) is a particular case of phase space representation created by selecting *m* = 2 and *τ* = 1, which corresponds to displaying a generic sample *n* of the time series as a function of the sample *n* − 1 [21]. This is also known as a return map or a Lorenz plot. The main limitation of this technique is that it assumes that a low-dimensional representation of a dynamical attractor is enough to detect relevant features of the dynamics. Despite its simplicity, this transformation has been successfully employed also with high-dimensional systems. The benefit is that, given the low dimensionality, it is possible to easily design and visualize several types of geometric features, which are based on an ellipse fitted to the PP and can be seen as measures of nonlinear autocorrelation. If successive values in the time series are not linearly correlated, there will be a deviation from the identity line that is often properly modeled using an ellipse. The different features involve the centroid of the ellipse, the length of the two axes of the ellipse, the standard deviation in the direction of the identity line (called SD2) and the standard deviation in the direction orthogonal to the identity line (called SD1).
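
The two dispersion features can be sketched directly from successive sample pairs, using the standard rotated-coordinates formulation (a population standard deviation is assumed):

```python
import math

def poincare_sd(x):
    """SD1 (spread orthogonal to the identity line) and SD2 (spread along
    it), computed from the pairs (x[n-1], x[n]) of the Poincare plot."""
    d = [(b - a) / math.sqrt(2) for a, b in zip(x, x[1:])]  # orthogonal axis
    s = [(a + b) / math.sqrt(2) for a, b in zip(x, x[1:])]  # identity axis
    def std(v):
        mu = sum(v) / len(v)
        return math.sqrt(sum((u - mu) ** 2 for u in v) / len(v))
    return std(d), std(s)
```

For a constant series both values vanish, while a strictly alternating series concentrates all its dispersion in SD1, i.e., orthogonally to the identity line.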
