**9.3. Embedding dimension**

The Embedding Theorem makes it possible to reconstruct, from an observed or measured time series, the evolution of the states in phase space, where, for example, the Lyapunov exponents and the fractal dimension can be calculated. It uses the method of delayed coordinates (delay reconstruction). If we have the data series x<sub>1</sub>, x<sub>2</sub>, x<sub>3</sub>, x<sub>4</sub>, …, x<sub>n</sub>, we can form the set of points (x<sub>1</sub>, x<sub>2</sub>, …, x<sub>p</sub>), (x<sub>2</sub>, x<sub>3</sub>, …, x<sub>p+1</sub>), …, (x<sub>i</sub>, x<sub>i+1</sub>, …, x<sub>i+p−1</sub>), … These points determine a trajectory in the space R<sup>p</sup>. The dynamics of the empirical system is represented by the "minimum" dynamics (in a dimensional sense) of this set of points:

If the system is random, the fractal dimension grows as the dimension of the embedding space, p, increases.

If the system is periodic, the fractal dimension grows up to a value k and then remains constant and integer (it is not fractal).

If the system is chaotic, the fractal dimension stabilizes at a certain embedding dimension p. In addition, at least one Lyapunov exponent will be positive.
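The delay reconstruction described above can be sketched in a few lines. This is an illustrative sketch only; the function name `delay_embed` and its parameters are assumptions, not part of the original text.

```python
import numpy as np

def delay_embed(x, p, tau=1):
    """Build the delay vectors (x[i], x[i+tau], ..., x[i+(p-1)*tau])
    that trace the reconstructed trajectory in R^p."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (p - 1) * tau  # number of complete delay vectors
    return np.array([x[i:i + (p - 1) * tau + 1:tau] for i in range(n)])

# Example: reconstruct a sine series in embedding dimension p = 3
t = np.linspace(0, 8 * np.pi, 500)
points = delay_embed(np.sin(t), p=3)
print(points.shape)  # (498, 3): a trajectory of 498 points in R^3
```

On a trajectory like this, the fractal dimension can then be estimated for increasing p and compared against the three criteria above.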

#### **9.4. Matrix and correlation dimension**

The decomposition of the time series into principal components gives rise to the correlation matrix. This two-dimensional (M × M) matrix is constructed by placing the values of the correlation function for τ = 0 along the main diagonal; the values for τ = 1 are then placed to the right and left of the diagonal, followed by τ = 2, and so on until the matrix is complete, which can be as large as 16 × 16 [37, 60].
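A hedged sketch of this construction, assuming the "correlation function" is the normalized autocorrelation of the series; the helper name `correlation_matrix` is hypothetical.

```python
import numpy as np

def correlation_matrix(x, M=16):
    """M x M matrix with the autocorrelation for lag 0 on the main diagonal,
    lag 1 on the adjacent diagonals, lag 2 next, and so on (a Toeplitz layout)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    # Autocorrelation values r(0), r(1), ..., r(M-1), normalized so r(0) = 1
    r = np.array([np.dot(x[:len(x) - k], x[k:]) / np.dot(x, x) for k in range(M)])
    lags = np.abs(np.subtract.outer(np.arange(M), np.arange(M)))
    return r[lags]

C = correlation_matrix(np.sin(np.linspace(0, 20 * np.pi, 2000)), M=16)
eigenvalues = np.linalg.eigvalsh(C)
# The number of significant eigenvalues gives a measure of the system's complexity.
```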

The number of significant eigenvalues of the correlation matrix, of the order of the correlation dimension, is a measure of the complexity of the system [37]. A procedure with these characteristics was applied to the time series of X(t), Y(t), and Z(t).

#### **9.5. The entropy of Kolmogorov and its relation to the loss of information**

Following Farmer [61, 62], one of the essential differences between chaotic and predictable behavior is that chaotic trajectories continuously generate new information, while predictable trajectories do not. Metric entropy makes this notion rigorous: in addition to providing a good definition of "chaos," it provides a quantitative way to describe "how chaotic" a dynamical system is.

The Kolmogorov entropy [48, 49] is the average information loss [51, 63] when l (the cell side, in units of information) and τ (the time interval) become infinitesimal:

$$S\_K = -\lim\_{\tau \to 0} \lim\_{l \to 0} \lim\_{N \to \infty} \frac{1}{N\tau} \sum\_{i\_0 \dots i\_N} P\_{i\_0 \dots i\_N} \log P\_{i\_0 \dots i\_N} \quad \text{(Kolmogorov entropy)}\tag{6}$$

It is expressed in information bits/s or bits/orbit for a continuous system and bits/iteration for a discrete system.

The Kolmogorov entropy difference (ΔS<sub>K</sub>) between one cell and the next (S<sub>K,n+1</sub> − S<sub>K,n</sub>) represents the additional information needed to know in which cell (i<sub>n+1</sub>) the system will be found in the future. Therefore, the difference (S<sub>K,n+1</sub> − S<sub>K,n</sub>) measures the loss of system information over time.

In conclusion, the calculation of the Kolmogorov entropy allows us to:

**1.** Check whether the Kolmogorov entropy is between zero and infinity (0 < S<sub>K</sub> < ∞), which verifies the presence of chaotic behavior. If the Kolmogorov entropy is equal to 0, there is no information loss, and the system is regular and predictable; if S<sub>K</sub> = ∞, the system is entirely random and it is impossible to make any prediction.

**2.** Determine the amount of information needed to predict the future behavior of a system, in this case, a learning process.

**3.** Calculate the speed with which the system loses (or degrades) information over time.

**4.** Establish the maximum horizon of temporal predictability of the system, beyond which no prediction can be made, nor any elaboration of scenarios.

#### **9.6. Determination of information loss**

There is a relationship between the Kolmogorov entropy and the characteristic parameter of chaos, the Lyapunov exponent, λ, which shows that it is proportional to the loss of information, 〈ΔI〉 [51]:

$$\langle \Delta I \rangle\_i \log 2 = -\lambda\_i \;\Rightarrow\; \langle \Delta I \rangle\_i = \frac{-\lambda\_i}{\log 2}, \quad i = X, Y, Z \tag{7}$$

The expressions for 〈ΔI〉<sub>i</sub> are in bits/time and show the relationship between the Kolmogorov entropy and the Lyapunov exponent, λ.

If λ < 0, the movement is not chaotic and information is not lost, because the prediction is accurate. (This is the main idea of the current educational paradigm.)

If λ > 0, the movement is chaotic, the prediction is less accurate and, therefore, the loss of information is greater [51, 64].

In chaos theory, the calculation of the Lyapunov coefficients is fundamental because it allows studying the effect of the initial condition, the irreversibility of the processes, the entropy, the time of predictability, and the complexity, and, based on these parameters, characterizing the sustainability of a learning process.

The Lyapunov coefficients λ<sub>x</sub>, λ<sub>y</sub>, and λ<sub>z</sub> are associated with the time series of the Inquiry/Persuasion (X(t)), Positivity/Negativity (Y(t)), and Internal Information/External Information (Z(t)) coefficients obtained according to the learning dynamics.

Initial Condition and Behavior Patterns in Learning Dynamics: Study of Complexity and… http://dx.doi.org/10.5772/intechopen.74140

#### **10. Experimental results: Application of the chaos data analyzer software (CDA) to the experimental time series**
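The information-loss relation of Eq. (7) can be sketched numerically as below. The λ values here are placeholders chosen for illustration, not the experimentally obtained coefficients, and the function name `information_loss` is an assumption.

```python
import math

def information_loss(lam):
    """Average information loss <dI> = -lam / log 2 (Eq. 7), in bits per unit time,
    for a Lyapunov exponent lam given in nats per unit time."""
    return -lam / math.log(2)

# Placeholder exponents for the X(t), Y(t), Z(t) series (illustrative only)
for name, lam in [("X", 0.12), ("Y", -0.05), ("Z", 0.30)]:
    regime = "chaotic: information is lost" if lam > 0 else "not chaotic: predictable"
    print(f"{name}: lambda = {lam:+.2f}, "
          f"<dI> = {information_loss(lam):+.3f} bits/time ({regime})")
```

A negative 〈ΔI〉 for a positive exponent reflects the sign convention of Eq. (7): the larger λ is, the faster information about the initial condition is lost.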

