**9.4. Matrix and correlation dimension**

The decomposition of a time series into its main analysis components gives rise to the correlation matrix. This two-dimensional (M × M) matrix is constructed by placing the values of the correlation function for τ = 0 along the main diagonal; the values for τ = 1 are then placed to the right and left of the diagonal, followed by those for τ = 2, and so on until the matrix is complete, which can be as large as 16 × 16 [37, 60].

The number of significant eigenvalues of the correlation matrix, of the order of the correlation dimension, is a measure of the complexity of the system [37]. A procedure based on these characteristics was applied to the time series of X(t), Y(t), and Z(t), as sketched below.
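As an illustration, the following minimal Python sketch (our own; the function names, the 16 × 16 default, and the threshold defining a "significant" eigenvalue are assumptions, not taken from the source) builds such a Toeplitz-structured correlation matrix from a series and counts its significant eigenvalues:

```python
import numpy as np
from scipy.linalg import toeplitz

def correlation_matrix(x, M=16):
    """M x M correlation matrix: C(0) on the main diagonal, C(1) on the
    first off-diagonals to its right and left, C(2) next, and so on."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Autocorrelation C(tau) for tau = 0 .. M-1, normalized so C(0) = 1
    c = np.array([np.dot(x[:n - tau], x[tau:]) / (n - tau) for tau in range(M)])
    c /= c[0]
    return toeplitz(c)  # entry (i, j) holds C(|i - j|)

def significant_eigenvalues(R, threshold=0.1):
    """Count eigenvalues above a heuristic fraction of the largest one."""
    w = np.linalg.eigvalsh(R)
    return int(np.sum(w > threshold * w.max()))

# Hypothetical series standing in for X(t)
rng = np.random.default_rng(0)
x_t = np.sin(0.3 * np.arange(500)) + 0.2 * rng.normal(size=500)
print(significant_eigenvalues(correlation_matrix(x_t)))
```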


chaotic" a dynamic system is.

*SK* <sup>=</sup> <sup>−</sup>lim*<sup>τ</sup>*→<sup>0</sup> lim*<sup>l</sup>*→<sup>0</sup> lim*<sup>n</sup>*→<sup>∞</sup>

a discrete system.

over time.

The decomposition of time series into main analysis components gives rise to the correlation matrix. This matrix is two-dimensional (M × M) constructed by placing the values of the correlation function for τ = 0 along the main diagonal. Then, the values for τ = 1 put to the right and left of the diagonal, following with τ = 2 and so on until completing the matrix, which can

The number of significant eigenvalues of the correlation matrix, of the order of the correlation dimension, is a measure of the complexity of the system [37]. A procedure of these character-

Following Farmer [61, 62], one of the essential differences between chaotic and predictable behavior is that chaotic trajectories continuously generate new information, while predictable trajectories do not. Metric entropy makes this notion more rigorous. In addition to providing a good definition of "chaos," metric entropy provides a quantitative way to describe "how

The entropy of Kolmogorov [48, 49] is the average information loss [51, 63], when "l" (cell side

Expressed in information bits/sec or bits/orbits for a continuous system and bits/iteration for

The entropy difference of Kolmogorov (ΔSK) between one cell and another (SK n+1 - SK n) represents the additional information needed to know, in which cell (i n+1) system will be found in the future. Therefore, the difference (SK n+1 - SK n) measures the loss of system information

*Po*.…*<sup>n</sup>* log*P*0..…*<sup>n</sup>* (Kolmogorov entropy) (6)

istics was applied the time series of X (t), Y (t), and Z (t).

in units of information) and τ (time) become infinitesimal:

\_\_*l n* ∑ 0.…*n*
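The source does not prescribe an estimator, but a standard way to approximate S<sub>K</sub> in practice, mirroring the entropy difference just described, is to coarse-grain the series into cells of side *l*, compute the Shannon entropy H<sub>n</sub> of length-n cell sequences, and take (H<sub>n+1</sub> − H<sub>n</sub>)/τ. A minimal Python sketch under those assumptions (all names and parameter values are ours):

```python
import numpy as np
from collections import Counter

def block_entropy(symbols, n):
    """Shannon entropy (bits) of the length-n blocks of a symbol sequence."""
    counts = Counter(tuple(symbols[i:i + n]) for i in range(len(symbols) - n + 1))
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log2(p))

def k_entropy_estimate(x, n_cells=8, n=3, tau=1.0):
    """Estimate S_K in bits per unit time as (H_{n+1} - H_n) / tau,
    after partitioning the range of x into n_cells cells of equal side."""
    x = np.asarray(x, dtype=float)
    edges = np.linspace(x.min(), x.max(), n_cells + 1)
    symbols = np.clip(np.digitize(x, edges) - 1, 0, n_cells - 1)
    return (block_entropy(symbols, n + 1) - block_entropy(symbols, n)) / tau

# Hypothetical test series: the logistic map at r = 4, whose Kolmogorov
# entropy is known to be 1 bit/iteration, so the estimate should be near 1.
x = np.empty(5000)
x[0] = 0.4
for i in range(1, len(x)):
    x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])
print(k_entropy_estimate(x))
```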

**9.6. Determination of information loss**

There is a relationship between the Kolmogorov entropy and the characteristic parameter of chaos, the Lyapunov exponent λ, which shows that the entropy is proportional to the loss of information, ⟨ΔI⟩ [51]:

$$
\langle \Delta I \rangle_i \log 2 = -\lambda_i \;\Rightarrow\; \langle \Delta I \rangle_i = \frac{-\lambda_i}{\log 2}, \qquad i = X, Y, Z \tag{7}
$$

The Lyapunov coefficients λ<sub>X</sub>, λ<sub>Y</sub>, and λ<sub>Z</sub> are associated with the time series of the Inquiry/Persuasion (X(t)), Positivity/Negativity (Y(t)), and Internal Information/External Information (Z(t)) coefficients obtained from the learning dynamics.

The expressions for ⟨ΔI⟩<sub>i</sub> are in bits per unit time and show the relationship between the Kolmogorov entropy and the Lyapunov exponent, λ.
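As a purely illustrative reading of Eq. (7) (the value of λ<sub>X</sub> is hypothetical, and we assume the exponent is given in nats per unit time, so dividing by log 2 ≈ 0.693 converts it to bits): taking λ<sub>X</sub> = 0.35 s⁻¹,

$$
\langle \Delta I \rangle_X = \frac{-0.35}{\log 2} \approx -0.50 \ \text{bits/s},
$$

that is, about half a bit of predictive information is lost each second along X(t); the negative sign marks it as a loss.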

If λ < 0, the motion is not chaotic and no information is lost, because the prediction is accurate. (This is the main idea of the current educational paradigm.)

If λ > 0, the motion is chaotic, the prediction is less accurate, and the loss of information is therefore greater [51, 64].
