**7. Time series generation using artificial neural network (ANN)**

A large class of architectures has been used in neural networks for various applications. Among these applications, one issue relates to the approximation of a nonlinear mapping *f*(**x**) by a network *f*<sub>*ANN*</sub>(**x**), **x** ∈ **R**<sup>*K*</sup>, where *K* corresponds to the size of the input. Besides the Radial Basis Function (RBF) network, the Multi Layer Perceptron (MLP) has been used extensively in function approximation. An MLP neural network comprises an *input layer*, several *hidden layers* and an *output layer*, as shown in **Figure 24**.

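As a minimal illustration of such a network, the following Python sketch evaluates the two-layer forward pass of Eq. (26) with *g* = tanh. The sizes and the randomly initialized weights are illustrative assumptions, not values from the chapter.

```python
import numpy as np

# Hedged sketch of the two-layer MLP mapping of Eq. (26).
rng = np.random.default_rng(0)

K = 4   # number of inputs (illustrative)
H = 3   # hidden nodes, matching the upper limit 3 in Eq. (26)
M = 1   # output nodes (illustrative)

w1 = rng.normal(scale=0.5, size=(H, K))   # w1[j, k]: input k -> hidden node j
theta1 = rng.normal(scale=0.5, size=H)    # hidden-layer biases
w2 = rng.normal(scale=0.5, size=(M, H))   # w2[i, j]: hidden j -> output i
theta2 = rng.normal(scale=0.5, size=M)    # output-layer biases

def mlp_forward(x, g=np.tanh):
    # y_i = g( sum_j w2[i, j] * g( sum_k w1[j, k] * x_k + theta1[j] ) + theta2[i] )
    hidden = g(w1 @ x + theta1)
    return g(w2 @ hidden + theta2)

y = mlp_forward(np.ones(K))
print(y)   # a length-1 array; tanh keeps each entry in (-1, 1)
```

With a sigmoid activation one would simply pass `g=lambda n: 1 / (1 + np.exp(-n))` instead of the default tanh.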
*Chaotic Dynamics and Complexity in Real and Physical Systems DOI: http://dx.doi.org/10.5772/intechopen.96573*

In an *MLP*, the inputs *x<sub>i</sub>*, *i* = 1, 2, ⋯, *K*, to a neuron are multiplied by the weights *w<sub>ki</sub>* and summed up along with the bias *θ<sub>i</sub>*. The resulting *n<sub>i</sub>* then acts as the input to the activation function *g*, which can be chosen as a sigmoid or a tanh function.

**Figure 24.**
*A multilayer perceptron network.*

The output at node *i* is given by *y<sub>i</sub>* = *g*(∑<sup>*K*</sup><sub>*j*=1</sub> *w<sub>ji</sub>x<sub>j</sub>* + *θ<sub>i</sub>*). **Figure 24** illustrates a typical *MLP* network, where the output is given by

$$y_i = g\left[\sum_{j=1}^{3} w_{ji}^{2}\, g\left(\sum_{k=1}^{K} w_{kj}^{1} x_k + \theta_j^{1}\right) + \theta_i^{2}\right]. \tag{26}$$

Several algorithms are available to determine the network parameters, e.g., the weights (*w*<sup>*k*</sup><sub>*ji*</sub>) and the biases (*θ*<sup>*k*</sup><sub>*j*</sub>). Such algorithms are termed *teaching* or *learning* algorithms. The basic steps in the learning procedure for an *MLP* network are: (a) defining the network structure, selecting the activation function and initializing the weights and biases; (b) providing the error estimates and the number of epochs before running the training algorithm; (c) simulating the output using the input data to the network and comparing it with the given output; and (d) finally validating the result with independent data.

In this work, using as inputs the *x*, *y* and *z* time series from the Lorenz system exhibiting chaotic dynamics, and using the *newff*, *train* and *sim* *MATLAB* commands [31], we simulated each of these time series. In our simulation we take the learning parameters *net.trainParam.show* = 50, *net.trainParam.lr* = 0.05, *net.trainParam.epochs* = 1000 and *net.trainParam.goal* = 1e−3, and use 100 neurons and 3 output layers [48]. **Figure 25** shows the *MLP*-network-generated time series of the Lorenz variables and the corresponding deviations from the input time series.

**Figure 25.**
*Neural network generated Lorenz time series (black, red, brown) and respective deviation from input time series (blue).*

**8. Conclusion**

In this chapter, we have applied the phase portrait, bifurcation diagram, Poincare surface of section, LCEs, correlation dimension, topological entropy and multi-scale permutation entropy to characterize the complex dynamics of the systems considered.

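The multi-scale permutation entropy (*MPE*) used as a complexity measure in this chapter can be sketched in Python. The coarse-graining and the Bandt-Pompe normalization below are a hedged, illustrative implementation, not the chapter's actual code; function names and the choice of scales are assumptions.

```python
import numpy as np
from math import factorial

def coarse_grain(x, s):
    """Average non-overlapping windows of length s (multi-scale step)."""
    n = len(x) // s
    return x[:n * s].reshape(n, s).mean(axis=1)

def permutation_entropy(x, m):
    """Normalized Bandt-Pompe permutation entropy of order m (0 = fully regular, 1 = random)."""
    patterns = {}
    for i in range(len(x) - m + 1):
        key = tuple(np.argsort(x[i:i + m]))     # ordinal pattern of the window
        patterns[key] = patterns.get(key, 0) + 1
    p = np.array(list(patterns.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log(p)).sum() / np.log(factorial(m)))

def mpe(x, m=4, scales=(1, 2, 3, 4, 5)):
    """Permutation entropy of the coarse-grained series at each scale factor s."""
    return [permutation_entropy(coarse_grain(x, s), m) for s in scales]

rng = np.random.default_rng(0)
noise = rng.normal(size=5000)
print(permutation_entropy(noise, 4))            # close to 1 for white noise
print(permutation_entropy(np.arange(5000.0), 4))  # 0.0 for a monotone series
```

The trend of the returned list with the scale factor *s* is what the chapter inspects when comparing the *X*, *Y* and *Z* series of the financial model.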
*Advances in Dynamical Systems Theory, Models, Algorithms and Applications*

The complexity measure of the investment demand *Y* time series is, in turn, higher than that of the price index *Z* time series. Therefore we may conclude that the complexity measure relation for the considered financial model time series can be expressed as *MPE<sub>X</sub>* > *MPE<sub>Y</sub>* > *MPE<sub>Z</sub>*. The behavior of the complexity measure of the considered finance model has been found to be quite similar to that of the chaotic Rossler attractor [with parameters *a* = 0.15, *b* = 0.20, *c* = 10.0] (**Figure 23b**).

The simulation results for the multi-scale permutation entropy, *MPE*, presented for the financial model and the Rossler chaotic model exhibit long-term correlation of the respective time series of a dynamical variable. Such an inference is made in view of the increasing trend of *MPE* with the scale factor *s* for a given *m*. In the case of a standard financial model, the efficacy of such a model could be assessed by comparing the *MPE* trend of the simulated time series for the interest rate (*X*), the investment demand (*Y*) and the price index (*Z*) with real time series data for the corresponding dynamical variables, when available. Finally, we introduced the idea of generating the time series of a nonlinear chaotic dynamical system, here the Lorenz system, using an artificial neural network.

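The neural-network generation of a Lorenz time series described in Section 7 relies on MATLAB's *newff*, *train* and *sim*. As a rough, hypothetical Python analogue of that workflow (the network size, learning rate, lag count and epoch count below are illustrative, not the chapter's settings), one can train a small one-hidden-layer MLP by plain gradient descent to predict the next value of the Lorenz *x* series:

```python
import numpy as np

def lorenz_series(n=2000, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Euler-integrate the Lorenz system and return the x component."""
    x, y, z = 1.0, 1.0, 1.0
    xs = np.empty(n)
    for i in range(n):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs[i] = x
    return xs

s = lorenz_series()
s = (s - s.mean()) / s.std()               # normalize the series

lags = 3                                    # use 3 past values as network input
X = np.column_stack([s[i:len(s) - lags + i] for i in range(lags)])
t = s[lags:]                                # target: the next value

rng = np.random.default_rng(1)
H = 10                                      # hidden nodes (illustrative)
W1 = rng.normal(scale=0.3, size=(lags, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.3, size=(H, 1));   b2 = np.zeros(1)

lr, epochs = 0.05, 200                      # illustrative learning parameters
losses = []
for _ in range(epochs):
    Z = np.tanh(X @ W1 + b1)                # hidden layer
    pred = (Z @ W2 + b2).ravel()            # linear output layer
    err = pred - t
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation of the mean-squared-error gradients
    d = (2.0 / len(t)) * err[:, None]
    gW2 = Z.T @ d;  gb2 = d.sum(0)
    dZ = (d @ W2.T) * (1 - Z ** 2)          # tanh'(a) = 1 - tanh(a)^2
    gW1 = X.T @ dZ; gb1 = dZ.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(losses[0], losses[-1])                # the training error decreases
```

Plotting `pred` against `t` would play the role of **Figure 25**: the generated series overlaid on the input series, with `err` as the deviation.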