### **3. ChSANN and LSANN**

#### **3.1. Methodology**

The functional mapping of LSANN and ChSANN is shown in **Figure 1**, which demonstrates the structure of both methods; for the convenience of the reader, a stepwise explanation of both methods is also presented.

**Figure 1.** Model diagram of ChSANN and LSANN.

The steps for both methods are explained together because of their structural similarity; they differ only in the polynomial basis, which affects the accuracy of the results.

Step 1: The summation of the products of the network adaptive coefficients (NAC) and the Chebyshev or Legendre polynomials is calculated for the independent variable of the fractional differential equation, for an arbitrary value of *m*, as shown in **Figure 1**.

Step 2: The activation of *μ* or *η* is performed with the first three terms of the Taylor series of the hyperbolic tangent function tanh(•); these terms are given in **Figure 1**.

Step 3: The trial solution is constructed using the initial conditions, as in the study of Lagaris and Fotiadis [28].

Step 4: The required derivatives of the trial solution are calculated.

Step 5: The minimization of the mean square error (MSE), that is, the learning of the NAC, is executed by the thermal minimization methodology known as simulated annealing. The equation used to calculate the MSE is discussed in the next section. Before optimization, the independent variable is discretized into an array of trial points.

Step 6: If the value of the MSE is within an acceptable range, the values of the trial points and the NAC are substituted into the trial solution to obtain the output. Otherwise, the procedure is repeated from Step 1 with a different value of *m*.

#### **3.2. Implementation on fractional Riccati differential equation**

In this section, ChSANN and LSANN are employed for fractional Riccati differential equations of the type:

$$\frac{d^a y(t)}{dt^a} = f(t, y), \quad y(0) = A, \quad 0 < a \le 1 \tag{1}$$

For implementing both methodologies, Eq. (1) can be written in the following form:

$$\nabla^a y_{tr}(t, \psi) - F\left(t, y_{tr}(t, \psi)\right) = 0, \quad t \in \left[0, 1\right] \tag{2}$$

*y*<sub>*tr*</sub>(*t*, *ψ*) is defined as the trial solution, where *ψ* denotes the NAC, generally known as weights, and ∇ is the differential operator. The trial solution is obtained by applying the Taylor series to the activation of *μ* and using the initial conditions, *μ* being the sum of the products of the NAC and the Chebyshev polynomials. For the trial solution of LSANN, the same procedure is followed, but *η*, the sum of the products of the NAC and the Legendre polynomials, is calculated instead of *μ*. Here, tanh(•) is used as the activation function, but for the fractional derivative in the Caputo sense, only the first three terms of the Taylor series of tanh(•) are considered, which are given for ChSANN as follows:

$$N = \mu - \frac{\mu^3}{3} + \frac{2\mu^5}{15} \tag{3}$$

where *μ* for ChSANN is defined as follows:

$$\mu = \sum_{i=1}^{m} \psi_i T_{i-1} \tag{4}$$

where *T*<sub>*i*−1</sub> are the Chebyshev polynomials with the following recursive formula:

$$T_{m+1}(x) = 2xT_m(x) - T_{m-1}(x), \quad m \ge 1 \tag{5}$$

with *T*<sub>0</sub>(*x*) = 1 and *T*<sub>1</sub>(*x*) = *x*.
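As a sanity check of the recursion in Eq. (5), a minimal Python sketch (the names `chebyshev_T` and `mu` are our illustrative choices, not part of the original method):

```python
def chebyshev_T(n, x):
    """Chebyshev polynomial T_n(x) via the recursion of Eq. (5):
    T_{m+1}(x) = 2x*T_m(x) - T_{m-1}(x), with T_0 = 1 and T_1 = x."""
    t_prev, t_curr = 1.0, x  # T_0 and T_1
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t_curr = t_curr, 2 * x * t_curr - t_prev
    return t_curr

def mu(t, psi):
    # The network sum of Eq. (4): weighted combination of T_0 .. T_{m-1}
    return sum(p * chebyshev_T(i, t) for i, p in enumerate(psi))
```

For example, `chebyshev_T(3, 0.5)` evaluates *T*<sub>3</sub>(*x*) = 4*x*³ − 3*x* at *x* = 0.5, giving −1.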


For LSANN, the activation function and *η* can be defined as follows:

$$N = \eta - \frac{\eta^3}{3} + \frac{2\eta^5}{15} \tag{6}$$

$$\eta = \sum_{i=1}^{m} \psi_i L_{i-1} \tag{7}$$

where *L*<sub>*i*−1</sub> are the Legendre polynomials with the recursive formula:

$$L_{m+1}(x) = \frac{2m+1}{m+1}\,x\,L_m(x) - \frac{m}{m+1}\,L_{m-1}(x), \quad m \ge 1 \tag{8}$$

where *L*<sub>0</sub>(*x*) = 1 and *L*<sub>1</sub>(*x*) = *x*, and the value of *m* is adjustable to reach the highest accuracy. For Eq. (1), the trial solution can be written as defined in the study of Lagaris and Fotiadis [28], with *N* chosen according to the method:
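A minimal Python sketch of the recursion in Eq. (8); the function name `legendre_L` is our illustrative choice:

```python
def legendre_L(n, x):
    """Legendre polynomial L_n(x) via the recursion of Eq. (8):
    L_{m+1}(x) = ((2m+1)*x*L_m(x) - m*L_{m-1}(x)) / (m+1),
    with L_0 = 1 and L_1 = x."""
    l_prev, l_curr = 1.0, x  # L_0 and L_1
    if n == 0:
        return l_prev
    for m in range(1, n):
        l_prev, l_curr = l_curr, ((2 * m + 1) * x * l_curr - m * l_prev) / (m + 1)
    return l_curr
```

For example, `legendre_L(2, 0.5)` evaluates *L*<sub>2</sub>(*x*) = (3*x*² − 1)/2 at *x* = 0.5, giving −0.125.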

$$y_{tr}(t, \psi) = A + t\,N \tag{9}$$

The trial solution can be written in expanded form for ChSANN at *m* = 2 as follows:

$$y_{tr}(t, \psi) = A + t\left(\psi_1 + t\psi_2 - \frac{1}{3}(\psi_1 + t\psi_2)^3 + \frac{2}{15}(\psi_1 + t\psi_2)^5\right) \tag{10}$$

The fractional derivative in the Caputo sense of Eq. (10) is as follows:

$$\begin{split} \nabla^{a} y_{tr}(t, \psi) &= \frac{\Gamma(2)}{\Gamma(2-a)}\, t^{1-a} \left( \psi_1 - \frac{\psi_1^3}{3} + \frac{2\psi_1^5}{15} \right) + \frac{\Gamma(3)}{\Gamma(3-a)}\, t^{2-a} \left( \psi_2 - \psi_1^2\psi_2 + \frac{2}{3}\psi_1^4\psi_2 \right) \\ &+ \frac{\Gamma(4)}{\Gamma(4-a)}\, t^{3-a} \left( \frac{4}{3}\psi_1^3\psi_2^2 - \psi_1\psi_2^2 \right) + \frac{\Gamma(5)}{\Gamma(5-a)}\, t^{4-a} \left( \frac{4}{3}\psi_1^2\psi_2^3 - \frac{\psi_2^3}{3} \right) \\ &+ \frac{2}{3}\,\frac{\Gamma(6)}{\Gamma(6-a)}\, t^{5-a}\, \psi_1\psi_2^4 + \frac{2}{15}\,\frac{\Gamma(7)}{\Gamma(7-a)}\, t^{6-a}\, \psi_2^5 \end{split} \tag{11}$$
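One way to check Eq. (11) is to note that at *a* = 1 the Caputo derivative reduces to the ordinary derivative of Eq. (10). A minimal Python sketch, with *A* = 0 assumed and function names of our own choosing:

```python
import math

def y_trial(t, p1, p2, A=0.0):
    # Trial solution of Eq. (10): y = A + t*N(mu), with mu = p1 + t*p2
    mu = p1 + t * p2
    return A + t * (mu - mu**3 / 3 + 2 * mu**5 / 15)

def caputo_deriv(t, a, p1, p2):
    # Term-by-term Caputo derivative, Eq. (11), using the rule
    # D^a t^n = Gamma(n+1)/Gamma(n+1-a) * t^(n-a) for n >= 1.
    g = math.gamma
    c = [p1 - p1**3 / 3 + 2 * p1**5 / 15,        # coefficient of t^1
         p2 - p1**2 * p2 + (2/3) * p1**4 * p2,   # t^2
         (4/3) * p1**3 * p2**2 - p1 * p2**2,     # t^3
         (4/3) * p1**2 * p2**3 - p2**3 / 3,      # t^4
         (2/3) * p1 * p2**4,                     # t^5
         (2/15) * p2**5]                         # t^6
    return sum(g(k + 2) / g(k + 2 - a) * t**(k + 1 - a) * c[k]
               for k in range(6))
```

At *a* = 1 the result agrees with a central-difference approximation of d*y*<sub>*tr*</sub>/d*t*, which gives confidence in the polynomial coefficients.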

The mean square error (MSE) of Eq. (1) is calculated as follows:

$$MSE(\psi_i) = \frac{1}{n}\sum_{j=1}^{n} \left( \nabla^{a} y_{tr}\left(t_j, \psi_i\right) - F\left(t_j, y_{tr}\left(t_j, \psi_i\right)\right) \right)^{2}, \quad t_j \in \left[0, 1\right] \tag{12}$$

where *n* is the number of trial points. The learning of the NAC is performed by minimizing the MSE in Eq. (12) to the lowest acceptable value. The thermal minimization methodology known as simulated annealing is applied here for the learning of the NAC. Simulated annealing can be described as a physical model of annealing, in which a metal object is first heated and then slowly cooled to minimize the system energy. We have implemented the procedure in Mathematica 10, but interested readers can find the details of simulated annealing in the study of Ledesma et al. [29].
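To make Steps 5 and 6 concrete, the following is a minimal pure-Python sketch of NAC learning by simulated annealing, applied to the hypothetical test case *f*(*t*, *y*) = 1 − *y*², *y*(0) = 0 at *a* = 1, where the Caputo derivative in Eq. (12) reduces to an ordinary derivative (approximated here by a central difference). The function names, cooling schedule, and step parameters are our illustrative choices, not those of the original Mathematica 10 implementation:

```python
import math
import random

def trial_solution(t, psi):
    # ChSANN trial solution of Eq. (10) with A = 0 and m = 2
    mu = psi[0] + t * psi[1]
    return t * (mu - mu**3 / 3 + 2 * mu**5 / 15)

def mse(psi):
    # Eq. (12) at a = 1; here f(t, y) = 1 - y**2
    ts = [j / 10 for j in range(1, 11)]  # ten trial points in (0, 1]
    h = 1e-5
    total = 0.0
    for t in ts:
        dy = (trial_solution(t + h, psi) - trial_solution(t - h, psi)) / (2 * h)
        total += (dy - (1 - trial_solution(t, psi) ** 2)) ** 2
    return total / len(ts)

def simulated_annealing(f, x0, T0=1.0, cooling=0.99, steps=2000, width=0.2):
    # Metropolis-style annealing: always accept improvements, accept
    # worse moves with probability exp(-delta/T), cool T geometrically.
    random.seed(0)
    x, fx = list(x0), f(x0)
    best, fbest = list(x), fx
    T = T0
    for _ in range(steps):
        cand = [xi + random.uniform(-width, width) for xi in x]
        fc = f(cand)
        if fc < fx or random.random() < math.exp(-(fc - fx) / T):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        T *= cooling
    return best, fbest

psi, err = simulated_annealing(mse, [0.0, 0.0])
```

Since the exact solution of this test case is tanh(*t*), the learned *ψ*<sub>1</sub> should approach 1 and *ψ*<sub>2</sub> should stay near 0, so that Eq. (10) reproduces the truncated Taylor series of tanh(*t*).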
