**4. Results and discussion**

| *x* | ChSANN | LSANN | MHPM [7] |
|-----|---------|---------|----------|
| 0   | 0       | 0       | 0        |
| 0.1 | 0.22983 | 0.21885 | 0.216866 |
| 0.2 | 0.46136 | 0.45018 | 0.482292 |
| 0.3 | 0.69478 | 0.68545 | 0.654614 |
| 0.4 | 0.92279 | 0.91423 | 0.891404 |
| 0.5 | 1.13531 | 1.12664 | 1.132763 |
| 0.6 | 1.32357 | 1.31532 | 1.370240 |
| 0.7 | 1.48314 | 1.47660 | 1.594278 |
| 0.8 | 1.61485 | 1.61045 | 1.794879 |
| 0.9 | 1.72401 | 1.71972 | 1.962239 |
| 1.0 | 1.81844 | 1.80882 | 2.087384 |

**Table 7.** Numerical comparison for *α* = 0.75.

| *x* | ChSANN | LSANN | Reference [13] |
|-----|---------|---------|----------------|
| 0.2 | 0.31018 | 0.30567 | 0.314869 |
| 0.4 | 0.69146 | 0.68661 | 0.697544 |
| 0.5 | 0.89758 | 0.89230 | 0.903673 |
| 0.6 | 1.10220 | 1.09708 | 1.107866 |
| 0.8 | 1.47288 | 1.46889 | 1.477707 |
| 1.0 | 1.76276 | 1.76355 | 1.765290 |

**Table 8.** Numerical comparison for *α* = 0.9.

| *α* | NAC (ChSANN) | Training points (ChSANN) | MSE (ChSANN) | NAC (LSANN) | Training points (LSANN) | MSE (LSANN) |
|------|----|----|--------------------------|----|----|---------------------------|
| 1    | 6 | 20 | 1.6127 × 10<sup>−7</sup> | 6 | 20 | 4.68641 × 10<sup>−6</sup> |
| 0.9  | 6 | 10 | 7.2486 × 10<sup>−6</sup> | 6 | 10 | 7.36985 × 10<sup>−6</sup> |
| 0.75 | 6 | 20 | 6.9229 × 10<sup>−5</sup> | 6 | 10 | 1.91318 × 10<sup>−5</sup> |

**Table 9.** Value of mean square error at different values of *α*.

| No. of NAC | No. of training points | MSE | *y*(*t*) | AE |
|------------|------------------------|--------------------------|---------|---------------------------|
| 4 | 10 | 1.1531 × 10<sup>−5</sup> | 1.67997 | 9.52973 × 10<sup>−3</sup> |
| 5 | 10 | 4.9226 × 10<sup>−7</sup> | 1.69016 | 6.61306 × 10<sup>−4</sup> |
| 6 | 20 | 1.6127 × 10<sup>−7</sup> | 1.68956 | 6.10629 × 10<sup>−5</sup> |

**Table 10.** Numerical values of ChSANN at *t* = 1 and for *α* = 1.

In this chapter, two new algorithms for the fractional-order Riccati differential equation have been developed and verified, based on functional-link neural networks with Chebyshev and Legendre polynomials and on simulated annealing. The methods are substantiated by examining two benchmark examples that have already been solved by well-known methods. Numerical comparison with previously published results for fractional-order derivatives demonstrates the effectiveness of the proposed methods.

For test example 1, both methods produced accurate results with low mean square error. Comparing the mean square errors of 5.501631 × 10<sup>−9</sup> for ChSANN and 1.21928 × 10<sup>−9</sup> for LSANN shows that LSANN attains the lower mean square error at *α* = 1. However, it can be observed from **Table 1** and **Figure 2** that ChSANN gave slightly better results despite its marginally higher mean square error. **Table 5** shows that accuracy can be improved by varying the number of weights and training points, and the trend in that table indicates that for ChSANN at *α* = 1 the absolute error decreases in step with the mean square error.

Test example 2 showed trends quite similar to example 1. **Tables 6** and **9** show that at *α* = 1 ChSANN attained a lower mean square error than LSANN, and correspondingly ChSANN produced more accurate results than LSANN at *α* = 0.9, as can be seen in **Figure 4**. The results obtained for fractional orders of the derivative are compared with MHPM for *α* = 0.75 and with a collocation method based on Bernstein polynomials for *α* = 0.9, as presented in **Tables 7** and **8**. The comparison shows that the results achieved by ChSANN and LSANN closely match those obtained by MHPM and by the Bernstein-polynomial collocation method. However, judging from the *α* = 1 case, the results for *α* = 0.75 can be expected to be accurate to 2–3 decimal places, since the MSE reached 6.9229 × 10<sup>−5</sup> for ChSANN and 1.91318 × 10<sup>−5</sup> for LSANN, while the results for *α* = 0.9 should be accurate to 3–4 decimal places, since the MSE reached 7.2486 × 10<sup>−6</sup> for ChSANN and 7.36985 × 10<sup>−6</sup> for LSANN.
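Since **Table 10** reports the *α* = 1 case, its AE column can be checked directly against a closed-form solution. Assuming test example 2 is the standard benchmark *D*<sup>*α*</sup>*y*(*t*) = 2*y*(*t*) − *y*(*t*)<sup>2</sup> + 1 with *y*(0) = 0 (the Riccati problem commonly treated with MHPM), the exact *α* = 1 solution is *y*(*t*) = 1 + √2 tanh(√2 *t* + ½ ln((√2 − 1)/(√2 + 1))). The short sketch below is an illustrative verification under that assumption, not part of the original computation.

```python
# Checking the AE column of Table 10 against the exact alpha = 1 solution of
# the (assumed) benchmark Riccati problem y'(t) = 2y - y**2 + 1, y(0) = 0.
import math

s = math.sqrt(2.0)
# Exact solution evaluated at t = 1: y(1) = 1 + sqrt(2) * tanh(sqrt(2) + c)
y_exact = 1.0 + s * math.tanh(s + 0.5 * math.log((s - 1.0) / (s + 1.0)))

for y_ann in (1.67997, 1.69016, 1.68956):  # ChSANN values of y(1) from Table 10
    print(f"y(1) = {y_ann}:  AE = {abs(y_ann - y_exact):.5e}")
```

Running this reproduces the AE column of **Table 10** to within the rounding of the tabulated *y*(1) values, which supports reading the MSE values in **Table 9** as accuracy indicators for the fractional cases as well.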

The methods proposed in this study are capable of handling highly nonlinear systems. Both proposed neural architectures are computationally cheaper and less exhaustive than a multilayer perceptron (MLP). Together with this ease of computation, the suggested activation function makes fractional differential equations tractable for this approach. Training the NAC by simulated annealing within the Chebyshev and Legendre neural architectures reduced the MSE to a tolerable level, which leads to more accurate numerical approximation. Simulated annealing is a probabilistic procedure that is largely independent of initial values and, unlike many other methods, can escape local optima on the way to a global optimum; it can also successfully optimize functions with crests and plateaus. The methods could be enhanced further by introducing more advanced optimization techniques. The motivation behind this work is the successful application of neural algorithms in the field of fractional calculus, which gives the solution of fractional differential equations a new direction with ease of implementation.
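As a concrete illustration of the architecture described above, the following is a minimal sketch of the ChSANN construction for the integer-order case *α* = 1 of the same benchmark Riccati problem. The trial form *y*(*t*) = *t* · *N*(*t*), the six-coefficient Chebyshev network, the weight bounds, and SciPy's `dual_annealing` standing in for the authors' simulated-annealing routine are all illustrative assumptions rather than the exact published implementation.

```python
# Minimal ChSANN-style sketch for y'(t) = 2y - y**2 + 1, y(0) = 0 (alpha = 1).
# The NAC are the coefficients of a Chebyshev functional-link network N(t);
# simulated annealing minimizes the MSE of the differential-equation residual.
import numpy as np
from numpy.polynomial import chebyshev as C
from scipy.optimize import dual_annealing

t = np.linspace(0.0, 1.0, 20)            # 20 training points on [0, 1]

def trial(w, t):
    # Trial solution y(t) = t * N(t); the factor t enforces y(0) = 0.
    return t * C.chebval(t, w)

def residual_mse(w):
    N = C.chebval(t, w)                  # network output N(t)
    dN = C.chebval(t, C.chebder(w))      # derivative N'(t)
    y = t * N
    dy = N + t * dN                      # product rule: d/dt [t * N(t)]
    return np.mean((dy - (2.0 * y - y**2 + 1.0)) ** 2)

# Anneal over 6 Chebyshev coefficients (6 NAC, matching Table 9).
result = dual_annealing(residual_mse, bounds=[(-5.0, 5.0)] * 6, seed=0)
print("MSE:", result.fun)
print("y(1) ~", trial(result.x, np.array([1.0]))[0])
```

For a fractional order 0 < *α* < 1 the same construction applies, except that the left-hand side becomes the Caputo derivative of the trial solution, which can be evaluated term by term from the power-series form of *t* · *N*(*t*).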
