*VIGOR: A Versatile, Individualized and Generative ORchestrator to Motivate the Movement… DOI: http://dx.doi.org/10.5772/intechopen.96025*

#### **Figure 14.**

*Movement choreography enabled by visible GAN.*

where {**X***,* **Y**} denotes the labelled training data; *f* (**X***, θ*) is the predicted output of the neural network; *θ* ∈ ℜ*<sup>n</sup>* denotes the parameters (weights and biases) of the neural network; *R*(*θ*): ℜ*<sup>n</sup>* → ℜ is the regularizer, whose importance is controlled by the regularization strength ϱ ∈ ℜ; as in Eq. (3), *L<sub>biomechanics</sub>*(*f* (**X***, θ*)) denotes the biomechanics violation of the choreography, with weight *γ* ∈ ℜ; *L<sub>aesthetics</sub>*(*f* (**X***, θ*)) denotes the violation of athletic elegance in the designed choreography, with weight *η* ∈ ℜ.
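A composite objective of this shape can be sketched in plain NumPy. The function and parameter names below (`total_loss`, `biomech_penalty`, `aesthetic_penalty`) are illustrative placeholders, not part of the original work; a minimal sketch assuming a mean-squared data term and an L2 regularizer:

```python
import numpy as np

def total_loss(pred, target, theta, rho=1e-3, gamma=1.0, eta=0.5,
               biomech_penalty=None, aesthetic_penalty=None):
    """Composite objective: data term + regularizer R(theta) + weighted
    biomechanics (gamma) and aesthetics (eta) penalties.
    Penalty callables are hypothetical stand-ins for the chapter's terms."""
    data_term = np.mean((pred - target) ** 2)   # supervised fit to {X, Y}
    reg = rho * np.sum(theta ** 2)              # R(theta): L2 regularizer
    biomech = gamma * (biomech_penalty(pred) if biomech_penalty else 0.0)
    aesthetic = eta * (aesthetic_penalty(pred) if aesthetic_penalty else 0.0)
    return data_term + reg + biomech + aesthetic
```

With both penalty callables omitted, the sketch reduces to an ordinary regularized regression loss, which makes the role of the extra *γ*- and *η*-weighted terms explicit.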

**Figure 14** also illustrates that the generated kinetics need to be made temporally consistent using time-series prediction models such as ARIMA (Eq. (4)), LSTM, or the Fast Fourier Transform (FFT).
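One simple way to impose such temporal consistency is FFT-based low-pass filtering of each generated joint trajectory. The sketch below (the function name `fft_lowpass` and the `keep_fraction` parameter are our own, not the chapter's pipeline) shows the idea on a synthetic jittery trajectory:

```python
import numpy as np

def fft_lowpass(signal, keep_fraction=0.1):
    """Smooth a generated joint trajectory by zeroing high-frequency
    FFT coefficients, keeping only the lowest keep_fraction of bins."""
    coeffs = np.fft.rfft(signal)
    cutoff = max(1, int(len(coeffs) * keep_fraction))
    coeffs[cutoff:] = 0.0                      # drop high-frequency jitter
    return np.fft.irfft(coeffs, n=len(signal))

# Example: a smooth sine trajectory corrupted by frame-to-frame jitter
t = np.linspace(0, 2 * np.pi, 256)
noisy = np.sin(t) + 0.3 * np.random.default_rng(0).standard_normal(256)
smooth = fft_lowpass(noisy, keep_fraction=0.05)
```

ARIMA or LSTM models would instead predict each frame from its predecessors; the FFT variant is the cheapest of the three options named above.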

#### **5.3 Polynomial-based Hessian-free Newton–Raphson optimizer**

Many deep-learning-enabled applications suffer from training data scarcity, and various strategies have been investigated to overcome this limitation. Besides the visible neural network, a polynomial-based Hessian-free Newton-Raphson algorithm (poly-HFNR) [69, 113] has been proposed to address the data-scarcity issue by improving NN learning efficiency. The advantages of poly-HFNR optimizers include: (1) fewer training epochs in NN configuration than first-order-convergence optimizers such as stochastic gradient descent (SGD); (2) lower computation and storage complexity (*O*(*N*), where *N* is the number of degrees of freedom of the neural network) than typical implementations of Newton-Raphson-based algorithms; (3) tolerance to non-convexity; and (4) circumventing both the explicit formulation of the Hessian matrix and the iterative/direct solution of Newton's equations during the training process of the neural network.
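Point (4) can be illustrated with a truncated Neumann series, *H*<sup>−1</sup> ≈ *α* Σ<sub>*k*</sub> (*I* − *αH*)<sup>*k*</sup>, which needs only Hessian-vector products and never the explicit Hessian. The sketch below is a minimal illustration of that idea on a toy quadratic, assuming a step size *α* that keeps the spectral radius of *I* − *αH* below 1; it is not the authors' poly-HFNR implementation:

```python
import numpy as np

def neumann_newton_step(grad, hvp, alpha=0.1, terms=50):
    """Approximate the Newton direction H^{-1} g with a truncated
    Neumann series, alpha * sum_k (I - alpha*H)^k g, using only a
    Hessian-vector product callable hvp(v) = H @ v.
    Converges when the eigenvalues of alpha*H lie in (0, 2)."""
    v = grad.copy()      # current term (I - alpha*H)^k g
    acc = grad.copy()    # running series sum
    for _ in range(terms):
        v = v - alpha * hvp(v)
        acc += v
    return alpha * acc

# Toy quadratic f(x) = 0.5 x^T A x - b^T x, so H = A and grad = A x - b;
# a single Newton step from any point lands on the minimizer A^{-1} b.
A = np.array([[3.0, 0.5], [0.5, 2.0]])
b = np.array([1.0, -1.0])
x = np.zeros(2)
grad = A @ x - b
step = neumann_newton_step(grad, lambda v: A @ v, alpha=0.3, terms=200)
x_new = x - step
```

Because only `hvp` is called, storage stays *O*(*N*) in the parameter count, matching advantage (2) above; in a deep learning framework the Hessian-vector product would come from automatic differentiation rather than an explicit matrix.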

Poly-HFNR based on the Neumann series (Neumann-poly-HFNR) and poly-HFNR based on generalized least-squares polynomials (GLS-poly-HFNR) [47, 69, 113, 114] have been developed and critically assessed on benchmark problems such as iris classification, airfoil recognition, simulation of yacht dynamics, and Pima Indian diabetes. Both implementations demonstrate reliable, super-linear convergence. The experimental results illustrate that: (1) in terms of storage and computation complexity, poly-HFNR is comparable with SGD; (2) in terms of convergence performance, poly-HFNR is comparable with quasi-Newton methods. Our future work will focus on (a) evaluating poly-HFNR on various large-scale benchmark problems; (b) improving the convergence of poly-HFNR from a super-linear to a quadratic rate; and (c) developing a CUDA version of poly-HFNR and integrating it into popular deep learning frameworks such as PyTorch, TensorFlow, and Caffe.
