**4. Training algorithms for recurrent neural networks**

A training algorithm is a procedure that adapts the free parameters of a neural network in response to the behavior of the network embedded in its environment. The goal of the adaptations is to improve the network's performance on the given task. Most training algorithms for neural networks adapt the network parameters in such a way that a certain error measure (also called a cost function) is minimized. Equivalently, the *negative* error measure or a *performance measure* can be maximized.
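The idea of adapting free parameters to minimize an error measure can be sketched as follows. This is an illustrative toy example (a single weight fit by gradient descent), not any specific algorithm discussed in this chapter; the function name, learning rate, and step count are assumptions chosen for clarity.

```python
# Toy sketch: adapt a single free parameter w to minimize the error
# measure E(w) = (target - w * x)^2 by following the negative gradient.
# All names and constants here are illustrative assumptions.

def train(w, x, target, lr=0.1, steps=100):
    for _ in range(steps):
        output = w * x          # network behavior for input x
        error = target - output
        # dE/dw = -2 * error * x, so stepping against the gradient
        # means adding lr * 2 * error * x to w.
        w += lr * 2 * error * x
    return w

w = train(0.0, x=1.0, target=3.0)
print(round(w, 4))
```

With these settings the weight converges toward the value 3.0 that drives the error measure to zero, illustrating minimization of a cost function by iterative parameter adaptation.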

Error measures can be postulated because they seem intuitively right for the task, because they lead to good performance of the neural network on the task, or because they lead to a convenient training algorithm. An example is the Sum Squared Error measure, which has long been a default choice of error measure in neural network research. Error measures can also be derived from more general principles.
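For concreteness, the Sum Squared Error measure mentioned above sums the squared differences between target and actual outputs. A minimal sketch (function name and example values are illustrative assumptions):

```python
def sum_squared_error(targets, outputs):
    # SSE = sum over all output units (and time steps) of
    # (target - actual output)^2
    return sum((t - o) ** 2 for t, o in zip(targets, outputs))

# Example: targets [1.0, 0.0], network outputs [0.8, 0.3]
# SSE = 0.2^2 + 0.3^2 = 0.13
print(round(sum_squared_error([1.0, 0.0], [0.8, 0.3]), 2))
```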

Different error measures lead to different algorithms for minimizing them. Training algorithms can be classified into categories depending on certain distinguishing properties of those algorithms. Four main categories of algorithms can be distinguished (Dijk, 1999):


The following algorithms for the Fully Recurrent Neural Network (FRNN) and subsets of the FRNN are investigated in this section:
