**3.1 Creation of model – The training process**

The process of adapting a neural network to data is called "training" or "learning". During supervised training, input–output pairs are presented to the neural network, and the training algorithm iteratively adjusts the weights of the network.
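Such input–output pairs can be represented very simply. The sketch below builds a hypothetical training set; the generating function `y = 2x + 1` and the sample inputs are made up for illustration, whereas real pairs would come from measurement.

```python
# Hypothetical training set of input-output pairs, generated from
# y = 2x + 1 for illustration; real pairs would be measured data.
training_set = [(x, 2.0 * x + 1.0) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]

# Each pair is presented to the network as (input, target value).
for x, target in training_set:
    print(f"input = {x:.1f}, target = {target:.1f}")
```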

The measured data points are presented to the neural network one after another. For each data point, the neural network produces an output value that normally differs from the **target value**. The difference between the two is the approximation error at that particular data point. The error is then propagated back through the network towards the input, and the connection weights are corrected so as to lower the output error. There are numerous methods for correcting the connection weights; the most frequently used is the **error backpropagation algorithm**.
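A single correction step can be sketched for the smallest possible "network", one linear neuron `y = w*x + b` with a squared-error measure. The data point, learning rate, and initial weights below are assumptions chosen for illustration, not values from the text.

```python
def forward(w, b, x):
    """One linear neuron: output = w * x + b."""
    return w * x + b

def backprop_step(w, b, x, target, lr=0.1):
    # Forward pass: output and approximation error at this data point.
    error = forward(w, b, x) - target
    # Backward pass: gradient of the squared error 0.5 * error**2
    # with respect to the weight and the bias.
    grad_w = error * x
    grad_b = error
    # Weight correction in the direction that lowers the output error.
    return w - lr * grad_w, b - lr * grad_b

w, b = 0.0, 0.0
x, target = 1.0, 3.0          # one hypothetical measured data point
before = abs(forward(w, b, x) - target)
w, b = backprop_step(w, b, x, target)
after = abs(forward(w, b, x) - target)
print(before, after)          # the error shrinks after one correction
```

A full network applies the same idea layer by layer, with the chain rule carrying the error gradient back from the output towards the input.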

The training process runs from the first data point in the training set to the very last, although the order of presentation is not important. A single training pass over the complete training data set is called an **epoch**. Usually several epochs are needed to reach an acceptable error (**training error**) at every data point. The number of epochs depends on various parameters but can easily reach values from 100,000 to several million.
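The epoch structure can be sketched as an outer loop over passes and an inner loop over data points. The single-neuron model, the generated data, the error threshold, and the epoch cap below are all illustrative assumptions.

```python
# Epoch-based training of one linear neuron y = w*x + b on hypothetical
# data generated from y = 2x + 1. One epoch = one pass over the set.
training_set = [(x, 2.0 * x + 1.0) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]
w, b, lr = 0.0, 0.0, 0.1

def sum_squared_error(w, b):
    return sum((w * x + b - t) ** 2 for x, t in training_set)

epochs = 0
while sum_squared_error(w, b) > 1e-6 and epochs < 100_000:
    for x, target in training_set:   # present each data point in turn
        error = (w * x + b) - target
        w -= lr * error * x          # backpropagated weight correction
        b -= lr * error
    epochs += 1

print(epochs, sum_squared_error(w, b))
```

This toy problem converges in far fewer epochs than a realistic network would; the loop structure, however, is the same.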

When the training reaches the desired accuracy, it is stopped. At this point, the model can reproduce all the given data points with the prescribed precision. It is good practice to make additional measurements (a **test data set**) to validate the model at points not included in the training set. On these points the model produces another error, the **test error**, which is normally higher than the training error.
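The two errors can be compared with a simple evaluation pass. Everything below is a made-up illustration: the "trained" model is a fixed line, the training pairs are noise-free, and the test pairs carry a small invented measurement disturbance, which is why the test error comes out higher.

```python
def model(x):
    # Assume training already produced these weights (hypothetical).
    return 2.0 * x + 1.0

# Hypothetical noise-free training data and slightly noisy test data.
training_set = [(x, 2.0 * x + 1.0) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]
test_set = [(0.25, 1.52), (1.25, 3.48)]   # measured at new points

def mean_abs_error(data):
    return sum(abs(model(x) - t) for x, t in data) / len(data)

train_err = mean_abs_error(training_set)
test_err = mean_abs_error(test_set)
print(train_err, test_err)   # here the test error is the larger one
```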
