**8. Conclusions**

The field of neural networks as a tool for approximation (modelling) is evolving rapidly, and numerous authors have reported research results on the topic.

In this chapter, two central issues of approximation theory are called into question: the universality of approximation and the best approximation property. Neither property carries over to neural networks implemented on digital computers.
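
For reference, the universal approximation property is commonly stated in a form similar to Cybenko's single-hidden-layer result: for any continuous target $f$ on $[0,1]^n$ and any prescribed $\varepsilon > 0$, there exist a width $N$ and weights $v_i$, $w_i$, $b_i$ such that

$$\left|\, f(x) - \sum_{i=1}^{N} v_i \, \sigma\!\left(w_i^{\top} x + b_i\right) \right| < \varepsilon \quad \text{for all } x \in [0,1]^n,$$

where $\sigma$ is a sigmoidal activation. This is an existence statement only; it says nothing about whether such weights can be found, or even represented, in the finite-precision arithmetic of a digital computer.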

It has been proven that a three-layer neural network (one hidden layer) can approximate any continuous function. Although this is theoretically true, it is of limited practical value, since the number of epochs needed to reach a prescribed approximation precision drops significantly as the number of hidden layers increases.
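
A quick way to probe this effect is to train networks of different depths to the same tolerance and compare the epoch counts the solver reports. The sketch below is hypothetical (the target function, layer widths, and solver settings are all assumptions, not the chapter's experiment):

```python
# Hypothetical comparison: epochs needed by MLPs of different depths
# to fit a simple 1-D target (not the chapter's own experiment).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(500, 1))
y = np.sin(3 * np.pi * X).ravel()  # assumed smooth target

for layers in [(64,), (16, 16), (8, 8, 8)]:
    net = MLPRegressor(hidden_layer_sizes=layers, activation="tanh",
                       solver="adam", tol=1e-6, max_iter=5000,
                       random_state=0)
    net.fit(X, y)
    # n_iter_ = epochs actually run; loss_ = final training loss
    print(layers, "epochs:", net.n_iter_, "loss:", round(net.loss_, 6))
```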

The newly introduced concept of approximating wide-range functions through logarithmic or segmented values makes it possible to use neural networks as approximators in the special case where the modelled function takes values spanning several decades.
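
A minimal sketch of the logarithmic variant (the target function and network settings below are assumptions; the segmented variant is not shown): train on $\log_{10}$ of the target values and invert the transform after prediction.

```python
# Sketch: fit a target spanning six decades by modelling log10(y)
# instead of y, then mapping predictions back to the original scale.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(500, 1))
y = (10.0 ** (6 * X)).ravel()  # assumed target, values from 1 to 1e6

net = MLPRegressor(hidden_layer_sizes=(32,), activation="tanh",
                   max_iter=5000, random_state=0)
net.fit(X, np.log10(y))            # train in log space
y_pred = 10.0 ** net.predict(X)    # invert the transform
print("max relative error:", np.max(np.abs(y_pred - y) / y))
```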

Training stability analysis is a tool for assessing the diversity of neural network training outcomes: it shows the range of results the training process can produce and thereby provides a basis for further optimization.
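
One plausible form such an analysis can take (an assumption on our part, since the procedure is only summarized here) is retraining the same architecture from many random initializations and examining the spread of final training losses:

```python
# Sketch of a training stability check: repeat training with different
# random seeds and summarize the spread of the resulting losses.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.uniform(-1.0, 1.0, size=(300, 1))
y = np.sin(2 * np.pi * X).ravel()  # assumed target

losses = []
for seed in range(20):  # 20 independent training runs
    net = MLPRegressor(hidden_layer_sizes=(16,), activation="tanh",
                       max_iter=3000, random_state=seed)
    net.fit(X, y)
    losses.append(net.loss_)

losses = np.array(losses)
# A wide spread signals unstable training; outliers hint at poor minima.
print(f"mean={losses.mean():.2e}  std={losses.std():.2e}  "
      f"worst={losses.max():.2e}")
```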

The important conclusions from the presented example are, first, that the nature of the modelled problem dictates which neural network configuration yields the most appropriate approximation, and, second, that a separate training performance analysis should be carried out for each data set to be modelled.
