**1. Introduction**

Neural networks are mainly used for two tasks. The first, and most commonly mentioned, is pattern recognition; the second is generating an approximation to a function, usually referred to as modelling.

In the pattern recognition task, the data are assigned to one of a number of given classes. Static modelling by neural networks is dedicated to those systems that can be probed by a series of reasonably reproducible measurements. Another important circumstance that justifies the use of neural networks is the absence of a suitable mathematical description of the modelled problem.

Neural networks are model-less approximators: they are capable of modelling without any knowledge of the nature of the modelled system. For classical approximation techniques, by contrast, it is often necessary to know the basic mathematical model of the approximated problem. Least-squares approximation (regression modelling), for example, searches for the best fit of the given data to a known function that represents the model.
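The contrast can be made concrete with a small sketch. A least-squares fit presupposes a known model form – here a straight line *y = a + bx* – and only tunes its parameters; the data values below are invented purely for illustration:

```python
# Least-squares fit of measured data to a KNOWN model form, y = a + b*x.
# The model structure must be chosen in advance, which is exactly the
# knowledge a model-less neural network approximator does not require.
# The data points below are invented for illustration.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]

n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))

# Closed-form solution of the normal equations for slope b and intercept a.
b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a = (sy - b * sx) / n
print(a, b)  # roughly a = 1.04, b = 1.99
```

If the chosen model form is wrong (say, the data are actually exponential), the fit fails no matter how the parameters are tuned; a neural network sidesteps this choice by learning the shape from the data themselves.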

Neural networks can be divided into dynamic and static (feedforward) neural networks, where the term dynamic means that the network permanently adapts its functionality (i.e., it learns during operation). Static neural networks adapt their properties in the so-called learning or training process. Once adequately trained, the properties of the built model remain unchanged – static.

Neural networks can be trained either on already known examples, in which case the training is said to be supervised, or without knowing anything about the outcomes in the training set, in which case the training is unsupervised.

In this chapter we focus strictly on static (feedforward) neural networks with a supervised training scheme.
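As a minimal sketch of such a supervised scheme, a single linear neuron can learn from already known (input, target) examples using the delta rule. The target function, learning rate and grid of training points below are assumptions made for this illustration only:

```python
# Supervised training sketch: a single linear neuron adapts its weights
# from known (input, target) examples with the delta rule.
# The target function 2*x1 + x2, the learning rate and the grid of
# training points are all assumptions made for this illustration only.
examples = [((x1, x2), 2.0 * x1 + x2)
            for x1 in (0.0, 0.5, 1.0)
            for x2 in (0.0, 0.5, 1.0)]

w1, w2, b = 0.0, 0.0, 0.0  # synaptic weights and bias, started at zero
rate = 0.1                 # learning rate

for epoch in range(5000):
    for (x1, x2), target in examples:
        y = w1 * x1 + w2 * x2 + b   # forward pass through the neuron
        err = target - y            # supervised error: desired minus actual
        w1 += rate * err * x1       # delta-rule updates pull the weights
        w2 += rate * err * x2       # toward the known examples
        b += rate * err
```

After training, the weights settle near 2 and 1 with a bias near zero, i.e. the neuron has reproduced the example-generating function; in unsupervised training no such target values would be available to form the error signal.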

An important question is which problems are best approached by implementing neural networks as approximators. The most important property of neural networks is their ability to learn the model from the presented data. When the neural network builds the model, the dependences among the parameters are included in it. It is important to know that neural networks are not a good choice when research into the underlying mechanisms and parameter interdependencies of the system is being undertaken: in such cases, neural networks can provide almost no additional knowledge.


Under the formal concept of a static system we can also imply a somewhat narrower definition than that described in (1). Here the system input–output relationship does not include the time component (2):

*Ym (Xn) = f(Xn, Pu)* (2)

Although this kind of representation does not seem practical, it addresses a very large group of practical problems in which the nonlinear characteristic of a modelled system is corrected and accounted for (various calibrations and re-calibrations of measurement systems).

Another understanding of static modelling refers to the relative speed (time constant) of the system compared to the model. Such is the case when the model formed by the neural network (or any other modelling technique) runs many times faster than the original process which is corrected by the model¹. We refer to static modelling when the relation (3) holds true:

*τm << τs* (3)

where *τm* represents the time constant of the model and *τs* represents the time constant of the observed system. Due to the large difference in the time constants, the operation of the model can be regarded as instantaneous.

The main reason to introduce neural networks into static modelling is that we often do not know the function *f* (1, 2) analytically, but we do have the chance to perform direct or indirect measurements of the system performance. The measured points are the entry point for the neural network, which builds the model through the process of learning.

The first sub-chapter starts with an introduction to the terminology used for neural networks. The terminology is essential for an adequate understanding of the further reading.

The section entitled "Some critical aspects" summarizes the basic understanding of the topic and shows some of the errors in formulation that are so often made.

Users of neural network tools should be aware of the problems posed by the input and output limitations. These limitations are often the cause of bad modelling results. A detailed analysis of the neural network input and output considerations, and of the errors that these procedures may produce, is given.

In practice, neural network modelling of systems that operate over a wide range of values represents a serious problem. Two methods are proposed for the approximation of wide-range functions.

The very important topic of training stability follows. It defines the magnitude of the diversity detected during network training, and the results are to be studied carefully in the course of any serious data modelling attempt.

At the end of the chapter, the general design steps for a specific neural network modelling task are given.

¹ Measurement systems usually operate indirectly (for example, a vacuum gauge): the observed value of the system (its output) is deduced from measurements of different parameters. Such is the case with the measurement of the cathode current in an inverted magnetron. The current depends nonlinearly on the pressure in the vacuum system, and the dependence of the current on the pressure is not known analytically – at least not well enough – to use the analytical expression directly. This makes it an ideal ground for using a neural network to build an adequate model.

**3. The terminology**

The basic building element of any neural network is an **artificial neural network** cell (Fig. 2, left).

Fig. 2. The artificial neural network cell (left) and the general neural network system (right). In the cell, the inputs *x1, x2, x3, …, xn* are multiplied by the synaptic weights *wk1, wk2, wk3, …, wkn* and combined with the bias input *wkb* at the summing junction; the activation function *φ*(.) then produces the output *yk*. The general system (right) passes the inputs through an input layer, one or more hidden layers, and an output layer that delivers the outputs.
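The cell of Fig. 2 (left) can be sketched in a few lines: the inputs are weighted, summed with the bias at the summing junction, and passed through an activation function. The sigmoid activation and all numeric values here are illustrative assumptions, not prescribed by the chapter:

```python
import math

# One artificial neural network cell, as in Fig. 2 (left): the inputs are
# multiplied by their synaptic weights, the bias input is added at the
# summing junction, and the activation function produces the output yk.
# The sigmoid activation and all numeric values are illustrative choices.
def neuron_output(inputs, weights, bias):
    v = sum(w * x for w, x in zip(weights, inputs)) + bias  # summing junction
    return 1.0 / (1.0 + math.exp(-v))                       # activation phi(v)

yk = neuron_output([0.5, -1.0, 0.25], [0.4, 0.3, 0.2], bias=0.1)
```

A full feedforward network, as in Fig. 2 (right), is obtained by arranging such cells in layers and feeding the outputs of one layer as the inputs of the next.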
