**2.2 Types of RNN according to NN structure**

94 Recurrent Neural Networks and Soft Computing

Fig. 3. One example RNN presented as (a) a two-layer neural network with delays and recurrent connections; and (b) as a FRNN with removed connections.

**2.1.3 Partially recurrent networks (PRN)**

The output vector *y*(*n*) of the FRNN consists of the first *L* elements of the state vector *x*(*n*), as was shown in Fig. 2. The output signals are therefore a subset of the state signals. In a general state-space description this is certainly not the case: the output is determined by a separate calculation (the output equation), which is some function of the external input and the state. To obtain a network that effectively has separate state and output units (analogous to a state-space system that has separate process and output equations), the feedback connections from all *L* output neurons *y*<sub>*i*</sub>(*n*) with *i* = 1, …, *L* are removed. An example of the resulting *partially recurrent neural network* (PRN) [Robinson et al., 1991], also named the *simplified recurrent neural network* [Janssen, 1998], is shown in Fig. 4. The name partially recurrent neural network will be used in this report to avoid confusion with the terms simple/simplified recurrent networks in the next subsection.
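The distinction can be written out as a sketch, using generic state-space symbols (*x* for the state, *u* for the external input, *y* for the output; the symbol names are conventional choices, not fixed by the text):

```latex
% General state-space system: separate process and output equations
x(n+1) = f\bigl(x(n),\, u(n)\bigr), \qquad y(n) = g\bigl(x(n),\, u(n)\bigr)

% FRNN special case: the output equation reduces to a selection of
% state elements, so there is no separate output calculation
y_i(n) = x_i(n), \qquad i = 1, \dots, L
```

The PRN restores a separate output calculation by cutting the feedback from these *L* output units, so that no state update depends on them.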

Fig. 4. Example of a partially recurrent neural network
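To make the removed-feedback idea concrete, here is a minimal pure-Python sketch. The single-layer update x(n+1) = tanh(W x(n) + V u(n)) and all sizes are illustrative assumptions, not the report's network; the point is that cutting the feedback from the *L* output neurons amounts to zeroing the columns of the recurrent weight matrix that multiply the output part of the state.

```python
import math
import random

random.seed(0)

# Illustrative sizes (assumptions): N state units, the first L of which
# are the output units, and M external inputs.
N, L, M = 6, 2, 3

def rand_matrix(rows, cols):
    return [[random.uniform(-1.0, 1.0) for _ in range(cols)] for _ in range(rows)]

W = rand_matrix(N, N)   # recurrent (state -> state) weights
V = rand_matrix(N, M)   # input (external input -> state) weights

def step(x, u, W, V):
    """One update: x_i(n+1) = tanh(sum_j W[i][j] x_j(n) + sum_k V[i][k] u_k(n))."""
    return [math.tanh(sum(W[i][j] * x[j] for j in range(N)) +
                      sum(V[i][k] * u[k] for k in range(M)))
            for i in range(N)]

# PRN: remove the feedback connections *from* the L output neurons,
# i.e. zero the first L columns of W (no state unit listens to the outputs).
W_prn = [[0.0 if j < L else W[i][j] for j in range(N)] for i in range(N)]

u = [0.5, -1.0, 0.25]
x0 = [0.0] * N
x1_frnn = step(x0, u, W, V)
x1_prn  = step(x0, u, W_prn, V)      # identical: the initial state is zero
x2_frnn = step(x1_frnn, u, W, V)
x2_prn  = step(x1_prn, u, W_prn, V)  # now differ: outputs no longer feed back
y = x1_frnn[:L]                      # FRNN output = first L state elements
```

With a zero initial state the first step of the two networks coincides; from the second step on, the removed feedback changes the trajectory.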

Some neural network architectures are best described as modular architectures. The definition of a modular architecture as used in this report is: an architecture that consists of several static neural networks that are interconnected in a specific way. In most cases there is no clear boundary between a modular network and a single neural network, because the total modular architecture can be viewed as a single neural network, and some existing single networks can also be described as modular networks. It is rather a convenient way of describing complex neural networks [Paine & Tani, 2004].

In this section the category of modular *recurrent* neural network architectures is examined: modular architectures that all have one or more internal feedback connections. These architectures were not introduced in previous sections, because they do not fit well in the state-space system description or the NARX description.
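As an illustration of the definition, a minimal sketch (the module sizes, the tanh layers, and the names `state_net`/`output_net` are hypothetical, not from the text): two static, memoryless networks are interconnected, and a delayed feedback connection carries the first module's output back to its own input, which is what makes the combined modular architecture recurrent.

```python
import math
import random

random.seed(1)

def make_static_net(n_in, n_out):
    """A static (feedforward, memoryless) one-layer module: out = tanh(A @ in)."""
    A = [[random.uniform(-1.0, 1.0) for _ in range(n_in)] for _ in range(n_out)]
    return lambda v: [math.tanh(sum(a * x for a, x in zip(row, v))) for row in A]

M, N, L = 3, 4, 2   # external inputs, internal (state) signals, outputs

state_net  = make_static_net(M + N, N)  # module 1: [u(n), x(n)] -> x(n+1)
output_net = make_static_net(N, L)      # module 2: x(n) -> y(n)

def run(u_seq, x):
    """Drive the modular network; the loop is the internal feedback connection."""
    ys = []
    for u in u_seq:
        x = state_net(u + x)      # delayed state re-enters module 1
        ys.append(output_net(x))
    return ys, x

ys, x_final = run([[1.0, 0.0, -0.5], [0.2, 0.3, 0.1]], [0.0] * N)
```

Each module on its own is static; only the interconnection with the delayed signal introduces dynamics.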

Formally they can be described as a state-space system (like any dynamic system), but this would result in a very complicated and unnecessarily large state-space description. In this section three classes of modular recurrent neural network architectures are presented:

