**2.2.1 Recurrent multi-layer perceptron (RMLP)**

The first model (RMLP) is a rather specific one and is included as an example of a modular architecture. Many more such architectures have been proposed in the literature and cannot all be listed here; another example is the Pipelined Recurrent Neural Network described in [Haykin, 1998] and applied to speech prediction in [Baltersee et al., 1998]. The second model is far more general and was meant to provide a structured way to describe a large class of recurrent neural networks and their training algorithms. The third model attempts the same and turns out to be the most general one: it incorporates the first two as special cases, so this section will focus mainly on the third model, the general modular network framework. An extension of the regular MLP has been proposed by Puskorius et al. (see [Haykin, 1998]) which adds self-feedback connections to each layer of the standard MLP. The resulting Recurrent Multilayer Perceptron (RMLP) structure with N layers is shown in Fig. 5.

Fig. 5. Recurrent multi-layer perceptron (RMLP) architecture with N layers

Each layer is a standard MLP layer. The layer outputs are fed forward to the inputs of the next layer, and the delayed layer outputs are fed back into the layer itself, so the layer output at time *n*-1 acts as that layer's state variable at time *n*. The global state of the network consists of all layer states *x_i*(*n*) together. Effectively, this type of network can combine a very large total state vector with a relatively small number of parameters, because the neurons in the network are not fully interconnected: there are no recurrent interconnections across layers, and all recurrent connections are local (layer-to-itself) (Sit, 2005).
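To make the layer-local recurrence concrete, here is a minimal sketch, assuming tanh activations, NumPy, and small random weights; the class names and layer sizes are illustrative and not taken from the text.

```python
import numpy as np

class RMLPLayer:
    """One RMLP layer: a standard MLP layer with local self-feedback.
    The layer's own delayed output y(n-1) acts as its state at time n."""
    def __init__(self, n_in, n_out, rng):
        # W_in maps the feedforward input; W_rec maps the layer's own
        # previous output (the only recurrent connection, layer-to-itself).
        self.W_in = rng.normal(0, 0.1, (n_out, n_in))
        self.W_rec = rng.normal(0, 0.1, (n_out, n_out))
        self.b = np.zeros(n_out)
        self.state = np.zeros(n_out)  # y(n-1), initially zero

    def step(self, x):
        y = np.tanh(self.W_in @ x + self.W_rec @ self.state + self.b)
        self.state = y  # becomes the delayed output for the next step
        return y

class RMLP:
    """N layers: outputs feed forward, recurrence stays within each layer,
    so the global state is the concatenation of all layer states."""
    def __init__(self, sizes, rng):
        self.layers = [RMLPLayer(a, b, rng)
                       for a, b in zip(sizes[:-1], sizes[1:])]

    def step(self, x):
        for layer in self.layers:
            x = layer.step(x)
        return x

rng = np.random.default_rng(0)
net = RMLP([3, 5, 2], rng)            # hypothetical sizes for illustration
for n in range(4):
    y = net.step(rng.normal(size=3))  # one output per time step
```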

**2.2.2 Block feedback networks (BFN)**

Each single-layer block of a BFN computes the function:

*o*(*n*) = *F*(*A* · *i*(*n*)) (1)

Single-layer blocks can be connected together using the four elementary connections shown in Fig. 7, called the cascade, the sum, the split and the feedback connection. Each of these connections consists of one or two embedded BFN blocks (called N1 and N2 in the figure) and one connection layer, which has the structure of a single-layer block. This connection layer consists of the weight matrices *A* and *B* and the vector function *F*(.). Each of the four elementary connections is itself defined as a block and can therefore be used as the embedded block of yet another elementary connection.

Fig. 7. The four elementary BFN connections: the cascade (a), feedback (b), split (c) and sum (d) connection.
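The following is a minimal sketch of equation (1) and two of the elementary connections, assuming F = tanh and a unit delay in the feedback path. Because Fig. 7 is not reproduced here, the exact wiring of the feedback connection (input and delayed output combined through A and B in the connection layer) is an assumption, as are all names and shapes.

```python
import numpy as np

class SingleLayerBlock:
    """A single-layer BFN block computing o(n) = F(A i(n)), with F = tanh."""
    def __init__(self, A):
        self.A = np.asarray(A)

    def step(self, i_n):
        return np.tanh(self.A @ i_n)

class Cascade:
    """Cascade connection: embedded block N1 feeds embedded block N2."""
    def __init__(self, N1, N2):
        self.N1, self.N2 = N1, N2

    def step(self, i_n):
        return self.N2.step(self.N1.step(i_n))

class Feedback:
    """Feedback connection (assumed wiring): the connection layer combines
    the external input (through A) with the delayed block output (through B)
    and applies F; the result drives the embedded block N1."""
    def __init__(self, N1, A, B):
        self.N1 = N1
        self.A, self.B = np.asarray(A), np.asarray(B)
        self.prev = np.zeros(self.B.shape[1])  # delayed output, initially 0

    def step(self, i_n):
        o = self.N1.step(np.tanh(self.A @ i_n + self.B @ self.prev))
        self.prev = o
        return o

# Every connection is itself a block (it has .step), so it can be used
# as the embedded block of yet another elementary connection:
rng = np.random.default_rng(1)
inner = SingleLayerBlock(rng.normal(0, 0.1, (4, 3)))
fb = Feedback(inner, A=rng.normal(0, 0.1, (3, 2)),
              B=rng.normal(0, 0.1, (3, 4)))
net = Cascade(fb, SingleLayerBlock(rng.normal(0, 0.1, (2, 4))))
y = net.step(np.ones(2))  # one time step; y has dimension 2
```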

**2.2.3 General modular neural network framework** 

In [Bengio, 1996] a general modular network framework is introduced. This framework is similar to the BFN framework: it can be used to describe many different *modular networks*, which are built of neural network modules linked together. Each module is a static feedforward neural network, and the links between the modules can incorporate delay elements. The purpose of this framework is to describe such networks in a structured way and to use that description for developing a training algorithm that can do *joint training* of the individual network modules (Dijk O. Esko, 1999).
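As a rough illustration of the idea, not the formalism of [Bengio, 1996], the sketch below links two static feedforward modules through a connection containing a unit delay element; the names (Module, DelayLink) and all shapes are invented for the example.

```python
import numpy as np

class Module:
    """A static feedforward module; one dense tanh layer stands in for an
    arbitrary feedforward subnetwork."""
    def __init__(self, W):
        self.W = np.asarray(W)

    def __call__(self, x):
        return np.tanh(self.W @ x)

class DelayLink:
    """A link between modules containing a unit delay (z^-1)."""
    def __init__(self, dim):
        self.buf = np.zeros(dim)

    def read(self):
        return self.buf  # value written one time step ago

    def write(self, x):
        self.buf = np.asarray(x)

rng = np.random.default_rng(2)
m1 = Module(rng.normal(0, 0.1, (4, 5)))  # input: [u(n), delayed y(n-1)]
m2 = Module(rng.normal(0, 0.1, (2, 4)))
link = DelayLink(2)                      # feeds m2's output back to m1

def step(u):
    h = m1(np.concatenate([u, link.read()]))  # static modules, with
    y = m2(h)                                 # recurrence only through
    link.write(y)                             # the delayed link
    return y

for n in range(3):
    y = step(np.ones(3))
```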
