**2.2.3 General modular neural network framework**

In [Bengio, 1996] a general modular network framework is introduced. This framework is similar to the BFN framework: it can be used to describe many different *modular networks*, which are built of neural network modules linked together. Each module is a static feedforward neural network, and the links between the modules can incorporate delay elements. The purpose of this framework is to support the development of a training algorithm that can perform *joint training* of the individual network modules (Dijk O. Esko, 1999).
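As a rough illustration of what *joint training* of linked modules could look like, the Python sketch below chains two small feedforward modules through a one-step delay element and updates the parameters of both with a single optimizer. The module sizes, the toy sequences, and the use of PyTorch are illustrative assumptions, not part of the framework in [Bengio, 1996].

```python
# Hypothetical sketch: two static feedforward modules linked by a one-step
# delay element and trained jointly (one optimizer over both modules).
import torch
import torch.nn as nn

class ModuleA(nn.Module):                       # first feedforward module
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, 8), nn.Tanh(), nn.Linear(8, 4))
    def forward(self, x):
        return self.net(x)

class ModuleB(nn.Module):                       # second feedforward module
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(4, 8), nn.Tanh(), nn.Linear(8, 2))
    def forward(self, x):
        return self.net(x)

mod_a, mod_b = ModuleA(), ModuleB()
# a single optimizer over the parameters of BOTH modules = joint training
opt = torch.optim.SGD(list(mod_a.parameters()) + list(mod_b.parameters()), lr=0.01)

T = 20
inputs  = torch.randn(T, 3)                     # toy input sequence
targets = torch.randn(T, 2)                     # toy target sequence
delayed = torch.zeros(4)                        # delay element on the link A -> B

loss = 0.0
for t in range(T):
    a_out = mod_a(inputs[t])
    b_out = mod_b(delayed)                      # B sees A's output of the previous step
    delayed = a_out                             # update the delay element
    loss = loss + ((b_out - targets[t]) ** 2).mean()

opt.zero_grad()
loss.backward()                                 # gradients flow through both modules
opt.step()
```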


Another difference is that Jordan used the network in such a way that a constant input vector is given to the net, and the output of the net then produces a temporal sequence of vectors. The variation in time is produced by the two types of recurrent connections, namely those from the output layer to the state layer and those within the state layer. For each input vector, a different temporal sequence can be produced. However, as the Elman net and the Jordan net are quite similar, each can be used for both purposes. Both types of networks have the advantage that only weights in the forward connections are modifiable, and therefore no special training methods for recurrent nets have to be introduced.
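The generation of a temporal sequence from a single constant input can be sketched as follows; the layer sizes, the random weights, the logistic activation, and the decay factor on the state layer's self-connections are assumptions made purely for illustration.

```python
# Minimal sketch of a Jordan-style network driven by one constant input vector.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_state, n_hidden, n_out = 3, 2, 5, 2
W_ih = rng.normal(scale=0.5, size=(n_hidden, n_in))     # input  -> hidden
W_sh = rng.normal(scale=0.5, size=(n_hidden, n_state))  # state  -> hidden
W_ho = rng.normal(scale=0.5, size=(n_out, n_hidden))    # hidden -> output
alpha = 0.5                                             # self-connections within the state layer

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([1.0, 0.5, -0.3])                          # constant input vector
state = np.zeros(n_state)                               # state (context) layer

sequence = []
for t in range(10):
    hidden = logistic(W_ih @ x + W_sh @ state)
    output = logistic(W_ho @ hidden)
    state = alpha * state + output                      # output -> state recurrence
    sequence.append(output)

print(np.array(sequence))                               # one temporal sequence of vectors
```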

**3.2 Echo state networks**

When we want to construct a recurrent neural network that shows a complex but given dynamic behavior, backpropagation through time can be applied as described above. A much simpler solution, however, is to use echo state networks (Jaeger and Haas, 2004). An echo state network consists of two parts: a recurrent network with fixed weights, called the dynamic reservoir, and output units that are connected to the neurons of the dynamic reservoir. Only one output unit is depicted in Fig. 9 for simplicity.

Fig. 9. Echo state network.

The dynamic reservoir consists of recurrently, usually sparsely, connected units with logistic activation functions. The randomly selected connection strengths have to be small enough to avoid growing oscillations (this is guaranteed by using a weight matrix whose largest eigenvalue is smaller than 1 in absolute value). To test this property, the dynamic reservoir can be excited by an impulse-like input to some of the units; it may then perform complex dynamics, which, however, should decay to zero with time. These dynamics are exploited by the output units, which are again randomly and recurrently connected to the units of the dynamic reservoir. Only those weights that determine the connections from the dynamic reservoir to the output units are learnt. All other weights are specified at the beginning and then held fixed (Hafizah et al., 2008).
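A hedged numpy sketch of this procedure is given below: a sparse random reservoir is rescaled so that its largest eigenvalue is below 1 in magnitude, the reservoir states are collected while a toy input signal is fed in, and only the reservoir-to-output weights are fitted by linear least squares. The reservoir size, sparsity, and prediction task are assumptions; for simplicity the sketch uses tanh units instead of logistic ones and omits the feedback connections from the output units back into the reservoir.

```python
# Echo state network sketch: fixed sparse reservoir, trained linear readout only.
import numpy as np

rng = np.random.default_rng(1)
n_res, n_in = 100, 1

# sparse random reservoir weights, rescaled so the largest |eigenvalue| < 1
W = rng.normal(size=(n_res, n_res)) * (rng.random((n_res, n_res)) < 0.1)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))        # fixed input weights

T = 500
u = np.sin(np.arange(T) / 10.0).reshape(-1, 1)           # toy input signal
y_target = np.roll(u, -1)                                # task: predict the next value

# run the fixed reservoir and record its states
states = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W @ x + W_in @ u[t])
    states[t] = x

# only the reservoir -> output weights are learnt; everything else stays fixed
W_out, *_ = np.linalg.lstsq(states[:-1], y_target[:-1], rcond=None)
y_pred = states[:-1] @ W_out                             # readout of the trained output unit
print("training MSE:", np.mean((y_pred - y_target[:-1]) ** 2))
```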







