98 Recurrent Neural Networks and Soft Computing

**3. Some special recurrent networks**

There are many special recurrent networks, which differ according to their purpose of application or their structure. Some of these types are explained in the following.

**3.1 Elman nets and Jordan nets**

As was mentioned earlier, recurrent networks represent the most general form of a network. However, there is as yet no general theoretical framework that describes the properties of recurrent nets. Therefore, several networks with specifically defined structures are discussed below. In the first example, most layers have only feedforward connections, and only one contains specified recurrent connections. This example was given by Elman (1990) (Fig. 8.**a**). The system has an input layer, a hidden layer, and an output layer, all of which are connected in a feedforward manner. The hidden layer, however, is not only connected to the output layer but also, via a simple 1:1 connection, to a further layer called the context layer. To form recurrent connections, the output of this context layer is fed back into the hidden layer. Except for these 1:1 connections from hidden to context layer, whose weights are fixed to 1, all other layers may be fully connected and all weights may be modifiable. The recurrent connections of the context layer provide the system with a short-term memory: the hidden units observe not only the current input but, via the context layer, also information on their own state at the previous time step. Since, at a given time step, the hidden units have already been influenced by inputs at earlier time steps, this recurrency constitutes a memory that depends on earlier states, though their influence decays with time (Cruse Holk, 2006).

Fig. 8. Two networks with partial recurrent connections. (a) Elman net. (b) Jordan net. Only the weights in the feedforward channels can be modified.

Recurrent networks are particularly interesting in relation to motor control. For this purpose, Jordan (1986) proposed a very similar net (Fig. 8.**b**). One difference is that the recurrent connections start from the output layer rather than from the hidden layer. Furthermore, the layer corresponding to the context layer, here called the state layer, comprises a recurrent net itself with 1:1 connections and fixed weights.
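The structure described above can be sketched as a single forward step of an Elman net. This is a minimal illustration with made-up layer sizes and random weights, not a trained model: the context layer is a 1:1 copy (weight 1, fixed) of the hidden layer's previous activation, and it feeds back into the hidden layer through trainable weights.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 3, 5, 2  # hypothetical layer sizes

W_ih = rng.normal(size=(n_hidden, n_in))      # input  -> hidden (trainable)
W_ch = rng.normal(size=(n_hidden, n_hidden))  # context -> hidden (trainable)
W_ho = rng.normal(size=(n_out, n_hidden))     # hidden -> output (trainable)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def step(x, context):
    # Hidden units see the current input AND their own state at the
    # previous time step, delivered via the context layer.
    hidden = sigmoid(W_ih @ x + W_ch @ context)
    output = sigmoid(W_ho @ hidden)
    # The new context is a plain copy of the hidden layer
    # (the fixed 1:1 connections with weight 1).
    return output, hidden

context = np.zeros(n_hidden)  # context is empty at the first time step
x = rng.normal(size=n_in)
y, context = step(x, context)
```

Because each context vector was itself computed from an earlier context, repeated calls to `step` give the hidden layer a decaying memory of the whole input history, as described above.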

**3.2 Echo state networks**

When we want to construct a recurrent neural network that shows a complex but given dynamic behavior, backpropagation through time can be applied as described above. A much simpler solution, however, is to use echo state networks (Jaeger & Haas, 2004). An echo state network consists of two parts: a recurrent network with fixed weights, called the dynamic reservoir, and output units that are connected to the neurons of the dynamic reservoir. Only one output unit is depicted in Fig. 9 for simplicity.

Fig. 9. Echo state network.

The dynamic reservoir consists of recurrently, usually sparsely, connected units with logistic activation functions. The randomly selected connection strengths have to be small enough to avoid growing oscillations (this is guaranteed by using a weight matrix whose largest eigenvalue has a magnitude smaller than 1). Given this property, the dynamic reservoir, after being excited by an impulse-like input to some of its units, may perform complex dynamics which, however, decay to zero with time. These dynamics are exploited by the output units, which are again randomly and recurrently connected to the units of the dynamic reservoir. Only the weights of the connections from the dynamic reservoir to the output units are learnt; all other weights are specified at the beginning and then held fixed (Hafizah et al., 2008).
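The training scheme can be sketched as follows. This is a minimal example with made-up sizes, input, and target signal; tanh is used in place of the logistic function so that reservoir activity decays toward zero, and the readout is fitted by ordinary least squares. Only `w_out` is learnt; the reservoir matrix `W` and the input weights stay fixed.

```python
import numpy as np

rng = np.random.default_rng(1)
n_res, n_steps = 50, 300  # hypothetical reservoir size and run length

# Sparse random reservoir, rescaled so the largest eigenvalue magnitude
# (spectral radius) is 0.9 < 1, which prevents growing oscillations.
W = rng.normal(size=(n_res, n_res)) * (rng.random((n_res, n_res)) < 0.1)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.uniform(-0.5, 0.5, size=n_res)      # fixed input weights

u = np.sin(np.arange(n_steps) * 0.2)           # made-up input signal
target = np.sin(np.arange(n_steps) * 0.2 + 0.5)  # made-up teacher signal

# Drive the reservoir and record its states.
x = np.zeros(n_res)
states = []
for t in range(n_steps):
    x = np.tanh(W @ x + w_in * u[t])
    states.append(x.copy())
states = np.array(states)

# Learn ONLY the reservoir-to-output weights, by least squares,
# discarding an initial washout period before the echoes settle.
washout = 50
w_out, *_ = np.linalg.lstsq(states[washout:], target[washout:], rcond=None)
pred = states @ w_out
```

Because the readout is linear and everything else is fixed, training reduces to a single regression problem, which is what makes echo state networks so much simpler than backpropagation through time.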

Recurrent Neural Network with Human Simulator Based Virtual Reality 101

Fig. 10. An arm consisting of three segments described by the vectors L1, L2, and L3, which are connected by three planar joints.

Recurrent neural networks like Jordan nets or MMC nets can be used for the control of behavior, i.e. their output can directly control the actuators. Several models have been proposed to control the movement of such a redundant (or non-redundant) arm. One model corresponds to the schema shown in Fig. 11.**a**, where DK and IK may represent feedforward (e.g., three-layer) networks computing the direct (DK) and inverse (IK) kinematics solutions, respectively. (In the redundant case, IK has to represent one particular solution) (Liu Meiqin, 2006).

Fig. 11. A two-joint arm (a) and a three-joint arm (b) moving in a two-dimensional (x-y) plane. When the tip of the arm has to follow a given trajectory (dotted arrow), in the redundant case (b) an infinite number of joint angle combinations can solve the problem for any given position of the tip.
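The direct kinematics (DK) of the planar arm of Fig. 10 can be written down explicitly: the tip position is the sum of the segment vectors, with joint angles measured relative to the previous segment so that orientations accumulate. The sketch below uses made-up segment lengths and angles; it also shows redundancy concretely by flipping the "elbow" of the first two joints while correcting the third angle, which reaches the same tip with a different joint configuration.

```python
import numpy as np

def tip_position(angles, lengths=(1.0, 1.0, 1.0)):
    """Direct kinematics of a planar arm: sum of segment vectors."""
    orientation = 0.0
    tip = np.zeros(2)
    for theta, L in zip(angles, lengths):
        orientation += theta  # relative joint angles accumulate
        tip += L * np.array([np.cos(orientation), np.sin(orientation)])
    return tip

# Two different joint-angle combinations for equal segment lengths:
# the second flips the elbow formed by the first two segments
# ([a, b, c] -> [a + b, -b, b + c]) and reaches the same tip.
p1 = tip_position([0.3, 0.4, 0.2])
p2 = tip_position([0.7, -0.4, 0.6])
```

This is why, in the redundant case, an inverse-kinematics network IK has to commit to one particular solution: the mapping from tip position back to joint angles is not unique.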

**3.5 Forward models and inverse models** 
