**5. Conclusion**

84 Recurrent Neural Networks and Soft Computing

The time evolution response of the Neuro-Fuzzy Digital Filter was faster than the reference process state-change time, proposed with a value of 0.08 sec, and is bounded by the processor.

In the identification sense, the neural net adaptation process requires adjusting the weights dynamically, commonly using a proportional rule. In many cases, however, such applications exhibit convergence problems, because a fixed gain drives the neural net weights up or down without converging to the desired value. In the traditional black-box scheme the internal weights are assumed known, but in real conditions this is impossible, and only a desired (objective) answer is available. Neural nets avoid explicit weight estimation by adjusting their internal weights dynamically, and the adaptation process requires smooth movements as a function of the identification error (the difference between the filter answer and the desired answer). Fuzzy logic was considered as an option, since its actions, based on the error distribution function, allow building adjustable membership functions and mobile inference limits. Therefore, the neural weights are adjusted dynamically using the adaptive properties of fuzzy logic applied in the law actions, shown in Figure 7. Stable weight conditions were given in Section 3, with movements bounded as in (8). In the results section, Figure 13 illustrated the advantages of the Neuro-Fuzzy Digital Filter: without losing stability, it approximates the desired system answer in the distribution sense, observing the Hausdorff condition considered in (10). The convergence time is 0.0862 sec, as described in (Medel, 2008).

Figure 14 shows the evolution of the functional error described by (5).

Fig. 14. Functional error considered in (5).
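The fuzzy-scheduled weight adaptation described above can be sketched as follows. This is a hypothetical illustration, not the chapter's exact adaptation law: the names `fuzzy_gain`, `eta_small`, `eta_large`, and the reference gain `0.7` are all assumptions. The learning gain is blended from two membership functions of the identification error, so the update stays bounded where a fixed proportional gain could drive the weight past the desired value.

```python
def fuzzy_gain(err, eta_small=0.9, eta_large=0.1, span=0.5):
    """Sugeno-style blend of two gains by fuzzy membership of |error|:
    large errors get a damped gain (bounded, smooth movements), small
    errors get a strong corrective gain near convergence."""
    e = min(abs(err) / span, 1.0)
    mu_small = 1.0 - e          # membership of "error is small"
    mu_large = e                # membership of "error is large"
    return mu_small * eta_small + mu_large * eta_large

# Identify the gain of a hypothetical reference system y = 0.7 * x.
w_true, w = 0.7, 0.0
for _ in range(200):
    x = 1.0                     # excitation kept trivial for the sketch
    e = w_true * x - w * x      # identification error (desired minus filter)
    w += fuzzy_gain(e) * e * x  # fuzzy-scheduled gradient-style update
# w converges monotonically toward w_true, since the blended gain stays in (0, 1)
```

Because both memberships sum to one, no normalization step is needed; the contraction factor per step lies strictly between 0.1 and 0.9, which is the bounded-movement behavior the stability conditions of Section 3 require.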

**6. References**


Abraham K. (1991). *Fuzzy Expert Systems*, Ed. CRC Press, Florida, ISBN: 9780849342974



Yamakawa F. (1989). *Fuzzy Neurons and Fuzzy Neural Networks*.

Zadeh L. (1965). Fuzzy Sets, *Information and Control*, Vol. 8, pp. 338-353.

National security. (2008). Intelligence on the Brain, A New Dialogue on Neuro-Research and National Security

http://www.scienceprogress.org/2008/11/intelligence-on-the-brain/

**Part 2** 

**Recurrent Neural Network** 


**5** 

**Recurrent Neural Network with Human Simulator Based Virtual Reality** 

Yousif I. Al Mashhadany

*Al Anbar University, Engineering College, Electrical Dept., Iraq* 

**1. Introduction** 

For almost three decades, the study of the theory and applications of artificial neural networks has grown considerably, due partly to a number of significant breakthroughs in research on network types and operational characteristics, but also to distinct advances in the power of the computer hardware readily available for network implementation. In the last few years, recurrent neural networks (RNNs), which are neural networks with feedback (closed-loop) connections, have been an important focus of research and development. Examples include bidirectional associative memory (BAM), Hopfield networks, cellular neural networks (CNNs), Boltzmann machines, and recurrent back-propagation nets. Owing to their dynamics and parallel distributed nature, RNN techniques have been applied to a wide variety of problems, such as real-time system identification and control, neural computing, image processing, and so on.

RNNs are widely acknowledged as an effective tool for applications that store and process temporal sequences. Their ability to capture complex, nonlinear system dynamics has been a driving motivation for their study. RNNs can be used effectively in modeling, system identification, and adaptive control applications, to name a few, where other techniques may fall short. Most of the proposed RNN learning algorithms rely on calculating error gradients with respect to the network weights. What distinguishes recurrent neural networks from static, or feedforward, networks is the fact that these gradients are time dependent, or dynamic: the current error gradient depends not only on the current input, output, and targets, but on a possibly infinite past. How to train RNNs effectively remains a challenging and active research topic.

The learning problem consists of adjusting the parameters (weights) of the network such that its trajectories have certain specified properties. Perhaps the most common online learning algorithm proposed for RNNs is real-time recurrent learning (RTRL), which calculates the gradients at time (k) in terms of those at time instant (k-1). Once the gradients are evaluated, weight updates can be computed in a straightforward manner. The RTRL algorithm is very attractive in that it is applicable to real-time systems. However, its two main drawbacks are a large computational complexity of O(N⁴) and, even more critically, storage requirements of O(N³), where N denotes the number of neurons in the network.
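The RTRL recursion described above can be sketched for a small fully recurrent tanh network. This is a minimal sketch under assumed notation (the names `step`, `P`, `W`, and the network sizes are illustrative): the sensitivity tensor `P[k, i, j] = ∂y_k/∂W[i, j]` is carried forward from time (k-1) to time (k), which is exactly where the O(N³) storage and O(N⁴) update cost come from.

```python
import numpy as np

rng = np.random.default_rng(0)
N, I = 3, 2                    # neurons, external inputs (toy sizes)
M = N + I                      # fan-in per neuron: recurrent + external
W = 0.3 * rng.standard_normal((N, M))

def step(y, u, P):
    """One forward step of y(t+1) = tanh(W [y(t); u(t)]) plus the RTRL
    sensitivity update P'[k,i,j] = tanh'(s_k) (sum_l W[k,l] P[l,i,j]
    + delta_{ki} z_j)."""
    z = np.concatenate([y, u])            # combined recurrent/external input
    y_new = np.tanh(W @ z)
    dphi = 1.0 - y_new ** 2               # tanh'(s) at the new state
    # Recurrent part: propagate old sensitivities through recurrent weights.
    prop = np.einsum('kl,lij->kij', W[:, :N], P)
    # Direct part: weight W[i, j] feeds neuron i directly with input z_j.
    direct = np.zeros_like(P)
    direct[np.arange(N), np.arange(N), :] = z
    return y_new, dphi[:, None, None] * (prop + direct)

# Run a few steps and accumulate the gradient of a squared tracking error.
u_seq = rng.standard_normal((5, I))
d = np.full(N, 0.2)                       # arbitrary target state
y, P = np.zeros(N), np.zeros((N, N, M))   # O(N^3) storage lives in P
for u in u_seq:
    y, P = step(y, u, P)
grad = np.einsum('k,kij->ij', y - d, P)   # dE/dW for E = 0.5*||y - d||^2
```

The `einsum` contraction inside `step` touches all N×N×M sensitivity entries for each of the N neurons, which is the source of the O(N⁴) per-step cost; the weight update itself is then an ordinary gradient step on `grad`.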
