Preface

#### **Section 1: Soft Computing**

This section introduces general concepts of artificial neural networks (ANNs): their properties, modes of training, static (feedforward) and dynamic (recurrent) architectures, and the classification of training into supervised, semi-supervised and unsupervised schemes. **Prof. Belic Igor's** chapter deals with the application of ANNs in modelling, illustrating two properties of ANNs: universality and optimization. In the second chapter, **Prof. Shoukry Amin** discusses both symbolic and non-symbolic data and ways of bridging neural networks with fuzzy logic, including an application to a robot problem. **Dr. Hajjari Tayebeh** discusses fuzzy logic and various ordering-index approaches in detail, including the defuzzification method, the reference set method and the fuzzy relation method; a comparative example indicates the relative strengths of each approach. The next chapter discusses the combination of ANNs and fuzzy logic, where neural weights are adjusted dynamically, exploiting the adaptable properties of fuzzy logic.

**Dr. Medel Jeus** describes applications of artificial neural networks, both feedforward and recurrent, and ways of improving the algorithms by combining fuzzy logic and digital filters. Finally, **Dr. Ghaemi O. Kambiz** discusses using genetic-algorithm-enhanced fuzzy logic to handle two medical problems: hemodynamic control and the regulation of blood glucose.

#### **Section 2: Recurrent Neural Network**

Recurrent Neural Networks (RNNs) are, like other ANNs, abstractions of biological nervous systems, yet they differ in that their internal state is fed back recurrently into the network. This makes them applicable to adaptive robotics, speech recognition, attentive vision, music composition, handwriting recognition, etc. There are several types of RNNs, such as the fully recurrent network, Hopfield network, Elman and Jordan networks, echo state network, long short-term memory network, bi-directional RNN, continuous-time RNN, hierarchical RNN, recurrent multilayer perceptron, etc. In this section, some of these types of RNN are discussed, along with an application of each type. **Dr. Al-Mashhadany Yousif** illustrates the types of ANN, providing a detailed account of RNNs, their types and advantages, with an application to a human simulator. A time-delay recurrent neural network (TDRNN) model is then presented as a novel approach: **Prof. Ge Hongwei's** chapter demonstrates the advantages of this approach over other approaches using a graphic illustration of the results, and illustrates its usage in non-linear systems. **Dr. Othman Razib M.** describes in his chapter the use of a combined bidirectional recurrent neural network and support vector machine (BRNN-SVM) for protein domain prediction, where the BRNN predicts the secondary structure and the SVM then processes the produced data to classify the domain. The chapter is well constructed and easy to understand. **Dr. Rocha Brigida's** chapter discusses using the recurrent self-organizing map (RSOM) for weather prediction. It applies the self-organizing map (SOM), a single-layer neural network; to deal with the dynamic nature of the data, a temporal Kohonen map (TKM) is implemented to include the temporal dimension, followed by the recurrent self-organizing map (RSOM), which opens the window for model retraining. The chapter proceeds to describe data preprocessing, training and evaluation in a comparative analysis of these three approaches. **Dr. Baruch Ieroham** describes the use of different RNN approaches in an aerobic digestion bioprocess, featuring dynamic backpropagation and the Levenberg-Marquardt algorithm; the chapter provides sufficient introductory information, mathematical background, training methodology, the models built, and an analysis of the results. **Dr. Tarkov Mikhail** discusses using RNNs to solve the problem of mapping graphs of parallel programs onto the graph of a distributed computer system, as one approach to reducing overhead and optimizing solutions. Different mapping approaches are compared: the Hopfield network achieves improved mapping quality at a lower frequency, the splitting method achieves a higher frequency, and the Wang network was found to achieve a higher frequency of optimal solutions. The authors then propose nesting ring structures in "toroidal graphs with edge defect" using the "splitting method". **Dr. Zgallai Walid A.** discusses the use of RNNs in ECG classification and detection, targeting different medical cases: adult and fetal ECG as well as other ECG abnormalities. In **Prof. Micu Dan Doru's** work, electromagnetic interference problems are described, along with how they can be solved using artificial intelligence. The authors apply various artificial intelligence techniques, featuring two models: one for the evaluation of the magnetic vector potential, and the other for the determination of the self- and mutual-impedance matrix. Finally, **Dr. Ghazali Rozaida** discusses the use of the Jordan pi-sigma neural network (a recurrent network) for weather forecasting, providing sufficient background information, with training and testing against other methods, the multilayer perceptron (MLP) and the pi-sigma neural network (PSNN).

We hope this book will be of value and interest to researchers, students and practitioners in artificial intelligence, machine learning and related fields. It offers a balanced combination of theory and application: each algorithm or method is applied to a different problem and evaluated statistically using robust measures such as ROC curves and other parameters.

#### **Mahmoud ElHefnawi**

Division of Genetic Engineering and Biotechnology, National Research Centre & Biotechnology, Faculty of Science, American University in Cairo, Egypt

#### **Mohamed Mysara (MSc)**

Biomedical Informatics and Chemoinformatics Group, National Research Centre, Cairo, Egypt

**Part 1** 

**Soft Computing** 

**1** 

Igor Belič

*Institute of Metals and Technology* 

*Slovenia* 

**Neural Networks and Static Modelling** 

**1. Introduction** 

Neural networks are mainly used for two specific tasks. The first and most commonly mentioned one is pattern recognition, and the second is to generate an approximation to a function, usually referred to as modelling.

In the pattern recognition task, the data is placed into one of the sets belonging to given classes. Static modelling by neural networks is dedicated to those systems that can be probed by a series of reasonably reproducible measurements. Another quite important detail that justifies the use of neural networks is the absence of a suitable mathematical description of the modelled problem.

Neural networks are model-less approximators, meaning they are capable of modelling regardless of any knowledge of the nature of the modelled system. For classical approximation techniques, it is often necessary to know the basic mathematical model of the approximated problem. Least squares approximation (regression modelling), for example, searches for the best fit of the given data to a known function which represents the model. Neural networks can be divided into dynamic and static (feedforward) neural networks, where the term dynamic means that the network permanently adapts its functionality (i.e., it learns during operation). Static neural networks adapt their properties in the so-called learning or training process. Once adequately trained, the properties of the built model remain unchanged – static.

Neural networks can be trained either according to already known examples, in which case the training is said to be supervised, or without knowing anything about the training set outcomes, in which case the training is unsupervised.

In this chapter we focus strictly on the static (feedforward) neural networks with a supervised training scheme.

An important question is to decide which problems are best approached by the implementation of neural networks as approximators. The most important property of neural networks is their ability to learn the model from the presented data. When the neural network builds the model, the dependences among the parameters are included in the model. It is important to know that neural networks are not a good choice when research on the underlying mechanisms and interdependencies of the parameters of the system is being undertaken. In such cases, neural networks can provide almost no additional knowledge.
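The contrast between a classical least-squares fit and a model-less neural approximator can be sketched in code. The following is a minimal illustrative example, not the chapter's own implementation: the target function y = sin(x), the network size and the learning rate are all assumptions chosen for demonstration.

```python
import numpy as np

# Illustrative sketch only: a static (feedforward) network with one hidden
# layer, trained in a supervised scheme to approximate y = sin(x).
# Target function, network size and learning rate are assumed for the demo.

rng = np.random.default_rng(0)

# Training set: reproducible "measurements" of the system being modelled.
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

n_hidden = 16
W1 = rng.normal(0.0, 0.5, (1, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.5, (n_hidden, 1))
b2 = np.zeros(1)

lr = 0.05
n = len(x)
for epoch in range(3000):
    # Forward pass: tanh hidden layer, linear output.
    h = np.tanh(x @ W1 + b1)
    y_hat = h @ W2 + b2
    err = y_hat - y

    # Backward pass: gradients of the mean squared error.
    dW2 = h.T @ err * (2.0 / n)
    db2 = err.mean(axis=0) * 2.0
    dh = (err @ W2.T) * (1.0 - h ** 2)
    dW1 = x.T @ dh * (2.0 / n)
    db1 = dh.mean(axis=0) * 2.0

    # Gradient-descent update: this is the training (learning) phase.
    W1 -= lr * dW1
    b1 -= lr * db1
    W2 -= lr * dW2
    b2 -= lr * db2

# After training the weights stay fixed; the model is now static.
mse = float(((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2).mean())

# Classical counterpart: least squares requires choosing the model form
# in advance (here, a degree-5 polynomial).
coeffs = np.polyfit(x.ravel(), y.ravel(), deg=5)
poly_mse = float(((np.polyval(coeffs, x.ravel()) - y.ravel()) ** 2).mean())

print(f"network MSE: {mse:.4f}, polynomial MSE: {poly_mse:.4f}")
```

Note the difference in what each method needs: the polynomial fit required a model form chosen in advance, whereas the network learned the shape of the function from the data alone, which is the model-less property of neural approximators. Once training stops, the network's weights no longer change, so the resulting model is static in the sense used in this chapter.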
