**An Application of Jordan Pi-Sigma Neural Network for the Prediction of Temperature Time Series Signal**

**1. Introduction**

Temperature forecasts are mainly issued in qualitative terms using conventional methods, assisted by images projected from data taken by meteorological satellites to assess future trends (Paras *et al.*, 2007). Several criteria need to be considered when choosing a forecasting method, including the accuracy, the cost, and the properties of the series being forecast. Judged against these criteria, the empirical approaches that have been used for temperature forecasting are intrinsically costly and capable of providing only limited information, which is usually generalised over a large geographical area (Paras *et al.*, 2007). Besides involving sophisticated mathematical models to justify the use of empirical rules, they also require prior knowledge of the characteristics of the input time series in order to predict future events. Moreover, most temperature forecasts today carry limited information about uncertainty, and meteorologists often find it challenging to communicate uncertainty effectively. Despite the extensive use of numerical weather methods, forecasters are still restricted by the availability of numerical weather prediction products, which has led to various studies on temperature forecasting (Barry & Chorley, 1982; Paras *et al.*, 2007).

Due to this inadequacy, Neural Networks (NNs) have been applied to temperature forecasting. NNs mimic human intelligence in learning from complicated or imprecise data, and can be used to extract patterns and detect trends that are too complex to be perceived by humans or by other computer techniques (Mielke, 2008). An NN, which can be described as an adaptive machine with a natural tendency for storing experiential knowledge, is able to discover complex nonlinear relationships in meteorological processes and to communicate forecast uncertainty by relating the forecast data to the actual weather (Chang *et al.*, 2010). However, when the number of inputs to the model and the number of training examples become extremely large, the training procedure for an ordinary neural network, especially the Multilayer Perceptron (MLP), becomes tremendously slow and unduly tedious. MLPs are prone to overfitting the data (Radhika & Shashi, 2009) and adopt computationally intensive training algorithms; they also suffer from long training times and often become trapped in local minima (Ghazali & al-Jumeily, 2009).
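The parameter-count contrast behind this argument can be sketched numerically. The layer sizes and helper functions below (`mlp_weights`, `psnn_weights`) are illustrative assumptions, not figures from the chapter: a one-hidden-layer MLP trains every connection, whereas a pi-sigma network of order *k* (introduced below) trains only the input-to-summing-layer weights.

```python
def mlp_weights(n_in, hidden, n_out=1):
    # Fully connected MLP with one hidden layer, biases included:
    # every connection is a trainable parameter.
    return (n_in + 1) * hidden + (hidden + 1) * n_out

def psnn_weights(n_in, order, n_out=1):
    # Pi-sigma network: adjustable weights exist only between the
    # input layer and the summing layer; the summing-to-output
    # weights are fixed to unity and are not trained.
    return (n_in + 1) * order * n_out

print(mlp_weights(8, 32))   # 321 trainable parameters
print(psnn_weights(8, 2))   # 18 trainable parameters
```

With the same eight inputs, a modest MLP already carries an order of magnitude more trainable parameters than a second-order pi-sigma network, which is the source of the training-cost gap discussed in the text.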

Considering the limitations of the MLP, this work makes use of higher order neural networks (HONNs), which have the ability to expand the input representation space. The Pi-Sigma Neural Network (PSNN) (Shin & Ghosh, 1991-a), a class of HONN, provides high learning capability while requiring less memory in terms of weights and nodes, and at least two orders of magnitude fewer computations than the MLP for similar performance levels over a broad class of problems (Ghazali & al-Jumeily, 2009; Shin & Ghosh, 1991-b).

In conjunction with the benefits of the PSNN, a new model called the Jordan Pi-Sigma Neural Network (JPSN), which possesses a Jordan Neural Network architecture (Jordan, 1986), is proposed to perform temperature forecasting. The JPSN incorporates feedback connections into its structure while retaining the superior properties of the PSNN, mapped to the function variables and coefficients of the research area. This work is therefore conducted to show that the JPSN is suitable for one-step-ahead temperature prediction.

**2. The pi-sigma neural network (PSNN)**

In the PSNN, the *j*-th summing unit for the *i*-th output computes a linear combination of the inputs, and the output is a nonlinear function of the product of the summing units:

$$h_{ji} = \sum_{k} w_{kji}\,x_k + \theta_{ji}, \qquad y_i = \sigma\!\left(\prod_{j=1}^{K} h_{ji}\right)$$

where *w_kji* and *θ_ji* are adjustable coefficients, and *σ* is the nonlinear transfer function (Shin & Ghosh, 1991-a). The number of summing units in the PSNN reflects the network order: using an additional summing unit increases the network's order by one whilst preserving old connections and maintaining the network topology.

In the PSNN, the weights from the summing layer to the output layer are fixed to unity, which reduces the number of tuneable weights and therefore the training time. Sigmoid and linear functions are adopted in the output layer and the summing layer, respectively; the use of linear summing units makes the convergence analysis of the learning rules for the PSNN more accurate and tractable (Ghazali & al-Jumeily, 2009; Ghazali *et al.*, 2006). Compared with other HONN models, Shin and Ghosh (1991-b) argued that the PSNN maintains the high learning capability of HONNs while needing a much smaller number of weights, with at least two orders of magnitude fewer computations than the MLP for similar performance levels over a broad class of problems (Ghazali *et al.*, 2006). Moreover, the PSNN is superior to other HONNs in approximation precision and computational complexity, and it has a highly regular structure. The network has been successfully applied to image processing (Hussain & Liatsis, 2002), time-series prediction (Knowles, 2005; Ghazali *et al.*, 2011), function approximation (Shin & Ghosh, 1991-a; Shin & Ghosh, 1991-b), pattern recognition (Shin & Ghosh, 1991-a), cryptography (Song, 2008), and so forth.

**3. The properties and structure of the Jordan pi-sigma neural network (JPSN)**

The structure of the JPSN is quite similar to that of the ordinary PSNN. The main difference is that the JPSN is constructed with a recurrent link from the output layer back to the input layer. This structure captures the temporal dynamics of the time-series process and allows the network to compute in a more parsimonious way (Hussain & Liatsis, 2002). The architecture of the proposed JPSN is shown in Figure 2 below.

Fig. 2. The architecture of JPSN

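As a concrete illustration of this structure, the following is a minimal sketch of a single-output JPSN forward pass (the class name, initialisation ranges, and toy series are my own assumptions, not taken from the chapter). The previous output is fed back as a Jordan-style context input, each summing unit is linear, and because the summing-to-output weights are fixed to unity the output is simply the sigmoid of the product of the summing units:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class JPSN:
    """Sketch of a single-output Jordan pi-sigma network (assumed API)."""

    def __init__(self, n_inputs, order, rng=None):
        rng = rng or np.random.default_rng(0)
        # One extra input column for the recurrent (context) unit.
        self.w = rng.uniform(-0.5, 0.5, size=(order, n_inputs + 1))
        self.theta = rng.uniform(-0.5, 0.5, size=order)
        self.prev_y = 0.0  # context unit, initially zero

    def step(self, x):
        z = np.append(x, self.prev_y)   # external inputs + fed-back output
        h = self.w @ z + self.theta     # linear summing layer: h_j = w_j.z + theta_j
        y = sigmoid(np.prod(h))         # product of summing units, then sigmoid
        self.prev_y = y                 # store output for the next time step
        return y

# One-step-ahead prediction over a toy series scaled into (0, 1):
net = JPSN(n_inputs=3, order=2)
series = np.sin(np.linspace(0, 4, 20)) * 0.4 + 0.5
preds = [net.step(series[t - 3:t]) for t in range(3, len(series))]
```

In a real experiment the weights `w` and `theta` would be trained (e.g. by gradient descent on the one-step-ahead error); the sketch only shows how the order-2 product structure and the Jordan feedback interact at each time step.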