Considering the limitations of the MLP, this work turns to higher order neural networks (HONNs), which have the ability to expand the input representation space. The Pi-Sigma Neural Network (PSNN) (Shin & Ghosh, 1991-a), a class of HONN, achieves high learning capability while requiring less memory in terms of weights and nodes, and at least two orders of magnitude fewer computations than the MLP for similar performance levels, over a broad class of problems (Ghazali & al-Jumeily, 2009; Shin & Ghosh, 1991-b).

In conjunction with the benefits of PSNN, a new model called the Jordan Pi-Sigma Neural Network (JPSN), which possesses a Jordan Neural Network architecture (Jordan, 1986), is proposed to perform temperature forecasting. The JPSN incorporates feedback connections in its structure while retaining the superior properties of PSNN. Consequently, this work is conducted in order to show that the JPSN is suitable for one-step-ahead temperature prediction.

**2. Pi-sigma neural network (PSNN)**

PSNN is a type of HONN and was first introduced by Shin & Ghosh (1991-a). The basic idea behind the network is that a polynomial of the input variables is formed as a product ("pi") of several weighted linear combinations ("sigma") of the input variables; hence the network is called pi-sigma rather than sigma-pi. The PSNN exhibits fast learning while greatly reducing network complexity by utilising an efficient polynomial form over many input variables. This special polynomial form allows the PSNN to dramatically reduce the number of weights in its structure. Figure 1 shows the architecture of the PSNN:

Fig. 1. Structure of the *K*th order PSNN

Input $\mathbf{x}$ is an $N$-dimensional vector and $x_k$ is the $k$th component of $\mathbf{x}$. The weighted inputs are fed to a layer of $K$ linear summing units, where $h_{ji}$ is the output of the $j$th summing unit for the $i$th output $y_i$, viz:

$$y_i = \sigma \left( \prod_{j=1}^{K} h_{ji} \right), \qquad h_{ji} = \sum_{k=1}^{N} w_{kji}\, x_k + \theta_{ji} \tag{1}$$

where $w_{kji}$ and $\theta_{ji}$ are adjustable coefficients and $\sigma$ is the nonlinear transfer function (Shin & Ghosh, 1991-a). The number of summing units in the PSNN reflects the network order. Adding a summing unit increases the network's order by 1 whilst preserving old connections and maintaining the network topology.
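
To make Eq. (1) concrete, the following is a minimal NumPy sketch of the forward pass of a single-output, $K$th-order PSNN, together with the order-increment property described above; the names (`psnn_forward`, `add_summing_unit`, `W`, `theta`) are illustrative choices, not from the original papers.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def psnn_forward(x, W, theta):
    """Forward pass of a single-output PSNN (Eq. 1).

    x     : (N,)   input vector
    W     : (K, N) weights of the K linear summing units
    theta : (K,)   biases of the summing units
    """
    h = W @ x + theta           # h_j = sum_k w_kj * x_k + theta_j  (sigma layer)
    return sigmoid(np.prod(h))  # y = sigma(prod_j h_j): pi unit with unity weights

def add_summing_unit(W, theta, rng):
    """Raise the network order by 1 by appending one summing unit,
    preserving all existing weights and biases."""
    w_new = rng.normal(scale=0.1, size=(1, W.shape[1]))
    return np.vstack([W, w_new]), np.append(theta, 0.0)

# Example: a 2nd-order PSNN with N = 4 inputs.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(2, 4))
theta = np.zeros(2)
x = rng.normal(size=4)
print(psnn_forward(x, W, theta))

# Increasing the order to 3 leaves the old connections untouched.
W3, theta3 = add_summing_unit(W, theta, rng)
print(psnn_forward(x, W3, theta3))
```

Note that only the summing layer carries trainable parameters; the product ("pi") unit applies fixed unity weights, which is what the plain product in Eq. (1) expresses.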

In the PSNN, the weights from the summing layer to the output layer are fixed to unity, resulting in a reduction in the number of tuneable weights and, consequently, in the training time. Sigmoid and linear functions are adopted in the output layer and the summing layer, respectively. The use of linear summing units makes the convergence analysis of the learning rules for the PSNN more accurate and tractable (Ghazali & al-Jumeily, 2009; Ghazali *et al.*, 2006). Compared to other HONN models, Shin and Ghosh (1991-b) argued that the PSNN maintains the high learning capability of HONNs while needing a much smaller number of weights, with at least two orders of magnitude fewer computations than the MLP for similar performance levels, over a broad class of problems (Ghazali *et al.*, 2006). Moreover, the PSNN is superior to other HONNs in approximation precision and computational complexity, and has a highly regular structure. The network has been successfully applied to image processing (Hussain & Liatsis, 2002), time series prediction (Knowles, 2005; Ghazali *et al.*, 2011), function approximation (Shin & Ghosh, 1991-a; Shin & Ghosh, 1991-b), pattern recognition (Shin & Ghosh, 1991-a), cryptography (Song, 2008), and so forth.
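
As a back-of-the-envelope illustration of this weight reduction (the specific numbers are an assumed example, not figures from the cited works): with the pi unit's weights fixed at unity, a $K$th-order PSNN with $N$ inputs and a single output trains only the summing-layer parameters,

$$\#\text{weights} = K(N + 1),$$

so a 3rd-order PSNN over $N = 8$ inputs has just $3 \times (8 + 1) = 27$ tuneable parameters, whereas a comparable single-hidden-layer MLP with $H = 10$ hidden units would train $H(N+1) + (H+1) = 101$.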
