Figure 1. Station locations and period of records.

| AER | Location | Alt. (m) | Period | Tmax (°C) | Tmin (°C) | RHmax (%) | RHmin (%) | Ws (km h⁻¹) | Sra (MJ m⁻² day⁻¹) |
|---|---|---|---|---|---|---|---|---|---|
| Semiarid | Parbhani | 423 | 2001–2005 | 33.75 | 18.32 | 71.13 | 41.02 | 5.04 | 20.87 |
| Semiarid | Solapur | 25 | 2001–2005 | 34.15 | 20.14 | 73.28 | 45.09 | 6.15 | 18.96 |
| Semiarid | Bangalore | 930 | 2001–2005 | 28.90 | 17.70 | 89.15 | 47.30 | 8.68 | 18.95 |
| Semiarid | Kovilpatti | 90 | 2001–2005 | 35.11 | 23.37 | 80.36 | 48.52 | 6.60 | 19.30 |
| Semiarid | Udaipur | 433 | 2001–2005 | 31.81 | 16.33 | 72.36 | 36.44 | 3.74 | 19.45 |
| Arid | Anantapur | 350 | 2001–2005 | 34.43 | 21.78 | 73.32 | 33.91 | 9.64 | 20.27 |
| Arid | Hissar | 215 | 2001–2005 | 31.17 | 16.23 | 81.00 | 44.27 | 5.20 | 17.26 |
| Subhumid | Raipur | 298 | 2001–2005 | 32.60 | 19.91 | 80.62 | 44.08 | 5.33 | 17.80 |
| Subhumid | Faizabad | 133 | 2001–2005 | 31.56 | 18.18 | 87.02 | 52.11 | 3.51 | 17.88 |
| Subhumid | Ludhiana | 247 | 2001–2005 | 30.06 | 17.42 | 83.97 | 49.14 | 4.26 | 18.10 |
| Subhumid | Ranichauri | 1600 | 2001–2005 | 20.08 | 9.66 | 81.15 | 61.55 | 4.99 | 16.23 |
| Humid | Palampur | 1291 | 2001–2005 | 24.41 | 13.24 | 69.70 | 57.88 | 5.56 | 16.35 |
| Humid | Jorhat | 86 | 2001–2005 | 27.97 | 19.23 | 92.70 | 75.27 | 3.00 | 14.68 |
| Humid | Mohanpur | 10 | 2001–2005 | 32.20 | 21.04 | 96.18 | 61.48 | 1.27 | 18.06 |
| Humid | Dapoli | 250 | 2001–2005 | 31.13 | 18.87 | 93.77 | 69.22 | 4.92 | 18.02 |

Table 1. Geographical locations of study sites in India.

### 4. Theoretical consideration

The concept of neural networks was introduced by [31]. The neural-network approach is also referred to as 'connectionism' or 'parallel distributed processing'.

#### 4.1 Model of a neuron

The main function of an artificial neuron is to generate an output by applying a nonlinear activation function to the weighted sum of all inputs. Figure 2 illustrates a nonlinear model of a neuron, which forms the basis for designing an ANN. The input-layer neurons receive the input signals (x_i), and these signals are passed to the cell body through the synapses. Each synapse, or connecting link, is characterized by its own weight or strength: a signal at the input of synapse i connected to neuron k is multiplied by the synaptic weight w_ki. The input signals, weighted by the respective synapses of the neuron, are added by a linear combiner. An activation function, or squashing function, limits the permissible amplitude of the neuron's output to some finite value. An external bias (b_k) increases or decreases the net input of the activation function depending on whether the bias is positive or negative, respectively.

Figure 2. A nonlinear model of a neuron.

In mathematical form, a neuron k may be described by the following equations:

$$u_k = \sum_{i=1}^{n} w_{ki} x_i \tag{1}$$

$$y_k = \phi(u_k + b_k) \tag{2}$$

where x_1, x_2, ..., x_n = input signals; w_k1, w_k2, ..., w_kn = synaptic weights of neuron k; u_k = linear combiner output due to the input signals; b_k = bias; φ(.) = activation function; and y_k = output signal of the neuron k.

Let v_k be the induced local field or activation potential, which is given as:

$$v_k = u_k + b_k \tag{3}$$

Now, Eqs. (1), (2) and (3) can be combined as:

$$v_k = \sum_{i=0}^{n} w_{ki} x_i \tag{4}$$

$$y_k = \phi(v_k) \tag{5}$$

In Eq. (4), a new synapse with input x_0 = +1 and weight w_k0 = b_k is added to account for the effect of the bias.
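To make Eqs. (1)–(5) concrete, the sketch below implements a single neuron k in Python with a logistic sigmoid activation (defined later in Eq. (6)); the input, weight and bias values are arbitrary placeholders, not values from the chapter.

```python
import numpy as np

def logistic(v):
    """Logistic sigmoid activation, Eq. (6): squashes v into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-v))

def neuron_output(x, w, b):
    """Output of neuron k per Eqs. (1), (3) and (5).

    x : input signals (x_1 ... x_n)
    w : synaptic weights (w_k1 ... w_kn)
    b : external bias b_k
    """
    u = np.dot(w, x)        # Eq. (1): linear combiner output u_k
    v = u + b               # Eq. (3): induced local field v_k
    return logistic(v)      # Eq. (5): y_k = phi(v_k)

# Equivalent form of Eq. (4): absorb the bias as weight w_k0 on a fixed input x_0 = +1.
x = np.array([0.5, -1.2, 3.0])    # placeholder inputs
w = np.array([0.4, 0.1, -0.6])    # placeholder weights
b = 0.2                           # placeholder bias
x0 = np.concatenate(([1.0], x))   # prepend x_0 = +1
w0 = np.concatenate(([b], w))     # prepend w_k0 = b_k
assert np.isclose(neuron_output(x, w, b), logistic(np.dot(w0, x0)))
```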


#### 4.2 Neural network architecture parameters

Determination of an appropriate neural network architecture is one of the most important tasks in the model-building process. Various types of neural networks are analyzed to find the most appropriate architecture for a particular problem. Multilayer feed-forward networks are found to outperform all the others: although they are among the most fundamental models, they remain the most popular type of ANN structure for practical applications.

#### 4.3 Number of hidden layers

There is no fixed rule for selecting the number of hidden layers of a network; therefore, a trial-and-error method is used. Even a single hidden layer of neurons (with a sigmoid activation function) can be sufficient to model any solution surface of practical interest [36].

#### 4.4 Number of hidden neurons

The ability of an ANN to generalize to data not included in training depends on selecting enough hidden neurons to store the higher-order relationships necessary for adequately abstracting the process. There is no direct and precise way of determining the most appropriate number of neurons in a hidden layer, and the problem becomes more complicated as the number of hidden layers increases. Some studies indicate that more neurons in the hidden layer provide a solution surface that closely fits the training patterns. In practice, however, too many hidden neurons yield a solution surface that deviates significantly from the trend of the surface at intermediate points, or too literal an interpretation of the training points, which is called 'overfitting'. Further, a large number of hidden neurons reduces the speed of the network during training and testing. Too few hidden neurons, on the other hand, result in an inaccurate model and a solution surface that deviates from the training patterns. Choosing the optimum number of hidden neurons is therefore one of the important training parameters in an ANN. To solve this problem, several neural networks with different numbers of hidden neurons are trained/calibrated, and the one with the best performance together with a compact structure is accepted, as sketched below.
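A minimal sketch of this trial-and-error search, here written with scikit-learn's MLPRegressor; the data arrays, the 1–10 size range and the error metric are illustrative assumptions rather than the chapter's actual experimental setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

# Placeholder data: X holds daily weather inputs, y the target ET values.
rng = np.random.default_rng(0)
X_train, y_train = rng.random((200, 4)), rng.random(200)
X_test, y_test = rng.random((50, 4)), rng.random(50)

best_net, best_rmse = None, np.inf
for n_hidden in range(1, 11):  # trial-and-error over candidate hidden-layer sizes
    net = MLPRegressor(hidden_layer_sizes=(n_hidden,), activation='logistic',
                       max_iter=2000, random_state=0).fit(X_train, y_train)
    rmse = mean_squared_error(y_test, net.predict(X_test)) ** 0.5
    if rmse < best_rmse:  # keep the most accurate (and, by search order, compact) net
        best_net, best_rmse = net, rmse
print(best_net.hidden_layer_sizes, best_rmse)
```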


#### 4.5 Types of activation functions

The activation function or transfer function, denoted by φ(v), defines the output of a neuron in terms of the induced local field v. It is valuable in ANN applications because it introduces nonlinearity between inputs and outputs. The logistic sigmoid, hyperbolic tangent and linear functions are some widely used transfer functions in ANN modeling.

Logistic sigmoid function: a continuous function that reduces the output into the range of 0–1 and is defined as [32]:

$$\phi(v) = \frac{1}{1 + \exp(-v)} \tag{6}$$

Hyperbolic tangent function: used when the desired range of output of a neuron is between −1 and 1 and is expressed as [32]:

$$\phi(v) = \tanh(v) = \frac{1 - e^{-2v}}{1 + e^{-2v}} \tag{7}$$

Linear function: calculates the neuron's output by simply returning the value passed to it. It can be expressed as:

$$\phi(v) = v \tag{8}$$
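The three transfer functions of Eqs. (6)–(8) translate directly into NumPy; the function names below are mine, chosen for readability.

```python
import numpy as np

def logistic(v):
    """Eq. (6): logistic sigmoid, output in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-v))

def hyperbolic_tangent(v):
    """Eq. (7): tanh written in its exponential form, output in (-1, 1)."""
    return (1.0 - np.exp(-2.0 * v)) / (1.0 + np.exp(-2.0 * v))

def linear(v):
    """Eq. (8): identity; passes the induced local field through unchanged."""
    return v

v = np.linspace(-3, 3, 7)
assert np.allclose(hyperbolic_tangent(v), np.tanh(v))  # matches the library tanh
```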

#### 4.6 Neural network architectures

The manner in which the neurons of a neural network are structured is intimately linked with the learning algorithm used to train the network. This leads to the formation of network architectures. Neural network architectures are classified into distinct classes depending upon the information flow. The different network architectures are: (a) multilayer perceptrons, (b) recurrent, (c) RBF and (d) Kohonen self-organizing feature map, etc. [37].

#### 4.7 Multilayer perceptrons (MLPs)

MLPs are layered (single-layer or multi-layer) feed-forward networks typically trained with static back-propagation (Figure 3); hence they are also called FFBP neural networks. These networks have found their way into countless applications requiring static pattern classification. The architecture consists of an input layer, output layer(s) and one or more hidden layers. The input signal moves in only the forward direction, from the input nodes to the output nodes through the hidden nodes. The function of a hidden layer is to perform intermediate computations between the input and output layers through weights. The major advantage of FFBP networks is that they are easy to handle and can easily approximate any input-output map [37].
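As a sketch of such an FFBP network, the fragment below computes one forward pass through a single-hidden-layer MLP with logistic hidden units (Eq. (6)) and a linear output unit (Eq. (8)); the layer sizes, random weights and example inputs are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, W1, b1, W2, b2):
    """One-hidden-layer MLP: logistic hidden units, linear output unit."""
    v1 = W1 @ x + b1                 # induced local fields of the hidden layer
    h = 1.0 / (1.0 + np.exp(-v1))    # Eq. (6) applied element-wise
    return W2 @ h + b2               # Eq. (8): linear output

# Illustrative sizes: 4 meteorological inputs -> 6 hidden neurons -> 1 output (ET estimate).
W1 = rng.normal(scale=0.1, size=(6, 4)); b1 = np.zeros(6)
W2 = rng.normal(scale=0.1, size=(1, 6)); b2 = np.zeros(1)
x = np.array([33.7, 18.3, 71.1, 5.0])  # e.g., Tmax, Tmin, RHmax, Ws (placeholder values)
print(mlp_forward(x, W1, b1, W2, b2))
```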
