Drought - Detection and Solutions

successfully handle high dimensionality and multicollinearity, because it is both fast and insensitive to overfitting. It is, however, sensitive to the sampling design.

2.6 Adaptive neuro-fuzzy inference system or adaptive network-based fuzzy inference system (ANFIS)

ANFIS is a hybrid learning procedure which employs the linguistic concept of fuzzy systems (human knowledge) and the training power of the ANN to solve a regression problem [41]. All ANFIS works reported here are based on the Takagi-Sugeno fuzzy inference system [42], where the fuzzy rule applied has the form: if x is A and y is B, then z = f(x, y). Other fuzzy methods are Mamdani-type or Tsukamoto-type [42].

Figure 6 depicts a typical ANFIS architecture. Square nodes (adaptive nodes) have parameters, while circle nodes (fixed nodes) do not. The first and the fourth layers contain the parameters that can be modified over time. A particular learning method is required to update these parameters.

In layer 1, every node is adaptive and associated with an appropriate continuous and piecewise differentiable function such as the Gaussian, generalized bell-shaped, trapezoidal-shaped, and triangular-shaped functions.

In layer 2, every node is fixed and represents the firing strength of each rule. This is calculated by the fuzzy AND connective, the "product" of the incoming signals, that is, O2,i = wi = μAi(x) × μBi(y), i = 1, 2.

In layer 3, every node is also fixed, showing the normalized firing strength of each rule. The ith node calculates the ratio of the ith rule's firing strength to the summation of the two rules' firing strengths: O3,i = w̄i = wi/(w1 + w2), i = 1, 2.

In every adaptive node of layer 4 (consequent nodes) is a function indicating the contribution of the ith rule to the overall output: O4,i = w̄i fi = w̄i(pi x + qi y + ri), where w̄i is the output of layer 3 and {pi, qi, ri} is the parameter set. Finally, layer 5 (output node) is a single node that computes the overall output of the ANFIS as O5,1 = Σi w̄i fi = (Σi wi fi)/(Σi wi).

One of the most important steps in developing a satisfactory forecasting model is the selection of the input variables. These variables determine the structure of the forecasting model and affect the weighted coefficients and the results of the model.

Figure 6. Architecture of an adaptive network-based fuzzy inference system (ANFIS).
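The five-layer computation described above can be sketched as a small forward pass, assuming Gaussian membership functions and two first-order Sugeno rules; all membership and consequent parameters below are illustrative values, not taken from any fitted model:

```python
import math

def gaussian(v, center, sigma):
    """Layer 1: membership degree of input v in a fuzzy set."""
    return math.exp(-((v - center) ** 2) / (2 * sigma ** 2))

def anfis_forward(x, y, rules):
    # Layers 1-2: firing strength w_i = mu_Ai(x) * mu_Bi(y)
    w = [gaussian(x, r["cA"], r["sA"]) * gaussian(y, r["cB"], r["sB"])
         for r in rules]
    # Layer 3: normalized firing strengths w_bar_i = w_i / sum(w)
    total = sum(w)
    w_bar = [wi / total for wi in w]
    # Layer 4: rule contributions w_bar_i * (p_i*x + q_i*y + r_i)
    contrib = [wb * (r["p"] * x + r["q"] * y + r["r"])
               for wb, r in zip(w_bar, rules)]
    # Layer 5: overall output is the sum of the contributions
    return sum(contrib)

# Two hypothetical rules with made-up premise and consequent parameters
rules = [
    {"cA": 0.0, "sA": 1.0, "cB": 0.0, "sB": 1.0, "p": 1.0, "q": 0.0, "r": 0.0},
    {"cA": 2.0, "sA": 1.0, "cB": 2.0, "sB": 1.0, "p": 0.0, "q": 1.0, "r": 1.0},
]

print(anfis_forward(1.0, 1.0, rules))  # both rules fire equally here
```

At x = y = 1 both rules fire with the same strength, so the output is the plain average of the two consequents, (1 + 2)/2 = 1.5.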

2.7 Boosting

Boosting attempts to increase the performance of a given learning algorithm by iteratively adjusting the weight of each observation based on the last training/testing process. In other words, the meta-algorithm produces a sequence of models by adaptive reweighting of the training set [45].
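The adaptive reweighting idea can be illustrated with the AdaBoost-style weight-update step; the ±1 labels, predictions, and uniform initial weights below are made-up values for illustration:

```python
import math

# Sketch of the reweighting step used by AdaBoost-style boosting:
# observations the current model got wrong receive larger weights,
# so the next model in the sequence focuses on them.

labels      = [ 1, -1,  1,  1, -1]
predictions = [ 1,  1,  1, -1, -1]      # items 2 and 4 are misclassified
weights     = [0.2] * 5                 # uniform initial weights

# Weighted error of the current model
err = sum(w for w, yt, yp in zip(weights, labels, predictions) if yt != yp)
alpha = 0.5 * math.log((1 - err) / err)  # this model's vote in the ensemble

# Increase weights on misclassified points, decrease on correct ones,
# then renormalize so the weights again sum to 1.
weights = [w * math.exp(-alpha * yt * yp)
           for w, yt, yp in zip(weights, labels, predictions)]
total = sum(weights)
weights = [w / total for w in weights]

print(weights)  # misclassified points now carry more weight
```

After the update, the misclassified observations jointly carry half of the total weight, which is what forces the next model to concentrate on them.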

AdaBoost, the first boosting algorithm, is highly sensitive to noisy data and outliers, as the algorithm tries to fit every point perfectly. Friedman [46] extended the concept to gradient boosting, which constructs additive regression models by sequentially fitting a simple parameterized function (base learner) to the current "pseudo"-residuals by least squares at each iteration. The pseudo-residuals are the gradient of the loss function being minimized, taken with respect to the model values at each training data point in the current step. Each model is added iteratively and the loss is computed; the loss represents the difference between the actual and predicted values (the error residual), and using this loss value, the predictions are updated to minimize these residuals.

A regularization method that penalizes various parts of the boosting algorithm, such as shrinking each stage's contribution, is necessary to avoid overfitting and generally improves the performance of the algorithm.
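A minimal sketch of gradient boosting under squared-error loss, with shrinkage as the regularizer: each stage fits a base learner to the current residuals, and a factor nu < 1 damps that stage's contribution. The one-split regression stump used as the base learner is an assumed simple choice, not the only option:

```python
def fit_stump(xs, residuals):
    """Find the threshold split minimizing squared error on the residuals."""
    best = None
    for t in xs:
        left  = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        lm = sum(left) / len(left) if left else 0.0
        rm = sum(right) / len(right) if right else 0.0
        sse = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

def gradient_boost(xs, ys, n_stages=50, nu=0.1):
    """Return in-sample predictions of the boosted ensemble."""
    pred = [sum(ys) / len(ys)] * len(xs)     # stage 0: constant model
    for _ in range(n_stages):
        # Pseudo-residuals = negative gradient of squared-error loss
        residuals = [y - p for y, p in zip(ys, pred)]
        stump = fit_stump(xs, residuals)
        # Shrinkage: add only a fraction nu of each stage's fit
        pred = [p + nu * stump(x) for p, x in zip(pred, xs)]
    return pred

# Toy step-shaped data (illustrative values)
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 0.0, 1.0, 1.0, 1.0]
print(gradient_boost(xs, ys))
```

With nu = 0.1 each stage removes only a tenth of the remaining residual, so many small steps are taken instead of a few large ones; this slow fitting is exactly the overfitting control the shrinkage provides.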
