There are three major methods for wind and solar forecasting: classical statistical techniques, computational intelligence methods, and hybrid algorithms. Each category includes several methods.

Time series methods are among the most commonly used statistical techniques for forecasting. A time series can be defined as "the evolution of a set of observations sampled at regular intervals along time. The specificity of time series models, compared to other statistic methods, is that it introduces 'time' as one of its explicative variables" [1]. Time series methods develop mathematical models that can forecast future observations on the basis of available data. The sections below provide definitions and explanations for the time series methods commonly in use for forecasting.

## 2. Time series methods

This section provides an overview of the most commonly used time series methods for solar and wind forecasting. A brief description is provided for each method along with its mathematical representation.

### 2.1. Autoregressive (AR)

The autoregressive (AR) model represents a process whose current value is a linear combination of its past values and a noise signal $\omega\_t$. The AR model of order $m$, AR($m$), is described by [2]:

$$\tilde{\mathbf{x}}\_{t} = \sum\_{i=1}^{m} \Phi\_{i} \mathbf{x}\_{t-i} + \omega\_{t} = \Phi\_{1} \mathbf{x}\_{t-1} + \Phi\_{2} \mathbf{x}\_{t-2} + \dots + \Phi\_{m} \mathbf{x}\_{t-m} + \omega\_{t} \tag{1}$$

where $\mathbf{x}\_t$ denotes the time series values, $\omega\_t$ is the noise, $\Phi = (\Phi\_1, \Phi\_2, \dots, \Phi\_m)$ is the vector of model coefficients, and $m$ is a positive integer.
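As a minimal illustration of Eq. (1), the sketch below simulates an AR(2) process with assumed coefficients $\Phi\_1 = 0.6$, $\Phi\_2 = 0.3$ and recovers them by ordinary least squares on the lagged values (a simple alternative to the Yule-Walker equations):

```python
import numpy as np

# Simulate an AR(2) process per Eq. (1); phi values are assumed examples.
rng = np.random.default_rng(0)
phi = np.array([0.6, 0.3])              # true AR coefficients (stationary)
T = 5000
x = np.zeros(T)
for t in range(2, T):
    x[t] = phi[0] * x[t - 1] + phi[1] * x[t - 2] + rng.normal(scale=0.5)

# Build the lagged design matrix [x_{t-1}, x_{t-2}] and solve for Phi.
X = np.column_stack([x[1:-1], x[:-2]])  # rows correspond to t = 2..T-1
y = x[2:]
phi_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(phi_hat)                          # close to [0.6, 0.3]
```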

### 2.2. Moving average (MA)

Unlike the AR model, which uses a weighted sum of past values $\mathbf{x}\_{t-i}$ to provide a time-series representation, the moving average (MA) model combines the current and $n$ past noise values ($\omega\_t, \omega\_{t-1}, \omega\_{t-2}, \dots, \omega\_{t-n}$) to develop a time-series process. The MA model of order $n$, MA($n$), is described as [3]:

$$\tilde{\mathbf{x}}\_{t} = \sum\_{j=0}^{n} \theta\_{j} \omega\_{t-j} = \omega\_{t} + \theta\_{1} \omega\_{t-1} + \theta\_{2} \omega\_{t-2} + \dots + \theta\_{n} \omega\_{t-n} \tag{2}$$

where $\theta = (\theta\_1, \theta\_2, \dots, \theta\_n)$ is the vector of model coefficients and $\theta\_0 = 1$.
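Eq. (2) is a finite convolution of the noise sequence with the coefficient vector, so an MA series can be built directly; the sketch below uses assumed values $\theta\_1 = 0.4$, $\theta\_2 = 0.2$ and checks the defining property that the autocorrelation of an MA($n$) process cuts off after lag $n$:

```python
import numpy as np

# Build an MA(2) series per Eq. (2): x_t = sum_j theta_j * omega_{t-j}.
rng = np.random.default_rng(1)
theta = np.array([1.0, 0.4, 0.2])       # (theta_0, theta_1, theta_2), assumed
omega = rng.normal(size=10000)
x = np.convolve(omega, theta, mode="full")[: len(omega)]

def autocorr(s, lag):
    """Sample autocorrelation of s at the given lag."""
    s = s - s.mean()
    return np.dot(s[:-lag], s[lag:]) / np.dot(s, s)

# Lags 1 and 2 are clearly nonzero; lag 3 is near zero for an MA(2).
print(autocorr(x, 1), autocorr(x, 2), autocorr(x, 3))
```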

### 2.3. Autoregressive moving average (ARMA)


78 Time Series Analysis and Applications


The autoregressive moving average (ARMA) model is developed by combining AR and MA terms to provide a parsimonious parametrization of a process. The ARMA model of orders m and n, ARMA(m, n), is given by [3]:

$$\tilde{\mathbf{x}}\_t = \sum\_{i=1}^m \Phi\_i \mathbf{x}\_{t-i} + \sum\_{j=0}^n \theta\_j \omega\_{t-j} \tag{3}$$

where $\Phi\_i$ and $\theta\_j$ are the autoregressive and moving average coefficients of the ARMA model, respectively.
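A minimal ARMA(1, 1) simulation of Eq. (3), with assumed coefficients $\Phi\_1 = 0.7$ and $\theta\_1 = 0.3$ ($\theta\_0 = 1$); unlike a pure MA model, the autocorrelation does not cut off but decays geometrically by a factor of $\Phi\_1$ per lag:

```python
import numpy as np

# Simulate an ARMA(1, 1) process per Eq. (3); coefficients are assumed.
rng = np.random.default_rng(2)
phi1, theta1 = 0.7, 0.3
T = 5000
omega = rng.normal(size=T)
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi1 * x[t - 1] + omega[t] + theta1 * omega[t - 1]

def autocorr(s, lag):
    """Sample autocorrelation of s at the given lag."""
    s = s - s.mean()
    return np.dot(s[:-lag], s[lag:]) / np.dot(s, s)

# rho(2) is approximately phi1 * rho(1) for an ARMA(1, 1) process.
print(autocorr(x, 1), autocorr(x, 2))
```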

### 2.4. Autoregressive moving average model with exogenous variables (ARMAX)

The autoregressive moving average model with exogenous variables (ARMAX) provides a multivariate time-series representation that enhances the accuracy of the univariate ARMA model by including relevant information in addition to the time series under consideration. For example, climate information such as cloud cover, humidity, and wind speed and direction can be included as exogenous variables in an ARMA model to develop an ARMAX model for more accurate forecasting of solar radiation time series. The ARMAX model of orders m, n and p, ARMAX(m, n, p), is defined as [3]:

$$\tilde{\mathbf{x}}\_{t} = \sum\_{i=1}^{m} \Phi\_{i} \mathbf{x}\_{t-i} + \sum\_{j=0}^{n} \theta\_{j} \omega\_{t-j} + \sum\_{k=1}^{p} \lambda\_{k} \mathbf{e}\_{t-k} \tag{4}$$

where $\Phi\_i$, $\theta\_j$ and $\lambda\_k$ are the autoregressive, moving average and exogenous coefficients of the ARMAX model, and $\mathbf{e}\_t$ is the exogenous input term.
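The special case of Eq. (4) with $m = 1$, $n = 0$, $p = 1$ can be sketched with synthetic data; here the exogenous series $e$ stands in for, say, a cloud cover signal (all values are assumed, not taken from any cited article), and both coefficients are recovered jointly by least squares:

```python
import numpy as np

# ARMAX sketch (m=1, n=0, p=1): x depends on its own lag and a lagged
# exogenous input e. All series and coefficients are synthetic examples.
rng = np.random.default_rng(3)
T = 4000
e = rng.normal(size=T)                  # exogenous input (e.g. cloud cover)
omega = rng.normal(scale=0.3, size=T)
phi1, lam1 = 0.5, 0.8
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi1 * x[t - 1] + lam1 * e[t - 1] + omega[t]

# Joint least-squares estimate of (Phi_1, lambda_1).
X = np.column_stack([x[:-1], e[:-1]])
coef, *_ = np.linalg.lstsq(X, x[1:], rcond=None)
print(coef)                             # close to [0.5, 0.8]
```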

### 2.5. Autoregressive integrated moving average (ARIMA)

The autoregressive integrated moving average (ARIMA) model is used for non-stationary time series. Although different sections of a non-stationary process exhibit differences in local trend or level, they show certain levels of similarity. Fitting a stationary ARMA(m, n) process to the $d$th difference of the time series develops an ARIMA(m, d, n) model. The ARIMA(m, d, n) model is represented by [4]:

$$\tilde{\mathbf{x}}\_{t} = \sum\_{i=1}^{m} \Phi\_{i} S^{d} \mathbf{x}\_{t-i} + \sum\_{j=0}^{n} \theta\_{j} \omega\_{t-j} \tag{5}$$

where $S = 1 - q^{-1}$ is the differencing operator and $\Phi\_m(q)$ is a stationary and invertible AR($m$) operator; $\mathbf{x}\_t$, $\omega\_t$, $\Phi\_i$ and $\theta\_j$ are the observed time series values, error, AR and MA parameters, respectively; $d$ is the number of non-seasonal differences; $m$ is the number of autoregressive terms; and $n$ is the number of lagged forecast errors.
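The differencing idea behind Eq. (5) can be sketched on a synthetic random walk with drift: the level series is non-stationary, while its first difference ($d = 1$) has nearly constant mean and variance, so a stationary ARMA model can be fitted to it:

```python
import numpy as np

# A random walk with drift is non-stationary; its first difference is
# stationary white noise plus the drift. Values here are assumed examples.
rng = np.random.default_rng(4)
T = 3000
omega = rng.normal(size=T)
x = np.cumsum(0.05 + omega)        # non-stationary level series (drift 0.05)

dx = np.diff(x)                    # S x_t = x_t - x_{t-1}
# The spread of the level series dwarfs that of the differenced series,
# and the sample mean of dx recovers the drift.
print(x.std(), dx.std(), dx.mean())
```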

### 2.6. Autoregressive fractionally integrated moving average (ARFIMA)

The autoregressive fractionally integrated moving average (ARFIMA) model is used for long-memory forecasting. ARFIMA generalizes ARIMA by allowing the differencing order $d$ to take fractional values. An ARFIMA model is given by [5]:

$$\left(1 - \sum\_{i=1}^{m} \Phi\_i L^i \right) (1 - L)^d \tilde{\mathbf{x}}\_t = \left(1 + \sum\_{j=1}^{n} \theta\_j L^j \right) \omega\_t \tag{6}$$

where powers of $L$ indicate a corresponding number of shifts backward in the time series, and $(1 - L)^d$ is the fractional differencing operator.
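The operator $(1 - L)^d$ in Eq. (6) expands as a binomial series whose weights can be computed recursively; the sketch below uses an assumed example value $d = 0.4$ and an arbitrary 50-term truncation. The slow decay of the weights is what gives ARFIMA its long memory, and for integer $d$ the weights collapse to ordinary differencing:

```python
import numpy as np

def frac_diff_weights(d, n_terms):
    """Binomial-series weights of (1 - L)^d: w_0 = 1, w_k = w_{k-1}(k-1-d)/k."""
    w = np.zeros(n_terms)
    w[0] = 1.0
    for k in range(1, n_terms):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

w = frac_diff_weights(0.4, 6)
print(w)        # [1, -0.4, -0.12, ...]: slowly decaying weights

# Applying the operator to a series x: truncated convolution with the weights.
x = np.sin(np.linspace(0, 10, 200))
fdx = np.convolve(x, frac_diff_weights(0.4, 50))[: len(x)]
```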

### 2.7. Autoregressive integrated moving average with exogenous variables (ARIMAX)

The autoregressive integrated moving average with exogenous variables (ARIMAX) model includes the previous values of an exogenous time series in the ARIMA model to enhance its performance and accuracy. It is more applicable to time series with sudden changes in trends. An ARIMA(m, d, n) process that includes the past $p$ values of an exogenous variable $\mathbf{e}\_t$ develops an ARIMAX process of order (m, d, n, p). The ARIMAX(m, d, n, p) model is represented by [3]:

$$\tilde{\mathbf{x}}\_t = \sum\_{i=1}^m \Phi\_i \mathbf{S}^d \mathbf{x}\_{t-i} + \sum\_{j=0}^n \theta\_j \, \boldsymbol{\omega}\_{t-j} + \sum\_{k=1}^p \lambda\_k \, \boldsymbol{e}\_{t-k} \tag{7}$$

where $\omega\_t$ is the white noise, and $\Phi\_i$, $\theta\_j$ and $\lambda\_k$ are the coefficients of the autoregressive, moving average and exogenous inputs, respectively.
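Combining the two previous sketches gives a minimal ARIMAX example for Eq. (7) with $m = 1$, $d = 1$, $n = 0$, $p = 1$: the observed series is first differenced, then the differences are regressed on their own lag and on a lagged exogenous input (all data synthetic, all coefficients assumed):

```python
import numpy as np

# ARIMAX sketch: the differences of x follow an ARX(1) driven by a lagged
# exogenous series e; x itself is the non-stationary cumulative sum.
rng = np.random.default_rng(5)
T = 4000
e = rng.normal(size=T)
omega = rng.normal(scale=0.2, size=T)
dx = np.zeros(T)
for t in range(1, T):
    dx[t] = 0.4 * dx[t - 1] + 0.6 * e[t - 1] + omega[t]
x = 10.0 + np.cumsum(dx)               # observed non-stationary series

d1 = np.diff(x)                        # recover the stationary differences
X = np.column_stack([d1[:-1], e[1:-1]])
coef, *_ = np.linalg.lstsq(X, d1[1:], rcond=None)
print(coef)                            # close to [0.4, 0.6]
```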

### 2.8. Vector autoregressive (VAR)

The vector autoregressive (VAR) model characterizes linear dependences between two or more time series. It uses multiple variables to generalize the univariate autoregressive (AR) model. A $k$-dimensional VAR model of order $L$ is given by [6]:

$$\tilde{\mathbf{x}}\_{t} = \mathbf{v} + \sum\_{i=1}^{L} A\_{i} \mathbf{x}\_{t-i} + \omega\_{t} = \mathbf{v} + A\_{1} \mathbf{x}\_{t-1} + \dots + A\_{L} \mathbf{x}\_{t-L} + \omega\_{t} \tag{8}$$

where $\mathbf{x}\_t$ and $\mathbf{v}$ are $k \times 1$ vectors of variables and constants, respectively; $L$ is the maximum lag in the VAR model; $A\_i$ is a $k \times k$ matrix of lag-order parameters; and $\omega\_t = (\omega\_{1t}, \dots, \omega\_{kt})$ is the vector of white noise [6, 7].
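A two-dimensional VAR(1) sketch of Eq. (8), with an assumed coefficient matrix $A$ and intercept $\mathbf{v}$ (one could think of the two components as, say, wind speed and temperature anomalies); least squares recovers the parameters equation by equation:

```python
import numpy as np

# Simulate a stable 2-dimensional VAR(1) process; A and v are assumed.
rng = np.random.default_rng(6)
A = np.array([[0.5, 0.2],
              [0.1, 0.4]])
v = np.array([0.3, -0.1])
T = 6000
x = np.zeros((T, 2))
for t in range(1, T):
    x[t] = v + A @ x[t - 1] + rng.normal(scale=0.5, size=2)

# Estimate (v, A) by regressing x_t on [1, x_{t-1}] for each component.
Z = np.column_stack([np.ones(T - 1), x[:-1]])
B, *_ = np.linalg.lstsq(Z, x[1:], rcond=None)    # shape (3, 2)
v_hat, A_hat = B[0], B[1:].T
print(A_hat)                                     # close to A
```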

### 2.9. Autoregressive conditional heteroscedasticity (ARCH) and generalized ARCH (GARCH)

The autoregressive conditional heteroscedasticity (ARCH) model is used for time series whose error terms have time-varying conditional variances [7].

Estimated values are calculated using the following equations [8]:

$$\mathbf{x}\_t = \varepsilon\_t \sigma\_t \tag{9a}$$

$$\sigma\_{t} = \sqrt{a\_0 + \sum\_{i=1}^{q} a\_i \mathbf{x}\_{t-i}^2} \tag{9b}$$

where $\mathbf{x}\_t$ is the observed time series value; $\varepsilon\_t$ is the error; $\sigma\_t$ is the conditional standard deviation; and $a\_0$ is the constant added to the model.

The generalized ARCH (GARCH) model estimates the values by:

$$\mathbf{x}\_t = \varepsilon\_t \sigma\_t \tag{10a}$$

$$\sigma\_t = \sqrt{a\_0 + \sum\_{i=1}^{q} a\_i \mathbf{x}\_{t-i}^2 + \sum\_{j=1}^{p} \beta\_j \sigma\_{t-j}^2} \tag{10b}$$

By setting p = 0, the GARCH model reduces to an ARCH process with parameter q.
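A minimal GARCH(1, 1) simulation following Eqs. (10a) and (10b), with assumed parameters $a\_0 = 0.1$, $a\_1 = 0.1$, $\beta\_1 = 0.8$; it reproduces the characteristic volatility clustering, where $|x\_t|$ is autocorrelated even though $x\_t$ itself is not:

```python
import numpy as np

# Simulate GARCH(1, 1): x_t = eps_t * sigma_t, with sigma_t^2 updated from
# the previous squared observation and variance. Parameters are assumed.
rng = np.random.default_rng(7)
a0, a1, b1 = 0.1, 0.1, 0.8
T = 20000
x = np.zeros(T)
sigma2 = np.full(T, a0 / (1 - a1 - b1))   # start at unconditional variance
eps = rng.normal(size=T)
for t in range(1, T):
    sigma2[t] = a0 + a1 * x[t - 1] ** 2 + b1 * sigma2[t - 1]
    x[t] = eps[t] * np.sqrt(sigma2[t])

def autocorr(s, lag):
    """Sample autocorrelation of s at the given lag."""
    s = s - s.mean()
    return np.dot(s[:-lag], s[lag:]) / np.dot(s, s)

# x is serially uncorrelated, but |x| is not: volatility clustering.
print(autocorr(x, 1), autocorr(np.abs(x), 1))
```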

## 3. Performance metrics

The performance of forecast methods is measured by various metrics related to the forecast error; higher error values correspond to lower forecast accuracy. This section provides the definitions and equations for the performance metrics commonly used to calculate the forecast error. Note that $\mathbf{x}$ represents the observed value, $\tilde{\mathbf{x}}$ is the predicted value (forecast) and $n$ is the total number of samples.

### 3.1. MSE


Mean square error (MSE) is calculated by:

$$\text{MSE} = \frac{1}{n} \sum\_{i=1}^{n} \left( \tilde{\mathbf{x}}\_{i} - \mathbf{x}\_{i} \right)^{2} \tag{11}$$

### 3.2. NMSE

Normalized mean square error (NMSE) is calculated by normalizing the MSE as:

$$\text{NMSE} = \frac{n \sum\_{i=1}^{n} \left(\tilde{\mathbf{x}}\_{i} - \mathbf{x}\_{i}\right)^{2}}{\sum\_{i=1}^{n} \mathbf{x}\_{i} \sum\_{i=1}^{n} \tilde{\mathbf{x}}\_{i}} \tag{12}$$

### 3.3. RMSE

Root mean square error (RMSE) is calculated as the square root of the MSE:

$$\text{RMSE} = \sqrt{\frac{1}{n} \sum\_{i=1}^{n} \left(\tilde{\mathbf{x}}\_{i} - \mathbf{x}\_{i}\right)^{2}}\tag{13}$$

### 3.4. NRMSE

Normalized root mean square error (NRMSE) is calculated by normalizing the RMSE as:

$$\text{NRMSE} = \frac{\sqrt{\frac{1}{n}\sum\_{i=1}^{n} \left(\tilde{\mathbf{x}}\_{i} - \mathbf{x}\_{i}\right)^{2}}}{\frac{1}{n}\sum\_{i=1}^{n} \mathbf{x}\_{i}} \tag{14}$$

### 3.5. MAE

Mean absolute error (MAE) is calculated by:

$$\text{MAE} = \frac{1}{n} \sum\_{i=1}^{n} |\tilde{\mathbf{x}}\_i - \mathbf{x}\_i| \tag{15}$$

### 3.6. NMAE

Normalized mean absolute error (NMAE) is calculated by normalizing the MAE as:

$$\text{NMAE} = \frac{1}{n} \sum\_{i=1}^{n} \frac{|\tilde{\mathbf{x}}\_i - \mathbf{x}\_i|}{\max(\mathbf{x}\_i)} \tag{16}$$

### 3.7. MRE

Mean relative error (MRE) is calculated by:

$$\text{MRE} = \frac{1}{n} \sum\_{i=1}^{n} \frac{|\tilde{\mathbf{x}}\_i - \mathbf{x}\_i|}{\mathbf{x}\_i} \tag{17}$$

### 3.8. MBE

Mean bias error (MBE) is calculated by:

$$\text{MBE} = \frac{1}{n} \sum\_{i=1}^{n} \left( \tilde{\mathbf{x}}\_{i} - \mathbf{x}\_{i} \right) \tag{18}$$

### 3.9. MAPE

Mean absolute percentage error (MAPE) is calculated by:

Time Series and Renewable Energy Forecasting http://dx.doi.org/10.5772/intechopen.70845 83

$$\text{MAPE} = \frac{1}{n} \sum\_{i=1}^{n} \left| \frac{\tilde{\mathbf{x}}\_i - \mathbf{x}\_i}{\mathbf{x}\_i} \right| \times 100\% \tag{19}$$

### 3.10. MASE


Mean absolute scaled error (MASE) is calculated by:

$$\text{MASE} = \frac{\sum\_{i=1}^{n} |\tilde{\mathbf{x}}\_i - \mathbf{x}\_i|}{\frac{n}{n-1} \sum\_{i=2}^{n} |\mathbf{x}\_i - \mathbf{x}\_{i-1}|} \tag{20}$$

### 3.11. MSPE

Mean square percentage error (MSPE) is calculated by:

$$\text{MSPE} = \frac{1}{n} \sum\_{i=1}^{n} \left( \frac{\tilde{\mathbf{x}}\_i - \mathbf{x}\_i}{\mathbf{x}\_i} \right)^2 \times 100\% \tag{21}$$
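Several of the metrics above can be computed directly from the error vector; the worked example below uses made-up observed ($x$) and predicted ($\tilde{x}$) values to evaluate Eqs. (11), (13), (15), (19) and (20):

```python
import numpy as np

# Toy forecast: x is observed, x_tilde is predicted (both made-up numbers).
x = np.array([3.0, 5.0, 4.0, 6.0])
x_tilde = np.array([2.5, 5.5, 4.0, 5.0])
err = x_tilde - x

mse = np.mean(err ** 2)                          # Eq. (11): 0.375
rmse = np.sqrt(mse)                              # Eq. (13)
mae = np.mean(np.abs(err))                       # Eq. (15): 0.5
mape = np.mean(np.abs(err / x)) * 100            # Eq. (19)
n = len(x)                                       # Eq. (20): scale by the
mase = np.sum(np.abs(err)) / (                   # in-sample naive forecast
    (n / (n - 1)) * np.sum(np.abs(np.diff(x))))  # -> 0.3

print(mse, rmse, mae, mape, mase)
```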

## 4. Time series methods for solar energy/wind power forecasting

Time series methods have been extensively used to forecast solar radiation/power and wind speed/power. Typically, solar and wind data exhibit features such as non-linearity and non-stationarity that cannot be captured by most time series methods. To address this limitation, these methods are combined with computational intelligence or data processing methods, taking advantage of their capabilities to better characterize wind and solar data for more accurate forecasting. These combinations are referred to as hybrid methods, which have proven effective for renewables forecasting.

### 4.1. Time series methods for solar energy forecasting

This section provides a review of the articles that use time series methods individually or in hybrid algorithms for solar radiation/power forecasting. The literature review provides a summary of the solar-related variable that is predicted, the horizon for which the variable is predicted, the performance metrics in use to calculate the forecast error, the time series methods and data in use, and the research findings of each article. Table 1 provides the summary.

### 4.2. Time series methods for wind power forecasting

This section provides a review of the articles that use time series methods individually or in hybrid algorithms for wind speed/power forecasting. The literature review provides a summary of the wind variable that is predicted, the horizon for which the variable is predicted, the performance metrics in use to calculate the forecast error, the time series methods and data in use, and the research findings of each article. Table 2 provides the summary.



The solar forecasting articles [9-16, 25-27] predict variables ranging from 5-, 15-, 30- and 60-min averaged global horizontal irradiance (GHI) to hourly, half-daily and daily solar irradiance, radiation and power, over horizons from 1 min up to 3 days, evaluated with metrics such as RMSE, NRMSE, MAE, MBE, MSE, MAPE, MASE and MAXE. The time series methods in use include naive models, AR, ARMA, ARIMA, regressions in logs, and hybrids such as ARIMA-ANN, ARIMA-BP and ARMA combined with a time delay neural network (TDNN). Data sources include hourly GHI records for locations in the USA (among them Miami and Orlando), the meteorological station of Ajaccio, France, stations of the Spanish National Radiometric Network, solar irradiance of the Paris suburb of Alfortville, 10 months of solar radiation data from the observation station at Nanyang Technological University, a 4.0 kW PV panel in the city of Awali, Kingdom of Bahrain, a National Solar Radiation Data Base (NSRDB) site between 2008 and 2009, 14 years of hourly solar radiation data from SolarAnywhere, solar radiation sensors and the National Digital Forecast Database together with meteorological measurements from local airports in the Los Angeles region, and 1-min solar power data (nearly 200,000 observations) from the solar panel at UCLA. The main findings include:

- ARIMA can obtain better results if used in logs with time-varying coefficients.
- AR and ANN models perform better than other prediction methods (ARMA, k-nearest neighbors, Markov chains, etc.) if the time-series data is not pre-processed.
- Cloud cover information yields more accurate forecasting.
- Neural network models obtain better results for almost all stations, except for the Lerida station where the clearness index-based models outperform.
- The ARMA model has competitive results as compared to the similarity method (SIM), support vector machine (SVM) and neural network (NN).
- The combination of the ARMA and TDNN provides more accurate results than each individual forecaster.
- ARIMA models are proved to effectively capture the autocorrelative structure of the solar irradiance.
- Various climate time series are dependent on the surface air temperature.
- The hybrid ARIMA-BP does not outperform the ARIMA method.
- ARMA outperforms the persistence model for short- and medium-term solar predictions.
- The hybrid method excels the benchmark methods, including the regression, ARIMA and ANN, by 40% and 33.33% for 1-h and 3-h ahead forecasts, respectively.

Table 1. Summary of the articles with time series methods (individual or hybrid) for solar radiation/power forecasting.

The wind forecasting articles [28-30] predict hourly average wind speed, wind power density and mean hourly wind speed (and the corresponding power generation) over horizons from 1 h up to 10 days, with metrics including MAE, RMSE, MSE and RMAE. The time series methods in use include AR, ARMA, ARIMA, GARCH-type models, a hybrid of non-linear regression and PR, and a hybrid of Wavelet, ARMA and a nonlinear autoregressive model with exogenous variables (NARX). Data sources include 2 years of wind speed data from Quetta in Pakistan, daily midday wind speed measurements from 1995 to 2004 together with weather ensemble predictions from 1997 to 2004 for five wind farms in the UK, and 744 hours of wind speed measurements in Odigitria on the Greek island of Crete in March 1996. The main findings include:

- ARMA is more appropriate for prediction intervals and probability forecasts.
- Weather ensemble-based forecasters are shown to perform better than time series models and atmospheric-based models.
- The neural logic-based models perform better than the time series methods.
- The capability of the ARMA process to model the linear features of the data, together with the NARX advantage of compensating the error of the Wavelet-ARMA, enhances the forecast accuracy of the hybrid Wavelet-ARMA-NARX.

Table 2. Summary of the articles with time series methods (individual or hybrid) for wind speed/power forecasting.