
RMSE relative to the AR benchmark model of 10–18%. The average RMSE reduction over the forecast horizons is 12%. On average, the FAANN outperforms the ANN and DFM models, with reductions in RMSE of 6 and 8%, respectively.

• Long-term interest rate: for estimation purposes, the same package and algorithm used for the previous variables are implemented. The optimal network in abbreviated form is N(8-3-1). The results in Table 4 show that the FAANN model produces more accurate forecasts than all competing models, both at the individual forecast horizons and on average across these horizons. Compared to the AR benchmark, the FAANN provides reductions in RMSE ranging from 27 to 45%, with an average reduction of around 38%. The performance of the FAANN model stands out, followed by the ANN and the DFM with average reductions in RMSE of 9 and 5%, respectively, relative to the AR benchmark model. Compared to the ANN and the DFM, the FAANN model's reduction in RMSE is around 28 and 32%, respectively.

| Model | 3 months | 6 months | 12 months | Average |
|-------|----------|----------|-----------|---------|
| FAANN | 0.7281   | 0.6051   | 0.5498    | 0.6277  |
| DFM   | 0.9834   | 0.9042   | 0.9584    | 0.9487  |
| ANN   | 0.9893   | 0.8981   | 0.8306    | 0.9060  |
| AR    | 0.2052   | 0.2140   | 0.2308    | 0.2167  |

Table 4. Out-of-sample (January 2007–December 2011) RMSE for long-term interest rate.

Note: See note to Table 2.

#### 4.3. Out-of-sample forecast evaluation of the combined forecasts of the DFM and ANN models

Table 5 reports the results of combining the forecasts of the DFM and ANN models. We use these two models in particular to merge their respective advantages: the ANN model is flexible enough to capture potentially complex nonlinear relationships that are not easily accommodated by traditional linear models, while the DFM can accommodate a large number of variables. As in Table 2, Table 5 shows the ratio of the RMSE for a given combining method to the RMSE of the AR benchmark model. We find that the AR benchmark model performs poorly compared to all combining methods. In general, the nonlinear ANN combining method outperforms all the other combining methods for all variables at all forecasting horizons; hence, it offers a more reliable method for generating forecasts of the variables of interest. The nonlinear ANN combining method provides a large reduction in RMSE of around 7–20% relative to the AR model over all forecasting horizons and variables. It also beats the best individual forecasts of the DFM and ANN models for all variables over all forecasting horizons, with sizable reductions of around 1–15% of the RMSE of the best individual forecasts. We note in addition that the discount MSFE with δ = 0.9 as a combining method performs nearly as well as the best individual model for all variables and forecasting horizons. The variance–covariance (VACO) combining method is, on average, less accurate than the other combining methods over all forecasting horizons and variables. Finally, the combined forecasts are more accurate at long horizons, which we attribute to the contribution of the nonlinear model in the combination, as nonlinear models produce more accurate forecasts at long horizons.

Table 5. Forecast combining results of the DFM and ANN-RMSE for variables (January 2007–December 2011).
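The discount MSFE combining scheme discussed above can be sketched in a few lines. The example below computes discounted-MSFE weights with δ = 0.9 for two forecast series; the function name and the toy error data are illustrative assumptions, not the chapter's actual data or software.

```python
import numpy as np

def discount_msfe_weights(errors, delta=0.9):
    """Weights inversely proportional to discounted mean squared
    forecast errors: w_i ∝ 1 / sum_t delta^(T-t) * e_{it}^2."""
    errors = np.asarray(errors, dtype=float)       # shape (n_models, T)
    T = errors.shape[1]
    # Most recent error gets discount delta^0 = 1; older errors shrink.
    discounts = delta ** np.arange(T - 1, -1, -1)
    dmsfe = (discounts * errors**2).sum(axis=1)
    inv = 1.0 / dmsfe
    return inv / inv.sum()

# Toy forecast errors for a DFM and an ANN over 6 periods (illustrative only).
e_dfm = [0.4, 0.3, 0.5, 0.2, 0.4, 0.3]
e_ann = [0.5, 0.4, 0.2, 0.3, 0.2, 0.2]
w = discount_msfe_weights([e_dfm, e_ann], delta=0.9)

# Combine two hypothetical point forecasts with the resulting weights.
combined = w[0] * 1.8 + w[1] * 2.1
```

Because the ANN series has smaller recent errors in this toy data, it receives the larger weight; with δ < 1, past errors are progressively down-weighted, so the scheme adapts to recent forecasting performance.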

#### 4.4. Comparison of forecasting performance of combination of models or information and combination of forecasts

Here, we compare the forecasting performance of the combination of models (information), namely the FAANN model, with the best forecast combinations of the ANN and DFM models.


Table 6. Forecast results of the best combination of DFM and ANN model and FAANN-RMSE for variables (January 2007–December 2011).

Table 6 presents the RMSE ratios of the FAANN model and of the best forecast combination to the AR benchmark model over the out-of-sample period. The results indicate that the FAANN model generates more accurate forecasts than the DFM for all variables and at all forecast horizons, with improvements of between 2 and 10% reduction in RMSE. These results thus indicate the superiority of augmenting the nonlinear method with factors (FAANN) over the linear one (DFM) across the three different series and the three forecast horizons.
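The RMSE ratios reported in Tables 4–6 follow a simple construction: the RMSE of each model is divided by the RMSE of the AR benchmark, so a ratio below 1 indicates an improvement over the benchmark. A minimal sketch with made-up numbers (the series and values are illustrative, not the chapter's data):

```python
import numpy as np

def rmse(actual, forecast):
    """Root mean squared error of a forecast series."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return np.sqrt(np.mean((a - f) ** 2))

# Illustrative data: a ratio below 1 means the model beats the AR benchmark.
actual   = [2.0, 2.1, 1.9, 2.2, 2.0]
ar_fc    = [1.8, 2.4, 2.1, 1.9, 2.3]
faann_fc = [1.9, 2.2, 1.9, 2.1, 2.1]

ratio = rmse(actual, faann_fc) / rmse(actual, ar_fc)
```

A ratio of, say, 0.90 corresponds to the "10% reduction in RMSE relative to the AR benchmark" phrasing used throughout the section.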

To confirm the RMSE results, the Diebold and Mariano [14] test of equal forecast accuracy is used to evaluate the forecasts. The test statistic employed here is given by

$$
S = \frac{\bar{d}}{\sqrt{\hat{V}(\bar{d})}}, \qquad \bar{d} = \frac{1}{T}\sum_{t=1}^{T}\left(e_{1t}^{2} - e_{2t}^{2}\right),
$$

where $\bar{d}$ is the mean difference of the squared prediction errors and $\hat{V}(\bar{d})$ is its estimated variance. Here, $e_{1t}$ denotes the forecast error from the FAANN model, and $e_{2t}$ denotes the forecast error from the AR benchmark model or from the best combined forecast of the DFM and ANN models. The $S$ statistic asymptotically follows a standard normal distribution. A significantly negative value of $S$ means that the FAANN model outperforms the other model in out-of-sample forecasting. Table 7 shows the results of the Diebold-Mariano test between the FAANN and the AR benchmark, and between the FAANN and the best combined forecasts of the DFM and ANN. The test results confirm that the FAANN models provide the lowest RMSEs. In summary, the FAANN models provide significantly better forecasts at the 5% and 10% levels than the AR model and the best combined forecasts of the DFM and ANN models.
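The statistic $S$ can be computed directly from the two error series. The sketch below uses the simple sample-variance estimator for $\hat{V}(\bar{d})$, which assumes one-step-ahead forecasts (no autocorrelation correction for longer horizons); the error data are illustrative, not the chapter's.

```python
import numpy as np

def diebold_mariano(e1, e2):
    """Diebold-Mariano statistic S = dbar / sqrt(Vhat(dbar)) under
    squared-error loss. e1, e2: forecast errors of the two models."""
    d = np.asarray(e1, float) ** 2 - np.asarray(e2, float) ** 2
    T = d.size
    dbar = d.mean()
    var_dbar = d.var(ddof=1) / T   # estimated variance of the mean difference
    return dbar / np.sqrt(var_dbar)

# Illustrative errors: model 1 (e.g. FAANN) has smaller errors, so S < 0,
# meaning model 1 significantly outperforms model 2 if |S| is large enough.
e_faann = [0.10, -0.20, 0.10, 0.15, -0.10, 0.05, -0.12, 0.08]
e_ar    = [0.40, -0.50, 0.30, 0.45, -0.35, 0.40, -0.50, 0.30]
S = diebold_mariano(e_faann, e_ar)
```

The resulting $S$ is compared against standard normal critical values (e.g. a one-sided test at the 5% level rejects equal accuracy when $S < -1.645$).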


Table 7. Diebold-Mariano test (January 2007–December 2011).
