**5. Volatility forecasting**

The evidence accumulated so far suggests that fixed income return volatility in emerging markets follows a long memory process. This, in turn, implies the existence of fractional dynamics in the data which may be exploited to construct improved volatility forecasts, especially over longer forecasting horizons. To evaluate the forecasting performance of long memory models against short memory models (i.e., the GARCH model), especially over longer horizons, each data set is split in half: each model is estimated for all series over the first half of the sample, and these estimates are then used to forecast volatility over the period covered by the second half of the data. In this manner, out-of-sample forecast accuracy is evaluated. In addition to daily forecasts, this study also calculates monthly forecasts using the well-known property that volatility forecasts are additive: the sum of five daily volatility forecasts produces a weekly forecast, and the summation of weekly forecasts produces a monthly forecast.
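The additivity property described above can be illustrated with a short sketch (the forecast values are purely hypothetical):

```python
import numpy as np

# Hypothetical daily variance forecasts for one month (20 trading days).
# Under the additivity property, multi-period variance forecasts are the
# sum of the constituent one-day-ahead forecasts.
daily_forecasts = np.full(20, 2.5e-05)  # constant 1-day variance forecasts

# Five daily forecasts sum to a weekly forecast...
weekly_forecasts = daily_forecasts.reshape(4, 5).sum(axis=1)

# ...and the weekly forecasts sum to a monthly forecast.
monthly_forecast = weekly_forecasts.sum()

print(weekly_forecasts)   # four weekly variance forecasts
print(monthly_forecast)   # one monthly variance forecast
```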

In addition to the GARCH and long memory models, the RiskMetrics model is also considered for comparative purposes. The RiskMetrics model was popularised by the investment bank JP Morgan and is widely used by financial institutions to model and forecast volatility, especially in the context of the Basle Committee capital adequacy criteria. This model is essentially an exponentially weighted moving average (EWMA). Under the EWMA, the fitted variance from the model, $h\_t$, which provides the multi-step ahead volatility forecast, is a weighted function of the immediately preceding volatility forecast and actual volatility, as given below:

$$h\_{t} = \lambda h\_{t-1} + \left(1 - \lambda\right)\hat{h}\_{t-1} \tag{6}$$

where $0 \leq \lambda \leq 1$ is the smoothing parameter, such that when $\lambda = 0$ the model reduces to a random walk process and when $\lambda = 1$ the model is equivalent to the prior period forecast of volatility. The value of $\lambda$ can be determined empirically as the value that minimises the in-sample sum of squared prediction errors. In this study $\lambda$ is set to 0.94 following standard market practice, which is also consistent with previous research indicating that this value produces accurate forecasts. [25]
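As a minimal sketch, the EWMA recursion in equation (6) can be implemented as follows, using the squared return as the proxy for actual volatility and simulated data in place of the chapter's bond return series:

```python
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, 500)  # simulated daily returns (1% volatility)

lam = 0.94                # RiskMetrics smoothing parameter
h = np.empty_like(returns)
h[0] = returns.var()      # initialise with the sample variance

# EWMA recursion: today's variance forecast is a weighted average of
# yesterday's forecast and yesterday's squared return (volatility proxy).
for t in range(1, len(returns)):
    h[t] = lam * h[t - 1] + (1.0 - lam) * returns[t - 1] ** 2

print(h[-1])  # final one-step-ahead variance forecast
```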

106 Risk Management – Current Issues and Challenges

Equation (4) is re-estimated using the Daubechies 4 (D4) wavelet. These results are presented in Table 4 and are broadly similar in magnitude to those obtained using the Haar wavelet. The noticeable exception is South Africa, where the long memory parameter falls from 0.2679 (when the Haar wavelet is used) to 0.1784 (when the D4 wavelet is used). This notwithstanding, the results are all statistically significant. In sum, the results of this analysis suggest that bond return volatility in emerging markets is characterised by stochastic processes which have a long memory component.

| Volatility series | Identifier | Parameter Estimate | Standard Error | *R*<sup>2</sup> |
|---|---|---|---|---|
| Hong Kong | Intercept | 1.0822\*\* | 0.0105 | |
| | Slope (*d*) | 0.3577\*\* | 0.0887 | 0.8953 |
| Mexico | Intercept | 1.5824\*\* | 0.1996 | |
| | Slope (*d*) | 0.2611\* | 0.1083 | 0.9076 |
| South Africa | Intercept | 1.6585\*\* | 0.2210 | |
| | Slope (*d*) | 0.1784\*\* | 0.1575 | 0.9412 |

**Table 4.** Estimates of the Long Memory Parameter using the Daubechies 4 Wavelet

Notes: To estimate the long memory parameter *d*, the following regression is performed on the respective volatility series: $\ln \text{Var}(d\_{i,k}) = \ln \sigma^{2} + d \ln 2^{2j} + \varepsilon\_{j}$, where $\text{Var}(d\_{i,k})$ is the variance of the detail coefficients corresponding to the value of the scaling parameter *j* = 1,. . . , *J*, and ε is the error term. '\*\*' and '\*' indicate statistical significance at the 1% and 5% levels, respectively.

The analysis indicates robust evidence of long memory behaviour in the return volatility of emerging market debt. Further, wavelet methods provide a robust fit for the data, as evidenced by the *R*<sup>2</sup> readings presented in the final columns of Tables 3 and 4. If fixed income data exhibit long memory, then they display significant autocorrelation between distant observations. This, in turn, implies that the series realisations may have a predictable component and, perhaps, that past trends in the data can be used to predict future volatility. Attention therefore now turns to an exploration of the forecast performance of models with long memory relative to the standard volatility models.

#### **5.1. Standard forecast evaluation**

Two standard symmetric measures are used to evaluate forecast accuracy, namely, the mean absolute error (MAE) and the root mean square error (RMSE). They are defined below:

$$MAE = \frac{1}{\tau} \sum\_{t=T+1}^{T+\tau} \left| h\_t^f - r\_t^2 \right| \tag{7}$$

$$RMSE = \sqrt{\frac{1}{\tau} \sum\_{t=T+1}^{T+\tau} \left( h\_t^f - r\_t^2 \right)^2} \tag{8}$$

where τ is the number of forecast data points and $r\_t^2$ is the proxy for volatility. Both the MAE and RMSE assume the underlying loss function to be symmetric. Furthermore, under these evaluation criteria the model which minimises the loss function is preferred.
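A small illustration of how the two loss functions in equations (7) and (8) rank competing forecasts (the forecast and squared-return figures are invented for the example):

```python
import numpy as np

def mae(forecasts, realised):
    """Mean absolute error between variance forecasts and the squared-return proxy."""
    return np.mean(np.abs(forecasts - realised))

def rmse(forecasts, realised):
    """Root mean square error between variance forecasts and the squared-return proxy."""
    return np.sqrt(np.mean((forecasts - realised) ** 2))

# Toy comparison: two hypothetical forecast series against squared returns.
r_squared = np.array([1.0e-04, 4.0e-04, 2.2e-04, 9.0e-05])
model_a = np.array([1.5e-04, 3.0e-04, 2.0e-04, 1.0e-04])
model_b = np.array([4.0e-04, 4.0e-04, 4.0e-04, 4.0e-04])

# The model minimising the loss function is preferred.
print(mae(model_a, r_squared) < mae(model_b, r_squared))    # True
print(rmse(model_a, r_squared) < rmse(model_b, r_squared))  # True
```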

Table 5 reports the out-of-sample performance of the estimated models based on the MAE and RMSE forecast error statistics. At the daily level, the results are not unexpected. The GARCH model dominates forecast accuracy for South Africa on the basis of both the MAE and RMSE. For Mexico, the RiskMetrics model dominates on the basis of the MAE while the GARCH model delivers the most accurate forecasts when the RMSE is used as the criterion. For Hong Kong the GARCH process is preferred on the basis of the MAE while, surprisingly, the long memory model delivers the best performance when the RMSE is used as a reference. However, in some cases the forecast accuracy of the models is close; for instance, at the daily level the forecast MAE statistics for the GARCH, RiskMetrics and FIGARCH models are virtually indistinguishable. More generally, the finding of GARCH superiority at the daily level is consistent with a wide empirical literature attesting to the superiority of the GARCH model at forecasting volatility over daily frequencies or short horizons.
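For context, the short-horizon behaviour that underlies GARCH's strong daily performance can be seen in its multi-step forecast function: k-step-ahead forecasts revert geometrically toward the unconditional variance at rate (α + β). A sketch with assumed, not estimated, parameters:

```python
# Illustrative GARCH(1,1) parameters (assumed, not taken from the chapter's data):
# h_{t+1} = omega + alpha * r_t**2 + beta * h_t
omega, alpha, beta = 2.0e-06, 0.08, 0.90
uncond = omega / (1.0 - alpha - beta)  # unconditional (long-run) variance

def garch_forecast(h_next, k):
    """k-step-ahead variance forecast given the one-step forecast h_next:
    E[h_{t+k}] = uncond + (alpha + beta)**(k - 1) * (h_next - uncond)."""
    return uncond + (alpha + beta) ** (k - 1) * (h_next - uncond)

h_next = 4.0e-04  # hypothetical elevated one-step forecast (above the long run)
print([garch_forecast(h_next, k) for k in (1, 5, 20)])
# The forecasts decay geometrically toward the unconditional variance,
# in contrast with the slow hyperbolic decay implied by long memory.
```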

Long Memory in the Volatility of Local Currency Bond Markets: Evidence from Hong Kong, Mexico and South Africa

| Model | GARCH MAE | GARCH RMSE | RiskMetrics MAE | RiskMetrics RMSE | FIGARCH MAE | FIGARCH RMSE |
|---|---|---|---|---|---|---|
| Hong Kong | 1.84e-05\* | 2.82e-04 | 3.36e-05 | 8.09e-04 | 2.73e-04 | 1.92e-05\* |
| Mexico | 1.91e-04 | 3.18e-04\* | 4.49e-05\* | 5.72e-04 | 6.22e-05 | 6.73e-04 |
| South Africa | 2.63e-05\* | 1.83e-04\* | 2.82e-04 | 1.95e-04 | 2.88e-05 | 2.31e-04 |

Notes: '\*' indicates the preferred model.

**Table 5.** Daily Forecast Results

At the monthly level (i.e., at a longer horizon) the GARCH model also delivers the most accurate results. This finding is surprising: long memory implies that widely separated observations are associated with each other, which in turn suggests that volatility realisations are connected over long lags. Yet the results show that even at comparatively longer horizons the GARCH model still delivers the most accurate volatility forecasts. Indeed, Table 6 shows that the forecast MAE statistics for Mexico and South Africa are 3.13e-03 and 3.92e-03, respectively, which are smaller than those from long memory models. The same result holds true for the forecast RMSE statistics. This appears to suggest that long memory models, while theoretically appealing, are not particularly helpful in deriving accurate volatility forecasts, especially over long horizons.

| Model | GARCH MAE | GARCH RMSE | RiskMetrics MAE | RiskMetrics RMSE | FIGARCH MAE | FIGARCH RMSE |
|---|---|---|---|---|---|---|
| Hong Kong | 4.27e-04 | 2.26e-03\* | 2.35e-03\* | 4.09e-03 | 4.39e-04 | 4.17e-03 |
| Mexico | 3.13e-03\* | 4.66e-03 | 5.72e-03 | 4.31e-03\* | 4.58e-03 | 6.30e-03 |
| South Africa | 3.92e-03\* | 4.23e-03\* | 4.89e-03 | 4.82e-03 | 4.27e-03 | 4.31e-03 |

Notes: '\*' indicates the preferred model.

**Table 6.** Monthly Forecast Results

#### **6. Value-At-Risk evaluation**

VaR is a widely used measure of the exposure of a portfolio to market risk. The VaR of a position describes the expected maximum loss over a target horizon within a given confidence interval due to an adverse movement in the relevant fixed income yield (or price). VaR is now widely used as an internal risk management tool by financial institutions and as a regulatory measure of risk exposure. [26] In addition, the VaR method is the cornerstone of the 1996 market risk amendment to the Basle Accord (Bank for International Settlements (BIS), 1996). The Basle Accord prescribes the VaR method so that financial institutions can meet the capital requirements to cover the market risk they incur in the course of their daily business operations. Under this framework, operational evaluation takes the form of backtesting volatility forecasts and exception reporting.

In particular, the Basle Accord stipulates that, for the purpose of calculating regulatory market risk capital, VaR estimates must be calculated at the 99 percent probability level using daily data over a minimum sample period of at least one business year (equivalent to 250 trading days), and that these estimates be updated at least every quarter (i.e., every 60 trading days). Against this background, the well-known delta-normal specification is employed:

$$\text{VaR} = 3\, N\_{\alpha}\, h^{f}\, V \tag{9}$$

where $N\_{\alpha}$ is the appropriate standard normal deviate, $h^{f}$ is the volatility forecast, the number three represents the minimum regulatory Basle multiplicative factor and *V* is the initial portfolio value. While the Basle Accord prescribes a 99 percent probability level, the 97.5 percent and 95 percent confidence levels are also examined for greater comprehensiveness and for consistency with previous studies. The validity of such VaR calculations is assessed, or 'backtested', by comparing actual daily trading (net) losses with the estimated VaR and noting the number of 'exceptions', in the sense of days when the VaR estimate was insufficient to cover actual trading losses. Regulatory scrutiny is triggered where such exceptions occur frequently, and in practice this leads to a range of penalties for the financial institution concerned. [27]

In line with the rolling window approach to VaR evaluation mandated by the Basle Committee rules, initial volatility forecasts and VaR measures are constructed over intervals of 60 trading days, with the estimation sample then rolled forward and the models updated every 60 observations before the next set of volatility forecasts is produced. The first three years of data (752 observations) are used for initial model parameter estimation, leaving 1819 observations for volatility forecasting and the construction and evaluation of VaR measures. Specifically, this provides 30 sub-samples of 60 trading days in length over which VaR is assessed. The assessment of VaR performance is conducted through appraisal of the out-of-sample VaR failure rates associated with VaR measures constructed using the forecast values derived from the GARCH, RiskMetrics and long memory models. The focus on out-of-sample failure rates is motivated by the requirements of risk managers, who obtain VaR estimates in real time and must use parameters obtained from an already observed sample to evaluate the risks associated with current and future random movements in risk factors. As a result, a credible test of VaR construction methods under alternative volatility forecasting models is their performance outside the sample used to estimate the underlying parameters.
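Under these rules, the delta-normal VaR of equation (9) reduces to a one-line calculation. The sketch below treats the forecast $h^f$ as a variance and therefore takes its square root; the portfolio value is hypothetical:

```python
from statistics import NormalDist

def delta_normal_var(h_forecast, confidence, value, multiplier=3.0):
    """Regulatory delta-normal VaR: multiplier * z_alpha * sigma * V.
    h_forecast is interpreted as a variance forecast, hence the square root."""
    z = NormalDist().inv_cdf(confidence)  # standard normal deviate N_alpha
    return multiplier * z * (h_forecast ** 0.5) * value

# Hypothetical inputs: a 1-day variance forecast for a 100m portfolio.
h_f = 1.0e-04        # variance forecast (daily volatility of 1%)
V = 100_000_000

# The Basle level (99%) plus the two additional levels examined in the text.
for level in (0.99, 0.975, 0.95):
    print(f"{level:.3f}: {delta_normal_var(h_f, level, V):,.0f}")
```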


Table 7 reports the out-of-sample VaR failure rates. The results are diverse and highlight that, in many of the markets considered, the forecasting model that minimises the percentage of daily VaR exceedances is sensitive to the specification of the probability level. When the Basle Committee rules are applied (i.e., the 99 percent probability level) the results indicate that it is the GARCH and RiskMetrics models that provide the exceedance-minimising methods for the fixed income markets considered. At the 99 percent probability level the long memory model is generally the weakest performer, although in the case of Hong Kong and South Africa it is the second-best model in terms of delivering accurate VaR measures. In addition, it is important to note that in many cases the level of accuracy across the various models is close, as reflected by the closeness of the VaR failure rates. At the 97.5 and 95 percent probability levels model performance is more varied, with all models demonstrating varying degrees of accuracy. As a generalisation, these results are mixed, but the evidence suggests that at the Basle prudential level the simpler models help provide improved VaR estimates that minimise occasions when the minimum capital requirement identified by the VaR methodology would have fallen short of actual trading losses.
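The failure-rate calculation underlying Table 7 (exceptions divided by observations, per the table notes) can be sketched as follows, with simulated losses standing in for the actual trading data:

```python
import numpy as np

def var_failure_rate(losses, var_estimates):
    """Backtest: fraction of days on which the realised loss exceeded the VaR
    estimate (an 'exception'). Failure rate = exceptions / observations."""
    exceptions = np.sum(losses > var_estimates)
    return exceptions / len(losses)

rng = np.random.default_rng(1)
# Simulated daily losses (positive = loss) over one business year,
# against a constant 99% delta-normal VaR line with sigma = 1.
losses = rng.normal(0.0, 1.0, 250)
var_99 = 2.326 * 1.0

rate = var_failure_rate(losses, np.full(250, var_99))
print(rate)  # a well-calibrated model should sit near the nominal 1% rate
```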


| Model | Hong Kong 99% | Hong Kong 97.5% | Hong Kong 95% | Mexico 99% | Mexico 97.5% | Mexico 95% | South Africa 99% | South Africa 97.5% | South Africa 95% |
|---|---|---|---|---|---|---|---|---|---|
| RM | 0.0178\* | 0.0347 | 0.0224\* | 0.0326 | 0.0378 | 0.0224 | 0.0311 | 0.0218\* | 0.0154 |
| GARCH | 0.0192 | 0.0256\* | 0.0536 | 0.0312 | 0.0356\* | 0.0192\* | 0.0286\* | 0.0261 | 0.0152\* |
| FIGARCH | 0.0185 | 0.0391 | 0.0493 | 0.0297\* | 0.0521 | 0.0222 | 0.0303 | 0.0323 | 0.0179 |

Notes:

1. VaR is value-at-risk.

2. RM is the RiskMetrics model, GARCH is the generalized autoregressive conditional heteroskedasticity model and FIGARCH is the fractionally integrated GARCH model.

3. Model failure rates are the number of exceptions divided by the number of observations.

4. '\*' denotes the preferred model.

**Table 7.** VaR Failure Rates – Out-of-Sample

## **7. Conclusions**

Recent empirical evidence concerning the nature of volatility dynamics in fixed income markets suggests the existence of a long memory component. Since volatility in fixed income returns is an important aspect of portfolio management, it is essential to accurately characterise the time series properties of fixed income volatility, especially in the context of emerging markets, where local currency-denominated sovereign bonds have been the fastest growing market in recent years. Accordingly, the objective of this analysis was to examine the existence of long memory behaviour in the volatility structure of total return indices for the local currency bond markets of Hong Kong, Mexico and South Africa. Against this background, the long memory parameter is estimated using methods based on wavelets, which have gained prominence in recent years. Furthermore, this study has compared and evaluated the performance of a long memory model against standard volatility models (the ubiquitous GARCH and RiskMetrics processes) in order to evaluate their power in delivering accurate volatility forecasts over longer horizons in an out-of-sample setting. This endeavour is motivated by recognition of the importance of accurate volatility forecasts in a wide range of applications, including tactical and strategic decision making, and by the limited empirical evidence available to date for emerging fixed income markets. Finally, the performance of the standard GARCH, RiskMetrics and FIGARCH models is evaluated in the context of value-at-risk (VaR) estimation given the Basle regulatory framework.

The main findings of this research are threefold. First, evidence of long memory is conclusively demonstrated in emerging market local currency sovereign debt markets. To counteract the possibility of finding spurious evidence of long memory, a variety of wavelet forms are considered; the findings from these tests are complementary and therefore suggest that the finding of long memory is not spurious. Second, the presence of a long memory structure in the volatility of these fixed income markets suggests that volatility observations in the recent past and the remote past are associated with each other. Since the series realisations are not independent over time, past volatility may potentially be exploited to predict future volatility, especially over long horizons. Accordingly, the out-of-sample forecasting performance of the long memory model and the standard GARCH and RiskMetrics models is compared. While none of the estimated models consistently outperforms the others, a key generalisation can be made: on the basis of the forecast MAE and RMSE statistics, the information content of long memory models does not consistently generate improved volatility forecasts, especially over long horizons, relative to the standard GARCH model. Indeed, the GARCH model generally provides the most accurate forecasts at the monthly horizon. Third, with respect to VaR estimation, the results show that both the standard GARCH and RiskMetrics models generally deliver more accurate VaR measures relative to the long memory process.

These findings have three important implications. First, the exploitation of long memory models based on wavelet analysis may not have great relevance in the context of emerging market debt in terms of delivering superior forecast performance. Second, the existence of a long memory structure in volatility is not an essential condition for the derivation of accurate volatility forecasts, even over a long horizon. Indeed, this research suggests that long memory models appear to be of limited practical forecast value, especially over long horizons, for Hong Kong, Mexico and South Africa. Put differently, the computational complexity of long memory modelling is not commensurate with the benefits in terms of forecast power. Third, the results of the VaR estimation may provide guidance on more effective prudential standards for operational risk measurement and, as a result, may help ensure adequate capitalisation and reduce the probability of financial distress. The results highlight the importance of using out-of-sample forecasting techniques and the stipulated probability level for the identification of methods that minimise the occurrence of VaR exceptions. Standard models – RiskMetrics and GARCH – that are already widely used by market participants are generally shown to outperform the more computationally intensive wavelet-derived FIGARCH model in estimating VaR across the probability levels considered.

In sum, this research has evaluated the long memory properties of return volatility in fixed income markets. It complements the literature on long memory models and their forecast performance, which has attracted interest in other asset classes, and its results may potentially be used to inform portfolio and risk analysis. In particular, it is shown that in the context of VaR estimation existing models based on the GARCH and/or RiskMetrics process are more accurate (and simpler) than their long memory counterpart. Some caveats to these results exist, however. First, squared returns provide a noisy proxy for the 'true' volatility; in this analysis, data constraints limited the alternative options, but future research may find that the application of realised variance produces more accurate forecasts. Second, future research may also consider exploring the relevance of other long memory models, for example models with asymmetric effects, given that market volatility is often reported as being 'directional', i.e., higher in a down market than in an up market.

[5] Bollerslev T, Mikkelsen HO. Modelling and pricing long memory in stock market volatility. J Econometrics. 1996; 73: 151-84.

[6] Gencay R, Selcuk F, Whitcher B. Scaling properties of foreign exchange volatility. Phys A. 2001; 249-66.

[7] Hurst HE. Long-term storage capacity of reservoirs. Transactions of the American Society of Civil Engineers. 1951; 116: 770-99.

[8] Mandelbrot BB, Wallis J. Noah, Joseph and operational hydrology. Water Resources Research. 1968; 4: 909-18.

[9] Mandelbrot BB. When can price be arbitraged efficiently? A limit to the validity of the random walk and martingale models. Rev Econ Stat. 1971; 53: 225-236.

[10] Mandelbrot BB. Fractals: form, chance and dimensions. New York: Free Press; 1977.

[11] Lo A. Long-term memory in stock market prices. Econometrica. 1991; 59(5): 1279-1313.

[12] Granger CWJ, Joyeux R. An introduction to long-memory time series models and fractional differencing. J Time Series Analysis. 1980; 1(1): 15-29.

[13] Hosking JRM. Fractional differencing. Biometrika. 1981; 68(1): 165-76.

[14] Baillie RT. Long memory processes and fractional integration in econometrics. J Econometrics. 1996; 73(1): 5-59.

[15] Poon SH. A practical guide to forecasting financial market volatility. Chichester: John Wiley & Sons Ltd; 2005.

[16] Jarque CM, Bera AK. A test of normality of observations and regression residuals. Int Stat Rev. 1987; 55: 163-72.

[17] Engle RF. Autoregressive conditional heteroskedasticity with estimates of the variance of United Kingdom inflation. Econometrica. 1982; 50: 987-1007.

[18] DiSario R, Li H, McCarthy J, Saraoglu H. Long memory in the volatility of an emerging equity market: the case of Turkey. J International Financial Markets Institutions & Money. 2008; 18: 305-12.

[19] Jensen M. Using wavelets to obtain a consistent ordinary least squares estimator of the fractional differencing parameter. J Forecasting. 1999; 18: 17-32.

[20] Gencay R, Selcuk F, Whitcher B. An introduction to wavelets and other filtering methods in finance and economics. San Diego: Academic Press; 2002.

[21] Ramsey JB. Wavelets in economics and finance: past and future. Stud Nonlinear Dyn E. 2002; 6(3): 1-27.

[22] Aussem A, Campbell J, Murtagh F. Wavelet-based feature extraction and decomposition strategies for financial forecasting. J Computational Intelligence Financ. 1998; March/April: 5-12.

[23] Mallat SG. A theory for multiresolution signal decomposition: the wavelet representation. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1989; 11: 674-693.

[24] Jensen M. An alternative maximum likelihood estimator of long-memory processes using compactly supported wavelets. J Econ Dyn Control. 2000; 24: 361-387.
