Machine Learning for Time Series Analysis

## ARIMA Models with Time-Dependent Coefficients: Official Statistics Examples

*Guy Mélard*

## **Abstract**

About 25 years ago, effective methods for dealing with time series models that vary with time appeared in the statistical literature. Except in a few cases, they have never been used for economic statistics. In this chapter, we consider autoregressive integrated moving average (ARIMA) models with time-dependent coefficients (tdARIMA) applied to monthly industrial production series. We start with a small-size study with time-dependent integrated autoregressive (tdARI) models on Belgian series compared to standard ARI models with constant coefficients. Then, a second, bigger, illustration is given on 293 U.S. industrial production time series with tdARIMA models. We employ the software package Tramo to obtain linearized series and model specifications and build both the ARIMA models with constant coefficients (cARIMA) and the tdARIMA models, using specialized software. In these tdARIMA models, we use the simplest specification for each coefficient: a simple regression with respect to time. Surprisingly, for a large part of the series, there are statistically significant slopes, indicating that the tdARIMA models fit the series better than the cARIMA models.

**Keywords:** nonstationary process, time series, time-dependent model, time-varying model, local stationarity

## **1. Introduction**

About 25 years ago, effective methods for dealing with time series models that vary with time appeared in the statistical literature. Except in a few cases, like Van Bellegem and von Sachs [1] for marginal heteroscedasticity in financial data or Kapetanios et al. [2], they are not used for economic statistics. In this chapter, we consider autoregressive integrated moving average (ARIMA) models with time-dependent coefficients (tdARIMA) that provide a natural alternative to standard ARMA models. Several theories appeared in the last 25 years for parametric estimation in that context, including Dahlhaus' approach based on locally stationary processes, see Dahlhaus [3, 4]. To simplify the presentation of the method in Section 2, we first focus on autoregressive integrated (ARI) models before going to the general case of ARIMA. Section 3 is devoted to illustrations of official time series, more precisely

industrial production series. We start with a small-size study on Belgian monthly industrial production and show an improvement for time-dependent autoregressive integrated (tdARI) models with respect to standard ARI models with constant coefficients. Then, a second, bigger, illustration of tdARIMA models is given on 293 U.S. industrial production time series, already used by Proietti and Lütkepohl [5] with a different objective. We employ the software package Tramo from Gómez and Maravall [6] to obtain linearized series and model specifications, and we build both ARIMA models with constant coefficients (cARIMA) and tdARIMA models based on the Tramo specifications. This is done in specialized software since no existing package can cope with these tdARIMA models. In these tdARIMA models, we use the simplest specification for each coefficient: a simple regression with respect to time, hence two parameters, a constant and a slope. Indeed, this is the closest departure from constancy, and this seems natural in an evolving world. We will see that, for a large part of the series, there are statistically significant slopes, indicating that the tdARIMA models fit the series better than the cARIMA models. In a second step, since many of the slopes introduced as additional parameters in the model are not significantly different from 0, they are omitted one by one, starting with the least significant one, until all the remaining slopes are significantly different from 0 at the 5% level. Most of the summary results are improved. Section 4 contains our conclusions.

## **2. Methods**

We consider the well-known class of multiplicative seasonal ARIMA models, see e.g. Gómez and Maravall and Box *et al*. [6, 7]. Models with time-dependent coefficients appear often in econometrics but not in ARIMA models. For a very long time series, there is no reason why the coefficients should stay constant. They can be supposed to vary slowly with time, although breaks could also be considered. This is the reason why linear (or other) functions of time replace the constant coefficients. Time series models with time-varying coefficients have been studied, mainly from a theoretical point of view. In addition to [3, 4], several papers [8–10] provide conditions for the asymptotic properties, hence the justification for statistical inference. Otherwise, our tests on slopes would have no foundation. These conditions are of course enforced in the estimation procedure.

### **2.1 The model**

To illustrate a simple ARIMA model with a time-dependent coefficient, we can consider the ARMA(1,1) model. Let the series be denoted by $y = (y_1, y_2, \ldots, y_n)$. Then a tdARMA(1,1) model is described by the following equation:

$$
y_t = \phi_t^{(n)} y_{t-1} + e_t - \theta_t^{(n)} e_{t-1}, \tag{1}
$$

where the $e_t$ are independent random variables with mean zero and standard deviation $\sigma$, and the time-dependent coefficients $\phi_t^{(n)}$ and $\theta_t^{(n)}$ depend on time $t$, on $n$, the length of the series, and on a small number of parameters stored in an $m \times 1$ vector $\beta$. The simplest specification for $\phi_t^{(n)}$, for example, is as follows:

*ARIMA Models with Time-Dependent Coefficients: Official Statistics Examples DOI: http://dx.doi.org/10.5772/intechopen.108789*

$$
\phi\_t^{(n)}(\beta) = \phi + \frac{1}{n-1} \left( t - \frac{n+1}{2} \right) \phi', \tag{2}
$$

where $\phi$ is an intercept and $\phi'$ is a slope, and a similar expression holds for $\theta_t^{(n)}(\beta)$ using two other parameters $\theta$ and $\theta'$. The vector $\beta$ contains all parameters to be estimated, those in $\phi_t^{(n)}(\beta)$ (like $\phi$ and $\phi'$, here) and in $\theta_t^{(n)}(\beta)$ ($\theta$ and $\theta'$), but not the scale factor $\sigma$, which is estimated separately. For the corresponding cARIMA model, there is of course no slope, i.e., $\phi' = \theta' = 0$. For a lag $k$ instead of 1, we add a subscript $k$ to the coefficient symbols.
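As a concrete illustration, the linear specification in Eq. (2) can be evaluated directly. The following is a minimal sketch in Python; the function name and the parameter values are ours, chosen for illustration only:

```python
import numpy as np

def td_coefficient(phi, phi_slope, n):
    """Time-dependent AR coefficient of Eq. (2):
    phi_t = phi + (t - (n + 1) / 2) * phi_slope / (n - 1), for t = 1..n."""
    t = np.arange(1, n + 1)
    return phi + (t - (n + 1) / 2) * phi_slope / (n - 1)

# With phi = 0.5 and slope phi' = 0.2, the coefficient moves linearly
# from phi - phi'/2 = 0.4 at t = 1 to phi + phi'/2 = 0.6 at t = n.
coeffs = td_coefficient(phi=0.5, phi_slope=0.2, n=101)
```

Note that, thanks to the centering, the coefficient equals $\phi$ exactly at mid-sample.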

Let us now consider a general tdARMA(*p*, *q*) model. It is defined by the equation

$$y_t = \sum_{k=1}^p \phi_{tk}^{(n)}(\beta)\, y_{t-k} + e_t - \sum_{k=1}^q \theta_{tk}^{(n)}(\beta)\, e_{t-k},\tag{3}$$

where the coefficients $\phi_{tk}^{(n)}(\beta)$, $k = 1, \ldots, p$, and $\theta_{tk}^{(n)}(\beta)$, $k = 1, \ldots, q$, are deterministic functions of $t$ and, possibly, of $n$. The $e_t$, $t = 1, 2, \ldots$, are as before. We suppose that the additional number of parameters is small. Practically, for economic time series, linear or exponential functions of time, as in Eq. (2), seem to be enough instead of constant coefficients, but there is no problem in using other functions, up to some point. In other cases, see Alj *et al.* [11], periodic functions can be considered. In practice, we suppose that the coefficients are constant before the first observation.
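To make Eq. (3) concrete, here is a sketch that simulates a tdARMA(1,1) process with the linear coefficients of Eq. (2); all parameter values are hypothetical, and this is not the software used in the chapter:

```python
import numpy as np

def simulate_tdarma11(n, phi, phi_slope, theta, theta_slope, sigma=1.0, seed=0):
    """Simulate a tdARMA(1,1) series following Eq. (1), with the linear
    coefficient specification of Eq. (2) for both coefficients.
    Parameter values are hypothetical, for illustration only."""
    rng = np.random.default_rng(seed)
    e = rng.normal(0.0, sigma, n + 1)      # e[0] is the presample innovation
    y = np.zeros(n + 1)
    for t in range(1, n + 1):
        c = (t - (n + 1) / 2) / (n - 1)    # centered, scaled time
        phi_t = phi + c * phi_slope        # phi_t^{(n)}
        theta_t = theta + c * theta_slope  # theta_t^{(n)}
        y[t] = phi_t * y[t - 1] + e[t] - theta_t * e[t - 1]
    return y[1:]

y = simulate_tdarma11(n=300, phi=0.6, phi_slope=0.3, theta=0.2, theta_slope=-0.1)
```

Here the AR coefficient stays inside the unit circle at all times ($0.45$ to $0.75$), in line with the stability condition mentioned in Section 2.2.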

Adding marginal heteroscedasticity should also be tried. Van Bellegem and von Sachs [1] had already shown the usefulness of a time-dependent variance. Indeed, there is no reason why the innovation standard deviation should be constant. We replace $e_{t-k}$, $k = 0, 1, \ldots, q$, in Eq. (3) with $g_{t-k}^{(n)}(\beta)\, e_{t-k}$, where $g_t^{(n)}(\beta)$ is a (strictly positive) deterministic function of $t$ and, possibly, of $n$, depending on the parameters, so that the standard deviation becomes $g_t^{(n)}(\beta)\,\sigma > 0$. Adding $g_t^{(n)}(\beta)$ is also covered by Azrak and Mélard [8, 12]. In practice, we used an exponential function of time for $g_t^{(n)}(\beta)$.

Since the series are nonstationary, we also need to consider regular differences $\nabla$ and seasonal differences $\nabla_s$, where $s$ is the seasonal period ($s = 12$ for monthly data), applied to the observations, possibly after a square root or logarithmic transformation. Furthermore, the series are not seasonally adjusted, so the so-called seasonal multiplicative models of Box *et al.* [7] are also needed.

### **2.2 The estimation method**

For any tdARIMA model, we can estimate the parameters by maximizing the logarithm of the Gaussian likelihood. Time Series Expert [13], and more precisely its computational engine ANSECH, is used for that purpose. It is based on an exact algorithm for the computation of the Gaussian likelihood [14] and an implementation of a Levenberg–Marquardt nonlinear least-squares algorithm. Under some very general conditions [8, 12], it is shown that the quasi-maximum likelihood estimator $\hat{\beta}$ converges to the true value of $\beta$, and $\hat{\beta}$ is asymptotically normal, more precisely $\sqrt{n}\,(\hat{\beta} - \beta) \xrightarrow{D} N(0, V^{-1})$ when $n \to \infty$, where $\xrightarrow{D}$ indicates convergence in distribution, and $V^{-1}$ is the asymptotic covariance matrix. Moreover, $V$ can be estimated as a

by-product of estimation. Let us denote its estimator by $\hat{V}_n$. The Student $t$ statistics shown in the next section make use of the standard errors deduced from the estimation of $V$. Using the asymptotic covariance matrix, it is also possible to design a Wald test for a subset $b$ of $r$ among the $m$ parameters in $\beta$, for example, to test that all the slopes are equal to 0, using a $\chi^2$ distribution. Let $R$ be an $r \times m$ restriction matrix composed of the rows of the $m \times m$ identity matrix that correspond to the parameters in the subset $b$. Then, $b = R\beta$. The Wald statistic for testing $b = 0$ is then $n\,\hat{b}'\,(R \hat{V}_n R')^{-1}\,\hat{b}$, where $\hat{b}$ is the estimate of $b$ and $'$ indicates transposition. Under the null hypothesis, the statistic converges in distribution to a $\chi^2$ distribution with $r$ degrees of freedom when $n \to \infty$.
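The Wald statistic described above is straightforward to compute once $\hat{\beta}$ and $\hat{V}_n$ are available. The following sketch uses made-up numbers, not results from the chapter:

```python
import numpy as np

def wald_statistic(beta_hat, V_hat, idx, n):
    """Wald statistic for H0: the parameters beta[idx] (e.g. all slopes)
    are zero, as described in the text:
        W = n * b_hat' (R V_hat_n R')^{-1} b_hat,
    asymptotically chi-square with r = len(idx) degrees of freedom."""
    beta_hat = np.asarray(beta_hat, dtype=float)
    R = np.eye(len(beta_hat))[list(idx)]          # r x m restriction matrix
    b = R @ beta_hat                              # b_hat = R beta_hat
    return float(n * b @ np.linalg.solve(R @ np.asarray(V_hat) @ R.T, b))

# Made-up example: beta = (phi, phi', theta, theta'); test phi' = theta' = 0
# and compare W to the chi2 5% critical value with r = 2, i.e. 5.991.
beta_hat = [0.60, 0.25, 0.20, -0.10]
V_hat = np.diag([0.5, 0.8, 0.5, 0.8])
W = wald_statistic(beta_hat, V_hat, idx=[1, 3], n=300)
```

With these illustrative numbers the statistic far exceeds the critical value, so the joint nullity of the slopes would be rejected.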

Note that centering of time around its mean $(n+1)/2$ in Eq. (2) improves the statistical properties of the estimators by reducing the amount of correlation between their elements, and that the factor $1/(n-1)$ is there to avoid explosive behavior when $n \to \infty$.
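The effect of centering can be checked numerically: for the regressors $(1, t)$, the cross-product matrix has a large off-diagonal entry, which centering reduces to exactly zero. A small illustration, taking $n = 108$ as for the Belgian series:

```python
import numpy as np

# For the regressors (1, t), t = 1..n, the cross-product matrix X'X has
# off-diagonal entry sum(t) = n(n+1)/2, so intercept and slope estimates
# are strongly correlated; centering t around (n+1)/2 makes it exactly 0.
n = 108
t = np.arange(1, n + 1, dtype=float)
raw = np.column_stack([np.ones(n), t])
centered = np.column_stack([np.ones(n), t - (n + 1) / 2])

off_raw = (raw.T @ raw)[0, 1]                  # n(n+1)/2 = 5886, far from 0
off_centered = (centered.T @ centered)[0, 1]   # 0 by construction
```

A zero off-diagonal entry means the intercept and slope estimators are asymptotically uncorrelated.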

Note also that the conditions for convergence and asymptotic normality are satisfied in the present case because a sufficient condition [15] is that the AR and MA polynomials have their roots outside the unit circle at all times and that condition is checked during estimation.

An asymptotic theory for locally stationary processes due to Dahlhaus [3, 4] can also be used. There seems to exist only one software implementation, the R package LSTS (for Locally Stationary Time Series) by Olea *et al.* [16] to support the estimation of locally stationary ARMA models, see also Palma *et al.* [17]. Since it does not cope with the multiplicative seasonal models necessary to deal with seasonally unadjusted time series, we have preferred to use Azrak and Mélard [8] with Time Series Expert for estimation. See Azrak and Mélard [18] for a comparison of the existing theories.

### **2.3 The datasets**

In the first empirical analysis, the number of series is limited, and simple pure autoregressive models are used. The purpose is to show the basic elements of the methodology. We used a dataset of indices for the monthly Belgian industrial production for the period 1985–1994 by the various branches of activity, 26 in all. Nine years are used for fitting the models and a tenth year is used to compute ex-post forecasts and the mean absolute percentage error (MAPE). An automatic procedure is applied to fit ARIMA models and we retained the 20 series out of 26 for which pure integrated autoregressive or ARI$(p,d)(P,D)_{12}$ models are fitted to the series of 108 observations. Recall that these models are defined by the following equation:

$$
\phi_p(L)\Phi_P(L^s)\nabla^d\nabla_{12}^D y_t = e_t, \tag{4}
$$

where *<sup>L</sup>* is the lag operator, such that *Lyt* <sup>¼</sup> *yt*�<sup>1</sup>,*ϕp*ð Þ *<sup>L</sup>* and <sup>Φ</sup>*<sup>P</sup> Ls* ð Þ are, respectively, the regular autoregressive and the seasonal autoregressive polynomials, of degree *p* and 12*P* in *L*. The model can include transformations and interventions (additive or on the differenced series) which are not detailed here. The fit is characterized by the value of the SBIC criterion. For using time-dependent ARI, or tdARI, models, slope parameters are added for each of the existing coefficients, like *ϕ*<sup>0</sup> for *ϕ* in Eq. (2). The models have therefore coefficients that are linear functions of time. For models in


multiplicative seasonal form, the product of the regular and seasonal polynomials is first computed and slope parameters are added to each lag, but only to lags smaller than 14, for practical reasons. For example, for the AR$(2)(1)_{12}$ model, with the polynomial in the lag operator $L$

$$\left(1 - \phi_1 L - \phi_2 L^2\right)\left(1 - \Phi_1 L^{12}\right) = 1 - \phi_1 L - \phi_2 L^2 - \Phi_1 L^{12} + \phi_1 \Phi_1 L^{13} + \phi_2 \Phi_1 L^{14}, \tag{5}$$

the specification is $1 - \phi_{t1}^{(n)} L - \phi_{t2}^{(n)} L^2 - \phi_{t,12}^{(n)} L^{12} - \phi_{t,13}^{(n)} L^{13} + \phi_2 \Phi_1 L^{14}$, where $\phi_{t1}^{(n)}$ is as in Eq. (2), and

$$
\phi\_{t2}^{(n)} = \phi\_2 + \frac{1}{n-1} \left( t - \frac{n+1}{2} \right) \phi\_2', \\
\phi\_{t,12}^{(n)} = \Phi\_1 + \frac{1}{n-1} \left( t - \frac{n+1}{2} \right) \phi\_{12}',
$$

$$
\phi\_{t,13}^{(n)} = -\phi\_1 \Phi\_1 + \frac{1}{n-1} \left( t - \frac{n+1}{2} \right) \phi\_{13}',
$$

with seven parameters, instead of the full form $1 - \phi_{t1}^{(n)} L - \phi_{t2}^{(n)} L^2 - \phi_{t,12}^{(n)} L^{12} - \phi_{t,13}^{(n)} L^{13} - \phi_{t,14}^{(n)} L^{14}$ that would involve 10 parameters in all. This is enforced to restrict the number of parameters and avoid numerical problems. Note that the factor $1/(n-1)$ is there only for the asymptotic theory and will be omitted in practice.
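The expansion of the multiplicative polynomial in Eq. (5) is just a convolution of coefficient sequences. The following sketch, with arbitrary illustrative values for $\phi_1$, $\phi_2$, $\Phi_1$, reproduces the lag structure to which the slopes are attached:

```python
import numpy as np

# Expand (1 - phi1 L - phi2 L^2)(1 - Phi1 L^12) as in Eq. (5): the product
# is a convolution of the coefficient sequences. Slopes are then attached
# to lags 1, 2, 12, and 13, while lag 14 keeps the constant product phi2*Phi1.
phi1, phi2, Phi1 = 0.4, 0.3, 0.6          # arbitrary illustrative values
regular = np.array([1.0, -phi1, -phi2])   # coefficients of 1, L, L^2
seasonal = np.zeros(13)
seasonal[0], seasonal[12] = 1.0, -Phi1    # coefficients of 1 and L^12
product = np.convolve(regular, seasonal)  # degree-14 polynomial in L
# product[k] is the coefficient of L^k; the nonzero entries sit at
# lags 0, 1, 2, 12, 13, 14, matching the right-hand side of Eq. (5).
```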

In the second empirical analysis, we use a big dataset of U.S. industrial production time series, already used by Proietti and Lütkepohl [5] for assessing transformations in forecasting. See http://www.federalreserve.gov/releases/g17/ipdisk/ip\_nsa.txt. There are 293 time series covering at least January 1986 to December 2018. Most series start earlier, and a few even start in 1919. The models were fitted until December 2016, leaving the remaining months to compare the data to the ex-post forecasts, using either a fixed forecast origin for several horizons or rolling forecasts, each for given horizons.

We employ the software package Tramo described by Gómez and Maravall [6] to obtain partially linearized series by removing outliers and trading day effects. Indeed, the presence of outliers and trading day effects can distort the analysis, as could be seen in preliminary analyses. Selecting the cARIMA models in an automated way is also done using Tramo. Then we replace the constant coefficients by linear functions of $t$ for orders $k \le 13$, giving tdARIMA models, as in Eq. (2), for each lag-$k$ coefficient in the model. At this stage, we do not omit nonsignificant parameters. The cARIMA and tdARIMA models are fitted using the same specialized software package ANSECH included in Time Series Expert, to facilitate the comparison. See **Figure 1** for a schematic representation of the whole automatic procedure. For more complex time dependency, an automatic selection procedure like the one described by Van Bellegem and Dahlhaus [19] is possible.

We compare the results of tdARIMA versus cARIMA models using the following criteria:


#### **Figure 1.**

*Schematic representation for the whole automatic treatment.*


In the early stage of this project, the data were limited to 2016, without a correction for outliers or trading days, and only fixed-origin forecasts were considered. This gave worse results that were indicative but not conclusive.

Note that one can object to the use of the Ljung-Box test statistic to compare models, especially here, because its limiting behavior has no theoretical foundation for tdARIMA models. Like the other criteria, we use it as a descriptive indicator.

## **3. Empirical results**

### **3.1 Two examples**

Before showing the results, we will consider two examples to justify the recourse to the class of tdARIMA models that is the subject of this chapter.


**Figure 2.** *The original series, the index of land transportation (TRTER).*

The first example is taken from the first dataset: the index of land transportation (series TRTER) for Belgium for the period from January 1985 to December 1994, see **Figure 2**. Two additive interventions were automatically considered, respectively, in May 1986 and in February 1992. Let I8605 and I9202 denote the corresponding binary variables. Otherwise, the series is taken in square roots and seasonally differenced. Let TRTER\_TF denote the transformed series after all these operations. It is equal to $\nabla_{12}\left(\sqrt{\text{TRTER}} - b_{\text{I8605}}\,\text{I8605} - b_{\text{I9202}}\,\text{I9202}\right)$, up to a normalizing factor, and is shown in **Figure 3**. The partial autocorrelations of that series, shown in **Figure 4**, reveal a truncation after lag 12. The usual Box and Jenkins analysis leads to the suggestion of a seasonal AR model. Then adding time dependency to the AR coefficient leads to the following model:

$$\left(1 - \Phi_{t,12} L\right)\left[\nabla_{12}\left(\sqrt{\text{TRTER}} - b_{\text{I8605}}\,\text{I8605} - b_{\text{I9202}}\,\text{I9202}\right) - \mu\right] = e_t,$$

where $\Phi_{t,12}$ is estimated by $-0.686 + 6.47 \times 10^{-3}\,(t - 60.5)$, and the estimates of $b_{\text{I8605}}$, $b_{\text{I9202}}$, and $\mu$ are, respectively, equal to $-30.9$, $24.8$, and $4.23$. The standard error corresponding to the slope of $\Phi_{t,12}$ is equal to $1.70 \times 10^{-3}$, so the associated Student statistic is equal to 3.8; hence the slope is significantly different from 0. To explain that significance, let us look at the partial autocorrelation at lag 12 for the transformed series TRTER\_TF: for the first 4 years it is $-0.488$ and for the last 4 years it is $-0.391$. That explains the significantly positive slope for $\Phi_{t,12}$.

**Figure 3.** *The transformed series (TRTER\_TF).*

#### **Figure 4.** *The partial autocorrelation function of the transformed series.*

The second example is taken from the second dataset: the U.S. production index of clothing (B51212) for the period from January 1986 to January 2019, see **Figure 5**.

Tramo has adjusted the series for outliers and proposed a logarithmic transform, and both a regular and a seasonal difference, giving the transformed series B51212DS shown in **Figure 6**.

**Figure 5.** *The original series, the U.S. production index of clothing (B51212).*

**Figure 6.** *The transformed B51212 series (B51212DS).*


Subsequently, Tramo has suggested modeling the series by a seasonal ARIMA model with a regular autoregressive polynomial of degree 3 and a seasonal moving average. We fitted that model using ANSECH and obtained

$$\left(1 + 0.035L - 0.142L^2 - 0.249L^3\right)\nabla\nabla_{12}\log\left(\text{B51212}_t\right) = \left(1 - 0.850L^{12}\right)e_t.$$

Then, we replaced the constant coefficients with linear functions of time and replaced the constant innovation variance with an exponential function of time. Omitting one by one the nonsignificant parameters at the 5% probability level, we obtained finally the following heteroscedastic model but with constant autoregressive and moving average polynomials:

$$\left(1 + 0.044L - 0.142L^2 - 0.228L^3\right)\nabla\nabla_{12}\log\left(\text{B51212}_t\right) = \left(1 - 0.855L^{12}\right)g_t e_t,$$

where $g_t$ is given by $g_t = \exp\left(0.001753\,(t - 193)\right)$.
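To fix ideas about the estimated scale function, it can be evaluated directly; the annual drift implied by the printed slope follows from $\exp(0.001753 \times 12) \approx 1.021$, i.e. about 2% per year for monthly data. A simple check, not part of the original analysis:

```python
import numpy as np

def g(t):
    """Estimated innovation scale for B51212: g_t = exp(0.001753 (t - 193))."""
    return np.exp(0.001753 * (t - 193.0))

# g equals 1 at the centering point t = 193; over 12 months the innovation
# standard deviation grows by a factor exp(0.001753 * 12), about 2.1%.
ratio = g(205) / g(193)
```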

### **3.2 First empirical analysis**

In the first experiment, the number of series is small and simple pure integrated autoregressive models are used.

**Table 1** shows the main results, including those tdAR coefficients for which the test of zero slope leads to a rejection at the 5% level, and the corresponding *t*-statistic.

For example, the AR$(2)(1)_{12}$ model in Eq. (5), which was shown in Section 2.3, is used for nonmetallic manufacturing, in addition to a regular and a seasonal difference. In that case, the standard ARI model is better than the tdARI model for SBIC (826 versus 830) and also provides better forecasts (MAPE = 5.5% versus 9.4%), although there is one significant slope for lag 12, with a Student statistic of 3.6. Even if tdARI models are not systematically better, they often produce better forecasts and sometimes show a better fit or at least some statistically significant slope parameters at the 5% level. All the nonsignificant slopes were left in the model, and that can explain why the SBIC criterion was generally worse for tdARI models. Since that analysis was promising, we were led to consider a bigger dataset.

### **3.3 Second empirical analysis**

The second empirical analysis bears on 293 seasonally unadjusted time series in a dataset of U.S. industrial production. We will show three tables of results. **Table 2** presents a summary of the dataset resulting from Tramo. For example, for 280 series out of 293, a regular difference was used, which accounts for 96% of the dataset. As said above, we preserved this and all parameters in our cARIMA and tdARIMA models. For the tdARIMA models, slopes were added to all autoregressive and moving average coefficients for lags less than or equal to 13.

**Table 3** is based on the initial tdARIMA models, with possibly nonsignificant parameters. We show the percentages for each criterion across the 293 series. For example, more than 50% of the dataset had at least one of the slopes with a Student *t* value greater than 1.96. If we use the Wald test, which gives a better overall view, the hypothesis of null slopes is rejected at the 5% level for more than 44% of the series. If the series were randomly drawn from cARIMA processes, we should expect 5% of rejections, on average. Of course, because of the multiple-test argument, the


#### **Table 1.**

*For each branch of the economy, we give the orders (p,d)(P,D) of the model, SBIC and MAPE (in italics) for the raw ARI model and for the tdARI model (results in bold type are better), the statistically significant slopes (ARk denotes $\phi_k'$), and the corresponding t-value.*



#### **Table 2.**

*Summary of the model selections made by Tramo on the 293 U.S. industrial production series.*

Student tests on the slopes would give a higher proportion of rejections. A simulation study will confirm this later. The use of the Wald test in the present context is therefore essential. Some results are partly disappointing but can be explained: only about 4% of the series have a smaller SBIC for the tdARIMA, but this is mainly due to the useless parameters. About one-half of the series have a smaller residual standard deviation, and for more than 57% the test on residual autocorrelation, based on the Ljung-Box test with 48 lags, gives a better result.

If we retain only the series where the Wald test rejected the constancy of the coefficients, the percentage of smaller SBIC for tdARIMA models is only slightly higher at about 9% and reaches 61% for the residual standard deviation. The percentage for the Ljung-Box test is lower. Indeed, the theory for that test was never developed for tdARIMA models. Forecasting performance was evaluated using the MAPE criterion. For fixed-origin forecasts, about 47% of the series have a smaller MAPE for the tdARIMA models than for the cARIMA models. Among the series for which


#### **Table 3.**

*For each criterion, the percentages of improvement from cARIMA models to tdARIMA models are given over the 293 U.S. series.*

time dependency is retained, only 45% of them benefit from better forecasts. For rolling forecasts for various horizons, the percentages are even smaller, in particular for horizons of 1 and 3 months. The percentages are about the same whether the Wald test rejects constancy or not. That means that, even if the introduction of time dependency improved the fits, it does not improve the forecasts. Recall that, at this stage, the tdARIMA models may have many statistically nonsignificant slopes.

For **Table 4**, starting from the full tdARIMA models of **Table 3**, we omitted, one by one, the least significant slope at the 5% level, see **Figure 1**. In the end, all remaining slopes are thus significantly different from 0. This was done in an automated way in order to avoid mistakes. We will refer to these models as parsimonious tdARIMA models. Of course, the cARIMA models are the same as previously, essentially the same as given by Tramo, but estimated with more digits of accuracy. We notice that the percentage of at least one statistically significant slope, 54.61%, differs slightly from the percentage of rejection of the Wald test on all the slopes, 54.27%. Indeed, for one series (G325A4, Chemicals except for pharmaceuticals and medicines), there are two slightly significant slopes but the global test does not reject their nullity, although the *p*-value is close to 0.05. Anyway, these percentages of improved tdARIMA models are slightly higher than in **Table 3**.

The fitting results are partially better with more than 18% smaller SBIC for tdARIMA models (respectively 34% if we condition on the rejection of the Wald test). Some are worse, however, with 38% for the residual standard deviation instead of 49% for the fully parameterized model (respectively 70% and 61%, if we condition on the rejection of the Wald test), and 27% for the residual autocorrelation instead of 57% for the full model (respectively 50% and 54%, if we condition on the rejection of the Wald test).

Strangely, the forecasting performance with a fixed origin is worse for the parsimonious model than for the full model with the percentage of improvement of tdARIMA models with respect to cARIMA models equal to 28%, instead of 47% (respectively 51% and 45%, if we condition on the rejection of the Wald test). That means that the omitted slopes seem to contribute to the forecasting performance but


*Notes: (\*) Statistically significant slope parameters at the 5% level; (\*\*) contrary to Table 3, nonsignificant slope parameters were omitted one by one until all were statistically significant*.

#### **Table 4.**

*For each criterion, the percentages of improvement from cARIMA models to tdARIMA models are given over the 293 U.S. series. The last column contains percentages conditional to the rejection of nullity of all the slopes by the Wald test.*


that, among the series with time-dependent coefficients, about one-half have provided better forecasts. The picture for rolling forecasts is again worse for the parsimonious models with smaller percentages of improvement in the range of 17-22%, according to the horizon, instead of 32-37% for the full models, but again similar under the condition of rejection of the Wald test (respectively 32-44% instead of 29-40%). Surprisingly, the percentages are systematically higher for horizons 6 and 12 months rather than for those of 1 and 3 months.

One can object that introducing time dependency can introduce some over-fitting: a certain proportion of the tests of nullity of the slopes $\phi_k'$ or $\theta_k'$ can lead to rejection, about 5% when there is only one slope, more otherwise.

To try to answer that natural question, we artificially generated 320 series using cARIMA models, with the same length of 372, again leaving out the last 12 values. We used an airline model for that purpose instead of the large variety of models fitted by Tramo-Seats. Then we added time dependency and proceeded exactly as before. The results are shown in **Table 5**. The percentage of 14.06 for the first criterion (instead of the nominal 5) shows that our rough examination of the largest |*t*| value is better replaced by a simultaneous test on the td parameters, as we did. For SBIC, there are many superfluous parameters, as could be guessed. But about one-half of the tdARIMA models give smaller residual standard deviations, less residual autocorrelation, and smaller forecast errors than their cARIMA counterparts, as expected.

The results show that for a majority of series there is (i) at least one statistically significant slope parameter at the 5% level, (ii) rejection of the nullity of all the slopes using a Wald test that provides better-founded results than the *t*-tests, (iii) smaller residual standard deviation, and (iv) less residual autocorrelation. This is true for the full tdARIMA model specifications but also, at least partly, with more parsimonious tdARIMA models obtained by omitting, one by one, the statistically nonsignificant slopes. At least it is true conditionally on significant time dependency, i.e. when the Wald test rejects the constancy of the coefficients.

The results for the SBIC criterion are not good. For the full tdARIMA models, an explanation is the presence of nonsignificant slope parameters. The problem remains, however, for the parsimonious models. The only unsatisfactory aspect of tdARIMA models is that they fail to improve the forecasts for a majority of the series. Indeed, they confirm that only one-third of the "time-dependent series", i.e. those series which have at least one statistically significant slope parameter, provide better forecasts with a tdARIMA model than with a cARIMA model.

#### **Table 5.**

*For each criterion, the percentages of improvement from going from cARIMA models to tdARIMA models are given over the 320 artificial series. The last column contains the corresponding percentages obtained for the U.S. series taken from Table 3.*

We had already observed similar results with slightly shorter series of the big dataset when the outliers and trading day effects were not handled. Consequently, the presence of outliers or trading day effects is not, as we feared, the cause of the better fits of the tdARIMA models. A common feature is nevertheless that the forecasts are not improved by replacing the cARIMA models with tdARIMA models. This is surprising, although we know that a better fit is no guarantee of better forecasts. Why the forecasts of the tdARIMA models seem to be worse for the U.S. series should be investigated; of course, it may be due to a global change in 2016.

## **4. Conclusions**

It took several decades to go from ARIMA models with constant coefficients to suitable and powerful generalizations with deterministically time-dependent coefficients. We showed the usefulness of the approach for dealing with official statistics time series, which generally have a seasonal component.

We used linear functions of time. We do not expect that functions other than linear ones would be useful, given the inconvenience of adding many parameters, unless we exploit the fact that, since 2019, most of the series in the dataset are available before 1986, often since 1972, and sometimes earlier.

Finally, one weak point of the analysis is that the detection of outliers and trading day effects, and hence the linearization of the time series, is based on cARIMA models. If the time dependency of the coefficients becomes serious for very long official time series, it would be worth trying to extend the Tramo features to tdARIMA models, e.g. to detect outliers simultaneously with the estimation of the time-dependent coefficients of the ARIMA model.

On the other hand, it would also be interesting to be able to conclude that traditional cARIMA models are sufficient for forecasting very long time series and that no substantial gain can be obtained by considering tdARIMA models.

It would be interesting to repeat the analysis with other datasets, quarterly or preferably monthly, like those maintained by Eurostat. Good candidates would be short-term business statistics in industry, trade, and services: production, turnover, etc. A U.S. database like FRED (https://research.stlouisfed.org/econ/mccracken/fred-databases/) could also be considered.

## **Acknowledgements**

I thank Agustín Maravall (for his help in producing linearized time series in Tramo), Rajae Azrak (for her contributions to the theory), Ahmed Ben Amara (for his contribution to a very first version of a part of the program chain which includes Tramo-Seats and Microsoft Excel Visual Basic modules, in addition to our specialized code for estimating tdARIMA models), and Dario Buono, Team Leader of Methodology, Eurostat, Unit B1 (for exchanges, suggestions, and encouragement on previous versions of this chapter). Finally, I thank the editors for their remarks and Mrs. Karla Skuliber, the author service manager, for her efficiency.

## **Conflict of interest**

I declare there is no conflict of interest.

*ARIMA Models with Time-Dependent Coefficients: Official Statistics Examples DOI: http://dx.doi.org/10.5772/intechopen.108789*

## **Author details**

Guy Mélard Université libre de Bruxelles, ECARES, Brussels, Belgium

\*Address all correspondence to: guy.melard@ulb.be

© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## **References**

[1] Van Bellegem S, von Sachs R. Forecasting economic time series with unconditional time-varying variance. International Journal of Forecasting. 2004;**20**:611-627. DOI: 10.1016/j.ijforecast.2003.10.002

[2] Kapetanios G, Marcellino M, Venditti F. Large time-varying parameter VARs: A nonparametric approach. Journal of Applied Econometrics. 2019;**34**:1027-1049. DOI: 10.1002/jae.2722

[3] Dahlhaus R. Maximum likelihood estimation and model selection for locally stationary processes. Journal of Nonparametric Statistics. 1996;**6**:171-191. DOI: 10.1080/10485259608832670

[4] Dahlhaus R. Fitting time series models to nonstationary processes. Annals of Statistics. 1997;**25**:1-37. DOI: 10.1214/aos/1034276620

[5] Proietti T, Lütkepohl H. Does the Box-Cox transformation help in forecasting macroeconomic time series? International Journal of Forecasting. 2013;**29**:88-99. DOI: 10.1016/j.ijforecast.2012.06.001

[6] Gómez V, Maravall A. Automatic modelling methods for univariate series. In: Peña D, Tiao GC, Tsay RS, editors. A Course in Time Series Analysis. New York: Wiley; 2001. pp. 171-201. DOI: 10.1002/9781118032978.ch7

[7] Box GEP, Jenkins GM, Reinsel GC, Ljung GM. Time Series Analysis, Forecasting and Control. 5th ed. New York: Wiley; 2015. xxvi+669 p

[8] Azrak R, Mélard G. Asymptotic properties of quasi-maximum likelihood estimators for ARMA models with time-dependent coefficients. Statistical Inference for Stochastic Processes. 2006;**9**:279-330. DOI: 10.1007/s11203-005-1055-6

[9] Dahlhaus R. A likelihood approximation for locally stationary processes. Annals of Statistics. 2000;**28**: 1762-1794. DOI: 10.1214/aos/1015957480

[10] Dahlhaus R. Locally stationary processes. In: Subba Rao T, Subba Rao S, Rao CR, editors. Handbook of Statistics, Volume 30: Time Series Analysis: Methods and Applications. Amsterdam: Elsevier; 2012. pp. 145-159. DOI: 10.1016/B978-0-444-53858-1.00013-2

[11] Alj A, Azrak R, Ley C, Mélard G. Asymptotic properties of QML estimators for VARMA models with time-dependent coefficients. Scandinavian Journal of Statistics. 2017; **44**:617-635. DOI: 10.1111/sjos.12268

[12] Azrak R, Mélard G. Asymptotic properties of conditional least-squares estimators for array time series. Statistical Inference for Stochastic Processes. 2021;**24**:525-547. DOI: 10.1007/s11203-021-09242-8

[13] Mélard G, Pasteels J-M. User's Manual of Time Series Expert (TSE Version 2.3). Brussels: Institut de Statistique, Université Libre de Bruxelles; 1998. Available from: https://dipot.ulb.ac.be/dspace/retrieve/829842/TSE23E.PDF [Accessed: October 17, 2022]

[14] Mélard G. The likelihood function of a time-dependent ARMA model. In: Anderson OD, Perryman MR, editors. Applied Time Series Analysis. Amsterdam: North-Holland; 1982. pp. 229-239

[15] Mélard G. An indirect proof for the asymptotic properties of VARMA model estimators. Econometrics and Statistics. 2022;**21**:96-111. DOI: 10.1016/j.ecosta.2020.12.004

[16] Olea R, Palma W, Rubio P. Package LSTS. 2015. Available from: https://cran.r-project.org/web/packages/LSTS/LSTS.pdf [Accessed: October 17, 2022]

[17] Palma W, Olea R, Ferreira G. Estimation and forecasting of locally stationary processes. Journal of Forecasting. 2013;**32**:86-96. DOI: 10.1002/for.1259

[18] Azrak R, Mélard G. Autoregressive models with time-dependent coefficients: a comparison between several approaches. Stats. 2022;**5**:784-804. DOI: 10.3390/stats5030046

[19] Van Bellegem S, Dahlhaus R. Semiparametric estimation by model selection for locally stationary processes. Journal of the Royal Statistical Society Series B. 2006;**68**:721-746. DOI: 10.1111/j.1467-9868.2006.00564.x
