**2.1 Restricted *PEXPAR*(1) model**

parameters, necessary and sufficient conditions for stationarity and geometric ergodicity of the *EXPAR*(1) model are given by [6]. The problem of estimation of nonlinear time series in a general framework by conditional least squares (CLS) and maximum likelihood (ML) methods is treated by [7], with application to *EXPAR* models; a forecasting method is proposed by [8]; the *LAN* property was shown in [9], where asymptotically efficient estimates were constructed for the restricted *EXPAR*(1); a genetic algorithm for estimation is used in [10]; Bayesian analysis of these models is introduced in [11]; a parametric and a nonparametric test for the detection of an exponential component in *AR*(1) are constructed by [12]; sup-tests for linearity in a general nonlinear *AR*(1) model, with *EXPAR*(1) as a special case, are constructed by [13] using the trilogy of Likelihood Ratio (LR), Wald and Lagrange Multiplier (LM) tests; the extended Kalman filter (*EKF*) is used in [14]. Given that nonlinear estimation is time consuming, [15] proposed to estimate the nonlinear parameter heuristically from the data; this is a very interesting remark because when the nonlinear parameter is known we get the restricted *EXPAR* model. The applications of the *EXPAR* model are multiple: ecology, hydrology, speech signals, macroeconomics and others; see, for example, [16–21].

On the other hand, seasonal time series exhibiting nonlinear behavior such as that cited before, and having a periodic autocovariance structure, will be inadequately fitted by *SARIMA* models. These models are linear, and the seasonally adjusted data may still show seasonal variations because the structure of the correlations depends on the season. The solution is to use a periodic version of *EXPAR* models. The notion of periodicity, introduced by [22], was used to fit hydrological and financial series and allowed the emergence of new classes of time series models such as the Periodic *GARCH*, the Periodic Bilinear and the *MPAR* model. Motivated by all this, we recently introduced the Periodic restricted *EXPAR*(1) model (see [23]), which consists of a different restricted *EXPAR*(1) for each cycle, and we established a most stringent test of periodicity, since a periodic model is more complicated than a nonperiodic one and its consideration must be justified. We studied the problem of estimation by the least squares (*LS*) method in [24], where Student's test was used for testing the nullity of the coefficients in the application. Traditionally, the estimation step must be followed by tests of nullity of the coefficients, and the major tests used are the Wald, LR and LM tests. We used a Wald test for testing the nullity of one coefficient, and consequently for testing linearity, in [25].

In this chapter, we present the quasi maximum likelihood (*QML*) estimation of the parameters, which coincides with the *LS* estimation of [24] under the assumption that the density is Gaussian; these estimators are asymptotically normal under quite general conditions. This plays a role in the construction of confidence intervals for the parameters. We then treat the problem of testing the nullity of parameters, which leads us to a linearity test using the standard and well known LR test. This test is based on the comparison between the maxima of the constrained and unconstrained quasi log likelihoods (see for example [26] or [27]): the null hypothesis is accepted if the difference is small enough, or equivalently *H*<sub>0</sub> ought to be rejected for large values of the difference. The problem is standard because the periodic model is restricted, i.e. the nonlinear parameter is known, and for the other parameters 0 is an interior point of the parameter space; the LR statistic then asymptotically follows the *χ*<sup>2</sup> distribution under *H*<sub>0</sub>, just like the Wald statistic, but we chose the former because it does not require estimation of the information matrix. It is known that the two tests are asymptotically equivalent and may be identical; see [26] for more details.

The chapter is organized as follows. In Section 2, we introduce the restricted *PEXPAR* model, present the asymptotic normality of the QML estimators and construct confidence intervals for the parameters. Section 3 provides the LR tests.

*Recent Advances in Numerical Simulations*

Let {*Y<sub>t</sub>*}<sub>*t*≥1</sub> be a seasonal stochastic process with period *S* (*S* ≥ 2).

**Definition 1**

The process {*Y<sub>t</sub>*}<sub>*t*≥1</sub> is a Periodic Restricted EXPonential AutoRegressive model of order 1 (restricted *PEXPAR*(1)) if it is a solution of the nonlinear difference equation given by

$$Y_t = \left(\varphi_{t,1} + \varphi_{t,2}\exp\left(-\gamma Y_{t-1}^2\right)\right) Y_{t-1} + \varepsilon_t, \quad t \in \mathbb{N},\tag{1}$$

where {*ε<sub>t</sub>*}<sub>*t*≥1</sub> is *iid*(0, *σ*<sub>*t*</sub><sup>2</sup>), *φ*<sub>*t*,1</sub> and *φ*<sub>*t*,2</sub> are the autoregressive parameters and *γ* > 0 is the known nonlinear parameter. A heuristic determination of *γ* from the data is

$$\hat{\gamma} = -\frac{\log \varepsilon}{\max_{1 \le t \le n} Y_t^2},\tag{2}$$

where *ε* is a small number and *n* is the number of observations (cf. [15]).
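As a quick sketch, Eq. (2) can be computed directly; the data and the value of *ε* below are illustrative, not taken from the chapter:

```python
import math

def heuristic_gamma(y, eps=0.01):
    """Heuristic gamma of Eq. (2): gamma = -log(eps) / max_t y_t^2.
    At the largest observed amplitude, exp(-gamma * y^2) equals eps,
    so the exponential regime is effectively switched off there."""
    return -math.log(eps) / max(v * v for v in y)

# illustrative data (not from the chapter); max Y_t^2 = 4.0
y = [0.5, -1.2, 2.0, 0.3]
g = heuristic_gamma(y, eps=0.01)
print(round(g, 4))                      # 1.1513, i.e. log(100)/4
print(round(math.exp(-g * 4.0), 6))     # 0.01, i.e. eps at the largest Y_t^2
```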

The autoregressive parameters and the innovation variance are periodic of period *S*, that is,

$$\varphi_{t+kS,1} = \varphi_{t,1}, \quad \varphi_{t+kS,2} = \varphi_{t,2} \quad \text{and} \quad \sigma_{t+kS}^2 = \sigma_t^2, \quad \forall k, t \in \mathbb{N}. \tag{3}$$
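In code, the periodicity of Eq. (3) just means indexing the parameters by the season; a trivial sketch with *S* = 2 and illustrative values:

```python
def season_of(t, S):
    """Season index i in {1, ..., S} of time t >= 1 (t = i + S*tau)."""
    return (t - 1) % S + 1

S = 2
phi1 = {1: -0.3, 2: -0.8}   # phi_{i,1} for seasons i = 1, 2 (illustrative)

# Eq. (3): the parameters at t and t + k*S coincide
print(phi1[season_of(3, S)] == phi1[season_of(3 + 4 * S, S)])  # True
```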

To point out the periodicity, let *t* = *i* + *Sτ*, *i* = 1, … , *S* and *τ* ∈ ℕ; then Eq. (1) becomes

$$Y_{i+S\tau} = \left(\varphi_{i,1} + \varphi_{i,2}\exp\left(-\gamma Y_{i+S\tau-1}^2\right)\right) Y_{i+S\tau-1} + \varepsilon_{i+S\tau}, \quad i = 1, \dots, S, \quad \tau \in \mathbb{N}. \tag{4}$$

In Eq. (4), *Y*<sub>*i*+*Sτ*</sub> is the value of *Y<sub>t</sub>* during the *i*-th season of cycle *τ*, and *φ*<sub>*i*,1</sub>, *φ*<sub>*i*,2</sub> are the model parameters at season *i*. The effective autoregressive coefficient depends on *Y*<sub>*i*+*Sτ*−1</sub>: for large |*Y*<sub>*i*+*Sτ*−1</sub>| we have *φ*<sub>*i*,1</sub> + *φ*<sub>*i*,2</sub> exp(−*γY*<sub>*i*+*Sτ*−1</sub><sup>2</sup>) ≈ *φ*<sub>*i*,1</sub>, while for small |*Y*<sub>*i*+*Sτ*−1</sub>| it is close to *φ*<sub>*i*,1</sub> + *φ*<sub>*i*,2</sub>; of course, the change between these regimes is smooth. In applications, the restricted *PEXPAR*(1) model is fitted to seasonal time series displaying nonlinear features such as amplitude-dependent frequency.
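The smooth transition between the two regimes can be checked numerically; the coefficient values below are illustrative, not taken from the chapter:

```python
import math

def season_coef(y_prev, c1, c2, gamma=1.0):
    """Amplitude-dependent AR coefficient of a season:
    phi_{i,1} + phi_{i,2} * exp(-gamma * y_prev^2)."""
    return c1 + c2 * math.exp(-gamma * y_prev ** 2)

c1, c2 = -0.3, 2.0
print(season_coef(0.0, c1, c2))              # 1.7: small |Y| regime, c1 + c2
print(round(season_coef(5.0, c1, c2), 6))    # -0.3: large |Y| regime, c1 only
```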

These forms of models are new in the time series literature, so it is interesting to run several simulations to see their characteristics. An important fact is their non-normality, as shown by the histogram in **Figure 1** and confirmed by the Shapiro-Wilk test, whose *p*-value = 0.008226 is less than 0.05. The realization of process (A) is given in **Figure 1**; from it and from the correlogram we can see that the process is stationary in each season, due to the fast decay to 0 as *h* increases. Another interesting feature these models can exhibit is limit cycle behavior, which is well known in nonlinear vibrations as one of the possible modes of oscillation. Such a phenomenon is shown in **Figure 2** for model (B).

$$\text{Model (A)}: \begin{cases} Y_{1+2\tau} = \left(-0.3 + 2\exp\left(-Y_{2\tau}^2\right)\right) Y_{2\tau} + \varepsilon_{1+2\tau}\\ Y_{2+2\tau} = \left(-0.8 + \exp\left(-Y_{1+2\tau}^2\right)\right) Y_{1+2\tau} + \varepsilon_{2+2\tau} \end{cases}. \tag{5}$$
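A minimal simulation of model (A) can be sketched as follows, assuming standard Gaussian innovations and the starting value *Y*<sub>0</sub> = 0 (the chapter does not specify the innovation variances used for the figures):

```python
import math
import random

def simulate_model_A(n, seed=0, sigma=1.0):
    """Simulate the restricted PEXPAR_2(1) model (A) of Eq. (5)
    with iid N(0, sigma^2) innovations in both seasons, gamma = 1."""
    rng = random.Random(seed)
    phi = {1: (-0.3, 2.0), 2: (-0.8, 1.0)}   # (phi_{i,1}, phi_{i,2})
    y, prev = [], 0.0                        # start from Y_0 = 0
    for t in range(1, n + 1):
        i = 1 if t % 2 == 1 else 2           # season of observation t
        a, b = phi[i]
        prev = (a + b * math.exp(-prev ** 2)) * prev + rng.gauss(0.0, sigma)
        y.append(prev)
    return y

y = simulate_model_A(200)
print(len(y))   # 200
```

A histogram and seasonal correlogram of `y` should then reproduce the qualitative behavior described above (non-normality, fast decay per season).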

*The Periodic Restricted EXPAR(1) Model DOI: http://dx.doi.org/10.5772/intechopen.94078*

**Figure 1.** *Realization of (A) with corresponding histogram and correlogram.*

#### **Figure 2.**

*Limit cycle from the PEXPAR*<sub>2</sub>(1) *model.*

$$\text{Model (B)}: \begin{cases} Y_{1+2\tau} = \left(0.2 - 1.5\exp\left(-Y_{2\tau}^2\right)\right) Y_{2\tau} + \varepsilon_{1+2\tau}\\ Y_{2+2\tau} = \left(0.8 + 0.3\exp\left(-Y_{1+2\tau}^2\right)\right) Y_{1+2\tau} + \varepsilon_{2+2\tau} \end{cases}. \tag{6}$$

#### **2.2 QML Estimation**

Let *φ* = (*φ*′<sub>1</sub>, … , *φ*′<sub>*S*</sub>)′ ∈ ℝ<sup>2*S*</sup> be the parameter vector, where *φ<sub>i</sub>* = (*φ*<sub>*i*,1</sub>, *φ*<sub>*i*,2</sub>)′, *i* = 1, … , *S*. We want to estimate the true parameter *φ*<sup>0</sup> from the observations *Y*<sub>1</sub>, … , *Y<sub>n</sub>*, where *n* = *mS*, which means that we have *m* full periods of data. The problem is solved by the *QML* method under the following conditions:

*A*1: The periodic restricted exponential autoregressive parameters *φ* satisfy the periodic stationarity condition of (1). A sufficient condition is given by |*φ*<sub>*i*,1</sub>| < 1, *φ*<sub>*i*,2</sub> ∈ ℝ, *i* = 1, … , *S*.

*A*2: The periodically ergodic process {*Y<sub>t</sub>*; *t* ∈ ℕ} is such that *E*(*Y*<sub>*t*</sub><sup>4</sup>) < ∞ for any *t* ∈ ℕ.

Periodic stationarity has not been treated for this model, so stationarity is required for each season; hence *A*1. We can replace assumption *A*2 by *E*(*ε*<sub>*t*</sub><sup>4</sup>) < ∞ for any *t* ∈ ℕ, since *E*(*ε*<sub>*t*</sub><sup>4</sup>) < ∞ ⇒ *E*(*Y*<sub>*t*</sub><sup>4</sup>) < ∞. Under this condition significant outliers are improbable and the existence of the information matrix is guaranteed.

Given an initial value *Y*<sub>0</sub>, the conditional log likelihood of the observations evaluated at *φ* depends on *f*. The *QML* estimator is obtained by replacing *f* by the *N*(0, *σ*<sub>*t*</sub><sup>2</sup>) density:

$$\begin{split} L_{n}\left(\varphi, Y_{1}, \dots, Y_{n}\right) &= -\frac{mS}{2}\log\left(2\pi\right) - \frac{m}{2}\sum_{i=1}^{S}\log\left(\sigma_{i}^{2}\right) \\ &\quad - \sum_{i=1}^{S}\sum_{\tau=0}^{m-1} \frac{\left(Y_{i+S\tau} - \left(\varphi_{i,1} + \varphi_{i,2}\exp\left(-\gamma Y_{i+S\tau-1}^{2}\right)\right)Y_{i+S\tau-1}\right)^{2}}{2\sigma_{i}^{2}}, \end{split}\tag{7}$$

assuming *σ<sub>i</sub>* ≠ 0.

Let *φ̂* be the *QML* estimator; one can see that maximizing *L<sub>n</sub>* is equivalent to minimizing the quantity

$$Q_n\left(\varphi\right) = \frac{1}{n}\sum_{t=1}^{n}\left(Y_t - \left(\varphi_{t,1} + \varphi_{t,2}\exp\left(-\gamma Y_{t-1}^2\right)\right)Y_{t-1}\right)^2.\tag{8}$$

The initial value is unknown, but its choice does not matter for the asymptotic behavior of the *QML* estimator, so we set *Y*<sub>0</sub> = 0, which defines the operational criterion

$$\tilde{Q}_n\left(\varphi\right) = \frac{1}{S}\sum_{i=1}^{S}\tilde{Q}_{i,m}\left(\varphi_i\right)\tag{9}$$

and

$$\tilde{Q}_{i,m}\left(\varphi_{i}\right) = \frac{1}{m}\sum_{\tau=0}^{m-1}\left(Y_{i+S\tau} - \left(\varphi_{i,1} + \varphi_{i,2}\exp\left(-\gamma Y_{i+S\tau-1}^{2}\right)\right)Y_{i+S\tau-1}\right)^2.\tag{10}$$

The first order condition of the *QML* minimization problem is a system of 2*S* linear equations with 2*S* unknowns. The solution is

$$\begin{split}
\begin{bmatrix}\hat{\varphi}_{i,1}\\ \hat{\varphi}_{i,2}\end{bmatrix} &= \begin{bmatrix}\sum_{\tau=0}^{m-1} Y_{S\tau+i-1}^{2} & \sum_{\tau=0}^{m-1} Y_{S\tau+i-1}^{2}\exp\left(-\gamma Y_{S\tau+i-1}^{2}\right)\\ \sum_{\tau=0}^{m-1} Y_{S\tau+i-1}^{2}\exp\left(-\gamma Y_{S\tau+i-1}^{2}\right) & \sum_{\tau=0}^{m-1} Y_{S\tau+i-1}^{2}\exp\left(-2\gamma Y_{S\tau+i-1}^{2}\right)\end{bmatrix}^{-1}\\
&\quad\times \begin{bmatrix}\sum_{\tau=0}^{m-1} Y_{S\tau+i-1}Y_{S\tau+i}\\ \sum_{\tau=0}^{m-1} Y_{S\tau+i-1}Y_{S\tau+i}\exp\left(-\gamma Y_{S\tau+i-1}^{2}\right)\end{bmatrix},\\
\hat{\sigma}_{i}^{2} &= \frac{1}{m}\sum_{\tau=0}^{m-1}\left(Y_{S\tau+i} - \left(\hat{\varphi}_{i,1} + \hat{\varphi}_{i,2}\exp\left(-\gamma Y_{S\tau+i-1}^{2}\right)\right)Y_{S\tau+i-1}\right)^{2}.
\end{split}\tag{11}$$
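Eq. (11) is an ordinary least squares solve on two regressors per season. A sketch in pure Python (the function name and the synthetic data are illustrative; the 2×2 system is solved by Cramer's rule):

```python
import math

def qml_season(y, i, S, gamma):
    """Solve Eq. (11) for season i (1-based) from y = [Y_1, ..., Y_n],
    n = m*S, with Y_0 = 0. Returns (phi_i1_hat, phi_i2_hat, sigma2_hat)."""
    yy = [0.0] + list(y)                       # yy[t] = Y_t, with Y_0 = 0
    m = len(y) // S
    a11 = a12 = a22 = b1 = b2 = 0.0
    for tau in range(m):
        t = S * tau + i                        # index of Y_{S*tau + i}
        x1 = yy[t - 1]                         # regressor Y_{S*tau+i-1}
        x2 = x1 * math.exp(-gamma * x1 ** 2)   # regressor Y * exp(-gamma Y^2)
        a11 += x1 * x1; a12 += x1 * x2; a22 += x2 * x2
        b1 += x1 * yy[t]; b2 += x2 * yy[t]
    det = a11 * a22 - a12 * a12
    p1 = (a22 * b1 - a12 * b2) / det
    p2 = (a11 * b2 - a12 * b1) / det
    s2 = sum((yy[S * tau + i]
              - (p1 + p2 * math.exp(-gamma * yy[S * tau + i - 1] ** 2))
              * yy[S * tau + i - 1]) ** 2 for tau in range(m)) / m
    return p1, p2, s2

# exact-recovery check: season 1 of S = 2 generated without noise
a, b, g = -0.3, 2.0, 1.0
f = lambda v: (a + b * math.exp(-g * v * v)) * v
y, prev = [], 0.0
for even in [0.5, 1.5, -0.7, 2.0, 0.1]:
    y.append(f(prev))   # odd t: exactly on the season-1 regression
    y.append(even)      # even t: arbitrary values
    prev = even
p1, p2, s2 = qml_season(y, 1, 2, g)
print(round(p1, 6), round(p2, 6))   # -0.3 2.0
```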

We remark that the *QML* estimator is the *LS* estimator, so we can prove the next theorem in the same way.

#### **Theorem**

The *QML* estimator is strongly consistent and we have, for *i* = 1, … , *S*,


$$\sqrt{m}\begin{bmatrix}\hat{\varphi}_{i,1} - \varphi_{i,1}\\ \hat{\varphi}_{i,2} - \varphi_{i,2}\end{bmatrix} \xrightarrow[m\to\infty]{\mathcal{L}} N\left(\underline{0},\, \sigma_i^2\begin{pmatrix} E\left(Y_{i-1}^2\right) & E\left(Y_{i-1}^2\exp\left(-\gamma Y_{i-1}^2\right)\right)\\ E\left(Y_{i-1}^2\exp\left(-\gamma Y_{i-1}^2\right)\right) & E\left(Y_{i-1}^2\exp\left(-2\gamma Y_{i-1}^2\right)\right)\end{pmatrix}^{-1}\right).\tag{12}$$

Furthermore, *φ̂*<sub>*i*,*m*</sub> and *φ̂*<sub>*j*,*m*</sub> are asymptotically independent, *i* ≠ *j*, *i*, *j* = 1, … , *S*.

#### **Proof**

The proof is very standard in the time series literature. Consistency is based on an ergodicity argument, and for normality a central limit theorem for martingale differences is used. The details are similar to those for the *LSE* (see [24]) and hence are omitted. The independence of the *ε*<sub>*i*+*Sτ*</sub> implies that all the cross terms for *i* ≠ *j* are zero, which implies that √*m*(*φ̂*<sub>*i*,*m*</sub> − *φ<sub>i</sub>*) and √*m*(*φ̂*<sub>*j*,*m*</sub> − *φ<sub>j</sub>*), *i* ≠ *j*, are asymptotically uncorrelated.

The *QML* method (Eq. (11)) yields a point estimator; a confidence interval (*CI*) gives a region in which the parameters fall with a given probability (usually 95% or 90%). Based on the asymptotic normality of the *QML* estimators, with asymptotic probability 1 − *α*, *φ*<sub>*i*,*j*</sub> lies in the interval

$$\left(\hat{\varphi}_{i,j} \pm \Phi_{1-\alpha/2}\,\frac{\hat{\sigma}_i}{\sqrt{m}}\sqrt{\left(\Gamma_i\right)_{jj}}\right), \quad j = 1, 2, \quad i = 1, \dots, S,\tag{13}$$

where

$$\Gamma_{i} = \begin{pmatrix} E\left(Y_{i-1}^{2}\right) & E\left(Y_{i-1}^{2}\exp\left(-\gamma Y_{i-1}^{2}\right)\right)\\ E\left(Y_{i-1}^{2}\exp\left(-\gamma Y_{i-1}^{2}\right)\right) & E\left(Y_{i-1}^{2}\exp\left(-2\gamma Y_{i-1}^{2}\right)\right)\end{pmatrix}^{-1}.\tag{14}$$
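A sketch of the interval in Eq. (13), with Γ<sub>*i*</sub> replaced by its sample analogue (the inverse of the moment matrix appearing in Eq. (11)); the simulated data, *γ* = 1 and the hardcoded normal quantiles are illustrative assumptions:

```python
import math
import random

def season_ci(y_obs, y_prev, gamma, alpha=0.05):
    """CI of Eq. (13) for one season: y_obs[tau] = Y_{S*tau+i},
    y_prev[tau] = Y_{S*tau+i-1}. Gamma_i is estimated by the sample
    moment matrix; normal quantiles hardcoded for alpha in {0.05, 0.10}."""
    z = {0.05: 1.959964, 0.10: 1.644854}[alpha]
    m = len(y_obs)
    x = [(v, v * math.exp(-gamma * v * v)) for v in y_prev]
    a11 = sum(u * u for u, _ in x) / m
    a12 = sum(u * w for u, w in x) / m
    a22 = sum(w * w for _, w in x) / m
    b1 = sum(u * yo for (u, _), yo in zip(x, y_obs)) / m
    b2 = sum(w * yo for (_, w), yo in zip(x, y_obs)) / m
    det = a11 * a22 - a12 * a12
    p1 = (a22 * b1 - a12 * b2) / det           # phi_hat_{i,1} (Eq. (11))
    p2 = (a11 * b2 - a12 * b1) / det           # phi_hat_{i,2}
    s2 = sum((yo - (p1 + p2 * math.exp(-gamma * u * u)) * u) ** 2
             for (u, _), yo in zip(x, y_obs)) / m
    g11, g22 = a22 / det, a11 / det            # diagonal of Gamma_i (Eq. (14))
    h1 = z * math.sqrt(s2 / m) * math.sqrt(g11)
    h2 = z * math.sqrt(s2 / m) * math.sqrt(g22)
    return (p1 - h1, p1 + h1), (p2 - h2, p2 + h2)

# one simulated season (illustrative EXPAR-type recursion, gamma = 1)
rng = random.Random(1)
prev, obs, pre = 0.0, [], []
for _ in range(500):
    cur = (0.4 - 0.9 * math.exp(-prev * prev)) * prev + rng.gauss(0.0, 1.0)
    pre.append(prev); obs.append(cur); prev = cur
ci1_95, ci2_95 = season_ci(obs, pre, gamma=1.0, alpha=0.05)
ci1_90, ci2_90 = season_ci(obs, pre, gamma=1.0, alpha=0.10)
print(ci1_95, ci2_95)
```

As the chapter notes, the 95% intervals come out wider than the 90% ones.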

Here Φ<sub>1−*α*/2</sub> denotes the 1 − *α*/2 quantile of the standard normal distribution; that is, the *CI* contains the true parameters in 100(1 − *α*)% of all repeated samples.

To examine the performance of the *QML* estimators, we construct *CI* of the parameters from the simulation of the restricted *PEXPAR*<sub>2</sub>(1) model with parameters *φ*<sub>1</sub> = (−0.8, 1.2)′ and *φ*<sub>2</sub> = (0.4, −0.9)′, with sizes *n* = 200, 500 and 1000, for the significance levels *α* = 10% and 5%, and 1000 replications. From **Tables 1–3** we deduce that the parameters are well estimated, and as *n* increases the length of the *CI* decreases, showing that the estimates are consistent. Obviously, a higher confidence level produces wider *CI*.

| *n* = **200** | *CI*(*φ*<sub>1,1</sub>) | *CI*(*φ*<sub>1,2</sub>) | *CI*(*φ*<sub>2,1</sub>) | *CI*(*φ*<sub>2,2</sub>) |
|---|---|---|---|---|
| *α* = 10% | [−0.8206, −0.7796] | [1.1136, 1.2576] | [0.3801, 0.4119] | [−0.9559, −0.8156] |
| *α* = 5% | [−0.8126, −0.7661] | [1.0893, 1.2618] | [0.3808, 0.4194] | [−0.9967, −0.8260] |

**Table 1.**
*CI of parameters for n = 200.*

| *n* = **500** | *CI*(*φ*<sub>1,1</sub>) | *CI*(*φ*<sub>1,2</sub>) | *CI*(*φ*<sub>2,1</sub>) | *CI*(*φ*<sub>2,2</sub>) |
|---|---|---|---|---|
| *α* = 10% | [−0.8038, −0.7879] | [1.1551, 1.2113] | [0.3874, 0.4001] | [−0.9015, −0.8470] |
| *α* = 5% | [−0.8097, −0.7912] | [1.1783, 1.2453] | [0.3873, 0.4020] | [−0.9104, −0.8448] |

**Table 2.**
*CI of parameters for n = 500.*

| *n* = **1000** | *CI*(*φ*<sub>1,1</sub>) | *CI*(*φ*<sub>1,2</sub>) | *CI*(*φ*<sub>2,1</sub>) | *CI*(*φ*<sub>2,2</sub>) |
|---|---|---|---|---|
| *α* = 10% | [−0.8027, −0.7952] | [1.1909, 1.2191] | [0.3978, 0.4040] | [−0.9072, −0.8793] |
| *α* = 5% | [−0.8030, −0.7938] | [1.1797, 1.2139] | [0.3958, 0.4034] | [−0.9119, −0.8791] |

**Table 3.**
*CI of parameters for n = 1000.*

**3. Likelihood Ratio tests**

**3.1 Test for the Nullity of One Coefficient**

The asymptotic normality of the *QML* in Eq. (12) can be exploited to perform tests on the parameters. This problem is very standard, especially when 0 is an interior point of the parameter space, and can be handled with the trilogy of Wald, LR and LM tests. We treated the former in [25]; in this chapter we use the LR test, which is based upon the difference between the maxima of the likelihood under the null and under the alternative hypotheses, and has the advantage of not requiring estimation of the information matrix. In this section, we are interested in testing assumptions of the form

$$H_0 : \varphi_{i,2} = 0 \;\left(\text{or } H_0 : \varphi_{i,1} = 0\right) \quad \text{vs} \quad H_1 : \varphi_{i,2} \neq 0 \;\left(\text{or } H_1 : \varphi_{i,1} \neq 0\right),\tag{15}$$

for some given *i*. Under *H*<sub>1</sub>, we have the *QML* estimator *φ̂<sub>i</sub>* given by Eq. (11) and the mean square error *Q̃*<sub>*i*,*m*</sub>(*φ̂<sub>i</sub>*) given by Eq. (10), while *φ̃<sub>i</sub>* = (*φ̃*<sub>*i*,1</sub>, 0)′ is the *QML* estimator under *H*<sub>0</sub>, where

$$\tilde{\varphi}_{i,1} = \frac{\sum_{\tau=0}^{m-1} Y_{S\tau+i-1}Y_{S\tau+i}}{\sum_{\tau=0}^{m-1} Y_{S\tau+i-1}^2},\tag{16}$$

and the corresponding mean square error under the null is

$$\tilde{Q}_{i,m}\left(\tilde{\varphi}_i\right) = \frac{1}{m}\sum_{\tau=0}^{m-1}\left(Y_{i+S\tau} - \tilde{\varphi}_{i,1}Y_{i+S\tau-1}\right)^2.\tag{17}$$

The usual LR statistic is based on

$$\lambda_{i,m} = \frac{L\left(\tilde{\varphi}_i, \tilde{\sigma}_i^2\right)}{L\left(\hat{\varphi}_i, \hat{\sigma}_i^2\right)} = \left(\frac{\tilde{Q}_{i,m}\left(\tilde{\varphi}_i\right)}{\tilde{Q}_{i,m}\left(\hat{\varphi}_i\right)}\right)^{-\frac{m}{2}};\tag{18}$$

then the test rejects *H*<sub>0</sub> at the asymptotic level *α* when

$$LR_{i,m} = -2\log\lambda_{i,m} = m\log\left(\frac{\tilde{Q}_{i,m}\left(\tilde{\varphi}_i\right)}{\tilde{Q}_{i,m}\left(\hat{\varphi}_i\right)}\right) > \chi_1^2\left(1-\alpha\right),\tag{19}$$

where *χ*<sub>1</sub><sup>2</sup>(1 − *α*) is the (1 − *α*) quantile of the *χ*<sup>2</sup> distribution with 1 degree of freedom.
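The full test of Eqs. (15)–(19) for *H*<sub>0</sub> : *φ*<sub>*i*,2</sub> = 0 can be sketched for one season as follows; the simulated data, *γ* = 1 and the hardcoded *χ*<sup>2</sup> quantile are illustrative assumptions:

```python
import math
import random

def lr_test_phi2(y_obs, y_prev, gamma, chi2_crit=3.841459):
    """LR test of H0: phi_{i,2} = 0 for one season (Eqs. (15)-(19)).
    y_obs[tau] = Y_{S*tau+i}, y_prev[tau] = Y_{S*tau+i-1};
    chi2_crit defaults to the 0.95 quantile of chi-square with 1 df."""
    m = len(y_obs)
    # unrestricted QML (Eq. (11)) on the two regressors
    x1 = list(y_prev)
    x2 = [v * math.exp(-gamma * v * v) for v in y_prev]
    a11 = sum(u * u for u in x1)
    a12 = sum(u * w for u, w in zip(x1, x2))
    a22 = sum(w * w for w in x2)
    b1 = sum(u * yo for u, yo in zip(x1, y_obs))
    b2 = sum(w * yo for w, yo in zip(x2, y_obs))
    det = a11 * a22 - a12 * a12
    p1 = (a22 * b1 - a12 * b2) / det
    p2 = (a11 * b2 - a12 * b1) / det
    q_hat = sum((yo - p1 * u - p2 * w) ** 2
                for u, w, yo in zip(x1, x2, y_obs)) / m
    # restricted estimator under H0 (Eq. (16)) and its MSE (Eq. (17))
    p1_0 = b1 / a11
    q_tilde = sum((yo - p1_0 * u) ** 2 for u, yo in zip(x1, y_obs)) / m
    lr = m * math.log(q_tilde / q_hat)        # statistic of Eq. (19)
    return lr, lr > chi2_crit

# simulated season with a strong exponential component (illustrative)
rng = random.Random(7)
prev, obs, pre = 0.5, [], []
for _ in range(300):
    cur = (-0.3 + 2.0 * math.exp(-prev * prev)) * prev + rng.gauss(0.0, 0.3)
    pre.append(prev); obs.append(cur); prev = cur
lr, reject = lr_test_phi2(obs, pre, gamma=1.0)
print(reject)   # with this strong nonlinearity, H0 should be rejected
```

Since the restricted fit is nested in the unrestricted one, *Q̃*(*φ̃*) ≥ *Q̃*(*φ̂*), so the statistic is always nonnegative.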