Diffusion Processes

## **Chapter 8**

## The L² Structure of Subordinated Solution of Continuous-Time Bilinear Time Series

*Abdelouahab Bibi*

## **Abstract**

Models of stochastic subordination, or random time indexing, have recently been applied to financial returns $(X(t))_{t\geq 0}$ exhibiting characteristic periods of constant values, for instance exchange rates. In reality, sharp and large variations of $X(t)$ do occur. These sharp and large variations are linked to information arrivals and/or represent sudden events, and hence we have a model with jumps. For this purpose, by substituting the usual deterministic time $t$ with a subordinator $(T(t))_{t\geq 0}$ in a stochastic process $(X(t))_{t\geq 0}$, we obtain a new process $(X(T(t)))_{t\geq 0}$ whose stochastic time is governed by the subordinator $(T(t))_{t\geq 0}$. We therefore propose in this paper an alternative approach based on the continuous-time bilinear (*COBL*) process subordinated by a Poisson process (which is a Lévy process), which permits us to introduce further randomness for phenomena that exhibit either speeded-up or slowed-down behavior. The main probabilistic properties of such models are studied and explicit expressions for the higher-order moments are given. Moreover, the moments method (*MM*) is proposed for estimation of the unknown parameters. Simulation studies confirm the theoretical findings and show that the proposed *MM* method can effectively reduce both the bias and the mean square error of the parameter estimates.

**Keywords:** diffusion processes, subordination, Poisson process

## **1. Introduction**

Non-linear continuous-time models were initially discussed by Mohler [1] in control theory and then rapidly extended to time-series analysis by several authors (see [2] for a review). One class of non-linear continuous-time models that has attracted considerable attention from researchers is the class of bilinear diffusion processes, which has been widely studied in time series analysis and in the theory of stochastic differential equations (*SDE*). For instance, among others, Le Breton and Musiela [3] and Bibi and Merahi [4] have considered a process $(X(t))_{t\geq 0}$ generated by the following *SDE*

$$\begin{split}dX(t) &= (\alpha X(t) + \mu)dt + (\gamma X(t) + \beta)dw(t),\ t \geq 0,\ X(0) = X_0 \\ &= \mu(X(t))dt + \sigma(X(t))dw(t) \end{split} \tag{1}$$

denoted hereafter by *COBL*(1), in which $\mu(x) = \alpha x + \mu$ and $\sigma(x) = \gamma x + \beta$ are respectively the drift and diffusion functions, representing the conditional mean and variance of the infinitesimal change of $X(t)$ at time $t$; $(w(t))_{t\geq 0}$ is a real standard Brownian motion defined on some basic filtered space $(\Omega, \mathcal{A}, (\mathcal{A}_t)_{t\geq 0}, P)$ with $E\{X(t)dw(t)\} = 0$. The initial condition $X(0)$ can be either deterministic or a random variable defined on $(\Omega, \mathcal{A}, P)$, independent of $w$, such that $E\{X(0)\} = m_1(0)$ and $Var\{X(0)\} = K_X(0)$. However, the distribution of the stochastic process $X(t)$ solving (1), evaluated at a random time process, say $T(t)$, has been receiving increasing attention in various applied fields. Some examples we have in mind are:


One of the first papers in this field is by Lee and Whitmore [5], who studied general properties of processes delayed by randomized times. In the literature, owing to the interesting properties of the Poisson process, its popularity, and its applicability, various researchers have generalized it in several directions, e.g., compound Poisson processes and/or weighted Poisson distributions. Special attention is given to the case of a Poisson process with randomized time, or Poisson subordinator, i.e., the time process is supposed to be a subordinator: a Poisson process with nondecreasing sample paths. Most published research involving this approach follows Clark [6] and Geman and Ané [7].

In this paper, our interest lies in the statistical inference of the parameters involved in the diffusion process defined in (1) and in its subordination by a Poisson process. Estimation for diffusion processes has been widely studied in the statistical literature by many authors under various restrictions (see [8] for a survey). The major approach to parameter estimation is the maximum likelihood method, which in general presents the difficulty of obtaining a tractable expression for the transition densities. So certain econometric methods have recently been proposed; parameter estimation for continuous-time processes can be achieved through, for instance, the moments method (*MM*) and/or its generalization (*GMM*). These methods are useful for modeling events that occur randomly over a fixed period of time or in a fixed space. Here we consider subordination by assuming a Poisson process for the subordinating variable of *COBL*(1,1), and hence some statistical and probabilistic properties are studied. For this purpose, in the next section we describe a theoretical framework for certain specifications of *COBL*(1,1). More precisely, we discuss the conditions for existence and uniqueness of solutions and their distribution. The moments properties of the *COBL*(1,1) process are presented in section 3, followed by their extension to subordination by a Poisson process. In section 4 we discuss the properties of the subordinated process, in particular its moment properties and its distribution. An estimation approach based on *MM* and on *GMM* (considered as a benchmark) is presented in section 5, enriched by the asymptotic properties of such estimators. In section 6, a Monte-Carlo simulation study of *COBL*(1,1) and its subordinated process is carried out. The final section concludes.

## **2. Theoretical background**

The *SDE* (1) covers many models commonly used in the literature. Some specific examples among others are:

1.*COGARCH*(1,1): This class of processes is defined via the *SDE* $dX(t) = \sigma(t)dB_1(t)$ with $d\sigma^2(t) = (\mu - \alpha\sigma^2(t))dt + \gamma\sigma^2(t)dB_2(t)$, $t > 0$, where $B_1$ and $B_2$ are independent Brownian motions, $\mu > 0$, $\alpha \geq 0$, and $\gamma \geq 0$. So the stochastic volatility equation can be regarded as a particular case of (1) by assuming $\beta = 0$ (see [9]).

2.*CAR*(1): This class of *SDE*s may be obtained by assuming $\gamma = 0$ (see [10]).

3.Gaussian Ornstein-Uhlenbeck (*OU*) process: The *OU* process is defined as

$$dX(t) = (\mu + \alpha X(t))dt + \beta dw(t),\ t \ge 0 \tag{2}$$

with diffusion parameter $\beta > 0$. So it can be obtained from (1) by assuming $\gamma = 0$ (see [10] and the references therein).

4.Geometric Brownian motion (*GBM*): This class of processes is defined as a positive-valued solution process $(X(t))_{t\geq 0}$ of $dX(t) = \alpha X(t)dt + \gamma X(t)dw(t)$, $t \geq 0$. So it can be obtained from (1) by assuming $\beta = \mu = 0$ (see [11]).

#### **2.1 Existence of ergodic and stationary solutions**

The existence of a solution process of equation (1) was investigated by several authors; for instance, Iglói and Terdik [12] studied the same model driven by fractional Brownian innovations. A class of *COBL* processes with time-varying coefficients was studied by Le Breton and Musiela [3], Bibi and Merahi [4] and Leon and Perez-Abreu [13]. Moreover, there are several monographs which discuss the theoretical probabilistic and statistical properties (interested readers are advised to see [14, 15] and the references therein). Hence, a Markovian Itô solution of *SDE* (1) is given by

$$X(t) = \Phi(t)\left\{X(0) + (\mu - \gamma\beta)\int_0^t \Phi^{-1}(s)ds + \beta \int_0^t \Phi^{-1}(s)dw(s)\right\},\ a.s.,\tag{3}$$

where $\Phi(t) = \exp\left\{\left(\alpha - \frac{1}{2}\gamma^2\right)t + \gamma w(t)\right\}$ is the fundamental process solution (see e.g., [14] chapter 8), whose first and second moment functions are $\Psi(t) = E\{\Phi(t)\} = \exp\{\alpha t\}$ and $\phi(t) = E\{\Phi^2(t)\} = \exp\{(2\alpha + \gamma^2)t\}$. The key tool in studying the asymptotic stability of solution (3) is the top-Lyapunov exponent defined by $\lambda_L = \limsup_{t\to+\infty} \frac{1}{t}\log|X(t)|$; if it exists, then $\lambda_L$ controls the long-time asymptotic behavior of $X$. Indeed, if $\lambda_L < +\infty$, *a.s.*, then for sufficiently large $t$ there exists a positive random variable $\xi$ such that $|X(t)| \leq \xi e^{\lambda_L t}$, and hence if $\lambda_L < 0$, then $\lim_{t\to+\infty} X(t) = 0$, *a.s.*

Though the condition $\lambda_L < 0$ could be used as a sufficient condition for asymptotic stability, it is of little use for the practical task of checking stationarity of the solution (3). On the other hand, in statistical applications we often seek conditions ensuring the existence of some moments of the solution process, which the top-Lyapunov exponent criterion cannot provide. However, since the functions $\mu(x)$ and $\sigma(x)$ are locally Lipschitz, the existence and uniqueness of a stationary and ergodic solution process $(X(t))_{t\geq 0}$ given by (3) is ensured by the integrability on $\mathbb{R}_+$ of the speed density $g(y) = \frac{1}{\sigma^2(y)}\exp\left\{2\int_1^y \frac{\mu(x)}{\sigma^2(x)}dx\right\}$ (see [16]), and the density function $f(\cdot)$ of the stationary distribution of the diffusion process (1) is proportional to $g(y)$. Moreover, the unique invariant probability is absolutely continuous with respect to the Lebesgue measure, with density equal to $g$ (up to a constant). Hence, the integrability on $\mathbb{R}_+$ of the function $g$ may be discussed case by case as follows:

1.$\gamma = 0$ and $\beta \neq 0$ (*OU* case): here $g(y) = C\exp\left\{\frac{\alpha}{\beta^2}\left(y + \frac{\mu}{\alpha}\right)^2\right\}$ for some positive constant $C$, and hence $g(y)$ is integrable on $\mathbb{R}_+$ if and only if $\alpha < 0$, for all $\mu \in \mathbb{R}$. Therefore we recognize a $\mathcal{N}\left(-\frac{\mu}{\alpha}, -\frac{\beta^2}{2\alpha}\right)$ invariant distribution for the *OU* process and

$$f(x) = \frac{1}{\sqrt{-2\pi \frac{\beta^2}{2\alpha}}} \exp\left\{ \frac{\alpha}{\beta^2} \left( x + \frac{\mu}{\alpha} \right)^2 \right\}$$

2.$\beta = 0$, $\mu = 0$ (*GBM* case): here $g(y) = Cy^{2(\alpha - \gamma^2)/\gamma^2}$, and hence $g(y)$ is not integrable on $\mathbb{R}_+$; therefore there is no stationary and ergodic solution for the *GBM* process.

3.$\beta = 0$, $\mu \neq 0$ (*COBL*(1,1) case): the function $g(y) = C\left(\frac{1}{y}\right)^{(\gamma^2 - 2\alpha)/\gamma^2 + 1}\exp\left\{-\frac{2\mu}{\gamma^2 y}\right\}$ is integrable if and only if $\mu > 0$, and hence the unique ergodic and stationary solution exists on $\mathbb{R}_+$. Therefore, we recognize an inverse-gamma distribution, denoted $\mathbb{IG}(\delta, \theta)$, for the invariant distribution of the *COBL*(1,1) process, where the shape parameter is $\delta = (\gamma^2 - 2\alpha)/\gamma^2 > 0$, the scale parameter is $\theta = \frac{2\mu}{\gamma^2} > 0$, and

$$f(y) = \frac{\theta^{\delta}}{\Gamma(\delta)} y^{-\delta - 1} \exp\left\{-\theta/y\right\};\ y > 0.$$

The inverse-gamma distribution appears naturally in Bayesian inference as the posterior distribution of the variance in normal sampling. The process associated with this parametrization is often referred to as the *GARCH* diffusion model. Note that the $\mathbb{IG}$ distribution nests some well-known distributions such as the inverse exponential, inverse $\chi^2$ and scaled inverse $\chi^2$ distributions.

*The L² Structure of Subordinated Solution of Continuous-Time Bilinear Time Series. DOI: http://dx.doi.org/10.5772/intechopen.105718*

In view of the above discussion, and since we are interested in the stationary non-Gaussian solution of (1), it is necessary to assume throughout the rest of the paper that the parameters *α*, *μ*, *γ* and *β* are subject to the following assumption:

**Assumption 1.** $\alpha\beta \neq \gamma\mu$, $\mu > 0$, $\gamma \neq 0$ *and* $2\alpha + \gamma^2 < 0$.

**Remark 2.1.** *The case* $\beta \neq 0$ *may be reduced to the case* $\beta = 0$ *by considering the affine transformation* $\widetilde{X}(t) = \frac{\mu}{\gamma\mu - \alpha\beta}\left(\gamma X(t) + \beta\right)$. *The condition* $\gamma\mu \neq \alpha\beta$ *must hold; otherwise equation* (1) *has only a degenerate solution, i.e.,* $X(t) = -\frac{\beta}{\gamma} = -\frac{\mu}{\alpha}$. *The solution* (3) *is however Markovian when* $\beta \neq 0$; *otherwise the solution process is neither a standardized diffusion process nor a martingale. In contrast, if* $\gamma = 0$ *(OU process), the stochastic term is a martingale and hence has vanishing expectation. So, in the sequel, and without loss of generality, we shall assume that* $\beta = 0$, *i.e.,*

$$dX(t) = (\alpha X(t) + \mu)dt + \gamma X(t)dw(t),\ t \ge 0,\quad X(0) = X_0,\tag{4}$$

and this equation will be the subject of our investigation so it is noted hereafter *COBL*(1,1).
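Although the chapter proceeds with closed-form moments, a trajectory of (4) is easy to generate numerically. Below is a minimal Euler-Maruyama sketch; the scheme and all parameter values are illustrative, not part of the original text ($\alpha = -1$, $\mu = 0.5$, $\gamma = 0.5$ satisfy Assumption 1 since $2\alpha + \gamma^2 = -1.75 < 0$):

```python
import math
import random

def simulate_cobl(alpha, mu, gamma, x0, t_max, n_steps, rng):
    """Euler-Maruyama discretization of dX = (alpha*X + mu)dt + gamma*X dw."""
    dt = t_max / n_steps
    sq_dt = math.sqrt(dt)
    x = x0
    path = [x]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, sq_dt)  # Brownian increment over [t, t + dt]
        x = x + (alpha * x + mu) * dt + gamma * x * dw
        path.append(x)
    return path

rng = random.Random(0)
# Illustrative parameters satisfying Assumption 1: 2*alpha + gamma**2 < 0
path = simulate_cobl(alpha=-1.0, mu=0.5, gamma=0.5, x0=1.0,
                     t_max=10.0, n_steps=10_000, rng=rng)
```

With $\mu > 0$ the drift pushes the path back toward the positive half-line, consistent with the inverse-gamma invariant law on $\mathbb{R}_+$ derived in section 2.1.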

**Remark 2.2.** *In the OU diffusion with* $\mu = 0$, *the solution is given by* $X(t) = X(0)e^{\alpha t} + \beta\int_0^t e^{\alpha(t-s)}dw(s)$, $t \geq 0$, *and its invariant probability distribution is Gaussian with mean 0 and variance* $-\frac{\beta^2}{2\alpha}$. *Moreover, under Assumption 1,*


**Remark 2.3.** *In GBM with* $X(0) > 0$, *the solution is given by* $X(t) = \exp\left\{\left(\alpha - \frac{1}{2}\gamma^2\right)t + \gamma w(t)\right\}X(0)$. *So, the distribution of* $X(t)$ *given* $X(0)$ *is log-normal with* $E\{X(t)\} = E\{X(0)\}e^{\alpha t}$ *and* $Var\{X(t)\} = E\{X^2(0)\}e^{2\alpha t}\left(e^{\gamma^2 t} - 1\right)$. *Hence, for any* $k \in \mathbb{N}$, *we have* $E\{X^k(t)\} = E\{X^k(0)\}\exp\left\{\left(k\left(\alpha - \frac{\gamma^2}{2}\right) + k^2\frac{\gamma^2}{2}\right)t\right\}$, *so* $E\{X^k(t)\} \to +\infty$ *as* $t \to \infty$ *whenever* $\left(\alpha - \frac{\gamma^2}{2}\right)k + \frac{\gamma^2}{2}k^2 > 0$. *Additionally,*


## **3. Moments properties of** *COBL***(1,1) process**

In the sequel, we shall focus on the popular sub-model (4). The popularity of such a model comes from its explicit solution in terms of a stochastic integral, i.e.,

$$X(t) = X(0)\Phi(t) + \mu \int_0^t \Phi(t)\Phi^{-1}(s)ds,\ t \ge 0, \tag{5}$$

or equivalently

$$X(t) = X(0) + \int_0^t (\alpha X(s) + \mu) ds + \gamma \int_0^t X(s) dw(s),\ t \ge 0,\quad X(0) = X_0 \tag{6}$$

It is easily verified that the process $(X(t))_{t\geq 0}$ defined by (5) satisfies (1) with $\beta = 0$, i.e., (4), for any *α*, *μ*, *γ* and $X(0)$; it is the unique strong solution of (4). The following proposition summarizes its second-order properties.

**Proposition 3.1.** *If* $X(0)$ *is a random variable, then under Assumption 1, we have*

$$\mathbf{1}.\ m(t) = E\{X(t)\} = \Psi(t)\Big( E\{X(0)\} + \mu \int_0^t \Psi^{-1}(s) ds \Big) \text{ and, as } t \to +\infty,\ E\{X(t)\} \to m = -\frac{\mu}{\alpha} > 0.$$

2.For any $h \geq 0$, $K(t, t+h) = Cov(X(t), X(t+h)) = \Psi(h)K(t)$, where $K(t) = K(t,t)$ is the variance function, given by $K(t) = \phi(t)\left(K(0) + \gamma^2\int_0^t \phi^{-1}(s)m^2(s)ds\right)$; so, as $t \to +\infty$, $K(t) \to -\frac{m^2\gamma^2}{2\alpha + \gamma^2}$. Hence $K(t, t+h) \to -\Psi(h)\frac{m^2\gamma^2}{2\alpha + \gamma^2}$ and the correlation function is $\rho(h) = e^{\alpha h}$. Therefore the asymptotically stationary *COBL*(1,1) process has an autocorrelation function similar to that of a *CAR*(1) process.

**Proof.**



$dE\{Y^2(t)\} = \left(\gamma^2 E\{Y^2(t)\} + \gamma^2\Psi^{-2}(t)m^2(t)\right)dt$. Thus $dK(t) = \left((2\alpha + \gamma^2)K(t) + \gamma^2 m^2(t)\right)dt$. By solving the last differential equation, the expression of the variance follows. The rest of the proof follows immediately by the dominated convergence theorem.

**Remark 3.2.** *If* $X(0)$ *is a real constant, then the mean and variance of* $(X(t))_{t\geq 0}$ *reduce to*

$$m(t) = E\{X(t)\} = \Psi(t)\left(X(0) + \mu \int\_0^t \Psi^{-1}(s)ds\right) \text{and} \\ Var\{X(t)\} = \gamma^2 \phi(t) \int\_0^t \phi^{-1}(u)m^2(u)du.$$

Moreover, $K(t, t+h)$ depends in general on time and on the initial condition; thus the *COBL*(1,1) process is not stationary, but it is asymptotically stationary, except, for instance, in the following cases:


#### **3.1 Higher-order moment of** *COBL***(1,1) process**

In what follows, we consider the function $f(x) = x^n$; then $f(X(t))$ is also an Itô process. Applying Itô's formula to $f(X(t))$, we have

$$df(X(t)) = f'(X(t))dX(t) + \frac{1}{2}f''(X(t))(dX(t))^2$$

$$= f'(X(t))\mu(X(t))dt + f'(X(t))\sigma(X(t))dw(t) + \frac{1}{2}f''(X(t))\sigma^2(X(t))dt$$

which results in $dX^n(t) = \left(a_n X^n(t) + b_n X^{n-1}(t)\right)dt + c_n X^n(t)dw(t)$, or equivalently

$$X^{n}(t) = X^{n}(0) + \int\_{0}^{t} (a\_{n}X^{n}(s) + b\_{n}X^{n-1}(s))ds + c\_{n} \int\_{0}^{t} X^{n}(s)dw(s) \tag{7}$$

where $a_n = n\alpha + \frac{n(n-1)}{2}\gamma^2$, $b_n = n\mu$ and $c_n = n\gamma$. Due to stationarity and the fact that the last term of equation (7) is a zero-mean martingale, the moments of the invariant distribution satisfy

$$E\{X^n(t)\} = -a\_n^{-1}b\_n E\{X^{n-1}(t)\} = (-1)^n \prod\_{i=1}^n a\_i^{-1}b\_i. \tag{8}$$

The above equation allows us to find the moments of the invariant probability distribution of the Markov process generated by (5), for example $E\{X(t)\} = -\frac{\mu}{\alpha}$, $E\{X^2(t)\} = \frac{2\mu^2}{\alpha(2\alpha + \gamma^2)}$ and $Var\{X(t)\} = -\frac{\mu^2\gamma^2}{\alpha^2(2\alpha + \gamma^2)}$.
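As a sanity check (not in the original text), the recursion (8) can be evaluated numerically and compared with these closed forms; the parameter values below are illustrative and satisfy Assumption 1:

```python
# Invariant moments of COBL(1,1) from the recursion (8):
# E{X^n} = (-1)^n * prod_{i=1..n} b_i / a_i,
# with a_n = n*alpha + n*(n-1)/2*gamma^2 and b_n = n*mu.
alpha, mu, gamma = -1.0, 0.5, 0.5  # illustrative; 2*alpha + gamma**2 = -1.75 < 0

def a(n):
    return n * alpha + 0.5 * n * (n - 1) * gamma**2

def b(n):
    return n * mu

def invariant_moment(n):
    m = 1.0
    for i in range(1, n + 1):
        m *= -b(i) / a(i)
    return m

m1 = invariant_moment(1)   # should equal -mu/alpha
m2 = invariant_moment(2)   # should equal 2*mu^2 / (alpha*(2*alpha + gamma^2))
var = m2 - m1**2           # should equal -mu^2*gamma^2 / (alpha^2*(2*alpha + gamma^2))
```

Note that $2\alpha + \gamma^2 < 0$ makes the closed-form variance positive, as it must be.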

**Example 3.1.** *As already pointed out in the above section, the unique invariant probability distribution for the stationary solution of* (5) *has the form* $G^{-1}$, *where* $G$ *has Gamma distribution* $\mathcal{G}(a, b)$ *with shape parameter* $a = (\gamma^2 - 2\alpha)/\gamma^2$, *scale parameter* $b = \frac{\gamma^2}{2\mu}$ *and density* $f(x) = \frac{1}{\Gamma(a)b^a}x^{a-1}\exp\{-x/b\}$, $x > 0$. *Simple computations give* $E\{G\} = ab$ *and* $Var\{G\} = ab^2$, *while* $E\{G^{-1}\} = -\frac{\mu}{\alpha}$ *and* $Var\{G^{-1}\} = -\frac{\mu^2\gamma^2}{\alpha^2(2\alpha + \gamma^2)}$. *More generally, for* $a > n$ *we have* $E\{G^{-n}\} = \left(\frac{2\mu}{\gamma^2}\right)^n\prod_{i=1}^n(a - i)^{-1}$. *This expression coincides with* (8).

Now, define $m_n(t, x) = E\{X^n(t)\,|\,X(0) = x\}$, the $n$-th conditional moment of the process $(X(t))_{t\geq 0}$ defined by (5), for $n = 0, 1, 2, \ldots$ with $m_0(t, x) = 1$. Then simple manipulation of conditional expectations shows that $m_n(t, x)$ satisfies the following first-order recursive differential equation

$$dm\_n(t, \mathbf{x}) = a\_n m\_n(t, \mathbf{x}) dt + b\_n m\_{n-1}(t, \mathbf{x}) dt,\tag{9}$$

its solution is given in the following proposition

**Proposition 3.3.** *Suppose that the constants* $a_0, a_1, a_2, \ldots, a_n$ *are distinct. Then, under Assumption 1, the solution of* (9) *for* $n = 0, 1, 2, \ldots$ *is given by* $m_n(t, x) = \sum_{i=0}^n \xi_i(n)e^{a_i t}$, *where* $\xi_i(n)$ *satisfies the recursion*

$$\xi_i(n) = \sum_{j=0}^i B_{j+1}^{(n)} A_{i,j}^{(n)} x^j,\quad B_{j+1}^{(n)} = \prod_{k=j+1}^n b_k,\quad A_{i,j}^{(n)} = \prod_{\substack{k=j \\ k \neq i}}^n \frac{1}{a_i - a_k} \tag{10}$$

*with the conventions* $B_{n+1}^{(n)} = 1$, $A_{n,n}^{(n)} = 1$.

**Proof.** See Bibi and Merahi [17].

**Example 3.2.** *The first and the second conditional moments are*

$$m\_1(t, \mathbf{x}) = -\frac{b\_1}{a\_1} + P\_0(\mathbf{x})e^{a\_1 t} \\ \text{and} \\ m\_2(t, \mathbf{x}) = \frac{b\_1 b\_2}{a\_1 a\_2} + P\_1(\mathbf{x})e^{a\_1 t} + P\_2(\mathbf{x})e^{a\_2 t}$$

$$\text{where } P_0(x) = \frac{b_1}{a_1} + x,\quad P_1(x) = \frac{b_1 b_2}{a_1(a_1 - a_2)} + \frac{b_2}{a_1 - a_2}x\quad\text{and}\quad P_2(x) = \frac{b_1 b_2}{a_2(a_2 - a_1)} + \frac{b_2}{a_2 - a_1}x + x^2.$$

**Remark 3.4.** *Note that when* $\alpha + \frac{n-1}{2}\gamma^2 < 0$ *for any* $n$, $m_n(t, x)$ *converges as* $t \to +\infty$ *to the unconditional moments* $E\{\mathbb{IG}^n\}$. *Moreover, when* $(X(t))_{t\geq 0}$ *is a GBM process,* $m_n(t, x)$ *reduces to* $m_n(t, x) = x^n e^{a_n t}$, *because* $B_{j+1}^{(n)} = 0$ *for any* $j < n$ *and* $B_{n+1}^{(n)} = 1$. *Additionally, since for any* $n \geq 1$, $m_n(t, x)$ *depends on time, the COBL*(1,1) *process with a fixed initial condition is non-stationary; however, it is asymptotically stationary.*
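Proposition 3.3 and Example 3.2 can be checked numerically. The sketch below implements the recursion (10), reading the product $A^{(n)}_{i,j}$ with $k \neq i$ (the vanishing factor $a_i - a_i$ is excluded), and verifies both the closed form of $m_1(t,x)$ and the large-$t$ limit; all parameter values are illustrative:

```python
import math

alpha, mu, gamma = -1.0, 0.5, 0.5  # illustrative; Assumption 1 holds

def a(n):
    return n * alpha + 0.5 * n * (n - 1) * gamma**2

def b(n):
    return n * mu

def xi(i, n, x):
    """xi_i(n) from recursion (10); empty products give the conventions
    B^{(n)}_{n+1} = A^{(n)}_{n,n} = 1."""
    total = 0.0
    for j in range(i + 1):
        B = math.prod(b(k) for k in range(j + 1, n + 1))
        A = math.prod(1.0 / (a(i) - a(k)) for k in range(j, n + 1) if k != i)
        total += B * A * x**j
    return total

def m(n, t, x):
    """Conditional moment m_n(t, x) = sum_i xi_i(n) * exp(a_i * t)."""
    return sum(xi(i, n, x) * math.exp(a(i) * t) for i in range(n + 1))
```

For instance, `m(1, t, x)` reproduces $-b_1/a_1 + (b_1/a_1 + x)e^{a_1 t}$, and for large $t$ the conditional moments converge to the invariant moments of section 3.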

## **4. Subordinated COBL(1,1) process**

The main idea of subordination (or the change-of-time method) is to find a simple representation of a process $(X(t))_{t\geq 0}$ with a complicated structure, using some simple process and a subordinator process $(T(t))_{t\geq 0}$. For example, if we consider a Brownian motion $(w(t))_{t\geq 0}$ as a simple process and $(X(t))_{t\geq 0}$ satisfying the stochastic differential equation (4) as a complicated process, then the question is: can we represent $(X(t))_{t\geq 0}$ in the form $X(t) = w(T(t))$? In many cases, the answer is "yes" (see [18]). Hence, in this paper, we propose that $T$ be represented by a homogeneous Poisson process.

### **4.1 Poisson counting process**

The Poisson counting process $\{N(t);\ t > 0\}$ consists of nonnegative integer random variables $N(t)$ satisfying the following definition.

**Definition 4.1.** *A Poisson process* $(N(t))_{t>0}$ *is a counting process with the following additional properties:*

1.$N(0) = 0$.

2.*The process has stationary and independent increments*.

3.$P(N(t) = n) = \frac{(\lambda t)^n}{n!}\exp(-\lambda t)$ for $t > 0$ and $n = 0, 1, \ldots$; *the parameter* $\lambda$ *is called the rate of the Poisson process.*

**Remark 4.1.** *Note that* $N(t)$ *is not a martingale, but* $N(t) - \lambda t$ *is. Moreover, in general, the "intensity"* $\lambda t$ *may be replaced by a function* $\lambda(t)$, *which may be stochastic, to obtain an inhomogeneous Poisson process. It is worth noting that Definition 4.1 is quite close to the definition of the Wiener process, and simulation therefore proceeds by a similar method.*

Recall that the probability generating function of $(N(t))_{t>0}$ is given by $E\{z^{N(t)}\} = \sum_{n=0}^{\infty} z^n P(N(t) = n) = e^{-\lambda t(1-z)}$. So, by differentiation, we obtain the first four non-central moments $v_k(t) = E\{N^k(t)\}$, $k = 1, \ldots, 4$:

$$v_1(t) = \lambda t,\quad v_2(t) = (\lambda t)^2 + \lambda t,\quad v_3(t) = (\lambda t)^3 + 3(\lambda t)^2 + \lambda t,\quad v_4(t) = (\lambda t)^4 + 6(\lambda t)^3 + 7(\lambda t)^2 + \lambda t.$$

Moreover, the first four central moments $\mu_k(t) = E\left\{\left(N(t) - v_1(t)\right)^k\right\}$, $k = 1, \ldots, 4$, are given by $\mu_1(t) = 0$, $\mu_2(t) = \lambda t$, $\mu_3(t) = \lambda t$ and $\mu_4(t) = 3(\lambda t)^2 + \lambda t$. Additionally, the skewness $Sk(t)$ and the kurtosis $Ku(t)$ coefficients of $N(t)$ are given by $Sk(t) = \frac{\mu_3(t)}{\mu_2^{3/2}(t)} = \frac{1}{\sqrt{\lambda t}}$ and $Ku(t) = \frac{\mu_4(t)}{\mu_2^2(t)} = 3 + \frac{1}{\lambda t}$. Therefore the Poisson distribution is always skewed and leptokurtic for any $t > 0$.
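These moment formulas follow the classical Touchard identity $E\{N^k(t)\} = \sum_{j=0}^{k} S(k,j)(\lambda t)^j$, where $S(k,j)$ are Stirling numbers of the second kind; this is a standard fact about the Poisson distribution, used below only as a numerical cross-check:

```python
# Non-central Poisson moments via E{N^k} = sum_j S(k, j) * m**j, m = lam*t.
def stirling2(k, j):
    """Stirling numbers of the second kind, by the usual recursion."""
    if k == j == 0:
        return 1
    if k == 0 or j == 0:
        return 0
    return j * stirling2(k - 1, j) + stirling2(k - 1, j - 1)

def poisson_moment(k, m):
    """E{N(t)^k} for N(t) ~ Poisson(m)."""
    return sum(stirling2(k, j) * m**j for j in range(k + 1))

m = 2.0  # m = lam * t, illustrative value
v1, v2, v3, v4 = (poisson_moment(k, m) for k in range(1, 5))

# Central moments from the non-central ones
mu2 = v2 - v1**2                                      # = lam*t
mu3 = v3 - 3 * v1 * v2 + 2 * v1**3                    # = lam*t
mu4 = v4 - 4 * v1 * v3 + 6 * v1**2 * v2 - 3 * v1**4   # = 3*(lam*t)**2 + lam*t
```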

### **4.2 Subordinated COBL(1,1) process and their second-order properties**

In what follows, we shall focus on the *COBL*(1,1) process subordinated by a Poisson process.

**Definition 4.2.** *The COBL*(1,1) *process* $(X(t))_{t\geq 0}$ *delayed by a Poisson process* $(N(t))_{t\geq 0}$ *is defined by*

$$Y(t) = X(N(t))\tag{11}$$

*that is, the role of time is played by the Poisson process, which makes* $(Y(t))_{t\geq 0}$ *a Lévy process.* From the above definition, we can see that there are two sources of randomness: the ground process $(X(t))_{t>0}$ and a time process $(N(t))_{t>0}$. So it is referred to as a stochastic time change, or "time deformation". From the solution (6), it follows that $X(N(t)) = X(0) + \int_0^{N(t)}\left(\alpha X(s) + \mu\right)ds + \gamma\int_0^{N(t)} X(s)dw(s)$, $t \geq 0$; then the first change-of-variable formula yields

$$dY(t) = (\alpha Y(t) + \mu)dN(t) + \gamma Y(t)dw(N(t)),\ t \ge 0,\quad Y(0) = y_0. \tag{12}$$

Therefore, several authors have considered the process $Y(t) = X(\widehat{N}(t))$, where $\widehat{N}(t)$ is the inverse of $N(t)$, i.e.,

$$dY(t) = (\alpha Y(t) + \mu)d\widehat{N}(t) + \gamma Y(t)dw\big(\widehat{N}(t)\big),\ t \ge 0,\quad Y(0) = y_0 \tag{13}$$

(see [19] and the references therein), where the connection between the classical Itô *SDE* (4) and the corresponding subordinated *SDE*s (12) and (13) is given. The above discussion is summarized in the next lemma.

**Lemma 4.2.** *(Duality of SDEs). Let* $N(t)$ *be a Poisson process; then*

a. If $(X(t))_{t\geq 0}$ satisfies the SDE (4), then $Y(t) = X(N(t))$ satisfies the SDE (12).

b. If $(Y(t))_{t\geq 0}$ satisfies the SDE (13), then $X(t) = Y(\widehat{N}(t))$ satisfies the SDE (4).

**Proof.** See [19].

Now, we are in a position to state the following proposition

**Proposition 4.3.** *The unique strong solution of the homogeneous SDE* (12) *is explicitly written as*

$$Y(t) = \mathcal{F}(t)\left\{Y(\mathbf{0}) + \mu \int\_{0}^{N(t)} \Phi^{-1}(s)ds\right\}, t \ge \mathbf{0} \tag{14}$$

*where* $\mathcal{F}(t) = \exp\{Z(t)\}$ *is the fundamental solution, with* $Z(t) = \left(\alpha - \frac{1}{2}\gamma^2\right)N(t) + \gamma w(N(t))$.

**Proof.** It suffices to show that the process $(Y(t))$ given by (14) satisfies *SDE* (12). Set $Y(t) = \mathcal{F}(t)g(t)$, where $g(t) = Y(0) + \mu\int_0^{N(t)}\Phi^{-1}(s)ds$. By the Itô formula and the differential identities we have

$$dY(t) = e^{Z(t)}g(t)dZ(t) + e^{Z(t)}g'(t)dt + \frac{1}{2}e^{Z(t)}g(t)\,d[Z,Z](t)$$

$$= Y(t)dZ(t) + \mu dN(t) + \frac{1}{2}Y(t)d[Z,Z]$$

$$= Y(t)\left(\left(\alpha - \frac{1}{2}\gamma^2\right)dN(t) + \gamma dw(N(t))\right) + \mu dN(t) + \frac{1}{2}\gamma^2Y(t)dN(t)$$

$$= (\alpha Y(t) + \mu)dN(t) + \gamma Y(t)dw(N(t)).$$



Thus *Y t*ð Þ satisfies (12), completing the proof.
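A trajectory of the subordinated process $Y(t) = X(N(t))$ can be sketched in two stages: draw the Poisson time $N(t)$ first, then run the ground *COBL*(1,1) process up to that random horizon. The recipe below is a simulation sketch only (Euler discretization, illustrative parameters), not part of the original derivation:

```python
import math
import random

def subordinated_cobl(alpha, mu, gamma, y0, t, lam, dt, rng):
    """Y(t) = X(N(t)): draw N(t) ~ Poisson(lam*t) via exponential
    inter-arrival times, then Euler-Maruyama for
    dX = (alpha*X + mu)ds + gamma*X dw on [0, N(t)]."""
    n, clock = 0, rng.expovariate(lam)
    while clock <= t:
        n += 1
        clock += rng.expovariate(lam)
    x = y0
    for _ in range(int(n / dt)):
        dw = rng.gauss(0.0, math.sqrt(dt))
        x = x + (alpha * x + mu) * dt + gamma * x * dw
    return x

rng = random.Random(1)
y = subordinated_cobl(alpha=-1.0, mu=0.5, gamma=0.5, y0=1.0,
                      t=5.0, lam=2.0, dt=1e-3, rng=rng)
```

Because the random horizon $N(t)$ is integer-valued, the subordinated path only "sees" the ground process at a coarser, random clock, which is the source of the jump behavior discussed in the abstract.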

**Remark 4.4.** *If* $(X(t))_{t\geq 0}$ *is a GBM, then the explicit solution of its subordinated version* $(Y(t))_{t\geq 0}$ *is* $Y(t) = \mathcal{F}(t)Y(0)$, $t \geq 0$, *and hence, more generally, for any* $k \in \mathbb{N}$, *we have*

$$E\{Y^k(t)\} = E\{Y^k(0)\} \exp\left\{-\lambda t \left(1 - \exp\left\{ \left(\alpha - \frac{\gamma^2}{2}\right)k + k^2 \frac{\gamma^2}{2} \right\} \right) \right\}$$

*and hence* $E\{Y^k(t)\} \to +\infty$ *as* $t \to \infty$ *whenever* $\left(\alpha - \frac{\gamma^2}{2}\right)k + \frac{\gamma^2}{2}k^2 > 0$*.*
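
A quick Monte Carlo check of this moment formula is possible, since conditionally on $N(t)$ we have $w(N(t)) \sim N(0, N(t))$. The sketch below uses illustrative parameter values and $Y(0) = 1$; names and tolerances are assumptions, not from the text.

```python
import numpy as np

rng = np.random.default_rng(42)
alpha, gamma, lam, t, k = -0.8, 0.4, 1.5, 2.0, 2

# closed-form k-th moment of Y(t) = X(N(t)) for Y(0) = 1
m_k = (alpha - 0.5 * gamma**2) * k + 0.5 * k**2 * gamma**2
theory = np.exp(-lam * t * (1.0 - np.exp(m_k)))

# Monte Carlo: Y(t) = exp{(alpha - gamma^2/2) N(t) + gamma W(N(t))},
# drawing N(t) ~ Poisson(lam * t) and then W(N(t)) | N(t) ~ N(0, N(t))
n = rng.poisson(lam * t, size=200_000)
w = np.sqrt(n) * rng.standard_normal(200_000)
mc = np.mean(np.exp(k * ((alpha - 0.5 * gamma**2) * n + gamma * w)))
```

The two values agree to Monte Carlo accuracy, and making the inner exponent positive reproduces the divergence condition stated above.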


An extension of Proposition 3.3 to the process $(Y(t))_{t\ge 0}$ is stated in the following proposition.

**Proposition 4.5.** *Let* $M_n(t, y) = E\{Y^n(t)\mid Y(0) = y\}$ *be the* $n$*-th conditional moment of the process* $(Y(t))_{t\ge 0}$ *defined by* (11)*. Then, under the conditions of Proposition 3.3, we have* $M_n(t, y) = \sum_{i=0}^{n}\xi_i(n)e^{-\lambda_i^{*}t}$*, where* $\lambda_i^{*} = \lambda\left(1 - e^{a_i}\right)$ *and* $\xi_i(n)$ *satisfies the recursion* (10)*.*

**Proof.** The result follows from Example 3.2, the moment properties of the Poisson process, and standard manipulations of conditional expectations.

**Example 4.1.** *For the COBL*(1,1) *process delayed by the* $N(t)$ *process, with fixed initial value, the second-order properties of the process* $(Y(t))_{t\ge 0}$ *defined by* (11) *are given by*

$$E\{M_1(t, y)\} = -\frac{b_1}{a_1} + P_0(y)\exp\left\{-\lambda_1^{*}t\right\},\quad t \ge 0,$$

$$E\{M_2(t, y)\} = \frac{b_1 b_2}{a_1 a_2} + P_1(y)\exp\left\{-\lambda_1^{*}t\right\} + P_2(y)\exp\left\{-\lambda_2^{*}t\right\},\quad t \ge 0,$$

*where* $\lambda_1^{*} = \lambda\left(1 - e^{a_1}\right)$ *and* $\lambda_2^{*} = \lambda\left(1 - e^{a_2}\right)$*. Note that when the initial value is random, the expressions of* $E\{Y(t)\}$ *and* $E\{Y^2(t)\}$ *may be obtained by replacing the polynomials* $P_0(y)$*,* $P_1(y)$ *and* $P_2(y)$ *by their expectations. Moreover, it is clear that the first and second moments depend in general on time and on the initial condition; thus the process* $(Y(t))_{t\ge 0}$ *is not stationary but is asymptotically stationary.*

#### **4.3 Distribution**

The distribution of the process $(Y(t))_{t\ge 0}$ defined by (11) is given by

$$F_Y(y) = P(X(N(t)) \le y) = E\{I_{X(N(t)) \le y}\} = E\{E\{I_{X(N(t)) \le y}\mid N(t)\}\}.$$

Since $X \rightsquigarrow \mathcal{IG}$ with shape $\delta = (\gamma^2 - 2\alpha)/\gamma^2$ and scale $\theta^{-1} = \gamma^2/(2\mu)$, each $X(t)$ follows an $\mathcal{IG}(\delta t, \theta)$, that is, it has probability density function (*PDF*) $f_{X(t)}(x) = \frac{\theta^{t\delta}}{\Gamma(t\delta)}x^{-t\delta-1}\exp\{-\theta/x\}$, $x > 0$, and cumulative distribution function (*CDF*) $F_{X(t)}(x) = \frac{\Gamma(t\delta,\,\theta/x)}{\Gamma(t\delta)}$. The *PDF* and *CDF* of $(Y(t))_{t\ge 0}$ are then given respectively by

$$f_{Y}(y) = e^{-\lambda t}I_{\{y=0\}} + e^{-\lambda t}e^{-\theta/y}\frac{1}{y}\sum_{k=1}^{\infty}\left(\left(\frac{\theta}{y}\right)^{\delta}\lambda t\right)^{k}\frac{1}{k!\,\Gamma(\delta k)}\,I_{\{y>0\}},$$

$$F_{Y}(y) = H(y)e^{-\lambda t} + e^{-\lambda t}\sum_{k=1}^{\infty}\Gamma(\delta k,\,\theta/y)\frac{(\lambda t)^{k}}{k!\,\Gamma(\delta k)},$$

where $H(\cdot)$ is the Heaviside step function. Therefore, the probability law of $(Y(t))_{t\ge 0}$ has an atom $e^{-\lambda t}$ at zero, that is, it has the discrete part $P(Y(t) = 0) = e^{-\lambda t}$.

**Remark 4.6.** *An equivalent expression of the above PDF and CDF may be given by the following Poisson mixture:* $f_Y(y) = \sum_{k=0}^{\infty}f_{X(k)}(y)\,P(N(t) = k)$*,* $F_Y(y) = \sum_{k=0}^{\infty}F_{X(k)}(y)\,P(N(t) = k)$*. These PDF and CDF are the same as those of* $Z(t) = \sum_{n=1}^{N(t)}\xi_n$*, where* $(\xi_n)_{n\ge 1}$ *is a sequence of i.i.d. random variables independent of* $N(t)$*. Note that when* $(X(t))_{t\ge 0}$ *and* $(N(t))_{t\ge 0}$ *are independent processes and the relevant moments exist, then* $E\{Y(t)\} = t\mu_X\mu_N$ *and* $Var\{Y(t)\} = t\left(\sigma_N^2\mu_X^2 + \sigma_X^2\mu_N\right)$*, where* $\mu_N = E\{N(1)\}$*,* $\mu_X = E\{X(1)\}$*,* $\sigma_N^2 = Var(N(1))$ *and* $\sigma_X^2 = Var(X(1))$*.*
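
The Poisson-mixture representation above is straightforward to evaluate numerically. The sketch below (illustrative parameter values; the truncation level `kmax` and integration grid are assumptions) evaluates the continuous part of $f_Y$ and checks that its total mass equals $1 - e^{-\lambda t}$, the complement of the atom at zero.

```python
import numpy as np
from math import gamma, factorial, exp

def pdf_Y_cont(y, t, delta, theta, lam, kmax=40):
    """Continuous part of f_Y(y), truncating the mixture series at kmax terms."""
    y = np.asarray(y, dtype=float)
    s = np.zeros_like(y)
    for k in range(1, kmax + 1):
        s += (theta / y)**(delta * k) * (lam * t)**k / (factorial(k) * gamma(delta * k))
    return exp(-lam * t) * np.exp(-theta / y) * s / y

delta, theta, lam, t = 2.0, 1.0, 1.0, 1.0
y = np.logspace(-2, 3, 6000)            # log-spaced grid over (0, infinity)
f = pdf_Y_cont(y, t, delta, theta, lam)
mass = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y))   # trapezoidal rule
# mass should be close to 1 - exp(-lam * t), the weight of the continuous part
```

The density integrates to $1 - e^{-\lambda t} \approx 0.632$ for these values, confirming that the remaining probability sits in the atom at zero.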

The IG distribution belongs to the exponential family of distributions with respect to $\underline{\theta} = (\delta + 1, \theta)'$. Indeed, $f_{\mathcal{IG}}(x) = \frac{\theta^{\delta}}{\Gamma(\delta)}x^{-\delta-1}\exp\{-\theta/x\} = \exp\left\{-\underline{\theta}'T(x) + A(\underline{\theta})\right\}$, where $T(x) = \left(\log(x), \frac{1}{x}\right)'$ and $A(\underline{\theta}) = \delta\log(\theta) - \log\Gamma(\delta)$. The function $A(\cdot)$ is known as the cumulant function; its first and second derivatives provide the mean and the variance of $T(X)$. So $f_{Y(t)}(y)$ may be rewritten as

$$f_{Y(t)}(y) = e^{-\lambda t}I_{\{y=0\}} + e^{-\lambda t}\sum_{k=1}^{\infty}\exp\left\{-\underline{\theta}'(k)T(y) + A(\underline{\theta}(k))\right\}P(N(t) = k)\,I_{\{y>0\}}$$

in which the vector $\underline{\theta}(k)$ is obtained by replacing the parameter $\delta$ in $\underline{\theta}$ by $k\delta$. So, the distribution of $(Y(t))_{t\ge 0}$ may be regarded (asymptotically) as the distribution of a *GIG* subordinated by the Poisson process. Regardless of the form of $h$, the expected value of the function $h(Y)$ is expressed as $E\{h(Y)\} = \int_{\Theta}E_{X\mid N=k}\{h(Y)\}\,g_N(k)\,d\nu(k)$, where $E_{X\mid N}\{\cdot\}$ is

taken with respect to the conditional distribution of $X$. In particular, $E\{Y\} = E\left\{E_{X\mid N}(X)\right\}$ and $Var\{Y\} = Var\left\{E_{X\mid N}(X)\right\} + E\left\{Var_{X\mid N}(X)\right\}$. Moreover,

$$E\{Y^n\} = e^{-\lambda t}\sum_{k=0}^{\infty}\left\{\int_{0}^{\infty}y^{n-\delta k-1}e^{-\theta/y}\,dy\right\}\theta^{\delta k}\frac{(\lambda t)^{k}}{k!\,\Gamma(k\delta)}$$

$$= \theta^{n}e^{-\lambda t}\sum_{k=0}^{\infty}\frac{\Gamma(k\delta - n)}{\Gamma(k\delta)}\frac{(\lambda t)^{k}}{k!}$$

$$= \theta^{n}e^{-\lambda t}\,{}_1\Psi_1(\delta, -n, \delta, 0, \lambda t)$$

where ${}_1\Psi_1(\rho, a, \rho, b, x) = \sum_{k=0}^{\infty}\frac{\Gamma(\rho k + a)}{\Gamma(\rho k + b)}\frac{x^k}{k!}$ is a confluent form of the Fox–Wright generalized hypergeometric function, which plays an important role in mixing theory. For certain values of the parameter $\rho$ and for $n > 0$, it is possible to represent ${}_1\Psi_1(\rho, a, \rho, 0, x)$ in terms of well-known special functions. In general, an exact closed-form expression of ${}_1\Psi_1(\rho, a, \rho, b, x)$ is difficult to obtain, so in the literature solutions are given only for certain specific cases (interested readers are referred to [20] and the references therein). The study of ${}_1\Psi_1(\rho, a, \rho, b, x)$ is a vast subject and we will not develop it further here.
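
Despite the lack of a general closed form, the defining series of ${}_1\Psi_1$ converges rapidly and can be evaluated by truncation. A minimal sketch follows; the convention that a term vanishes at a pole of the denominator gamma function is an assumption, consistent with the sum in the moment formula above (the $k = 0$ term drops out when $b = 0$).

```python
from math import gamma, factorial, exp

def rgamma(z):
    """Reciprocal gamma function, equal to 0 at the poles z = 0, -1, -2, ..."""
    if z <= 0 and float(z).is_integer():
        return 0.0
    return 1.0 / gamma(z)

def fox_wright_1psi1(rho, a, b, x, kmax=60):
    """Truncated series for 1Psi1(rho, a, rho, b, x)."""
    total = 0.0
    for k in range(kmax + 1):
        r = rgamma(rho * k + b)
        if r == 0.0:
            continue                 # denominator pole: the term vanishes
        total += gamma(rho * k + a) * r * x**k / factorial(k)
    return total

# E{Y} for n = 1, delta = 2, theta = 1: theta * e^{-lam t} * 1Psi1(2, -1, 2, 0, lam t);
# here Gamma(2k - 1) / Gamma(2k) = 1 / (2k - 1), giving a simple cross-check
lam, t = 1.0, 1.0
ey = exp(-lam * t) * fox_wright_1psi1(2.0, -1.0, 0.0, lam * t)
```

As a sanity check, for $a = b$ the series collapses to $e^x$, and the $n = 1$, $\delta = 2$ moment matches the direct sum $\sum_{k\ge 1} x^k/(k!(2k-1))$ term by term.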

## **5. Estimation issues**

In this section, we propose the method of moments (*MM*) for estimating the unknown parameters $\alpha$, $\mu$ and $\gamma$, gathered in the vector $\underline{\theta}$, involved in the *COBL*(1,1) process and in its distribution IG. The *MM* parameter estimates are obtained from the two processes $(X(t))_{t\ge 0}$ and $(Y(t))_{t\ge 0}$. Moreover, we concentrate on the weakly and/or asymptotically stationary case and we assume that the parameter $\lambda$ of the Poisson process is known. The first and second moments of the asymptotically stationary process $(Y(t))_{t\ge 0}$ as defined in Example 4.1 are $\mu_1 = -\frac{b_1}{a_1} = -\frac{\mu}{\alpha}$ and $\mu_2 = \frac{b_1 b_2}{a_1 a_2} = -\frac{\mu^2}{\alpha(2\alpha + \gamma^2)}$. Additionally, from Proposition 3.1, we have asymptotically $\rho(1) = e^{\alpha}$. The following formulas for the parameters can then be derived: $\alpha = \log\rho(1)$, $\mu = -\alpha\mu_1$, and $\gamma^2 = -\alpha\frac{\mu_1^2 + 2\mu_2}{\mu_2}$. These relationships can be used for estimating $\underline{\theta}$ by *MM*; more precisely, the estimators are given by

$$\widehat{\alpha} = \widehat{\log\rho}(1),\qquad \widehat{\mu} = -\widehat{\alpha}\,\widehat{\mu}_1,\qquad \widehat{\gamma^2} = -\widehat{\alpha}\,\frac{\widehat{\mu}_1^2 + 2\widehat{\mu}_2}{\widehat{\mu}_2}$$

where $\widehat{\mu}_1$, $\widehat{\mu}_2$, and $\widehat{\log\rho}(1)$ are respectively the empirical first moment, the empirical second-order moment, and the empirical logarithm of the lag-one autocorrelation. Their consistency and asymptotic normality are given in the following proposition.

**Proposition 5.1.** *Under Assumption 1, we have*

1. $\widehat{\underline{\theta}}_n$ *converges in probability to* $\underline{\theta}_0$*;*

2. $\sqrt{n}\left(\widehat{\underline{\theta}}_n - \underline{\theta}_0\right) \rightsquigarrow N\left(0, \Sigma(\underline{\theta}_0)\right)$*, where* $\Sigma(\underline{\theta}_0)$ *is the* $3 \times 3$ *asymptotic covariance matrix.*

**Proof.** The proof follows essentially the same arguments as in Bibi and Merahi [21].
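
The moment inversion underlying these estimators can be sketched in a few lines; `mm_from_moments` and `mm_estimates` are hypothetical helper names, and the round-trip check simply feeds the inversion the exact moments implied by the formulas above.

```python
import numpy as np

def mm_from_moments(m1, m2, rho1):
    """Method-of-moments inversion: recover (alpha, mu, gamma^2) from the
    stationary first moment m1, second moment m2 and lag-one autocorrelation."""
    alpha = np.log(rho1)                 # rho(1) = e^alpha
    mu = -alpha * m1                     # mu_1 = -mu / alpha
    gamma2 = -alpha * (m1**2 + 2.0 * m2) / m2
    return alpha, mu, gamma2

def mm_estimates(y):
    """Empirical version: plug in sample moments of an equally spaced record."""
    y = np.asarray(y, dtype=float)
    yc = y - y.mean()
    rho1 = np.dot(yc[:-1], yc[1:]) / np.dot(yc, yc)
    return mm_from_moments(y.mean(), np.mean(y**2), rho1)

# round-trip check on the exact moments implied by the formulas in the text
alpha0, mu0, g2 = -0.5, 0.8, 0.2
m1 = -mu0 / alpha0
m2 = -mu0**2 / (alpha0 * (2.0 * alpha0 + g2))
a, m, g = mm_from_moments(m1, m2, np.exp(alpha0))
```

The inversion recovers $(\alpha, \mu, \gamma^2)$ exactly from exact moments, which is the algebraic content of the consistency claim; sampling error in $\widehat{\mu}_1$, $\widehat{\mu}_2$ and $\widehat{\rho}(1)$ then drives the asymptotic normality.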

### **5.1 Some simulation results**

In order to check the effectiveness of the described estimation procedure, we simulated 500 trajectories of length $n \in \{1000, 2000\}$ with the parameters $\underline{\theta}$ shown at the bottom of each table below. The vector $\underline{\theta}$ is chosen to satisfy the second-order stationarity condition and the existence of moments up to fourth order. For the purpose of illustration, the vector of parameters $\underline{\theta}$ is estimated from $(X(t))_{t\ge 0}$, denoted $\widehat{\underline{\theta}}_m(X)$, and


**Table 1.**

*The* MM *and* GMM *estimation of the processes* X*(*t*) and* Y*(*t*).*


**Figure 1.**

*Top panels: the overlay of the asymptotic kernel densities of* $\sqrt{n}\left(\widehat{\theta}_m(i) - \theta(i)\right)$ *and* $\sqrt{n}\left(\widehat{\theta}_g(i) - \theta(i)\right)$ *based on* X*(*t*). Bottom panels: the corresponding boxplot summaries of* $\widehat{\theta}_m(i)$ *and* $\widehat{\theta}_g(i)$, $i = 1, 2, 3$*, according to Design (1) illustrated in Table 1.*


**Table 2.**

*The estimation of the distribution of* X*(*t*).*

compared with that of its delayed process $(Y(t))_{t\ge 0}$, denoted $\widehat{\underline{\theta}}_m(Y)$. As a benchmark, we also estimate $\underline{\theta}$ by the generalized method of moments (*GMM*), denoted $\widehat{\underline{\theta}}_g(X)$ and $\widehat{\underline{\theta}}_g(Y)$. In the tables below, the column "Mean" corresponds to the average of the parameter estimates over the 500 simulations. In order to assess the performance of the estimators, we report in each table the root mean squared error (*RMSE*) (results between brackets). The estimation results for the processes (*X*(*t*)) and (*Y*(*t*)) are summarized in **Table 1**.
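
A stripped-down version of such a simulation study, for the empirical first moment only, can be sketched as follows. It assumes SDE (4) has the linear form $dX(t) = (\alpha X(t) + \mu)\,dt + \gamma X(t)\,dw(t)$ (consistent with the subordinated SDE (12) above) and uses far fewer replications than the 500 of the text; all numerical settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
alpha, mu, gamma = -1.0, 1.0, 0.3        # a stable design (alpha < 0)
dt, n_unit, burn, reps = 0.02, 400, 50, 80
stride = int(round(1.0 / dt))            # Euler steps per unit sampling time
steps = (n_unit + burn) * stride

x = np.full(reps, -mu / alpha)           # start all replications at the mean
path_sum = np.zeros(reps)
for step in range(1, steps + 1):
    dw = np.sqrt(dt) * rng.standard_normal(reps)
    x = x + (alpha * x + mu) * dt + gamma * x * dw   # Euler-Maruyama step
    if step % stride == 0 and step > burn * stride:
        path_sum += x                    # record at unit sampling times
m1_hat = path_sum / n_unit               # per-replication empirical mean

mean_col = m1_hat.mean()                               # the "Mean" column
rmse_col = np.sqrt(np.mean((m1_hat + mu / alpha)**2))  # RMSE against -mu/alpha
```

For this design the stationary mean is $-\mu/\alpha = 1$, so `mean_col` should be close to 1 with a small RMSE, mirroring how the Mean and bracketed RMSE entries of Table 1 are produced.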

The plots of the asymptotic density of each component of $\widehat{\underline{\theta}}$ obtained by the *MM* and *GMM* methods based on the process (*X*(*t*)) (resp. on the process (*Y*(*t*))) are displayed in **Figure 1** (resp. in **Figure 3**).

Additionally, the estimates of the scale and shape parameters of the IG distribution are reported in **Table 2**.

The plot of the estimated IG distribution of the process *X*(*t*) is shown in **Figure 2**.

#### **5.2 Comments**

Now a few comments can be made.

#### **Figure 2.**

*The left plot is the overlay of the exact,* MM*-estimated and* GMM*-estimated* IG *distribution of the process* X*(*t*) associated to design (1), with* n *= 1000. The right plot is similar to the left with* n *= 2000.*

#### **Figure 3.**

*Top panels: the overlay of the asymptotic kernel densities of* $\sqrt{n}\left(\widehat{\theta}_m(i) - \theta(i)\right)$ *and* $\sqrt{n}\left(\widehat{\theta}_g(i) - \theta(i)\right)$ *based on* Y*(*t*). Bottom panels: the corresponding boxplot summaries of* $\widehat{\theta}_m(i)$ *and* $\widehat{\theta}_g(i)$, $i = 1, 2, 3$*, according to Design (1) illustrated in Table 1.*

A. By inspecting **Table 1**


to the robustness properties of *GMM* and hence its capability to detect the outliers in nonlinear models.

1. The performances of *GMM* and *MM* are in accordance with their order and are close to each other.

2. It seems very difficult to distinguish between the results reported in **Table 1** and **Table 2**, or between the plots of the asymptotic kernels shown in **Figures 1** and **3**. This is due to the asymptotic stationarity, which leads to the same parameters being involved in both *SDE* (4) and (12).

3. For *n* = 1000 and/or *n* = 2000, it is observed that *GMM* performs better than *MM* for both designs for the two parameters *α* and *μ*.

## **6. Concluding remarks and future research direction**

The stochastic subordination model proposed in this paper is practically and theoretically appealing for the modeling of several phenomena already pointed out in Section 1. Such models are rich enough to capture, among other features, the observed non-normal returns and the significant autocorrelation of squared returns. In this paper, we have proposed a theoretical model that not only takes such specific properties into consideration but also exhibits short-range dependence and can be used for data with visible jumps. This model is based on the stable *COBL*(1,1) process delayed by the Poisson subordinator. The proposed model is non-linear and non-normal and involves three additional parameters, which may easily and quickly be estimated, under the asymptotic stationarity assumption, with a method of moments (*MM*) and compared with a generalized method of moments (*GMM*). Clearly, the analyzed process is complex and the estimation is challenging. A significant advantage of the stochastic subordination model is that it inherits some properties of the process being subordinated, and hence both stationary and nonstationary processes can be obtained through the subordination approach. These issues are of importance to theoreticians and practitioners alike and will be the subject of further papers. Further research is required to investigate the asymptotic theory of the estimators under more general conditions. The model presented in this paper may also be slightly modified by replacing the Poisson process with other processes subject to appropriate conditions.

## **Classification**

**2010 AMS Math. Subject Classification:** Primary 40A05, 40A25; Secondary 45G05.

*Time Series Analysis - New Insights*

## **Author details**

Abdelouahab Bibi Department of Mathematics, Larbi Ben M'hidi University, O.E.B., Algeria

\*Address all correspondence to: abd.bibi@gmail.com

© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


## **References**

[1] Mohler RR. Nonlinear Time Series and Signal Processing. Berlin: Springer Verlag; 1988

[2] Aït-Sahalia Y. Estimating continuoustime models with discretely sampled data. In: Blundell R, Persson R, Newey W, editors. Econometrics Theory and Applications. Cambridge: Cambridge University Press; 2007. pp. 261-327

[3] Le Breton A, Musiela M. A study of one-dimensional bilinear differential model for stochastic processes. Probability and Mathematical Statistics. 1984;**4**:91-107

[4] Bibi A, Merahi F. A note on L2-structure of continuous-time bilinear processes with time-varying coefficients. International Journal of Statistics and Probability. 2015;**4**:150-160

[5] Lee MLT, Whitmore GA. Stochastic processes directed by randomized time. Journal of Applied Probability. 1993;**30**: 302-314

[6] Clark PK. A subordinated stochastic process model with finite variance for speculative prices. Econometrica. 1973; **41**:135-155

[7] Geman H, Ané T. Stochastic subordination. RISK. 1996;**9**:146-149

[8] Aït-Sahalia Y, Jacod J. High-Frequency Financial Econometrics. New Jersey: Princeton University Press; 2014

[9] Klüppelberg C, Lindner A, Maller R. A continuous time GARCH process driven by a Lévy process: Stationarity and second order behaviour. Journal of Applied Probability. 2004;**41**:601-622

[10] Brockwell PJ. Continuous-time ARMA processes. In: Shanbhag DN, Rao CR, editors. Handbook of Statistics. Amsterdam: North Holland; 2001. pp. 249-276

[11] Øksendal B. Stochastic Differential Equations: An Introduction with Applications. New York: Springer-Verlag; 2000

[12] Iglói E, Terdik G. Bilinear stochastic systems with fractional Brownian motion input. The Annals of Applied Probability. 1999;**9**:46-77

[13] Leon JA, Perez-Abreu V. Strong solutions of stochastic bilinear equations with anticipating drift in the first Wiener chaos. In: Cambanis S, Ghosh JK, Karandikar R, Sen PK, editors. Stochastic Processes: A Festschrift in Honor of Gopinath Kallianpur. Berlin: Springer-Verlag; 1993. pp. 235-243

[14] Arnold L. Stochastic Differential Equations, Theory and Applications. New York: John Wiley; 1974

[15] Bishwal JPN. Parameter Estimation in Stochastic Differential Equations. Berlin: Springer-Verlag; 2008

[16] Has'minskii RZ. Stochastic Stability of Differential Equations. Alphen aan den Rijn: Sijthoff & Noordhoff; 1980

[17] Bibi A, Merahi F. Yule-Walker type estimator of first-order time-varying periodic bilinear differential model for stochastic processes. Communications in Statistics: Theory and Methods. 2020;**49**:4046-4072

[18] Ikeda N, Watanabe S. Stochastic Differential Equations and Diffusion Processes. Tokyo: North-Holland/ Kodansha Ltd.; 1981

[19] Kobayashi K. Stochastic calculus for a time-changed semimartingale and the associated stochastic differential equations. Journal of Theoretical Probability. 2011;**24**:789-820

[20] Paris RB, Vinogradov V. Asymptotic and structural properties of special cases of the Wright function arising in probability theory. Lithuanian Mathematical Journal. 2016;**56**:377-409

[21] Bibi A, Merahi F. Moment method estimation of first-order *continuous*-time bilinear processes. Communications in Statistics: Simulation and Computation. 2019;**48**:1070-1087

## *Edited by Rifaat Abdalla, Mohammed El-Diasty, Andrey Kostogryzov and Nikolay Makhutov*

Time series data consist of a collection of observations obtained through repeated measurements over time. When the points are plotted on a graph, one of the axes is always time. Time series analysis is a specific way of analyzing a sequence of data points. Time series data are everywhere since time is a constituent of everything that is observable. As our world becomes increasingly digitized, sensors and systems are constantly emitting a relentless stream of time series data, which has numerous applications across various industries. The editors of this book are happy to provide the specialized reader community with this book as a modest contribution to this rapidly developing domain.

Published in London, UK © 2023 IntechOpen © Anna Bliokh / iStock
