**4. Main results**

In this section, we present our main result, which asserts that the relative error decreases to zero as the small random perturbation tends to zero; this in turn implies uniform log-efficiency for the estimation problem in (36).

In what follows, let $\hat{\mathbf{x}}\_t^{\varepsilon}$ be the solution to the following SDE

$$\begin{aligned} d\hat{\mathbf{x}}\_t^\varepsilon &= \mathbf{f}\left(t, \hat{\mathbf{x}}\_t^\varepsilon\right) dt + \mathbf{b}\,\sigma\left(t, \hat{\mathbf{x}}\_t^\varepsilon\right) v^\varepsilon\left(t, \hat{\mathbf{x}}\_t^\varepsilon\right) dt + \sqrt{\varepsilon}\,\mathbf{b}\,\sigma\left(t, \hat{\mathbf{x}}\_t^\varepsilon\right) dW\_t, \\ \text{with an initial condition} \quad \hat{\mathbf{x}}\_s^\varepsilon &= \mathbf{x}\_s^\varepsilon, \end{aligned} \tag{45}$$

where *v<sup>ε</sup>* is an appropriate control function (which also depends on *ε*) to be chosen so as to reduce the variance of the importance sampling estimator. Let

$$z^{\varepsilon} = \exp\left(-\frac{1}{\sqrt{\varepsilon}} \int\_{s}^{T} \langle v^{\varepsilon}(t, \hat{\mathbf{x}}\_{t}^{\varepsilon}), dW\_{t} \rangle - \frac{1}{2\varepsilon} \int\_{s}^{T} \left| v^{\varepsilon}(t, \hat{\mathbf{x}}\_{t}^{\varepsilon}) \right|^{2} dt\right). \tag{46}$$

Then, the corresponding importance sampling estimator is given by

$$\hat{\rho}(\varepsilon) = \frac{1}{N} \sum\_{j=1}^{N} \exp\left(-\frac{1}{\varepsilon} \Phi^{\varepsilon} \left(\hat{\mathbf{x}}^{\varepsilon,(j)}\right)\right) z^{\varepsilon,(j)},\tag{47}$$

where $\bigl\{\bigl(\hat{\mathbf{x}}^{\varepsilon,(j)}, z^{\varepsilon,(j)}\bigr)\bigr\}\_{j=1}^{N}$ are $N$ independent copies of $\bigl(\hat{\mathbf{x}}^{\varepsilon}, z^{\varepsilon}\bigr)$. Note that, for an appropriately chosen control function $v^{\varepsilon}$, the above importance sampling estimator in (47) is an unbiased estimator for (37), i.e.,
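As an illustration only, the sampling scheme behind (45)–(47) can be sketched with an Euler–Maruyama discretization. The drift `f`, dispersion `sigma` (absorbing the matrix **b**), control `v`, and cost `Phi` below are user-supplied placeholders rather than the paper's concrete traffic model, and `Phi` is applied to the terminal state for simplicity, whereas in the paper $\Phi^{\varepsilon}$ acts on the whole path:

```python
import numpy as np

def simulate_weighted_path(f, sigma, v, x0, s, T, eps, n_steps, rng):
    """Euler-Maruyama discretization of the controlled SDE (45),
    accumulating the Girsanov likelihood ratio z^eps of (46)."""
    dt = (T - s) / n_steps
    x = np.array(x0, dtype=float)
    t, log_z = s, 0.0
    for _ in range(n_steps):
        dW = rng.normal(scale=np.sqrt(dt), size=x.shape)
        u = v(t, x)                      # control v^eps(t, x)
        sg = sigma(t, x)                 # dispersion b*sigma(t, x), scalar here
        # increment of log z^eps from (46)
        log_z += -np.dot(u, dW) / np.sqrt(eps) - np.dot(u, u) * dt / (2.0 * eps)
        # controlled dynamics (45)
        x = x + f(t, x) * dt + sg * u * dt + np.sqrt(eps) * sg * dW
        t += dt
    return x, np.exp(log_z)

def importance_sampling_estimator(Phi, f, sigma, v, x0, s, T, eps, N, n_steps, seed=0):
    """The estimator (47): average of exp(-Phi(x_hat)/eps) * z^eps over N paths."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(N):
        xT, z = simulate_weighted_path(f, sigma, v, x0, s, T, eps, n_steps, rng)
        total += np.exp(-Phi(xT) / eps) * z
    return total / N
```

With the zero control $v^{\varepsilon} \equiv 0$, the weight $z^{\varepsilon}$ reduces to one and the scheme degenerates to plain Monte Carlo for the uncontrolled dynamics.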

$$\begin{split} \mathbb{E}\_{s, \mathbf{x}^{\varepsilon}\_{s}}^{\varepsilon}[\hat{\rho}(\varepsilon)] &= \mathbb{E}\_{s, \mathbf{x}^{\varepsilon}\_{s}}^{\varepsilon} \left[ \exp \left( -\frac{1}{\varepsilon} \Phi^{\varepsilon}(\mathbf{x}^{\varepsilon}) \right) \right] \\ & \equiv \mathbb{E}\_{s, \mathbf{x}^{\varepsilon}\_{s}}^{\varepsilon}[\rho(\varepsilon)]. \end{split} \tag{48}$$

Moreover, the relative estimation error is given by

$$\mathbf{R}\_{\text{err}}(\hat{\rho}(\varepsilon)) = \frac{\sqrt{\mathbf{Var}(\hat{\rho}(\varepsilon))}}{\mathbb{E}\_{s, \mathbf{x}\_s^{\varepsilon}}^{\varepsilon}[\hat{\rho}(\varepsilon)]}, \tag{49}$$

which can be rewritten as follows

$$\mathbf{R}\_{\text{err}}(\hat{\rho}(\varepsilon)) = \left(1/\sqrt{N}\right)\sqrt{\Delta(\hat{\rho}(\varepsilon)) - 1},\tag{50}$$

## *A Collection of Papers on Chaos Theory and Its Applications*

where

$$\Delta(\hat{\rho}(\varepsilon)) = \frac{\mathbb{E}\_{s, \mathbf{x}\_{s}^{\varepsilon}}^{\varepsilon} \left[ \exp \left( -\frac{2}{\varepsilon} \Phi^{\varepsilon}(\hat{\mathbf{x}}^{\varepsilon}) \right) \left( z^{\varepsilon} \right)^{2} \right]}{\mathbb{E}\_{s, \mathbf{x}\_{s}^{\varepsilon}}^{\varepsilon} \left[ \exp \left( -\frac{1}{\varepsilon} \Phi^{\varepsilon}(\mathbf{x}^{\varepsilon}) \right) \right]^{2}}. \tag{51}$$

Hence, in order to reduce the relative estimation error $\mathbf{R}\_{\text{err}}(\hat{\rho}(\varepsilon))$, we need to control the term $\Delta(\hat{\rho}(\varepsilon))$ in (50). Note that, from Jensen's inequality, we have the following condition

$$\limsup\_{\varepsilon \to 0} -\varepsilon \log \mathbb{E}\_{s, \mathbf{x}\_{s}^{\varepsilon}}^{\varepsilon} \left[ \exp \left( -\frac{2}{\varepsilon} \Phi^{\varepsilon}(\hat{\mathbf{x}}^{\varepsilon}) \right) \right] \leq 2 \lim\_{\varepsilon \to 0} -\varepsilon \log \mathbb{E}\_{s, \mathbf{x}\_{s}^{\varepsilon}}^{\varepsilon} \left[ \exp \left( -\frac{1}{\varepsilon} \Phi^{\varepsilon}(\hat{\mathbf{x}}^{\varepsilon}) \right) \right] \tag{52}$$

which also implies $\Delta(\hat{\rho}(\varepsilon)) \geq 1$, with $\lim\_{\varepsilon \to 0} \Delta(\hat{\rho}(\varepsilon)) = 1$. Moreover, the statement in (50) further implies the following

$$\mathbf{R}\_{\text{err}}(\hat{\rho}(\varepsilon)) = \frac{1}{\sqrt{N}} \exp\left(o(1)/\varepsilon\right) \quad \text{as} \quad \varepsilon \to 0,\tag{53}$$

which is generally referred to as asymptotic efficiency or optimality. In this paper, our main objective is to choose the control function $v^{\varepsilon}$ in (45) appropriately, so that the resulting importance sampling estimator achieves a minimum rate of error growth. For this reason, we introduce the following standard definition from simulation theory (e.g., see [29] or [12]), which is useful for interpreting our main result.

**Definition 1** An importance sampling estimator of the form (47) is log-efficient (i.e., asymptotically efficient or optimal) if

$$\lim\_{\varepsilon \to 0} -\varepsilon \log \Delta(\hat{\rho}(\varepsilon)) = 0.\tag{54}$$
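The content of (50)–(54) can be seen in a deliberately simple one-dimensional analogue. The toy problem below, with $X \sim \mathcal{N}(0, \varepsilon)$ and a quadratic $\Phi$, is our own illustration rather than the paper's traffic model: the naive estimator's ratio $\Delta$ blows up exponentially as $\varepsilon \to 0$, while an exponentially tilted sampler keeps it bounded, which is exactly the log-efficiency property of Definition 1.

```python
import numpy as np

def toy_estimators(eps, N=100_000, seed=0):
    """Toy 1D analogue of (47): estimate rho(eps) = E[exp(-Phi(X)/eps)]
    for X ~ N(0, eps) and Phi(x) = (x - 1)^2 / 2, whose exact value is
    exp(-1/(4 eps)) / sqrt(2).  Compares a naive Monte Carlo estimator
    with an exponentially tilted one that samples from N(1/2, eps),
    centered at the minimizer x = 1/2 of the total exponent x^2/2 + Phi(x)."""
    rng = np.random.default_rng(seed)
    sd = np.sqrt(eps)

    # naive: sample under the original measure, weight w = exp(-Phi(X)/eps)
    x = rng.normal(0.0, sd, size=N)
    w_naive = np.exp(-((x - 1.0) ** 2) / (2.0 * eps))

    # tilted: sample X ~ N(1/2, eps) and multiply by the likelihood
    # ratio dN(0,eps)/dN(1/2,eps), the 1D analogue of z^eps in (46)
    m = 0.5
    y = rng.normal(m, sd, size=N)
    lr = np.exp(-(2.0 * m * y - m ** 2) / (2.0 * eps))
    w_tilt = np.exp(-((y - 1.0) ** 2) / (2.0 * eps)) * lr

    def delta(w):
        # sample version of (51): second moment over squared mean
        return float(np.mean(w ** 2) / np.mean(w) ** 2)

    exact = float(np.exp(-1.0 / (4.0 * eps)) / np.sqrt(2.0))
    return {"exact": exact,
            "naive": (float(w_naive.mean()), delta(w_naive)),
            "tilted": (float(w_tilt.mean()), delta(w_tilt))}
```

Per (50), the relative error of either estimator is $\sqrt{\hat{\Delta} - 1}/\sqrt{N}$. In this toy model the tilted sampler's $\Delta$ stays near $2/\sqrt{3} \approx 1.15$ for every $\varepsilon$, so its relative error does not degrade as the event becomes rarer, whereas the naive $\Delta$ grows like $\exp(1/(6\varepsilon))$.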

We now state the following result.

**Proposition 3** Suppose that the importance sampling estimator $\hat{\rho}(\varepsilon)$ in (47), with $v^{\varepsilon}(t, \mathbf{x}) = -\sigma^{T}(\mathbf{x}) \nabla\_{\mathbf{x}} J^{\varepsilon}(t, \mathbf{x})$, is uniformly log-efficient (i.e., asymptotically efficient), where $J^{\varepsilon}(t, \mathbf{x})$ satisfies the corresponding dynamic programming equation in $\Omega\_T$ with respect to the system in (45), with $J^{\varepsilon} = \Phi^{\varepsilon}$ on $\partial^{\ast} \Omega\_T$. Then, there exists a set $D \subset \mathbb{R}^3$ such that the Hausdorff dimension of $D^{c}$ is zero and

$$\lim\_{\varepsilon \to 0} \mathbf{R}\_{\text{err}}(\hat{\rho}(\varepsilon)) = 0,\tag{55}$$

for all $\mathbf{x} \in D$.

*Proof:* The above proposition basically asserts that the relative error $\mathbf{R}\_{\text{err}}(\hat{\rho}(\varepsilon))$ decreases to zero as the small random perturbation level $\varepsilon$ tends to zero. Note that, if $J^{\varepsilon}(s, \mathbf{x}^{\varepsilon})$ satisfies the dynamic programming equation in (21), then, with $v^{\varepsilon}(t, \mathbf{x}) = -\sigma^{T}(\mathbf{x}) \nabla\_{\mathbf{x}} J^{\varepsilon}(t, \mathbf{x})$, the importance sampling for the estimation problem in (36), i.e., $\mathbb{E}\_{s, \mathbf{x}\_s^{\varepsilon}}^{\varepsilon}\bigl[\exp\bigl(-\tfrac{1}{\varepsilon} \Phi^{\varepsilon}(\mathbf{x}^{\varepsilon})\bigr)\bigr]$, is uniformly log-efficient if the point $(s, \mathbf{x}\_s^{\varepsilon})$ is contained in a region of sufficient regularity that encompasses almost all of $\mathbb{R}^3$. As a result, it suffices to show that

$$\lim\_{\varepsilon \to 0} \frac{\mathbb{E}\_{s, \mathbf{x}\_{s}^{\varepsilon}}^{\varepsilon} \left[ \exp \left( -\frac{2}{\varepsilon} \Phi^{\varepsilon}(\hat{\mathbf{x}}^{\varepsilon}) \right) (z^{\varepsilon})^2 \right]}{\mathbb{E}\_{s, \mathbf{x}\_{s}^{\varepsilon}}^{\varepsilon} \left[ \exp \left( -\frac{1}{\varepsilon} \Phi^{\varepsilon}(\mathbf{x}^{\varepsilon}) \right) \right]^2} = 1 \tag{56}$$

holds uniformly for all $(s, \mathbf{x}\_s^{\varepsilon})$ in any compact subset of $\Omega\_T$.

*Rare Event Simulation in a Dynamical Model Describing the Spread of Traffic Congestions… DOI: http://dx.doi.org/10.5772/intechopen.95789*

Let us define the following two functions

$$\psi\_1^{\varepsilon}(s, \mathbf{x}\_{s}^{\varepsilon}) = -\varepsilon \log \mathbb{E}\_{s, \mathbf{x}\_{s}^{\varepsilon}}^{\varepsilon} \left[ \exp \left( -\frac{2}{\varepsilon} \Phi^{\varepsilon}(\hat{\mathbf{x}}^{\varepsilon}) \right) \right] \tag{57}$$

and

$$\begin{split} \psi\_{2}^{\varepsilon}(s, \mathbf{x}\_{s}^{\varepsilon}) &= -\varepsilon \log \mathbb{E}\_{s, \mathbf{x}\_{s}^{\varepsilon}}^{\varepsilon} \left[ \exp \left( -\frac{2}{\varepsilon} \Phi^{\varepsilon}(\hat{\mathbf{x}}^{\varepsilon}) \right) (z^{\varepsilon})^{2} \right] \\ &= -\varepsilon \log \mathbb{E}\_{s, \mathbf{x}\_{s}^{\varepsilon}}^{\varepsilon} \left[ \exp \left( -\frac{2}{\varepsilon} \Phi^{\varepsilon}(\hat{\mathbf{x}}^{\varepsilon}) - \frac{2}{\sqrt{\varepsilon}} \int\_{s}^{T} \langle v^{\varepsilon}(t, \hat{\mathbf{x}}\_{t}^{\varepsilon}), dW\_{t} \rangle - \frac{1}{\varepsilon} \int\_{s}^{T} \left| v^{\varepsilon}(t, \hat{\mathbf{x}}\_{t}^{\varepsilon}) \right|^{2} dt \right) \right]. \end{split} \tag{58}$$

Note that, from the large deviations results for the diffusion process $\hat{\mathbf{x}}\_t^{\varepsilon}$ (e.g., see [21], Chapter 4, [30] or [27], pp. 332–334; see also the asymptotic estimates in Proposition 2 of Section 3), there exist constants $C, \gamma > 0$ and $\varepsilon\_0 > 0$ such that, for all $\varepsilon \in (0, \varepsilon\_0)$,

$$\mathbb{E}\_{0, \mathbf{x}\_{0}^{\varepsilon}}^{\varepsilon} \left[ \exp \left( -\frac{1}{\varepsilon} \left( \mu\_{2}^{\varepsilon}(\hat{\tau}^{\varepsilon}, \hat{\mathbf{x}}\_{\hat{\tau}^{\varepsilon}}^{\varepsilon}) - 2\mu\_{1}^{\varepsilon}(\hat{\tau}^{\varepsilon}, \hat{\mathbf{x}}\_{\hat{\tau}^{\varepsilon}}^{\varepsilon}) \right) - \int\_{0}^{\hat{\tau}^{\varepsilon}} \sum\_{i,j=1}^{3} a\_{ij} \left( \hat{\mathbf{x}}\_{s}^{\varepsilon} \right) \frac{\partial^{2} \mu\_{1}^{\varepsilon} \left( s, \hat{\mathbf{x}}\_{s}^{\varepsilon} \right)}{\partial \mathbf{x}^{i} \partial \mathbf{x}^{j}} ds \right) \right] \leq C \exp \left( -\gamma/(2\varepsilon) \right), \tag{59}$$

where $\hat{\tau}^{\varepsilon} = \inf \left\{ t > s \,\middle|\, \hat{\mathbf{x}}\_t^{\varepsilon} \in \partial \Omega \right\} \wedge T$. Note that the above relation further implies that

$$\lim\_{\varepsilon \to 0} \exp \left( -\frac{1}{\varepsilon} \left( \mu\_2^{\varepsilon}(0, \mathbf{x}\_0) - 2\mu\_1^{0}(0, \mathbf{x}\_0) \right) \right) = \exp \left( \int\_0^T \sum\_{i,j=1}^3 a\_{ij} \left( \hat{\mathbf{x}}\_s^{\varepsilon} \right) \frac{\partial^2 \mu\_1^{\varepsilon} \left( s, \hat{\mathbf{x}}\_s^{\varepsilon} \right)}{\partial \mathbf{x}^i \partial \mathbf{x}^j} ds \right). \tag{60}$$

Moreover, in the same way, we can also show the following relation

$$\lim\_{\varepsilon \to 0} \exp \left( -\frac{1}{\varepsilon} \left( \nu\_1^{\varepsilon}(0, \mathbf{x}\_0) - \nu\_1^{0}(0, \mathbf{x}\_0) \right) \right) = \exp \left( \int\_0^T \sum\_{i,j=1}^3 a\_{ij} \left( \hat{\mathbf{x}}\_s^{\varepsilon} \right) \frac{\partial^2 \nu\_1^{\varepsilon} \left( s, \hat{\mathbf{x}}\_s^{\varepsilon} \right)}{\partial \mathbf{x}^i \partial \mathbf{x}^j} ds \right). \tag{61}$$

Finally, if we combine the above two equations, then we have the following condition

$$\lim\_{\varepsilon \to 0} \exp\left(-\frac{1}{\varepsilon} \left(\psi\_2^{\varepsilon}(0, \mathbf{x}\_0) - \psi\_1^{\varepsilon}(0, \mathbf{x}\_0)\right)\right) = 1,\tag{62}$$

which implies the uniform log-efficiency for the estimation problem in (36). This completes the proof of Proposition 3.

**Remark 2** The above proposition basically ensures a minimum relative estimation error in the small noise limit for the estimation problem in (36). Note that, if $J^{\varepsilon}(t, \mathbf{x}^{\varepsilon})$ satisfies the dynamic programming equation in (21) (i.e., if it is the solution for the family of stochastic control problems that are associated with the underlying distributed system with small random perturbation), then, with $v^{\varepsilon}(t, \mathbf{x}) = -\sigma^{T}(\mathbf{x}) \nabla\_{\mathbf{x}} J^{\varepsilon}(t, \mathbf{x})$, one can provide a numerical computational framework for constructing efficient importance sampling estimators, with an exponential variance decay rate based on an exponentially-tilted biasing distribution, for rare-event simulations involving the behavior of the diffusion process $\mathbf{x}^{\varepsilon}$.
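If some approximation of the value function $J^{\varepsilon}$ is available (e.g., from a numerical solution of the dynamic programming equation in (21)), the control $v^{\varepsilon} = -\sigma^{T} \nabla\_{\mathbf{x}} J^{\varepsilon}$ of Remark 2 can be assembled generically. The sketch below uses a central finite-difference gradient as a hypothetical stand-in for however $\nabla\_{\mathbf{x}} J^{\varepsilon}$ is actually computed; `J` and `sigma_T` are user-supplied callables, not the paper's concrete functions:

```python
import numpy as np

def control_from_value(J, sigma_T, h=1e-5):
    """Build the control v(t, x) = -sigma^T(x) grad_x J(t, x) from any
    callable value-function approximation J, estimating the gradient by
    central finite differences with stencil width h."""
    def v(t, x):
        x = np.asarray(x, dtype=float)
        grad = np.empty_like(x)
        for i in range(x.size):
            e = np.zeros_like(x)
            e[i] = h
            grad[i] = (J(t, x + e) - J(t, x - e)) / (2.0 * h)
        return -sigma_T(x) @ grad       # v = -sigma^T(x) grad_x J
    return v
```

The returned closure has the signature `v(t, x)` expected by a simulator of the controlled dynamics (45), so swapping in a better gradient (automatic differentiation, an interpolated PDE solution) only changes the body of `v`.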

**Remark 3** Here, our primary intent is to provide a theoretical framework, rather than to present specific numerical simulation results with respect to system parameters (such as the propagation rate *β* and recovery rate *μ* of the network), which is an ongoing research area.
