For MCTF, the elements of the discretized right-hand-side vectors can be written as

$$\mathbf{w}\_1^{\rm MCTF}[m] = -\int\_{S\_m} d\mathbf{r}\, \mathbf{t}\_m(\mathbf{r}) \cdot \mathbf{E}^{\rm inc}(\mathbf{r}) \tag{20}$$

$$\mathbf{w}\_2^{\rm MCTF}[m] = -\eta\_o \eta\_p \int\_{S\_m} d\mathbf{r}\, \mathbf{t}\_m(\mathbf{r}) \cdot \mathbf{H}^{\rm inc}(\mathbf{r}),\tag{21}$$

respectively. Furthermore, the discretized operators can be written as

$$\overline{\mathbf{T}}\_{u}^{\rm T}[m,n] = \mathrm{i}k\_{u} \int\_{S\_{m}} d\mathbf{r}\, \mathbf{t}\_{m}(\mathbf{r}) \cdot \int\_{S\_{n}} d\mathbf{r}'\, g\_{u}(\mathbf{r},\mathbf{r}') \mathbf{b}\_{n}(\mathbf{r}') + \frac{\mathrm{i}}{k\_{u}} \int\_{S\_{m}} d\mathbf{r}\, \mathbf{t}\_{m}(\mathbf{r}) \cdot \int\_{S\_{n}} d\mathbf{r}'\, \nabla g\_{u}(\mathbf{r},\mathbf{r}') \nabla' \cdot \mathbf{b}\_{n}(\mathbf{r}') \tag{22}$$

$$\overline{\mathbf{K}}\_{{\rm PV},u}^{\rm T}[m,n] = \int\_{S\_{m}} d\mathbf{r}\, \mathbf{t}\_{m}(\mathbf{r}) \cdot \int\_{{\rm PV},S\_{n}} d\mathbf{r}'\, \mathbf{b}\_{n}(\mathbf{r}') \times \nabla' g\_{u}(\mathbf{r},\mathbf{r}'),\tag{23}$$

where the integrals are evaluated on the supports of the testing and basis functions ($S\_m$ and $S\_n$).
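The testing and basis functions used throughout are Rao-Wilton-Glisson (RWG) functions. As a quick numerical sanity check of the half-RWG form used in the discretizations below, the following sketch (with hypothetical vertex coordinates for two coplanar triangles) verifies the two defining properties: the component normal to the shared edge is continuous across it, and the surface divergence on each triangle is the constant $\pm l/A$.

```python
import numpy as np

# two coplanar triangles sharing the edge from e0 to e1 (hypothetical
# coordinates); l is the shared-edge length, v_plus/v_minus the free vertices
e0, e1 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
v_plus, v_minus = np.array([0.3, 0.8, 0.0]), np.array([0.6, -0.7, 0.0])
l = np.linalg.norm(e1 - e0)

def area(a, b, c):
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a))

A_plus, A_minus = area(e0, e1, v_plus), area(e0, e1, v_minus)

# half-RWG functions: b(r) = +-(l / 2A)(r - v_free) on each triangle
def b_plus(r):  return l / (2 * A_plus) * (r - v_plus)
def b_minus(r): return l / (2 * A_minus) * (v_minus - r)

# in-plane unit normal of the shared edge, pointing from + to - triangle
n_hat = np.array([0.0, -1.0, 0.0])

# the component normal to the shared edge is continuous (equals 1) at
# every edge point, so the current crosses the edge without line charges
p = e0 + 0.37 * (e1 - e0)         # arbitrary point on the shared edge
jn_plus = b_plus(p) @ n_hat
jn_minus = b_minus(p) @ n_hat
print(jn_plus, jn_minus)          # both 1.0

# surface divergence is the constant l/A (checked by finite differences)
h = 1e-6
q = np.array([0.4, 0.3, 0.0])     # interior point of the + triangle
div = ((b_plus(q + [h, 0, 0]) - b_plus(q - [h, 0, 0]))[0]
       + (b_plus(q + [0, h, 0]) - b_plus(q - [0, h, 0]))[1]) / (2 * h)
print(div, l / A_plus)            # equal
```

These two properties are exactly what makes the RWG functions divergence-conforming, so the charge term in Eq. (22) stays finite.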

At this stage, we can consider the interaction of two half RWG functions associated with the $a$th triangle of the $m$th edge and the $b$th triangle of the $n$th edge, respectively ($a, b \in \{1, 2\}$). One can obtain

$$\begin{split} \overline{\mathbf{T}}\_{u}^{\rm T}[m,n,a,b] &= \frac{\gamma\_{ma}\gamma\_{nb}l\_{m}l\_{n}}{4} \mathrm{i}k\_{u} \frac{1}{A\_{ma}} \int\_{S\_{ma}} d\mathbf{r}\, (\mathbf{r} - \mathbf{r}\_{ma}) \cdot \frac{1}{A\_{nb}} \int\_{S\_{nb}} d\mathbf{r}'\, (\mathbf{r}' - \mathbf{r}\_{nb}) g\_{u}(\mathbf{r}, \mathbf{r}') \\ &- \gamma\_{ma}\gamma\_{nb}l\_{m}l\_{n} \frac{\mathrm{i}}{k\_{u}} \frac{1}{A\_{ma}} \int\_{S\_{ma}} d\mathbf{r}\, \frac{1}{A\_{nb}} \int\_{S\_{nb}} d\mathbf{r}'\, g\_{u}(\mathbf{r}, \mathbf{r}') \end{split} \tag{24}$$

$$\overline{\mathbf{K}}\_{{\rm PV},u}^{\rm T}[m,n,a,b] = \frac{\gamma\_{ma}\gamma\_{nb}l\_{m}l\_{n}}{4} \frac{1}{A\_{ma}} \int\_{S\_{ma}} d\mathbf{r}\, (\mathbf{r} - \mathbf{r}\_{ma}) \cdot (\mathbf{r} - \mathbf{r}\_{nb}) \times \frac{1}{A\_{nb}} \int\_{{\rm PV},S\_{nb}} d\mathbf{r}'\, \nabla' g\_{u}(\mathbf{r}, \mathbf{r}'),\tag{25}$$

where $\gamma\_{ma}, \gamma\_{nb} = \pm 1$, depending on the directions of the testing and basis functions on the triangles. For the integrations on the testing and basis triangles, alternative methods can be used. Applying Gaussian quadrature is common in the literature, provided that the singularity of Green's function is extracted from the inner integrals. In any case, the integration methods used on the testing and basis triangles do not have to be the same; that is, different sampling schemes can be used. For the sake of brevity, we consider a single-point testing scheme by using the center point $\mathbf{r}\_{ma}^{\rm cr}$ of each triangle, leading to

$$\begin{split} \overline{\mathbf{T}}\_{u}^{\rm T}[m,n,a,b] &= \frac{\gamma\_{ma}\gamma\_{nb}l\_{m}l\_{n}}{4} \mathrm{i}k\_{u} (\mathbf{r}\_{ma}^{\rm cr} - \mathbf{r}\_{ma}) \cdot \frac{1}{A\_{nb}} \int\_{S\_{nb}} d\mathbf{r}'\, (\mathbf{r}' - \mathbf{r}\_{nb}) g\_{u}(\mathbf{r}\_{ma}^{\rm cr}, \mathbf{r}') \\ &- \gamma\_{ma}\gamma\_{nb}l\_{m}l\_{n} \frac{\mathrm{i}}{k\_{u}} \frac{1}{A\_{nb}} \int\_{S\_{nb}} d\mathbf{r}'\, g\_{u}(\mathbf{r}\_{ma}^{\rm cr}, \mathbf{r}') \end{split} \tag{26}$$

$$\begin{split} \overline{\mathbf{T}}\_{u}^{\rm T}[m,n,a,b] &= \frac{\gamma\_{ma}\gamma\_{nb}l\_{m}l\_{n}}{4} \mathrm{i}k\_{u} (\boldsymbol{\rho}\_{ma}^{\rm cr} - \boldsymbol{\rho}\_{ma}) \cdot \frac{1}{A\_{nb}} \int\_{S\_{nb}} d\mathbf{r}'\, (\boldsymbol{\rho}' - \boldsymbol{\rho}\_{ma}^{\rm cr}) g\_{u}(\mathbf{r}\_{ma}^{\rm cr},\mathbf{r}') \\ &+ \frac{\gamma\_{ma}\gamma\_{nb}l\_{m}l\_{n}}{4} \mathrm{i}k\_{u} (\boldsymbol{\rho}\_{ma}^{\rm cr} - \boldsymbol{\rho}\_{ma}) \cdot (\boldsymbol{\rho}\_{ma}^{\rm cr} - \boldsymbol{\rho}\_{nb}) \frac{1}{A\_{nb}} \int\_{S\_{nb}} d\mathbf{r}'\, g\_{u}(\mathbf{r}\_{ma}^{\rm cr},\mathbf{r}') \\ &- \gamma\_{ma}\gamma\_{nb}l\_{m}l\_{n} \frac{\mathrm{i}}{k\_{u}} \frac{1}{A\_{nb}} \int\_{S\_{nb}} d\mathbf{r}'\, g\_{u}(\mathbf{r}\_{ma}^{\rm cr},\mathbf{r}') \end{split} \tag{27}$$

$$\overline{\mathbf{K}}\_{{\rm PV},u}^{\rm T}[m,n,a,b] = \frac{\gamma\_{ma}\gamma\_{nb}l\_{m}l\_{n}}{4} (\mathbf{r}\_{ma}^{\rm cr} - \mathbf{r}\_{ma}) \cdot (\mathbf{r}\_{ma}^{\rm cr} - \mathbf{r}\_{nb}) \times \frac{1}{A\_{nb}}\int\_{{\rm PV},S\_{nb}} d\mathbf{r}'\, \nabla' g\_{u}(\mathbf{r}\_{ma}^{\rm cr}, \mathbf{r}'),\tag{28}$$

where $\{\boldsymbol{\rho}', \boldsymbol{\rho}\_{ma}, \boldsymbol{\rho}\_{nb}, \boldsymbol{\rho}\_{ma}^{\rm cr}\}$ represent the projections of $\{\mathbf{r}', \mathbf{r}\_{ma}, \mathbf{r}\_{nb}, \mathbf{r}\_{ma}^{\rm cr}\}$ onto the basis plane.
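For the inner integrals over the basis triangles, the quadrature choices mentioned above can be compared directly. The sketch below (assumed free-space Green's function, a hypothetical triangle with roughly $\lambda/10$ edges, and a well-separated observation point, so that no singularity extraction is needed) evaluates the normalized inner integral $(1/A\_{nb})\int d\mathbf{r}'\, g\_u$ with a single-point rule, the degree-2 edge-midpoint rule, and a dense reference:

```python
import numpy as np

k = 2 * np.pi                      # k_u for an assumed wavelength of 1
# source triangle with ~lambda/10 edges (hypothetical coordinates)
V = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0]])
r_obs = np.array([0.05, 0.05, 0.4])   # well-separated observation point

def g(r, rp):                      # homogeneous-medium Green's function
    R = np.linalg.norm(r - rp)
    return np.exp(1j * k * R) / (4 * np.pi * R)

def IA(points, weights):
    # normalized integral (1/A) int dr' g(r, r'); weights sum to one
    return sum(w * g(r_obs, p) for w, p in zip(weights, points))

def bary(u, v):                    # barycentric-to-Cartesian map
    return V[0] + u * (V[1] - V[0]) + v * (V[2] - V[0])

# single-point (centroid) rule vs the degree-2 edge-midpoint rule
I1 = IA([bary(1/3, 1/3)], [1.0])
I3 = IA([bary(0.5, 0.0), bary(0.5, 0.5), bary(0.0, 0.5)], [1/3] * 3)

# reference: centroid rule on a regular ns^2 subdivision of the triangle
ns = 40
pts, wts = [], []
for i in range(ns):
    for j in range(ns - i):
        pts.append(bary((i + 1/3) / ns, (j + 1/3) / ns))      # upward cells
        wts.append(1 / ns**2)
        if i + j <= ns - 2:
            pts.append(bary((i + 2/3) / ns, (j + 2/3) / ns))  # downward cells
            wts.append(1 / ns**2)
Iref = IA(pts, wts)

print(abs(I1 - Iref) / abs(Iref), abs(I3 - Iref) / abs(Iref))
```

For such smooth integrands, the three-point rule is already close to the dense reference, while the single-point rule carries a percent-level error; near the singularity, the extracted forms listed below become necessary.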

It is generally more efficient to compute the interactions triangle by triangle (rather than RWG by RWG), since common integrals related to a basis triangle can be evaluated once and used in multiple interactions related to that triangle. For MCTF, the interactions are calculated (for $a = 1, 2$, $b = 1, 2$, and $u = o, p$) as

$$\overline{\mathbf{Z}}\_{11}^{\rm MCTF}[m,n] \leftarrow \frac{\gamma\_{ma}\gamma\_{nb}l\_{m}l\_{n}}{4} \mathrm{i}k\_{u} \eta\_{u} \left\{ (\boldsymbol{\rho}\_{ma}^{\rm cr} - \boldsymbol{\rho}\_{ma}) \cdot [\mathbf{I}\_{ma,nb,u}^{\rm B} + (\boldsymbol{\rho}\_{ma}^{\rm cr} - \boldsymbol{\rho}\_{nb})I\_{ma,nb,u}^{\rm A}] - \frac{4}{k\_{u}^{2}}I\_{ma,nb,u}^{\rm A} \right\} \tag{29}$$

$$\overline{\mathbf{Z}}\_{22}^{\rm MCTF}[m,n] \leftarrow \frac{\gamma\_{ma}\gamma\_{nb}l\_{m}l\_{n}}{4} \mathrm{i}k\_{u} \frac{\eta\_{o}\eta\_{p}}{\eta\_{u}} \left\{ (\boldsymbol{\rho}\_{ma}^{\rm cr} - \boldsymbol{\rho}\_{ma}) \cdot [\mathbf{I}\_{ma,nb,u}^{\rm B} + (\boldsymbol{\rho}\_{ma}^{\rm cr} - \boldsymbol{\rho}\_{nb})I\_{ma,nb,u}^{\rm A}] - \frac{4}{k\_{u}^{2}}I\_{ma,nb,u}^{\rm A} \right\} \tag{30}$$

$$\overline{\mathbf{Z}}\_{12}^{\rm MCTF}[m,n] \leftarrow -\frac{\gamma\_{ma}\gamma\_{nb}l\_{m}l\_{n}}{4} \left(\mathbf{r}\_{ma}^{\rm cr} - \mathbf{r}\_{ma}\right) \cdot \left[\left(\mathbf{r}\_{ma}^{\rm cr} - \mathbf{r}\_{nb}\right) \times \mathbf{I}\_{ma,nb,u}^{\rm C,PV}\right] \tag{31}$$

$$\overline{\mathbf{Z}}\_{21}^{\rm MCTF}[m,n] \leftarrow \frac{\gamma\_{ma}\gamma\_{nb}l\_{m}l\_{n}}{4} \eta\_{o}\eta\_{p} \left(\mathbf{r}\_{ma}^{\rm cr} - \mathbf{r}\_{ma}\right) \cdot \left[\left(\mathbf{r}\_{ma}^{\rm cr} - \mathbf{r}\_{nb}\right) \times \mathbf{I}\_{ma,nb,u}^{\rm C,PV}\right],\tag{32}$$

where ← indicates the update operation. Each matrix element is obtained by combining the contributions of four triangle-triangle interactions. By using triangle-triangle interactions, a basis integral ($I\_{ma,nb,u}^{\rm A}$, $\mathbf{I}\_{ma,nb,u}^{\rm B}$, or $\mathbf{I}\_{ma,nb,u}^{\rm C,PV}$) is used in nine different RWG-RWG interactions. These common integrals (with singularity extractions) can be listed as

$$\begin{split} I\_{ma,nb,u}^{\rm A} &= \frac{1}{A\_{nb}} \int\_{S\_{nb}} d\mathbf{r}'\, g\_{u}(\mathbf{r}\_{ma}^{\rm cr}, \mathbf{r}') = \frac{1}{A\_{nb}} \int\_{S\_{nb}} d\mathbf{r}' \Big[ g\_{u}(\mathbf{r}\_{ma}^{\rm cr}, \mathbf{r}') - \frac{1}{4\pi|\mathbf{r}\_{ma}^{\rm cr} - \mathbf{r}'|} \Big] \\ &+ \frac{1}{A\_{nb}} \int\_{S\_{nb}} d\mathbf{r}'\, \frac{1}{4\pi|\mathbf{r}\_{ma}^{\rm cr} - \mathbf{r}'|} \end{split} \tag{33}$$

$$\begin{split} \mathbf{I}\_{ma,nb,u}^{\rm B} &= \frac{1}{A\_{nb}} \int\_{S\_{nb}} d\mathbf{r}'\, (\boldsymbol{\rho}' - \boldsymbol{\rho}\_{ma}^{\rm cr}) g\_{u}(\mathbf{r}\_{ma}^{\rm cr}, \mathbf{r}') = \frac{1}{A\_{nb}} \int\_{S\_{nb}} d\mathbf{r}'\, (\boldsymbol{\rho}' - \boldsymbol{\rho}\_{ma}^{\rm cr}) \left[ g\_{u}(\mathbf{r}\_{ma}^{\rm cr}, \mathbf{r}') - \frac{1}{4\pi|\mathbf{r}\_{ma}^{\rm cr} - \mathbf{r}'|} \right] \\ &+ \frac{1}{A\_{nb}} \int\_{S\_{nb}} d\mathbf{r}'\, (\boldsymbol{\rho}' - \boldsymbol{\rho}\_{ma}^{\rm cr}) \frac{1}{4\pi|\mathbf{r}\_{ma}^{\rm cr} - \mathbf{r}'|} \end{split} \tag{34}$$

$$\begin{split} \mathbf{I}\_{ma,nb,u}^{\rm C,PV} &= \frac{1}{A\_{nb}} \int\_{{\rm PV},S\_{nb}} d\mathbf{r}'\, \nabla' g\_{u}(\mathbf{r}\_{ma}^{\rm cr},\mathbf{r}') = \frac{1}{A\_{nb}} \int\_{{\rm PV},S\_{nb}} d\mathbf{r}'\, \nabla' \left[ g\_{u}(\mathbf{r}\_{ma}^{\rm cr},\mathbf{r}') - \frac{1}{4\pi|\mathbf{r}\_{ma}^{\rm cr} - \mathbf{r}'|} \right] \\ &+ \frac{1}{A\_{nb}} \int\_{{\rm PV},S\_{nb}} d\mathbf{r}'\, \nabla' \left( \frac{1}{4\pi|\mathbf{r}\_{ma}^{\rm cr} - \mathbf{r}'|} \right). \end{split} \tag{35}$$

Using the same convention and single-point testing, the elements of the right-hand-side vectors are evaluated as

Integral-Equation Formulations of Plasmonic Problems in the Visible Spectrum and Beyond http://dx.doi.org/10.5772/67216 199

$$\mathbf{w}\_1^{\rm MCTF}[m] \leftarrow -\frac{\gamma\_{ma}l\_m}{2} \left(\mathbf{r}\_{ma}^{\rm cr} - \mathbf{r}\_{ma}\right) \cdot \mathbf{E}^{\rm inc}(\mathbf{r}\_{ma}^{\rm cr}) \tag{36}$$

$$\mathbf{w}\_2^{\rm MCTF}[m] \leftarrow -\frac{\gamma\_{ma}l\_m}{2} \eta\_o \eta\_p \left(\mathbf{r}\_{ma}^{\rm cr} - \mathbf{r}\_{ma}\right) \cdot \mathbf{H}^{\rm inc}(\mathbf{r}\_{ma}^{\rm cr}) \tag{37}$$

for $a \in \{1, 2\}$.
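The savings from triangle-based assembly can be illustrated with a small sketch. Using a toy flat mesh and a centroid-to-centroid Green's function as a stand-in for the common integrals of Eqs. (33)-(35) (orientation signs, edge-length factors, and self-term treatment are omitted, so the matrix values are only illustrative), the two assembly orders produce identical matrices while the triangle loop evaluates far fewer integrals:

```python
import numpy as np
from collections import defaultdict

# toy flat mesh: a unit square cut into a 3x3 grid of cells, each cell
# split into two triangles (18 triangles; interior edges carry RWGs)
ng = 3
verts = np.array([[i / ng, j / ng, 0.0]
                  for j in range(ng + 1) for i in range(ng + 1)])
tris = []
for j in range(ng):
    for i in range(ng):
        a, b = j * (ng + 1) + i, j * (ng + 1) + i + 1
        c, d = a + ng + 1, b + ng + 1
        tris += [(a, b, d), (a, d, c)]

# RWG functions live on edges shared by exactly two triangles
edge2tris = defaultdict(list)
for p, t in enumerate(tris):
    for e in [(t[0], t[1]), (t[1], t[2]), (t[2], t[0])]:
        edge2tris[tuple(sorted(e))].append(p)
rwg = [ts for ts in edge2tris.values() if len(ts) == 2]
E, T = len(rwg), len(tris)

k, cen = 2 * np.pi, verts[np.array(tris)].mean(axis=1)
calls = [0]
def IA(p, q):
    # stand-in for a common basis integral: Green's function between
    # triangle centroids (self terms skipped for brevity)
    calls[0] += 1
    R = np.linalg.norm(cen[p] - cen[q])
    return np.exp(1j * k * R) / (4 * np.pi * R) if R > 0 else 0.0

# RWG-by-RWG assembly: the integral is recomputed for every edge pair
Z1 = np.zeros((E, E), dtype=complex)
for m, (p1, p2) in enumerate(rwg):
    for n, (q1, q2) in enumerate(rwg):
        Z1[m, n] = sum(IA(p, q) for p in (p1, p2) for q in (q1, q2))
calls_rwg, calls[0] = calls[0], 0

# triangle-by-triangle assembly: each IA(p, q) is computed once and
# reused by every edge pair touching triangles p and q (up to nine pairs)
tri2edges = defaultdict(list)
for m, (p1, p2) in enumerate(rwg):
    tri2edges[p1].append(m); tri2edges[p2].append(m)
Z2 = np.zeros((E, E), dtype=complex)
for p in range(T):
    for q in range(T):
        val = IA(p, q)
        for m in tri2edges[p]:
            for n in tri2edges[q]:
                Z2[m, n] += val
calls_tri = calls[0]

assert np.allclose(Z1, Z2)
print(calls_rwg, calls_tri)   # 1764 vs 324 integral evaluations
```

On this small mesh the triangle loop already reduces the number of integral evaluations by more than a factor of five; the reduction approaches nine as the average number of RWG edges per triangle approaches three.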

198 Dynamical Systems - Analytical and Computational Techniques


Matrix equations obtained as summarized in this section can be solved in different ways, particularly via iterative techniques accelerated with fast algorithms. Once the current coefficients $a\_J$ and $a\_M$ are found, the electric and magnetic fields can be obtained at any location inside or outside the object. Using the RWG functions, the secondary fields can be written as

$$\begin{split} \mathbf{E}^{\rm sec}(\mathbf{r}) &= \sum\_{n=1}^{N} \sum\_{b=1}^{2} a\_J[n] \frac{\gamma\_{nb} l\_n}{2} \mathrm{i}k\_u \eta\_u \left\{ (\boldsymbol{\rho} - \boldsymbol{\rho}\_{nb}) I\_{nb,u}^{\rm A}(\mathbf{r}) + \mathbf{I}\_{nb,u}^{\rm B}(\mathbf{r}) - \frac{2}{k\_{u}^2} \mathbf{I}\_{nb,u}^{\rm C}(\mathbf{r}) \right\} \\ &- \sum\_{n=1}^{N} \sum\_{b=1}^{2} a\_M[n] \frac{\gamma\_{nb} l\_n}{2} (\mathbf{r} - \mathbf{r}\_{nb}) \times \mathbf{I}\_{nb,u}^{\rm C}(\mathbf{r}) \end{split} \tag{38}$$

$$\begin{split} \mathbf{H}^{\rm sec}(\mathbf{r}) &= \sum\_{n=1}^{N} \sum\_{b=1}^{2} a\_M[n] \frac{\gamma\_{nb} l\_n}{2} \frac{\mathrm{i}k\_u}{\eta\_u} \left\{ (\boldsymbol{\rho} - \boldsymbol{\rho}\_{nb}) I\_{nb,u}^{\rm A}(\mathbf{r}) + \mathbf{I}\_{nb,u}^{\rm B}(\mathbf{r}) - \frac{2}{k\_u^2} \mathbf{I}\_{nb,u}^{\rm C}(\mathbf{r}) \right\} \\ &+ \sum\_{n=1}^{N} \sum\_{b=1}^{2} a\_J[n] \frac{\gamma\_{nb} l\_n}{2} (\mathbf{r} - \mathbf{r}\_{nb}) \times \mathbf{I}\_{nb,u}^{\rm C}(\mathbf{r}),\tag{39}$$

where

$$I\_{nb,u}^{\rm A}(\mathbf{r}) = \frac{1}{A\_{nb}} \int\_{S\_{nb}} d\mathbf{r}'\, g\_{u}(\mathbf{r}, \mathbf{r}') \tag{40}$$

$$\mathbf{I}\_{nb,u}^{\rm B}(\mathbf{r}) = \frac{1}{A\_{nb}} \int\_{S\_{nb}} d\mathbf{r}'\, (\boldsymbol{\rho}' - \boldsymbol{\rho}) g\_{u}(\mathbf{r}, \mathbf{r}') \tag{41}$$

$$\mathbf{I}\_{nb,u}^{\rm C}(\mathbf{r}) = \frac{1}{A\_{nb}} \int\_{S\_{nb}} d\mathbf{r}'\, \nabla' g\_{u}(\mathbf{r}, \mathbf{r}'). \tag{42}$$

Similar to the matrix elements, a triangle loop (rather than an RWG loop) can be used to efficiently perform the near-field computations. If the observation point $\mathbf{r}$ is close to the surface of the object, singularity extractions must be used for accurate integrations. If the medium parameters are set to $\varepsilon\_p$ and $\mu\_p$, the computations above lead to the inner electromagnetic fields, while the fields outside the surface vanish due to the equivalence theorem. In fact, this can be used to assess the accuracy of numerical solutions, since any nonzero field outside corresponds to a numerical error. Similarly, using $\varepsilon\_o$ and $\mu\_o$, the inner fields must be zero, while the secondary fields are obtained outside. Then, the total fields outside the object can be obtained as

$$E(r) = E^{\rm inc}(r) + E^{\rm sec}(r) \tag{43}$$

$$H(r) = H^{\rm inc}(r) + H^{\rm sec}(r). \tag{44}$$

#### 4. Matrix-vector multiplications with MLFMA

Plasmonic problems often involve large structures in terms of wavelength. In addition, typical $\lambda\_o/10$ triangulations may not be sufficient to obtain accurate results, and dense discretizations are usually needed, leading to large numbers of unknowns. Since direct solutions (e.g., Gaussian elimination) of the resulting matrix equations may not be feasible, fast iterative solvers are required for efficient analysis of plasmonic structures in reasonable processing times and using available memory. MLFMA is an efficient algorithm that can be used to perform fast matrix-vector multiplications with $O(N \log N)$ complexity for an $N \times N$ dense matrix equation derived from an electrodynamic problem [25, 26]. Hence, MLFMA can be used within a Krylov subspace algorithm, such as the generalized minimal residual (GMRES) method, for efficient iterative solutions.
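As a sketch of this solver structure, the snippet below wraps a matrix-vector multiplication in a `LinearOperator` and hands it to SciPy's GMRES. The dense stand-in matrix and the explicit `matvec` are placeholders for an actual MLFMA-accelerated multiplication, which would never store the matrix:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(7)
N = 200
# dense, well-conditioned stand-in for an MCTF system matrix; an actual
# solver never stores Z, and matvec below would instead run the MLFMA
# aggregation/translation/disaggregation stages in O(N log N)
Z = np.eye(N) + 0.1 * (rng.standard_normal((N, N))
                       + 1j * rng.standard_normal((N, N))) / np.sqrt(N)
w = rng.standard_normal(N) + 1j * rng.standard_normal(N)

def matvec(a):
    # placeholder for the fast multiplication y = Z a
    return Z @ a

A = LinearOperator((N, N), matvec=matvec, dtype=complex)
a, info = gmres(A, w)              # the Krylov solver only sees matvec
res = np.linalg.norm(Z @ a - w) / np.linalg.norm(w)
print(info, res)                   # info == 0 on convergence
```

The point of the `LinearOperator` interface is precisely that GMRES needs only the action of the matrix on a vector, which is the operation MLFMA accelerates.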

MLFMA is well known in the literature as a method with controllable accuracy. In practice, however, its accuracy heavily depends on the expansion method. In the most standard form, plane waves are used to diagonalize the addition theorem for Green's function. Then, the interaction distances, and hence the recursive clustering of the electrodynamic interactions, are limited by a low-frequency breakdown. For example, two to three digits of accuracy (1% and 0.1% maximum relative error) using a one-box-buffer scheme require a minimum box size of around $\lambda\_u$. It is possible to use smaller boxes and/or to achieve higher accuracy if alternative expansion tools [38, 39], such as a direct application of multipoles [40] or evanescent waves [41], are employed. In this chapter, where numerical solutions are performed with a maximum error of 1%, we restrict ourselves to the plane-wave expansion.

Using plane waves, Green's function is decomposed as

$$g\_{u}(\mathbf{r},\mathbf{r}') = \frac{\exp\left(\mathrm{i}k\_{u}|\mathbf{r}-\mathbf{r}'|\right)}{4\pi|\mathbf{r}-\mathbf{r}'|} = \frac{\exp\left(\mathrm{i}k\_{u}|\mathbf{w}+\mathbf{v}|\right)}{4\pi|\mathbf{w}+\mathbf{v}|} \approx \frac{\mathrm{i}k\_{u}}{\left(4\pi\right)^{2}} \int d^{2}\hat{\mathbf{k}}\, \beta(\mathbf{k}\_{u},\mathbf{v})\alpha\_{\tau}(\mathbf{k}\_{u},\mathbf{w})\tag{45}$$

for $w = |\mathbf{w}| > v = |\mathbf{v}|$, where $\mathbf{k}\_u = \hat{\mathbf{k}} k\_u$, $d^2\hat{\mathbf{k}} = d\theta\, d\varphi \sin\theta$, and

$$\beta(\mathbf{k}\_{u}, \mathbf{v}) = \exp\left(\mathrm{i}\mathbf{k}\_{u} \cdot \mathbf{v}\right) \tag{46}$$

$$\alpha\_{\tau}(\mathbf{k}\_u, \mathbf{w}) = \sum\_{t=0}^{\tau} (\mathrm{i})^t (2t+1) h\_t^{(1)}(k\_u w) P\_t(\hat{\mathbf{k}} \cdot \hat{\mathbf{w}}) \tag{47}$$

are diagonal shift and translation operators, respectively. It is remarkable that, as a result of the factorization, the shift vector $\mathbf{v}$ and the translation vector $\mathbf{w}$, which satisfy $\mathbf{w} + \mathbf{v} = \mathbf{r} - \mathbf{r}'$, are separated. In addition, with the help of the diagonalization, sampling the shift and translation operators leads to diagonal matrices, as the shift or translation of a plane wave in a given direction does not contribute to plane waves in other directions. In the above, the translation operator is written in terms of the Legendre polynomials $P\_t$ and the spherical Hankel functions of the first kind $h\_t^{(1)}$, while $\tau$ is the truncation number that can be found via excess-bandwidth formulas [42]. We note that the derivatives of Green's function can also be obtained as

$$\nabla g\_u(\mathbf{r}, \mathbf{r}') \approx \frac{\mathrm{i}k\_u}{\left(4\pi\right)^2} \int d^2 \hat{\mathbf{k}}\, \left(\mathrm{i}\mathbf{k}\_u\right) \beta(\mathbf{k}\_u, \mathbf{v}) \alpha\_\tau(\mathbf{k}\_u, \mathbf{w}) \tag{48}$$

$$\nabla \nabla' g\_{u}(\mathbf{r}, \mathbf{r}') \approx \frac{\mathrm{i}k\_{u}}{\left(4\pi\right)^{2}} \int d^{2}\hat{\mathbf{k}}\, (\mathbf{k}\_{u} \mathbf{k}\_{u}) \beta(\mathbf{k}\_{u}, \mathbf{v}) \alpha\_{\tau}(\mathbf{k}\_{u}, \mathbf{w}).\tag{49}$$
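The factorization in Eqs. (45)-(47) can be verified numerically. The sketch below (assumed parameters: unit wavelength, a short shift vector, a longer translation vector, $\tau = 10$, and a Gauss-Legendre/uniform product rule on the unit sphere) compares the diagonalized form against the Green's function itself:

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn, eval_legendre

k = 2 * np.pi                            # wavenumber k_u (wavelength 1)
v = np.array([0.10, 0.05, 0.02])         # shift vector (short)
w = np.array([1.50, 0.30, 0.00])         # translation vector, |w| > |v|
tau = 10                                 # truncation number

def h1(t, x):
    # spherical Hankel function of the first kind
    return spherical_jn(t, x) + 1j * spherical_yn(t, x)

# unit-sphere quadrature: Gauss-Legendre in cos(theta), uniform in phi
nth, nph = 20, 40
ct, wt = np.polynomial.legendre.leggauss(nth)
phi = 2 * np.pi * np.arange(nph) / nph
st = np.sqrt(1 - ct**2)
khat = np.stack([np.outer(st, np.cos(phi)),
                 np.outer(st, np.sin(phi)),
                 np.outer(ct, np.ones(nph))], axis=-1).reshape(-1, 3)
wq = (np.outer(wt, np.ones(nph)) * (2 * np.pi / nph)).ravel()

wnorm = np.linalg.norm(w)
beta = np.exp(1j * k * khat @ v)                       # Eq. (46)
alpha = sum((1j**t) * (2*t + 1) * h1(t, k * wnorm)     # Eq. (47)
            * eval_legendre(t, khat @ (w / wnorm)) for t in range(tau + 1))

g_fmm = 1j * k / (4 * np.pi)**2 * np.sum(wq * beta * alpha)   # Eq. (45)
r = np.linalg.norm(w + v)
g_exact = np.exp(1j * k * r) / (4 * np.pi * r)
print(abs(g_fmm - g_exact) / abs(g_exact))   # small relative error
```

Because $|\mathbf{v}|$ is much smaller than $|\mathbf{w}|$ here, a modest truncation number already reproduces the Green's function to many digits; as $|\mathbf{v}|$ grows toward $|\mathbf{w}|$, the required $\tau$ (and sampling rate) increases as predicted by the excess-bandwidth formulas.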

These expressions can directly be used to factorize the discretized operators by replacing Green's function with the diagonalized forms. In the context of MCTF, we have

$$\overline{\mathbf{T}}\_{u}^{\rm T}[m,n,a,b] = \left(\frac{\mathrm{i}k\_{u}}{4\pi}\right)^{2} \int d^{2}\hat{\mathbf{k}}\, \mathbf{R}\_{ma}^{\rm T}(\mathbf{k}\_{u},\mathbf{r}\_{C}) \cdot \alpha\_{\tau}(\mathbf{k}\_{u},\mathbf{r}\_{C} - \mathbf{r}\_{C'}) \mathbf{S}\_{nb}(\mathbf{k}\_{u},\mathbf{r}\_{C'})\tag{50}$$

$$\overline{\mathbf{K}}\_{{\rm PV},u}^{\rm T}[m,n,a,b] = \left(\frac{\mathrm{i}k\_{u}}{4\pi}\right)^{2} \int d^{2}\hat{\mathbf{k}}\, \mathbf{R}\_{ma}^{\rm K}(\mathbf{k}\_{u},\mathbf{r}\_{C}) \cdot \alpha\_{\tau}(\mathbf{k}\_{u},\mathbf{r}\_{C} - \mathbf{r}\_{C'}) \mathbf{S}\_{nb}(\mathbf{k}\_{u},\mathbf{r}\_{C'}),\tag{51}$$

where $\mathbf{r}\_{C}$ and $\mathbf{r}\_{C'}$ are the testing and basis centers, respectively. Using the RWG functions, the radiation and receiving patterns of the half basis and testing functions are derived as

$$\mathbf{S}\_{nb}(\mathbf{k}\_{u}, \mathbf{r}\_{C'}) = \frac{\gamma\_{nb} l\_{n}}{2} (\overline{\mathbf{I}}\_{3 \times 3} - \hat{\mathbf{k}}\,\hat{\mathbf{k}}) \cdot \frac{1}{A\_{nb}} \int\_{S\_{nb}} d\mathbf{r}'\, \beta(\mathbf{k}\_{u}, \mathbf{r}\_{C'} - \mathbf{r}') (\mathbf{r}' - \mathbf{r}\_{nb}) \tag{52}$$

$$\mathbf{R}\_{ma}^{\rm T}(\mathbf{k}\_{u}, \mathbf{r}\_{C}) = \frac{\gamma\_{ma} l\_{m}}{2} (\overline{\mathbf{I}}\_{3 \times 3} - \hat{\mathbf{k}}\,\hat{\mathbf{k}}) \cdot \frac{1}{A\_{ma}} \int\_{S\_{ma}} d\mathbf{r}\, \beta(\mathbf{k}\_{u}, \mathbf{r} - \mathbf{r}\_{C}) (\mathbf{r} - \mathbf{r}\_{ma}) \tag{53}$$

$$\mathbf{R}\_{ma}^{\rm K}(\mathbf{k}\_{u}, \mathbf{r}\_{C}) = -\frac{\gamma\_{ma} l\_{m}}{2} \hat{\mathbf{k}} \times \frac{1}{A\_{ma}} \int\_{S\_{ma}} d\mathbf{r}\, \beta(\mathbf{k}\_{u}, \mathbf{r} - \mathbf{r}\_{C}) (\mathbf{r} - \mathbf{r}\_{ma}) = -\hat{\mathbf{k}} \times \mathbf{R}\_{ma}^{\rm T}(\mathbf{k}\_{u}, \mathbf{r}\_{C}), \tag{54}$$

where $\overline{\mathbf{I}}\_{3 \times 3} = \hat{\mathbf{k}}\hat{\mathbf{k}} + \hat{\boldsymbol{\theta}}\hat{\boldsymbol{\theta}} + \hat{\boldsymbol{\varphi}}\hat{\boldsymbol{\varphi}}$. Using a single-point integration, the patterns can be calculated as

$$\mathbf{S}\_{nb}(\mathbf{k}\_{u}, \mathbf{r}\_{C'}) = \frac{\gamma\_{nb} l\_{n}}{2} \beta(\mathbf{k}\_{u}, \mathbf{r}\_{C'} - \mathbf{r}\_{nb}^{\rm cr}) (\overline{\mathbf{I}}\_{3 \times 3} - \hat{\mathbf{k}}\,\hat{\mathbf{k}}) \cdot (\mathbf{r}\_{nb}^{\rm cr} - \mathbf{r}\_{nb}) \tag{55}$$

$$\mathbf{R}\_{ma}^{\rm T}(\mathbf{k}\_{u}, \mathbf{r}\_{C}) = \frac{\gamma\_{ma} l\_{m}}{2} \beta(\mathbf{k}\_{u}, \mathbf{r}\_{ma}^{\rm cr} - \mathbf{r}\_{C}) (\overline{\mathbf{I}}\_{3 \times 3} - \hat{\mathbf{k}}\,\hat{\mathbf{k}}) \cdot (\mathbf{r}\_{ma}^{\rm cr} - \mathbf{r}\_{ma}) \tag{56}$$

$$\mathbf{R}\_{ma}^{\rm K}(\mathbf{k}\_{u}, \mathbf{r}\_{C}) = -\hat{\mathbf{k}} \times \mathbf{R}\_{ma}^{\rm T}(\mathbf{k}\_{u}, \mathbf{r}\_{C}),\tag{57}$$

where $\mathbf{r}\_{ma}^{\rm cr}$ and $\mathbf{r}\_{nb}^{\rm cr}$ represent the centers of the associated testing and basis triangles, respectively. Then, the radiation/receiving patterns of the full RWG functions can be obtained as $\mathbf{S}\_n = \mathbf{S}\_{n1} + \mathbf{S}\_{n2}$ and $\mathbf{R}\_m^{\rm K,T} = \mathbf{R}\_{m1}^{\rm K,T} + \mathbf{R}\_{m2}^{\rm K,T}$ by combining the contributions of the half functions. These patterns, as well as the translation operator, are sampled on the unit sphere, where the sampling scheme is a matter of choice depending on the implementation.

In a standard implementation of MLFMA, the object is placed inside a computational cubic box, which is recursively divided into sub-boxes until the smallest box size, determined by the desired accuracy, is reached. Empty boxes that do not contain a part of the object (discretized surface) are omitted directly and are not divided further. This way, it is possible to construct a tree structure (consisting of $L = O(\log N)$ levels) involving the nonempty boxes at different levels with $O(N)$ complexity. Using the child/parent relationships between the boxes, the stages of a matrix-vector multiplication, namely aggregation, translation, and disaggregation, are as follows.

In an aggregation stage, radiated fields of boxes are computed from bottom to top. At the lowest level, we have

$$a[n] \mathbf{S}\_n(\mathbf{k}\_u, \mathbf{r}\_{C'}) \to \mathbf{S}\_{C'}(\mathbf{k}\_u, \mathbf{r}\_{C'}), \qquad (\mathbf{b}\_n \in C'), \tag{58}$$

where the coefficients provided by the iterative solver are used to weight the contributions of the basis functions to the overall radiation patterns of the boxes $C'$ at the lowest level. At higher levels ($l = 2, 3, \ldots, L$), aggregation is performed recursively as

$$\beta(\mathbf{k}\_{u}, \mathbf{r}\_{P\{C'\}} - \mathbf{r}\_{C'}) \mathbf{S}\_{C'}(\mathbf{k}\_{u}, \mathbf{r}\_{C'}) \to \mathbf{S}\_{P\{C'\}}(\mathbf{k}\_{u}, \mathbf{r}\_{P\{C'\}}),\tag{59}$$

where $P\{C'\}$ represents the parent of $C'$. Due to the exponential shifts from different locations within a box, the radiated fields become more oscillatory as the box size gets larger. Hence, the sampling rate must be increased, generally as $O(k\_u^2 D^2)$, where $D$ is the box size.

After completing an aggregation stage, the radiated fields are translated between the boxes at the same level. For $l = 1, 2, \ldots, L$, this can be written as

$$\alpha\_{\tau}(\mathbf{k}\_{u}, \mathbf{r}\_{C} - \mathbf{r}\_{C'}) \mathbf{S}\_{C'}(\mathbf{k}\_{u}, \mathbf{r}\_{C'}) \to \mathbf{G}\_{C}(\mathbf{k}\_{u}, \mathbf{r}\_{C}), \qquad (C' \in F\{C\}), \tag{60}$$

where $F\{C\}$ represents the far-zone boxes for a given box $C$. It is remarkable that $F\{C\}$ contains $O(1)$ elements, since interactions between boxes that are too far apart at level $l$, for example, $C$ and $C'$, are made at a higher level ($l' > l$). Using a one-box-buffer scheme, the condition for translation is that the boxes should not intersect at a surface, line, or corner, while their parents must intersect at a surface, line, or corner.
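This adjacency rule translates directly into the interaction (far-zone) lists. A minimal sketch on an illustrative two-level grid of integer box coordinates (not a full octree implementation) recovers the familiar maximum list size of 189 boxes:

```python
# one-box-buffer interaction lists on an illustrative two-level grid
def adjacent(c1, c2):
    # boxes touch at a surface, line, or corner (or are the same box)
    return all(abs(a - b) <= 1 for a, b in zip(c1, c2))

def far_zone(c, boxes):
    # F{C}: boxes not adjacent to C whose parents are adjacent to C's parent
    pc = tuple(x // 2 for x in c)
    return [b for b in boxes
            if not adjacent(c, b)
            and adjacent(pc, tuple(x // 2 for x in b))]

boxes = [(i, j, kk) for i in range(6) for j in range(6) for kk in range(6)]
F = far_zone((2, 2, 2), boxes)
print(len(F))   # 189 for a fully interior box
```

For a fully interior box, the parent's neighborhood contains $6^3 = 216$ same-level boxes, of which the $3^3 = 27$ adjacent ones are excluded, leaving at most 189 translations per box, independent of $N$.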

In a translation stage, incoming fields are collected at the box centers, but these are only partial data, since the total incoming field at the center of a box also contains a contribution from its parent (if it exists) due to the translations at higher levels. Therefore, a disaggregation stage is performed recursively for $l = L-1, L-2, \ldots, 1$ as

$$\mathbf{G}\_{C}(\mathbf{k}\_{u},\mathbf{r}\_{C}) + \beta(\mathbf{k}\_{u},\mathbf{r}\_{C} - \mathbf{r}\_{P\{C\}}) \mathbf{G}^{+}\_{P\{C\}}(\mathbf{k}\_{u},\mathbf{r}\_{P\{C\}}) \to \mathbf{G}^{+}\_{C}(\mathbf{k}\_{u},\mathbf{r}\_{C}).\tag{61}$$

At the lowest level, the testing functions receive the incoming fields as

$$\left(\frac{\mathrm{i}k\_{u}}{4\pi}\right)^{2} \int d^{2}\hat{\mathbf{k}}\, \mathbf{R}\_{m}^{\rm K,T}(\mathbf{k}\_{u},\mathbf{r}\_{C}) \cdot \mathbf{G}^{+}\_{C}(\mathbf{k}\_{u},\mathbf{r}\_{C}) \to \mathbf{y}^{\rm FF}[m], \qquad (\mathbf{t}\_{m} \in C), \tag{62}$$

where

$$\mathbf{y}^{\text{FF}}[m] = \sum\_{n=1}^{N} \overline{\mathbf{Z}}^{\text{FF}}[m, n] \mathbf{a}[n] \tag{63}$$

for m = 1, 2, …, N. The overall matrix-vector multiplication is completed by also considering the near-field interactions (which cannot be calculated via the aggregation, translation, and disaggregation stages) as

$$\mathbf{y}[m] = \mathbf{y}^{\text{FF}}[m] + \sum\_{n} \overline{\mathbf{Z}}^{\text{NF}}[m, n]\mathbf{a}[n]. \tag{64}$$

Using MLFMA, each matrix-vector multiplication can be performed in O(N log N) time using O(N log N) memory.
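As a sketch of how Eq. (64) combines the two parts: the far-zone product can be treated as a black-box operator (realized in MLFMA by the aggregation, translation, and disaggregation stages), while the near-zone matrix is sparse. The function names and toy data below are purely illustrative, not part of the chapter's implementation.

```python
# Toy version of Eq. (64): y[m] = y_FF[m] plus sparse near-field terms
# Z_NF[m][n] * a[n]. The far-field operator is mocked by a plain function;
# in a real MLFMA it costs O(N log N) per multiplication.

def matvec(a, z_nf, far_field):
    """Combine a black-box far-field operator with sparse near-field terms."""
    y = far_field(a)                  # far-zone contribution, y_FF
    for (m, n), z in z_nf.items():    # sparse near-zone corrections
        y[m] += z * a[n]
    return y

# Toy problem: identity far-field operator plus two near-field couplings.
a = [1.0, 2.0, 3.0]
z_nf = {(0, 1): 0.5, (2, 2): -1.0}
y = matvec(a, z_nf, lambda v: list(v))   # y = [2.0, 2.0, 0.0]
```

Separating the two contributions this way is what allows the near-field part to be stored explicitly while the far-field part is never formed as a matrix.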

For plasmonic objects with high negative permittivity values, electromagnetic interactions decay quickly with respect to the distance between the observation and source points. For a given accuracy, interactions at long distances can be omitted, since the inner and outer interactions are combined in the surface formulations and the outer interactions (related to the free space) dominate the related matrix elements [30]. The threshold distance for this purpose can be found by considering the exponential behavior of the decay for large imaginary values of the wavenumber. This way, the processing time for the matrix-vector multiplication can be reduced significantly. As the negative permittivity increases, far-zone interactions related to the inner medium may completely vanish, leaving only near-zone interactions. In the limit, near-zone interactions further reduce to self-interactions of the basis/testing functions, so that the inner medium is represented by only a Gram matrix.
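For instance, since the inner-medium Green's function behaves as |g<sub>u</sub>(r, r′)| ∝ exp(−Im{k<sub>u</sub>}R)/R, a threshold distance for a target accuracy can be estimated from Im{k<sub>u</sub>} alone. The sketch below is a minimal illustration; the function names and the tolerance are assumptions, not the chapter's actual criterion.

```python
import cmath
import math

def inner_wavenumber(eps_rel, freq_hz):
    """k_u = (omega/c) * sqrt(eps_rel); dominantly imaginary when eps_rel << 0."""
    c0 = 299792458.0
    return (2.0 * math.pi * freq_hz / c0) * cmath.sqrt(eps_rel)

def cutoff_distance(eps_rel, freq_hz, tol=1e-6):
    """Distance beyond which exp(-Im{k} R) < tol, so the inner-medium
    interaction can be omitted for the given accuracy."""
    k = inner_wavenumber(eps_rel, freq_hz)
    return -math.log(tol) / k.imag

# An Ag-like permittivity of -30 at 600 THz gives a sub-micrometer cutoff.
d = cutoff_distance(-30.0, 600e12)
```

As the sketch makes apparent, more negative permittivity yields a larger Im{k<sub>u</sub>} and hence a shorter cutoff, consistent with the limiting behavior described above.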

## 5. Case example: numerical simulations of nanowires

Using surface integral equations and MLFMA, electrical properties of a plasmonic object are simply parameters, which can be used as variables in the implementations. For the electrical properties, that is, permittivity and permeability, alternative choices, including measurement data and those based on certain models for the materials, can be used. As an example, Figure 1 presents the relative permittivity of silver (Ag) with respect to frequency from 200 to 1600 THz. In addition to measurement data [10], Drude (D) and Lorentz-Drude (LD) models are used to predict the real and imaginary parts of the relative permittivity. It can be observed that the real part of the permittivity has large negative values at the lower (infrared) frequencies and it increases smoothly toward unity as the frequency increases to the visible range and beyond. For imaginary values, which represent ohmic losses, we observe varying values between 0.01 and 10, while large discrepancies exist between measurement and D/LD models (especially considering the logarithmic scale of the y-axis). These discrepancies are responsible for different results in the simulations of plasmonic problems presented below.
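For reference, the plain Drude part of these models has a simple closed form, ε(ω) = ε<sub>∞</sub> − ω<sub>p</sub><sup>2</sup>/(ω<sup>2</sup> + iγω). The sketch below evaluates it with rough Ag-like parameters; EPS_INF, WP, and GAMMA are illustrative values, not the fitted coefficients behind Figure 1, and the additional Lorentz terms of the LD model are omitted.

```python
import math

EPS_INF = 5.0    # assumed background permittivity (illustrative)
WP = 1.37e16     # assumed plasma frequency in rad/s (illustrative)
GAMMA = 2.7e13   # assumed collision rate in 1/s (illustrative)

def drude_eps(freq_hz):
    """Relative permittivity from the Drude model (exp(-i*omega*t) convention)."""
    w = 2.0 * math.pi * freq_hz
    return EPS_INF - WP**2 / (w**2 + 1j * GAMMA * w)

eps_ir = drude_eps(300e12)    # infrared: large negative real part
eps_uv = drude_eps(1600e12)   # ultraviolet: real part approaches EPS_INF
```

Even this minimal form reproduces the qualitative trend of Figure 1: a strongly negative real part in the infrared that rises smoothly toward positive values at the upper end of the band.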

Figure 1. Real and imaginary parts of the relative permittivity of Ag with respect to frequency. In addition to measurement data [10], values based on Drude (D) and Lorentz-Drude (LD) models are depicted.

As a case study, we consider transmission through a pair of Ag nanowires described in Figure 2. The length of the nanowires is 5 μm and each nanowire has a 0.1 × 0.1 μm (square) cross section. The distance between the nanowires is also 0.1 μm. The transmission line is excited by a pair of Hertzian dipoles oriented in opposite directions and located at 0.2 μm distance from the nanowires. Figure 3 presents the electromagnetic response of the transmission line at infrared frequencies from 250 to 430 THz. The power density in dB scale (dBW/m<sup>2</sup>) in the vicinity of the nanowires is depicted (normalized to 0 dB and using a 40-dB dynamic range), when the LD model and measurement data are used for the permittivity values. It can be observed that the electromagnetic power is effectively transmitted from the source region (right) to the transmission region (left). Coupling to the free space at the end of the line leads to two beams with decaying amplitudes due to propagation. Comparing the results, we observe very good agreement between the power density values when the LD and measurement permittivity values are used. Considering Figure 1, the negative real permittivity dominates the response of the nanowires at these frequencies.

Figure 2. A transmission line involving two Ag nanowires of length 5μm. The nanowires are excited by a pair of dipoles located at 0.2μm distance.


Figure 3. Power density in the vicinity of the nanowire system (Figure 2) from 250 to 430 THz. Numerical results obtained by using permittivity values derived from the LD model (left column) and those based on the measurement data (right column) are compared.

Figure 4 presents similar results when the frequency is in the visible range. In this case, there are significant discrepancies between the power density values when the LD model and the measurement data for the permittivity are used. This is mainly due to the higher values for the imaginary permittivity predicted by the LD model. As the frequency increases, using the LD model, the transmission ability of the nanowire system deteriorates significantly. Specifically, the power density on the surfaces of the nanowires decreases and the transmitted power toward the left-hand side of the nanowires diminishes, leading to progressively weaker beams. It is remarkable that, using the measurement data, which may be a more accurate description of Ag, the transmission ability of the nanowire system is still at high levels, indicating that the transmission line operates as desired. These results may explain some of the contradictory results (especially simulations vs. measurements) for nanowires and similar plasmonic systems investigated in the visible spectrum.

As depicted in Figure 5, the nanowires cannot maintain a good transmission ability as the frequency increases. Using the measurement data, the transmission of the nanowire system deteriorates significantly at the higher frequencies of the visible spectrum (e.g., at 770 THz). At 750 THz, the power density drops to less than −40 dB after a few μm along the nanowires. We note that the effective length of the nanowires increases with the frequency. For example, at 250 THz, the length of the nanowires is approximately 4.17λ<sub>o</sub>, while it is around 12.5λ<sub>o</sub> at 750 THz. In addition, the effective distance between the sources and the nanowires increases. However, investigating the power values on the nanowire surfaces close to the source, it is obvious that the poor power transmission cannot be explained only by the increasing effective lengths at the higher frequencies. Since the power cannot be coupled to the free space, the power density along the nanowires possesses an oscillatory behavior. At the end of the visible spectrum, the discrepancy between the results obtained by using the LD and measurement values decreases, both predicting reduced interaction between the sources and nanowires.
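The electrical lengths quoted above follow directly from the free-space wavelength λ<sub>o</sub> = c/f; a quick arithmetic check:

```python
C0 = 299792458.0   # speed of light (m/s)
LENGTH = 5e-6      # nanowire length (m)

def electrical_length(freq_hz):
    """Nanowire length measured in free-space wavelengths."""
    return LENGTH / (C0 / freq_hz)

n250 = electrical_length(250e12)   # about 4.17 wavelengths
n750 = electrical_length(750e12)   # about 12.5 wavelengths
```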

Figure 6 presents the results at even higher (lower-ultraviolet) frequencies. In this range, the nanowires are not expected to demonstrate transmission abilities, as predicted by both the LD model and the measurement data for the permittivity values. At the lower frequencies of the range, the nanowires are more visible close to the source region, while, as the frequency increases, their effects diminish and the power distribution becomes close to that of two dipoles in free space. Figures 7 and 8 summarize the input/output of the transmission line, for the LD model and measurement data, respectively, from 450 to 750 THz. For the input, the power density is sampled at 30 nm distance from the nanowires on a horizontal line from −1 to 1 μm. The double-peak pattern due to two dipoles in opposite directions is clearly visible, with some variations due to reflections from the nanowires. For the output, samples are selected again on a horizontal line from −1 to 1 μm on the transmission side at 40 μm distance from the nanowires. Using the LD model, the output pattern deteriorates significantly as the frequency increases. Using measured permittivity values, however, the double-peak pattern is effectively maintained for most frequencies until 750 THz, at which the transmission fails. Figure 9 presents the average input/output graphics, confirming the consistency between the LD model and measurement data at lower and higher frequencies. On the other hand, at some frequencies in the visible range, there is a more than 30 dB difference between the predicted output levels.


Figure 4. Power density in the vicinity of the nanowire system (Figure 2) from 450 to 630 THz. Numerical results obtained by using permittivity values derived from the LD model (left column) and those based on the measurement data (right column) are compared.

Figure 5. Power density in the vicinity of the nanowire system (Figure 2) from 650 to 830 THz. Numerical results obtained by using permittivity values derived from the LD model (left column) and those based on the measurement data (right column) are compared.

Figure 6. Power density in the vicinity of the nanowire system (Figure 2) from 850 to 1000 THz. Numerical results obtained by using permittivity values derived from the LD model (left column) and those based on the measurement data (right column) are compared.


Figure 7. Power density on the input and output sides of the nanowire system (Figure 2) from 450 to 750 THz. For the input and output, the samples are selected on 2μm lines at 30 and 40 nm distances from the nanowires. LD model is used for the permittivity values.

Figure 8. Power density on the input and output sides of the nanowire system (Figure 2) from 450 to 750 THz. For the input and output, the samples are selected on 2μm lines at 30 and 40 nm distances from the nanowires. Measurement data are used for the permittivity values.

Figure 9. Average input and output power density values for the nanowire system (Figure 2) from 150 to 1000 THz. Despite the consistency of the inputs, significant discrepancies arise in the output values when the LD model and measurement data for the permittivity values are used from 450 to 750 THz.

## 6. Concluding remarks

Surface integral equations combined with iterative algorithms employing MLFMA provide accurate solutions of nano-plasmonic problems without resorting to fundamental assumptions, such as periodicity and infinity. Three-dimensional and finite structures, which are typically of tens of wavelengths but at the same time contain small details, can be investigated both precisely and efficiently. In addition to the visible range, the developed solvers are very beneficial at higher frequencies, where the discrepancy between the experimental results and theoretical predictions, such as those based on the Drude and Lorentz-Drude models, increases. Surface formulations enable trivial integration of electrical parameters, allowing for fast tuning of the numerical results with increasingly precise measurements. On the other hand, such a reliable simulation environment can be constructed only with appropriate combinations of surface integral equations, discretizations, numerical integrations, fast algorithms, and iterative techniques, as shown in this chapter. We present how to construct such an implementation with all details from formulations to iterative solutions using MLFMA, along with a set of results involving a nanowire transmission line in a wide range of frequencies to demonstrate the capabilities of the developed solvers.

## Acknowledgements


This work was supported by the Scientific and Technical Research Council of Turkey (TUBITAK) under the Research Grant 113E129 and by the Turkish Academy of Sciences (TUBA) in the framework of the Young Scientist Award Program.

## Author contributions

Ö.E. developed the theory and formulations; B.K. and Ö.E. implemented the solvers; A.Ç. conducted the numerical experiments.

## Author details

Abdulkerim Çekinmez, Barışcan Karaosmanoğlu and Özgür Ergül\*

\*Address all correspondence to: ozgur.ergul@eee.metu.edu.tr

Department of Electrical and Electronics Engineering, Middle East Technical University, Ankara, Turkey

## References

[1] Yao J, Liu Z, Liu Y, Wang Y, Sun C, Bartal G, Stacy AM, Zhang X. Optical negative refraction in bulk metamaterials of nanowires. Science. 2008;321:930.

[2] Liu Y, Bartal G, Zhang X. All-angle negative refraction and imaging in a bulk medium made of metallic nanowires in the visible region. Opt. Exp. 2008;16:15439–15448.

[3] Schuck PJ, Fromm DP, Sundaramurthy A, Kino GS, Moerner WE. Improving the mismatch between light and nanoscale objects with gold bowtie nanoantennas. Phys. Rev. Lett. 2005;94:017402–1-4.

[4] Alda J, Rico-Garcia JM, Lopez-Alonso JM, Boreman G. Optical antennas for nano-photonic applications. Nanotechnology. 2005;16:230–234.

[5] Muhlschlegel P, Eisler HJ, Martin OJF, Hecht B, Pohl DW. Resonant optical antennas. Science. 2005;308:1607–1609.

[6] Kinkhabwala A, Yu Z, Fan S, Avlasevich Y, Mullen K, Moerner WE. Large single-molecule fluorescence enhancements produced by a bowtie nanoantenna. Nat. Photonics. 2009;3:654–657.

[7] Kosako T, Kadoya Y, Hofmann HF. Directional control of light by a nano-optical Yagi-Uda antenna. Nat. Photonics. 2010;4:312–315.

[8] Solis DM, Taboada JM, Obelleiro F, Landesa L. Optimization of an optical wireless nanolink using directive nanoantennas. Opt. Exp. 2013;21:2369–2377.

[9] Karaosmanoğlu B, Gür UM, Ergül Ö. Investigation of nanoantennas using surface integral equations and the multilevel fast multipole algorithm. In Proceedings of the Progress in Electromagnetics Research Symposium (PIERS); 2015. p. 2026–2030.

[10] Johnson PB, Christy RW. Optical constants of the noble metals. Phys. Rev. B. 1972;6:4370–4379.

[11] Poggio AJ, Miller EK. Integral equation solutions of three-dimensional scattering problems. In Mittra R, editor. Computer Techniques for Electromagnetics. Oxford: Pergamon Press; 1973. Chap. 4.

[12] Ylä-Oijala P, Taskinen M, Järvenpää S. Surface integral equation formulations for solving electromagnetic scattering problems with iterative methods. Radio Sci. 2005;40:6002–1-19.

[13] Hohenester U, Krenn J. Surface plasmon resonances of single and coupled metallic nanoparticles: a boundary integral method approach. Phys. Rev. B. 2005;72:195429–1-9.

[14] Sondergaard T. Modeling of plasmonic nanostructures: Green's function integral equation methods. Phys. Status Solidi B. 2007;244:3448–3462.

[15] Kern AM, Martin OFJ. Surface integral formulation for 3D simulations of plasmonic and high permittivity nanostructures. J. Opt. Soc. Am. A. 2009;26:732–740.

[16] Gallinet B, Martin OFJ. Scattering on plasmonic nanostructures arrays modeled with a surface integral formulation. Photon. Nanostruct. Fund. Appl. 2010;8:278–284.

[17] Rodriguez-Oliveros R, Sanchez-Gil JA. Localized surface-plasmon resonances on single and coupled nanoparticles through surface integral equations for flexible surfaces. Opt. Exp. 2011;16:12208–12219.

[18] Araujo MG, Taboada JM, Solis DM, Rivero J, Landesa L, Obelleiro F. Comparison of surface integral equation formulations for electromagnetic analysis of plasmonic nanoscatterers. Opt. Exp. 2012;20:9161–9171.


[32] Karaosmanoğlu B, Yılmaz A, Ergül Ö. On the accuracy and efficiency of surface formulations in fast analysis of plasmonic structures via MLFMA. In Proceedings of the Progress in Electromagnetics Research Symposium (PIERS); 2016. p. 2629–2633.

[33] Rao SM, Wilton DR, Glisson AW. Electromagnetic scattering by surfaces of arbitrary shape. IEEE Trans. Antennas Propag. 1982;30:409–418.

[34] Medgyesi-Mitschang LN, Putnam JM, Gedera MB. Generalized method of moments for three-dimensional penetrable scatterers. J. Opt. Soc. Am. A. 1994;11:1383–1398.

[35] Ylä-Oijala P, Taskinen M. Application of combined field integral equation for electromagnetic scattering by dielectric and composite objects. IEEE Trans. Antennas Propag. 2005;53:1168–1173.

[36] Ergül Ö, Gürel L. Discretization error due to the identity operator in surface integral equations. Comput. Phys. Comm. 2009;180:1746–1752.

[37] Cools K, Bogaert I, Peeters J, Vande Ginste D, Rogier H, De Zutter D. Accurate and efficient algorithms for boundary element methods in electromagnetic scattering: a tribute to the work of F. Olyslager. Radio Sci. 2011;46:RS0E21–1-10.

[38] Ergül Ö, Karaosmanoğlu B. Approximate stable diagonalization of the Green's function for low frequencies. IEEE Antennas Wireless Propag. Lett. 2014;13:1054–1056.

[39] Ergül Ö, Karaosmanoğlu B. Broadband multilevel fast multipole algorithm based on an approximate diagonalization of the Green's function. IEEE Trans. Antennas Propag. 2015;63:3035–3041.

[40] Jiang LJ, Chew WC. A mixed-form fast multipole algorithm. IEEE Trans. Antennas Propag. 2005;53:4145–4156.

[41] Bogaert I, Peeters J, Olyslager F. A nondirective plane wave MLFMA stable at low frequencies. IEEE Trans. Antennas Propag. 2008;56:3752–3767.

[42] Ohnuki S, Chew WC. Truncation error analysis of multipole expansions. SIAM J. Sci. Comput. 2003;25:1293–1306.

**Provisional chapter**

## **Numerical Random Periodic Shadowing Orbits of a Class of Stochastic Differential Equations**

Qingyi Zhan and Yuhong Li

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/67010

#### **Abstract**

This paper is devoted to the existence of a true random periodic solution near a numerical approximate one for a class of stochastic differential equations. A general finite-time random periodic shadowing theorem is proposed for random dynamical systems generated by some stochastic differential equations under appropriate conditions, and an estimate of the shadowing distance via computable quantities is given. The application is demonstrated in numerical simulations of random periodic orbits of the stochastic Lorenz system for certain given parameters.

**Keywords:** random chaotic system, stochastic differential equations, random periodic shadowing, stochastic Lorenz system

## **1. Introduction**

The investigation of the dynamical properties of random periodic orbits of some specific stochastic differential equations (SDEs) is a difficult problem [1]. In general, numerical computation is still one of the most feasible methods of studying random periodic orbits of SDEs describing many natural phenomena in meteorology, biology, and so on [2–4]. Since chaotic systems are sensitive to initial values and are constantly affected by random noise, estimating a particular solution of a random chaotic system by numerical solutions over a given length of time is even more difficult. Therefore, it is always difficult to infer the existence of a random periodic orbit rigorously from numerical computations. The shadowing property plays an important role in the theory and applications of random dynamical systems (RDS), especially in the numerical simulations of random chaotic systems generated by some SDEs. Since numerical experiments can lead to many nice discoveries, a new numerical method is presented to establish the existence of a true random periodic orbit of SDEs which

© 2017 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

lies near a numerical random periodic orbit. Furthermore, the reliability and feasibility of the numerical computations are considered as well.

There are two main motivations for this work. On the one hand, it follows from the classical shadowing lemma that many studies about the periodic dynamics of deterministic chaotic systems have been performed in Ref. [3] and references therein, and many nice works on the numerical analysis of RDS have been completed in Refs. [5] and [6]. On the other hand, our results in this article have been inspired by our earlier work on shadowing orbits of SDEs in Refs. [7] and [8], which was established in a rather general setting. To the best of our knowledge, shadowing is still an interesting method for studying the random periodic dynamic behavior of SDEs, and no investigation of a random periodic shadowing theorem for SDEs exists in the literature.

In this work, two computational issues should be considered first. One is the definition of an (*ω*, *δ*)-pseudo random periodic orbit, to which a true random periodic orbit is sufficiently close. The other is under which conditions the random chaotic systems generated by some SDEs possess the so-called pseudo hyperbolicity for certain given parameters. With some additional numerical computations, we can show the existence of a true random periodic orbit near the (*ω*, *δ*)-pseudo random periodic orbit under appropriate conditions. Therefore, the main difference between the existing work and the current one is that the random periodic case is considered, and there is no hyperbolicity assumption on the original systems.

Utilizing the existence of the modified Newton equation's solution, a random periodic shadowing theorem for a class of SDEs is proposed. The result shows that, under some appropriate conditions, there exists a true periodic orbit near the numerical approximate one, and an upper bound for the shadowing distance is given.

This paper is organized as follows. In Section 2, background materials on random shadowing for random dynamical systems generated by SDEs, including the definitions of an (*ω*, *δ*)-pseudo random periodic orbit and pseudo hyperbolicity in mean square, are given. The main result on random periodic shadowing is then stated in Section 3. Illustrative numerical experiments for the main theorem are included in Section 4. The numerical implementations are presented in detail in the following section, and the proof of the main result is given in Section 6. The final section summarizes the main results of the current work.

## **2. Preliminaries**

Let (Ω, F, P) be a canonical Wiener space, {F<sub>*t*</sub>}<sub>*t*∈R</sub> its natural normal filtration, and *W*(*t*) (*t* ∈ R) a standard one-dimensional Brownian motion defined on the space (Ω, F, P). We assume that Ω := {*ω* ∈ *C*(R, R) : *ω*(0) = 0}, which means that the elements of Ω can be identified with paths of a Wiener process *ω*(*t*) = *W*<sub>*t*</sub>(*ω*). We consider a class of Stratonovich SDEs in the form of

$$d\mathbf{x}\_t = f(\mathbf{x}\_t)dt + \mu \mathbf{x}\_t \circ dW\_t, \qquad \mathbf{x}(0) = \mathbf{x}\_0(\omega) \in \mathbb{R}^d,\tag{1}$$

where the random variable *x*<sub>0</sub>(*ω*) is independent of F<sub>0</sub> and satisfies the inequality E|*x*<sub>0</sub>(*ω*)|<sup>2</sup> < ∞, and *μ* is a nonzero real number.
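Eq. (1) can be integrated numerically with the stochastic Heun (predictor-corrector) scheme, which is consistent with the Stratonovich interpretation of the noise term. The sketch below is a generic scalar illustration; the drift f and all parameter values are placeholder choices rather than the stochastic Lorenz system studied later.

```python
import math
import random

def heun_step(f, x, mu, h, dW):
    """One Heun step for dx = f(x) dt + mu * x o dW (Stratonovich)."""
    xp = x + f(x) * h + mu * x * dW            # Euler predictor
    return (x + 0.5 * (f(x) + f(xp)) * h       # averaged drift
              + 0.5 * mu * (x + xp) * dW)      # averaged diffusion

def simulate(f, x0, mu, h, n_steps, seed=0):
    """Integrate n_steps with Brownian increments dW ~ N(0, h)."""
    rng = random.Random(seed)
    x = x0
    for _ in range(n_steps):
        x = heun_step(f, x, mu, h, rng.gauss(0.0, math.sqrt(h)))
    return x

# Linear drift f(x) = -x with weak multiplicative noise over t in [0, 1].
x_end = simulate(lambda x: -x, 1.0, 0.1, 1e-3, 1000)
```

With mu = 0 the scheme reduces to the deterministic Heun method, so for f(x) = −x the result approaches exp(−1), which gives a simple correctness check.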

#### **2.1. Basic assumptions and notations**

We define the metric dynamical system (Ω, F, P, *θ*<sup>*t*</sup>) by the mapping *θ* : R × Ω → Ω, such that for *ω* ∈ Ω,

$$
\theta^t \omega(s) = \omega(t+s) - \omega(t),
$$

where *s*, *t* ∈ R.

lies near a numerical random periodic orbit. Furthermore, the reliability and feasibility of

There are two main motivations for this work. On the one hand, it follows from the classical shadowing lemma that many studies about the periodic dynamics of deterministic chaotic systems have been performed in Ref. [3] and references therein. Many nice works on the numerical analysis of RDS had been completed in Refs. [5] and [6]. On the other hand, our results in this article have been inspired by our earlier work in Refs. [7] and [8], on shadowing orbits of SDEs where we established in a rather general setting. To the best of our knowledge, shadowing is still an interesting method for studying their random periodic dynamic behavior of SDE, and there is no investigations of the random periodic shadowing theorem of SDE exist in the literatures.

In this work, two computational issues should be considered first. One is the definition of (*ω*, *δ*)-pseudo random periodic orbit, in which a true random periodic orbit is sufficiently closed. Another issue is that in which conditions the random chaotic systems generated by some SDE possess the so-called pseudo hyperbolicity for certain given parameters. With some additional numerical computations, we can show the existence of a true random periodic orbit near the (*ω*, *δ*)-pseudo random periodic orbit under appropriate conditions. Therefore, the main difference between the existing work and the current one is that the random periodic

Utilizing the existence of the modified Newton equation's solution, a random periodic shadowing theorem for some kind of SDEs is proposed. The result shows that under some appropriate conditions, there exists a true periodic orbit near the numerical approximative one

This paper is organized as follows. In Section 2, background materials on random shadowing for random dynamical system generated by SDEs, including the definitions of (*ω*, *δ*)-pseudo random periodic orbit and the pseudo hyperbolic in mean square, are given. The main result on random periodic shadowing is then stated in Section 3. Illustrative numerical experiments for the main theorem are included in Section 4. The numerical implementations in details are presented in the following section. And, the proof for the main result is presented in Section 6.

Let ðΩ, F, PÞ be a canonical Wiener space, {F*t*}*<sup>t</sup>*∈<sup>R</sup> be its natural normal filtration, and *W*ð*t*Þð*t*∈ RÞ is a standard one-dimensional Brownian motion defined on the space ðΩ, F, PÞ. And, we assume that Ω :¼ {*ω* ∈*C*ðR, RÞ : *ω*ð0Þ ¼ 0}, which means that the elements of Ω can be identified with paths of a Wiener process *ω*ð*t*Þ ¼ *Wt*ð*ω*Þ. We consider a class of Stratonovich SDEs in the form of

*dxt* <sup>¼</sup> *<sup>f</sup>*ð*xt*Þ*dt* <sup>þ</sup> *<sup>μ</sup>xt* <sup>∘</sup> *dWt*, *<sup>x</sup>*ð0Þ ¼ *<sup>x</sup>*0ð*ω*Þ∈R*<sup>d</sup>*

where the random variable *<sup>x</sup>*0ð*ω*<sup>Þ</sup> is independent of <sup>F</sup><sup>0</sup> and satisfies the inequality <sup>E</sup>j*x*0ð*ω*Þj<sup>2</sup>

*;* (1)

case is concerned, and there is no hyperbolicity assumption on the original systems.

The final section is devoted to summarize the main results in the current work.

and the upper bound for the shadowing distance is given.

**2. Preliminaries**

*< ∞*, and *μ* is a nonzero real number.

numerical computations is considered as well.

216 Dynamical Systems - Analytical and Computational Techniques

Let O_t(ω) be a one-dimensional stable Ornstein-Uhlenbeck process which satisfies the following linear SDE

$$dO\_t = -O\_t dt + dW\_t.$$

Now let

$$z(t,\omega) := \exp\left(-\mu O_t(\omega)\right) \mathbf{x}_t(\omega) \in \mathbb{R}^d;$$

then SDE (1) is transformed into a random differential equation (RDE) of the form

$$\frac{dz}{dt} = \exp\left(-\mu O_t(\omega)\right) f(\exp\left(\mu O_t(\omega)\right)z) + \mu O_t(\omega) z = f_1(\theta^t \omega, z). \tag{2}$$

It follows from the Doss-Sussmann theorem in Ref. [9] that the solution of RDE (2) yields the solution of SDE (1) via x_t(ω) = exp(μO_t(ω)) z(t, ω).
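As a concrete illustration of this conversion, the following sketch simulates the Ornstein-Uhlenbeck process with an Euler-Maruyama step and assembles the conjugated right-hand side of RDE (2). The drift `f`, the noise intensity `mu`, and the helper names are illustrative assumptions, not the paper's implementation.

```python
import math
import random

random.seed(0)

def simulate_ou(n_steps, dt, o0=0.0):
    """Euler-Maruyama discretization of the OU equation dO = -O dt + dW."""
    path = [o0]
    for _ in range(n_steps):
        dw = random.gauss(0.0, math.sqrt(dt))
        path.append(path[-1] - path[-1] * dt + dw)
    return path

def make_f1(f, mu, o_t):
    """Right-hand side of RDE (2) at a fixed time:
    f1(z) = exp(-mu*O_t) * f(exp(mu*O_t) * z) + mu*O_t*z, componentwise."""
    def f1(z):
        e = math.exp(mu * o_t)
        fx = f([e * zi for zi in z])
        return [math.exp(-mu * o_t) * fi + mu * o_t * zi
                for fi, zi in zip(fx, z)]
    return f1

# illustrative example: linear drift f(x) = -x in R^2
ou_path = simulate_ou(1000, 0.001)
f1 = make_f1(lambda x: [-xi for xi in x], mu=0.5, o_t=ou_path[-1])
print(f1([1.0, 2.0]))
```

A quick sanity check: with mu = 0 the conjugation is the identity, so f1 must reduce to f itself.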

In this paper, we make the following assumptions:

• We suppose that f_1 : Ω × ℝ^d → ℝ^d is a measurable function which is locally bounded, locally Lipschitz continuous with respect to the first variable, and a C¹ vector field on ℝ^d.

By Theorem 2.2.2 in Ref. [2], RDE (2) generates a unique RDS φ on the metric dynamical system (Ω, F, ℙ, θ^t) as follows:

$$\varphi(s, t, \omega)z = z + \int_s^t f_1(\theta^{\tau}\omega, \varphi(s, \tau, \omega)z)\,d\tau \in \mathbb{R}^d, \tag{3}$$

and which is of class C¹ with respect to z (see Ref. [8]).

There also exists a diffeomorphism φ : ℝ × ℝ × Ω × ℝ^d → ℝ^d, φ(s, t, ω, z) := φ(s, t, ω)z ∈ ℝ^d. We also make use of the following notation, similar to that of Ref. [8].

• The norm of a random variable x = (x₁, x₂, …, x_d) ∈ L²(Ω, ℙ) is defined as

$$\|\mathbf{x}\|_{2} = \left[\int_{\Omega}\left(|x_{1}(\omega)|^{2} + |x_{2}(\omega)|^{2} + \dots + |x_{d}(\omega)|^{2}\right)d\mathbb{P}(\omega)\right]^{\frac{1}{2}} < \infty,$$

where L²(Ω, ℙ) is the space of all square-integrable random variables x : Ω → ℝ^d.

• The norm of a stochastic process x(t, ω) with x_t(ω) ∈ L²(Ω, ℙ) and t ∈ ℝ is defined as

$$\|x(t,\omega)\|_{2} = \sup_{t \in \mathbb{R}} \|x_{t}(\omega)\|_{2} < \infty.$$

• For a given random matrix A and the operator norm |·|, the norm of A is defined as follows:

$$\|A\|\_{L^{2}(\Omega,\mathbb{P})} = [\mathbb{E}(\left|A\right|^{2})]^{\frac{1}{2}}.$$

• Normally, the norms ∥·∥₂ and ∥·∥_{L²(Ω,ℙ)} are both denoted by ∥·∥ for simplicity, unless otherwise stated.
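In practice these mean-square norms can be estimated by Monte Carlo averaging over sampled realizations of ω. The sketch below is a minimal illustration of this idea; the sampler is chosen purely for demonstration.

```python
import math
import random

random.seed(1)

def ms_norm(sampler, n_samples=20000):
    """Monte-Carlo estimate of ||x||_2 = (E[|x(omega)|^2])^(1/2),
    where sampler() draws one realization of the R^d-valued random variable."""
    acc = 0.0
    for _ in range(n_samples):
        x = sampler()
        acc += sum(xi * xi for xi in x)
    return math.sqrt(acc / n_samples)

# example: x = (Z1, Z2) with independent standard normals, so ||x||_2 = sqrt(2)
est = ms_norm(lambda: [random.gauss(0, 1), random.gauss(0, 1)])
print(est)
```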

#### **2.2. Some extended definitions**

**Definition 2.1.** For a given positive number *δ*, suppose there are a sequence of positive times {t_k} (k = 0, 1, …, N + 1) with 0 ≤ t_0 ≤ t_1 ≤ ⋯ ≤ τ ≤ t_{N+1}, and a sequence of random variables

$$\{(y_k(\theta^{t_k}\omega), \mathcal{F}_{t_k})\}_{k=0}^{N},$$

where each y_k(θ^{t_k}ω) is F_{t_k}-adapted, such that

$$f_1(\theta^{t_k}\omega, y_k(\theta^{t_k}\omega)) \neq 0, \qquad \mathbb{P}\text{-almost surely for } k = 0, 1, 2, \dots, N,$$

and the following inequalities P-almost surely hold

$$\|y_{k+1}(\theta^{t_{k+1}}\omega) - \varphi(t_k, t_{k+1}, \theta^{t_k}\omega)\,y_k(\theta^{t_k}\omega)\| \le \delta, \quad k = 0, 1, \ldots, N-1,$$

and

$$\|y_N(\theta^{t_N}\omega) - y_0(\theta^{t_0}\omega)\| \le \delta, \tag{4}$$

then the random variables {(y_k(θ^{t_k}ω), F_{t_k})} (k = 0, …, N) are said to be an (*ω*, *δ*)-pseudo random periodic orbit of RDS (3) generated by SDE (1) in the mean-square sense.
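In a computation, the conditions of Definition 2.1 can only be checked approximately, replacing the expectation by an ensemble average and the exact flow φ by a numerical integrator. The sketch below assumes a hypothetical helper `flow(k, y)` approximating φ(t_k, t_{k+1}, θ^{t_k}ω)y; it is a schematic check, not the paper's procedure.

```python
import math

def is_pseudo_periodic(realizations, flow, delta):
    """Approximate check of Definition 2.1 in mean square.
    `realizations` is a list of orbits, one per sampled omega; each orbit is a
    list y[0..N] of R^d points. `flow(k, y)` is a hypothetical one-step
    integrator approximating phi(t_k, t_{k+1}, theta^{t_k} omega) y."""
    n_omega = len(realizations)
    N = len(realizations[0]) - 1

    def ms(vecs):  # mean-square norm over the ensemble
        return math.sqrt(sum(sum(v * v for v in vec) for vec in vecs) / n_omega)

    for k in range(N):  # one-step defects, inequality before (4)
        defects = [[y[k + 1][i] - flow(k, y[k])[i] for i in range(len(y[k]))]
                   for y in realizations]
        if ms(defects) > delta:
            return False
    # periodic closure condition (4)
    closures = [[y[N][i] - y[0][i] for i in range(len(y[0]))] for y in realizations]
    return ms(closures) <= delta
```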

**Definition 2.2.** For a given positive number *ε* and an (*ω*, *δ*)-pseudo random periodic orbit {(y_k(θ^{t_k}ω), F_{t_k})} (k = 0, …, N) of RDS (3) generated by SDE (1) with associated times {t_k} (k = 0, …, N + 1), if there is a sequence of times {h_k} (k = 0, …, N + 1) with h_0 ≤ h_1 ≤ ⋯ ≤ τ ≤ h_{N+1}, such that the following inequalities hold

$$\|y_k(\theta^{t_k}\omega) - x_k(\theta^{h_k}\omega)\| \le \varepsilon, \quad 0 \le t_k - h_k \le \varepsilon, \quad k = 0, 1, \dots, N,$$

and the random variables {(x_k(θ^{h_k}ω), F_{h_k})} (k = 0, …, N) lie on a true orbit of RDS (3) generated by SDE (1), that is,

$$x_{k+1}(\theta^{h_{k+1}}\omega) = \varphi(h_k, h_{k+1}, \theta^{h_k}\omega)\, x_k(\theta^{h_k}\omega), \quad k = 0, 1, 2, \dots, N-1,$$

and

$$x_0(\theta^{h_0}\omega) = \varphi(h_N, h_{N+1}, \theta^{h_N}\omega)\, x_N(\theta^{h_N}\omega), \tag{5}$$

then the (*ω*, *δ*)-pseudo random periodic orbit {(y_k(θ^{t_k}ω), F_{t_k})} (k = 0, …, N) is said to be (*ω*, *δ*)-periodic shadowed by a true orbit of RDS (3) generated by SDE (1) in the mean-square sense.

**Remark 2.3.** As the *σ*-algebra F_t (t ≥ 0) is nondecreasing, in order to guarantee that the random variables x_k(θ^{h_k}ω) (k = 0, 1, 2, …, N) are F_{t_k}-measurable, we need the shadowing condition 0 ≤ t_k − h_k ≤ *ε* instead of |t_k − h_k| ≤ *ε*. We refer to Ref. [2] for the deterministic counterpart. In the sequel, we choose the sequence of times {h_k} (k = 0, …, N + 1) to coincide with {t_k} (k = 0, …, N + 1).

**Definition 2.4.** The RDS φ : ℝ × ℝ × Ω × ℝ^d → ℝ^d is said to be pseudo hyperbolic in mean square if there exist tempered random variables κ₁(ω), κ₂(ω) ≥ 1 and ν₁(ω), ν₂(ω) ≥ 0 such that the following inequalities hold with ℝ^d = E^s(ω) ⊕ E^u(ω):

$$\begin{split} & \mathbb{E}\|\varphi(s, t_1, \omega)x\|^2 \le \kappa_1(\omega)\, e^{-\nu_1(\omega)(t_1 - t_2)}\, \mathbb{E}\|\varphi(s, t_2, \omega)x\|^2, \quad \forall\, t_1 \ge t_2 \ge s,\ x \in E^s(\omega), \\ & \mathbb{E}\|\varphi(s, t_2, \omega)x\|^2 \le \kappa_2(\omega)\, e^{-\nu_2(\omega)(t_1 - t_2)}\, \mathbb{E}\|\varphi(s, t_1, \omega)x\|^2, \quad \forall\, t_1 \ge t_2 \ge s,\ x \in E^u(\omega). \end{split}$$

This means that there is a splitting into exponentially stable (E^s(ω)) and unstable (E^u(ω)) components. The multiplicative ergodic theorem (MET) of Oseledets [10] provides the stochastic analogue of the deterministic spectral theory of matrices and a method to check pseudo hyperbolicity.
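One practical way to check for such a splitting numerically is the discrete QR method suggested by the MET: propagate an orthonormal frame through the linearized dynamics and average the logarithms of the stretching factors; positive and negative exponents indicate the unstable and stable components. The sketch below illustrates this on a fixed linear map; the function names and the toy Jacobian are assumptions for demonstration.

```python
import math

def gram_schmidt(cols):
    """QR via classical Gram-Schmidt; returns (Q columns, diagonal of R)."""
    q, rdiag = [], []
    for v in cols:
        w = v[:]
        for u in q:
            c = sum(wi * ui for wi, ui in zip(w, u))
            w = [wi - c * ui for wi, ui in zip(w, u)]
        n = math.sqrt(sum(wi * wi for wi in w))
        q.append([wi / n for wi in w])
        rdiag.append(n)
    return q, rdiag

def lyapunov_exponents(jacobian, x0, step, n_iter):
    """Discrete QR method: average log of the R-diagonal along an orbit."""
    d = len(x0)
    x = x0[:]
    q = [[1.0 if i == j else 0.0 for i in range(d)] for j in range(d)]  # columns
    sums = [0.0] * d
    for _ in range(n_iter):
        J = jacobian(x)
        # propagate each basis column through the Jacobian, then re-orthonormalize
        cols = [[sum(J[i][k] * qc[k] for k in range(d)) for i in range(d)] for qc in q]
        q, rdiag = gram_schmidt(cols)
        sums = [s + math.log(r) for s, r in zip(sums, rdiag)]
        x = step(x)
    return [s / n_iter for s in sums]

# toy example: linear map with eigenvalues 2 and 0.5 -> exponents log 2 and -log 2
exps = lyapunov_exponents(lambda x: [[2.0, 0.0], [0.0, 0.5]],
                          [0.1, 0.1], step=lambda x: x, n_iter=50)
print(exps)
```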

#### **3. Random periodic shadowing for RDS generated by SDEs**

#### **3.1. Theoretical foundations**

Let {(y_k(θ^{t_k}ω), F_{t_k})} (k = 0, …, N) be an (*ω*, *δ*)-pseudo random periodic orbit of RDS (3) generated by SDE (1), with y_k(θ^{h_k}ω) ∈ L²(Ω, ℙ) for k = 0, 1, …, N. Assume that we have a sequence of d × d random matrices {(Y_k(θ^{t_k}ω), F_{t_k})} (k = 0, …, N) such that

$$\|Y_{k+1}(\theta^{t_{k+1}}\omega) - D\varphi(t_k, t_{k+1}, \theta^{t_k}\omega)\,y_k(\theta^{t_k}\omega)\| \le \delta, \quad \text{for } k = 0, 1, \dots, N-1,$$

and


$$\|\|Y\_0(\theta^{t\_0}\omega) - \mathcal{D}\varphi(t\_N, t\_{N+1}, \theta^{t\_N}\omega)y\_N(\theta^{t\_N}\omega)\| \le \delta. \tag{6}$$

A sequence of d × (d − 1) random matrices (S_k(θ^{t_k}ω), F_{t_k}), k = 0, 1, …, N, is chosen such that their columns form an approximately orthonormal basis for the subspace orthogonal to T(x_k), where T(x_k) = f_1(θ^{t_k}ω, x_k); approximate orthonormality means that the following inequality holds:

$$\|S_k^*(\theta^{t_k}\omega)\, S_k(\theta^{t_k}\omega) - I\| \le \delta_1,$$

for some positive number δ₁ ∈ (0, δ), where * denotes the matrix transpose.

Now a sequence of (d − 1) × (d − 1) random matrices A_k(θ^{t_k}ω) is chosen which satisfies

$$\|A_k(\theta^{t_k}\omega) - S_{k+1}^*(\theta^{t_{k+1}}\omega)\, Y_k(\theta^{t_k}\omega)\, S_k(\theta^{t_k}\omega)\| \le \delta, \quad \text{for } k = 0, 1, \dots, N-1,$$

and

$$\|A_N(\theta^{t_N}\omega) - S_0^*(\theta^{t_0}\omega)\, Y_N(\theta^{t_N}\omega)\, S_N(\theta^{t_N}\omega)\| \le \delta.$$

Next, a linear operator L is defined as follows. If the random variables ξ = {ξ_k(θ^{t_k}ω)} (k = 0, …, N) lie in the space (ℝ^{d−1})^{N+1}, then we let Lξ = {[Lξ]_k} (k = 0, …, N) be given by

$$[L\xi]_k = \xi_{k+1}(\theta^{t_{k+1}}\omega) - A_k(\theta^{t_k}\omega)\,\xi_k(\theta^{t_k}\omega), \quad \text{for } k = 0, 1, \dots, N-1,$$

and

$$[L\xi]_N = \xi_0(\theta^{t_0}\omega) - A_N(\theta^{t_N}\omega)\,\xi_N(\theta^{t_N}\omega).$$

It follows from Section 4.2 that the operator *L* has right inverses and we choose one such right inverse *L*−<sup>1</sup> .
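For intuition, in the scalar case d − 1 = 1 the cyclic system Lξ = g can be solved in closed form by propagating ξ_0 around the loop. The sketch below is a minimal illustration of such a right inverse under this simplifying assumption; it is not the paper's numerical procedure.

```python
def apply_L_inverse(A, g):
    """Solve L xi = g for the cyclic random difference equation
    xi_{k+1} = A_k xi_k + g_k (k = 0..N-1),  xi_0 = A_N xi_N + g_N,
    in the scalar case d - 1 = 1 (each A_k, g_k is a number).
    Writing xi_k = P_k xi_0 + c_k reduces everything to one equation for xi_0."""
    N = len(A) - 1
    P, c = 1.0, 0.0
    Ps, cs = [P], [c]
    for k in range(N):
        P, c = A[k] * P, A[k] * c + g[k]
        Ps.append(P)
        cs.append(c)
    denom = 1.0 - A[N] * Ps[N]   # the system is solvable iff A_N * P_N != 1
    xi0 = (A[N] * cs[N] + g[N]) / denom
    return [Pk * xi0 + ck for Pk, ck in zip(Ps, cs)]

def apply_L(A, xi):
    """Forward operator L, for checking: returns [L xi]_0 .. [L xi]_N."""
    N = len(A) - 1
    out = [xi[k + 1] - A[k] * xi[k] for k in range(N)]
    out.append(xi[0] - A[N] * xi[N])
    return out

A = [0.5, 2.0, 0.3]
g = [1.0, -1.0, 0.5]
xi = apply_L_inverse(A, g)
print(xi)
```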

Lastly, we define some constants. Let U be a convex subset of ℝ^d containing the values of the (*ω*, *δ*)-pseudo orbit {(y_k(θ^{t_k}ω), F_{t_k})} (k = 0, …, N), and define

$$
\Delta t = \inf\_{0 \le k \le N} \Delta t\_{k+1} = \inf\_{0 \le k \le N} (t\_{k+1} - t\_k).
$$

Next, we choose a positive number 0 < ε₀ ≤ Δt such that whenever ∥x − y_k(θ^{t_k}ω)∥ ≤ ε₀, the solution φ(s, t, ω)x (s ≤ t) is defined and remains in U for 0 < t ≤ t_k + ε₀, ℙ-almost surely.

Finally, we define

$$\begin{aligned} M\_0 &= \sup\_{\boldsymbol{x} \in \mathcal{U}} \| f\_1(\boldsymbol{\theta}^t \boldsymbol{\omega}, \boldsymbol{x}(t)) \|, \\ M\_1 &= \sup\_{\boldsymbol{x} \in \mathcal{U}} \| Df\_1(\boldsymbol{\theta}^t \boldsymbol{\omega}, \boldsymbol{x}(t)) \|, \\ M\_2 &= \sup\_{\boldsymbol{x} \in \mathcal{U}} \| D^2 f\_1(\boldsymbol{\theta}^t \boldsymbol{\omega}, \boldsymbol{x}(t)) \|. \end{aligned}$$

and

$$\Theta = \sup\_{0 \le k \le N-1} \|Y\_k(\theta^{\sharp\_k} \omega)\|,$$

where

$$Df_1 = \left[\frac{\partial f_1(\theta^t \omega, x(t))}{\partial x_i}\right].$$

We first introduce the following lemma, which was proved in Ref. [8] and will be applied to the main theorem [11].

**Lemma 3.1.** Let X and Y be finite-dimensional random vector spaces of the same dimension, and let B be an open subset of X. Let v₀ be a given element of B. Suppose that G : B → Y is a C² function satisfying:


$$M = \sup\{\|D^2G(v)\| : v \in \mathbb{B}, \|v - v\_0\| \le \overline{\varepsilon}\};$$

Then there is a solution v of the equation G(v) = 0 satisfying ∥v − v₀∥ ≤ *ε*.

#### **3.2. Main results**


Now we state the main theorem; its proof is postponed to a later section.

**Theorem 3.2.** For a given bounded (*ω*, *δ*)-pseudo random periodic orbit {(y_k(θ^{t_k}ω), F_{t_k})} (k = 0, …, N) of RDS (3) generated by SDE (1), assume that

$$\mathbb{C} := \max \{ \mathsf{M}\_0^{-1} (1 + \Theta \| \mathsf{L}^{-1} \|), \| \mathsf{L}^{-1} \| \}. \tag{7}$$

If the quantities shown in Section 3.1 together with *δ* and *ε*<sup>0</sup> satisfy:


**iii.** $$C_3 = 8C^2\delta\left(M_0 M_1 + 2M_1 \exp(M_1\Delta t) + M_2\Delta t \exp(2M_1\Delta t)\right) \le 1;$$

Then there exists a sequence of times {h_k} (k = 0, …, N + 1), h_0 ≤ h_1 ≤ ⋯ ≤ h_{N+1} ≤ t_{N+1}, such that the (*ω*, *δ*)-pseudo random periodic orbit {(y_k(θ^{t_k}ω), F_{t_k})} (k = 0, …, N) is (*ω*, *δ*)-periodic shadowed by a true random periodic orbit of SDE (1) containing the points {(x_k(θ^{h_k}ω), F_{h_k})} (k = 0, …, N) in mean square. Moreover, the shadowing distance satisfies *ε* ≤ 4*Cδ*.
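As a sanity check of Eq. (7) and condition iii, the theorem's quantities can be evaluated directly from the numerical bounds reported for the experiment of Section 4 (Table 1 and the bound on ∥L⁻¹∥). The sketch below reproduces C ≈ 0.1025 and C₃ ≈ 0.0757; the values are taken from the experiment, not chosen here.

```python
import math

# numerical upper bounds reported for the stochastic Lorenz experiment
M0, M1, M2 = 9.8037, 0.0185, 0.0014
Theta, L_inv_norm = 1.0013, 4.8218e-03
delta, dt = 4.1265, 0.00007

# C from Eq. (7)
C = max((1.0 / M0) * (1.0 + Theta * L_inv_norm), L_inv_norm)

# condition iii of Theorem 3.2
C3 = 8 * C**2 * delta * (M0 * M1 + 2 * M1 * math.exp(M1 * dt)
                         + M2 * dt * math.exp(2 * M1 * dt))

# "Moreover" part: upper bound 4*C*delta on the shadowing distance
shadowing_bound = 4 * C * delta
print(round(C, 4), round(C3, 4), round(shadowing_bound, 4))
```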

## **4. Numerical experiments**

Here, we apply the random periodic shadowing theorem to rigorously establish the existence of random periodic orbits of the stochastic Lorenz equation, and this section provides numerical experiments to compute the shadowing distance.

#### **4.1. Experimental preparation**

Consider the following Stratonovich stochastic Lorenz equation (SSLE) in ℝ³:

$$d\mathbf{X}_t = f(\mathbf{X}_t)dt + \mu \mathbf{X}_t \circ dW_t(\omega), \quad \mathbf{X}(0) = \mathbf{x}_0 \in \mathbb{R}^3, \tag{8}$$

where X_t = (x, y, z)^T ∈ ℝ³; x, y, and z are the state variables; σ, ρ, and β are positive constant parameters; and

$$f(\mathbf{X}_t) = \begin{pmatrix} -\sigma x + \sigma y \\ \rho x - y - xz \\ -\beta z + xy \end{pmatrix}, \qquad \mu \mathbf{X}_t = \begin{pmatrix} \mu x \\ \mu y \\ \mu z \end{pmatrix}.$$

Make the transformation as follows:

$$\begin{cases} \overline{\mathfrak{x}}(t,\omega) = \exp\left(-\mu O\_t(\omega)\right) \mathbf{x} \\ \overline{\mathfrak{y}}(t,\omega) = \exp\left(-\mu O\_t(\omega)\right) \mathbf{y} \\ \overline{\mathfrak{z}}(t,\omega) = \exp\left(-\mu O\_t(\omega)\right) \mathbf{z} \end{cases}$$

It follows from the transformation that the above SSLE (8) can be transformed to the random differential equation (RDE) in the following form

$$\begin{cases} \dfrac{d\overline{x}}{dt} = \sigma(-\overline{x} + \overline{y}) + \mu O_t(\omega)\overline{x} \\[4pt] \dfrac{d\overline{y}}{dt} = -\overline{x}\,\overline{z} + \rho\overline{x} - \overline{y} + \mu O_t(\omega)\overline{y} \\[4pt] \dfrac{d\overline{z}}{dt} = \overline{x}\,\overline{y} - \beta\overline{z} + \mu O_t(\omega)\overline{z}. \end{cases} \tag{9}$$

The existence and uniqueness of the solution of RDE (9) can be proved by the same approaches as proposed in Refs. [2] and [12], even though the normally required linear growth condition is not satisfied. Hence, an RDS φ can be generated by the solution operator of RDE (9).

In this experiment, it appears numerically that the stochastic Lorenz equations have an asymptotically stable random periodic orbit for the parameter values σ = 10, ρ = 100.5, β = 8/3.

Firstly, we generate Brownian trajectories in the following way

$$\mathcal{W}\_0 = 0, \mathcal{W}\_{(i+1)\Delta t} = \mathcal{W}\_{i\Delta t} + \psi\_{i+1}$$

where

$$\psi_i \sim N(0, \sqrt{\Delta t}), \quad i = 1, 2, \dots, N.$$
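This recursion takes only a few lines in Python; the sketch below is a minimal illustration, with the seed and sizes chosen arbitrarily for demonstration.

```python
import math
import random

random.seed(2)

def brownian_path(n, dt):
    """W_0 = 0, W_{(i+1)dt} = W_{i dt} + psi_{i+1}, psi_i ~ N(0, sqrt(dt))."""
    w = [0.0]
    for _ in range(n):
        w.append(w[-1] + random.gauss(0.0, math.sqrt(dt)))
    return w

W = brownian_path(100000, 0.00007)
print(len(W), W[0])
```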

Secondly, it follows from Ref. [13] that a global attractor, i.e., a forward invariant random compact set U of the RDS φ generated by RDE (9), is the closed ball B₁ with center zero and radius R(ω), that is, B₁ = {X_t ∈ ℝ³ : ∥X_t∥ ≤ R(ω)}, where

$$\mathcal{R}(\omega) = c\_2 \int\_{-t\_N}^0 \exp\left(c\_1 s - 2\sigma W\_s(\omega)\right) ds$$

and

$$c_1 = \min(1, \beta, \sigma), \qquad c_2 > 0, \qquad 2\langle BX_t, X_t\rangle < -c_1|X_t|^2 + c_2,$$

$$B = \begin{pmatrix} -\sigma & \sigma & 0 \\ \rho & -1 & 0 \\ 0 & 0 & -\beta \end{pmatrix}.$$

It has been proved in Ref. [13] that the RDS φ generated by Eq. (8) stays in the forward invariant random compact set U for ℙ-almost every ω ∈ Ω on a finite interval.
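The radius R(ω) can be approximated pathwise by a Riemann sum over one sampled Brownian path. The sketch below is only illustrative: the choice c₂ = 1 is a hypothetical normalization (only c₂ > 0 is required), and the backward path W_s for s < 0 is simply sampled from fresh Gaussian increments.

```python
import math
import random

random.seed(3)

def attractor_radius(sigma, beta, t_N, dt, c2=1.0):
    """Riemann-sum estimate of R(omega) = c2 * int_{-t_N}^0 exp(c1*s - 2*sigma*W_s) ds
    along one sampled Brownian path (c2 = 1 is a hypothetical choice)."""
    c1 = min(1.0, beta, sigma)
    n = int(t_N / dt)
    w, total = 0.0, 0.0
    for i in range(n):
        s = -i * dt                            # walk backwards from 0 to -t_N
        total += math.exp(c1 * s - 2.0 * sigma * w) * dt
        w += random.gauss(0.0, math.sqrt(dt))  # sample W_s along the path
    return c2 * total

R = attractor_radius(sigma=10.0, beta=8.0 / 3.0, t_N=1.0, dt=0.001)
print(R)
```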

### **4.2. Numerical results**


We first present the results of our computations of the (*ω*, *δ*)-pseudo random periodic orbits for the stochastic Lorenz equation. To generate a good (*ω*, *δ*)-pseudo random periodic orbit, we numerically computed the orbit for some time from a rough initial guess. In this experiment, we take the initial value (x₀, y₀, z₀) = (1.76, −4.48, 80.99), time step size Δt = 0.00007, and number of iterations N = 100000. The (*ω*, *δ*)-pseudo random periodic orbits of Eq. (9) in **Figure 1** are generated by the Euler-Maruyama scheme of Ref. [14] together with the refined initial data. This also shows that there exists a forward invariant random compact set.

**Figure 1.** (*ω*, *δ*)-pseudo random periodic orbits of SLS.
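A minimal Euler-Maruyama integration of the SSLE (8) can be sketched as follows. The noise intensity `MU` and the seed are illustrative assumptions (the text does not fix μ here), and the Stratonovich product is handled through the usual Itô correction μ²X/2 for linear multiplicative noise; this is a demonstration sketch, not the paper's code.

```python
import math
import random

random.seed(4)

SIGMA, RHO, BETA, MU = 10.0, 100.5, 8.0 / 3.0, 0.1  # MU is a hypothetical noise level

def lorenz_drift(v):
    x, y, z = v
    return [-SIGMA * x + SIGMA * y, RHO * x - y - x * z, -BETA * z + x * y]

def euler_maruyama_sle(x0, dt, n_steps):
    """Euler-Maruyama for dX = f(X) dt + mu X o dW, with the Stratonovich-to-Ito
    correction mu^2/2 * X dt for linear multiplicative noise."""
    path = [x0[:]]
    v = x0[:]
    for _ in range(n_steps):
        dw = random.gauss(0.0, math.sqrt(dt))
        f = lorenz_drift(v)
        v = [vi + (fi + 0.5 * MU * MU * vi) * dt + MU * vi * dw
             for vi, fi in zip(v, f)]
        path.append(v[:])
    return path

orbit = euler_maruyama_sle([1.76, -4.48, 80.99], dt=0.00007, n_steps=2623)
# distance ||X_n - X_0||, used to locate an approximate period
dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(orbit[-1], orbit[0])))
print(dist)
```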

Secondly, we briefly describe the details of the computation of the key quantities listed in **Table 2**. It follows from the methods shown in Section 3, and we can determine the parameters of Theorem 3.2. **Tables 1** and **2** present the important quantities and the necessary inequalities pertaining to this (*ω*, *δ*)-pseudo random periodic orbit.


**Table 1.** Value of the parameters.


**Table 2.** Comparison of the inequalities.

**Figure 2.** The distance ∥*Xn* − *X*0∥.

In conclusion, there is explicit dependent relationship between the shadowing distance and the pseudo orbit error, and there exists the true periodic orbit in the appropriate neighborhood of the (*ω*, *δ*)-pseudo random periodic orbit of SLS (**Figure 2**). **Figures 3a** and **3b** demonstrate the relation between (*ω*, *δ*)-pseudo random periodic orbits and true periodic orbits of Eq. (8). The blue lines denote (*ω*, *δ*)-pseudo random periodic orbit for the random dynamical system, and the domain between two blue lines has at least a true orbit for the corresponding random dynamical system.

**Figure 3.** (a) The symbolic drawing of the relation between true orbit and pseudo orbit plane. (b) The approximative structure of pseudo random periodic solution projected on the *z* plane.

## **5. Choice of the operator** *L***−<sup>1</sup>**

of Theorem 3.2. **Tables 1** and **2** present the important quantities and the necessary inequalities

∥ *L*−<sup>1</sup> ∥ ≤ 4.8218*e* − 03

**Parameters Value Parameters Value** Δ*t* 0.00007 *ε*<sup>0</sup> 2.01 *X*<sup>0</sup> (1.76, −4.48, 80.99) *M*<sup>0</sup> ≤ 9.8037 *N* 105 *M*<sup>1</sup> ≤ 0.0185 Approx. period *τ* = 0.1837 *M*<sup>2</sup> 0.0014 *X*<sup>2623</sup> (−0.6911, −7.7293, 81.6553) Θ ≤ 1.0013 ∥ *X*<sup>2623</sup> − *X*0∥ 4.1241 *δ* ≤ 4.1265

**Inequalities Value** *C* ≤ 0.1025 *C*<sup>1</sup> ≤ 0.4229 *C*<sup>2</sup> ≤ 1.6918 *C*<sup>3</sup> ≤ 0.0757 Shadowing distance *ε* 1.2688 Shadowing time *t* 70


224 Dynamical Systems - Analytical and Computational Techniques


We are going to verify that the linear operator *L* along the obtained (*ω*, *δ*)-pseudo random periodic orbit $\{(y_k(\theta^{t_k}\omega), \mathcal{F}_{t_k})\}_{k=0}^{N}$ is invertible for $\mathbb{P}$-almost surely $\omega \in \Omega$.

Let $g = \{g_k(\theta^{t_k}\omega)\}_{k=0}^{N}$ be in $\mathcal{Y}$. To find $\xi = L^{-1}g$, we have to solve the random difference equation

$$\begin{aligned} \xi_{k+1}(\theta^{t_{k+1}}\omega) &= A_k(\theta^{t_k}\omega)\xi_k(\theta^{t_k}\omega) + g_k(\theta^{t_k}\omega), \ \text{for } k = 0, \ldots, N-1, \\ \xi_0(\theta^{t_0}\omega) &= A_N(\theta^{t_N}\omega)\xi_N(\theta^{t_N}\omega) + g_N(\theta^{t_N}\omega). \end{aligned}$$

With the same choice of the parameters as in Section 3, it can be shown that the random matrix $A_k(\theta^{t_k}\omega)$ is upper triangular with positive diagonal entries. Therefore, there is an integer *l* such that for most *k*, the first *l* diagonal entries of $A_k(\theta^{t_k}\omega)$ exceed 1 and the rest are less than 1 in mean square for $\mathbb{P}$-almost surely $\omega \in \Omega$ [15]. We can partition the random matrix $A_k(\theta^{t_k}\omega)$ in the form

$$A_k(\theta^{t_k}\omega) = \begin{bmatrix} P_k(\theta^{t_k}\omega) & Q_k(\theta^{t_k}\omega) \\ 0 & R_k(\theta^{t_k}\omega) \end{bmatrix}, \quad k = 0, 1, \ldots, N,$$

where $P_k(\theta^{t_k}\omega)$ is an $l \times l$ random matrix, $Q_k(\theta^{t_k}\omega)$ is an $l \times (d-l-1)$ random matrix, and $R_k(\theta^{t_k}\omega)$ is a $(d-l-1) \times (d-l-1)$ random matrix.
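The block splitting above can be carried out mechanically by counting the diagonal entries that exceed 1. A small sketch with a hypothetical upper-triangular matrix (standing in for one realization of $A_k(\theta^{t_k}\omega)$):

```python
import numpy as np

def partition_blocks(A):
    """Split an upper-triangular matrix into the blocks P, Q, R of the text,
    with the first l diagonal entries above 1 and the remaining ones below 1."""
    diag = np.diag(A)
    l = int(np.sum(diag > 1.0))          # expanding directions come first
    assert np.all(diag[:l] > 1.0) and np.all(diag[l:] < 1.0)
    assert np.allclose(A[l:, :l], 0.0)   # lower-left block is zero
    return A[:l, :l], A[:l, l:], A[l:, l:]

# Hypothetical realization: two expanding and one contracting direction.
A = np.array([[1.5, 0.2, 0.1],
              [0.0, 1.2, 0.3],
              [0.0, 0.0, 0.8]])
P, Q, R = partition_blocks(A)
```
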

It follows from the multiplicative ergodic theorem that the Lyapunov exponents of $A_k(\theta^{t_k}\omega)$ are nonzero. This suggests that the RDS *ϕ* generated by SDE (1) along the obtained (*ω*, *δ*)-pseudo orbit $\{(y_k(\theta^{t_k}\omega), \mathcal{F}_{t_k})\}_{k=0}^{N}$ is pseudo-hyperbolic in mean square for $\mathbb{P}$-almost surely $\omega \in \Omega$. The system can be written as

$$\begin{cases} \xi_{k+1}^{(1)}(\theta^{t_{k+1}}\omega) = P_k(\theta^{t_k}\omega)\xi_k^{(1)}(\theta^{t_k}\omega) + Q_k(\theta^{t_k}\omega)\xi_k^{(2)}(\theta^{t_k}\omega) + g_k^{(1)}(\theta^{t_k}\omega), \\ \xi_{k+1}^{(2)}(\theta^{t_{k+1}}\omega) = R_k(\theta^{t_k}\omega)\xi_k^{(2)}(\theta^{t_k}\omega) + g_k^{(2)}(\theta^{t_k}\omega), \end{cases}$$

for *k* = 0, 1, …, *N* − 1, and

$$\begin{cases} \xi_0^{(1)}(\theta^{t_0}\omega) = P_N(\theta^{t_N}\omega)\xi_N^{(1)}(\theta^{t_N}\omega) + Q_N(\theta^{t_N}\omega)\xi_N^{(2)}(\theta^{t_N}\omega) + g_N^{(1)}(\theta^{t_N}\omega), \\ \xi_0^{(2)}(\theta^{t_0}\omega) = R_N(\theta^{t_N}\omega)\xi_N^{(2)}(\theta^{t_N}\omega) + g_N^{(2)}(\theta^{t_N}\omega). \end{cases}$$

Let $\xi_0^{(2)}(\theta^{t_0}\omega) = 0$ and solve the second of the equations above forwards. Then substitute the resulting $\xi_k^{(2)}(\theta^{t_k}\omega)$ into the first equation, let $\xi_N^{(1)}(\theta^{t_N}\omega) = 0$, and solve it backwards. Finally, the solutions $\xi_k^{(1)}(\theta^{t_k}\omega)$ are obtained. Therefore, the right inverse $L^{-1}$ is obtained as

$$[L^{-1}\mathfrak{g}]\_k = [\xi\_k^{(1)}(\theta^{t\_k}\omega), \xi\_k^{(2)}(\theta^{t\_k}\omega)]^T, k = 0, 1, \dots, N.$$

Hence, the invertibility of the operator *L* is proved, which is important for the application of the random shadowing lemma.
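The forward/backward sweep described above can be sketched in code. The block sequences and right-hand sides below are hypothetical stand-ins for realizations of $P_k(\theta^{t_k}\omega)$, $Q_k$, $R_k$, and $g_k^{(1)}, g_k^{(2)}$, and the periodic boundary equation at $k = N$ is omitted for brevity.

```python
import numpy as np

def solve_L_inverse(P, Q, R, g1, g2):
    """Forward/backward sweep computing xi = L^{-1} g blockwise.

    P, Q, R: the N diagonal blocks of A_0, ..., A_{N-1}; g1, g2: the
    corresponding right-hand sides split like xi = (xi^(1), xi^(2)).
    """
    N = len(P)
    # Contracting part: set xi2_0 = 0 and iterate forwards.
    xi2 = [np.zeros(R[0].shape[0])]
    for k in range(N):
        xi2.append(R[k] @ xi2[k] + g2[k])
    # Expanding part: set xi1_N = 0 and solve backwards via
    # xi1_k = P_k^{-1} (xi1_{k+1} - Q_k xi2_k - g1_k).
    xi1 = [None] * (N + 1)
    xi1[N] = np.zeros(P[0].shape[0])
    for k in range(N - 1, -1, -1):
        xi1[k] = np.linalg.solve(P[k], xi1[k + 1] - Q[k] @ xi2[k] - g1[k])
    return xi1, xi2

# Hypothetical block data: expanding P_k (diagonal entries > 1), contracting R_k.
rng = np.random.default_rng(1)
N, l, m = 5, 2, 1
P = [np.triu(rng.random((l, l))) + 1.5 * np.eye(l) for _ in range(N)]
Q = [rng.random((l, m)) for _ in range(N)]
R = [0.5 * np.eye(m) for _ in range(N)]
g1 = [rng.random(l) for _ in range(N)]
g2 = [rng.random(m) for _ in range(N)]
xi1, xi2 = solve_L_inverse(P, Q, R, g1, g2)
```

Solving the contracting block forwards and the expanding block backwards is what keeps the sweep numerically stable in both directions.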

## **6. Proof of the main theorem**

*Proof*. Consider a given (*ω*, *δ*)-pseudo random periodic orbit $\{(y_k(\theta^{t_k}\omega), \mathcal{F}_{t_k})\}_{k=0}^{N}$ of the RDS *ϕ* (3) generated by SDE (1), and an associated sequence of $d \times d$ random matrices $\{Y_k(\theta^{t_k}\omega)\}_{k=0}^{N}$ satisfying Eq. (6). Our aim is to show that $\{(y_k(\theta^{t_k}\omega), \mathcal{F}_{t_k})\}_{k=0}^{N}$ is (*ω*, *δ*)-periodically shadowed by a true random periodic orbit containing $\{(x_k(\theta^{h_k}\omega), \mathcal{F}_{h_k})\}_{k=0}^{N}$, where $x_k(\theta^{h_k}\omega)$ lies in the random hyperplane $H_k(\theta^{t_k}\omega)$ through $y_k(\theta^{t_k}\omega)$.

Suppose that the random hyperplane $H_k(\theta^{t_k}\omega)$ is approximately normal to $T(y_k) = f^1(\theta^{t_k}\omega, y_k)$ at the point $y_k(\theta^{t_k}\omega)$. Therefore, we only need to find a sequence of times $\{h_k\}_{k=0}^{N+1}$ close to $\{t_k\}_{k=0}^{N+1}$, with $h_0 \le h_1 \le \cdots \le h_{N+1} \le t_{N+1}$, and a sequence of points $\{(x_k(\theta^{h_k}\omega), \mathcal{F}_{t_N})\}_{k=0}^{N}$ with $x_k(\theta^{h_k}\omega) \in H_k(\theta^{t_k}\omega)$ contained in the *ε*-neighborhood of $y_k(\theta^{t_k}\omega)$ such that

$$x_{k+1}(\theta^{h_{k+1}}\omega) = \phi(h_k, h_{k+1}, \theta^{h_k}\omega)\, x_k(\theta^{h_k}\omega), \ \text{for } k = 0, 1, \ldots, N-1,$$

and

$$x_0(\theta^{h_0}\omega) = \phi(h_N, h_{N+1}, \theta^{h_N}\omega)\, x_N(\theta^{h_N}\omega).$$

By the assumption, $S_k(\theta^{t_k}\omega)$ is a $d \times (d-1)$ random matrix whose columns form an approximately orthonormal basis for $H_k(\theta^{t_k}\omega)$. We first define the random hyperplane $H_k(\theta^{t_k}\omega)$ as the image of $\mathbb{R}^{d-1}$ under the map $\mathbf{z} \mapsto y_k(\theta^{t_k}\omega) + S_k(\theta^{t_k}\omega)\mathbf{z}$, which can be viewed as a subspace of the tangent space at $y_k(\theta^{t_k}\omega)$.
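A matrix with orthonormal columns spanning the hyperplane normal to a given tangent direction can be obtained from a complete QR factorization. A minimal sketch, with a hypothetical tangent vector standing in for $T(y_k)$:

```python
import numpy as np

def hyperplane_basis(t):
    """Return a d x (d-1) matrix S whose orthonormal columns span the
    hyperplane normal to t."""
    t = np.asarray(t, dtype=float).reshape(-1, 1)
    # Complete QR of the single column t: the first column of Qfull spans t,
    # the remaining d - 1 columns are orthonormal and orthogonal to t.
    Qfull, _ = np.linalg.qr(t, mode="complete")
    return Qfull[:, 1:]

t = np.array([1.0, 2.0, 2.0])          # hypothetical tangent direction T(y_k)
S = hyperplane_basis(t)
y = np.array([1.76, -4.48, 80.99])     # base point y_k (from Table 1)
z = np.array([0.5, -0.25])
x = y + S @ z                          # the parametrization z -> y_k + S_k z
```
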

Therefore, the problem of finding appropriate sequences of $h_k$ and $x_k$ becomes that of finding a sequence of times $\{h_k\}_{k=0}^{N+1}$ close to $\{t_k\}_{k=0}^{N+1}$ and a sequence of points $\{(\mathbf{z}_k(\theta^{h_k}\omega), \mathcal{F}_{t_N})\}_{k=0}^{N}$ in $\mathbb{R}^{d-1}$ such that

$$\begin{aligned} & y_{k+1}(\theta^{h_{k+1}}\omega) + S_{k+1}(\theta^{h_{k+1}}\omega)\mathbf{z}_{k+1}(\theta^{h_{k+1}}\omega) \\ & \quad = \phi(h_k, h_{k+1}, \theta^{h_k}\omega)\big(y_k(\theta^{h_k}\omega) + S_k(\theta^{h_k}\omega)\mathbf{z}_k(\theta^{h_k}\omega)\big), \ \ k = 0, 1, \ldots, N-1, \end{aligned}$$

and


$$y_0(\theta^{h_0}\omega) + S_0(\theta^{h_0}\omega)\mathbf{z}_0(\theta^{h_0}\omega) = \phi(h_N, h_{N+1}, \theta^{h_N}\omega)\big(y_N(\theta^{h_N}\omega) + S_N(\theta^{h_N}\omega)\mathbf{z}_N(\theta^{h_N}\omega)\big).$$

We next introduce the space $\mathcal{X} = \mathbb{R}^{N+2} \times (\mathbb{R}^{d-1})^{N+1}$ with norm

$$\|\| (\{\mathbf{s}\_k\}\_{k=0}^{N+1}, \{\zeta\_k\}\_{k=0}^N) \|\| = \max \left\{ \sup\_{0 \le k \le N+1} |\mathbf{s}\_k|, \sup\_{0 \le k \le N} \|\zeta\_k\| \right\}.$$

and the space $\mathcal{Y} = (\mathbb{R}^d)^{N+1}$ with norm

$$\|\{\mathbf{g}_k\}_{k=0}^N\| = \max_{0 \le k \le N} \|\mathbf{g}_k\|,$$

where $s_k \in \mathbb{R}$, $\zeta_k \in \mathbb{R}^{d-1}$, and $\mathbf{g}_k \in \mathbb{R}^{d}$.

Now, let $\mathcal{B}$ be a properly chosen *ε*-open neighborhood of $v_0 = (\{t_k\}_{k=0}^{N+1}, 0)$ in $\mathcal{X}$ which contains the point $v = (\{s_k\}_{k=0}^{N+1}, \{\zeta_k\}_{k=0}^{N})$. We introduce the function $G: \mathcal{B} \to \mathcal{Y}$ given by

$$\begin{aligned} [G(v)]_k &= y_{k+1}(\theta^{t_{k+1}}\omega) + S_{k+1}(\theta^{t_{k+1}}\omega)\zeta_{k+1}(\theta^{t_{k+1}}\omega) \\ &\quad - \phi(s_k, s_{k+1}, \theta^{t_k}\omega)\big(y_k(\theta^{t_k}\omega) + S_k(\theta^{t_k}\omega)\zeta_k(\theta^{t_k}\omega)\big), \ \text{for } k = 0, 1, \ldots, N-1, \end{aligned}$$

and

$$\begin{aligned} [G(v)]_N &= y_0(\theta^{t_0}\omega) + S_0(\theta^{t_0}\omega)\zeta_0(\theta^{t_0}\omega) \\ &\quad - \phi(s_N, s_{N+1}, \theta^{t_N}\omega)\big(y_N(\theta^{t_N}\omega) + S_N(\theta^{t_N}\omega)\zeta_N(\theta^{t_N}\omega)\big). \end{aligned} \tag{10}$$

In fact, Theorem 3.2 will be proved if we can find a solution $\overline{v} = (\{h_k\}_{k=0}^{N+1}, \{\mathbf{z}_k(\theta^{h_k}\omega)\}_{k=0}^{N})$ of the equation

$$G(\overline{v}) = 0, \ a.s.$$

in the closed ball of radius *ε* about $v_0 = (\{t_k\}_{k=0}^{N+1}, 0)$.

In order to apply Lemma 3.1, hypotheses (*i*)–(*iii*) of that lemma should be verified for the map *G* defined in Eq. (10).

#### Step I:

First and foremost, it follows from the construction of pseudo orbits that $\|G(v_0)\| \le \delta$. Secondly, the Gateaux derivative of the map *G* at $v_0$ in the direction $u = (\{\tau_k\}_{k=0}^{N+1}, \{\xi_k(\theta^{t_k}\omega)\}_{k=0}^{N}) \in \mathcal{X}$ is given by

$$\begin{aligned} [DG(v_0)u]_k &= \lim_{\varepsilon \to 0} \frac{[G(v_0 + \varepsilon u) - G(v_0)]_k}{\varepsilon} \\ &= -\tau_k T(y_{k+1}) + S_{k+1}(\theta^{t_{k+1}}\omega)\,\xi_{k+1}(\theta^{t_{k+1}}\omega) \\ &\quad - D\phi(t_k, t_{k+1}, \theta^{t_k}\omega)y_k(\theta^{t_k}\omega)\, S_k(\theta^{t_k}\omega)\,\xi_k(\theta^{t_k}\omega), \end{aligned}$$

for *k* = 0, 1, …, *N* − 1, and

$$\begin{aligned} [DG(v_0)u]_N &= -\tau_N T(y_N) + S_0(\theta^{t_0}\omega)\,\xi_0(\theta^{t_0}\omega) \\ &\quad - D\phi(t_N, t_{N+1}, \theta^{t_N}\omega)y_N(\theta^{t_N}\omega)\, S_N(\theta^{t_N}\omega)\,\xi_N(\theta^{t_N}\omega). \end{aligned} \tag{11}$$

We will approximate $DG(v_0)$ by another operator. Now, we define the operator $\mathcal{T}: \mathcal{X} \to \mathcal{Y}$ for $u \in \mathcal{X}$. Letting $\mathcal{T}_k u$ be the approximation of $[DG(v_0)u]_k$ as in Ref. [16], we have

$$\begin{aligned} \mathcal{T}_k u &= -\tau_k T(y_{k+1}) + S_{k+1}(\theta^{t_{k+1}}\omega)\,\xi_{k+1}(\theta^{t_{k+1}}\omega) \\ &\quad - Y_k(\theta^{t_k}\omega)\, S_k(\theta^{t_k}\omega)\,\xi_k(\theta^{t_k}\omega), \ k = 0, 1, \ldots, N-1, \end{aligned}$$

and

$$\begin{aligned} \mathcal{T}_N u &= -\tau_N T(y_N) + S_0(\theta^{t_0}\omega)\,\xi_0(\theta^{t_0}\omega) \\ &\quad - Y_N(\theta^{t_N}\omega)\, S_N(\theta^{t_N}\omega)\,\xi_N(\theta^{t_N}\omega). \end{aligned} \tag{12}$$

Now, we need to prove that $\mathcal{T}$ is invertible. Therefore, we must show that for all $g = \{g_k\}_{k=0}^{N} \in \mathcal{Y}$, there is a solution of the following equation

$$
\mathcal{T}\_k \mathfrak{u} = \mathfrak{g}\_k,
$$

that is, for *k* = 0, 1, …, *N* −1,

$$-\tau_k T(y_{k+1}) + S_{k+1}(\theta^{t_{k+1}}\omega)\,\xi_{k+1}(\theta^{t_{k+1}}\omega) - Y_k(\theta^{t_k}\omega)\, S_k(\theta^{t_k}\omega)\,\xi_k(\theta^{t_k}\omega) = g_k(\theta^{t_k}\omega),$$

and

$$-\tau_N T(y_N) + S_0(\theta^{t_0}\omega)\,\xi_0(\theta^{t_0}\omega) - Y_N(\theta^{t_N}\omega)\, S_N(\theta^{t_N}\omega)\,\xi_N(\theta^{t_N}\omega) = g_N(\theta^{t_N}\omega). \tag{13}$$

As we know, the matrix

$$\left[\frac{T(y_k)}{\|T(y_k)\|} \,\Big|\, S_k(\theta^{t_k}\omega)\right]$$

is orthogonal for each *k*. Then this set of equations is equivalent to the following two sets of equations: one set is obtained by premultiplying the *k*th member of Eq. (13) by $T^{*}(y_{k+1})$ and $T^{*}(y_0)$, respectively; the other set is obtained by premultiplying the *k*th member of Eq. (13) by $S^{*}_{k+1}(\theta^{t_{k+1}}\omega)$ and $S^{*}_0(\theta^{t_0}\omega)$, respectively. Therefore, we obtain for *k* = 0, 1, …, *N* − 1,

$$-\tau_k \|T(y_{k+1})\|^2 - T(y_{k+1})^{*}\, Y_k(\theta^{t_k}\omega)\, S_k(\theta^{t_k}\omega)\,\xi_k(\theta^{t_k}\omega) = T(y_{k+1})^{*}\, g_k(\theta^{t_k}\omega),$$

and


$$-\tau_N \|T(y_0)\|^2 - T(y_0)^{*}\, Y_N(\theta^{t_N}\omega)\, S_N(\theta^{t_N}\omega)\,\xi_N(\theta^{t_N}\omega) = T(y_0)^{*}\, g_N(\theta^{t_N}\omega), \tag{14}$$

$$\xi_{k+1}(\theta^{t_{k+1}}\omega) - A_k(\theta^{t_k}\omega)\,\xi_k(\theta^{t_k}\omega) = S_{k+1}^{*}(\theta^{t_{k+1}}\omega)\, g_k(\theta^{t_k}\omega), \ k = 0, 1, \ldots, N-1,$$

and

$$\xi_0(\theta^{t_0}\omega) - A_N(\theta^{t_N}\omega)\,\xi_N(\theta^{t_N}\omega) = S_0^{*}(\theta^{t_0}\omega)\, g_N(\theta^{t_N}\omega). \tag{15}$$

If we write $\overline{g} = \{S_k^{*}(\theta^{t_k}\omega)\,\mathbf{g}_k(\theta^{t_k}\omega)\}_{k=0}^{N}$, it follows from condition (7) that the solution of Eq. (15) is

$$
\xi\_k = (L^{-1}\overline{\mathfrak{g}})\_k. \tag{16}
$$

If Eq. (16) is substituted into Eq. (14), we obtain for *k* = 0, 1, …, *N* −1,

$$\tau_k = -\frac{T(y_{k+1})^{*}}{\|T(y_{k+1})\|^2} \cdot \left[ Y_k(\theta^{t_k}\omega)\, S_k(\theta^{t_k}\omega)\, L^{-1} S_{k+1}^{*}(\theta^{t_{k+1}}\omega) + 1 \right] g_k(\theta^{t_k}\omega),$$

and

$$\tau_N = -\frac{T(y_0)^{*}}{\|T(y_0)\|^2} \cdot \left[ Y_N(\theta^{t_N}\omega)\, S_N(\theta^{t_N}\omega)\, L^{-1} S_0^{*}(\theta^{t_0}\omega) + 1 \right] g_N(\theta^{t_N}\omega). \tag{17}$$

Taking into account Eqs. (16) and (17), we define the right inverse of $\mathcal{T}$ in the form

$$\mathcal{T}^{-1} g = \left[ \{\tau_k\}_{k=0}^{N+1}, \{\xi_k(\theta^{t_k}\omega)\}_{k=0}^{N} \right].$$

It follows from Eq. (17) that T is invertible and the following inequality holds

$$\|\mathcal{T}^{-1}\| \le C. \tag{18}$$

Therefore, we can establish the invertibility of $DG(v_0)$. By operator theory, we obtain

$$\mathcal{K} = \left[ \mathbb{I} + \mathcal{T}^{-1}(DG(v_0) - \mathcal{T}) \right]^{-1} \mathcal{T}^{-1}. \tag{19}$$

By Eqs. (11) and (12) and the assumption (*i*) of Theorem 3.2, we obtain that

$$\begin{aligned} \|\mathcal{T}^{-1}(DG(v_0) - \mathcal{T})\| &\le \|\mathcal{T}^{-1}\| \, \|DG(v_0) - \mathcal{T}\| \\ &\le \|\mathcal{T}^{-1}\| \cdot \left[ \sup_k \left| \big( D\phi(t_k, t_{k+1}, \theta^{t_k}\omega)\,y_k(\theta^{t_k}\omega) - Y_k(\theta^{t_k}\omega) \big)\, S_k(\theta^{t_k}\omega)\,\xi_k(\theta^{t_k}\omega) \right| \right] \\ &\le C\delta < \frac{1}{2}. \end{aligned}$$

Then the inverse $[\mathbb{I} + \mathcal{T}^{-1}(DG(v_0) - \mathcal{T})]^{-1}$ exists, and $\mathcal{K}$ is a right inverse of $DG(v_0)$. Furthermore,

$$\| [\mathbb{I} + \mathcal{T}^{-1}(DG(v_0) - \mathcal{T})]^{-1} \| \le 2.$$

Therefore, we have verified hypothesis (*i*) of Lemma 3.1.

#### Step II:

It follows from Eqs. (18) and (19) that

$$\|\mathcal{K}\| \le 2C,$$

and

$$\|\|G(v\_0)\|\| = \sup\_k \|y\_{k+1}(\theta^{t\_{k+1}}\omega) - \varphi(t\_k, t\_{k+1}, \theta^{t\_k}\omega)y\_k(\theta^{t\_k}\omega)\| \le \delta.$$

By the assumption (*ii*) of Theorem 3.2, we obtain that

$$\varepsilon = 2 \| \mathcal{K} \| \| G(\upsilon\_0) \| \le 4C\delta < \varepsilon\_0.$$

That is, the closed ball of radius *ε* around *v*<sup>0</sup> is contained in the open set *B*. Therefore, we have verified hypothesis (*ii*) of Lemma 3.1.

#### Step III:

We only need to estimate $\|D^2 G(v)\|$. We choose $\overline{u} = (\{r_k\}_{k=0}^{N+1}, \{\eta_k\}_{k=0}^{N})$ and calculate the second-order Gateaux differential of *G*(*v*) for *k* = 0, 1, …, *N* as follows:

$$\begin{aligned} [D^2 G(v) u \overline{u}]_k &:= \lim_{t \to 0} \frac{[DG(v + t\overline{u})u - DG(v)u]_k}{|t|} \\ &= -\tau_k r_k\, DT[y_k(\theta^{t_k}\omega) + S_k(\theta^{t_k}\omega)\zeta_k(\theta^{t_k}\omega)]\, T[y_k(\theta^{t_k}\omega) + S_k(\theta^{t_k}\omega)\zeta_k(\theta^{t_k}\omega)] \\ &\quad - \tau_k\, DT[y_k(\theta^{t_k}\omega) + S_k(\theta^{t_k}\omega)\zeta_k(\theta^{t_k}\omega)]\, D\phi(t_k, t_{k+1}, \theta^{t_k}\omega)\big(y_k(\theta^{t_k}\omega) + S_k(\theta^{t_k}\omega)\zeta_k(\theta^{t_k}\omega)\big)\, S_k(\theta^{t_k}\omega)\eta_k(\theta^{t_k}\omega) \\ &\quad - r_k\, DT[y_k(\theta^{t_k}\omega) + S_k(\theta^{t_k}\omega)\zeta_k(\theta^{t_k}\omega)]\, D\phi(t_k, t_{k+1}, \theta^{t_k}\omega)\big(y_k(\theta^{t_k}\omega) + S_k(\theta^{t_k}\omega)\zeta_k(\theta^{t_k}\omega)\big)\, S_k(\theta^{t_k}\omega)\xi_k(\theta^{t_k}\omega) \\ &\quad - D^2\phi(t_k, t_{k+1}, \theta^{t_k}\omega)\big(y_k(\theta^{t_k}\omega) + S_k(\theta^{t_k}\omega)\zeta_k(\theta^{t_k}\omega)\big)\, [S_k(\theta^{t_k}\omega)\xi_k(\theta^{t_k}\omega)]\, [S_k(\theta^{t_k}\omega)\eta_k(\theta^{t_k}\omega)]. \end{aligned}$$

By the norm property, i.e., subadditivity, we obtain

$$M = \sup\_{k} \|D^2 G(\upsilon)\| \le M\_0 M\_1 + 2M\_1 \exp\left(M\_1 \Delta t\right) + M\_2 \Delta t \exp\left(2M\_1 \Delta t\right).$$

It follows from the assumption (*iii*) of Theorem 3.2 and

$$\|G(v_0)\| \le \delta, \quad \|\mathcal{K}\|^2 \le 4C^2,$$

that

Then the inverse <sup>½</sup><sup>I</sup> <sup>þ</sup> <sup>T</sup> <sup>−</sup><sup>1</sup>

Step II:

and

Step III:

<sup>ð</sup>*DG*ð*v*0Þ−<sup>T</sup> Þ<sup>−</sup><sup>1</sup> exits and <sup>K</sup> is a right inverse of *DG*ð*v*0Þ. Furthermore,

∥ ≤ *2:*

<sup>ð</sup>*DG*ð*v*0Þ−<sup>T</sup> Þ<sup>−</sup><sup>1</sup>

∥K∥≤2*C:*

*ε* ¼ 2∥K∥∥*G*ð*v*0Þ∥ ≤ 4*Cδ < ε*0*:*

That is, the closed ball of radius *ε* around *v*<sup>0</sup> is contained in the open set *B*. Therefore, we have

*G*ð*v*Þ∥. Then we choose *u* ¼ ð{*rk*}

*<sup>D</sup>ϕ*ð*tk*, *tk*þ<sup>1</sup>, *<sup>θ</sup>tkω*Þð*yk*ð*θtkω*Þ þ *Sk*ð*θtkω*Þ*ζk*ð*θtkω*ÞÞ *Sk*ð*θtkω*Þ*ηk*ð*θtkω*<sup>Þ</sup>

*<sup>D</sup>ϕ*ð*tk*, *tk*þ<sup>1</sup>, *<sup>θ</sup>tkω*Þð*yk*ð*θtkω*Þ þ *Sk*ð*θtkω*Þ*ζk*ð*θtkω*ÞÞ *Sk*ð*θtkω*Þ*ξk*ð*θtkω*<sup>Þ</sup>

*G*ð*v*Þ∥ ≤ *M*0*M*<sup>1</sup> þ 2*M*<sup>1</sup> exp ð*M*1Δ*t*Þ þ *M*2Δ*t* exp ð2*M*1Δ*t*Þ*:*

<sup>þ</sup>*Sk*ð*θtkω*Þ*ζk*ð*θtkω*ÞÞ ½*Sk*ð*θtkω*Þ*ξk*ð*θtkω*Þ ½*Sk*ð*θtkω*Þ*ηk*ð*θtkω*Þ*:*

½*DG*ð*v* þ *tu*Þ*u*−*DG*ð*v*Þ*u*

j*t*j <sup>¼</sup> <sup>−</sup>*τkrkDT*½*yk*ð*θtkω*Þ þ *Sk*ð*θtkω*Þ*ζk*ð*θtkω*Þ *<sup>T</sup>*½*yk*ð*θtkω*Þ þ *Sk*ð*θtkω*Þ*ζk*ð*θtkω*Þ

<sup>∥</sup>*yk*þ<sup>1</sup>ð*θtk*þ<sup>1</sup>*ω*Þ−*ϕ*ð*tk*,*tk*þ<sup>1</sup>, *<sup>θ</sup>tkω*Þ*yk*ð*θtkω*Þ<sup>∥</sup> <sup>≤</sup>*δ:*

*k*

*N*þ1 *<sup>k</sup>*¼<sup>0</sup> , {*ηk*} *N*

*<sup>k</sup>*¼<sup>0</sup><sup>Þ</sup> and calculate the

<sup>∥</sup>½<sup>I</sup> <sup>þ</sup> <sup>T</sup> <sup>−</sup><sup>1</sup>

Therefore, we have verified hypothesis (*i*) of Lemma 3.1.

∥*G*ð*v*0Þ∥ ¼ sup

By the assumption (*ii*) of Theorem 3.2, we obtain that

*k*

second order Gateaux differential of *G*(*v*) for *k* = 0, 1, …, *N* as follows

<sup>−</sup>*τkDT*½*yk*ð*θtkω*Þ þ *Sk*ð*θtkω*Þ*ζk*ð*θtkω*Þ

<sup>−</sup>*rkDT*½*yk*ð*θtkω*Þ þ *Sk*ð*θtkω*Þ*ζk*ð*θtkω*Þ

*<sup>ϕ</sup>*ð*tk*,*tk*þ<sup>1</sup>, *<sup>θ</sup>tkω*Þð*yk*ð*θtkω*<sup>Þ</sup>

By the norm property, i.e., subadditivity, we obtain

It follows from the assumption (*iii*) of Theorem 3.2 and

*<sup>k</sup>* :¼ lim *t*!0

It follows from Eqs. (18)–(20) that we have

230 Dynamical Systems - Analytical and Computational Techniques

Moreover, we have

$$2M\|\mathcal{K}\|^2\|G(v_0)\| \le 1.$$

Then we have verified hypothesis (*iii*) of Lemma 3.1. Therefore, the conclusion follows from Lemma 3.1. This finishes the proof.

**Remark 6.1** *The proof is similar to the paper [8], and we extend it to the random periodic case*.

## **7. Conclusion**

The main result presented here is the random periodic shadowing theorem for the RDS generated by certain SDEs. To conduct the study, we have extended the random shadowing theorem to the random periodic scenario by taking advantage of mean-square analysis and stochastic calculus. We show the existence of random periodic shadowing orbits of the SSLE, and numerical experiments are performed that match the results of the theoretical analysis. Although some progress has been made, more accurate numerical methods for estimating the shadowing distance are needed in practice, which will be presented in our future work.

## **Acknowledgements**

The authors would like to express their gratitude to Prof. Jialin Hong for his helpful discussions. This work is partially supported by NSFC (No. 11021101, 11290142, 91130003, 91530118), the Fundamental Research Funds for the Central Universities, HUST, No. 2016YXMS226, and the Young Teacher's Education and Science Research Project of the Education Department of Fujian Province, No. JAT160182.

## **Author details**

Qingyi Zhan<sup>1,2</sup> and Yuhong Li<sup>3</sup>\*

\*Address all correspondence to: liyuhong@hust.edu.cn

1 College of Computer and Information Science, Fujian Agriculture and Forestry University, Fuzhou, P.R. China

2 Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, P.R. China

3 College of Hydropower and Information Engineering, Huazhong University of Science and Technology, Wuhan, P.R. China

## **References**


[1] Feng C, Zhao H, Zhou B. Pathwise random periodic solutions of stochastic differential equations. Journal of Differential Equations. 2011;**251**:119–149.

[2] Arnold L. Random Dynamical Systems. Springer, Berlin, 2003.

[3] Palmer KJ. Shadowing in Dynamical Systems, Theory and Applications. Kluwer Academic Publishers, Berlin, 2000.

[4] Todorov D. Stochastic shadowing and stochastic stability. arXiv:1411.7604v1.

[5] Hong J, Scherer R, Wang L. Midpoint rule for a linear stochastic oscillator with additive noise. Neural, Parallel and Scientific Computation. 2006;**14**(1):1–12.

[6] Li Y, Zdzislaw B, Zhou J. Conceptual analysis and random attractor for dissipative random dynamical systems. Acta Mathematica Scientia, Series B. 2008;**28**:253–268.

[7] Zhan Q. Mean-square numerical approximations to random periodic solutions of stochastic differential equations. Advances in Difference Equations. 2015;**292**:1–17.

[8] Zhan Q. Shadowing orbits of stochastic differential equations. Journal of Nonlinear Science and Applications. 2016;**9**:2006–2018.

[9] Sussmann HJ. An interpretation of stochastic differential equations as ordinary differential equations which depend on the sample point. Bulletin of American Mathematical Society. 1977;**83**(2):296–298.

[10] Duan J. An Introduction to Stochastic Dynamics. Cambridge University Press, New York, NY, 2015.

[11] Kantorovich LV, Akilov GP. Functional Analysis. 2nd edition. Pergamon Press, Oxford, 1982.

[12] Keller H. Attractors and bifurcations of stochastic Lorenz system. In "Technical Report 389". Institut fur Dynamische Systeme, Universitat Bremen, 1996.

[13] Arnold L, Schmalfuss B. Lyapunov's second method for random dynamical systems. Journal of Differential Equations. 2001;**177**(1):235–265.

[14] Milstein G. Numerical Integration of Stochastic Differential Equations. Kluwer Academic Publishers, Berlin, 1995.

[15] Coomes BA, Koçak H, Palmer KJ. Rigorous computational shadowing of orbits of ordinary differential equations. Numerische Mathematik. 1995;**69**(4):401–421.

[16] Golub GH, Van Loan CF. Matrix Computations. 4th edition. The Johns Hopkins University Press, Baltimore, MD, 2013.

# Solution of Differential Equations with Applications to Engineering Problems

Cheng Yung Ming

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/67539

#### Abstract

Over the last hundred years, many techniques have been developed for the solution of ordinary differential equations and partial differential equations. While a major portion of these techniques is useful mainly for academic purposes, some are important in the solution of real problems arising from science and engineering. In this chapter, only a limited set of techniques for solving ordinary and partial differential equations is discussed, as it is impossible to cover all the available techniques even in book form. Readers are encouraged to pursue further studies on this issue if necessary. After that, readers are introduced to the two major numerical methods commonly used by engineers for the solution of real engineering problems.

Keywords: differential equations, analytical solution, numerical solution

## 1. Introduction

## 1.1. Classification of ordinary and partial equations

To begin with, a differential equation can be classified as an ordinary or partial differential equation, depending on whether only ordinary derivatives or partial derivatives are involved. A differential equation can also be classified as linear or nonlinear. A differential equation is termed linear if it exclusively involves linear terms (that is, terms to the power 1) of y, y′, y″ or higher-order derivatives, and all the coefficients depend on only one variable x, as shown in Eq. (1). In Eq. (1), if f(x) is 0, we term the equation homogeneous. The general solution of a non-homogeneous ordinary differential equation (ODE) or partial differential equation (PDE) equals the sum of the fundamental solution of the corresponding homogeneous equation (i.e. with f(x) = 0) plus the particular solution of the non-homogeneous ODE or PDE. On the other hand, nonlinear differential equations involve nonlinear terms in any of y, y′, y″, or higher-order terms. A nonlinear differential equation is generally more difficult to solve than a linear one. It is common for a nonlinear equation to be approximated as a linear equation


(over an acceptable solution domain) for many practical problems, either in analytical or numerical form. The nonlinear nature of the problem is then approximated as a series of linear differential equations by simple increments or with corrections/deviations from the nonlinear behaviour. This approach is adopted for the solution of many non-linear engineering problems; without such a procedure, most non-linear differential equations cannot be solved. A differential equation can further be classified by its order. In general, higher-order differential equations are difficult to solve, and analytical solutions are not available for many of them. A linear differential equation is generally governed by an equation of the form of Eq. (1).

$$\frac{d^n y}{d\mathbf{x}^n} + a\_1(\mathbf{x}) \frac{d^{n-1} y}{d\mathbf{x}^{n-1}} + \dots + a\_n(\mathbf{x}) y = f(\mathbf{x}) \tag{1}$$

"Non-linear" differential equation can generally be further classified as

1. Truly nonlinear in the sense that F is nonlinear in the derivative terms.

$$F\left(x_1, x_2, \ldots, x_n, u, \frac{\partial u}{\partial x_1}, \frac{\partial u}{\partial x_2}, \ldots, \frac{\partial^2 u}{\partial x_1 \partial x_2}, \ldots\right) = 0 \tag{2}$$

2. Quasi-linear 1st PDE if nonlinearity in F only involves u but not its derivatives

$$A_1(x_1, x_2, u)\frac{\partial u}{\partial x_1} + A_2(x_1, x_2, u)\frac{\partial u}{\partial x_2} = B(x_1, x_2, u) \tag{3}$$

3. Quasi-linear 2nd PDE if nonlinearity in F only involves u and its first derivative but not its second-order derivatives

$$\begin{split} &A\_{11}\left(\mathbf{x}\_{1},\mathbf{x}\_{2},\boldsymbol{u},\frac{\partial\boldsymbol{u}}{\partial\mathbf{x}\_{1}},\frac{\partial\boldsymbol{u}}{\partial\mathbf{x}\_{2}}\right)\frac{\partial^{2}\boldsymbol{u}}{\partial\mathbf{x}\_{1}^{2}} + A\_{12}\left(\mathbf{x}\_{1},\mathbf{x}\_{2},\boldsymbol{u},\frac{\partial\boldsymbol{u}}{\partial\mathbf{x}\_{1}},\frac{\partial\boldsymbol{u}}{\partial\mathbf{x}\_{2}}\right)\frac{\partial^{2}\boldsymbol{u}}{\partial\mathbf{x}\_{1}\partial\mathbf{x}\_{2}} + A\_{22}\left(\mathbf{x}\_{1},\mathbf{x}\_{2},\boldsymbol{u},\frac{\partial\boldsymbol{u}}{\partial\mathbf{x}\_{1}},\frac{\partial\boldsymbol{u}}{\partial\mathbf{x}\_{2}}\right)\frac{\partial^{2}\boldsymbol{u}}{\partial\mathbf{x}\_{2}^{2}} \\ &= F\left(\mathbf{x}\_{1},\mathbf{x}\_{2},\boldsymbol{u},\frac{\partial\boldsymbol{u}}{\partial\mathbf{x}\_{1}},\frac{\partial\boldsymbol{u}}{\partial\mathbf{x}\_{2}}\right) \end{split} \tag{4}$$

Examples of differential equations:

1. $\frac{dy}{dx} = 3x + 2$; first-order ODE (linear)/nonhomogeneous
2. $(y - 2x)\,dy - 3y\,dx = 0$; first-order ODE (nonlinear)/homogeneous
3. $\frac{d^2y}{dt^2} + t\left(\frac{dy}{dt}\right)^3 + y = 0$; second-order ODE (nonlinear)/homogeneous
4. $\frac{d^4x}{dt^4} + 5\frac{d^2x}{dt^2} + 7x = \sin t$; fourth-order ODE (linear)/nonhomogeneous
5. $\frac{\partial z}{\partial x} + \frac{\partial z}{\partial y} = 2z$; first-order PDE (linear)/homogeneous

#### 1.2. Typical differential equations in engineering problems

Many engineering problems are governed by different types of partial differential equations, and some of the more important types are given below.

- Tricomi equation: $y\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0$ ($y > 0$: elliptic; $y < 0$: hyperbolic)
- Laplace equation (or variants): $\frac{\partial^2 \phi}{\partial x^2} + \frac{\partial^2 \phi}{\partial y^2} = \nabla^2\phi = 0$
- Poisson's equation: $\frac{\partial^2 \phi}{\partial x^2} + \frac{\partial^2 \phi}{\partial y^2} = f(x, y)$
- Helmholtz equation: $\frac{\partial^2 \phi}{\partial x^2} + \frac{\partial^2 \phi}{\partial y^2} + c^2\phi = 0$
- Plate bending: $\nabla^2\nabla^2 w = \nabla^4 w = \frac{q}{D}$
- Wave equation (1D–3D): $\frac{\partial^2 u}{\partial t^2} - c^2\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right) = 0$
- Fourier equation: $\frac{\partial T}{\partial t} = \alpha\frac{\partial^2 T}{\partial x^2}$

There are many methods of solution for different types of differential equations, but most of them are not commonly used for practical problems. In this chapter, the most important and basic methods for solving ordinary and partial differential equations are discussed, followed by numerical methods such as the finite difference and finite element methods (FEMs). Other numerical methods, such as the boundary element method, are less commonly adopted by engineers and will not be discussed in this chapter.

#### 1.3. Separable differential equations

For equations which can be expressed in separable form as shown below, the solution can be obtained easily as

$$\frac{dy}{dx} = F(x, y) = f(x)\Phi(y) \;\Rightarrow\; \frac{dy}{\Phi(y)} = f(x)\,dx \;\Rightarrow\; \int \frac{dy}{\Phi(y)} = \int f(x)\,dx + c \tag{5}$$

$$M(x, y)\,dx + N(x, y)\,dy = 0 \;\Rightarrow\; M(x)\,dx = -N(y)\,dy \tag{6}$$

$$\text{then}\int M(\mathbf{x})d\mathbf{x} = -\int N(y)dy + c \tag{7}$$

Example:

$$\begin{aligned} \frac{dy}{dx} &= x^3(y^2+1) \Rightarrow \frac{dy}{y^2+1} = x^3 dx \\\\ \int \frac{dy}{y^2+1} &= \int x^3 dx + c \Rightarrow \tan^{-1}y = \frac{1}{4}x^4 + c \Rightarrow y = \tan\left(\frac{1}{4}x^4 + c\right) \end{aligned}$$
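This closed-form solution can be checked numerically. The short Python sketch below (the helper names are ours, purely for illustration) compares a central-difference derivative of $y = \tan(x^4/4 + c)$ with the right-hand side $x^3(y^2 + 1)$:

```python
import math

C_CONST = 0.5   # an arbitrary choice of the integration constant c

def y(x):
    # closed-form solution from the worked example: y = tan(x^4/4 + c)
    return math.tan(x**4 / 4 + C_CONST)

def dydx_numeric(x, h=1e-6):
    # central finite-difference approximation of dy/dx
    return (y(x + h) - y(x - h)) / (2 * h)

def dydx_ode(x):
    # right-hand side of the ODE: x^3 (y^2 + 1)
    return x**3 * (y(x)**2 + 1)

for x in (0.2, 0.5, 0.8):
    assert abs(dydx_numeric(x) - dydx_ode(x)) < 1e-4
```

The same pattern works for any explicit solution candidate: differentiate it numerically and compare against the stated right-hand side.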

Example:

$$\frac{dy}{dx} = \frac{3x^2 + 4x + 2}{2(y - 1)} \text{ subject to } y(0) = -1$$

Since this is a separable function, the problem can be solved as

$$2(y-1)dy = (3x^2 + 4x + 2)dx$$

$$y^2 - 2y = x^3 + 2x^2 + 2x + c$$

Based on the boundary condition, $c = 3$; hence $y^2 - 2y = x^3 + 2x^2 + 2x + 3$.

This equation, which is quadratic in $y$, can be solved by the quadratic formula to give two solutions:

$$y = 1 - \sqrt{\mathbf{x}^3 + 2\mathbf{x}^2 + 2\mathbf{x} + 4} \text{ and } y = 1 + \sqrt{\mathbf{x}^3 + 2\mathbf{x}^2 + 2\mathbf{x} + 4}.$$

Since the second solution does not satisfy the boundary condition, it will not be accepted; hence, the solution to this differential equation is obtained.
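The accepted solution can also be cross-checked by direct numerical integration. The sketch below (a classical fourth-order Runge-Kutta integrator, written here only for illustration) advances the initial value problem from $y(0) = -1$ and compares against the closed form $y = 1 - \sqrt{x^3 + 2x^2 + 2x + 4}$:

```python
import math

def f(x, y):
    # right-hand side of the IVP: dy/dx = (3x^2 + 4x + 2) / (2(y - 1))
    return (3 * x**2 + 4 * x + 2) / (2 * (y - 1))

def rk4(f, x0, y0, x_end, n=1000):
    # classical fourth-order Runge-Kutta integration with n fixed steps
    h = (x_end - x0) / n
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

# closed form at x = 1: y = 1 - sqrt(1 + 2 + 2 + 4) = -2
exact = 1 - math.sqrt(1 + 2 + 2 + 4)
assert abs(rk4(f, 0.0, -1.0, 1.0) - exact) < 1e-6
```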

#### 1.4. Variation of parameters

For the following equation form, it is possible to solve it by variations of parameters.

$$\text{For } \frac{dy}{d\mathbf{x}} = p(\mathbf{x})y + Q(\mathbf{x}) \tag{8}$$

Put $y = c(x)e^{\int p(x)dx}$. By differentiating, it gives

$$\frac{dy}{dx} = \frac{dc(x)}{dx}e^{\int p(x)dx} + \underbrace{c(x)p(x)e^{\int p(x)dx}}_{p(x)y}.$$

Substituting into the original ODE gives $\frac{dc(x)}{dx} = Q(x)e^{-\int p(x)dx}$. Comparing the terms, it gives

$$c(x) = \int Q(x) e^{-\int p(x)dx} dx + \overline{c}. \tag{9}$$

Example:

$$(x + 1)\frac{dy}{dx} - ny = e^x(x+1)^{n+1}$$

This equation is now expressed as


$$\frac{dy}{dx} = p(\mathbf{x})y + Q(\mathbf{x})$$

$$\frac{dy}{dx} = \frac{n}{x+1}y + \underbrace{e^{\mathbf{x}}(x+1)^n}\_{Q(\mathbf{x})}$$

For $x \neq -1$,


Solving the homogeneous part of the ODE

$$\frac{dy}{dx} = \frac{n}{x+1}y, \text{ then } \frac{dy}{y} = \frac{n}{x+1}dx$$
$$\ln|y| = n\ln|\mathbf{x}+1| + c\_1$$

$$y = c(\mathbf{x}+1)^n$$

Look for a solution $y = c(x)(x+1)^n$, where $c(x)$ is the variation of parameters. Substitute it into the ODE:

$$\frac{dc(x)}{dx}(x+1)^n + nc(x)(x+1)^{n-1} = nc(x)(x+1)^{n-1} + e^x(x+1)^n$$

$$\frac{dy}{dx} = \frac{n}{x+1}y + e^x(x+1)^n$$

Comparison gives $\frac{dc(x)}{dx} = e^x$.

Integration of this equation gives $c(x) = e^x + C$.

The general solution is hence given by $y = (x+1)^n(e^x + C)$.
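As a quick numerical check of this general solution (the sample values of $n$ and $C$ below are assumed purely for illustration, and the helper names are ours):

```python
import math

n, C = 3, 2.0   # illustrative parameter values; any choice should work

def y(x):
    # general solution y = (x + 1)^n (e^x + C)
    return (x + 1)**n * (math.exp(x) + C)

def dydx(x, h=1e-6):
    # central finite-difference approximation of y'
    return (y(x + h) - y(x - h)) / (2 * h)

# verify (x + 1) y' - n y = e^x (x + 1)^(n+1) at a few sample points
for x in (0.0, 0.5, 1.5):
    lhs = (x + 1) * dydx(x) - n * y(x)
    rhs = math.exp(x) * (x + 1)**(n + 1)
    assert abs(lhs - rhs) < 1e-3
```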

The Bernoulli equation is an important equation type which can be solved in a similar way by variation of parameters. Consider the following form of equation

$$\frac{dy}{dx} = p(\mathbf{x})y + Q(\mathbf{x})y^n \tag{10}$$

$$\textbf{Step 1}: \text{ Put } z = y^{1-n} \tag{11}$$

$$\begin{aligned} \textbf{Step 2}: \text{ Then } \frac{dz}{dx} &= (1 - n)y^{-n}\frac{dy}{dx} \\ \frac{dz}{dx} &= (1 - n)P(x)z + (1 - n)Q(x) \end{aligned} \tag{12}$$

The non-linear ODE now becomes a linear ODE, which can be solved by the formula above.

**Step 3**: Invert $z$ to get $y$.

Example: the following equation is a Bernoulli equation with $n = -1$, so $z = y^{1-n} = y^2$:

$$\frac{dy}{dx} = \frac{y}{2x} + \frac{x^2}{2y} \tag{13}$$

$$\frac{dz}{d\mathbf{x}} = \frac{1}{\mathbf{x}}z + \mathbf{x}^2\tag{14}$$

$$z = e^{\int \frac{1}{x}dx}\left(\int x^2 e^{-\int \frac{1}{x}dx}dx + c\right) = cx + \frac{1}{2}x^3 \tag{15}$$

Back substitution of $z = y^2$ gives

$$y^2 = cx + \frac{1}{2}x^3\tag{16}$$
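As a sanity check, the implicit solution $y^2 = cx + \frac{1}{2}x^3$ can be verified against the original Bernoulli equation numerically (the positive branch and the sample constant below are illustrative choices, and the helper names are ours):

```python
import math

c = 1.0   # illustrative integration constant

def y(x):
    # positive branch of y^2 = c x + x^3 / 2 (valid for x > 0 here)
    return math.sqrt(c * x + x**3 / 2)

def dydx(x, h=1e-6):
    # central finite-difference approximation of dy/dx
    return (y(x + h) - y(x - h)) / (2 * h)

# verify dy/dx = y/(2x) + x^2/(2y) at a few sample points
for x in (0.5, 1.0, 2.0):
    rhs = y(x) / (2 * x) + x**2 / (2 * y(x))
    assert abs(dydx(x) - rhs) < 1e-5
```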

#### 1.5. Homogeneous equations

For equations of the following type, where all the coefficients are constants, the solution can be evaluated according to different conditions.

$$\frac{dy}{dx} = \frac{a\_1 \mathbf{x} + b\_1 \mathbf{y} + c\_1}{a\_2 \mathbf{x} + b\_2 \mathbf{y} + c\_2} \tag{17}$$

**Case 1**: $c_1 = c_2 = 0$

$$\frac{dy}{dx} = \frac{a\_1 \mathbf{x} + b\_1 \mathbf{y}}{a\_2 \mathbf{x} + b\_2 \mathbf{y}} = \frac{a\_1 + b\_1 \frac{y}{x}}{a\_2 + b\_2 \frac{y}{x}} = g\left(\frac{y}{x}\right) \tag{18}$$

**Step 1**: Set $u = \frac{y}{x}$; then $\frac{dy}{dx} = x\frac{du}{dx} + u$.

**Step 2**: $\frac{du}{dx} = \frac{g(u) - u}{x}$. The resulting non-linear ODE is hence separable and can be solved implicitly.

**Step 3**: Invert $u$ to get $y$.

**Case 2**: $\begin{vmatrix} a_1 & a_2 \\ b_1 & b_2 \end{vmatrix} = 0$, i.e., $a_1b_2 - a_2b_1 = 0$; then $\frac{a_1}{a_2} = \frac{b_1}{b_2} = k$ and

$$\frac{dy}{dx} = \frac{a\_1x + b\_1y + c\_1}{a\_2x + b\_2y + c\_2} = \frac{k(a\_2x + b\_2y) + c\_1}{a\_2x + b\_2y + c\_2} = f(a\_2x + b\_2y) \tag{19}$$

By change of variables $u = a_2x + b_2y$,

$$\frac{du}{dx} = a_2 + b_2\frac{dy}{dx} = a_2 + b_2 f(u), \text{ then}$$
$$\frac{du}{a\_2 + b\_2 f(u)} = d\mathbf{x} \tag{20}$$

The resulting non-linear ODE is now separable and can be solved.

$$\textbf{Case 3:} \begin{vmatrix} a\_1 & a\_2 \\ b\_1 & b\_2 \end{vmatrix} \neq 0 \ c\_1 \neq 0 \text{ and } c\_2 \neq 0$$

Set
$$\begin{cases} a_1x + b_1y + c_1 = 0 \\ a_2x + b_2y + c_2 = 0 \end{cases}$$
and let $(\alpha, \beta)$ be the intersecting point of these two lines on the $xy$-plane, with
$$(\alpha, \beta) \neq (0, 0) \tag{21}$$

Apply change of variables


$$\begin{cases} X = x - \alpha \\ Y = y - \beta \end{cases} \quad \text{i.e.} \quad \begin{cases} x = X + \alpha \\ y = Y + \beta \end{cases} \tag{22}$$

$$\begin{aligned} a_1x + b_1y + c_1 &= a_1(X + \alpha) + b_1(Y + \beta) + c_1 = a_1X + b_1Y + (a_1\alpha + b_1\beta + c_1) \\ a_2x + b_2y + c_2 &= a_2(X + \alpha) + b_2(Y + \beta) + c_2 = a_2X + b_2Y + (a_2\alpha + b_2\beta + c_2) \end{aligned} \tag{23}$$

The original ODE will now become $\frac{dY}{dX} = \frac{a_1X + b_1Y}{a_2X + b_2Y}$, which is homogeneous and separable!

**Example**: $\frac{dy}{dx} = \frac{x+y-1}{x-y+3}$

Solve for
$$\begin{cases} x + y - 1 = 0 \\ x - y + 3 = 0 \end{cases}$$
We have $\alpha = -1$, $\beta = 2$.

Change of variables: $X = x + 1$, $Y = y - 2$.

Then, $\frac{dY}{dX} = \frac{dy}{dx} = \frac{x+y-1}{x-y+3} = \frac{X+Y}{X-Y} = \frac{1+\frac{Y}{X}}{1-\frac{Y}{X}}$.

Use a change of variable $u = \frac{Y}{X}$:

$$X\frac{du}{dX} = \frac{1+u^2}{1-u} \;\Rightarrow\; \frac{(1-u)\,du}{1+u^2} = \frac{dX}{X}$$

$$\begin{aligned} \Rightarrow \tan^{-1}u - \frac{1}{2}\ln(1+u^2) &= \ln|\mathbf{X}| + c \\\\ \Rightarrow \tan^{-1}u &= \ln[\sqrt{1+u^2}\mathbf{X}] + c = \ln[\sqrt{\left(\mathbf{X}^2 + \mathbf{Y}^2\right)}] + c \\\\ \Rightarrow \tan^{-1}\left(\frac{y-2}{x+1}\right) &= \ln\sqrt{\left(\mathbf{x}+1\right)^2 + \left(y-2\right)^2} + c \end{aligned}$$
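Since the solution is implicit, one convenient numerical check is that $F(x, y) = \tan^{-1}\frac{y-2}{x+1} - \ln\sqrt{(x+1)^2 + (y-2)^2}$ is a first integral: its derivative along the direction field $(1, f(x, y))$ should vanish. The sketch below (helper names are ours, for illustration) verifies this at a few sample points:

```python
import math

def F(x, y):
    # implicit solution: arctan((y-2)/(x+1)) - ln sqrt((x+1)^2 + (y-2)^2)
    return math.atan2(y - 2, x + 1) - 0.5 * math.log((x + 1)**2 + (y - 2)**2)

def f(x, y):
    # right-hand side of the ODE dy/dx = (x + y - 1) / (x - y + 3)
    return (x + y - 1) / (x - y + 3)

def directional_derivative(x, y, h=1e-6):
    # dF along the direction (1, f(x, y)); zero means F is constant on solutions
    Fx = (F(x + h, y) - F(x - h, y)) / (2 * h)
    Fy = (F(x, y + h) - F(x, y - h)) / (2 * h)
    return Fx + Fy * f(x, y)

for (x, y) in [(0.0, 0.0), (1.0, 0.5), (2.0, -1.0)]:
    assert abs(directional_derivative(x, y)) < 1e-6
```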

There are various tricks to solve the differential equations, like integration factors and other techniques. A very good coverage has been given by Polyanin and Nazaikinskii [29] and will not be repeated here. The purpose of this section is just for illustration that various tricks have been developed for the solution of simple differential equations in homogeneous medium, that is, the coefficients are constants inside a continuous solution domain. The readers are also suggested to read the works of Greenberg [14], Soare et al. [34], Nagle et al. [28], Polyanin et al. [30], Bronson and Costa [4], Holzner [18], and many other published books. There are many elegant tricks that have been developed for the solution of different forms of differential equations, but only very few techniques are actually used for the solution of real life problems.

#### 1.6. Partial differential equations

In many engineering or science problems, such as heat transfer, elasticity, quantum mechanics, water flow and others, the problems are governed by partial differential equations. By nature, this type of problem is much more complicated than the previous ordinary differential equations. There are several major methods for the solution of PDEs, including separation of variables, the method of characteristics, integral transforms, the superposition principle, change of variables, the Lie group method, semianalytical methods, as well as various numerical methods. Although the existence and uniqueness of solutions for ordinary differential equations is well established by the Picard-Lindelöf theorem, that is not the case for many partial differential equations. In fact, it is well known that analytical solutions are not available for many partial differential equations, particularly when the solution domain is nonregular or nonhomogeneous, or when the material properties change with the solution steps.

#### 1.6.1. Classification of second-order PDE

Refer to the following general second-order partial differential equation:

$$A\frac{\partial^2 u}{\partial x^2} + B\frac{\partial^2 u}{\partial x \partial y} + C\frac{\partial^2 u}{\partial y^2} + D\frac{\partial u}{\partial x} + E\frac{\partial u}{\partial y} + Fu + G = 0 \tag{24}$$

To begin with, let us consider a review of conic curves (ellipse, parabola and hyperbola)

$$Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0\tag{25}$$

The conic curve can be classified with the following criterion.

$$B^2 - 4AC = \begin{cases} > 0 \text{ hyperbola} \\ = 0 \text{ parabola} \\ < 0 \text{ ellipse} \end{cases} \tag{26}$$

Following the conic curves, the general partial differential is also classified according to similar criterion as

$$\text{Classification: } B^2 - 4AC \begin{cases} > 0: \text{hyperbolic} \\ = 0: \text{parabolic} \\ < 0: \text{elliptic} \end{cases} \tag{27}$$

This classification was proposed by Du Bois-Reymond [41] in 1889. In this section, only some of the more common techniques are discussed, and the readers are suggested to read the works of Hillen et al. [16], Salsa [33], Polyanin and Zaitsev [31], Bertanz [2], Haberman [15] and many other published texts.
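The discriminant test in Eq. (27) is trivially mechanized. The following sketch (the function name is ours, for illustration) classifies the constant-coefficient model equations listed earlier:

```python
def classify(A, B, C):
    # classify A u_xx + B u_xy + C u_yy + ... = 0 via the discriminant B^2 - 4AC
    d = B**2 - 4 * A * C
    if d > 0:
        return "hyperbolic"
    if d == 0:
        return "parabolic"
    return "elliptic"

assert classify(1, 0, 1) == "elliptic"     # Laplace equation: u_xx + u_yy = 0
assert classify(1, 0, 0) == "parabolic"    # heat equation: u_xx - u_t = 0 (no u_tt term)
assert classify(1, 0, -1) == "hyperbolic"  # wave equation: u_xx - u_tt = 0
```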

#### 1.7. Parabolic type: heat conduction/soil consolidation/diffusion equation

The following equation form is commonly found in many engineering applications.

Solution of Differential Equations with Applications to Engineering Problems http://dx.doi.org/10.5772/67539 241

$$
\alpha^2 \frac{\partial^2 u}{\partial x^2} = \frac{\partial u}{\partial t}, 0 < x < L, t > 0 \tag{28}
$$

Initial condition: $u(x, 0) = f(x), 0 \le x \le L$

Boundary condition: $u(0, t) = 0, u(L, t) = 0, t > 0$

$\alpha^2$ is a constant known as the thermal diffusivity or coefficient of consolidation. For the soil consolidation problem, the governing conditions are given by

The initial excess pore pressure and drained boundary conditions are

$$\begin{aligned} u\_e(z,0) &= u\_i(z), 0 \le z \le 2d \text{ (initial excess pore pressure)}\\ u\_e(0,t) &= 0, u\_e(2d,t) = 0, t > 0 \text{ (drained boundary)} \end{aligned} \tag{29}$$

240 Dynamical Systems - Analytical and Computational Techniques


$$\begin{aligned} \alpha^2 u\_{xx} &= u\_t, 0 < x < L, t > 0 \\ u(0, t) &= 0, u(L, t) = 0, t > 0 \\ u(x, 0) &= f(x), 0 \le x \le L \end{aligned} \tag{30}$$

Assuming that the variable u(x, t) can be separated, the method of separation of variables gives

$$
u(x, t) = X(x)T(t) \tag{31}
$$

$$\begin{aligned} \alpha^2 X''T &= XT'\\ \frac{X''}{X} &= \frac{1}{\alpha^2} \frac{T'}{T} = -\lambda \rightarrow \begin{cases} X'' + \lambda X = 0 \\ T' + \alpha^2 \lambda T = 0 \end{cases} \end{aligned} \tag{32}$$

The PDE now becomes two ODEs, which can be solved readily. Based on the boundary conditions $u(0, t) = 0, u(L, t) = 0, t > 0$,

$$u(0,t) = X(0)T(t) = 0, u(L,t) = X(L)T(t) = 0$$

$$X(0) = 0, X(L) = 0$$

$$X'' + \lambda X = 0, X(0) = 0, X(L) = 0 \tag{33}$$

This is an eigenvalue problem which has solution only for certain λ. The eigenvalues are given by

$$
\lambda\_n = \frac{n^2 \pi^2}{L^2}, n = 1, 2, 3, \dots \tag{34}
$$

Hence the eigenfunctions are expressed as

$$X\_n(\mathbf{x}) = \sin\left(\frac{n\pi\mathbf{x}}{L}\right), n = 1, 2, 3... \tag{35}$$

For the time-dependent function T,

$$T' + \alpha^2 \lambda T = 0\tag{36}$$

$$\begin{split} \frac{dT}{T} &= -\alpha^2 \lambda dt \\ \ln|T| &= \frac{-\alpha^2 n^2 \pi^2 t}{L^2} + \mathcal{C} \end{split} \tag{37}$$

hence $T\_n = k\_n e^{-(n\pi\alpha/L)^2 t}$, with $k\_n$ constant. The fundamental solutions are then expressed as

$$u\_n(x,t) = e^{-(n\pi\alpha/L)^2 t} \sin\left(\frac{n\pi x}{L}\right), n = 1, 2, 3, \dots \tag{38}$$

The Fourier series expansion in x is given by

$$u(x,0) = f(x), 0 \le x \le L \tag{39}$$

$$u(x,t) = \sum\_{n=1}^{\infty} c\_n u\_n(x,t) = \sum\_{n=1}^{\infty} c\_n e^{-(n\pi\alpha/L)^2 t} \sin\left(\frac{n\pi x}{L}\right) \tag{40}$$

Initial condition is given as

$$u(\mathbf{x},0) = f(\mathbf{x}) = \sum\_{n=1}^{\infty} c\_n \sin\left(\frac{n\pi x}{L}\right) \tag{41}$$

$$\int\_0^L f(\mathbf{x}) \sin\left(\frac{m\pi \mathbf{x}}{L}\right) d\mathbf{x} = \sum\_{n=1}^{\infty} c\_n \int\_0^L \sin\left(\frac{m\pi \mathbf{x}}{L}\right) \sin\left(\frac{n\pi \mathbf{x}}{L}\right) d\mathbf{x}$$

$$\int\_0^L f(x) \sin\left(\frac{n\pi x}{L}\right) dx = c\_n \int\_0^L \sin^2\left(\frac{n\pi x}{L}\right) dx = c\_n \frac{L}{2}$$
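
The orthogonality relation used in this step can be checked numerically. The sketch below is a minimal NumPy illustration (the helper names and the choice $L = 2$ are our assumptions); the trapezoidal rule is written out by hand so the snippet does not depend on a particular NumPy version:

```python
import numpy as np

L = 2.0                                  # interval length (illustrative)
x = np.linspace(0.0, L, 4001)

def trapezoid(y, xs):
    # version-safe trapezoidal rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(xs)) / 2.0)

def inner(m, n):
    # approximates the integral of sin(m*pi*x/L) sin(n*pi*x/L) over [0, L]
    return trapezoid(np.sin(m*np.pi*x/L) * np.sin(n*np.pi*x/L), x)

print(round(inner(2, 3), 8))   # distinct modes: ~0
print(round(inner(2, 2), 8))   # equal modes: ~L/2 = 1.0
```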

The solution of the heat conduction/soil consolidation equation is hence given by

$$u(\mathbf{x},t) = \sum\_{n=1}^{\infty} c\_n e^{-\left(n\pi\alpha/L\right)^2 t} \sin\left(\frac{n\pi\mathbf{x}}{L}\right) \tag{42}$$

$$c\_n = \frac{2}{L} \int\_0^L f(x) \sin\left(\frac{n\pi x}{L}\right) dx \quad \text{(Euler-Fourier formulas)}\tag{43}$$
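
As a numerical check on Eqs. (42) and (43), the partial sum of the series can be evaluated directly. In the sketch below, the values of $L$, $\alpha$ and the initial profile $f$ are illustrative assumptions, not values from the text; with $f(x) = \sin(\pi x/L)$ only $c\_1 = 1$ survives, so the partial sum should reproduce the exact solution $e^{-(\pi\alpha/L)^2 t}\sin(\pi x/L)$:

```python
import numpy as np

L, alpha = 1.0, 0.5          # illustrative length and diffusivity

def trapezoid(y, xs):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(xs)) / 2.0)

def heat_series(x, t, f, n_terms=50):
    """Partial sum of Eq. (42), coefficients computed by quadrature, Eq. (43)."""
    xs = np.linspace(0.0, L, 4001)
    u = 0.0
    for n in range(1, n_terms + 1):
        cn = (2.0/L) * trapezoid(f(xs) * np.sin(n*np.pi*xs/L), xs)
        u += cn * np.exp(-(n*np.pi*alpha/L)**2 * t) * np.sin(n*np.pi*x/L)
    return u

# f(x) = sin(pi x/L): only c_1 = 1 survives, so the series equals the
# exact solution exp(-(pi alpha/L)^2 t) sin(pi x/L).
f = lambda x: np.sin(np.pi * x / L)
exact = np.exp(-(np.pi*alpha/L)**2 * 0.1) * np.sin(np.pi*0.5/L)
print(abs(heat_series(0.5, 0.1, f) - exact))   # close to machine precision
```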

#### 1.8. One-dimensional wave equation

The one-dimensional (1D) wave equation appears in many physical and engineering problems. For example, a vibrating string or the pile driving process is governed by this type of differential equation. This problem is also commonly solved by the method of separation of variables:

$$\begin{aligned} a^2 u\_{xx} &= u\_{tt}, 0 < x < L, t > 0 \\ u(0, t) &= 0, u(L, t) = 0, t \ge 0 \\ u(x, 0) &= f(x), u\_t(x, 0) = 0, 0 \le x \le L \end{aligned} \tag{44}$$

Consider u(x, t) given by X(x)T(t). The wave equation then gives

$$\frac{X''}{X} = \frac{1}{a^2} \frac{T''}{T} = -\lambda \to \begin{cases} X'' + \lambda X = 0 \\ T'' + a^2 \lambda T = 0 \end{cases} \tag{45}$$

The partial differential equation is thus reduced to two equivalent ODEs.

$$\begin{aligned} u(0, t) &= X(0)T(t) = 0, u(L, t) = X(L)T(t) = 0 \to X(0) = X(L) = 0 \\ u\_t(x, 0) &= X(x)T'(0) = 0, 0 \le x \le L \to T'(0) = 0 \end{aligned} \tag{46}$$

$$X'' + \lambda X = 0, X(0) = X(L) = 0 \tag{47}$$

$$X\_n(\mathbf{x}) = \sin\left(\frac{n\pi\mathbf{x}}{L}\right), n = 1, 2, 3, \dots \tag{48}$$

$$
\lambda\_n = \frac{n^2 \pi^2}{L^2}, n = 1, 2, 3, \dots \tag{49}
$$

For the time-dependent function T,


$$
T'' + a^2 \lambda T = 0 \tag{50}
$$

$$\begin{aligned} \sqrt{\lambda\_n} &= n\pi/L \\ \text{Then } T(t) &= k\_1 \cos(n\pi at/L) + k\_2 \sin(n\pi at/L) \end{aligned} \tag{51}$$

Since $T'(0) = 0$, $k\_2 = 0$. Therefore, $T(t) = k\_1\cos(n\pi at/L)$, and the fundamental solution is given by

$$
u\_n(x, t) = \sin\left(\frac{n\pi x}{L}\right)\cos\left(\frac{n\pi at}{L}\right), n = 1, 2, 3, \dots \tag{52}
$$

The general solution is then given by

$$u(x,t) = \sum\_{n=1}^{\infty} c\_n u\_n(x,t) = \sum\_{n=1}^{\infty} c\_n \sin\left(\frac{n\pi x}{L}\right) \cos\left(\frac{n\pi at}{L}\right) \tag{53}$$

Applying the initial condition

$$\begin{aligned} u(\mathbf{x},0) &= f(\mathbf{x}), 0 \le \mathbf{x} \le L \\ u(\mathbf{x},0) &= f(\mathbf{x}) = \sum\_{n=1}^{\infty} c\_n \sin\left(\frac{n\pi\mathbf{x}}{L}\right) \to c\_n = \frac{2}{L} \int\_0^L f(\mathbf{x}) \sin\left(\frac{n\pi\mathbf{x}}{L}\right) d\mathbf{x} \end{aligned} \tag{54}$$

The final solution is then given by

$$u(x,t) = \sum\_{n=1}^{\infty} c\_n \sin\left(\frac{n\pi x}{L}\right) \cos\left(\frac{n\pi at}{L}\right) \tag{55}$$

$$c\_n = \frac{2}{L} \int\_{0}^{L} f(x) \sin\left(\frac{n\pi x}{L}\right) dx \tag{56}$$
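
The same kind of numerical check works for Eqs. (55) and (56). In the sketch below, $L$, $a$ and $f$ are illustrative assumptions, not values from the text; with $f(x) = \sin(\pi x/L)$ the series collapses to the single standing wave $\sin(\pi x/L)\cos(\pi a t/L)$:

```python
import numpy as np

L, a = 1.0, 2.0              # illustrative length and wave speed

def trapezoid(y, xs):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(xs)) / 2.0)

def wave_series(x, t, f, n_terms=50):
    """Partial sum of Eq. (55) with coefficients from Eq. (56)."""
    xs = np.linspace(0.0, L, 4001)
    u = 0.0
    for n in range(1, n_terms + 1):
        cn = (2.0/L) * trapezoid(f(xs) * np.sin(n*np.pi*xs/L), xs)
        u += cn * np.sin(n*np.pi*x/L) * np.cos(n*np.pi*a*t/L)
    return u

# f(x) = sin(pi x/L) leaves only the n = 1 term, giving the standing wave
# u(x, t) = sin(pi x/L) cos(pi a t/L); at x = 0.5, t = 0.1 this is cos(0.2 pi).
f = lambda x: np.sin(np.pi * x / L)
print(round(wave_series(0.5, 0.1, f), 4))   # 0.809
```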

#### 1.9. Laplace equation

Laplace equation forms an important governing condition for many types of problems. Some of the more common forms are given by

three-dimensional Laplace equation: $u\_{xx} + u\_{yy} + u\_{zz} = 0$

two-dimensional heat conduction: $\alpha^2(u\_{xx} + u\_{yy}) = u\_t$

two-dimensional seepage problem: $k\_x u\_{xx} + k\_y u\_{yy} = 0$

There are two major types of boundary conditions to this problem:

Dirichlet problem: the value of u is prescribed on the boundary.

Neumann problem: the normal derivative $u\_x$ or $u\_y$ is usually prescribed on the boundary for many mathematical problems. This case can be solved by the use of complex analysis or series methods, for which many analytical solutions are available in the literature. In many anisotropic seepage problems, however, the derived quantity normal to an arbitrary direction (seepage flow normal to an impermeable surface) is 0 instead of $u\_x$ or $u\_y$ being zero. For such cases, it is very difficult to obtain the analytical solution if the solution domain is nonhomogeneous, and the use of a numerical method such as the finite element method appears to be indispensable.

Consider the given Laplace equation, using separation of variables for the analysis.

$$\begin{aligned} u\_{xx} + u\_{yy} &= 0, 0 < \mathbf{x} < a, 0 < y < b \\ u(\mathbf{x}, 0) &= 0, u(\mathbf{x}, b) = 0, 0 < \mathbf{x} < a \\ u(0, y) &= 0, u(a, y) = f(y), 0 < y \le b \end{aligned} \tag{57}$$

Using separation of variables, $u(x, y) = X(x)Y(y)$,

$$\begin{aligned} X''Y + XY'' &= 0 \\ \frac{X''}{X} = -\frac{Y''}{Y} = \lambda &\rightarrow \begin{cases} X'' - \lambda X = 0 \\ Y'' + \lambda Y = 0 \end{cases} \end{aligned} \tag{58}$$

$$
u\_{xx} + u\_{yy} = 0, 0 < x < a, 0 < y < b \tag{59}
$$

$$u(x,0) = 0, u(x,b) = 0, 0 < x < a \tag{60}$$

$$u(0, y) = 0, u(a, y) = f(y), 0 < y \le b$$

$$\begin{aligned} u(0, y) &= X(0)Y(y) = 0, 0 < y < b \to X(0) = 0, \\ u(\mathbf{x}, 0) &= X(\mathbf{x})Y(0) = 0, 0 < \mathbf{x} < a \to Y(0) = 0, \\ u(\mathbf{x}, b) &= X(\mathbf{x})Y(b) = 0, 0 < \mathbf{x} < a \to Y(b) = 0, \end{aligned} \tag{61}$$

$$\begin{aligned} X'' - \lambda X &= 0, X(0) = 0\\ Y'' + \lambda Y &= 0, Y(0) = 0, Y(b) = 0 \end{aligned} \tag{62}$$

$$\lambda\_n = \frac{n^2 \pi^2}{b^2}, Y\_n(y) = \sin\left(\frac{n \pi y}{b}\right), n = 1, 2, 3, \dots \tag{63}$$

$X'' - \lambda X = 0$, hence $X(x) = k\_1\cosh(n\pi x/b) + k\_2\sinh(n\pi x/b)$. Since $X(0) = 0$, $k\_1 = 0$, so that


$$X(\mathbf{x}) = k\_2 \sinh\left(\frac{n\pi\mathbf{x}}{b}\right) \tag{64}$$

$$
u\_n(x, y) = \sinh\left(\frac{n\pi x}{b}\right)\sin\left(\frac{n\pi y}{b}\right), n = 1, 2, 3, \dots \tag{65}
$$

$$\begin{aligned} u(a,y) &= f(y), 0 \le y \le b\\ u(x,y) &= \sum\_{n=1}^{\infty} c\_n u\_n(x,y) = \sum\_{n=1}^{\infty} c\_n \sinh\left(\frac{n\pi x}{b}\right) \sin\left(\frac{n\pi y}{b}\right) \end{aligned} \tag{66}$$

Based on the Fourier expansion as given by

$$\int\_{0}^{b} f(y) \sin\left(\frac{m\pi y}{b}\right) dy = \sum\_{n=1}^{\infty} c\_n \sinh\left(\frac{n\pi a}{b}\right) \int\_{0}^{b} \sin\left(\frac{m\pi y}{b}\right) \sin\left(\frac{n\pi y}{b}\right) dy$$

$$\int\_{0}^{b} f(y) \sin\left(\frac{m\pi y}{b}\right) dy = \sinh\left(\frac{m\pi a}{b}\right) c\_m \int\_{0}^{b} \sin^2\left(\frac{m\pi y}{b}\right) dy = \sinh\left(\frac{m\pi a}{b}\right) c\_m \frac{b}{2} \tag{67}$$

$$u(a, y) = f(y) = \sum\_{n=1}^{\infty} c\_n \sinh\left(\frac{n\pi a}{b}\right) \sin\left(\frac{n\pi y}{b}\right)$$

$$c\_n \sinh\left(\frac{n\pi a}{b}\right) = \frac{2}{b} \int\_{0}^{b} f(y) \sin\left(\frac{n\pi y}{b}\right) dy$$

$$c\_n = \frac{2}{b} \sinh\left(\frac{n\pi a}{b}\right)^{-1} \int\_{0}^{b} f(y) \sin\left(\frac{n\pi y}{b}\right) dy$$
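
These coefficient formulas can be verified numerically as well. The sketch below (rectangle dimensions and boundary data are illustrative assumptions) sums the series and checks that the boundary condition $u(a, y) = f(y)$ is reproduced:

```python
import numpy as np

a_len, b_len = 2.0, 1.0      # illustrative rectangle dimensions

def trapezoid(y, xs):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(xs)) / 2.0)

def laplace_series(x, y, f, n_terms=30):
    """Series u(x, y) = sum c_n sinh(n pi x/b) sin(n pi y/b), with c_n
    computed by quadrature from the boundary data f(y) at x = a."""
    ys = np.linspace(0.0, b_len, 4001)
    u = 0.0
    for n in range(1, n_terms + 1):
        cn = (2.0 / b_len) * trapezoid(f(ys) * np.sin(n*np.pi*ys/b_len), ys) \
             / np.sinh(n*np.pi*a_len/b_len)
        u += cn * np.sinh(n*np.pi*x/b_len) * np.sin(n*np.pi*y/b_len)
    return u

# With f(y) = sin(pi y/b) only n = 1 survives, so u(a, y) reproduces f(y).
f = lambda y: np.sin(np.pi * y / b_len)
print(round(laplace_series(a_len, 0.5 * b_len, f), 4))   # 1.0
```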

#### 1.10. Introduction to numerical methods

In general, analytical solutions are not available for most practical differential equations, as a regular solution domain and homogeneous conditions may not be present for practical problems. Moreover, the solution domain may be indeterminate (free-surface seepage flow), the displacement may be so large that the solution domain deforms under motion, or, in an extreme case, part of the material may tear off from the main body with continuous formation and removal of contacts. Many engineering problems fall into this category by nature, and the use of numerical methods will be necessary. Currently, there are several major numerical methods commonly used by engineers: the finite difference method, finite element method, boundary element method and distinct element method. There are also other less common numerical methods available for practical problems, and many researchers also try to combine two or even more fundamental numerical methods so as to achieve greater efficiency in the analysis. In general, the solution domain is discretized into a series of subdomains with many degrees of freedom. The number of variables or degrees of freedom may even exceed millions for large-scale problems, and sometimes very special material properties are encountered so that the system is highly sensitive to the method of discretization and the method of solution. As with ODEs and PDEs, it is impossible to discuss the details of all the numerical methods, and the author chooses to discuss the finite element method because of its wide acceptance and its suitability for general complicated problems.

Except for some simple problems with regular geometry and loading, it is very difficult to obtain analytical solutions for most boundary value problems. Towards this, the use of a numerical method seems indispensable, and the finite element method is one of the most popular methods used by engineers [32, 38]. There are two fundamental approaches to FEM, namely the weighted residual method (WRM) and the variational principle, but there are also other less popular principles which may be more effective under certain special cases. In the finite element analysis of an elastic problem, the solution is obtained from the weak form of the equivalent integration of the differential equations by WRM as an approximation. Alternatively, different approximate approaches (e.g. the collocation method, least-squares method and Galerkin method) for solving differential equations can be obtained by choosing different weights based on the WRM, and the Galerkin method appears to be the most popular approach in general.

Specifically, in elasticity for instance, the principle of virtual work (including both the principle of virtual displacement and that of virtual stress) is considered to be the weak form of the equivalent integration for the governing equilibrium equations. Furthermore, the aforementioned weak form of equivalent integration on the basis of the Galerkin method can also be evolved into the variation of a functional if the differential equations have certain properties such as linearity and self-adjointness. The principles of minimum potential energy and complementary energy are two variational approaches equivalent to the fundamental equations of elasticity.

Since displacement is usually the basic unknown quantity in FEM, only the principle of virtual displacement and minimum potential energy will be introduced in the following section. In this case, the FEM introduced herein is also called displacement finite element method (DFEM). There are other ways to form the basis of FEM with advantages in some cases, but these approaches are less general and will not be discussed here.

#### 1.11. Principle of virtual displacement

The principle of virtual displacement is the weak form of the equivalent integration for equilibrium equations and force boundary conditions. Given the equilibrium equations and force boundary conditions in index notation,

$$\sigma\_{ij,j} + f\_i = 0, (\text{in domain } V) \tag{68}$$

$$\sigma\_{ij} n\_j - T\_i = 0, (\text{on domain boundary } S\_\sigma)\tag{69}$$

In WRM, without loss of generality, the variation of the true displacement $\delta u\_i$ and its boundary value (i.e. $-\delta u\_i$) can be selected as the weight functions in the equivalent integration

$$\int\_{V} \delta u\_{i} (\sigma\_{ij,j} + f\_{i}) dV - \int\_{S\_{\sigma}} \delta u\_{i} (\sigma\_{ij} n\_{j} - T\_{i}) dS = 0 \tag{70}$$

The weak form of Eq. (70) is given as


$$\int\_{V} (-\delta \varepsilon\_{ij} \sigma\_{ij} + \delta u\_{i} f\_{i}) dV + \int\_{S\_{\sigma}} \delta u\_{i} T\_{i} dS = 0 \tag{71}$$

It can be seen clearly from Eq. (71) that the first item in the volume integral indicates the work done by the stresses under the virtual strain (i.e. internal virtual work), while the remaining items indicate the work done by the body force and surface force under the virtual displacement (i.e. external virtual work). In other words, the summation of the internal and external virtual works is equal to 0, which is called the principle of virtual displacement. Under this case, we can conclude that a force system will satisfy the equilibrium equations if the summation of the work done by it under any virtual displacement and strain is equal to 0.

#### 1.12. Principle of minimum potential energy (PMPE)

Based on Eq. (71), we can deduce that

$$\int\_{V} (\delta \varepsilon\_{ij} D\_{ijkl} \varepsilon\_{kl} - \delta u\_{i} f\_{i}) dV - \int\_{S\_{\sigma}} \delta u\_{i} T\_{i} dS = 0 \tag{72}$$

Due to the symmetry of the constitutive matrix Dijkl, we can further obtain

$$\delta \varepsilon\_{ij} D\_{ijkl} \varepsilon\_{kl} = \delta \left(\frac{1}{2} D\_{ijkl} \varepsilon\_{ij} \varepsilon\_{kl}\right) = \delta U(\varepsilon\_{mn}) \tag{73}$$

where $U(\varepsilon\_{mn})$ is the unit volume strain energy. Given the assumptions in linear elasticity,

$$-\delta\phi(u\_i) = f\_i \delta u\_i, -\delta\psi(u\_i) = T\_i \delta u\_i \tag{74}$$

Eq. (72) is further simplified to

$$
\delta \Pi\_P = 0 \tag{75}
$$

$\Pi\_P$ is the total potential energy of the system, which is equal to the summation of the potential energy of deformation and that of the external forces, and can be expressed as

$$
\Pi\_P = \Pi\_P(u\_i) = \int\_V \left(\frac{1}{2} D\_{ijkl} \varepsilon\_{ij} \varepsilon\_{kl} - f\_i u\_i\right) dV - \int\_{S\_\sigma} T\_i u\_i dS \tag{76}
$$

Eq. (75) shows that, among all the admissible displacements, the total potential energy of the system takes a stationary value at the real displacement, and it can further be verified that this stationary value is exactly the minimum value, which is the principle of minimum potential energy.
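
After discretization, Eq. (76) reduces to a function of the nodal displacements, $\Pi\_P(a) = \frac{1}{2}a^T K a - a^T F$, whose stationary point solves $Ka = F$ and is a minimum whenever $K$ is positive definite. The small $K$ and $F$ below are invented purely for the demonstration:

```python
import numpy as np

# Discrete analogue of Eq. (76): Pi(a) = 1/2 a^T K a - a^T F, where K is a
# symmetric positive definite stiffness matrix and F a nodal force vector.
K = np.array([[2.0, -1.0],
              [-1.0, 1.0]])
F = np.array([0.0, 1.0])

def Pi(a):
    return 0.5 * a @ K @ a - a @ F

a_star = np.linalg.solve(K, F)     # stationarity (delta Pi = 0)  <=>  K a = F
print(a_star)                      # [1. 2.]

# Any perturbation raises the energy by 1/2 da^T K da > 0,
# so the stationary point is indeed a minimum.
for da in (np.array([0.1, 0.0]), np.array([-0.2, 0.3])):
    assert Pi(a_star + da) > Pi(a_star)
```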

#### 1.13. General expressions and implementation procedure of FEM

The solution of a general continuum problem by FEM always follows an orderly step-by-step process which is easy to program and use by engineers. For illustration, a three-node triangular element for plane problems is taken as an example to illustrate the general expressions and implementation procedures of FEM.

#### 1.13.1. Discretization of domain

The first step in the finite element method is to divide the structure or solution region into subdivisions or elements. Hence, the structure is to be modelled with suitable finite elements. In general, the number, type, size, and arrangement of the elements are critical towards good performance of the numerical analysis. A typical discretization with three-node triangular element is shown schematically in Figure 1.

Mesh generation can be a difficult process for a general irregular domain. If only triangular elements are to be generated, this is a relatively simple task, and many commercial programs perform well in this respect. There are also some public domain codes (EasyMesh or Triangle, written in C) which are sufficient for normal purposes. For quadrilateral or higher-order elements, mesh generation is not that simple, and it is preferable to rely on the use of commercial programs for such purposes.
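
As a minimal illustration of this step (a sketch of ours, not one of the mesh generators mentioned above), the snippet below builds a structured triangulation of a rectangle in the spirit of Figure 1, with each cell split into two anticlockwise three-node triangles:

```python
import numpy as np

def structured_tri_mesh(a, b, nx, ny):
    """Split an a-by-b rectangle into nx*ny cells, each cut into two
    three-node triangles with anticlockwise node ordering."""
    xs = np.linspace(0.0, a, nx + 1)
    ys = np.linspace(0.0, b, ny + 1)
    nodes = np.array([(x, y) for y in ys for x in xs])
    tris = []
    for j in range(ny):
        for i in range(nx):
            n0 = j * (nx + 1) + i        # lower-left node of cell (i, j)
            n1, n2, n3 = n0 + 1, n0 + nx + 1, n0 + nx + 2
            tris += [(n0, n1, n3), (n0, n3, n2)]
    return nodes, tris

nodes, tris = structured_tri_mesh(2.0, 1.0, 4, 2)
# signed areas are positive for anticlockwise triangles and sum to 2 x 1 = 2
total = sum(((nodes[q][0]-nodes[p][0])*(nodes[r][1]-nodes[p][1])
           - (nodes[q][1]-nodes[p][1])*(nodes[r][0]-nodes[p][0])) / 2.0
            for p, q, r in tris)
print(len(nodes), len(tris), total)   # 15 16 2.0
```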

#### 1.13.2. Interpolation or displacement model

As can be seen from Figure 1(b), the nodes of a typical three-node triangular element are coded in anticlockwise order (i.e. in the order of i, j and m), and each node has two degrees of freedom (DOFs) or two displacement components, which are stored in a column vector in index notation as follows:

$$a\_i = \begin{bmatrix} u\_i \\ v\_i \end{bmatrix} (i, j, m) \tag{77}$$

Figure 1. Discretization of a two-dimensional domain with three-node triangular element.

In total, each element has six nodal displacements, i.e. six DOFs. Putting all the displacements in a column vector, we obtain the element nodal displacement column matrix as

$$a^{e} = \begin{bmatrix} a\_i \\ a\_j \\ a\_m \end{bmatrix} = \begin{bmatrix} u\_i & v\_i & u\_j & v\_j & u\_m & v\_m \end{bmatrix}^T \tag{78}$$

In FEM, the nodal displacements are chosen as the basic unknowns, so interpolation at any arbitrary point is based on the three nodal displacements of each element, which is called a displacement mode. For a three-node triangular element, a linear polynomial is utilized, and the element displacements in both the x-direction and y-direction are

$$
u = \beta\_1 + \beta\_2 x + \beta\_3 y \tag{79}
$$

$$
v = \beta\_4 + \beta\_5 x + \beta\_6 y \tag{80}
$$

Obviously, displacements of all the three nodes should satisfy Eqs. (79) and (80). By substituting the six nodal displacement components into these equations, it is easy to obtain another form of displacement mode as

$$
u = N\_i u\_i + N\_j u\_j + N\_m u\_m \tag{81}
$$

$$
v = N\_i v\_i + N\_j v\_j + N\_m v\_m \tag{82}
$$

where


$$N\_i = \frac{1}{2A}(a\_i + b\_i x + c\_i y)(i, j, m). \tag{83}$$

In Eq. (81), Ni, Nj and Nm denote the interpolation function or shape function for the three nodes, respectively. A is the area of the element, and ai, bi, ci⋯, cm are constants related to the coordinates of the three nodes. Similarly, Eqs. (81) and (82) can also be expressed in the form of matrix as

$$u = \begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} N\_i & 0 & N\_j & 0 & N\_m & 0 \\ 0 & N\_i & 0 & N\_j & 0 & N\_m \end{bmatrix} \begin{bmatrix} u\_i \\ v\_i \\ u\_j \\ v\_j \\ u\_m \\ v\_m \end{bmatrix} = N a^e \tag{84}$$

where N is the shape function matrix and a<sup>e</sup> is the element nodal displacement vector. For the geometric equations, element strains are

$$\varepsilon = \begin{bmatrix} \varepsilon\_x \\ \varepsilon\_y \\ \gamma\_{xy} \end{bmatrix} = Lu = LN a^e = L \begin{bmatrix} N\_i & N\_j & N\_m \end{bmatrix} a^e = \begin{bmatrix} B\_i & B\_j & B\_m \end{bmatrix} a^e = Ba^e \tag{85}$$

where L is the differential operator and B is the element strain displacement matrix which can be given as

$$B\_i = LN\_i = \begin{bmatrix} \frac{\partial}{\partial x} & 0 \\ 0 & \frac{\partial}{\partial y} \\ \frac{\partial}{\partial y} & \frac{\partial}{\partial x} \end{bmatrix} \begin{bmatrix} N\_i & 0 \\ 0 & N\_i \end{bmatrix} = \begin{bmatrix} \frac{\partial N\_i}{\partial x} & 0 \\ 0 & \frac{\partial N\_i}{\partial y} \\ \frac{\partial N\_i}{\partial y} & \frac{\partial N\_i}{\partial x} \end{bmatrix} \quad (i, j, m) \tag{86}$$

Substituting Eq. (85) into the stress-strain relation gives

$$
\sigma = \begin{bmatrix} \sigma\_x \\ \sigma\_y \\ \tau\_{xy} \end{bmatrix} = D\varepsilon = DBa^e = Sa^e \tag{87}
$$

where

$$S = DB = D\begin{bmatrix} B\_i & B\_j & B\_m \end{bmatrix} = \begin{bmatrix} S\_i & S\_j & S\_m \end{bmatrix} \tag{88}$$

S is called the element stress matrix. It should be noted that both the strain and stress matrices are constant for each element, because in a three-node triangular element, the displacement mode is a first-order function, and differentiating this function will give a constant function.
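The constants in Eq. (83) and the partition-of-unity property of the shape functions can be checked numerically. The following is a minimal sketch under the usual CST conventions (the function names and node ordering are illustrative assumptions, not from the text):

```python
import numpy as np

def cst_shape_coefficients(xy):
    """Coefficients a_k, b_k, c_k of Eq. (83) for a three-node triangle.

    xy: (3, 2) array of nodal coordinates in anticlockwise order (i, j, m).
    Returns (A, a, b, c) with N_k(x, y) = (a[k] + b[k]*x + c[k]*y) / (2*A).
    """
    x, y = xy[:, 0], xy[:, 1]
    j, m = [1, 2, 0], [2, 0, 1]          # cyclic permutations of (i, j, m)
    a = x[j] * y[m] - x[m] * y[j]        # a_i = x_j*y_m - x_m*y_j, etc.
    b = y[j] - y[m]                      # b_i = y_j - y_m
    c = x[m] - x[j]                      # c_i = x_m - x_j
    A = 0.5 * a.sum()                    # 2A = a_i + a_j + a_m for anticlockwise nodes
    return A, a, b, c

def shape(k, x, y, A, a, b, c):
    """Evaluate N_k at (x, y); equals 1 at node k and 0 at the other two nodes."""
    return (a[k] + b[k] * x + c[k] * y) / (2.0 * A)

# Unit right triangle with nodes i=(0,0), j=(1,0), m=(0,1)
A, a, b, c = cst_shape_coefficients(np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]))
```

Since each N_k is linear in x and y, its derivatives (and hence the B and S matrices above) are indeed constant over the element.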

#### 1.13.3. Stiffness equilibrium equation (SEE) of FEM derived from PMPE

For elastic plane problems, the total potential energy Π<sup>P</sup> in Eq. (76) can be expressed in matrix formulation as follows:

$$\Pi\_P = \int\_{\Omega} \frac{1}{2} \varepsilon^{T} D \varepsilon\, t\, dx dy - \int\_{\Omega} u^{T} f\, t\, dx dy - \int\_{S\_{\sigma}} u^{T} T\, t\, dS \tag{89}$$

where t, f, and T denote the thickness, body force and surface force, respectively. For an FEM problem, the total potential energy is the summation of that from all the elements. Therefore, substituting Eqs. (84) and (85) into Eq. (89) gives

$$\Pi\_P = \sum\_e \Pi\_P^e = \sum\_e \left( \frac{1}{2} a^{eT} \int\_{\Omega\_e} B^T D B\, t\, dx dy\, a^e \right) - \sum\_e \left( a^{eT} \int\_{\Omega\_e} N^T f\, t\, dx dy \right) - \sum\_e \left( a^{eT} \int\_{S\_{\sigma}^e} N^T T\, t\, dS \right) \tag{90}$$

In Eq. (90), define

$$\begin{aligned} K^{e} &= \int\_{\Omega\_e} B^{T} D B\, t\, dx dy, \quad P\_f^{e} = \int\_{\Omega\_e} N^{T} f\, t\, dx dy \\ P\_S^{e} &= \int\_{S\_{\sigma}^e} N^{T} T\, t\, dS, \quad P^{e} = P\_f^{e} + P\_S^{e} \end{aligned} \tag{91}$$

where K<sup>e</sup> and P<sup>e</sup> are named the element stiffness matrix and the equivalent element nodal load vector, respectively. Substituting Eq. (91) into Eq. (90), the total potential energy of the structure can be simplified as

$$
\Pi\_P = a^T \frac{1}{2} \sum\_{\epsilon} (K^{\epsilon}) a - a^T \sum\_{\epsilon} (P^{\epsilon}) \tag{92}
$$

Given


$$K = \sum\_{\epsilon} K^{\epsilon}, P = \sum\_{\epsilon} P^{\epsilon} \tag{93}$$

Eq. (92) is further simplified as

$$
\Pi\_P = \frac{1}{2} a^T K a - a^T P \tag{93a}
$$

where K and P are the global stiffness matrix and the global nodal load vector, respectively.

For PMPE, the variation of Π<sup>P</sup> is equal to 0 and the unknown variable is a, thus Eq. (75) gives

$$\frac{\partial \Pi\_P}{\partial a} = 0\tag{94}$$

which finally comes to the SEE of FEM as

$$Ka = P\tag{95}$$

From Eq. (93), we know that the global stiffness matrix and the global load vector are the assemblage of the element stiffness matrices and the equivalent element nodal load matrices, respectively. Specifically, in order to solve Eq. (95), the element stiffness matrices, element equivalent nodal load vectors, global stiffness matrix and global nodal load vector are all determined together with some given displacement boundary conditions. Without the provision of adequate boundary conditions, the system is singular, as rigid body motion produces no stress in the system and such modes will be present in the SEE.
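The assemblage in Eq. (93) and the imposition of zero-displacement boundary conditions before solving Ka = P can be sketched as follows. The element connectivity, the toy spring-like element matrix and all function names here are illustrative assumptions:

```python
import numpy as np

def assemble(n_nodes, elements, k_elems, ndof=2):
    """Scatter element stiffness matrices into the global K, i.e. K = sum_e K^e (Eq. (93))."""
    K = np.zeros((ndof * n_nodes, ndof * n_nodes))
    for nodes, ke in zip(elements, k_elems):
        dofs = np.concatenate([[ndof * n + i for i in range(ndof)] for n in nodes])
        K[np.ix_(dofs, dofs)] += ke
    return K

def solve_with_bc(K, P, fixed_dofs):
    """Eliminate rows/columns of fixed (zero-displacement) DOFs, then solve Ka = P (Eq. (95)).

    Without enough fixed DOFs, K is singular because of the rigid-body modes.
    """
    free = np.setdiff1d(np.arange(K.shape[0]), fixed_dofs)
    a = np.zeros(K.shape[0])
    a[free] = np.linalg.solve(K[np.ix_(free, free)], P[free])
    return a

# Toy example: two axial "spring" elements in a row (stiffness acts on x-DOFs only)
ke = np.zeros((4, 4))
ke[np.ix_([0, 2], [0, 2])] = [[1.0, -1.0], [-1.0, 1.0]]
K = assemble(3, [[0, 1], [1, 2]], [ke, ke])
P = np.zeros(6); P[4] = 1.0                 # unit load in x at node 2
a = solve_with_bc(K, P, [0, 1, 3, 5])       # fix node 0 and all y-DOFs
```

Two unit springs in series loaded by a unit force stretch by one unit each, which is a quick sanity check on the assembly.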

#### 1.13.4. Derivation of element stiffness matrices (ESM)

For a three-node triangular element, the element strain matrix B is constant, thus Eq. (91) gives

$$K^e = B^T D B\, t A = \begin{bmatrix} K\_{ii} & K\_{ij} & K\_{im} \\ K\_{ji} & K\_{jj} & K\_{jm} \\ K\_{mi} & K\_{mj} & K\_{mm} \end{bmatrix} \tag{96}$$

of which the submatrix

$$K\_{ij} = \begin{bmatrix} k\_{ij}^{xx} & k\_{ij}^{xy} \\ k\_{ij}^{yx} & k\_{ij}^{yy} \end{bmatrix} \tag{97}$$

K_ij indicates the nodal forces at the ith node along the x- and y-directions in the Cartesian coordinate system when the displacement of the jth node is unity along the x- and y-directions, which can be easily obtained. Moreover, the element stiffness matrix is symmetric, and the computational memory required in an FEM program can be reduced by using this property.
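Because B is constant over the element, K^e = B^T D B t A of Eq. (96) can be evaluated in closed form without numerical integration. A hedged sketch assuming plane stress (the function name and default material values are illustrative):

```python
import numpy as np

def cst_stiffness(xy, E=1.0, nu=0.3, t=1.0):
    """Closed-form K^e = B^T D B t A (Eq. (96)) for a plane-stress three-node triangle."""
    x, y = xy[:, 0], xy[:, 1]
    b = np.array([y[1] - y[2], y[2] - y[0], y[0] - y[1]])   # b_i, b_j, b_m of Eq. (83)
    c = np.array([x[2] - x[1], x[0] - x[2], x[1] - x[0]])   # c_i, c_j, c_m of Eq. (83)
    A = 0.5 * (x @ b)                                       # element area (anticlockwise nodes)
    B = np.zeros((3, 6))                                    # strain-displacement matrix, Eq. (86)
    B[0, 0::2] = b; B[1, 1::2] = c
    B[2, 0::2] = c; B[2, 1::2] = b
    B /= 2.0 * A
    D = E / (1.0 - nu**2) * np.array([[1.0, nu, 0.0],       # plane-stress elasticity matrix
                                      [nu, 1.0, 0.0],
                                      [0.0, 0.0, (1.0 - nu) / 2.0]])
    return B.T @ D @ B * t * A

K = cst_stiffness(np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]))
```

The symmetry of K^e and its rigid-body null modes (translations and the infinitesimal rotation) follow directly from this product form.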

It should be noted that for a higher order triangular element (e.g. six-node triangular element) or quadrilateral element for which higher order terms are involved, the strain matrix B is not constant any more so that the element stiffness matrix needs to be evaluated by numerical integration (direct integration is seldom adopted). Towards this, numerical integration methods such as the Gaussian integration or the Newton-Cotes integration can be utilized.

## 1.13.5. Assembling of ESMs and ENLMs

For an FEM process, we need to solve Eq. (95), which is the global equilibrium equation. Most of the elements in the matrix K are 0 simply because each node is only shared by a few surrounding elements. In view of that, a rectangular array can be used to store the global stiffness matrix (which is a square matrix), and the half bandwidth D can be defined as

$$D = (1 + \text{NDIF}) \times \text{NDOF} \tag{98}$$

where NDIF denotes the largest absolute difference between the node numbers among all the elements in the finite element mesh, and NDOF denotes the number of degrees of freedom per node.

In conclusion, the properties of the global stiffness matrix can be summarized as: symmetric, banded distribution, singularity and sparsity. Among all the properties, singularity will vanish by introducing appropriate boundary conditions to Eq. (95) to eliminate the rigid body motion. Also, other properties like banded distribution should be fully taken into consideration to reduce the computational memory and enhance the computation efficiency.
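Eq. (98) can be evaluated directly from the element connectivity, which also shows why node renumbering matters for banded storage. A small sketch (the function name is an illustrative assumption):

```python
def half_bandwidth(elements, ndof=2):
    """Half bandwidth D = (1 + NDIF) * NDOF of Eq. (98).

    NDIF is the largest absolute difference between node numbers within any
    single element, so a good node renumbering directly shrinks the band.
    """
    ndif = max(max(e) - min(e) for e in elements)
    return (1 + ndif) * ndof

print(half_bandwidth([[0, 1, 2], [1, 2, 3]]))   # compact numbering: prints 6
print(half_bandwidth([[0, 5, 2], [5, 2, 3]]))   # poor numbering widens the band: prints 12
```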

#### 1.13.6. Isoparametric element and numerical integration

Most engineering structures are not regular in shape, and some of them even have very complicated boundary shapes. Although the use of triangular elements can always fit a complicated boundary, the accuracy of these elements is low in general. To cope with an irregular boundary shape with higher accuracy in analysis, one of the most common approaches is the use of higher-order elements, and the isoparametric formulation is the most commonly used at present. Consider an arbitrary four-node quadrilateral element as an example, which is shown schematically in Figure 2. If we can find the transformation from Figure 2(a) to (b), then it will become easier to carry out numerical integration with complicated shapes for an arbitrary element. In Figure 2(a), we define the Cartesian coordinate system, while in Figure 2(b), we define the local coordinate system (or natural coordinate system) within a specific domain (i.e. ξ, η ∈ [−1, 1]). The relation between these two kinds of coordinate systems can be described as

$$\left\{ \begin{smallmatrix} \mathbf{x} \\ \mathbf{y} \end{smallmatrix} \right\} = f \left\{ \begin{smallmatrix} \xi \\ \eta \end{smallmatrix} \right\} \tag{99}$$

which can be further modified by the interpolation function at nodes in the local coordinate system as follows:

Figure 2. Isoparametric transition.


$$x = \sum\_{i=1}^{m} N\_i' x\_i, \quad y = \sum\_{i=1}^{m} N\_i' y\_i \tag{100}$$

where (x_i, y_i) are the coordinates in the Cartesian coordinate system corresponding to the ith node in the local coordinate system, N'_i is the interpolation function of the ith node in the local coordinate system, and m is the number of nodes chosen to transform the coordinates. Therefore, the regular element in the natural coordinate system can be transformed to the irregular element in the Cartesian coordinate system. The former element is called the parent element, while the latter is called the subelement. Specifically, for a four-node quadrilateral element, Eq. (100) can be further expanded as

$$
\begin{Bmatrix} \mathbf{x} \\ \mathbf{y} \end{Bmatrix} = \begin{bmatrix} N\_1 & \mathbf{0} & N\_2 & \mathbf{0} & N\_3 & \mathbf{0} & N\_4 & \mathbf{0} \\ \mathbf{0} & N\_1 & \mathbf{0} & N\_2 & \mathbf{0} & N\_3 & \mathbf{0} & N\_4 \end{bmatrix} \begin{Bmatrix} \mathbf{x}\_1 \\ \mathbf{y}\_1 \\ \mathbf{x}\_2 \\ \mathbf{y}\_2 \\ \mathbf{x}\_3 \\ \mathbf{y}\_3 \\ \mathbf{x}\_4 \\ \mathbf{y}\_4 \end{Bmatrix} \tag{101}
$$

Using the same interpolation functions, the element displacement model can be written as

$$
\begin{Bmatrix} \boldsymbol{u} \\ \boldsymbol{v} \end{Bmatrix} = \begin{bmatrix} N\_1 & 0 & N\_2 & 0 & N\_3 & 0 & N\_4 & 0 \\ 0 & N\_1 & 0 & N\_2 & 0 & N\_3 & 0 & N\_4 \end{bmatrix} \begin{Bmatrix} \boldsymbol{u}\_1 \\ \boldsymbol{v}\_1 \\ \boldsymbol{u}\_2 \\ \boldsymbol{v}\_2 \\ \boldsymbol{u}\_3 \\ \boldsymbol{v}\_3 \\ \boldsymbol{u}\_4 \\ \boldsymbol{v}\_4 \end{Bmatrix} \tag{102}$$

where the interpolation functions are given by

$$\begin{aligned} N\_1 &= \frac{1}{4}(1+\xi)(1+\eta), \quad N\_2 = \frac{1}{4}(1-\xi)(1+\eta) \\ N\_3 &= \frac{1}{4}(1-\xi)(1-\eta), \quad N\_4 = \frac{1}{4}(1+\xi)(1-\eta) \end{aligned} \tag{103}$$

As mentioned before, during the derivation of the element stiffness matrix and the equivalent load vector, the derivatives of the shape functions and the integration over the element surface or volume in the Cartesian coordinate system are required. Since the shape functions adopted herein are expressed in natural coordinates, derivative and integration transformation relationships are essential when an isoparametric element is used.
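The interpolation functions of Eq. (103) and the coordinate mapping of Eq. (100) can be checked numerically: node k of the parent element must map to node k of the subelement, and the functions must sum to one everywhere. A sketch with assumed names and arbitrary illustrative coordinates:

```python
import numpy as np

# Parent-element nodes in (xi, eta), ordered to match Eq. (103)
PARENT_NODES = np.array([[1.0, 1.0], [-1.0, 1.0], [-1.0, -1.0], [1.0, -1.0]])

def quad4_shape(xi, eta):
    """Bilinear interpolation functions N_1..N_4 of Eq. (103)."""
    return 0.25 * np.array([(1 + xi) * (1 + eta),
                            (1 - xi) * (1 + eta),
                            (1 - xi) * (1 - eta),
                            (1 + xi) * (1 - eta)])

def map_to_cartesian(xi, eta, xy):
    """Eq. (100): x = sum N_i x_i, y = sum N_i y_i for a 4-node subelement."""
    return quad4_shape(xi, eta) @ xy

# An irregular quadrilateral subelement (arbitrary illustrative coordinates)
xy = np.array([[2.0, 1.5], [0.0, 1.0], [0.2, 0.0], [1.8, -0.2]])
```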

#### 1.13.7. Derivative and integral transformation

According to the chain rule of partial differentiation,

$$\begin{split} \frac{\partial N\_i}{\partial \xi} &= \frac{\partial N\_i}{\partial x} \frac{\partial x}{\partial \xi} + \frac{\partial N\_i}{\partial y} \frac{\partial y}{\partial \xi}, \\ \frac{\partial N\_i}{\partial \eta} &= \frac{\partial N\_i}{\partial x} \frac{\partial x}{\partial \eta} + \frac{\partial N\_i}{\partial y} \frac{\partial y}{\partial \eta}, \end{split} \tag{104}$$

or in matrix form

$$\begin{Bmatrix} \frac{\partial N\_i}{\partial \xi} \\ \frac{\partial N\_i}{\partial \eta} \end{Bmatrix} = \begin{bmatrix} \frac{\partial x}{\partial \xi} & \frac{\partial y}{\partial \xi} \\ \frac{\partial x}{\partial \eta} & \frac{\partial y}{\partial \eta} \end{bmatrix} \begin{Bmatrix} \frac{\partial N\_i}{\partial x} \\ \frac{\partial N\_i}{\partial y} \end{Bmatrix} = J \begin{Bmatrix} \frac{\partial N\_i}{\partial x} \\ \frac{\partial N\_i}{\partial y} \end{Bmatrix} \tag{105}$$

Inverting Eq. (105) gives

$$\begin{Bmatrix} \frac{\partial N\_i}{\partial x} \\ \frac{\partial N\_i}{\partial y} \end{Bmatrix} = J^{-1} \begin{Bmatrix} \frac{\partial N\_i}{\partial \xi} \\ \frac{\partial N\_i}{\partial \eta} \end{Bmatrix} \tag{106}$$

where

$$\begin{aligned} J &= \begin{bmatrix} \frac{\partial x}{\partial \xi} & \frac{\partial y}{\partial \xi} \\ \frac{\partial x}{\partial \eta} & \frac{\partial y}{\partial \eta} \end{bmatrix} = \begin{bmatrix} \sum\_{i=1}^{4} \frac{\partial N\_i}{\partial \xi} x\_i & \sum\_{i=1}^{4} \frac{\partial N\_i}{\partial \xi} y\_i \\ \sum\_{i=1}^{4} \frac{\partial N\_i}{\partial \eta} x\_i & \sum\_{i=1}^{4} \frac{\partial N\_i}{\partial \eta} y\_i \end{bmatrix} \\ &= \begin{bmatrix} \frac{\partial N\_1}{\partial \xi} & \frac{\partial N\_2}{\partial \xi} & \frac{\partial N\_3}{\partial \xi} & \frac{\partial N\_4}{\partial \xi} \\ \frac{\partial N\_1}{\partial \eta} & \frac{\partial N\_2}{\partial \eta} & \frac{\partial N\_3}{\partial \eta} & \frac{\partial N\_4}{\partial \eta} \end{bmatrix} \begin{bmatrix} x\_1 & y\_1 \\ x\_2 & y\_2 \\ x\_3 & y\_3 \\ x\_4 & y\_4 \end{bmatrix} \end{aligned} \tag{107}$$

For an infinitely small element, the area under the Cartesian coordinate system and the natural coordinate system are related by

$$ds = d\mathbf{x} dy = |\mathbf{J}| d\xi d\eta,\tag{108}$$

where |J| is the determinant of the Jacobian matrix J. Therefore, the element stiffness matrix and the equivalent nodal load matrices in Eq. (91) can be transformed to

$$\begin{aligned} K^{e} &= \int\_{-1}^{1}\int\_{-1}^{1} B^T D B\, t\, |J|\, d\xi d\eta, \quad P\_f^{e} = \int\_{-1}^{1}\int\_{-1}^{1} N^T f\, t\, |J|\, d\xi d\eta \\ P\_S^{e} &= \int\_{S\_{\sigma}^e} N^T T\, t\, |J|\, d\xi d\eta \end{aligned} \tag{109}$$

For solving the integral equation, usually the Gaussian integration method is employed. In practice, both two and three integration points along each direction of integration are commonly used. Since the discretized system is usually overstiff, it is commonly observed that the use of two integration points along each direction of integration will slightly reduce the stiffness of the matrix and give better results as compared with the use of three integration points. The use of exact integration is possible for some elements, but such approaches are usually tedious and are seldom adopted. An advantage of exact integration is that it is not affected by the shape of the element, while the transformation in Eq. (109) may be affected if the shape of the element is poor. The author has developed many finite element programs for teaching and research purposes which can be obtained at ceymchen@polyu.edu.hk. The programs available include plane stress/strain problems, thin/thick plate bending problems, consolidation in 1D and 2D (Biot), seepage problems, slope stability problems, pile foundation problems and others.
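Eqs. (105)-(109) combine into the usual isoparametric stiffness loop: at each Gauss point, form the Jacobian of Eq. (107), convert the natural derivatives via Eq. (106), build B, and accumulate with weight |J|. A sketch using 2x2 Gauss points under plane stress; the function name, node ordering (matching Eq. (103)) and material values are assumptions:

```python
import numpy as np

def quad4_stiffness(xy, D, t=1.0, ngauss=2):
    """K^e of Eq. (109) by Gauss quadrature for a 4-node isoparametric quadrilateral.

    xy: (4, 2) nodal coordinates ordered to match Eq. (103); D: 3x3 elasticity matrix.
    """
    pts, wts = np.polynomial.legendre.leggauss(ngauss)   # Gauss points/weights on [-1, 1]
    K = np.zeros((8, 8))
    for xi, wx in zip(pts, wts):
        for eta, wy in zip(pts, wts):
            # Natural derivatives dN/dxi and dN/deta of the N_k in Eq. (103)
            dN = 0.25 * np.array([
                [(1 + eta), -(1 + eta), -(1 - eta),  (1 - eta)],   # dN/dxi
                [(1 + xi),   (1 - xi),  -(1 - xi),  -(1 + xi)],    # dN/deta
            ])
            J = dN @ xy                       # Jacobian, Eq. (107)
            dNxy = np.linalg.solve(J, dN)     # Cartesian derivatives, Eq. (106)
            B = np.zeros((3, 8))              # strain-displacement matrix
            B[0, 0::2] = dNxy[0]; B[1, 1::2] = dNxy[1]
            B[2, 0::2] = dNxy[1]; B[2, 1::2] = dNxy[0]
            K += B.T @ D @ B * t * np.linalg.det(J) * wx * wy   # Eq. (109)
    return K

E, nu = 1.0, 0.3
D = E / (1 - nu**2) * np.array([[1.0, nu, 0.0], [nu, 1.0, 0.0], [0.0, 0.0, (1 - nu) / 2]])
K = quad4_stiffness(np.array([[1.0, 1.0], [0.0, 1.0], [0.0, 0.0], [1.0, 0.0]]), D)
```

For a bilinear element, 2x2 Gauss points integrate the undistorted-element stiffness exactly, which is consistent with the remark above about two points per direction being sufficient in practice.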

#### 1.14. Distinct element method


In practical applications, the limit equilibrium method based on the method of slices or method of columns and the strength reduction method based on the finite element method or finite difference method are used for many types of stability problems. These two major analysis methods take advantage of the fact that the in situ stress field, which is usually not known with good accuracy, is not required in the analysis. The uncertainties associated with the stress-strain relation can also be avoided by the simple concept of a factor of safety or the determination of the ultimate limit state. In general, this approach is sufficient for engineering analysis and design. If the condition of the system after failure has initiated is required to be assessed, however, these two methods will not be applicable. Even if the in situ stress field and the stress-strain relation can be defined, the post-failure collapse is difficult to assess using conventional continuum-based numerical methods, as sliding, rotation and collapse of the slope involve very large displacement or even separation without the requirement of continuity.

The most commonly used numerical methods for continuous systems are the FDM, the FEM and the boundary element method (BEM). The basic assumption adopted in these numerical methods is that the materials concerned are continuous throughout the physical processes. This assumption of continuity requires that, at all points in a problem domain, the material cannot be torn open or broken into pieces. All material points originally in the neighbourhood of a certain point in the problem domain remain in the same neighbourhood throughout the whole physical process. Some special algorithms have been developed to deal with material fractures in continuum mechanics-based methods, such as the special joint elements by Goodman [13] and the displacement discontinuity technique in BEM by Crouch and Starfield [5]. However, these methods can only be applied with limitations [21]:

1. large-scale slip and opening of fracture elements are prevented in order to maintain the macroscopic material continuity;

2. the amount of fracture elements must be kept relatively small so that the global stiffness matrix can be maintained well-posed, without causing severe numerical instabilities; and

3. complete detachment and rotation of elements or groups of elements as a consequence of deformation are either not allowed or treated with special algorithms.
Before a slope starts to collapse, the factor of safety serves as an important index in both the LEM and SRM to assess the stability of the slope. The movement and growth of the failure after it has launched, which is also important in many cases, cannot be simulated with a continuum model and should be analyzed by the distinct element method (DEM).

In the continuum description of soil material, well-established macro-constitutive equations whose parameters can be measured experimentally are used. On the other hand, a discrete element approach considers the material to be composed of distinct grains or particles that interact with each other. The commonly used distinct element method is an explicit method based on finite difference principles, which originated in the early 1970s with a landmark work on the progressive movements of rock masses as 2D rigid block assemblages [6]. Later works by Cundall developed into the early versions of the UDEC and 3DEC codes [9, 10, 12]. The method has also been developed for simulating the mechanical behaviour of granular materials [8], with a typical early code BALL [7], which later evolved into the codes of the PFC group for 2D and 3D problems of particle systems (Itasca, 1995). Through continuous developments and extensive applications over the last three decades, a great body of knowledge and a rich field of literature about the distinct element method have accumulated. The main trend in the development and application of the method in rock engineering is represented by the history and results of the UDEC/3DEC code groups. Currently, there are many open source (Oval, LIGGGHTS, ESyS, Yade, ppohDEM, Lammps) as well as commercial DEM programs, but in general, this method is still limited to basic research instead of practical application as there are many limitations, which include: (1) it is difficult to define and determine the microparameters; (2) there are still many drawbacks in matching the macro response to determine the microparameters; (3) it is not easy to set up a computer model; (4) it is not easy to include structural elements or water pressure; (5) it is extremely time consuming to perform an analysis; and (6) postprocessing is not easy or trivial.

It should also be noted that DEM can be formulated by an energy-based implicit integration scheme, which is the discontinuous deformation analysis (DDA) method. This method is similar in many respects to the force-based explicit integration scheme mentioned previously.

In DEM, the packing of granular material can be defined from statistical distributions of grain size and porosity, and the particles are assigned normal and shear stiffnesses and friction coefficients in the contact relation. Two types of bonds can be represented either individually or simultaneously; these bonds are referred to as the contact and parallel bonds, respectively (Itasca, 1995). Although the individual particles are solid, the particles are only partially connected at the contact points, which change at different time steps. Under low normal stresses, the strength of the tangential bonds of most granular materials will be weak, and the material may flow like a fluid under very small shear stresses. Therefore, the behaviour of granular material in motion can be studied as a fluid-mechanical phenomenon of particle flow where individual particles may be treated as "molecules" of the flowing granular material. In many particle models for geological materials in practice, the number of particles contained in a typical domain of interest will be very large, similar to the large numbers of molecules.

One of the primary objectives of the particle model is the establishment of the relations between microscopic and macroscopic variables/parameters of the particle systems, mainly through micromechanical constitutive relations at the contacts. Compared with a continuum, particles have an additional degree of freedom of rotation which enables them to transmit couple stresses, besides forces through their translational degrees of freedom. At any given moment, the positions and velocities of the particles can be obtained from the translational and rotational equations of motion, and any special physical phenomenon can be traced back to the individual particle interactions. Therefore, it is possible for DEM to analyze large deformation problems and the flow process which occurs after slope failure has initiated. The main limitation of DEM is the great difficulty in relating the microscopic and macroscopic variables/parameters; hence, DEM is mainly tailored towards qualitative instead of quantitative analysis.

DEM runs according to a time-difference scheme in which the calculation includes the repeated application of the law of motion to each particle, a force-displacement law to each contact, and a contact-updating scheme. Generally, there are two types of contact in the program, which are the ball-wall contact and the ball-ball contact. In each cycle, the set of contacts is updated from the known particle and wall positions. The force-displacement law is first applied to each contact, and the new contact force is then calculated according to the relative motion and the constitutive relation. The law of motion is then applied to each particle to update its velocity and direction of travel based on the resultant force and moment from the contacts acting on the particle. Although every particle is assumed to be rigid, the behaviour of the contacts is characterized using a soft contact approach in which a finite normal stiffness is taken to represent the stiffness which exists at the contact. The soft contact approach allows a small overlap between the particles which can be easily computed. Stress on the particles is then determined from this overlap through the particle interface.
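The calculation cycle just described (force-displacement law at each contact, then law of motion for each particle) can be illustrated with two balls and a linear normal contact spring acting on the soft-contact overlap. This is a hypothetical one-dimensional sketch, not the actual PFC/UDEC implementation; all names and parameter values are assumptions:

```python
def dem_two_balls(x1=0.0, x2=1.05, r=0.5, kn=1.0e4, m=1.0,
                  v1=1.0, v2=-1.0, dt=1.0e-4, steps=2000):
    """Repeat the DEM cycle `steps` times for two balls moving head-on.

    Force-displacement law: a linear spring kn acting on the small overlap
    allowed by the soft-contact approach. Law of motion: explicit update of
    velocities and positions from the resultant contact force.
    """
    for _ in range(steps):
        # Force-displacement law applied to the (single) contact
        overlap = 2.0 * r - (x2 - x1)
        f = kn * overlap if overlap > 0.0 else 0.0   # repulsive normal force
        # Law of motion applied to each particle (explicit time stepping)
        v1 += (-f) * dt / m
        v2 += (+f) * dt / m
        x1 += v1 * dt
        x2 += v2 * dt
    return x1, x2, v1, v2

x1, x2, v1, v2 = dem_two_balls()
```

With these illustrative parameters the balls rebound elastically: momentum is conserved exactly by the equal and opposite contact forces, and the small time step keeps the explicit scheme stable.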

## 1.15. General formulation of DEM


PFC runs according to a time-stepping scheme in which the calculation cycle comprises the repeated application of the law of motion to each particle, the force-displacement law to each contact, and a constant updating of the wall positions. In general, two types of contacts exist in the program: ball-to-wall contacts and ball-to-ball contacts. In each cycle, the set of contacts is updated from the known particle and wall positions. The force-displacement law is first applied to each contact: a new contact force is calculated and replaces the old contact force. The force calculations are based on preset parameters such as normal stiffness, density and friction. Next, the law of motion is applied to each particle to update its velocity and direction of travel based on the resultant force, moment and contacts acting on the particle. The force-displacement law is then applied again to continue the cycle.
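The calculation cycle described above can be sketched with a minimal one-dimensional script (a toy illustration, not the actual PFC implementation; the stiffness, mass and time-step values are purely hypothetical):

```python
# Toy 1D DEM cycle: two discs on a line with a linear normal contact.
# Each cycle applies the force-displacement law at the contact, then the
# law of motion to each particle (all parameter values are hypothetical).

K_N = 1.0e4   # normal contact stiffness
DT = 1.0e-4   # time step
MASS = 1.0    # particle mass

class Ball:
    def __init__(self, x, v, r):
        self.x, self.v, self.r = x, v, r
        self.f = 0.0

def contact_force(a, b):
    """Force-displacement law: repulsive force proportional to overlap."""
    overlap = (a.r + b.r) - abs(b.x - a.x)
    return K_N * overlap if overlap > 0.0 else 0.0

def cycle(a, b, steps):
    for _ in range(steps):
        f = contact_force(a, b)            # 1. update the contact force
        sign = 1.0 if a.x < b.x else -1.0
        a.f, b.f = -sign * f, sign * f     # equal and opposite
        for p in (a, b):                   # 2. law of motion
            p.v += (p.f / MASS) * DT
            p.x += p.v * DT

a = Ball(x=0.0, v=1.0, r=0.5)   # moving towards b
b = Ball(x=1.2, v=0.0, r=0.5)   # initially at rest
cycle(a, b, steps=20000)
# In this elastic, equal-mass collision the momentum transfers almost
# entirely to b, as expected from the exact solution.
```

Because the contact forces are always equal and opposite, the total momentum is conserved exactly by this explicit scheme, which is one reason the particle-by-particle integration is attractive.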

#### 1.16. The force-displacement law

The force-displacement law is described for both the ball-ball and ball-wall contacts. The contact force arises from contact occurring at a point. For a ball-ball contact, the normal vector is directed along the line between the ball centres. For a ball-wall contact, the normal vector is directed along the line defining the shortest distance between the ball centre and the wall. The contact force vector $F_i$ is composed of normal and shear components in a single plane surface:

$$F_i = F_{ij}^{n}(t) + F_{ij}^{s}(t + \Delta t) \tag{110}$$

The force acting on particle i in contact with particle j at time t is given by

$$F_{ij}^{n}(t) = k_n \left( r_i + r_j - l_{ij}(t) \right) \tag{111}$$

where $r_i$ and $r_j$ stand for the radii of particles i and j, $l_{ij}(t)$ is the distance between the centres of the particles at time t and $k_n$ represents the normal stiffness at the contact. The shear force acting on particle i during a contact with particle j is determined by

$$F_{ij}^{s}(t+\Delta t) = \pm \min\left( F_{ij}^{s}(t) + k_s \Delta s_{ij},\; f\,\big|F_{ij}^{n}(t+\Delta t)\big| \right) \tag{112}$$

where f is the particle friction coefficient and $k_s$ represents the tangent shear stiffness at the contact. The new shear contact force is found by summing the old shear force $F_{ij}^{s}(t)$ with the shear elastic force increment, capped by the Coulomb friction limit. $\Delta s_{ij}$ stands for the shear contact displacement increment occurring over a time step $\Delta t$:

$$\Delta s_{ij} = v_{ij}^{s} \Delta t \tag{113}$$

where $v_{ij}^{s}$ is the shear component of the relative velocity at the contact between particles i and j over the time step $\Delta t$.
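Equations (110)–(113) can be illustrated with a short sketch (a hedged toy example; the stiffnesses, radii and friction coefficient are invented for illustration):

```python
import math

def normal_force(k_n, r_i, r_j, l_ij):
    """Eq. (111): linear normal force from the overlap of two particles."""
    return k_n * (r_i + r_j - l_ij)

def shear_force(fs_old, k_s, ds_ij, f, fn_new):
    """Eq. (112): elastic trial increment, capped by the Coulomb
    friction limit f*|F_n|, with the sign of the trial force kept."""
    trial = fs_old + k_s * ds_ij
    limit = f * abs(fn_new)
    return math.copysign(min(abs(trial), limit), trial)

# An overlap of 0.1 between two unit-radius particles gives F_n = 10
fn = normal_force(k_n=100.0, r_i=1.0, r_j=1.0, l_ij=1.9)
# Small shear increment: contact remains elastic
fs1 = shear_force(fs_old=0.0, k_s=50.0, ds_ij=0.05, f=0.5, fn_new=fn)
# Large shear increment: sliding occurs, force capped at f*|F_n|
fs2 = shear_force(fs_old=fs1, k_s=50.0, ds_ij=1.0, f=0.5, fn_new=fn)
```

The cap in the second call is what makes the contact model non-linear: once the Coulomb limit is reached, further shear displacement dissipates energy by frictional sliding instead of storing it elastically.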

#### 1.17. Law of motion

The motion of a particle is determined by the resultant force and moment acting on it. The motion induced by the resultant force is translational motion; the motion induced by the resultant moment is rotational motion. The equations of motion are written in vector form as follows:



$$\sum_{j} F_{ij} + m_i \mathbf{g} + F_i^{d} = m_i \ddot{\mathbf{x}}_i \tag{114}$$


$$\sum_{j} r_{i} F_{ij} + M_{i}^{d} = I_{r} \ddot{\theta}_{i} \tag{115}$$

where $\ddot{x}_i$ and $\ddot{\theta}_i$ stand for the translational and rotational accelerations of particle i, $I_r$ stands for the moment of inertia, and $F_i^d$ and $M_i^d$ stand for the damping force and damping moment. Unlike a finite element formulation, there are now three degrees of freedom per particle for 2D problems and six degrees of freedom for 3D problems. In Cundall and Strack's explicit-integration distinct element approach, solution of a global system of equations is avoided by considering the dynamic equilibrium of the individual particles rather than solving the entire system simultaneously; that is, Newton's law of motion is applied directly. This approach also avoids the generation and storage of the large global stiffness matrix that appears in finite element analysis. On the other hand, the implicit DDA approach generates a global stiffness matrix which is even larger than that in finite element analysis, as the rotation is involved directly in the stiffness matrix.
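A minimal sketch of Eqs. (114) and (115) for a single 2D particle (three degrees of freedom; all numerical values below are hypothetical):

```python
def integrate_motion(m, I_r, v, w, f_contact, m_contact,
                     f_damp, m_damp, g, dt):
    """One explicit step of Newton's law for a single particle:
    Eq. (114) for translation and Eq. (115) for rotation."""
    a = (f_contact + m * g + f_damp) / m      # translational acceleration
    alpha = (m_contact + m_damp) / I_r        # rotational acceleration
    return v + a * dt, w + alpha * dt         # updated velocities

# A particle of mass 2 under a 4 N resultant force and a 1 N*m moment
v_new, w_new = integrate_motion(m=2.0, I_r=0.5, v=0.0, w=0.0,
                                f_contact=4.0, m_contact=1.0,
                                f_damp=0.0, m_damp=0.0, g=0.0, dt=0.1)
```

Each particle is updated independently in this way, which is exactly how the explicit scheme avoids assembling and storing any global stiffness matrix.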

In a typical DEM simulation, if there is no yield by contact separation or frictional sliding, the particles will vibrate constantly and equilibrium is difficult to achieve. To avoid this physically incorrect phenomenon, numerical or artificial damping is usually adopted in many DEM codes, and the two most common approaches are mass damping and non-viscous damping. For mass damping, the amount of damping that each particle "feels" is proportional to its mass, with a proportionality constant that depends on the eigenvalues of the stiffness matrix; this damping is usually applied equally to all the nodes. As this form of damping introduces body forces, which may not be appropriate in flowing regions, it may influence the mode of failure. Alternatively, Cundall [11] proposed a method where the damping force at each node is proportional to the magnitude of the out-of-balance force, with a sign chosen to ensure that the vibrational modes are damped rather than the steady motion. This form of damping has the advantage that only accelerating motion is damped, so no erroneous damping forces arise from steady-state motion; the damping constant is non-dimensional, and the damping is frequency independent. As suggested by Itasca [20], an advantage of this approach is that it is similar to hysteretic damping, as the energy loss per cycle is independent of the rate at which the cycle is executed. While damping is one way to overcome the non-physical nature of the contact constitutive models in DEM simulations, it is quite difficult to select an appropriate and physically meaningful value for the damping. In many DEM simulations, particles are moving around each other and the dominant forms of energy dissipation are frictional sliding and contact breakage, so the choice of damping may affect the results of computations. Currently, most DEM codes allow the use of automatic damping or a manually prescribed damping if necessary.
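Cundall's local (non-viscous) damping can be sketched as follows: the damping force opposes the current velocity with a magnitude proportional to the out-of-balance force, so steady motion is left undamped. The mass, stiffness and step count below are hypothetical (0.7 is a commonly quoted default for the damping constant):

```python
import math

def local_damping_force(f_unbalanced, v, alpha=0.7):
    """Damping force proportional to the out-of-balance force,
    with sign opposing the current velocity (local damping)."""
    if v == 0.0:
        return 0.0
    return -alpha * abs(f_unbalanced) * math.copysign(1.0, v)

# A mass-spring system pushed by a constant force: with local damping
# the vibration decays and x settles at the static solution f_ext/k = 0.1.
m, k, dt = 1.0, 100.0, 1.0e-3
f_ext = 10.0
x, v = 0.0, 0.0
for _ in range(50000):
    f = f_ext - k * x                           # out-of-balance force
    v += (f + local_damping_force(f, v)) / m * dt
    x += v * dt
```

Note that once the system reaches static equilibrium the out-of-balance force vanishes, so the damping force vanishes with it; this is the property that makes the scheme safe for steady-state motion.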

To capture the inherent non-linear behaviour of the problem (with generation and removal of contacts, non-linear contact response and stress-strain behaviour, among others), the displacements and contact forces in a given time step must be small enough that, within a single time step, disturbances cannot propagate from a particle further than its nearest neighbours. In most DEM programs, this can be achieved automatically, and the default setting is usually good enough for normal cases. It is, however, sometimes necessary to adjust the time step manually in special cases where the input parameters are unreasonably high or low. Most DEM codes use the central difference time integration algorithm, which is a second-order scheme in the time step.
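The time-step restriction can be estimated from the single mass-spring analogy (a common back-of-the-envelope estimate, not any particular code's formula; the safety factor and values are illustrative):

```python
import math

def critical_time_step(mass, k_contact, safety=0.1):
    """Central-difference stability limit for one mass-spring unit is
    dt_crit = 2/omega = 2*sqrt(m/k); a safety factor well below 1 is
    applied because many contacts interact in a real assembly."""
    return safety * 2.0 * math.sqrt(mass / k_contact)

# Stiff contacts or light particles force a smaller time step.
dt = critical_time_step(mass=1.0, k_contact=1.0e4)
```

This is why unreasonably high stiffnesses or low densities in the input can make the automatically chosen time step, and hence the whole analysis, impractically small.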

## 1.18. Measuring logic

If the local results in a DEM analysis are examined, large fluctuations are found with respect to both location and time. Such results are not surprising, as they are highly sensitive to the interaction between particles and hence to the time step at which the results are monitored. Such local results can be meaningless unless they are monitored over a long time span or a finite region. A number of quantities in a DEM model are therefore defined with respect to a specified measurement circle. These quantities include the coordination number, porosity, sliding fraction, stress and strain rate. The coordination number is defined as the average number of contacts per particle. Only particles with centroids contained within the measurement circle are considered in the computation. In order to account for the area of particles that is thereby neglected, a correction factor based on the porosity is applied to the computed value of stress.

Since a measurement circle is used, the stress in a particle is described as the two in-plane forces acting on each particle per particle volume. The average stress is defined as the total stress in the particles divided by the volume of the measurement circle. The shape of the particles therefore does not affect the average stress measurement, because the reported stress is simply scaled by the volume; the reported stress is interpreted as the stress per volume of the measurement circle.
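The measurement-circle logic can be sketched as follows (the geometry and contact counts are illustrative only; real codes also apply the porosity-based stress correction mentioned above):

```python
import math

def in_circle(p, cx, cy, R):
    """A particle is counted only if its centroid lies in the circle."""
    return (p["x"] - cx) ** 2 + (p["y"] - cy) ** 2 <= R ** 2

def measure(particles, contacts, cx, cy, R):
    inside = [p for p in particles if in_circle(p, cx, cy, R)]
    # coordination number: average contacts per included particle
    coord = sum(contacts[p["id"]] for p in inside) / len(inside)
    # porosity: 1 - (solid area of included particles) / (circle area)
    solid = sum(math.pi * p["r"] ** 2 for p in inside)
    porosity = 1.0 - solid / (math.pi * R ** 2)
    return coord, porosity

particles = [
    {"id": 0, "x": 0.0, "y": 0.0, "r": 1.0},
    {"id": 1, "x": 1.5, "y": 0.0, "r": 1.0},
    {"id": 2, "x": 9.0, "y": 9.0, "r": 1.0},  # centroid outside: ignored
]
contacts = {0: 2, 1: 4, 2: 1}
coord, poro = measure(particles, contacts, cx=0.0, cy=0.0, R=4.0)
```

Averaging over the circle is what smooths out the particle-scale fluctuations into quantities that can be compared with continuum measures.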

## 1.19. Discussion and conclusion

There are also various publications on the numerical solution of differential equations, and the reader is referred to the works of Lee and Schiesser [24], Jovanović and Süli [22], Veiga et al. [37], Sewell [35], Morton and Mayers [27], Logg et al. [25], Holmes [17], Lui [26], Lapidus and Pinder [23] and Iserles [19]. It is impossible for the author to cover every available analytical or numerical method; hence, the author has chosen some methods that are actually used for teaching and research. Readers are strongly encouraged to consult the numerous resources available in various books and publications. There are still new developments for the solution of specific differential equations in large-scale problems, and this is also the current trend in the development of differential equation solvers.

Due to the importance of the solution of differential equations, there are other important numerical methods used by different researchers that are not discussed here, including the finite difference and boundary element methods (computer codes for learning can also be obtained from the author). The finite difference method relies on the Taylor series: the derivatives in the differential equation are replaced with finite difference approximations on a discretized domain. This results in a system of algebraic equations that can be solved implicitly or explicitly. There are various ways to form the derivatives, the most common being the forward difference, backward difference and central difference schemes. While finite difference methods may be more suitable for certain types of differential equations, they are less convenient for irregular boundary conditions than the finite element method. For a highly irregular domain where it is not easy to form a good discretization, the finite element method is also much easier and more natural to apply. In this respect, it is not surprising that more engineering programs are written using the finite element method than the finite difference method.
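The three difference schemes mentioned can be compared on a simple function (a short illustration of their relative accuracy):

```python
def forward_diff(f, x, h):
    """First-order accurate: truncation error O(h)."""
    return (f(x + h) - f(x)) / h

def backward_diff(f, x, h):
    """First-order accurate: truncation error O(h)."""
    return (f(x) - f(x - h)) / h

def central_diff(f, x, h):
    """Second-order accurate: truncation error O(h^2)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

# d/dx x^3 at x = 2 is 12; the central scheme is far more accurate.
f = lambda x: x ** 3
err_fwd = abs(forward_diff(f, 2.0, 1e-3) - 12.0)
err_bwd = abs(backward_diff(f, 2.0, 1e-3) - 12.0)
err_cen = abs(central_diff(f, 2.0, 1e-3) - 12.0)
```

The same stencils, applied to the unknowns on a grid rather than to a known function, generate the algebraic system described above.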


The boundary element method (BEM) is another numerical method for solving linear partial differential equations that can be formulated as integral equations. The boundary element method uses the given boundary conditions to fit boundary values into the integral equation. In the post-processing stage, the integral equation is used to calculate the solution numerically at any given point inside the solution domain. BEM is applicable to problems for which Green's functions can be calculated, so the method was initially designed for problems in linear homogeneous media. The dimension of the problem is then reduced by one: for example, a two-dimensional problem is effectively reduced to a one-dimensional problem along the boundary, which greatly improves the efficiency of computation. This requirement, however, imposes considerable restrictions on the range and generality of problems to which the boundary element method can usefully be applied. There are some new developments that allow the boundary element method to be used for non-linear problems or problems with several major materials (problems with a random distribution of material properties are still not applicable), but the fundamental solutions are often difficult to integrate. One important property of boundary element analysis is that it requires the solution of a fully populated matrix, as compared with the sparse systems of the finite element/difference methods. For complicated problems, the boundary element method thus loses its advantage compared with other numerical methods. Due to these various limitations, only a limited number of boundary element programs are available to researchers. Interested readers can consult the works of Banerjee [1], Brebbia et al. [3] and Trevelyan [36]. There appears to have been less interest in the use and development of the boundary element method in recent years, due to its various limitations for general non-linear, non-homogeneous problems.

Historically, various techniques have been developed for ordinary and partial differential equations under different boundary conditions. While these tricks appear elegant, they are not readily adopted for routine engineering use due to various limitations. As an engineer, the author has seldom adopted the analytical methods outlined in this chapter in actual applications (though he does adopt them for teaching), in contrast with the numerical methods outlined here. At present, many proprietary and open-source finite element and distinct element codes are being used for complicated real problems. Such computer codes (usually in Fortran or C) are usually difficult to read (if available at all), and computer codes for all the partial differential forms (including some extended forms) discussed in this chapter are readily available from the author for learning purposes. There are also very powerful and general finite element tools and differential equation solvers such as FreeFem++, Comsol, Matlab, Mathematica, Maple and Maxima, which are used by many scientists and engineers [39, 40]. The use of parallel computing is also strongly influenced by the need to solve complicated partial differential equations over large solution domains.

## Author details

Cheng Yung Ming

Address all correspondence to: ceymchen@polyu.edu.hk

Department of Civil and Environmental Engineering, Hong Kong Polytechnic University, Hong Kong SAR, China

## References


[1] Banerjee P.K. (editor). Developments in boundary element methods, Vols. 1, 2, 3, Applied Science Publishers, London, 1979.

[2] Bernatz R. Fourier series and numerical methods for partial differential equations, John Wiley, USA, 2010.

[3] Brebbia C.A., Telles J.C.F. and Wrobel L.C. Boundary element techniques, Springer-Verlag, Berlin, 1984.

[4] Bronson R. and Costa G.B. Differential equations, McGraw-Hill, USA, 2006.

[5] Crouch S.L. and Starfield A.M. (editors). Boundary element methods in solid mechanics, George Allen & Unwin, London, 1983.

[6] Cundall P.A. A computer model for simulating progressive, large-scale movements in blocky rock systems. In: Proceedings of the International Symposium on Rock Mechanics, Nancy, France, 1971: 129–136.

[7] Cundall P.A. Ball – A computer program to model granular medium using the distinct element method, Technical note TN-LN-13, Advanced Technology Group, Dames and Moore, London, 1978: 129–163.

[8] Cundall P.A. and Strack O.D.L. A discrete numerical model for granular assemblies, Geotechnique, 1979, 29(1):47–65.

[9] Cundall P.A. UDEC – A generalized distinct element program for modelling jointed rock, Final Tech. Rep. Eur. Res. Office (US Army Contract DAJA37-79-C-0548), Springer, Netherland, 1980.

[28] Nagle R.K., Saff E.B. and Snider A.D. Fundamentals of differential equations and boundary value problems, Addison Wesley, USA, 2012.

[29] Polyanin A.D. and Nazaikinskii V.E. Handbook of linear partial differential equations for engineers and scientists, 2nd edition, CRC Press, USA, 2016.

[30] Polyanin A.D., Zaitsev V.F. and Moussiaux A. Handbook of first order partial differential equations, Taylor and Francis, UK, 2002.

[31] Polyanin A.D. and Zaitsev V.F. Handbook of nonlinear partial differential equations, 2nd edition, Chapman & Hall, USA, 2004.

[32] Rao S.S. The finite element method in engineering, Butterworth Heinemann, London, 2011.

[33] Salsa S. Partial differential equations in action – from modelling to theory, Springer, Italy, 2008.

[34] Soare M.V., Teodorescu P.P. and Toma I. Ordinary differential equations with applications to mechanics, Springer, Netherland, 2000.

[35] Sewell G. The numerical solution of ordinary and partial differential equations, John Wiley, USA, 2005.

[36] Trevelyan J. Boundary elements for engineers: theory and applications, Computational Mechanics Publications, Boston, USA, 1994.

[37] Veiga L.B.D., Lipnikov K. and Manzini G. The mimetic finite difference method for elliptic problems, Springer, Switzerland, 2014.

[38] Zienkiewicz O.C., Taylor R.L. and Zhu J.Z. The finite element method: Its basis and fundamentals, 6th edition, Butterworth Heinemann, London, 2011.

[39] Comsol, https://www.comsol.com/

[40] FreeFem++, http://www.freefem.org/

[41] Weber H. Paul du Bois-Reymond, Mathematische Annalen, 1890, 35:457–462.

## Dynamical Systems – Analytical and Computational Techniques

*Edited by Mahmut Reyhanoglu*

Considerable progress has been made in recent years on mathematical techniques for studying dynamical systems that arise in science and engineering. This progress is, to a large extent, due to our increasing ability to model physical processes mathematically and to analyze and solve them, both analytically and numerically. With its eleven chapters, this book brings together important contributions from renowned international researchers to provide an excellent survey of recent advances in dynamical systems theory and applications. The first section consists of seven chapters that focus on analytical techniques, while the second section comprises four chapters that center on computational techniques.