**Generalized Ratio Control of Discrete-Time Systems**


Dušan Krokavec and Anna Filasová

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/67159

#### Abstract


This chapter exposes the important connection between ratio control and state control reflecting an equality constraint for linear discrete-time systems, which allows a significant reduction in computational complexity and effort. Based on an enhanced bounded real lemma form, intended to outperform known approaches, the existence of the state feedback for such a defined singular task is proven, and a design procedure based on linear matrix inequalities is provided. The proposed principle, guaranteeing feasibility of the set of inequalities, improves the steady-state accuracy of the ratio control and essentially reduces the design effort. The approach is illustrated on simulation examples demonstrating the validity of the proposed method.

Keywords: discrete-time systems, ratio control, state feedback, equality constraint, singular systems, linear matrix inequalities

## 1. Introduction

The problem of ratio feedback control is one of the specific topics in the theory of control synthesis. It is well motivated by practical applications but not favorably developed within the state control technique or in combination with the state estimation theory. However, a considerable number of problems in ratio control design have to deal with systems subjected to constraint conditions that are other than linear, or are directly formulated as singular constrained tasks. In the typical case [1, 2], where the system state reflects certain physical entities, constraints usually confine the system state to a region of technological conditions. If the ratio control is not formulated as a task with equality constraints, the application requires further procedures for controlling the evolution of the set-valued ratio. Notably, a special form of these problems can be defined when the system state variables satisfy constraints and are interpreted as descriptor systems [3–6]; but a system with state equality constraints generally does not satisfy the conditions under which the results of descriptor systems can be used. Moreover, if the design task is interpreted as a singular problem, constraint-associated methods have to be developed to design the controller.

© The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

In principle, it is possible to design a controller that stabilizes a system and simultaneously forces its closed-loop properties to satisfy given constraints [7, 8]. Following the idea of linear quadratic (LQ) control application, these approaches rely heavily on set-valued calculus as well as on min-max theory [9, 10], which are not simple and lead to rather cumbersome technical and numerical procedures. A simpler technique, using an equality constraints formulation for discrete-time multiinput/multioutput (MIMO) systems, is introduced in Refs. [11, 12]. Based on the eigenstructure assignment principle, a slight modification of the equality constraint technique is presented in Ref. [13].

Many tasks that arise in state-feedback control formulation can be reduced to standard convex problems that involve matrix inequalities. Generally, optimal solutions of such problems can be computed by using the interior point method [14], which converges in polynomial time with respect to the problem size. A review of the progress made in this field can be found in Refs. [15–17] and the references therein. In the given sense, the stability conditions are expressed in terms of linear matrix inequalities (LMI), which have a notable practical interest due to the existence of numerical LMI solvers [18, 19].

This chapter presents design conditions to obtain a closed-loop system in which at least two state variables are bound by the prescribed ratio. The generalized ratio control principle is reformulated as full-state feedback control with one equality constraint. To solve this problem, the technique of an enhanced bounded real lemma (BRL) representation [20, 21] is exploited to circumvent the potentially ill-conditioned singular task concerning discrete-time systems control design with state equality constraints [22]. Owing to the application of the enhanced BRL, which decouples the Lyapunov matrix and the system matrices, the design task stays well-conditioned. These conditions impose a control that assures asymptotic stability of time-invariant discrete control under the defined equality constraints. The presented approach, based on projecting the target state variables into a subset of the system state space, adapts the idea of applying the LQ control principle in fault-tolerant control and in constrained control of discrete-time stochastic systems [23, 24].

The outline of this chapter is as follows. Continuing the introduction in Section 1, the problem formulation is presented in Section 2. Section 3 is dedicated to the mathematical background supporting the problem solution, and the exploited discrete-time LMI modifications are given in Section 4. These results are used in Section 5 to examine the linearization of the bilinear matrix inequality problems, leading to a convex formulation of the control design conditions that guarantees a feasible solution of the generally singular design task. Subsequently, numerical examples illustrating the basic properties of the proposed method are presented in Section 6, and Section 7 is finally devoted to brief concluding remarks.

Throughout the chapter, the following notations are used: x<sup>T</sup> and X<sup>T</sup> denote the transpose of the vector x and the matrix X, respectively; X < 0 means that the square matrix X is symmetric negative definite; the symbol I<sub>n</sub> represents the nth-order identity matrix; Y<sup>⊖1</sup> denotes the Moore-Penrose pseudoinverse of a nonsquare matrix Y; ∥·∥ represents the Euclidean norm for vectors and the spectral norm for matrices; IR denotes the set of real numbers and IR<sup>n×r</sup> the set of all n × r real matrices.

## 2. Problem formulation


Throughout this chapter, the task is concerned with the design of full-state feedback control for discrete-time linear dynamic systems in such a way that the closed-loop system state variables are constrained to the prescribed ratio. The systems are defined by the set of state equations

$$\mathbf{q}(i+1) = \mathbf{F}\mathbf{q}(i) + \mathbf{G}\mathbf{u}(i),\tag{1}$$

$$\mathbf{y}(i) = \mathbf{C}\mathbf{q}(i),\tag{2}$$

where q(i) ∈ IR<sup>n</sup> is the vector of the state variables, u(i) ∈ IR<sup>r</sup> is the vector of the input variables, y(i) ∈ IR<sup>m</sup> is the vector of the output variables, the nominal system matrices F ∈ IR<sup>n×n</sup>, G ∈ IR<sup>n×r</sup>, and C ∈ IR<sup>m×n</sup> are real matrices, and i ∈ Z<sub>+</sub>.
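The recursion in Eqs. (1) and (2) can be stepped directly. The following sketch simulates the autonomous response of a stable pair; the matrices F, G, C and the dimensions are assumed placeholder values, not taken from the chapter's examples.

```python
# Minimal simulation of q(i+1) = F q(i) + G u(i), y(i) = C q(i).
# F, G, C and the initial state are illustrative assumptions.

def mat_vec(M, v):
    """Multiply a matrix M (list of rows) by a vector v."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def vec_add(a, b):
    return [x + y for x, y in zip(a, b)]

F = [[0.8, 0.1],
     [0.0, 0.5]]           # stable: both eigenvalues inside the unit circle
G = [[1.0],
     [0.5]]
C = [[1.0, 0.0]]

q = [1.0, -1.0]            # initial state q(0)
for i in range(50):
    u = [0.0]              # zero input: the autonomous response decays
    q = vec_add(mat_vec(F, q), mat_vec(G, u))

y = mat_vec(C, q)
print(abs(y[0]) < 1e-3)    # the output has decayed toward zero
```

With both eigenvalues of F inside the unit circle, fifty steps are enough for the state to decay by several orders of magnitude.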

The discrete transfer function matrix of dimension m × r, associated with the system Eqs. (1) and (2) is defined as

$$\mathbf{H}(z) = \mathbf{C}(z\mathbf{I}\_n - \mathbf{F})^{-1}\mathbf{G} = \frac{\tilde{\mathbf{y}}(z)}{\tilde{\mathbf{u}}(z)},\tag{3}$$

where I<sub>n</sub> ∈ IR<sup>n×n</sup> is the identity matrix, ỹ(z) and ũ(z) stand for the Z-transforms of the m-dimensional output vector and the r-dimensional input vector, respectively, and the complex number z is the transform variable of the Z-transform [25].

In practice, the ratio control maintains the relationship between two state variables [26, 27] and is defined for all i ∈ Z<sub>+</sub> as

$$\frac{q\_h(i+1)}{q\_k(i+1)} = a\_h \;\Rightarrow\; q\_h(i+1) - a\_h q\_k(i+1) = 0.\tag{4}$$

Considering the parameter vector e<sub>h</sub>, this task can be expressed by using the system state vector q(i + 1) as

$$\mathbf{e}\_h^T\mathbf{q}(i+1) = 0,\tag{5}$$

where, with the element −a<sub>h</sub> standing at the kth position,

$$\mathbf{e}\_h^T = \begin{bmatrix} 0\_1 & \cdots & 1\_h & \cdots & -a\_h & \cdots & 0\_n \end{bmatrix},\tag{6}$$

$$\boldsymbol{q}^{T}(\mathbf{i}+1) = \begin{bmatrix} q\_1(\mathbf{i}+1) & \cdots & q\_h(\mathbf{i}+1) & \cdots & q\_k(\mathbf{i}+1) & \cdots & q\_n(\mathbf{i}+1) \end{bmatrix}.\tag{7}$$

It is evident that the generalized ratio control can be defined by a composed structure of e, as well as by a structured matrix E [28].
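As a hypothetical illustration of Eqs. (4)–(7), the sketch below builds e<sub>h</sub> for assumed values n = 4 and a<sub>h</sub> = 2.5, with h and k chosen as the first and third state variables, and verifies that the ratio condition Eq. (4) and the inner-product form Eq. (5) agree.

```python
# Constraint vector e_h of Eq. (6): +1 at position h, -a_h at position k.
# n, h, k, a_h and the sample state are illustrative assumptions.

def ratio_constraint_vector(n, h, k, a_h):
    e = [0.0] * n
    e[h] = 1.0
    e[k] = -a_h
    return e

n, h, k, a_h = 4, 0, 2, 2.5           # zero-based indices for q_1 and q_3
e_h = ratio_constraint_vector(n, h, k, a_h)

# A state that satisfies q_h(i+1) = a_h * q_k(i+1):
q_next = [5.0, 0.3, 2.0, -1.0]        # 5.0 == 2.5 * 2.0

inner = sum(ei * qi for ei, qi in zip(e_h, q_next))   # e_h^T q(i+1)
print(inner == 0.0, q_next[h] / q_next[k] == a_h)
```

Stacking several such vectors as rows yields the structured matrix E of Eq. (9).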

The task formulated above leads to a design problem that can generally be defined as stable closed-loop system synthesis using a linear full-state feedback controller of the form

$$\mathfrak{u}(i) = -\mathbf{K}\mathfrak{q}(i),\tag{8}$$

where K ∈ IR<sup>r×n</sup> is the controller feedback gain matrix, and the design constraint is considered in the general matrix equality form

$$Eq(i+1) = 0,\tag{9}$$

with E ∈ IR<sup>p×n</sup>, rank E = p ≤ r. In general, the matrix E reflects the prescribed fixed ratio of two or more state variables. The equality Eq. (9) evidently implies ΛEq(i + 1) = 0, where Λ ∈ IR<sup>p×p</sup> is an arbitrary matrix.

It is considered in the following that the discrete-time system is controllable and observable, that is, rank[zI<sub>n</sub> − F, G] = n ∀z ∈ C and rank[zI<sub>n</sub> − F<sup>T</sup>, C<sup>T</sup>] = n ∀z ∈ C, respectively [29], and that all state variables are measurable.
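For a low-order case, the controllability half of this assumption is easy to test: with n = 2 and a single input, the rank condition on [zI<sub>n</sub> − F, G] for all z is equivalent to the controllability matrix [G FG] being nonsingular. A sketch with assumed matrices:

```python
# For n = 2 the PBH condition rank[zI - F, G] = n for all z is
# equivalent to rank[G  FG] = 2, i.e. det[G  FG] != 0.
# F and G are illustrative assumed values.

F = [[0.8, 0.1],
     [0.0, 0.5]]
G = [1.0, 0.5]                           # single-input column, stored flat

FG = [F[0][0] * G[0] + F[0][1] * G[1],   # F times G, first entry
      F[1][0] * G[0] + F[1][1] * G[1]]   # second entry
det_ctrb = G[0] * FG[1] - G[1] * FG[0]   # det of the 2x2 matrix [G  FG]
print(det_ctrb != 0)                     # nonzero: the pair (F, G) is controllable
```

The observability condition is checked the same way on the pair (F<sup>T</sup>, C<sup>T</sup>).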

## 3. Basic preliminaries

Proposition 1. (Matrix pseudoinverse) Let Θ be a matrix variable and let A, B, and Π be known nonsquare matrices of appropriate dimensions such that

$$A\Theta B = \Pi.\tag{10}$$

Then all solutions for Θ are given by

$$\boldsymbol{\Theta} = \mathbf{A}^{\ominus 1}\boldsymbol{\Pi}\mathbf{B}^{\ominus 1} + \boldsymbol{\Theta}^{\circ} - \mathbf{A}^{\ominus 1}\mathbf{A}\boldsymbol{\Theta}^{\circ}\mathbf{B}\mathbf{B}^{\ominus 1},\tag{11}$$

where

$$\mathbf{A}^{\ominus 1} = \mathbf{A}^{\mathrm{T}} \left(\mathbf{A}\mathbf{A}^{\mathrm{T}}\right)^{-1}, \quad \mathbf{B}^{\ominus 1} = \left(\mathbf{B}^{\mathrm{T}}\mathbf{B}\right)^{-1}\mathbf{B}^{\mathrm{T}},\tag{12}$$

while A<sup>⊖1</sup> is the left Moore-Penrose pseudoinverse of A, B<sup>⊖1</sup> is the right Moore-Penrose pseudoinverse of B, and Θ° is an arbitrary matrix of appropriate dimension.

Proof. (see, e.g., Ref. [15])
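Proposition 1 can be spot-checked numerically. In the sketch below, A is 1 × 2 and B is 2 × 1, so AA<sup>T</sup> and B<sup>T</sup>B in Eq. (12) are scalars and no general matrix inversion is needed; all numeric values are assumptions for illustration.

```python
# Verify that Theta from Eq. (11), built with the pseudoinverses of
# Eq. (12), satisfies A Theta B = Pi of Eq. (10).
# A, B, Pi and Theta0 are illustrative assumed values.

A = [2.0, 1.0]          # A in IR^{1x2}, stored flat
B = [1.0, 3.0]          # B in IR^{2x1}, stored flat
Pi = 4.0                # scalar right-hand side
Theta0 = [[1.0, -2.0],
          [0.5, 0.7]]   # arbitrary Theta-degree

AAt = sum(a * a for a in A)          # A A^T, a scalar here
BtB = sum(b * b for b in B)          # B^T B, a scalar here
A_left = [a / AAt for a in A]        # A^{-L} = A^T (A A^T)^{-1}, 2x1
B_right = [b / BtB for b in B]       # B^{-R} = (B^T B)^{-1} B^T, 1x2

# Scalar A Theta0 B, needed for the projection term of Eq. (11):
ATheta0B = sum(A[r] * sum(Theta0[r][c] * B[c] for c in range(2))
               for r in range(2))

# Theta per Eq. (11), written elementwise for the 2x2 case:
Theta = [[A_left[r] * Pi * B_right[c] + Theta0[r][c]
          - A_left[r] * ATheta0B * B_right[c]
          for c in range(2)] for r in range(2)]

# Check A Theta B == Pi:
res = sum(A[r] * sum(Theta[r][c] * B[c] for c in range(2)) for r in range(2))
print(abs(res - Pi) < 1e-9)
```

Since AA<sup>⊖1</sup> and B<sup>⊖1</sup>B equal the identity here, the Θ°-dependent terms cancel exactly in AΘB, whatever Θ° is chosen.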

Proposition 2. Let Ξ ∈ IR<sup>n×n</sup> be a real square matrix with nonrepeated eigenvalues, satisfying the equality constraint

$$e^T \Xi = \mathbf{0},\tag{13}$$

then one of its eigenvalues is zero, and the (normalized) vector e<sup>T</sup> is the left eigenvector of Ξ associated with the zero eigenvalue.

Proof. If Ξ ∈ IR<sup>n×n</sup> is a real square matrix satisfying the above eigenvalue limitation, then the eigenvalue decomposition of Ξ takes the following form

$$
\boldsymbol{\Xi} = \boldsymbol{\mathsf{N}} \boldsymbol{\Sigma} \mathbf{M}^T,\tag{14}
$$

$$\mathbf{N} = \begin{bmatrix} \mathbf{n}\_1 & \cdots & \mathbf{n}\_n \end{bmatrix}, \quad \mathbf{M} = \begin{bmatrix} \mathbf{m}\_1 & \cdots & \mathbf{m}\_n \end{bmatrix}, \quad \mathbf{M}^T\mathbf{N} = \mathbf{I}\_n, \quad \boldsymbol{\Sigma} = \mathrm{diag}\begin{bmatrix} z\_1 & \cdots & z\_n \end{bmatrix},\tag{15}$$

where n<sub>l</sub> is the right eigenvector and m<sub>l</sub><sup>T</sup> is the left eigenvector associated with the eigenvalue z<sub>l</sub> of Ξ, and {z<sub>l</sub>, l = 1, 2, …, n} is the set of the eigenvalues of Ξ. Then Eq. (13) can be rewritten as follows:

$$\mathbf{0} = \mathbf{e}^T \begin{bmatrix} \mathbf{n}\_1 & \cdots & \mathbf{n}\_h & \cdots & \mathbf{n}\_n \end{bmatrix} \mathrm{diag}\begin{bmatrix} z\_1 & \cdots & z\_h & \cdots & z\_n \end{bmatrix} \mathbf{M}^T.\tag{16}$$

If e<sup>T</sup> = m<sub>h</sub><sup>T</sup>, then the orthogonality property in Eq. (15) implies

$$\mathbf{0} = \begin{bmatrix} 0\_1 & \cdots & 1\_h & \cdots & 0\_n \end{bmatrix} \mathrm{diag}\begin{bmatrix} z\_1 & \cdots & z\_h & \cdots & z\_n \end{bmatrix} \mathbf{M}^T\tag{17}$$

and it is evident that Eq. (17) can be satisfied only if zh = 0. This concludes the proof. □
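For n = 2, Proposition 2 can be checked through the characteristic polynomial λ² − tr(Ξ)λ + det(Ξ): the constraint e<sup>T</sup>Ξ = 0 with a nonzero e forces det(Ξ) = 0, so λ = 0 is a root. The matrix below is an assumed example constructed to satisfy Eq. (13).

```python
# If e^T Xi = 0 for a nonzero e, then det(Xi) = 0 and lambda = 0
# is an eigenvalue of Xi (Proposition 2). Xi and e are illustrative.

e = [1.0, -2.0]
# Build Xi with rows chosen so that e[0]*row0 + e[1]*row1 = 0:
row1 = [1.5, 0.5]
row0 = [-e[1] * x / e[0] for x in row1]      # here row0 = 2 * row1
Xi = [row0, row1]

# e^T Xi is the zero row vector:
left = [e[0] * Xi[0][c] + e[1] * Xi[1][c] for c in range(2)]
assert all(abs(v) < 1e-12 for v in left)

# Eigenvalues from lambda^2 - tr*lambda + det = 0:
tr = Xi[0][0] + Xi[1][1]
det = Xi[0][0] * Xi[1][1] - Xi[0][1] * Xi[1][0]
eigs = (0.0, tr) if abs(det) < 1e-12 else None
print(abs(det) < 1e-12, eigs)
```

The singularity of Ξ is exactly what the constrained closed-loop system matrix will exhibit in the later design sections.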

Proposition 3. (Quadratic performance) Given a stable system of the structure Eqs. (1) and (2), then it yields

$$\sum\_{l=0}^{\infty}\left(\mathbf{y}^T(l)\mathbf{y}(l) - \gamma\_{\infty}^2\,\mathbf{u}^T(l)\mathbf{u}(l)\right) > 0,\tag{18}$$

where γ<sup>∞</sup> ∈ IR is the H<sup>∞</sup> norm of the transfer function matrix of the system Eq. (3).

Proof. Since Eq. (3) implies

$$
\tilde{\boldsymbol{y}}(z) = \mathbf{H}(z)\tilde{\boldsymbol{u}}(z),
\tag{19}
$$

then, evidently,

The task formulated above means the design problem that can be generally defined as the stable closed-loop system synthesis using the linear full-state feedback controller of the form

where K ∈ IR<sup>r</sup> <sup>×</sup> <sup>n</sup> is the controller feedback gain matrix, and the design constraint is considered

more state variables. The equality Eq. (9) evidently implies ΛEq(i + 1) = 0, where Λ ∈ IR<sup>p</sup> <sup>×</sup> <sup>p</sup> is an

It is considered in the following the discrete-time system is controllable and observable that is, rankð Þ¼ <sup>z</sup><sup>I</sup> � <sup>F</sup>, <sup>G</sup> <sup>n</sup> <sup>∀</sup>z<sup>∈</sup> <sup>C</sup> and rank <sup>z</sup><sup>I</sup> � <sup>F</sup><sup>T</sup>,C<sup>T</sup> <sup>¼</sup> <sup>n</sup> <sup>∀</sup>z<sup>∈</sup> <sup>C</sup>, respectively [29], and that all

Proposition 1. (Matrix pseudoinverse) Let Θ is a matrix variable and A, B, and Π are known

<sup>Λ</sup>B<sup>⊝</sup><sup>1</sup> <sup>þ</sup> <sup>Θ</sup>° � <sup>A</sup><sup>⊝</sup><sup>1</sup>

while A<sup>⊝</sup> <sup>1</sup> is the left Moore-Penrose pseudoinverse of A, B<sup>⊝</sup> <sup>1</sup> is the right Moore-Penrose pseudoinverse

Proposition 2. Let Ξ ∈ IR<sup>n</sup> <sup>×</sup> <sup>n</sup> is a real square matrix with nonrepeated eigenvalues, satisfying the

, rank E = p ≤ r. In general, the matrix E reflects prescribed fixed ratio of two or

in the general matrix equality form

80 Dynamical Systems - Analytical and Computational Techniques

state variables are measurable.

3. Basic preliminaries

Then all solution to Θ means that

Proof. (see, e.g., Ref. [15])

associated with the zero eigenvalue.

equality constraint

where

nonsquare matrices of appropriate dimensions such that

<sup>Θ</sup> <sup>¼</sup> <sup>A</sup><sup>⊝</sup><sup>1</sup>

of B and Θ° is an arbitrary matrix of appropriate dimension.

then one from its eigenvalues is zero, and the (normalized) vector e

<sup>A</sup><sup>⊝</sup><sup>1</sup> <sup>¼</sup> <sup>A</sup><sup>T</sup> AA<sup>T</sup> �<sup>1</sup>

with E ∈ IR<sup>p</sup> <sup>×</sup> <sup>n</sup>

arbitrary matrix.

uðÞ¼� i Kqð Þi ; (8)

Eqð Þ¼ i þ 1 0; (9)

AΘB ¼ Π: (10)

<sup>e</sup><sup>T</sup><sup>Ξ</sup> <sup>¼</sup> <sup>0</sup>; (13)

; (11)

B<sup>T</sup>; (12)

<sup>T</sup> is the left raw eigenvector of Ξ

AΘ°BB<sup>⊝</sup><sup>1</sup>

, <sup>B</sup><sup>⊝</sup><sup>1</sup> <sup>¼</sup> <sup>B</sup><sup>T</sup><sup>B</sup> �<sup>1</sup>

$$\|\tilde{\mathbf{y}}(z)\| \le \|\mathbf{H}(z)\|\_2\,\|\tilde{\mathbf{u}}(z)\|,\tag{20}$$

where ∥H(z)∥<sub>2</sub> is the H<sub>2</sub> norm of the discrete transfer function matrix H(z).

Since the H<sup>∞</sup> norm property states

$$\frac{1}{\sqrt{m}}\|\mathbf{H}(z)\|\_{\infty} \le \|\mathbf{H}(z)\|\_2 \le \sqrt{r}\,\|\mathbf{H}(z)\|\_{\infty},\tag{21}$$

using the notation ∥ H(z) ∥<sup>∞</sup> = γ∞, then Eq. (21) can be naturally rewritten as

$$\frac{1}{\sqrt{m}} \le 1 < \frac{1}{\gamma\_{\infty}}\frac{\|\tilde{\mathbf{y}}(z)\|}{\|\tilde{\mathbf{u}}(z)\|} \le \frac{1}{\gamma\_{\infty}}\|\mathbf{H}(z)\|\_2 \le \sqrt{r}.\tag{22}$$

Thus, based on the Parseval's theorem, Eq. (22) gives

$$1 < \frac{\|\tilde{\mathbf{y}}(z)\|}{\gamma\_{\infty}\|\tilde{\mathbf{u}}(z)\|} = \frac{\sqrt{\sum\_{i=0}^{\infty}\mathbf{y}^T(i)\mathbf{y}(i)}}{\gamma\_{\infty}\sqrt{\sum\_{i=0}^{\infty}\mathbf{u}^T(i)\mathbf{u}(i)}}\tag{23}$$

and using squares of the elements, the inequality Eq. (23) subsequently results in

$$\sum\_{i=0}^{\infty} \mathfrak{y}^T(i)\mathfrak{y}(i) - \gamma\_{\infty}^2 \sum\_{i=0}^{\infty} \mathfrak{u}^T(i)\mathfrak{u}(i) > 0. \tag{24}$$

Thus, Eq. (24) implies Eq. (18). This concludes the proof. □

If it is not in contradiction with other design constraints, Eq. (18) can be used as the extension to a Lyapunov function candidate for linear discrete-time systems, since it is positive.

## 4. Quadratic performances

The above presented assumptions are imposed to obtain LMI structures exploiting the H<sub>∞</sub> norm, known as the bounded real lemma LMIs. To simplify the proofs of the theorems in the following, proof sketches of the BRL are presented, since several versions of the BRL can be constructed.

Proposition 4. (Bounded real lemma) The autonomous system Eqs. (1) and (2) is stable with the quadratic performance γ<sub>∞</sub> if there exist a symmetric positive definite matrix P ∈ IR<sup>n×n</sup> and a positive scalar γ<sub>∞</sub> ∈ IR such that

$$\mathbf{P} = \mathbf{P}^T > 0, \qquad \gamma\_{\infty} > 0,\tag{25}$$

$$
\begin{bmatrix}
-\mathbf{P} & \* & \* & \* \\
\mathbf{F}^T\mathbf{P} & -\mathbf{P} & \* & \* \\
\mathbf{G}^T\mathbf{P} & \mathbf{0} & -\gamma\_{\infty}\mathbf{I}\_r & \* \\
\mathbf{0} & \mathbf{C} & \mathbf{0} & -\gamma\_{\infty}\mathbf{I}\_m
\end{bmatrix} < 0,\tag{26}
$$

where I<sub>r</sub> ∈ IR<sup>r×r</sup> and I<sub>m</sub> ∈ IR<sup>m×m</sup> are identity matrices.

Hereafter, ∗ denotes the symmetric item in a symmetric matrix.

Proof. (compare, e.g., Refs. [16] and [23]) Defining the Lyapunov function candidate as follows:

$$v(\mathbf{q}(i)) = \mathbf{q}^T(i)\mathbf{P}\mathbf{q}(i) + \gamma\_{\infty}^{-1}\sum\_{l=0}^{i-1}\left(\mathbf{y}^T(l)\mathbf{y}(l) - \gamma\_{\infty}^2\mathbf{u}^T(l)\mathbf{u}(l)\right) > 0,\tag{27}$$

then Eq. (18) implies that, with the H<sub>∞</sub> norm γ<sub>∞</sub> of the transfer function matrix Eq. (3), the inequality Eq. (27) is positive. The forward difference of Eq. (27) along a solution of the autonomous system Eq. (1) can be written as


$$\begin{split} \Delta v(\mathbf{q}(i)) &= v(\mathbf{q}(i+1)) - v(\mathbf{q}(i))\\ &= \mathbf{q}^T(i+1)\mathbf{P}\mathbf{q}(i+1) - \mathbf{q}^T(i)\mathbf{P}\mathbf{q}(i) + \gamma\_{\infty}^{-1}\mathbf{y}^T(i)\mathbf{y}(i) - \gamma\_{\infty}\mathbf{u}^T(i)\mathbf{u}(i) < 0 \end{split}\tag{28}$$

and, using the description of the state system Eqs. (1) and (2), the inequality Eq. (28) becomes

$$\begin{split} \Delta v(\mathbf{q}(i)) &= \mathbf{q}^T(i)\left(\gamma\_{\infty}^{-1}\mathbf{C}^T\mathbf{C} - \mathbf{P} + \mathbf{F}^T\mathbf{P}\mathbf{F}\right)\mathbf{q}(i) + \mathbf{u}^T(i)\mathbf{G}^T\mathbf{P}\mathbf{F}\mathbf{q}(i)\\ &\quad + \mathbf{q}^T(i)\mathbf{F}^T\mathbf{P}\mathbf{G}\mathbf{u}(i) + \mathbf{u}^T(i)\left(\mathbf{G}^T\mathbf{P}\mathbf{G} - \gamma\_{\infty}\mathbf{I}\_r\right)\mathbf{u}(i) < 0. \end{split}\tag{29}$$

Thus, introducing the notation

$$\mathbf{q}\_c^T(i) = \begin{bmatrix} \mathbf{q}^T(i) & \mathbf{u}^T(i) \end{bmatrix},\tag{30}$$

one obtains

$$\Delta v(\mathbf{q}\_c(i)) = \mathbf{q}\_c^T(i)\mathbf{P}\_c\mathbf{q}\_c(i) < 0,\tag{31}$$

where


$$\mathbf{P}\_c = \begin{bmatrix} \mathbf{F}^T\mathbf{P}\mathbf{F} + \gamma\_{\infty}^{-1}\mathbf{C}^T\mathbf{C} - \mathbf{P} & \mathbf{F}^T\mathbf{P}\mathbf{G} \\ \mathbf{G}^T\mathbf{P}\mathbf{F} & \mathbf{G}^T\mathbf{P}\mathbf{G} - \gamma\_{\infty}\mathbf{I}\_r \end{bmatrix} < 0.\tag{32}$$

Using the Schur complement property with respect to the matrix element γ<sub>∞</sub><sup>−1</sup>C<sup>T</sup>C, Eq. (32) can be rewritten as

$$\mathbf{P}\_c = \begin{bmatrix} -\mathbf{P} & \mathbf{0} & \mathbf{C}^T \\ \mathbf{0} & -\gamma\_{\infty}\mathbf{I}\_r & \mathbf{0} \\ \mathbf{C} & \mathbf{0} & -\gamma\_{\infty}\mathbf{I}\_m \end{bmatrix} + \begin{bmatrix} \mathbf{F}^T\mathbf{P} \\ \mathbf{G}^T\mathbf{P} \\ \mathbf{0} \end{bmatrix}\mathbf{P}^{-1}\begin{bmatrix} \mathbf{P}\mathbf{F} & \mathbf{P}\mathbf{G} & \mathbf{0} \end{bmatrix} < 0,\tag{33}$$

then, applying the dual Schur complement property, Eq. (33) implies Eq. (26). This concludes the proof. □
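As a numeric sanity check of Proposition 4, consider the scalar case n = m = r = 1 with assumed values f = 0.5 and g = c = 1, for which H(z) = 1/(z − 0.5) and the H<sub>∞</sub> norm equals 1/(1 − 0.5) = 2. Choosing p = 1 and γ<sub>∞</sub> = 3 > 2 makes the 2 × 2 matrix of Eq. (32) negative definite.

```python
# Scalar check of Eq. (32): P_c = [[f^2 p + c^2/gamma - p,  f p g],
#                                  [g p f,  g^2 p - gamma]] < 0.
# f, g, c, p and gamma are assumed illustrative values; for
# H(z) = 1/(z - 0.5) the H-infinity norm is 1/(1 - 0.5) = 2.

f, g, c = 0.5, 1.0, 1.0
p, gamma = 1.0, 3.0                  # gamma exceeds the H-infinity norm 2

a11 = f * f * p + c * c / gamma - p  # F^T P F + gamma^{-1} C^T C - P
a12 = f * p * g                      # F^T P G
a22 = g * g * p - gamma              # G^T P G - gamma I_r

# A symmetric 2x2 matrix is negative definite iff a11 < 0 and det > 0:
det = a11 * a22 - a12 * a12
print(a11 < 0 and det > 0)           # the BRL condition (32) holds
```

Lowering γ toward the H<sub>∞</sub> norm shrinks det toward zero, which is consistent with γ<sub>∞</sub> being the infimum of achievable quadratic performances.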

Direct application of the second Lyapunov method [30, 31] and BRL in the structure given by Eqs. (25) and (26) for affine uncertain systems as well as in constrained control design is in general ill-conditioned owing to singular design conditions [13]. To circumvent this problem, an enhanced LMI representation of BRL is proposed, where design condition proof is based on another form of LMIs.

Proposition 5. (Enhanced LMI representation of BRL) The autonomous system Eqs. (1) and (2) is stable with the quadratic performance γ<sub>∞</sub> if there exist a symmetric positive definite matrix P ∈ IR<sup>n×n</sup>, a regular square matrix Q ∈ IR<sup>n×n</sup>, and a positive scalar γ<sub>∞</sub> ∈ IR such that

$$\mathbf{P} = \mathbf{P}^T > 0, \qquad \gamma\_{\infty} > 0,\tag{34}$$

$$\mathbf{Y} = \begin{bmatrix} \mathbf{P} - \mathbf{Q} - \mathbf{Q}^T & \* & \* & \* \\ \mathbf{F}^T\mathbf{Q}^T & -\mathbf{P} & \* & \* \\ \mathbf{G}^T\mathbf{Q}^T & \mathbf{0} & -\gamma\_{\infty}\mathbf{I}\_r & \* \\ \mathbf{0} & \mathbf{C} & \mathbf{0} & -\gamma\_{\infty}\mathbf{I}\_m \end{bmatrix} < 0,\tag{35}$$

where I<sub>r</sub> ∈ IR<sup>r×r</sup> and I<sub>m</sub> ∈ IR<sup>m×m</sup> are identity matrices.

Proof. Since Eq. (1) can be rewritten as

$$\mathbf{F}\boldsymbol{q}(i) + \mathbf{G}\boldsymbol{u}(i) - \boldsymbol{q}(i+1) = \mathbf{0},\tag{36}$$

with an arbitrary square matrix Q ∈ IR<sup>n×n</sup>, it yields

$$\mathbf{q}^T(i+1)\mathbf{Q}(\mathbf{F}\mathbf{q}(i) + \mathbf{G}\mathbf{u}(i) - \mathbf{q}(i+1)) = \mathbf{0}.\tag{37}$$
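The key point is that the term in Eq. (37) is identically zero along any trajectory of Eq. (1), whatever Q is, so adding it to the Lyapunov difference changes nothing. A numeric spot check with assumed matrices:

```python
# The slack term q^T(i+1) Q (F q(i) + G u(i) - q(i+1)) of Eq. (37)
# vanishes along trajectories of Eq. (1) for any Q.
# F, G, Q, q(0) and u are illustrative assumed values.

def mat_vec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

F = [[0.8, 0.1], [0.0, 0.5]]
G = [[1.0], [0.5]]
Q = [[2.0, -1.0], [0.3, 4.0]]        # arbitrary square matrix

q = [1.0, -1.0]
u = [0.7]
q_next = [a + b for a, b in zip(mat_vec(F, q), mat_vec(G, u))]  # Eq. (1)

# Residual of Eq. (36) along the trajectory:
residual = [a + b - qn for a, b, qn in
            zip(mat_vec(F, q), mat_vec(G, u), q_next)]
slack = sum(qn * s for qn, s in zip(q_next, mat_vec(Q, residual)))
print(slack == 0.0)
```

This is exactly what lets Q enter the LMI of Eq. (35) as a free slack variable, decoupled from the Lyapunov matrix P.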

Now, not substituting Eq. (1) into Eq. (28), but adding Eq. (37) and its transposition to Eq. (28), it can be obtained that

$$\begin{split} \Delta v(\mathbf{q}(i)) &= \mathbf{q}^T(i+1)\mathbf{P}\mathbf{q}(i+1) - \mathbf{q}^T(i)\mathbf{P}\mathbf{q}(i) + \gamma\_{\infty}^{-1}\mathbf{y}^T(i)\mathbf{y}(i) - \gamma\_{\infty}\mathbf{u}^T(i)\mathbf{u}(i)\\ &\quad + \left(\mathbf{F}\mathbf{q}(i) + \mathbf{G}\mathbf{u}(i) - \mathbf{q}(i+1)\right)^T\mathbf{Q}^T\mathbf{q}(i+1)\\ &\quad + \mathbf{q}^T(i+1)\mathbf{Q}\left(\mathbf{F}\mathbf{q}(i) + \mathbf{G}\mathbf{u}(i) - \mathbf{q}(i+1)\right) < 0. \end{split}\tag{38}$$

Thus, considering Eq. (2), then Eq. (38) can be rewritten as

$$\mathbf{q}^{\circ T}(i)\mathbf{P}^{\circ}\mathbf{q}^{\circ}(i) < 0,\tag{39}$$

where

$$\mathbf{q}^{\circ T}(i) = \begin{bmatrix} \mathbf{q}^T(i) & \mathbf{q}^T(i+1) & \mathbf{u}^T(i) \end{bmatrix} \tag{40}$$

and

$$\mathbf{P}^{\circ} = \begin{bmatrix} -\mathbf{P} + \gamma\_{\infty}^{-1}\mathbf{C}^T\mathbf{C} & \mathbf{F}^T\mathbf{Q}^T & \mathbf{0} \\ \mathbf{Q}\mathbf{F} & \mathbf{P} - \mathbf{Q} - \mathbf{Q}^T & \mathbf{Q}\mathbf{G} \\ \mathbf{0} & \mathbf{G}^T\mathbf{Q}^T & -\gamma\_{\infty}\mathbf{I}\_r \end{bmatrix} < 0. \tag{41}$$

Since Eq. (41) can be written as

$$\mathbf{P}^{\circ} = \begin{bmatrix} -\mathbf{P} & \mathbf{F}^T\mathbf{Q}^T & \mathbf{0} \\ \mathbf{Q}\mathbf{F} & \mathbf{P} - \mathbf{Q} - \mathbf{Q}^T & \mathbf{Q}\mathbf{G} \\ \mathbf{0} & \mathbf{G}^T\mathbf{Q}^T & -\gamma\_{\infty}\mathbf{I}\_r \end{bmatrix} + \gamma\_{\infty}^{-1}\begin{bmatrix} \mathbf{C}^T \\ \mathbf{0} \\ \mathbf{0} \end{bmatrix}\begin{bmatrix} \mathbf{C} & \mathbf{0} & \mathbf{0} \end{bmatrix} < 0,\tag{42}$$

then, using the dual Schur complement property, Eq. (42) can be transformed into the form

$$
\begin{bmatrix}
-\gamma\_{\infty}\mathbf{I}\_m & \mathbf{C} & \mathbf{0} & \mathbf{0} \\
\mathbf{C}^T & -\mathbf{P} & \mathbf{F}^T\mathbf{Q}^T & \mathbf{0} \\
\mathbf{0} & \mathbf{Q}\mathbf{F} & \mathbf{P} - \mathbf{Q} - \mathbf{Q}^T & \mathbf{Q}\mathbf{G} \\
\mathbf{0} & \mathbf{0} & \mathbf{G}^T\mathbf{Q}^T & -\gamma\_{\infty}\mathbf{I}\_r
\end{bmatrix} < 0. \tag{43}
$$

To obtain an LMI structure visually comparable with Eq. (26), the following block permutation matrix is defined


$$\mathbf{T}\_a^{\circ} = \begin{bmatrix} \mathbf{0} & \mathbf{0} & \mathbf{I}\_n & \mathbf{0} \\ \mathbf{0} & \mathbf{I}\_n & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{I}\_r \\ \mathbf{I}\_m & \mathbf{0} & \mathbf{0} & \mathbf{0} \end{bmatrix}. \tag{44}$$

Then, premultiplying the left side of Eq. (43) by $\mathbf{T}\_a^{\circ}$ and postmultiplying the right side of Eq. (43) by its transpose leads to the inequality in Eq. (35). This concludes the proof. □
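The effect of this congruence transform can be checked numerically. The sketch below (a minimal illustration; the block sizes n, r, m are assumptions, not values from the chapter) assembles the block permutation matrix of Eq. (44) from identity blocks and confirms that it moves the q(i+1) block of Eq. (43) into the leading position, as in Eq. (35), while preserving the spectrum.

```python
import numpy as np

# Illustrative block sizes (n states, r inputs, m outputs).
n, r, m = 3, 2, 2
sizes = [m, n, n, r]          # block ordering of Eq. (43): [y, q(i), q(i+1), u]
perm = [2, 1, 3, 0]           # row k of Eq. (44) picks old block perm[k]

# Assemble the block permutation matrix of Eq. (44) from identity blocks.
offsets = np.concatenate(([0], np.cumsum(sizes)))
N = offsets[-1]
T = np.zeros((N, N))
row = 0
for k in perm:
    T[row:row + sizes[k], offsets[k]:offsets[k + 1]] = np.eye(sizes[k])
    row += sizes[k]

# A random symmetric matrix partitioned as in Eq. (43).
rng = np.random.default_rng(0)
M = rng.standard_normal((N, N))
M = M + M.T

Mp = T @ M @ T.T              # the congruence transform used in the proof
# The leading n x n block of the transformed matrix is the old (3,3) block.
old_33 = M[offsets[2]:offsets[3], offsets[2]:offsets[3]]
```

Because T is a permutation matrix, the congruence T M Tᵀ only reorders the block rows and columns, so negativity of Eq. (43) and Eq. (35) is equivalent.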

It is evident that the Lyapunov matrix P is separated from the system matrix parameters F, G, and C, i.e., there are no terms containing a product of P with any of them. By introducing the slack variable matrix Q, the products are relaxed to the new forms QF and QG, where Q need not be symmetric and positive definite. This makes it possible to obtain a robust BRL for linear systems with parametric uncertainties, as well as for singular system matrices.

Considering a symmetric positive definite matrix $\mathbf{Q} \in \mathbb{R}^{n \times n}$, the following symmetric enhanced LMI representation of the BRL is evidently obtained.

Corollary 1. (Enhanced symmetric LMI representation of BRL) The autonomous system Eqs. (1) and (2) is stable with the quadratic performance $\gamma\_{\infty}$, if there exist symmetric positive definite matrices $\mathbf{P}, \mathbf{Q} \in \mathbb{R}^{n \times n}$ and a positive scalar $\gamma\_{\infty} \in \mathbb{R}$ such that

$$\mathbf{P} = \mathbf{P}^T > 0, \quad \mathbf{Q} = \mathbf{Q}^T > 0, \quad \gamma\_{\infty} > 0,\tag{45}$$

$$
\begin{bmatrix}
\mathbf{P} - 2\mathbf{Q} & \* & \* & \* \\
\mathbf{F}^T \mathbf{Q} & -\mathbf{P} & \* & \* \\
\mathbf{G}^T \mathbf{Q} & \mathbf{0} & -\gamma\_{\infty} \mathbf{I}\_r & \* \\
\mathbf{0} & \mathbf{C} & \mathbf{0} & -\gamma\_{\infty} \mathbf{I}\_m
\end{bmatrix} < 0,\tag{46}
$$

where $\mathbf{I}\_r \in \mathbb{R}^{r \times r}$ and $\mathbf{I}\_m \in \mathbb{R}^{m \times m}$ are identity matrices.

Note that Corollary 1 provides a condition of existence identical to Proposition 4 if the equality $\mathbf{Q} = \mathbf{P}$ is set.
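For a concrete feasibility check of Corollary 1, one can fix Q = P with P taken from a discrete Lyapunov equation and test the inequality numerically. The sketch below uses a small hypothetical system and a deliberately generous bound γ∞; all matrices are illustrative assumptions, not values from the chapter.

```python
import numpy as np

F = np.array([[0.5, 0.1],
              [0.0, 0.3]])    # Schur-stable, eigenvalues 0.5 and 0.3
G = np.array([[1.0],
              [0.5]])
C = np.array([[1.0, 0.0]])
n, r = G.shape
m = C.shape[0]

# Solve F^T P F - P = -I via row-major vectorization:
# vec(F^T P F) = (F^T kron F^T) vec(P).
vecP = np.linalg.solve(np.eye(n * n) - np.kron(F.T, F.T), np.eye(n).flatten())
P = vecP.reshape(n, n)
P = (P + P.T) / 2             # enforce exact symmetry

gamma = 50.0                  # an assumed, sufficiently large performance level
# The symmetric LMI of Eq. (46) with Q = P, symmetric completion written out.
L46 = np.block([
    [P - 2 * P,        P @ F,            P @ G,              np.zeros((n, m))],
    [F.T @ P,          -P,               np.zeros((n, r)),   C.T],
    [G.T @ P,          np.zeros((r, n)), -gamma * np.eye(r), np.zeros((r, m))],
    [np.zeros((m, n)), C,                np.zeros((m, r)),   -gamma * np.eye(m)],
])
```

Since F is Schur-stable, the Lyapunov solution P is positive definite and the assembled block matrix is negative definite for a large enough γ∞, which is exactly the feasibility the corollary asserts.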

#### 5. Control law parameter design


The state-feedback control problem is to find, for an optimized (or prescribed) scalar $\gamma\_{\infty} > 0$, the state-feedback gain $\mathbf{K}$ such that the control law guarantees the upper bound $\gamma\_{\infty}$ for the closed-loop transfer function, while the closed loop is stable. Note that all the BRL structures presented above, when applied in the control law synthesis, lead to bilinear matrix inequalities and have to be linearized.

Theorem 1. System Eqs. (1) and (2) under the control Eq. (3) is stable with quadratic performance $\gamma\_{\infty}$, if there exist a positive definite symmetric matrix $\mathbf{R} \in \mathbb{R}^{n \times n}$, a matrix $\mathbf{Y} \in \mathbb{R}^{r \times n}$, and a positive scalar $\gamma\_{\infty} \in \mathbb{R}$ such that

$$\mathbf{R} = \mathbf{R}^T > 0, \qquad \gamma\_{\infty} > 0,\tag{47}$$

$$
\begin{bmatrix}
-\mathbf{R} & \* & \* & \* \\
\mathbf{R}\mathbf{F}^T - \mathbf{Y}^T\mathbf{G}^T & -\mathbf{R} & \* & \* \\
\mathbf{G}^T & \mathbf{0} & -\gamma\_{\infty}\mathbf{I}\_r & \* \\
\mathbf{0} & \mathbf{C}\mathbf{R} & \mathbf{0} & -\gamma\_{\infty}\mathbf{I}\_m
\end{bmatrix} < 0. \tag{48}
$$

When these inequalities are satisfied, the control law gain matrix is given as

$$\mathbf{K} = \mathbf{Y} \mathbf{R}^{-1}.\tag{49}$$

Proof. Since $\mathbf{P}$ is positive definite, setting $\mathbf{Q} = \mathbf{P}$ in Eq. (35), the transform matrix $\mathbf{T}\_{\infty}$ can be defined as follows:

$$\mathbf{T}\_{\infty} = \text{diag}[\,\mathbf{R} \quad \mathbf{R} \quad \mathbf{I}\_r \quad \mathbf{I}\_m\,], \quad \mathbf{R} = \mathbf{P}^{-1}. \tag{50}$$

Then, premultiplying the left side of Eq. (35) and postmultiplying the right side of Eq. (35) by $\mathbf{T}\_{\infty}$ gives

$$
\begin{bmatrix}
-\mathbf{R} & \mathbf{F}\mathbf{R} & \mathbf{G} & \mathbf{0} \\
\mathbf{R}\mathbf{F}^T & -\mathbf{R} & \mathbf{0} & \mathbf{R}\mathbf{C}^T \\
\mathbf{G}^T & \mathbf{0} & -\gamma\_{\infty}\mathbf{I}\_r & \mathbf{0} \\
\mathbf{0} & \mathbf{C}\mathbf{R} & \mathbf{0} & -\gamma\_{\infty}\mathbf{I}\_m
\end{bmatrix} < 0. \tag{51}
$$

Inserting $\mathbf{F} \leftarrow \mathbf{F}\_c = \mathbf{F} - \mathbf{G}\mathbf{K}$ into Eq. (51) gives

$$
\begin{bmatrix}
-\mathbf{R} & (\mathbf{F} - \mathbf{G}\mathbf{K})\mathbf{R} & \mathbf{G} & \mathbf{0} \\
\mathbf{R}(\mathbf{F} - \mathbf{G}\mathbf{K})^T & -\mathbf{R} & \mathbf{0} & \mathbf{R}\mathbf{C}^T \\
\mathbf{G}^T & \mathbf{0} & -\gamma\_{\infty}\mathbf{I}\_r & \mathbf{0} \\
\mathbf{0} & \mathbf{C}\mathbf{R} & \mathbf{0} & -\gamma\_{\infty}\mathbf{I}\_m
\end{bmatrix} < 0 \tag{52}
$$

and with

$$\mathbf{Y} = \mathbf{K}\mathbf{R} \tag{53}$$

Eq. (52) implies Eq. (48). This concludes the proof. □

Theorem 2. System Eqs. (1) and (2) under the control Eq. (3) is stable with quadratic performance $\gamma\_{\infty}$, if there exist positive definite symmetric matrices $\mathbf{S}, \mathbf{O} \in \mathbb{R}^{n \times n}$, a matrix $\mathbf{Y} \in \mathbb{R}^{r \times n}$, and a positive scalar $\gamma\_{\infty} \in \mathbb{R}$ such that

$$\mathbf{S} = \mathbf{S}^T > 0, \quad \mathbf{O} = \mathbf{O}^T > 0, \quad \gamma\_{\infty} > 0,\tag{54}$$

$$
\begin{bmatrix}
\mathbf{O} - 2\mathbf{S} & \* & \* & \* \\
\mathbf{S}\mathbf{F}^T - \mathbf{Y}^T\mathbf{G}^T & -\mathbf{O} & \* & \* \\
\mathbf{G}^T & \mathbf{0} & -\gamma\_{\infty}\mathbf{I}\_r & \* \\
\mathbf{0} & \mathbf{C}\mathbf{S} & \mathbf{0} & -\gamma\_{\infty}\mathbf{I}\_m
\end{bmatrix} < 0. \tag{55}
$$

When these inequalities are satisfied, the control law gain matrix is given as

$$\mathbf{K} = \mathbf{Y} \mathbf{S}^{-1}.\tag{56}$$

Proof. Considering that $\mathbf{Q}$ is positive definite, the transform matrix $\mathbf{T}\_{\infty}^{\circ}$ can be defined as follows:

$$\mathbf{T}\_{\infty}^{\circ} = \text{diag}[\,\mathbf{S} \quad \mathbf{S} \quad \mathbf{I}\_r \quad \mathbf{I}\_m\,], \quad \mathbf{S} = \mathbf{Q}^{-1}. \tag{57}$$

Therefore, premultiplying the left side of Eq. (46) and postmultiplying the right side of Eq. (46) by the matrix $\mathbf{T}\_{\infty}^{\circ}$ gives

$$
\begin{bmatrix}
\mathbf{S}\mathbf{P}\mathbf{S} - 2\mathbf{S} & \mathbf{F}\mathbf{S} & \mathbf{G} & \mathbf{0} \\
\mathbf{S}\mathbf{F}^T & -\mathbf{S}\mathbf{P}\mathbf{S} & \mathbf{0} & \mathbf{S}\mathbf{C}^T \\
\mathbf{G}^T & \mathbf{0} & -\gamma\_{\infty}\mathbf{I}\_r & \mathbf{0} \\
\mathbf{0} & \mathbf{C}\mathbf{S} & \mathbf{0} & -\gamma\_{\infty}\mathbf{I}\_m
\end{bmatrix} < 0. \tag{58}
$$

Substituting $\mathbf{F} \leftarrow \mathbf{F}\_c = \mathbf{F} - \mathbf{G}\mathbf{K}$ into Eq. (58) gives

$$
\begin{bmatrix}
\mathbf{S}\mathbf{P}\mathbf{S} - 2\mathbf{S} & (\mathbf{F} - \mathbf{G}\mathbf{K})\mathbf{S} & \mathbf{G} & \mathbf{0} \\
\mathbf{S}(\mathbf{F} - \mathbf{G}\mathbf{K})^T & -\mathbf{S}\mathbf{P}\mathbf{S} & \mathbf{0} & \mathbf{S}\mathbf{C}^T \\
\mathbf{G}^T & \mathbf{0} & -\gamma\_{\infty}\mathbf{I}\_r & \mathbf{0} \\
\mathbf{0} & \mathbf{C}\mathbf{S} & \mathbf{0} & -\gamma\_{\infty}\mathbf{I}\_m
\end{bmatrix} < 0 \tag{59}
$$

and with


$$\mathbf{Y} = \mathbf{K}\mathbf{S}, \quad \mathbf{O} = \mathbf{S}\mathbf{P}\mathbf{S}, \tag{60}$$

Eq. (59) implies Eq. (55). This concludes the proof. □

#### 6. Ratio control design

Using the control law Eq. (3), the closed-loop system equations take the form

$$
\mathbf{q}(i+1) = (\mathbf{F} - \mathbf{G}\mathbf{K})\mathbf{q}(i),\tag{61}
$$

$$\mathbf{y}(i) = \mathbf{C}\mathbf{q}(i). \tag{62}$$

Given a prescribed matrix $\mathbf{E} \in \mathbb{R}^{p \times n}$, rank $\mathbf{E} = p \leq r$, the design constraint Eq. (9) is considered for all nonzero natural numbers $i$. From Proposition 2, it is clear that such a design is a singular task, where Eq. (9) gives

$$\mathbf{E}\mathbf{q}(i+1) = \mathbf{E}(\mathbf{F} - \mathbf{G}\mathbf{K})\mathbf{q}(i) = \mathbf{0},\tag{63}$$

which evidently implies

$$E(F - \mathbf{G}\mathbf{K}) = \mathbf{0}.\tag{64}$$

Evidently, the equality

$$\text{EF} = \text{EGK} \tag{65}$$

can be satisfied, and the closed-loop system matrix $\mathbf{F}\_c = \mathbf{F} - \mathbf{G}\mathbf{K}$ has to be stable (all its eigenvalues lie within the unit circle in the complex plane $\mathcal{Z}$).

Lemma 1. The equivalent state-space description of the system Eqs. (1) and (2) under the control Eq. (3), in which the closed-loop state variables satisfy the condition Eq. (9), is

$$
\mathbf{q}(i+1) = (\mathbf{F} - \mathbf{G}\mathbf{K})\mathbf{q}(i),\tag{66}
$$

$$\mathbf{y}(i) = \mathbf{C}\mathbf{q}(i),\tag{67}$$

where

$$\mathbf{K} = \mathbf{J} + \mathbf{L}\mathbf{K}^{\circ}, \quad \mathbf{J} = (\mathbf{E}\mathbf{G})^{\ominus 1}\mathbf{E}\mathbf{F}, \quad \mathbf{L} = \mathbf{I}\_r - (\mathbf{E}\mathbf{G})^T \left(\mathbf{E}\mathbf{G}(\mathbf{E}\mathbf{G})^T\right)^{-1}\mathbf{E}\mathbf{G} \tag{68}$$

while $\mathbf{L} \in \mathbb{R}^{r \times r}$ is the projection matrix (the orthogonal projector onto the null space $\mathcal{N}\_{EG}$ of $\mathbf{E}\mathbf{G}$ [23]) and $\mathbf{K}^{\circ} \in \mathbb{R}^{r \times n}$ is the ratio control gain matrix.

Proof. Premultiplying the left side of Eq. (65) by an identity matrix written in the form $\mathbf{E}\mathbf{G}(\mathbf{E}\mathbf{G})^T\left(\mathbf{E}\mathbf{G}(\mathbf{E}\mathbf{G})^T\right)^{-1}$ yields

$$\text{EG}(\text{EG})^T \left(\text{EG}(\text{EG})^T\right)^{-1} \text{EF} = \text{EGK},\tag{69}$$

which implies the particular solution

$$\mathbf{K} = (\mathbf{E}\mathbf{G})^{\ominus 1} \mathbf{E} \mathbf{F},\tag{70}$$

where

$$(\mathbf{EG})^{\ominus 1} = (\mathbf{EG})^T \left(\mathbf{EG}(\mathbf{EG})^T\right)^{-1} \tag{71}$$

is the left Moore-Penrose pseudoinverse of EG.
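These pseudoinverse identities are easy to verify numerically. In the sketch below the role of EG is played by a random full-row-rank matrix (an illustrative assumption, with p = 1 and r = 2 as for a single ratio constraint).

```python
import numpy as np

rng = np.random.default_rng(1)
EG = rng.standard_normal((1, 2))            # stand-in for EG, full row rank

EG_pinv = EG.T @ np.linalg.inv(EG @ EG.T)   # (EG)^{⊖1} of Eq. (71)
L = np.eye(2) - EG_pinv @ EG                # the projector L of Eq. (68)
```

The left-inverse property EG(EG)⊖¹ = I_p, together with EG L = 0 and L² = L, is what makes L an orthogonal projector onto the null space of EG.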

Using the equality Eq. (65), Eq. (69) can also be written as

$$\text{EG}(\mathbf{EG})^T \left(\mathbf{EG}(\mathbf{EG})^T\right)^{-1} \mathbf{EGK} = \mathbf{EGK},\tag{72}$$

which implies

$$\mathbb{E}\mathbf{G}\left(\mathbf{I}\_{r} - (\mathbf{E}\mathbf{G})^{T}\left(\mathbf{E}\mathbf{G}(\mathbf{E}\mathbf{G})^{T}\right)^{-1}\mathbf{E}\mathbf{G}\right)\mathbf{K} = \mathbf{0},\tag{73}$$

$$\mathbb{E}\mathbf{G}\left(\mathbf{I}\_r - (\mathbb{E}\mathbf{G})^{\ominus 1}\mathbf{E}\mathbf{G}\right)\mathbf{K} = \mathbf{0},\tag{74}$$

respectively, where $\mathbf{I}\_r \in \mathbb{R}^{r \times r}$ is the identity matrix. It is evident that Eq. (74) can be satisfied only if

$$I\_r - (\mathbf{E}\mathbf{G})^{\ominus 1} \mathbf{E} \mathbf{G} = \mathbf{0}.\tag{75}$$

Thus, Eq. (11) implies all solutions of K as follows


$$\mathbf{K} = (\mathbf{E}\mathbf{G})^{\ominus 1}\mathbf{E}\mathbf{F} + \left(\mathbf{I}\_{\rm r} - (\mathbf{E}\mathbf{G})^{\ominus 1}\mathbf{E}\mathbf{G}\right)\mathbf{K}^{\circ},\tag{76}$$

where $\mathbf{K}^{\circ}$ is an arbitrary matrix of appropriate dimension and, evidently, Eq. (76) gives Eq. (68). This concludes the proof. □
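The decomposition of Eq. (68) can be verified numerically: for any choice of K°, the gain K = J + LK° satisfies the constraint Eq. (64). The sketch below uses random hypothetical system matrices, not data from the chapter.

```python
import numpy as np

rng = np.random.default_rng(2)
n, r, p = 4, 2, 1
F = rng.standard_normal((n, n))
G = rng.standard_normal((n, r))
E = rng.standard_normal((p, n))

EG = E @ G
EG_pinv = EG.T @ np.linalg.inv(EG @ EG.T)   # left pseudoinverse, Eq. (71)
J = EG_pinv @ E @ F                         # particular solution, Eq. (68)
L = np.eye(r) - EG_pinv @ EG                # projector onto the null space of EG

K = J + L @ rng.standard_normal((r, n))     # an arbitrary ratio gain K°
residual = E @ (F - G @ K)                  # should vanish by Eq. (64)
```

The residual vanishes because EG·EG⊖¹ = I_p cancels EF exactly, and EG·L = 0 removes any contribution of K°.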

Considering the model involving the given ratio constraint on the closed-loop system state variables Eqs. (66)–(68), the design conditions are presented in the following theorems.

Theorem 3. System Eqs. (1) and (2) under the control Eq. (3), satisfying the constraint Eq. (4), is stable with the quadratic performance $\gamma\_{\infty}$, if there exist positive definite matrices $\mathbf{S}, \mathbf{O} \in \mathbb{R}^{n \times n}$, a matrix $\mathbf{Y}^{\circ} \in \mathbb{R}^{r \times n}$, and a positive scalar $\gamma\_{\infty} \in \mathbb{R}$ such that

$$\mathbf{S} = \mathbf{S}^T > 0, \quad \mathbf{O} = \mathbf{O}^T > 0, \quad \gamma\_{\infty} > 0,\tag{77}$$

$$
\begin{bmatrix}
\mathbf{O} - 2\mathbf{S} & \* & \* & \* \\
\mathbf{S}(\mathbf{F} - \mathbf{G}\mathbf{J})^T - \mathbf{Y}^{\circ T}\mathbf{L}^T\mathbf{G}^T & -\mathbf{O} & \* & \* \\
\mathbf{G}^T & \mathbf{0} & -\gamma\_{\infty}\mathbf{I}\_r & \* \\
\mathbf{0} & \mathbf{C}\mathbf{S} & \mathbf{0} & -\gamma\_{\infty}\mathbf{I}\_m
\end{bmatrix} < 0. \tag{78}
$$

When these inequalities are satisfied, the control law gain matrices are given as

$$\mathbf{K}^{\circ} = \mathbf{Y}^{\circ}\mathbf{S}^{-1}, \quad \mathbf{K} = \mathbf{J} + \mathbf{L}\mathbf{K}^{\circ}, \tag{79}$$

where J, L are defined in Eq. (68).

Proof. Substituting Eq. (68) into Eq. (59) gives

$$
\begin{bmatrix}
\mathbf{O} - 2\mathbf{S} & (\mathbf{F} - \mathbf{G}\mathbf{J} - \mathbf{G}\mathbf{L}\mathbf{K}^{\circ})\mathbf{S} & \mathbf{G} & \mathbf{0} \\
\mathbf{S}(\mathbf{F} - \mathbf{G}\mathbf{J} - \mathbf{G}\mathbf{L}\mathbf{K}^{\circ})^T & -\mathbf{O} & \mathbf{0} & \mathbf{S}\mathbf{C}^T \\
\mathbf{G}^T & \mathbf{0} & -\gamma\_{\infty}\mathbf{I}\_r & \mathbf{0} \\
\mathbf{0} & \mathbf{C}\mathbf{S} & \mathbf{0} & -\gamma\_{\infty}\mathbf{I}\_m
\end{bmatrix} < 0. \tag{80}
$$

Using the notation

$$\mathbf{Y}^{\circ} = \mathbf{K}^{\circ}\mathbf{S} \tag{81}$$

Eq. (80) implies Eq. (78). This concludes the proof. □

The ratio control does not exclude a forced regime given by the control law

$$\mathbf{u}(i) = -\mathbf{K}\mathbf{q}(i) + \mathbf{W}\mathbf{w}(i),\tag{82}$$

where $\mathbf{w}(i) \in \mathbb{R}^m$ is the desired output signal vector and $\mathbf{W} \in \mathbb{R}^{m \times m}$ is the signal gain matrix. Using the static decoupling principle, the conditions to design the signal gain matrix $\mathbf{W}$ can be proven.

Lemma 2. If the system Eqs. (1) and (2) is square and stabilizable by the control policy Eq. (82), and [32]

$$
\text{rank}\begin{bmatrix} \mathbf{F} & \mathbf{G} \\ \mathbf{C} & \mathbf{0} \end{bmatrix} = n + m,\tag{83}
$$

then the matrix W takes the form

$$\mathbf{W} = \left( \mathbf{C} (\mathbf{I}\_n - (\mathbf{F} - \mathbf{GK}))^{-1} \mathbf{G} \right)^{-1}, \tag{84}$$

where I<sup>n</sup> ∈ IRn <sup>×</sup> <sup>n</sup> is the identity matrix.

Proof. In a steady state, the system equations Eqs. (1) and (2), and the control law Eq. (82) imply

$$
\boldsymbol{\mathfrak{q}}\_{o} = (\mathbf{F} - \mathbf{G}\mathbf{K})\boldsymbol{\mathfrak{q}}\_{o} + \mathbf{G}\mathbf{W}\mathbf{w}\_{o},\tag{85}
$$

where $\mathbf{q}\_o$, $\mathbf{w}\_o$ are the steady-state values of the vectors $\mathbf{q}(i)$, $\mathbf{w}(i)$, respectively. From Eq. (85), it can be derived that

$$\mathbf{q}\_o = \left(\mathbf{I}\_n - (\mathbf{F} - \mathbf{G}\mathbf{K})\right)^{-1}\mathbf{G}\mathbf{W}\mathbf{w}\_o \tag{86}$$

and

$$\mathbf{y}\_o = \mathbf{C}(I\_n - (\mathbf{F} - \mathbf{G}\mathbf{K}))^{-1}\mathbf{G}\mathbf{W}\mathbf{w}\_o,\tag{87}$$

considering $\mathbf{y}\_o = \mathbf{w}\_o$, Eq. (87) implies Eq. (84). This concludes the proof. □
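The static decoupling property behind Eq. (84) can be sketched as follows. The closed-loop matrix F − GK is replaced here by a random Schur-stable matrix (an illustrative assumption); only stability and squareness (m = r) matter for the steady-state argument.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 4, 2
Fc = rng.standard_normal((n, n))
Fc *= 0.9 / np.max(np.abs(np.linalg.eigvals(Fc)))       # spectral radius 0.9
G = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))

W = np.linalg.inv(C @ np.linalg.inv(np.eye(n) - Fc) @ G)  # Eq. (84)

w_o = np.array([1.0, -2.0])                               # desired output
q_o = np.linalg.inv(np.eye(n) - Fc) @ G @ W @ w_o         # Eq. (86)
y_o = C @ q_o                                             # Eq. (87)
```

By construction, the static gain from w to y equals the identity, so the steady-state output tracks the desired signal exactly.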

Theorem 4. If the closed-loop system state variables satisfy the state constraint Eq. (63), then the common state variable vector $\mathbf{q}\_d(i) = \mathbf{E}\mathbf{q}(i)$, $\mathbf{q}\_d(i) \in \mathbb{R}^k$ attains the steady-state value

$$
\mathfrak{q}\_{dw} = \mathbf{E} \mathbf{G} \mathbf{W} \mathbf{w}\_{o}. \tag{88}
$$

Proof. Using the control policy Eq. (82), it follows that

$$\mathbf{E}\boldsymbol{q}(i+1) = \mathbf{E}(\mathbf{F} - \mathbf{G}\mathbf{K})\boldsymbol{q}(i) + \mathbf{E}\mathbf{G}\mathbf{W}\mathbf{w}(i). \tag{89}$$

Since K satisfies Eq. (65), then Eq. (89) implies

$$E\mathfrak{q}(i+1) = EG\mathfrak{W}\mathfrak{w}(i)\tag{90}$$

and it is evident that the tied state variable qd(i) of the closed-loop system in a steady state is proportional to the steady state of the desired signal w<sup>o</sup> and takes the value Eq. (88). This concludes the proof. □

#### 7. Illustrative examples


To demonstrate the properties of the proposed approach, the classical example of helicopter control [33] is taken, where the discrete-time state-space representation Eqs. (1) and (2) for the sampling period $\Delta t = 0.05\,\mathrm{s}$ consists of the following parameters

$$\begin{aligned} \mathbf{F} = \begin{bmatrix} 0.9982 & 0.0013 & 0.0004 & -0.0229 \\ 0.0023 & 0.9507 & -0.0048 & -0.1962 \\ 0.0049 & 0.0176 & 0.9670 & 0.0679 \\ 0.0001 & 0.0004 & 0.0492 & 1.0017 \end{bmatrix}, & \mathbf{G} = \begin{bmatrix} 0.0221 & 0.0086 \\ 0.1733 & -0.3705 \\ -0.2697 & 0.2173 \\ -0.0068 & 0.0055 \end{bmatrix}, \\\\ \mathbf{C} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{bmatrix}. \end{aligned} \tag{91}$$

The state constraint, defining the ratio control of two state system variables, is specified as

$$\frac{q\_4(t)}{q\_1(t)} = 1.5 \quad \Rightarrow \mathbf{E} = \begin{bmatrix} -1.5 & 0 & 0 & 1 \end{bmatrix} \tag{92}$$

and subsequently it yields

$$(\mathbf{EG})^{\ominus 1} = \begin{bmatrix} -24.1737 \\ -4.4828 \end{bmatrix}, \quad \mathbf{L} = \begin{bmatrix} 0.0332 & -0.1793 \\ -0.1793 & 0.9668 \end{bmatrix}, \tag{93}$$

$$J = \begin{bmatrix} 36.1914 & 0.0372 & -1.1753 & -25.0447 \\ 6.7113 & 0.0069 & -0.2179 & -4.6443 \end{bmatrix} . \tag{94}$$
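A minimal sketch recomputing Eqs. (93) and (94) from the model data of Eqs. (91) and (92); because the printed matrices are rounded to four decimals, only approximate agreement can be expected.

```python
import numpy as np

# Model data of Eqs. (91) and (92).
F = np.array([[0.9982, 0.0013, 0.0004, -0.0229],
              [0.0023, 0.9507, -0.0048, -0.1962],
              [0.0049, 0.0176, 0.9670, 0.0679],
              [0.0001, 0.0004, 0.0492, 1.0017]])
G = np.array([[0.0221, 0.0086],
              [0.1733, -0.3705],
              [-0.2697, 0.2173],
              [-0.0068, 0.0055]])
E = np.array([[-1.5, 0.0, 0.0, 1.0]])

EG = E @ G
EG_pinv = EG.T @ np.linalg.inv(EG @ EG.T)   # left pseudoinverse, Eq. (71)
L = np.eye(2) - EG_pinv @ EG                # projector of Eq. (93)
J = EG_pinv @ E @ F                         # particular gain of Eq. (94)
```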

Solving Eqs. (77) and (78) using the self-dual minimization (SeDuMi) package for MATLAB [19], the feedback gain matrix design problem in the constrained control is feasible, with the results

$$\mathbf{O} = \begin{bmatrix} 2.9027 & 0.2117 & 0.1103 & -1.7595 \\ 0.2117 & 1.3174 & -0.1751 & -0.1245 \\ 0.1103 & -0.1751 & 0.4162 & 0.0060 \\ -1.7595 & -0.1245 & 0.0060 & 3.2464 \end{bmatrix},$$

$$\mathbf{S} = \begin{bmatrix} 2.4910 & 0.1375 & 0.0792 & -1.4957 \\ 0.1375 & 1.0779 & -0.0910 & -0.0030 \\ 0.0792 & -0.0910 & 0.3735 & -0.0348 \\ -1.4957 & -0.0030 & -0.0348 & 3.0926 \end{bmatrix},\tag{95}$$
 
$$\mathbf{Y}^{\circ} = \begin{bmatrix} -2.2113 & 0.2435 & -0.0819 & 1.4281 \\ 11.9245 & -1.3129 & 0.4416 & -7.7011 \end{bmatrix}, \quad \gamma\_{\infty} = 8.5565.\tag{96}$$

Inserting Y° and S into Eq. (79), the gain matrix K° is computed as

$$\mathbf{K}^{\circ} = \begin{bmatrix} -0.8887 & 0.3441 & 0.0562 & 0.0329 \\ 4.7926 & -1.8555 & -0.3028 & -0.1775 \end{bmatrix} \tag{97}$$

and Eq. (79) implies the full-state feedback gain matrix values

$$\mathbf{K} = \begin{bmatrix} 35.3027 & 0.3813 & -1.1191 & -25.0117 \\ 11.5040 & -1.8486 & -0.5208 & -4.8217 \end{bmatrix} \text{.} \tag{98}$$

It can be easily verified that the closed-loop system matrix takes the form

$$\mathbf{F}\_c = \mathbf{F} - \mathbf{G}\mathbf{K} = \begin{bmatrix} 0.1179 & 0.0088 & 0.0296 & 0.5722 \\ -1.8528 & 0.1997 & -0.0038 & 2.3515 \\ 7.0258 & 0.5223 & 0.7783 & -5.6297 \\ 0.1768 & 0.0132 & 0.0444 & 0.8583 \end{bmatrix},\tag{99}$$

while the ratio control law yields a stable closed-loop system with the closed-loop system matrix eigenvalue spectrum

$$
\rho(F\_c) = \{0.9527, \quad 0.7566, \quad 0.0000, \quad 0.2449\}.\tag{100}
$$
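The claimed properties of Eqs. (99) and (100) can be confirmed directly from the printed values: F_c is Schur-stable, one eigenvalue is numerically zero, and the constraint E F_c = 0 of Eq. (64) holds.

```python
import numpy as np

# Closed-loop matrix of Eq. (99) and the constraint vector of Eq. (92).
Fc = np.array([[0.1179, 0.0088, 0.0296, 0.5722],
               [-1.8528, 0.1997, -0.0038, 2.3515],
               [7.0258, 0.5223, 0.7783, -5.6297],
               [0.1768, 0.0132, 0.0444, 0.8583]])
E = np.array([[-1.5, 0.0, 0.0, 1.0]])

eigs = np.linalg.eigvals(Fc)
rho = np.max(np.abs(eigs))                    # spectral radius, cf. Eq. (100)
constraint_residual = np.max(np.abs(E @ Fc))  # cf. Eq. (64)
```

Because E F_c = 0, the row vector E is a left eigenvector of F_c with the eigenvalue zero, which is exactly the singular-design signature noted above.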

Note that one of the resulting eigenvalues of $\mathbf{F}\_c$ is zero (rank $\mathbf{E} = 1$), because Proposition 2 prescribes this constrained design task as a singular problem. Using the connection between the eigenvector matrices $\mathbf{N}$ and $\mathbf{M}$ as given by Eq. (17), it is possible to show that this instance is documented also by the structure of $\mathbf{M}$, while

$$\begin{aligned} \mathcal{N} &= \begin{bmatrix} -0.3109 & -0.1105 & -0.0800 & -0.0184 \\ -0.6937 & -0.3384 & -0.4690 & -0.7382 \\ 0.4522 & 0.9197 & 0.8793 & 0.6738 \\ -0.4664 & -0.1657 & -0.0218 & -0.0276 \end{bmatrix}, \\\\ \mathcal{M} &= \begin{bmatrix} -3.4197 & -0.3938 & -0.5157 & 0.2213 \\ 10.2685 & 1.3777 & 1.4844 & -7.4555 \\ -15.2705 & 0.0000 & 0.0000 & 10.1803 \\ 8.2076 & -1.6162 & -0.1958 & -3.2577 \end{bmatrix}, \end{aligned} \tag{101}$$

where the structure of the third row of $\mathbf{M}$ corresponds to the structure of the constraint vector $\mathbf{E}$, while $a\_4 = m\_3^T(1)/m\_3^T(4) = -1.5$.

To illustrate the closed-loop system property in the forced mode, the signal gain matrix W is computed by using Eq. (84) as follows

$$\mathbf{W} = \begin{bmatrix} 1.4575 & 35.9137 \\ -1.7651 & 11.6521 \end{bmatrix} \text{.} \tag{102}$$

Therefore, according to Theorem 4, the constraint given on the states of the system under study is satisfied with zero offset in the autonomous regime and with an offset value equal to $q\_{dw}$ in the forced mode, i.e.,

$$
\mathbf{q}\_d = \mathbf{0}, \qquad \mathbf{q}\_{dw} = \mathbf{E}\mathbf{G}\mathbf{W}\mathbf{w}\_o = 3.0001,\tag{103}
$$

while


$$
\mathbf{w}(i) = \begin{bmatrix} 1 \\ -2 \end{bmatrix} \text{ for all } i. \tag{104}
$$
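Using the printed matrices, the steady-state offset of Eq. (103) can be reproduced; rounding in W of Eq. (102) accounts for the small deviation from 3.0001.

```python
import numpy as np

G = np.array([[0.0221, 0.0086],
              [0.1733, -0.3705],
              [-0.2697, 0.2173],
              [-0.0068, 0.0055]])
E = np.array([[-1.5, 0.0, 0.0, 1.0]])
W = np.array([[1.4575, 35.9137],
              [-1.7651, 11.6521]])        # signal gain of Eq. (102)
w_o = np.array([1.0, -2.0])               # the desired signal of Eq. (104)

q_dw = (E @ G @ W @ w_o).item()           # steady-state offset, Eq. (88)
```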

The simulation results of the closed-loop system response in the autonomous and forced modes are presented, where Figure 1 shows the system state variables response in the autonomous regime and Figure 2 the system state variables response in the forced mode. It is evident that the condition Eq. (9) is satisfied at all time instants, except the initial one, in the above-given way (see the time response of the tied variable, included as $q\_d(i)$ in the figures).

For comparison, an example is given for the default design of the state feedback gain matrix using the BRL structure of LMIs. Solving Eqs. (54) and (55), the task is feasible with the Lyapunov matrix variables

Figure 1. State response in autonomous regime.

Figure 2. State response in forced mode.

$$\mathbf{O} = \begin{bmatrix} 0.1438 & -0.1090 & -0.1619 & -0.2191 \\ -0.1090 & 1.5603 & -0.2198 & 0.2945 \\ -0.1619 & -0.2198 & 1.6006 & -0.4711 \\ -0.2191 & 0.2945 & -0.4711 & 1.8586 \end{bmatrix},$$

$$\mathbf{S} = \begin{bmatrix} 0.1338 & -0.0840 & -0.1490 & -0.1928 \\ -0.0840 & 1.2736 & -0.2314 & 0.2439 \\ -0.1490 & -0.2314 & 1.6729 & -0.5520 \\ -0.1928 & 0.2439 & -0.5520 & 1.8296 \end{bmatrix},\tag{105}$$

and parameter matrix variable

$$\mathbf{Y} = \begin{bmatrix} 0.6210 & -0.8607 & -2.6800 & -0.7582 \\ 0.4017 & -2.6793 & -0.3804 & 0.1788 \end{bmatrix}, \quad \gamma\_{\infty} = 3.1301. \tag{106}$$
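Feasibility requires the Lyapunov matrix variables to be positive definite. As a sanity-check sketch (not part of the design procedure itself), Sylvester's criterion can be applied to O from Eq. (105): all leading principal minors must be positive.

```python
# Sylvester's criterion check: the Lyapunov matrix variable O from Eq. (105)
# is positive definite iff all leading principal minors are positive.
O = [[0.1438, -0.1090, -0.1619, -0.2191],
     [-0.1090, 1.5603, -0.2198, 0.2945],
     [-0.1619, -0.2198, 1.6006, -0.4711],
     [-0.2191, 0.2945, -0.4711, 1.8586]]

def det(M):
    # determinant by cofactor expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1)**j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(len(M)))

minors = [det([row[:k] for row in O[:k]]) for k in range(1, 5)]
positive_definite = all(m > 0 for m in minors)
```

The same check applies to S; for larger matrices an attempted Cholesky factorization is the practical test.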

Therefore, using Eq. (56), the nominal control law gain matrix K is computed as

$$\mathbf{K} = \begin{bmatrix} 0.8951 & -0.8107 & -1.8928 & -0.7830 \\ 2.4671 & -2.0742 & -0.0947 & 0.6056 \end{bmatrix},\tag{107}$$

the closed-loop system matrix takes the form

$$F\_c = F - \mathbf{GK} = \begin{bmatrix} 0.9571 & 0.0371 & 0.0431 & -0.0108 \\ 0.7613 & 0.3227 & 0.2881 & 0.1639 \\ -0.2898 & 0.2498 & 0.4771 & -0.2749 \\ -0.0073 & 0.0063 & 0.0368 & 0.9931 \end{bmatrix},\tag{108}$$

while the closed-loop system matrix eigenvalues spectrum is

$$
\rho(F\_c) = \{0.1207, \quad 0.6570, \quad 0.9733, \quad 0.9990\}.\tag{109}
$$
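The spectrum in Eq. (109) can be cross-checked without an eigenvalue solver: since the eigenvalues of F<sub>c</sub> are real and the largest is simple, a plain power iteration recovers the spectral radius. This is a verification sketch; the starting vector and iteration count are arbitrary choices.

```python
# Power iteration on F_c from Eq. (108): the normalization factor converges
# to the dominant eigenvalue, here the spectral radius 0.9990 < 1.
F_c = [[0.9571, 0.0371, 0.0431, -0.0108],
       [0.7613, 0.3227, 0.2881, 0.1639],
       [-0.2898, 0.2498, 0.4771, -0.2749],
       [-0.0073, 0.0063, 0.0368, 0.9931]]

v = [1.0, 1.0, 1.0, 1.0]
rho = 0.0
for _ in range(3000):
    w = [sum(f*c for f, c in zip(row, v)) for row in F_c]  # w = F_c v
    rho = max(abs(c) for c in w)                           # growth factor
    v = [c / rho for c in w]                               # renormalize
# rho converges to 0.9990, matching the largest eigenvalue in Eq. (109)
```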

To apply in the forced mode, the signal gain matrix W is now computed by using Eq. (84) as follows:

$$\mathbf{W} = \begin{bmatrix} -0.8296 & 0.9567 \\ -2.2360 & 2.4922 \end{bmatrix} \text{.} \tag{110}$$

The simulation results of the nominal closed-loop system response are illustrated in Figures 3 and 4, where Figure 3 is concerned with the system state variables response in the autonomous regime and Figure 4 with the system state variables response in the forced mode.
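The slow autonomous decay seen in Figure 3 follows from the spectral radius 0.9990 being close to one. A minimal simulation sketch of the unforced response x(i+1) = F<sub>c</sub>x(i), with F<sub>c</sub> from Eq. (108) and an arbitrarily assumed initial state, illustrates this:

```python
# Unforced closed-loop response x(i+1) = F_c x(i) with F_c from Eq. (108).
# The state decays, but slowly: the dominant mode shrinks only by a factor
# of 0.9990 per step. The initial state below is an arbitrary assumption.
F_c = [[0.9571, 0.0371, 0.0431, -0.0108],
       [0.7613, 0.3227, 0.2881, 0.1639],
       [-0.2898, 0.2498, 0.4771, -0.2749],
       [-0.0073, 0.0063, 0.0368, 0.9931]]

x = [1.0, -1.0, 1.0, -1.0]
norm0 = sum(c*c for c in x) ** 0.5
for _ in range(5000):
    x = [sum(f*c for f, c in zip(row, x)) for row in F_c]
norm_end = sum(c*c for c in x) ** 0.5
# norm_end < norm0: asymptotically stable, but the convergence is slow
```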

Since these two control structures are of interest in the context of full-state control design, comparing the presented results makes it evident that the system dynamics are comparable in both cases.

Figure 3. State response in autonomous regime.


Figure 4. State response in forced mode.

## 8. Concluding Remarks

In this chapter, an extended method is presented, based on the classical memoryless feedback H<sup>∞</sup> control principle for discrete-time systems, in which the ratio control is reformulated as an equality constraint on the associated state variables. The asymptotic stability of the control scheme is guaranteed in the sense of the enhanced representation of the BRL, while the resulting LMIs are linear with respect to the matrix variables and do not involve products of the Lyapunov matrix and the system matrix parameters, which provides one way of solving the singular LMI problem. Moreover, formulated as a stabilization problem with a full-state feedback controller, the control gain matrix takes no special structure. The formulation allows a solution to be found without restrictive assumptions or additional specifications on the design parameters. It is clear from Theorem 4 that the control law strictly solves the problem even in the unforced mode. The validity of the proposed method is demonstrated by numerical examples.

#### Acknowledgements

The work presented in this chapter was supported by VEGA, the Grant Agency of the Ministry of Education and the Academy of Science of Slovak Republic, under Grant No. 1/0608/17. These supports are very gratefully acknowledged.

## Author details

Dušan Krokavec\* and Anna Filasová

\*Address all correspondence to: dusan.krokavec@tuke.sk

Department of Cybernetics and Artificial Intelligence, Faculty of Electrical Engineering and Informatics, Technical University of Košice, Košice, Slovakia



#### **Predictability in Deterministic Dynamical Systems with Application to Weather Forecasting and Climate Modelling**

Sergei Soldatenko and Rafael Yusupov

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/66752

#### Abstract

The climate system, consisting of the atmosphere, ocean, cryosphere, land and biota, is considered as a complex adaptive dynamical system along with its essential physical properties. Since the climate system is a nonlinear dissipative dynamical system that possesses a global attractor, and its dynamics on the attractor are chaotic, the prediction of weather and climate change has a finite time horizon. There are two kinds of predictability of the climate system: one is generated by uncertainties in the initial conditions (predictability of the first kind) and the other is produced by uncertainties in the parameters that describe the external forcing (predictability of the second kind). Using the concept of the 'perfect' climate model, the two kinds of predictability are considered from the standpoint of the mathematical theory of climate.

Keywords: climate system, deterministic chaos, predictability, stability

## 1. Introduction

High-complexity computational models that simulate earth's climate system (ECS) have earned well-deserved recognition as the indispensable and primary instrument for numerical weather prediction (NWP) as well as for the study of climate change and variability caused by both natural processes and human activities [1–4]. In spite of dramatic progress achieved over the past few decades in weather forecasting and climate simulation thanks to the advances in computing hardware and algorithms and to a substantial increase in the volume of climatological data, contemporary computational climate models can reconstruct the real world only with a certain degree of validity [3]. There are several major sources of discrepancy between climate model simulation results and reality. First of all, climate models remain an ideal mathematical abstraction of a real physical system, namely the ECS. These models ignore some physical, dynamical and chemical processes or, at least, represent them in a simplified fashion. As a result, various physical simplifications in the formulation of climate models substantially influence their adequacy [5]. Second, the NWP and climate simulation are mathematically an initial-value (Cauchy) and/or a boundary-value (Dirichlet or von Neumann) problem, which is solved numerically using finite-difference, spectral or another appropriate method. Consequently, uncertainties emerging in the initial and boundary conditions as well as in the climate model parameters and external forcing, approximation, truncation and round-off errors lead to distinctions between the model output and the observed real state of the ECS. Third, let us suppose that we have the 'perfect' model of the ECS. It means that the governing equations are known exactly and can be solved. However, even in this hypothetically ideal case, the ability of climate models to predict the future remains limited. This can be explained by the fact that the atmosphere, which is the most rapidly changing component of the ECS, is strongly nonlinear and exhibits irregular (chaotic) spatial-temporal oscillations on all scales, ranging from millimetres and seconds (turbulent fluctuations) to thousands of kilometres and several years (climate variability). This phenomenon, known as deterministic chaos, was first discovered by Lorenz [6]. The chaotic nature of the atmosphere significantly limits our ability to successfully predict the weather and climate since the predicted trajectory of the ECS is unstable with respect to both the infinitesimal errors in initial conditions and external forcing [7]. Even with a perfect atmospheric model and accurate initial conditions, we cannot predict the weather beyond approximately two weeks.
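The two-week barrier is a consequence of exponential error growth, which can be illustrated on the simplest chaotic map. In the sketch below the logistic map stands in for the atmosphere, and the tolerance and perturbation sizes are illustrative assumptions: reducing the initial error by five orders of magnitude buys only a modest number of extra steps of predictability.

```python
# Exponential error growth in the chaotic logistic map x -> 4x(1-x):
# the predictability horizon grows only logarithmically with initial accuracy.
# Tolerance and perturbation sizes here are illustrative assumptions.
def horizon(delta0, x0=0.3, tol=0.1, max_steps=200):
    """Steps until two trajectories started delta0 apart separate by tol."""
    x, y = x0, x0 + delta0
    for step in range(max_steps):
        if abs(x - y) > tol:
            return step
        x, y = 4*x*(1 - x), 4*y*(1 - y)
    return max_steps

h_coarse = horizon(1e-3)   # horizon with a 10^-3 initial error
h_fine = horizon(1e-8)     # 100000x better initial data...
# ...yields only roughly log2(1e5) ~ 17 additional steps of forecast skill
```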

For further discussion, we need to clarify that the terms 'weather' and 'climate' have different meanings. Weather is defined as the daily condition of the atmosphere in terms of such atmospheric variables as temperature, humidity, wind direction and velocity, surface pressure, cloud cover and precipitation. In turn, climate represents an ensemble of states traversed by the climate system over a sufficiently long temporal interval (about 30 years, according to the World Meteorological Organization). Here, the ensemble includes not only a set of system states but also the probability measure defined on this set. Therefore, climate, roughly speaking, can be considered as the 'average' weather, in terms of mean and variance, in a certain geographical location over many years.
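The 'average weather' view can be made concrete: a climatology is obtained by averaging a daily temperature series over a 30-year window. The series below is a synthetic, purely deterministic illustration; the 15 °C baseline and 10 °C seasonal amplitude are assumptions, not observational data.

```python
import math

# Climate as 'average weather': the 30-year mean and variance of a synthetic
# daily temperature series (baseline 15 degC, seasonal amplitude 10 degC).
# The series is an illustrative assumption, not observational data.
years, days_per_year = 30, 365
temps = [15.0 + 10.0*math.sin(2*math.pi*d/days_per_year)
         for d in range(years*days_per_year)]

clim_mean = sum(temps) / len(temps)                             # ~ 15.0
clim_var = sum((t - clim_mean)**2 for t in temps) / len(temps)  # ~ 50.0
```

The mean recovers the baseline and the variance the seasonal-cycle power (amplitude squared over two), the two summary statistics the text refers to.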

Time horizon of a forecast's usefulness and validity can be characterized by the specific measure known as predictability. Predictability is commonly understood as the degree to which it is possible to make an accurate qualitative or quantitative forecast of the future system's state. The study of atmospheric predictability was initiated by Thompson [8] and Lorenz [6, 9] more than 50 years ago and was extensively explored theoretically using various numerical and statistical models since then (e.g. [10–17]). One of the obvious measures of predictability that can be used to verify a weather forecast is the mean-squared error (the average of the squared differences between forecasts and observations). This measure increases over time and asymptotically approaches some finite value known as the saturation value. Therefore, predictability is lost when the forecast errors become comparable to the saturation value in magnitude. If this happens, the forecast result is not better than any randomly selected trajectory of the system. However, for a number of reasons, mean-squared error and other weather forecast verification metrics (e.g. mean absolute error and mean error) are rarely used to estimate the climate system predictability in practice (for details, see Ref. [18]).
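The growth and saturation of the mean-squared error can be reproduced in a toy forecast experiment. This is a sketch under stated assumptions: the logistic map plays the role of the verifying 'truth', each 'forecast' starts from a slightly perturbed analysis, and all sizes are illustrative. The error grows with lead time and levels off near twice the climatological variance, the saturation value.

```python
# Mean-squared forecast error vs lead time in a toy chaotic system.
# 'Truth' is a logistic-map trajectory; each 'forecast' starts from a
# slightly perturbed analysis. The MSE grows and then saturates.
def step(x):
    return 4.0*x*(1.0 - x)

# one long truth trajectory (a transient is discarded)
truth = [0.2]
for _ in range(1600):
    truth.append(step(truth[-1]))
truth = truth[100:]

def mse(lead, n_cases=1000, eps=1e-6):
    total = 0.0
    for m in range(n_cases):
        f = truth[m] * (1.0 - eps)      # perturbed analysis at time m
        for _ in range(lead):
            f = step(f)
        total += (truth[m + lead] - f)**2
    return total / n_cases

mse_short = mse(1)    # tiny: the forecast is still accurate
mse_long = mse(40)    # saturated: comparable to 2*Var(x), about 0.25 here
```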

Predictability characterizes both the physical system itself and the model of this system that is used to make a forecast. However, in atmospheric and climate studies we are interested in the predictability of real dynamical processes rather than the predictability of the model used in simulations.


According to Lorenz [19], in weather and climate modelling we face two kinds of predictability, reflecting the internal and external variability of the climate system, respectively. The predictability of the first kind relates to the Cauchy (initial-value) problem, namely the prediction of sequential states of the ECS for constant values of the external parameters and given variations in the initial conditions. In contrast, the predictability of the second kind refers to a boundary-value problem, specifically to the prediction of the response of the climate system in asymptotic equilibrium to perturbations in the external parameters (forcing).

This chapter considers both the predictability of atmospheric and climate processes with respect to errors in the initial data (predictability of the first kind) and the predictability with respect to external perturbations (predictability of the second kind). The stability of dynamical systems is also discussed, since stability is a key problem related to predictability.
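The two kinds can be contrasted in the same toy setting (the logistic map and the perturbation sizes are illustrative assumptions): an initial-condition error corresponds to the first kind, a parameter error to the second kind, and in a chaotic regime both eventually destroy the forecast.

```python
# Predictability of the first kind (initial-condition error) vs the second
# kind (parameter error) in the logistic map x -> r*x*(1-x) with r = 4.
# The perturbation sizes are illustrative assumptions.
def trajectory(x0, r, steps=200):
    xs = [x0]
    for _ in range(steps):
        xs.append(r*xs[-1]*(1.0 - xs[-1]))
    return xs

ref = trajectory(0.3, 4.0)
kind1 = trajectory(0.3*(1 - 1e-9), 4.0)     # perturbed initial state
kind2 = trajectory(0.3, 4.0*(1 - 1e-9))     # perturbed parameter (forcing)

sep1 = max(abs(a - b) for a, b in zip(ref, kind1))
sep2 = max(abs(a - b) for a, b in zip(ref, kind2))
# both separations reach O(1) despite the 1e-9 relative perturbations
```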

## 2. Climate system as a complex adaptive dynamical system

Let us begin with some preliminary notes and definitions which will be used in this chapter.

The term 'system' generally refers to a goal-oriented set of interconnected and interdependent elements that operate together to achieve some objectives [20]. The system is called complex if it possesses such characteristics as emergent behaviour, nonlinearity and high sensitivity to initial conditions and/or to perturbations, self-organization, chaotic behaviour, feedback loop, spontaneous order, robustness and hierarchical structure. Complexity in systems arises from nonlinear spatio-temporal interactions between their components. These nonlinear interactions lead to the appearance of new dynamical properties (for example, synchronous oscillations and other structural changes) that cannot be observed by exploring constituent elements individually.

Complex systems include a special class of systems that have the capacity to adapt to system's environment. These systems are known as complex adaptive systems. In a complex adaptive system, parts are linked together in such a way that the entire system as a whole has the capacity to transform fundamentally the interrelations and interdependences between its components, the collective behaviour of a system and also the behaviour of individual components due to the external forcing. Complex adaptive systems are dynamical systems since they evolve and change over time. These systems have a number of properties that include the following [21, 22]: co-evolution, connectivity, sub-optimality, requisite variety and iteration, edge of chaos and, certainly, emergence and self-organization.

The ECS (S) is understood as a complex, large-scale physical system that consists of the following five basic and interacting constituent subsystems [23]:


$$\mathcal{S} = \mathcal{A} \cup \mathcal{H} \cup \mathcal{C} \cup \mathcal{L} \cup \mathcal{B},$$

where:

1. Atmosphere (A), the gaseous and aerosol envelope of the earth that propagates from the land, water bodies and ice-covered surface outwards to space.
2. Hydrosphere (H), the ocean and other water bodies on the surface of our planet, and water that is underground and in the atmosphere.
3. Cryosphere (C), the sea ice, freshwater ice, snow cover, glaciers, ice caps, ice sheets and permafrost.
4. Lithosphere (L), the solid, external part of the earth.
5. Biosphere (B), the part of our planet where life exists.

The ECS components are characterized by a finite set of distributed variables whose values at a given time determine their state. The most unstable and rapidly oscillating component of the ECS is the atmosphere.

The ECS is a large-scale and unique physical system that possesses a number of specific properties (e.g. [24–29]) making the exploration of this system a high-complexity problem. In contradistinction to many problems in physics, the study of the climate system, its change and variability cannot be implemented by a direct physical experiment due to the climate system's essential features as a large-scale physical system. Laboratory experiments and analytical approaches have a very limited applicability to climate exploration by virtue of the extreme complexity of the ECS. As a result, in climate studies the computational simulation represents the primary instrument and as such requires the development of appropriate mathematical models and numerical algorithms.

The utilization of mathematical models in climate research involves the development of a specific mathematical theory that allows one to explore the climate system along with its mathematical models. The contemporary mathematical theory of climate is based on methods of the qualitative theory of differential equations that enables us to explore the behaviour of the climate system in its phase space [30]. In other words, the dynamical system theory is currently the theoretical foundation of mathematical climate theory. In this context, the ECS can be viewed as a complex adaptive dynamical system [21, 22].

The ECS belongs to the class of complex adaptive systems due to the following factors:

1. The ECS is a complex large-scale physical system combining the atmosphere, hydrosphere, cryosphere, land and biota together with global biochemical cycles (such as the cycles of CO2, N2O and CH4) and aerosols. Components of the climate system are heterogeneous thermodynamical subsystems characterized by specific variables that determine their states. Elements of the ECS have strong differences in their structure, dynamics, physics and chemistry. They cover processes with different temporal and spatial scales, and are linked together via numerous physical coupling mechanisms, which can be either weak or strong. Each subsystem of the ECS can in turn be viewed as being composed of subsystems, which are themselves composed of subsystems. For example, the atmosphere can be divided into several layers based on its vertical temperature distribution. These layers are, respectively, the troposphere, stratosphere, mesosphere and thermosphere. The atmosphere can also be divided into the surface layer, boundary layer and free atmosphere based on the influence of surface friction.


changes in albedo (reflection coefficient) are also tangible. The most influential gas component affecting the climate is CO2, which accounts for about 70% of the global warming potential.

7. The components of the ECS are also non-isolated systems. They act as cascading systems and interact with each other in various ways, including through the transfer of momentum, sensible and latent heat, gases and particles. All together they compose the climate system.

8. Dynamical processes in the ECS fluctuate due to both internal factors (natural oscillations) and external forcing (forced oscillations). Natural fluctuations are caused by internal instability (for example, hydrodynamic instability such as barotropic and baroclinic) with respect to stochastic perturbations. Human impacts, both intentional and unintentional, contribute to the external forcing.

Undoubtedly, there are other specific properties of the ECS that should be taken into account while studying climate as a complex adaptive system and building models of the ECS.

To simulate the ECS, we should assign some mathematical object that is an abstract representation of the real climate system taking into account its essential features mentioned above. This object is known as a perfect model of the ECS. It is usually assumed that a perfect model is a deterministic semi-dynamical system that is dissipative, ergodic and possesses a global attractor. It is also assumed that any trajectory generated by the model is unstable [30].

Formally, an abstract climate system model represents a set of multi-dimensional nonlinear partial differential equations, which generates a finite-dimensional deterministic semi-dynamical system of the form [24, 30]

$$d\mathbf{x}/dt = \mathbf{F}(\mathbf{x}, p, f), \quad \mathbf{x} \in \mathbf{R}^n, \quad \mathbf{x}|\_{t=0} = \mathbf{x}\_0, \quad t \ge 0,\tag{1}$$

where x is the state vector, the components of which characterize the state of a system at a given time t, x<sub>0</sub> is a given initial state of the system, n is the dimension of the dynamical system, p ∈ R<sup>p</sup> is the vector of model parameters and f is the external forcing. The solution to the climate model equations (1) cannot be found analytically and one needs to employ available numerical methods. For that reason, in order to obtain a numerical solution, the original set of partial differential equations is replaced with discrete spatio-temporal approximations using any appropriate technique (e.g. the finite-difference method, the Galerkin approach, etc.). Thus, in weather and climate simulation we mainly deal with discrete dynamical systems.

Suppose that the set of n real variables x<sub>1</sub>, x<sub>2</sub>, …, x<sub>n</sub> defines the current state of the discrete-time dynamical system representing the ECS. A certain particular state x = (x<sub>1</sub>, x<sub>2</sub>, …, x<sub>n</sub>) corresponds to a point in an n-dimensional space Q ⊆ R<sup>n</sup>, known as the system phase space. Let t<sub>m</sub> ∈ Z<sup>+</sup> (m = 0, 1, 2, …) be the discrete time, and let f = (f<sup>1</sup>, f<sup>2</sup>, …, f<sup>n</sup>) be a smooth vector-valued function defined on the domain Q ⊆ R<sup>n</sup>. This function describes the evolution of the system state from one moment to another. Then, a deterministic discrete-time semi-dynamical system that approximates the continuous-time dynamical system (1) can be specified by the following equation:

$$\mathbf{x}(t\_{m+1}) = f\left(\mathbf{x}(t\_m)\right), \quad \mathbf{x}(t\_0) = \mathbf{x}\_0, \quad m = 0, 1, 2, \dots \tag{2}$$

It is obvious that a family of operators forms a semi-group:

<sup>t</sup>¼<sup>0</sup> <sup>¼</sup> <sup>x</sup>0, <sup>t</sup> <sup>≥</sup> <sup>0</sup>, (1)

while studying climate as a complex adaptive system and building models of the ECS.

tor. It is also assumed that any trajectory generated by the model is unstable [30].

dx=dt <sup>¼</sup> <sup>F</sup>ðx, <sup>p</sup>, <sup>f</sup>Þ, <sup>x</sup> <sup>∈</sup> <sup>R</sup><sup>n</sup>, <sup>x</sup><sup>j</sup>

and climate simulation we mainly deal with discrete dynamical systems.

system, which is a unique large-scale natural system.

belong to the category of external forcing.

106 Dynamical Systems - Analytical and Computational Techniques

semi-dynamical system of the form [24, 30]

following equation:

potential.

$$f\_{s+p} = f\_s \circ f\_p, \quad f\_0 = I, \forall s, p \in \mathbb{Z}\_+,\tag{3}$$

where I is the identity operator. Therefore, the system state x(t<sub>m</sub>) at time t<sub>m</sub> can be explicitly expressed via the initial condition x<sub>0</sub>:

$$\mathbf{x}(t\_m) = f^m(\mathbf{x}\_0),\tag{4}$$

where f<sup>m</sup> denotes the m-fold application of f to x<sub>0</sub>. The sequence {x(t<sub>m</sub>)}<sub>m=0</sub><sup>∞</sup> is a trajectory of system (2) in its phase space, which is uniquely defined by the initial values of the state variables x<sub>0</sub>.
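As a toy illustration of Eqs. (2)-(4), the iteration below uses the logistic map as a stand-in for the (vastly more complex) evolution operator f of a climate model; the map, its parameter and the initial state are illustrative assumptions, not part of the original model.

```python
# Toy discrete-time semi-dynamical system, Eq. (2): x(t_{m+1}) = f(x(t_m)).
# The logistic map is an illustrative stand-in for the evolution operator f.

def f(x):
    """One-step evolution operator (logistic map, r = 4, chaotic regime)."""
    return 4.0 * x * (1.0 - x)

def trajectory(x0, m):
    """Return [x(t_0), ..., x(t_m)], i.e. x(t_m) = f^m(x0) as in Eq. (4)."""
    orbit = [x0]
    for _ in range(m):
        orbit.append(f(orbit[-1]))
    return orbit

orbit = trajectory(0.2, 5)

# Semi-group property, Eq. (3): f^{s+p} = f^s o f^p (here s = 3, p = 2).
assert trajectory(0.2, 5)[-1] == trajectory(trajectory(0.2, 2)[-1], 3)[-1]
```

The final assertion checks the semi-group property (3): applying f five times equals applying it twice and then three more times.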

For reference, let us reproduce a couple of definitions [30].

Definition 1. The solution x(t) to system (1) is Lyapunov stable if ∀ε > 0, ∃δ(ε) > 0 such that

$$\|\mathbf{x}\_0 - \mathbf{x}\_0^\*\| < \delta(\varepsilon) \Rightarrow \|\mathbf{x}(t) - \mathbf{x}^\*(t)\| < \varepsilon, \forall t \ge 0,\tag{5}$$

where x*(t) is the solution to the system

$$d\mathbf{x}^\*/dt = F(\mathbf{x}^\*, p, f), \quad \mathbf{x}^\*|\_{t=0} = \mathbf{x}\_0^\*. \tag{6}$$

Definition 2. The solution x(t) to system (1) is stable with respect to the continuous perturbation δF if ∀ε > 0, ∃δ(ε) > 0 such that

$$\|\delta F\| < \delta(\varepsilon) \Rightarrow \|\mathbf{x}(t) - \mathbf{x}^\*(t)\| < \varepsilon, \forall t \ge 0,\tag{7}$$

where x*(t) is the solution to the following perturbed equation:

$$d\mathbf{x}^\*/dt = F(\mathbf{x}^\*, p, f) + \delta F, \mathbf{x}^\*|\_{t=0} = \mathbf{x}\_0^\*. \tag{8}$$

These definitions are important when considering both kinds of predictability.

The key point for further consideration is the assumption that the climate system model described by the set of nonlinear partial differential equations (1) is 'perfect'. We suppose that system (1) is a nonlinear dissipative semi-dynamical system (t ≥ 0) that has an absorbing set in the phase space, and that its solution exists and is unique for any t ≥ 0. Next, we assume that system (1) possesses a global attractor of finite dimension, that is, a certain set in the system's phase space towards which the system tends to evolve for a wide variety of initial conditions. A global attractor is characterized by the attraction property and invariance [30]. So, the dynamics of system (1) can be formally divided into two phases: (1) movement towards the attractor and (2) motion on the attractor. When studying the climate system's stability and predictability, we assume that the system trajectory is on the attractor and its dynamics are chaotic. We also assume that system (1) possesses the property of ergodicity. Thus, statistical characteristics of the climate system (e.g. the first moment x̄ = ⟨x⟩ and the second moment var(x) = ⟨x²⟩ − x̄²) can be calculated by averaging along a certain trajectory.
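The ergodicity assumption can be illustrated numerically. In this hedged sketch the logistic map with r = 4 (whose invariant measure has mean 1/2 and variance 1/8) stands in for the climate system; the moments ⟨x⟩ and var(x) are estimated by averaging along a single long trajectory, exactly as described above.

```python
# Ergodicity sketch: estimate first and second moments of a chaotic system by
# averaging along one long trajectory. The logistic map with r = 4 stands in
# for the climate system; its invariant measure has mean 1/2 and variance 1/8.

def f(x):
    return 4.0 * x * (1.0 - x)

x = 0.3
for _ in range(1000):        # transient: let the trajectory settle on the attractor
    x = f(x)

n, s1, s2 = 200000, 0.0, 0.0
for _ in range(n):           # time averaging along the trajectory
    x = f(x)
    s1 += x
    s2 += x * x

mean = s1 / n                # estimate of <x>  (theoretical value 0.5)
var = s2 / n - mean ** 2     # estimate of var(x) = <x^2> - <x>^2  (theoretical 0.125)
```

Both time averages approach the corresponding attractor-wide statistics, which is exactly the practical content of the ergodicity assumption.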

Structurally, any climate system model represents a set of interacting and interdependent models of lower level (i.e. atmospheric model, model of the ocean, etc.). The number of these lower level models is determined by the objectives of a problem under consideration. For example, to study the large-scale climate variability the model can include the following major components: tropical, mid-latitude and polar troposphere, stratosphere, ocean, land ice, ocean and sea ice, surface and boundary layers, hydrological cycle, clouds (e.g. convective and stratiform), precipitation, aerosols, CO2 and CH4 cycles, solar radiation, terrestrial emission, etc. Other subsystems of the ECS (e.g. vegetation, land surface and biota) can be considered as the boundary conditions and external forcing. In the numerical weather prediction problem, some atmospheric model (either global, regional or local) is the main component of the forecasting system, while the ocean, sea ice and land surface are used only to impose boundary conditions. Note that models of general circulation of the atmosphere and the ocean represent the main computational instruments for simulating the ECS.

## 3. Climate model governing equations

The main energy source of the ECS is the Sun. Spatial inhomogeneity and temporal changes of the heat energy that the earth's surface receives from the Sun are the main cause of motions in the atmosphere and ocean. Equations that govern the atmospheric and oceanic circulation represent the mathematical expressions of fundamental laws of physics: conservation of momentum, conservation of mass, conservation of water and conservation of energy (the first law of thermodynamics). Some diagnostic relationships between variables are also used (i.e. the equation of state). Almost every model uses a slightly different set of equations tailored to a specific problem. However, all climate models include the following basic equations: two equations for horizontal motions (or equations for the vorticity and divergence), equation for the vertical velocity (or hydrostatic equation), continuity equation, as well as thermodynamic and moisture equations. Equations of motion are derived from the law of conservation of momentum applicable to a rotating system. These equations describe all types and scales of atmospheric motions that are important for the formation of weather and climate (i.e. large-scale Rossby waves, planetary waves and gravity waves). Conservation of mass is mathematically expressed in the form of continuity equation, equation for conservation of moisture and equations for conservation of other substances taken into account in a particular climate model.

The set of equations that describes the general circulation of the atmosphere can be written in the spherical co-ordinate system (λ,ϕ) defined by longitude λ and latitude ϕ, with normalized pressure as a vertical coordinate σ ¼ p=ps, where p is pressure and ps is the surface pressure [1, 31]. The set of the model equations includes two momentum equations:


$$\frac{\partial u}{\partial t} = \eta v - \frac{1}{a \cos \varphi} \frac{\partial}{\partial \lambda} (\Phi + K) - \frac{R T\_v}{a \cos \varphi} \frac{\partial \ln p\_s}{\partial \lambda} - \dot{\sigma} \frac{\partial u}{\partial \sigma} + F\_{uV} + F\_{uH}, \tag{9}$$

$$\frac{\partial v}{\partial t} = -\eta u - \frac{1}{a} \frac{\partial}{\partial \varphi} (\Phi + K) - \frac{R T\_v}{a} \frac{\partial \ln p\_s}{\partial \varphi} - \dot{\sigma} \frac{\partial v}{\partial \sigma} + F\_{vV} + F\_{vH}, \tag{10}$$

where u and v are the zonal and meridional velocities, a is the earth's average radius, σ̇ = dσ/dt is the vertical velocity in the σ co-ordinate system, Φ is the geopotential, T is the temperature, R is the gas constant for dry air, K = (u² + v²)/2 is the kinetic energy, η = ζ + f is the absolute vorticity, f is the Coriolis parameter and ζ is the relative vorticity, which is given by

$$\zeta = \frac{1}{a\cos\varphi} \left[ \frac{\partial v}{\partial \lambda} - \frac{\partial}{\partial \varphi} (u \cos\varphi) \right]. \tag{11}$$

The virtual temperature Tv is represented as


$$T\_v = T \left[ 1 + \left( \frac{R\_v}{R} - 1 \right) q \right],\tag{12}$$

where T is the temperature, q is the specific humidity and Rv is the gas constant for water vapour. The terms FuV and FvV describe the vertical friction, and the terms FuH and FvH the horizontal diffusion. Generally, however, the momentum equations are transformed into equations for the absolute vorticity η and the divergence D using the new independent variable μ = sin ϕ:

$$\frac{\partial \eta}{\partial t} = \frac{1}{a(1 - \mu^2)} \frac{\partial}{\partial \lambda} (N\_v + \cos\varphi\, F\_{vV}) - \frac{1}{a} \frac{\partial}{\partial \mu} (N\_u + \cos\varphi\, F\_{uV}) + F\_{\eta H},\tag{13}$$

$$\frac{\partial D}{\partial t} = \frac{1}{a(1 - \mu^2)} \frac{\partial}{\partial \lambda} (N\_u + \cos\varphi\, F\_{uV}) + \frac{1}{a} \frac{\partial}{\partial \mu} (N\_v + \cos\varphi\, F\_{vV}) + F\_{DH} - \nabla^2(\Phi + K + R T\_0 \ln p\_s),\tag{14}$$

where the horizontal divergence is given by

$$D = \frac{1}{a\cos\varphi} \left[ \frac{\partial u}{\partial \lambda} + \frac{\partial}{\partial \varphi} (v \cos\varphi) \right]. \tag{15}$$

The spherical horizontal Laplacian can be written as

$$\nabla^2 = \frac{1}{a^2(1-\mu^2)}\frac{\partial^2}{\partial \lambda^2} + \frac{1}{a^2}\frac{\partial}{\partial \mu} \left[ (1-\mu^2)\frac{\partial}{\partial \mu} \right]. \tag{16}$$

To provide the computational effectiveness of the numerical integration scheme, the virtual temperature is partitioned into two parts, one of which, T0, is a function of the vertical coordinate only, i.e. Tv(λ, μ, σ, t) = T0(σ) + T′v(λ, μ, σ, t). Then, the nonlinear dynamical terms Nu and Nv can be represented in the following form:

$$N\_u = \eta V - R T\_v' \frac{1}{a} \frac{\partial \ln p\_s}{\partial \lambda} - \dot{\sigma} \frac{\partial U}{\partial \sigma},\tag{17}$$

$$N\_v = -\eta U - R T\_v' \frac{(1-\mu^2)}{a} \frac{\partial \ln p\_s}{\partial \mu} - \dot{\sigma} \frac{\partial V}{\partial \sigma},\tag{18}$$

where U = u cos ϕ and V = v cos ϕ.

The thermodynamic equation, which represents the mathematical expression of the first law of thermodynamics, is written for the perturbation in temperature T′ calculated with respect to the mean T0(σ) mentioned above:

$$\begin{split} \frac{\partial T'}{\partial t} &= -\frac{1}{a(1-\mu^2)} \frac{\partial}{\partial \lambda} (UT') - \frac{1}{a} \frac{\partial}{\partial \mu} (VT') + T'D - \dot{\sigma} \frac{\partial T'}{\partial \sigma} + \frac{R T\_v}{c\_p^\*} \frac{\omega}{p} \\ &+ Q + F\_{TV} + F\_{TH} - \frac{1}{c\_p^\*} [u(F\_{uV} + F\_{uH}) + v(F\_{vV} + F\_{vH})], \end{split} \tag{19}$$

where Q is the diabatic heating rate, ω is the pressure vertical velocity and c*p is given by

$$c\_p^\* = c\_p \left[ 1 + \left( \frac{c\_v}{c\_p} - 1 \right) q \right]. \tag{20}$$

Here, cp is the specific heat of dry air at a constant pressure and cv is the specific heat of water vapour at a constant pressure.

The equation for specific humidity is used to describe the hydrologic cycle in the atmosphere:

$$\frac{\partial q}{\partial t} = -\frac{1}{a(1-\mu^2)} \frac{\partial}{\partial \lambda} (Uq) - \frac{1}{a} \frac{\partial}{\partial \mu} (Vq) + qD - \dot{\sigma} \frac{\partial q}{\partial \sigma} + S + F\_{qV} + F\_{qH},\tag{21}$$

where the term S describes the source/sink processes for water vapour, and FqV and FqH are the vertical and horizontal water vapour diffusion.

Let us consider now the continuity equation that represents the conservation of mass law:

$$\frac{\partial \ln p\_s}{\partial t} = -\frac{U}{a(1 - \mu^2)} \frac{\partial \ln p\_s}{\partial \lambda} - \frac{V}{a} \frac{\partial \ln p\_s}{\partial \mu} - D - \frac{\partial \dot{\sigma}}{\partial \sigma}.\tag{22}$$

Integrating this equation from the top (σ = 0) to the bottom (σ = 1), with the vertical boundary conditions σ̇ = 0 at σ = 1 and σ = 0, one can obtain the equation for surface pressure prediction:

$$\frac{\partial \ln p\_s}{\partial t} = -\int\_0^1 \left[ D + \frac{U}{a(1 - \mu^2)} \frac{\partial \ln p\_s}{\partial \lambda} + \frac{V}{a} \frac{\partial \ln p\_s}{\partial \mu} \right] d\sigma. \tag{23}$$

Combining the continuity equation and the equation for the surface pressure, one can derive the diagnostic equation for the vertical velocity σ̇:


$$\dot{\sigma} = \sigma \int\_0^1 \left[ D + \frac{U}{a(1-\mu^2)} \frac{\partial \ln p\_s}{\partial \lambda} + \frac{V}{a} \frac{\partial \ln p\_s}{\partial \mu} \right] d\sigma - \int\_0^\sigma \left[ D + \frac{U}{a(1-\mu^2)} \frac{\partial \ln p\_s}{\partial \lambda} + \frac{V}{a} \frac{\partial \ln p\_s}{\partial \mu} \right] d\sigma. \tag{24}$$

Two diagnostic equations, the hydrostatic equation and the equation of state, are also components of a set of equations that are used to simulate the atmospheric general circulation. The hydrostatic equation is

$$
\partial \Phi/\partial \ln \sigma = -R T\_v.\tag{25}
$$

In the integral form, this equation can be written as

$$
\Phi = \Phi\_s - \int\_1^\sigma R T\_v \, d\ln \sigma,\tag{26}
$$

where Φ<sub>s</sub> is the geopotential at the earth's surface. The equation of state is written as

$$p = \rho R T\_v,\tag{27}$$

where ρ is the air density.


Boundary conditions in the longitudinal direction are periodic, and the solution to the atmospheric model equations is bounded at the north and south poles. Vertical boundary conditions represent the vanishing of the vertical velocity both at the bottom and at the top of the atmosphere: σ̇ = 0 at σ = 1 and σ = 0.

Equations used in the ocean model are written in the Boussinesq hydrostatic approximation with a rigid lid in the spherical coordinate system, with depth z as a vertical coordinate defined as negative downwards from z = 0, which denotes the ocean surface [1, 31]. The set of model equations includes the following:

1. The horizontal equations of motion:

$$\frac{\partial u}{\partial t} + L(u) - \left(f + \frac{u}{a}\tan\varphi\right)v + \frac{1}{a\rho\_o\cos\varphi}\frac{\partial p}{\partial \lambda} = k\_V\frac{\partial^2 u}{\partial z^2} + F\_u,\tag{28}$$

$$\frac{\partial v}{\partial t} + L(v) + \left(f + \frac{u}{a} \tan \varphi\right) u + \frac{1}{a\rho\_o} \frac{\partial p}{\partial \varphi} = k\_V \frac{\partial^2 v}{\partial z^2} + F\_v,\tag{29}$$

where kV is the vertical eddy viscosity coefficient, ρo is the density of sea water and the advection operator, L(α), is given by

$$L(\alpha) = \frac{1}{a\cos\varphi} \left( \frac{\partial u \,\alpha}{\partial \lambda} + \frac{\partial v \alpha \cos\varphi}{\partial \varphi} \right) + \frac{\partial w \,\alpha}{\partial z} \,. \tag{30}$$

The horizontal viscosity terms, Fu and Fv, are defined as

$$F\_u = k\_H \left[ \nabla^2 u + \frac{(1 - \tan^2 \varphi)\, u}{a^2} - \frac{2 \sin \varphi}{a^2 \cos^2 \varphi} \frac{\partial v}{\partial \lambda} \right],\tag{31}$$

$$F\_v = k\_H \left[ \nabla^2 v + \frac{(1 - \tan^2 \varphi)\, v}{a^2} + \frac{2 \sin \varphi}{a^2 \cos^2 \varphi} \frac{\partial u}{\partial \lambda} \right],\tag{32}$$

where kH is the horizontal eddy viscosity coefficient. The given form of the diffusion terms, Fu and Fv, is required to conserve angular momentum.

2. The hydrostatic equation:

$$
\partial p/\partial z = -\mathbf{g}\rho.\tag{33}
$$

3. The thermodynamic equation:

$$\frac{\partial T}{\partial t} + L(T) = \kappa\_V \frac{\partial^2 T}{\partial z^2} + \kappa\_H \nabla^2 T,\tag{34}$$

where κ<sup>V</sup> and κ<sup>H</sup> are, respectively, the vertical and horizontal eddy diffusivity coefficients.

4. The equation for the mass continuity of salinity:

$$\frac{\partial S}{\partial t} + L(S) = \kappa\_V \frac{\partial^2 S}{\partial z^2} + \kappa\_H \nabla^2 S. \tag{35}$$

5. The equation of continuity:

$$\frac{\partial w}{\partial z} = -\frac{1}{a\cos\varphi} \frac{\partial u}{\partial \lambda} - \frac{1}{a\cos\varphi} \frac{\partial (v \cos\varphi)}{\partial \varphi}.\tag{36}$$

6. The equation of state:

$$
\rho = \rho(T, S, p). \tag{37}
$$

Due to their extreme complexity, weather and climate models can be implemented on computers only using numerical techniques. Since models are based on partial differential equations, it is necessary, first, to ensure that the problem under consideration is well posed, i.e. it has a unique solution that depends on the boundary and initial conditions. Thus, both the initial and boundary conditions must be properly specified. Next, weather and climate mathematical models should be transformed into numerical models that can be implemented on computers. The most widely used technique for solving differential equations of weather and climate models is the finite-difference method according to which the derivatives in the partial differential equations are approximated on a certain temporal-spatial grid. Thus, instead of continuous functions, which describe the state of climate system and its components, we deal with discrete functions defined only for specific times separated by the time step Δt and specific space locations separated by spatial (horizontal Δs and vertical Δh) steps. As a result, instead of partial differential equation we obtain finite-difference equations (numerical model). It is very important that numerical schemes used for the discretization of model differential equations must satisfy several fundamental requirements: finite-difference equations must be consistent with model differential equations, the solution of finite-difference equations must converge to the solution of differential equations and numerical schemes must be computationally stable. In practice, finite difference is not the only method used to solve weather and climate problems. The most popular among other methods are the family of Galerkin techniques, spectral, finite-volume and finite element approaches.
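A minimal illustration of the finite-difference approach described above: the 1-D linear advection equation ∂u/∂t + c ∂u/∂x = 0 (a drastically simplified stand-in for the model equations) is discretized with a first-order upwind scheme on a periodic grid. The grid size, Courant number and initial profile are arbitrary choices for this sketch.

```python
import math

# Finite-difference sketch: 1-D linear advection du/dt + c du/dx = 0 on a
# periodic grid, discretized with a first-order upwind scheme. Illustrates
# replacing a PDE by discrete equations with steps dx (space) and dt (time).

nx, c = 100, 1.0
dx = 1.0 / nx
dt = 0.5 * dx / c            # Courant number c*dt/dx = 0.5 <= 1: stable

# Initial condition: a smooth bump centred at x = 0.5.
u = [math.exp(-100.0 * (i * dx - 0.5) ** 2) for i in range(nx)]
mass0 = sum(u) * dx          # total "mass", conserved by the scheme

for _ in range(50):          # march forward in time (periodic via u[i-1])
    u = [u[i] - c * dt / dx * (u[i] - u[i - 1]) for i in range(nx)]

mass = sum(u) * dx           # equals mass0 up to rounding error
```

With Courant number ≤ 1 the upwind scheme is stable and monotone and, on a periodic domain, conserves total mass exactly, which serves as a basic sanity check of the kind required of consistent, convergent and stable schemes.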

In contemporary climate models, due to their discrete spatial and temporal structure, a large number of physical processes and cycles cannot be explicitly represented and formulated by the model equations. Climate models are theoretically incapable of simulating processes on spatial scales smaller than about twice the model grid length [32]. Such thermo-dynamical, physical and chemical processes and cycles are parameterized, i.e. expressed parametrically using a simplified description. Study of the climate system by computer simulation requires extensive computational resources. As a result, the predictability problem is usually studied either on the basis of low-order models, which possess the main properties of the climate system (nonlinearity, chaos, dissipativity, etc.), or on the basis of complex climate models using the ensemble approach or the Monte Carlo method.
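A hedged sketch of the ensemble (Monte Carlo) approach just mentioned: an ensemble of initial states perturbed by ±10⁻⁶ is propagated forward, and the growth of the ensemble standard deviation quantifies the loss of predictability. The ensemble size, the perturbation amplitude and the logistic-map "model" are all illustrative assumptions.

```python
import random

# Monte Carlo / ensemble sketch: propagate an ensemble of slightly perturbed
# initial states and watch the ensemble spread grow; the logistic map is an
# illustrative stand-in for a climate model.

def f(x):
    return 4.0 * x * (1.0 - x)

def spread(members):
    """Ensemble standard deviation."""
    m = sum(members) / len(members)
    return (sum((x - m) ** 2 for x in members) / len(members)) ** 0.5

random.seed(1)
ensemble = [0.3 + random.uniform(-1e-6, 1e-6) for _ in range(50)]
s_initial = spread(ensemble)

for _ in range(40):                  # 40 steps: ample for 1e-6 errors to saturate
    ensemble = [f(x) for x in ensemble]

s_final = spread(ensemble)           # grows by many orders of magnitude
```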

## 4. Predictability of climate system

## 4.1. Predictability of the first kind


The first kind predictability of climate processes (predictability with respect to the initial conditions) will be considered under the assumption that the climate system (1) evolves on its attractor. Since system (1) is a nonlinear dissipative dynamical system, its attractor, known as a strange attractor, has an extremely complex fractal structure and can be characterized by such parameters as dimension, characteristic Lyapunov exponents, invariant measure, asymptotically steady solutions and others. If some trajectory of system (1) is enclosed in a bounded phase volume (attractor), then the system's dynamics demonstrate deterministic chaos: the behaviour of the simulated system resembles a stochastic process despite the fact that the system is described by deterministic laws and its evolution is governed by deterministic differential equations. So, all orbits of the system that start close enough will diverge from one another but will never depart from the attractor. The rate of separation of infinitesimally close orbits is characterized by the positive Lyapunov exponents. The number of directions along which an orbit is unstable equals the number of positive Lyapunov exponents n<sub>λ</sub> (note that n<sub>λ</sub> < n, where n is the system's dimension). Thus, trajectories of climate dynamical systems are Lyapunov unstable.
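The divergence of initially close orbits that stay on a bounded attractor can be demonstrated with the classic Lorenz-63 system, a standard low-order chaotic model used here purely as an illustration; the forward-Euler step and the step size are simplifying assumptions.

```python
# Sensitivity to initial conditions on the Lorenz-63 attractor (illustrative
# stand-in for a chaotic climate trajectory). Two states that start 1e-8 apart
# are integrated with a crude forward-Euler step; their separation grows by
# orders of magnitude while both orbits remain bounded.

def lorenz_step(state, dt=0.001, s=10.0, r=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations."""
    x, y, z = state
    return (x + dt * s * (y - x),
            y + dt * (x * (r - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
for _ in range(5000):                 # spin-up onto the attractor
    a = lorenz_step(a)

b = (a[0] + 1e-8, a[1], a[2])         # perturbed twin

for _ in range(10000):                # integrate both for 10 time units
    a, b = lorenz_step(a), lorenz_step(b)

sep = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5   # final separation
```

The separation grows roughly exponentially (at a rate set by the largest positive Lyapunov exponent) while both trajectories remain confined to the bounded attractor.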

To consider the initial growth rates of errors in the initial conditions let us linearize Eq. (1) around some trajectory to obtain the equation in variations:

$$d\mathbf{x}'/dt = (\partial F/\partial \mathbf{x})\,\mathbf{x}', \quad \mathbf{x}'|\_{t=0} = \mathbf{x}'\_0,\tag{38}$$

where ∂F/∂x is the Jacobian matrix evaluated along the trajectory. Its solution defines the tangent-linear propagator Mt between the initial error x′0 and the forecast error x′(t) = Mt x′0 at time t. Obviously, one can obtain

$$\|\mathbf{x}'(t)\|^2 = (M\_t \mathbf{x}\_0', M\_t \mathbf{x}\_0') = (M\_t^\* M\_t \mathbf{x}\_0', \mathbf{x}\_0'),\tag{39}$$

where (·,·) is the inner product in R<sup>n</sup> and M<sup>*</sup><sub>t</sub> is the transpose of M<sub>t</sub>. Since the operator M<sup>*</sup><sub>t</sub>M<sub>t</sub> is self-adjoint, then for any t one can consider the following eigenvalue problem:

$$
M\_t^\* M\_t \psi\_i = \sigma\_i \psi\_i,\tag{40}
$$

where σ<sub>i</sub> is the ith eigenvalue of the matrix M<sup>*</sup><sub>t</sub>M<sub>t</sub> and ψ<sub>i</sub> is the corresponding eigenvector. Representing x′<sub>0</sub> as the series x′<sub>0</sub> = ∑<sub>i</sub> α<sub>i</sub>ψ<sub>i</sub>, one can get ‖x′(t)‖² = ∑<sub>i</sub> σ<sub>i</sub>α<sub>i</sub>². So, the forecast error on the temporal interval [0, t] depends on the errors in the initial distribution over the eigenvectors ψ<sub>i</sub> and on the singular values of the tangent linear propagator M<sub>t</sub>. Since system (1) is ergodic, we can also calculate the Lyapunov exponents λ<sub>i</sub> in accordance with the multiplicative ergodic theorem [33]:

$$\lambda\_i = \lim\_{t \to \infty} \frac{1}{t} \ln \sigma\_i(M\_t^\* M\_t), \quad i = 1, \ldots, n. \tag{41}$$
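As a concrete one-dimensional illustration of the limit in Eq. (41), the sketch below estimates the largest Lyapunov exponent of the logistic map (an example chosen here for illustration; it is not a model from this chapter) as the trajectory average of ln|f′(x)|; for r = 4 the exact value is ln 2:

```python
import math

def lyapunov_logistic(r=4.0, x0=0.3, n_transient=1000, n_iter=200000):
    """Largest Lyapunov exponent of the logistic map x -> r x (1 - x),
    estimated as the trajectory average of ln|f'(x)| -- a one-dimensional
    analogue of the limit in Eq. (41)."""
    x = x0
    for _ in range(n_transient):              # let the orbit settle onto the attractor
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n_iter):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))   # ln|f'(x_n)|
        x = r * x * (1.0 - x)
        x = min(max(x, 1e-15), 1.0 - 1e-15)   # guard against round-off escaping [0, 1]
    return acc / n_iter

lam = lyapunov_logistic()
print(lam)   # close to ln 2 ≈ 0.6931 for r = 4
```

The transient is discarded so that the average is taken with respect to the invariant measure on the attractor, exactly as the ergodicity argument above requires.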

The Lyapunov exponents define the exponential growth (decay) of linearly independent components of x′ as x′<sub>0</sub> → 0. The knowledge of the Lyapunov exponent spectrum of a dynamical system allows one to estimate the attractor fractal dimension, the rate of Kolmogorov-Sinai entropy production and the characteristic e-folding time. Knowledge of these parameters is very important for the stability and predictability analysis of dynamical systems. The fractal dimension of attractors of dissipative dynamical systems can be determined by applying the Kaplan-Yorke conjecture [34]:

$$D\_{KY} = J + \frac{1}{|\lambda\_{J+1}|} \sum\_{i=1}^{J} \lambda\_i,\tag{42}$$

where J is the maximum integer such that the sum of the J largest exponents is still nonnegative, i.e. ∑<sup>J</sup><sub>i=1</sub>λ<sub>i</sub> ≥ 0. The sum of all positive Lyapunov exponents, according to the theorem in [35], gives an estimate of the Kolmogorov-Sinai entropy, i.e. the value showing the mean divergence of the trajectories on the attractor. The Lyapunov exponents in (42) are arranged as λ<sub>1</sub> ≥ λ<sub>2</sub> ≥ … ≥ λ<sub>n</sub>. The multiplicative inverse (reciprocal) of the largest Lyapunov exponent is referred to as the characteristic e-folding time.
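Eq. (42) translates directly into a few lines of code. The sketch below is illustrative only; the Lorenz-63 exponent values are approximate figures from the literature:

```python
def kaplan_yorke_dimension(exponents):
    """Kaplan-Yorke dimension, Eq. (42): J is the largest index such that
    lambda_1 + ... + lambda_J >= 0; the fractional part uses |lambda_{J+1}|."""
    lam = sorted(exponents, reverse=True)       # lambda_1 >= lambda_2 >= ...
    partial, J = 0.0, 0
    for l in lam:
        if partial + l >= 0.0:
            partial += l
            J += 1
        else:
            break
    if J == len(lam):                           # sum never turns negative
        return float(J)
    return J + partial / abs(lam[J])

# Approximate literature values of the Lorenz-63 Lyapunov spectrum
d_lorenz = kaplan_yorke_dimension([0.906, 0.0, -14.572])
print(d_lorenz)   # ≈ 2.062, i.e. a fractal dimension slightly above 2
```

For a stable fixed point (all exponents negative) the function returns 0, consistent with a zero-dimensional attractor.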

Let δ<sub>0</sub> be the initial perturbation of x<sub>0</sub> used to integrate equation (8). Since the system is Lyapunov unstable, after some sufficiently large temporal interval of integration the distance between the two hyper-points in the phase space reaches some value δ<sub>t</sub>. Let δ̄<sub>t</sub> be the accepted error tolerance; then the predictability time of the system can be roughly estimated as


$$T\_p \approx \lambda\_{\text{max}}^{-1} \ln(\overline{\delta}\_t / \delta\_0),\tag{43}$$

where λ<sub>max</sub> is the leading Lyapunov exponent. The error-doubling time can be calculated as t = ln 2/λ<sub>max</sub>. However, Lyapunov exponents are a practical instrument only for estimating the predictability of low-order dynamical systems [36].
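Both estimates reduce to one-line formulas. In the sketch below all numerical values (growth rate, initial error, tolerance) are hypothetical, chosen only to illustrate the orders of magnitude:

```python
import math

def predictability_time(lam_max, delta0, delta_tol):
    """Rough predictability horizon, Eq. (43): T_p ~ (1/lambda_max) ln(tolerance/initial error)."""
    return math.log(delta_tol / delta0) / lam_max

def doubling_time(lam_max):
    """Error-doubling time, t = ln 2 / lambda_max."""
    return math.log(2.0) / lam_max

# Hypothetical numbers: leading exponent 0.35 (1/day), initial error 0.1, tolerance 5.0
tp = predictability_time(0.35, 0.1, 5.0)
td = doubling_time(0.35)
print(tp, td)   # ≈ 11.2 days and ≈ 1.98 days
```

Note that T<sub>p</sub> grows only logarithmically with improvements in the initial error δ<sub>0</sub>, which is why reducing observation errors extends the forecast horizon so slowly.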

Climate data observations are subject to measurement errors. The simplest way to represent the resulting uncertainty is to define the probability density function (PDF) ρ(x, t<sub>0</sub>) or, generally, a set of finite measure μ<sub>0</sub> on which the initial state x<sub>0</sub> is concentrated. The time evolution of the system leads to a divergence and mixing of points of this set. Since the initial state x<sub>0</sub> is concentrated on a set having the measure μ<sub>0</sub>, after some period of time the measure will become μ<sub>t</sub>. Let μ̄ be the invariant ergodic measure and suppose that the convergence μ<sub>t</sub> → μ̄ does take place. Hence, at a certain time t → t<sub>ε</sub> the measure μ<sub>t</sub> falls into the ε-neighbourhood of μ̄. Consequently, the initial data information characterized by μ<sub>0</sub> will be completely lost. So, one can say that the time t<sub>ε</sub> defines the potential predictability of the system under consideration [16]. Thus, a focal point of the predictability problem is to prove the existence of the ergodic measure and of the corresponding convergence theorem. This problem, however, is extremely difficult to solve because the structure of the invariant measure generated on the system attractor is sophisticated and non-smooth. To avoid this problem, stochastic regularization can be applied [37]. So, in lieu of system (1), the following stochastic dynamical system will be considered [16]:

$$d\mathbf{x}/dt = F(\mathbf{x}) + \eta(t),\tag{44}$$

where η is a Gaussian stochastic process: 〈η<sub>i</sub>(t)η<sub>j</sub>(t′)〉 = 2d<sub>ij</sub>δ(t−t′), d<sub>ij</sub> ≥ 0. This procedure is correct since our knowledge of the model parameters is always limited; thus, real climate models have random errors, which are represented by the term η. Under the assumption that d<sub>ij</sub> = d, one can write the Fokker-Planck equation with respect to the PDF ρ(x, t), which describes the evolution of ρ [30]:

$$
\partial \rho / \partial t + \operatorname{div} \left( F(\mathbf{x}) \rho \right) = d \Delta \rho, \rho \ge 0, \int \rho \, d\mathbf{x} = 1. \tag{45}
$$

Let ρ̄ be a stationary solution to Eq. (45), i.e. div(F(x)ρ̄) = dΔρ̄. If x belongs to a compact manifold without boundary, then ρ̄ is asymptotically stable [37]. The existence of a stationary solution (i.e. attractor) at infinity has been proved for finite-dimensional dynamical systems [38].

Suppose that the initial condition x<sub>0</sub> is specified; then the condition ρ|<sub>t=0</sub> = δ(x−x<sub>0</sub>) is also specified and enables us to solve Eq. (45). The numerical integration of Eq. (45) transforms the PDF ρ(x, t), which asymptotically evolves to the stationary solution ρ̄: ρ → ρ̄ as t → t<sub>ε</sub>. Thus, at a sufficiently large time t<sub>ε</sub> predictability is finally lost. There is a question: how can we estimate the time t<sub>ε</sub>? Let us consider the following one-variable stochastic dynamical equation [16]:

$$d\mathbf{x}/dt = -\boldsymbol{\gamma}\mathbf{x} + \boldsymbol{\eta},\tag{46}$$

$$\left. \mathbf{x} \right|\_{t=0} = \mathbf{x}\_0, \langle \eta(t)\eta(t') \rangle = 2\eta^2 \delta(t - t'), \langle \eta \rangle = 0,\tag{47}$$

where x<sup>0</sup> is the known initial condition and η is the Gaussian δ-correlated process. If we average Eq. (46) we obtain

$$d\langle \mathbf{x} \rangle / dt = -\gamma \langle \mathbf{x} \rangle, \quad \langle \mathbf{x} \rangle |\_{t=0} = \mathbf{x}\_0,\tag{48}$$

thus 〈x〉 = x<sub>0</sub>e<sup>−γt</sup>. For the newly introduced variable θ(t) = 〈x²〉, we can obtain the following equation:

$$d\theta/dt = -2\gamma\theta + 2\langle \eta \cdot \mathbf{x} \rangle. \tag{49}$$

Since x(t) = x<sub>0</sub>e<sup>−γt</sup> + ∫<sub>0</sub><sup>t</sup> e<sup>−γ(t−τ)</sup>η(τ)dτ, then

$$d\theta/dt = -2\gamma\theta + 4\eta^2.\tag{50}$$

The solution to this equation is

$$
\theta(t) = \frac{2\eta^2}{\gamma} \left(1 - e^{-2\gamma t}\right). \tag{51}
$$
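As a quick numerical sanity check (not part of the original derivation), one can verify by finite differences that the θ(t) of Eq. (51) satisfies Eq. (50); the parameter values below are arbitrary:

```python
import math

gamma, eta2 = 0.5, 1.3   # hypothetical dissipation rate gamma and noise intensity eta^2

def theta(t):
    """Analytic second moment theta(t) = (2 eta^2 / gamma)(1 - exp(-2 gamma t)), Eq. (51)."""
    return 2.0 * eta2 / gamma * (1.0 - math.exp(-2.0 * gamma * t))

# Check d(theta)/dt = -2*gamma*theta + 4*eta^2, Eq. (50), by central differences.
h = 1e-5
for t in [0.1, 0.5, 2.0, 10.0]:
    lhs = (theta(t + h) - theta(t - h)) / (2.0 * h)   # numerical d(theta)/dt
    rhs = -2.0 * gamma * theta(t) + 4.0 * eta2        # right-hand side of Eq. (50)
    assert abs(lhs - rhs) < 1e-6
print(theta(1e9))   # steady state 2*eta^2/gamma = 5.2
```

The steady state 2η²/γ is exactly the stationary variance parameter θ̄ that appears below in the stationary solution of the Fokker-Planck equation.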

The equation for the PDF ρ has the following form:

$$
\partial \rho / \partial t = \partial (\rho \mathbf{x} \boldsymbol{\gamma}) / \partial \mathbf{x} + \eta^2 \partial^2 \rho / \partial \mathbf{x}^2. \tag{52}
$$

The stationary solution to Eq. (52) can be found if we suppose that the left-hand side is equal to zero. Then we have ρ̄ = (1/√(πθ̄))e<sup>−x²/θ̄</sup>, where θ̄ = 2η²/γ. We assume that the solution to Eq. (52) is of the form

$$\rho(t) = \frac{1}{\sqrt{\pi \theta(t)}} e^{-\left(x - \langle \mathbf{x}(t) \rangle \right)^2 / \theta(t)}.\tag{53}$$

By substituting (53) into (52), one can verify that if θ(t) and 〈x(t)〉 satisfy Eq. (50) and Eq. (48), respectively, then Eq. (53) is the solution to the Fokker-Planck equation (52). As a result, any initial data that are normally distributed will be attracted to the steady solution of Eq. (52), which is also normally distributed. The dissipation parameter γ determines the rate at which the PDF ρ approaches ρ̄. The auto-correlation function for the stationary stochastic process (46) can be written as

Predictability in Deterministic Dynamical Systems with Application to Weather Forecasting and Climate Modelling http://dx.doi.org/10.5772/66752 117

$$C(\tau) = \frac{2\eta^2}{\gamma} e^{-\gamma \tau} = \overline{\theta}\, e^{-\gamma \tau}.\tag{54}$$

Thus, the potential predictability of system (46) can be characterized by the auto-correlation function of the process x(t) and, therefore, the convergence of ρ(t) to ρ̄ can be explored based only on the function C(τ) with time lag τ. This conclusion remains valid for sets of multi-dimensional differential equations [16]. In this case, however, the covariance matrix is used instead of the auto-correlation function. It is very important that for climate models the convergence of the covariance matrix C(t) to the covariance matrix C̄ of the stationary process is defined only by climatological values of the climate model variables. As a result, potential predictability is also determined by climatological data.

Generally, the potential predictability can be defined as the convergence time of the initial distribution to the equilibrium one. To quantify the rate of convergence of one-dimensional distributions to the equilibrium ones, the concept of entropy can be used. If the information entropy S = −∫ρ ln ρ dα is taken as a measure of predictability, then for the Gaussian distribution ρ = (1/√(2πσ²))e<sup>−(α−ᾱ)²/(2σ²)</sup> the information entropy can be expressed as S = ½ln σ² + C. It can be shown that the variance and, therefore, the entropy are directly dependent on the Lyapunov exponents [39]. To study the predictability of the climate system, the relative entropy S<sub>r</sub> = ∫ρ ln(ρ/ρ̄)dα, where ρ̄ is the equilibrium PDF, is a more suitable measure [40]. The relative entropy is invariant with respect to nonlinear transformations of α, and ρ → ρ̄ as t → ∞.
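Because both the transient solution (53) and the stationary ρ̄ are Gaussian, the relative entropy S<sub>r</sub> has a closed form. The sketch below (with hypothetical parameter values, and recalling that the PDFs in (53) are written with the "variance parameter" θ, i.e. the usual variance is θ/2) shows S<sub>r</sub> decaying to zero as ρ(t) → ρ̄:

```python
import math

gamma, eta2, x0 = 0.5, 1.0, 2.0     # hypothetical parameters of model (46)
theta_bar = 2.0 * eta2 / gamma      # stationary variance parameter, Eq. (51) as t -> infinity

def rel_entropy(t):
    """Closed-form relative entropy S_r between the transient Gaussian (53),
    with mean <x(t)> = x0 exp(-gamma t) and variance parameter theta(t), and
    the stationary Gaussian rho_bar with variance parameter theta_bar."""
    theta = theta_bar * (1.0 - math.exp(-2.0 * gamma * t)) or 1e-300  # avoid theta=0 at t=0
    m = x0 * math.exp(-gamma * t)
    return 0.5 * math.log(theta_bar / theta) + theta / (2.0 * theta_bar) \
           + m * m / theta_bar - 0.5

for t in [0.5, 1.0, 2.0, 5.0, 10.0]:
    print(t, rel_entropy(t))   # decays towards 0 as rho(t) -> rho_bar
```

The time at which S<sub>r</sub> drops below a chosen threshold gives a concrete numerical estimate of the potential predictability time t<sub>ε</sub> discussed above.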

#### 4.2. Predictability of the second kind


Predictability of the second kind relates to the predictability of changes in the climate system caused by infinitesimal perturbations in the parameters that describe the external forcing. Climate prediction does not involve forecasting weather conditions at either a certain geographical region or globally. On the contrary, climate prediction aims to forecast the statistics of the climate system averaged over a sufficiently long period of time. So, we are interested in how external perturbations affect certain aspects of climate statistics, such as the first (mean x̄) and/or second (variance σ<sub>x</sub>²) moments. One of the most important problems in the exploration of predictability of the second kind is to distinguish the response signal of the climate system to the perturbed external forcing from the noise in the model output. The signal-to-noise ratio can be used to judge the usefulness of the obtained climate system response. Thus, the predictability of the second kind is mathematically reduced to finding the response function of the climate system model [39].

Consider the following finite-dimensional dynamical system that is controlled by some external forcing f (e.g. the concentration of carbon dioxide in the atmosphere):

$$d\mathbf{x}/dt = F(\mathbf{x}) + f, \quad \mathbf{x}|\_{t=0} = \mathbf{x}\_0.\tag{55}$$

Suppose that system (55) possesses the attractor A and let μ be its invariant measure. The behaviour of this system will be explored on the attractor A. Since system (55) a priori possesses the property of ergodicity, its statistical characteristics are calculated by averaging along a single, sufficiently long, random trajectory. Thus, the average state 〈x〉 and variance 〈σ<sub>x</sub>²〉 of system (55) are defined, respectively, as

$$\langle \mathbf{x} \rangle = \lim\_{T \to \infty} \frac{1}{T} \int\_0^T \mathbf{x}(t)\,dt = \int\_{A} \mathbf{x}\,d\mu, \quad \langle \sigma\_{x}^{2} \rangle = \int\_{A} (\mathbf{x} - \langle \mathbf{x} \rangle)^{2}\,d\mu.\tag{56}$$

Let system (55) be perturbed by an infinitesimal disturbance in the external forcing δf such that δf ≪ f :

$$d\mathbf{x}^\*/dt = \mathbf{F}(\mathbf{x}^\*) + f + \delta f. \tag{57}$$

For this system, 〈x*〉 = ∫<sub>A</sub> x* dμ* and 〈σ<sub>x*</sub>²〉 = ∫<sub>A</sub>(x* − 〈x*〉)²dμ*. Let us introduce the new variable x′(t) = x(t) − x*(t). Assuming that ‖x′‖ is rather small, then, combining (55) and (57), one can obtain the following linear equation for the variable x′:

$$d\mathbf{x}^{'}/dt = \mathbf{J}(\mathbf{x})\mathbf{x}^{'} + \delta \mathbf{f}.\tag{58}$$

where J(x) = ∂F/∂x is the Jacobian. Let δf be a step function that is activated at t = 0; then the solution to Eq. (58) can be written in terms of the Green's function:

$$\mathbf{x}'(t) = \int\_0^t \mathbf{G}(t, t') \delta f(t') dt'. \tag{59}$$

The operator R = ∫<sub>0</sub><sup>t</sup> G(t, t′)dt′ is the sought-for response function (operator). If at t = 0 the distribution of initial states is identical for both the unperturbed (55) and perturbed (57) systems, then one can calculate the average response operator:

$$
\langle \mathbf{R} \rangle = \int\_0^t \langle G(t, t') \rangle dt' = \int\_0^t G(t - t') d(t - t'). \tag{60}
$$

By averaging both sides of Eq. (59), one can get the following linear equation to calculate the system's response to the external forcing:


$$
\langle \mathbf{x'} \rangle = \langle \mathbf{R} \rangle \delta f.\tag{61}
$$

Suppose that system (55) is regular, i.e. the quadratic conservation law holds for it and the system itself satisfies the Liouville equation for incompressibility in the phase space. Assume also that the system is in equilibrium. Taking into consideration the fluctuation-dissipation theorem [41], the average impulse response operator of a regular system in equilibrium is expressed via the system's statistics:

$$
\langle G(t, t') \rangle = G(t - t') = \mathcal{C}(t - t') \mathcal{C}^{-1}(0), \tag{62}
$$

where C(t−t′) = 〈x(t)x<sup>T</sup>(t′)〉 is the system's auto-correlation matrix with time lag τ = t−t′. Now we can combine (60) and (62) to get the following well-known formula [42]:

$$\langle \mathbf{x'} \rangle = \int\_{0}^{\infty} \mathbf{C}(t) \mathbf{C}^{-1}(0)\,dt \cdot \delta f.\tag{63}$$

Thus, the mean response of climate system to external forcing is determined by observations of unperturbed climate oscillation.
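For the one-variable linear model (46), everything in Eq. (63) is available in closed form, which makes a convenient sanity check: with C(τ)/C(0) = e<sup>−γτ</sup> from (54), the fluctuation-dissipation response must equal the directly computed shift δf/γ of the stationary mean. The parameter values below are hypothetical:

```python
import math

gamma = 0.4           # hypothetical dissipation rate of the one-variable model (46)
delta_f = 0.05        # small constant forcing perturbation delta f

# FDT response, Eq. (63): <x'> = ( integral of C(t) C(0)^{-1} dt ) * delta_f,
# with C(t)/C(0) = exp(-gamma t) for the linear model (46), per Eq. (54).
dt, T = 1e-3, 50.0
integral = sum(math.exp(-gamma * k * dt) * dt for k in range(int(T / dt)))
fdt_response = integral * delta_f

# Direct answer: a constant forcing shifts the stationary mean of (46) to delta_f / gamma.
direct_response = delta_f / gamma

print(fdt_response, direct_response)   # both ≈ 0.125
```

The agreement illustrates the statement above: the response to a small forcing is recoverable purely from the unperturbed (climatological) correlation statistics.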

#### 5. Concluding remarks


The prediction of climate change caused by natural processes and human-induced drivers is one of the most critical scientific issues facing mankind in the 21st century. Computer-simulated climate models represent a very powerful and, perhaps, the only research instrument for studying climate and its dynamics. One of the key components of climate models, namely the model of the atmospheric general circulation, currently also serves as a primary tool for numerical weather prediction all around the globe. However, the climate (atmospheric) system's trajectory calculated via numerical integration of the multi-dimensional partial differential equations that describe the climate (atmospheric) system evolution is unstable with respect to both perturbations (errors) in the initial conditions and infinitesimal external forcing expressed by some model parameters and/or boundary conditions. This instability limits the time horizon of the validity of the climate (weather) forecast and leads to the predictability problem.

In this chapter, the climate system is considered as a complex adaptive dynamical system that possesses a number of specific properties such as, for example, dissipativity, nonlinearity and chaoticity. From this perspective, the climate predictability problem is best discussed and analysed by formally examining two kinds of predictability. The first kind of predictability refers to the initial value problem (estimating the impact of perturbations in the initial conditions on the forecast skill), while the second kind relates to the boundary value problem (estimating the impact of external forcing on the system's behaviour).

## Author details

Sergei Soldatenko\* and Rafael Yusupov

\*Address all correspondence to: s.soldatenko@bom.gov.au

St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences, St. Petersburg, Russia

## References


[1] Washington W.M., Parkinson C.L. Introduction to Three-Dimensional Climate Modelling. 2nd ed. Sausalito, California: University Science Book; 2005. 368 pp.

[2] McGuffie K., Henderson-Sellers A. The Climate Modelling Primer. 4th ed. New York: J. Wiley & Sons; 2014. 456 pp.

[3] Randall D.A., Wood R.A., Bony S., et al. Climate models and their evaluation. In: Solomon S., Qin D., Manning M., et al., editors. Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge and New York: Cambridge University Press; 2007. 74 pp.

[4] Coiffier J. Fundamentals of Numerical Weather Prediction. Cambridge: Cambridge University Press; 2012. 368 pp.

[5] Parker W.S. Confirmation and adequacy-for-purpose in climate modelling. Proc. Aristotel. Soc. Suppl. Vol. 2009; 83: 233–249.

[6] Lorenz E.N. Deterministic nonperiodic flow. J. Atmos. Sci. 1963; 20: 130–141.

[7] Selvam A. Chaotic Climate Dynamics. Bristol, UK: Luniver Press; 2007. 156 pp.

[8] Thompson P.D. Uncertainty of initial state as a factor in the predictability of large-scale atmospheric flow patterns. Tellus. 1957; 9: 275–295.

[9] Lorenz E.N. A study of the predictability of a 28-variable atmospheric model. Tellus. 1965; 17: 321–333.

[10] Smagorinsky J. Problems and promises of deterministic extended range forecasting. Bull. Amer. Meteor. Soc. 1969; 50: 286–312.

[11] Leith C.E. Predictability in theory and practice. In: Hoskins B.J., Pearce R.P., editors. Large-Scale Dynamical Processes in the Atmosphere. New York: Academic Press; 1983. pp. 365–383.

[12] Fraedrich K. Estimating weather and climate predictability on attractors. J. Atmos. Sci. 1987; 44: 722–728.


[30] Dymnikov V.P., Filatov A.N. Mathematics of Climate Modelling. Boston: Birkhäuser; 1997. 264 pp.

[31] Washington W.M., VerPlank L. A Description of Coupled General Circulation Models of the Atmosphere and Oceans Used for Carbon Dioxide Studies. Boulder, Colorado: NCAR (National Centre for Atmospheric Research)/TN-271; 1986. 34 pp.

[32] Mesinger F., Arakawa A. Numerical Methods Used in Atmospheric Models. Geneva: World Meteorological Organization; GARP Publication Series; 1979. Vol. 17, 64 pp.

[33] Oseledets V.I. Multiplicative ergodic theorem: Characteristic Lyapunov exponents of dynamical systems. Trans. Moscow Math. Soc. 1968; 19: 179–210.

[34] Kaplan J.L., Yorke J.A. Chaotic behaviour in multidimensional difference equations. In: Peitgen H.-O., Walter H.-O., editors. Functional Differential Equations and Approximations of Fixed Points. Lecture Notes in Mathematics. Berlin: Springer-Verlag; 1979. pp. 228–237.

[35] Pesin Ya.B. Characteristic Lyapunov exponents and smooth ergodic theory. Russian Math. Surveys. 1977; 32: 55–114.

[36] Palmer T.N. Predicting Uncertainty in Forecasts of Weather and Climate. ECMWF Technical Memorandum No. 294. Shinfield Park, Reading: ECMWF (European Centre for Medium Range Weather Forecasting); 1999. 64 pp.

[37] Zeeman E.C. Stability of dynamical systems. Nonlinearity. 1988; 1: 115–155.

[38] Noarov A.I. Sufficient condition for the existence of a stationary solution to the Fokker-Planck equation. J. Comput. Math. Physics. 1997; 5: 587–598.

[39] Dymnikov V.P. Stability and Predictability of Large Scale Atmospheric Processes. Moscow: INM RAS; 2007. 283 pp.

[40] Kleeman R. Measuring dynamical prediction utility using relative entropy. J. Atmos. Sci. 2002; 59: 2057–2072.

[41] Kraichnan R.H. Classical fluctuation-relaxation theorem. Phys. Rev. 1959; 113: 1181–1182.

[42] Leith C.E. Climate response and fluctuation dissipation. J. Atmos. Sci. 1975; 32: 2022–2026.

#### **Emergence of Classical Distributions from Quantum Distributions: The Continuous Energy Spectra Case**

Gabino Torres-Vega

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/109722

#### Abstract

We explore the properties of quantum states and operators that are conjugate to the Hamiltonian eigenstates and operator when the Hamiltonian spectrum is continuous, i.e., we find time-like operators T̂ such that [T̂, Ĥ] = iℏ. This is a property expected of a time operator. We explicitly unfold the momentum-sign degeneracy of the energy states. We consider the free-particle case, and we find, among other things, that the time states are also the solution of the quantized version of the classical motion of the particle.

Keywords: time operator, time eigenstates, conjugate states, free-particle time eigenstates

## 1. Introduction

The problem of the time operator in quantum mechanics has been studied by numerous researchers for many years and remains a subject of current research. There are many instances in which a time variable is useful. An example of such a situation is calculating the tunneling time of a particle passing through a barrier. This time was recently measured, and it was shown to vanish [1, 2].

There are several approaches in this area, developed by Kijowski [3], Hegerfeldt et al. [4], Weyl [5], Galapon [6], Arai and Yasumichi [7, 8], Strauss et al. [9, 10], and Hall [11], among others. The work of these authors may appear to follow four differing approaches; however, we shall show that they are simply different, approximate, treatments of the same theme.

© 2017 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Some of these approaches are similar to the work of Weyl on periodic functions [5]. Weyl defined the Hermitian form

$$-i\sum\_{m\neq n} \frac{(-1)^{n-m}}{n-m} \mathbf{x}\_m \mathbf{x}\_n,\tag{1}$$

where {x<sub>m</sub>} are the components of a vector in the basis e<sup>i2πm/n</sup>/√n, m = 0, 1, …, n − 1. Galapon, Arai et al., Strauss et al., and Hall used a similar expression but with a factor of one instead of the (−1)<sup>n−m</sup> factor. The expressions of Galapon and Arai are valid only in a limited region of the Hilbert space. Strauss wanted to obtain a Lyapunov function; instead, he obtained a function that only gives the sign of time, as was shown by Hall. A different factor might result in a time operator that is valid over the entire Hilbert space. In this chapter, we find a proper factor to obtain sensible time-like kets and operators that are valid over the entire Hilbert space, for the purely continuous energy spectrum case.
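As a small numerical sketch (with an illustrative dimension n = 6), one can build the matrix A<sub>mn</sub> = −i(−1)<sup>n−m</sup>/(n − m) underlying Weyl's form (1) and confirm that it is Hermitian:

```python
# Build the off-diagonal matrix A_{mn} = -i (-1)^(n-m) / (n - m), m != n,
# behind Weyl's Hermitian form (1), and verify A equals its conjugate transpose.
n = 6
A = [[0j if col == row else -1j * (-1) ** (col - row) / (col - row)
      for col in range(n)] for row in range(n)]

hermitian = all(A[r][c] == A[c][r].conjugate() for r in range(n) for c in range(n))
print(hermitian)   # True
```

Hermiticity follows because both the sign factor (−1)<sup>n−m</sup> and the denominator change consistently under the exchange m ↔ n together with complex conjugation, so the quadratic form (1) is real-valued.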

We introduce time-like kets and operators following a different route. We search for the states that are conjugate to the energy eigenstates, which is a natural approach to this subject. We find time kets and operators that are valid over the entire Hilbert space. We also find that we can make contact with the operators defined by other authors. These operators lack the oscillatory function found in this work.

Time is typically viewed as a parameter and not as a dynamical variable in classical and quantum mechanics. However, the characteristics of the time variable depend on the representation being considered. In classical mechanics, we have shown that we can talk of translations along the energy direction; in that case, the energy variable becomes a parameter, and time becomes a dynamical variable, a function of the phase-space variables [12].

For comparison, let us consider the coordinate representation of quantum mechanics. If a variable, $s$, with units of length is the parameter used in the shifting along the coordinate direction through the displacement operator $e^{is\hat{P}/\hbar}$, then, in the momentum representation, $s$ becomes the coordinate operator and the momentum $\hat{P}$ becomes a parameter. A similar behavior is expected when considering energy-time representations. However, the problem is to define a time representation in quantum mechanics, and we use the conjugacy concept in this chapter to find such a representation.

The basis for this work is that time is another coordinate that has to be determined. The conjugate pair coordinate-momentum is a pair of conjugate coordinates that are used to define representations of wave functions and operators. Similarly, energy and time can be used as an alternative coordinate set, but the time coordinate has to be defined. Like coordinate and momentum eigenstates, the time eigenstates will also be nonnormalizable, and their peculiarities originate from the fact that energy is a semibounded quantity.

In Section 2, we use the rewriting of the identity operator in terms of energy eigenstates to define the states that are conjugate to the energy eigenstates and subsequently determine some of their properties and several time-like operators. We define time states for negative and positive momentum values.

Section 3 is devoted to time-like operators and their properties. Time operators are written in three different forms. We verify that the time kets are eigenkets of the time operators. We find "evolution equations" for time kets and note that the time operators are the generators for translations along the energy direction. We also discuss how a wave packet is shifted along the energy direction.

In numerical calculations, we have to address finite regions of variables and not infinite intervals. Therefore, we focus our attention on approximate expressions for time operators in Section 4. We find approximate expressions of time operators that can be used in numerical calculations and are of help in the understanding of the expressions found by other authors.

The free-particle problem is analyzed in Section 5. We find expressions for the time kets for the free particle. The coordinate matrix elements of the time operators are also found, and we learn that the time states are also a solution to the quantum analog of the classical motion. The support of the time states embodies the classical trajectories, and as $\hbar \to 0$, we recover the classical motion.

The chapter ends with some concluding remarks.

## 2. Time eigenstates


In this section, we define the states that are conjugate to the energy eigenstates and the corresponding conjugate operator to a given quantum Hamiltonian $\hat{H}$. We also derive some of their properties. The definition of conjugacy between the operators $\hat{T}$ and $\hat{H}$ that we will use here is the usual one, i.e., that these operators should comply with the constant commutator relationship $[\hat{T}, \hat{H}] = i\hbar$. We will consider the case of a purely continuous energy spectrum with a Hamiltonian operator $\hat{H}$ of the form $\hat{H} = \hat{P}^2/2m + \hat{V}(\hat{Q})$, where $\hat{P}$ is the momentum operator, $\hat{Q}$ is the coordinate operator, and $\hat{V}(\hat{Q})$ is the potential energy operator. We will also consider that the sign of the momentum operator commutes with the Hamiltonian. The continuous eigenvalues of the Hamiltonian are denoted by $E \in [0, \infty)$ and correspond to the eigenkets $\{|E\rangle\}$.

We will base our definition of time states on rewriting the identity operator in terms of energy eigenstates and using the integral representation of the Dirac delta function. We assume that the Hamiltonian is self-adjoint. Thus, we will work on the span of the Hamiltonian eigenstates, denoted by

$$D = \left\{ |\psi\rangle \,:\, |\psi\rangle = \int_0^\infty dE\, \psi(E)\, |E\rangle, \quad \psi(E) = \langle E|\psi\rangle \right\}. \tag{2}$$

We assume that the closure relationship for the energy eigenstates holds, $\hat{I} = \int_0^{E_m} dE\, |E\rangle\langle E|$. The operator $i$ times the derivative is self-adjoint on a finite interval, and hence we will work in the subspace $E \in [0, E_m]$, $E_m < \infty$, which implies that $p \in [-p_m, p_m]$, $p_m < \infty$.

We start with the rewriting of the identity operator in terms of the energy eigenkets,

$$\begin{split} \hat{I} = \int_0^{E_m} dE\, |E\rangle\langle E| &= \int_0^{E_m} dE'\,dE\, \delta(E-E')\, |E'\rangle\langle E| = \int_0^{E_m} dE'\,dE\, \frac{1}{2\pi\hbar} \int_{-\infty}^{\infty} dt\, e^{it(E-E')/\hbar}\, |E'\rangle\langle E| \\ &= \int_{-\infty}^{\infty} dt \int_0^{E_m} dE'\,dE\, \frac{e^{-itE'/\hbar}}{\sqrt{2\pi\hbar}}\, |E'\rangle\langle E|\, \frac{e^{itE/\hbar}}{\sqrt{2\pi\hbar}}, \end{split} \tag{3a}$$

where we have made use of the properties of the Dirac delta function. We can separate the negative and positive momentum parts of the above expression by means of the closure relationship for the momentum states, obtaining

$$\begin{split} \hat{I} &= \int_{-p_m}^{p_m} dp \int_0^{E_m} dE\, |E\rangle\langle E|p\rangle\langle p|E\rangle\langle E| = \int_{-p_m}^{p_m} dp \int_0^{E_m} dE'\,dE\, \delta(E-E')\, |E'\rangle\langle E'|p\rangle\langle p|E\rangle\langle E| \\ &= \int_{-p_m}^{p_m} dp \int_0^{E_m} dE'\,dE\, \frac{1}{2\pi\hbar} \int_{-\infty}^{\infty} dt\, e^{it(E-E')/\hbar}\, |E'\rangle\langle E'|p\rangle\langle p|E\rangle\langle E| \\ &= \int_{-\infty}^{\infty} dt \int_{-p_m}^{p_m} dp \int_0^{E_m} dE'\,dE\, \frac{e^{-itE'/\hbar}}{\sqrt{2\pi\hbar}}\, |E'\rangle\langle E'|p\rangle\langle p|E\rangle\langle E|\, \frac{e^{itE/\hbar}}{\sqrt{2\pi\hbar}}. \end{split} \tag{3b}$$

Thus, we define time-like kets as

$$|t\rangle := \int\_0^{\mathbb{E}\_m} dE \frac{e^{-itE/\hbar}}{\sqrt{2\pi\hbar}} |E\rangle, \quad |t(p)\rangle := \int\_0^{\mathbb{E}\_m} dE \frac{e^{-itE/\hbar}}{\sqrt{2\pi\hbar}} |E\rangle \langle E|p\rangle. \tag{4}$$

With these kets, the identity operator is written as

$$
\hat{I} = \int\_{-\infty}^{\infty} dt |t\rangle\langle t| = \int\_{-\infty}^{\infty} dt \int\_{-p\_m}^{p\_m} dp |t(p)\rangle\langle t(p)| = \hat{I}\_- + \hat{I}\_+,\tag{5a}
$$

where

$$\hat{I}\_{-}: = \int\_{-\infty}^{\infty} dt \int\_{-p\_{m}}^{0} dp |t(p)\rangle\langle t(p)|, \quad \hat{I}\_{+}: = \int\_{-\infty}^{\infty} dt \int\_{0}^{p\_{m}} dp |t(p)\rangle\langle t(p)|. \tag{5b}$$

Then, the identity operator is written in terms of the time evolution of some bras and kets, which are composed of all the energy eigenstates.

Now, we define time-like operators $\hat{T}$ and $\hat{T}_{\pm}$ by introducing a factor $t$ in the integrand of Eq. (5):

$$
\widehat{T} = \int\_{-\infty}^{\infty} dt \, t|t\rangle\langle t|,\tag{6a}
$$

and

Emergence of Classical Distributions from Quantum Distributions: The Continuous Energy Spectra Case http://dx.doi.org/10.5772/109722 127

$$\hat{T}_{-} := \int_{-\infty}^{\infty} dt\, t \int_{-p_m}^{0} dp\, |t(p)\rangle\langle t(p)|, \quad \hat{T}_{+} := \int_{-\infty}^{\infty} dt\, t \int_0^{p_m} dp\, |t(p)\rangle\langle t(p)|. \tag{6b}$$

The function $e^{itE/\hbar}$ exists only for $E \in [0, E_m]$, so that, for simplicity of notation, we will sometimes include explicitly the factor $\Theta(E) - \Theta(E - E_m)$, where $\Theta$ is the step function, when necessary; otherwise, we will omit this factor.

The commutator between these operators and the Hamiltonian operator is


$$\begin{split}
[\hat{T}, \hat{H}] &= \left[\, \int_{-\infty}^{\infty} dt\, t \int_0^{E_m} dE'\,dE\, \frac{e^{-itE'/\hbar}}{\sqrt{2\pi\hbar}}\, \frac{e^{itE/\hbar}}{\sqrt{2\pi\hbar}}\, |E'\rangle\langle E|,\ \hat{H} \,\right] \\
&= \int_0^{E_m} dE'\,dE\, \frac{1}{2\pi\hbar} \int_{-\infty}^{\infty} dt\, t\, e^{it(E-E')/\hbar}\, (E-E')\, |E'\rangle\langle E| \\
&= \int_0^{E_m} dE'\,dE\, \frac{1}{2\pi\hbar} \int_{-\infty}^{\infty} dt \left[ \left( -i\hbar\,\frac{\partial}{\partial E}\,[\Theta(E)-\Theta(E-E_m)] + i\hbar\,[\delta(E)-\delta(E-E_m)] \right) e^{it(E-E')/\hbar} \right] (E-E')\, |E'\rangle\langle E| \\
&= \int_0^{E_m} dE'\,dE\, \frac{1}{2\pi\hbar} \int_{-\infty}^{\infty} dt\, e^{it(E-E')/\hbar} \left[ [\Theta(E)-\Theta(E-E_m)]\, i\hbar\,\frac{\partial}{\partial E} + i\hbar\,[\delta(E)-\delta(E-E_m)] \right] (E-E')\, |E'\rangle\langle E| \\
&\quad - i\hbar \int_0^{E_m} dE'\, \frac{1}{2\pi\hbar} \int_{-\infty}^{\infty} dt\, e^{it(E-E')/\hbar}\, (E-E')\, |E'\rangle\langle E|\, \Big|_{E=0}^{E_m} \\
&= \int_0^{E_m} dE'\,dE\, \delta(E-E') \left[ i\hbar\,\frac{\partial}{\partial E} + i\hbar\,[\delta(E)-\delta(E-E_m)] \right] (E-E')\, |E'\rangle\langle E| - i\hbar \int_0^{E_m} dE'\, \delta(E-E')\, (E-E')\, |E'\rangle\langle E|\, \Big|_{E=0}^{E_m} \\
&= i\hbar \int_0^{E_m} dE'\,dE\, \delta(E-E')\, |E'\rangle\langle E| + \int_0^{E_m} dE'\,dE\, \delta(E-E')\, (E-E')\, |E'\rangle\, i\hbar\,\frac{\partial}{\partial E}\, \langle E| \\
&= i\hbar \int_0^{E_m} dE\, |E\rangle\langle E| = i\hbar\, \hat{I},
\end{split} \tag{7a}$$

where we have made use of integration by parts. This is one of the properties that a time operator should comply with: the constant commutator with the Hamiltonian. We also have that

$$[\hat{T}_-, \hat{H}] = \left[\, \int_{-\infty}^{\infty} dt\, t \int_{-p_m}^{0} dp \int_0^{E_m} dE'\,dE\, \frac{e^{-itE'/\hbar}}{\sqrt{2\pi\hbar}}\, \frac{e^{itE/\hbar}}{\sqrt{2\pi\hbar}}\, |E'\rangle\langle E'|p\rangle\langle p|E\rangle\langle E|,\ \hat{H} \,\right] = i\hbar \int_{-p_m}^{0} dp \int_0^{E_m} dE\, |E\rangle\langle E|p\rangle\langle p|E\rangle\langle E| = i\hbar\, \hat{I}_-, \tag{7b}$$

by the same integration by parts as in Eq. (7a), and, similarly,

$$[\hat{T}_+, \hat{H}] = i\hbar \int_0^{p_m} dp \int_0^{E_m} dE\, |E\rangle\langle E|p\rangle\langle p|E\rangle\langle E| = i\hbar\, \hat{I}_+. \tag{7c}$$
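The key step in these commutators, restated for clarity, is trading the factor $t$ for an energy derivative under the time integral, with the step-function factor written explicitly:

```latex
t\,[\Theta(E)-\Theta(E-E_m)]\,e^{it(E-E')/\hbar}
  = \Big\{ -i\hbar\,\frac{\partial}{\partial E}\,[\Theta(E)-\Theta(E-E_m)]
           + i\hbar\,[\delta(E)-\delta(E-E_m)] \Big\}\, e^{it(E-E')/\hbar},
```

which holds because $\partial_E\, e^{it(E-E')/\hbar} = (it/\hbar)\, e^{it(E-E')/\hbar}$, while the derivative of the step functions produces the compensating $\delta$ terms; integrating by parts in $E$ then moves the derivative onto the kets and bras.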

The operator

$$-i\hbar \frac{\partial}{\partial E} + i\hbar [\delta(E) - \delta(E - E\_m)] \tag{8}$$

is a time-like operator in the energy representation, which is symmetric in the interval $[0, E_m]$ regardless of the boundary conditions at $E = 0, E_m$, when the functions exist only in the interval $[0, E_m]$.

Thus, we can say that the kets


$$|t(p)\rangle := \int_0^{E_m} dE\, \frac{e^{-itE/\hbar}}{\sqrt{2\pi\hbar}}\, |E\rangle\langle E|p\rangle = \frac{e^{-it\hat{H}/\hbar}}{\sqrt{2\pi\hbar}}\, |p\rangle, \tag{9a}$$

$$|t\rangle := \int\_0^{\mathbb{E}\_m} dE \frac{e^{-itE/\hbar}}{\sqrt{2\pi\hbar}} |E\rangle,\tag{9b}$$

can be considered as time-like kets. We will study some of their properties in what follows.

The inner product between time states is

$$\langle t'(p')|t(p)\rangle = \frac{1}{2\pi\hbar}\, \langle p'|e^{-i(t-t')\hat{H}/\hbar}|p\rangle = \frac{1}{2\pi\hbar}\, \langle p'|p(t-t')\rangle, \tag{10a}$$

$$\begin{split} \langle t'|t\rangle = \int_0^{E_m} dE\, \langle t'|E\rangle\, \langle E|t\rangle &= \int_0^{E_m} dE\, \frac{e^{it'E/\hbar}}{\sqrt{2\pi\hbar}}\, \frac{e^{-itE/\hbar}}{\sqrt{2\pi\hbar}} = \frac{1}{2\pi\hbar} \int_0^{E_m} dE\, e^{i(t'-t)E/\hbar} \\ &= \frac{1}{\pi(t'-t)}\, e^{i(t'-t)E_m/2\hbar}\, \sin\!\left(\frac{E_m}{2\hbar}(t'-t)\right), \end{split} \tag{10b}$$

with limit

$$\lim_{E_m \to \infty} \langle t'|t\rangle = \frac{1}{2}\, \delta(t'-t) + \frac{i}{2\pi(t'-t)}. \tag{10c}$$

Thus, the time states are not orthogonal due to the bounded nature of the Hamiltonian operator.
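As a numerical sanity check of Eq. (10b) (our sketch; the units $\hbar = 1$ and the cutoff $E_m = 2$ are illustrative assumptions), direct quadrature of $\frac{1}{2\pi\hbar}\int_0^{E_m} dE\, e^{i(t'-t)E/\hbar}$ reproduces the closed form:

```python
import numpy as np

hbar, E_m = 1.0, 2.0   # illustrative units and energy cutoff

def overlap_quadrature(tau, n=20001):
    """<t'|t> for tau = t' - t, by trapezoidal integration (middle of Eq. (10b))."""
    E = np.linspace(0.0, E_m, n)
    f = np.exp(1j * tau * E / hbar)
    dE = E[1] - E[0]
    return (f.sum() - 0.5 * (f[0] + f[-1])) * dE / (2.0 * np.pi * hbar)

def overlap_closed(tau):
    """Closed form on the last line of Eq. (10b)."""
    return (np.exp(1j * tau * E_m / (2.0 * hbar))
            * np.sin(E_m * tau / (2.0 * hbar)) / (np.pi * tau))

for tau in (0.3, 1.7, -2.5):
    assert np.isclose(overlap_quadrature(tau), overlap_closed(tau), atol=1e-6)
print("Eq. (10b) verified")
```

The non-vanishing overlap at $t' \neq t$ is visible directly: the sinc-like factor decays only as $1/(t'-t)$, reflecting the semibounded spectrum.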

#### 2.1. Properties of the transformation function between energy and time states

The transformation function between energy and time representations is given by

$$\langle E|t\rangle = \frac{e^{-itE/\hbar}}{\sqrt{2\pi\hbar}}, \quad E \in [0, E\_m], \quad t \in (-\infty, \infty) \,. \tag{11}$$

A property of this transformation function is that it is a sort of eigenfunction of the time-like operator $i\hbar[\delta(E) - \delta(E - E_m)] - i\hbar (\partial/\partial E)[\Theta(E) - \Theta(E - E_m)]$, when the functions exist only in the interval $E \in [0, E_m]$, in the energy representation,

$$\left[i\hbar[\delta(E) - \delta(E - E\_m)] - i\hbar\frac{\partial}{\partial E}[\Theta(E) - \Theta(E - E\_m)]\right] \langle t|E\rangle = t[\Theta(E) - \Theta(E - E\_m)] \,\langle t|E\rangle,\tag{12}$$

and it is also an eigenfunction of the energy operator, $i\hbar\, \partial/\partial t$,

$$i\hbar\frac{\partial}{\partial t}\langle E|t\rangle = i\hbar\frac{\partial}{\partial t}\frac{e^{-i t E/\hbar}}{\sqrt{2\pi\hbar}} = E\langle E|t\rangle. \tag{13}$$

This is similar to the corresponding properties of the transformation function between coordinate and momentum representations. The squared modulus of the transformation function is constant for all values of t and E, as is desired for coordinate variables.

Time kets can be used as a coordinate system for quantum systems and are similar to coordinate or momentum eigenkets. The norm of a wave packet in the time representation is (see Eq. (5))

$$\begin{split} \langle \psi | \psi \rangle &= \int_{-\infty}^{\infty} dt\, \langle \psi | t \rangle\, \langle t | \psi \rangle = \int_{-\infty}^{\infty} dt \int_0^{E_m} dE'\,dE\, \frac{e^{-itE'/\hbar}}{\sqrt{2\pi\hbar}}\, \langle \psi | E' \rangle\, \frac{e^{itE/\hbar}}{\sqrt{2\pi\hbar}}\, \langle E | \psi \rangle \\ &= \int_0^{E_m} dE'\,dE\, \langle \psi | E' \rangle\, \langle E | \psi \rangle\, \frac{1}{2\pi\hbar} \int_{-\infty}^{\infty} dt\, e^{it(E-E')/\hbar} \\ &= \int_0^{E_m} dE'\,dE\, \langle \psi | E' \rangle\, \langle E | \psi \rangle\, \delta(E-E') \\ &= \int_0^{E_m} dE\, |\langle E | \psi \rangle|^2. \end{split} \tag{14}$$

Thus, we will obtain well-defined quantities if the wave packet $|\psi\rangle$ is normalized in the energy representation, i.e., if $\int_0^\infty dE\, |\langle E|\psi\rangle|^2 = 1$. We also note that the transformation from energy to time representations is norm preserving, i.e., it is unitary.

#### 2.2. The time eigenstates are conjugate to the energy eigenstates

Now, the Fourier transform of the time states is

$$\begin{split} \int_{-\infty}^{\infty} dt\, \frac{e^{itE/\hbar}}{\sqrt{2\pi\hbar}}\, |t\rangle &= \int_{-\infty}^{\infty} dt\, \frac{e^{itE/\hbar}}{\sqrt{2\pi\hbar}} \int_0^{E_m} dE'\, \frac{e^{-itE'/\hbar}}{\sqrt{2\pi\hbar}}\, |E'\rangle = \int_0^{E_m} dE'\, |E'\rangle\, \frac{1}{2\pi\hbar} \int_{-\infty}^{\infty} dt\, e^{it(E-E')/\hbar} \\ &= \int_0^{E_m} dE'\, |E'\rangle\, \delta(E-E') = |E\rangle. \end{split} \tag{15}$$

Thus, the kets $|t\rangle$ and $|E\rangle$ are indeed conjugate, i.e., the definition (9) is consistent; $|t\rangle$ and $|E\rangle$ are Fourier transforms of each other, and an eigenstate therefore contains all of the conjugate eigenstates with the same weight.
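The unitarity stated after Eq. (14) and the conjugacy of Eq. (15) can be illustrated on a discrete grid (our sketch; the grid sizes, $\hbar = 1$, $E_m = 4$, and the Gaussian packet are illustrative assumptions): transforming a wave packet $\psi(E)$ to the time representation preserves its norm, and transforming back recovers it.

```python
import numpy as np

hbar, E_m = 1.0, 4.0
nE, nt, t_max = 400, 4000, 200.0   # finite t window approximating (-inf, inf)

E = np.linspace(0.0, E_m, nE)
t = np.linspace(-t_max, t_max, nt)
dE, dt = E[1] - E[0], t[1] - t[0]

# Illustrative wave packet, normalized in the energy representation.
psi_E = np.exp(-(E - 2.0) ** 2).astype(complex)
psi_E /= np.sqrt(np.sum(np.abs(psi_E) ** 2) * dE)

# <t|psi> = int_0^{E_m} dE e^{itE/hbar} psi(E) / sqrt(2 pi hbar)  (cf. Eq. (11))
kernel = np.exp(1j * np.outer(t, E) / hbar) / np.sqrt(2.0 * np.pi * hbar)
psi_t = kernel @ psi_E * dE

# Norm preservation, Eq. (14).
norm_t = np.sum(np.abs(psi_t) ** 2) * dt
assert np.isclose(norm_t, 1.0, atol=5e-3)

# Inverse transform recovers psi(E), Eq. (15).
psi_back = kernel.conj().T @ psi_t * dt
assert np.allclose(psi_back, psi_E, atol=5e-3)
print("energy <-> time transform is unitary on this grid")
```

The finite time window plays the role of the infinite $t$ integral; enlarging `t_max` sharpens the approximate Dirac delta of Eq. (15) and tightens both checks.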

#### 3. Time operators

We now focus on the time operators obtained from the time kets of the previous section and on their properties. Time operators for negative, positive, and any value of the momentum are defined as

$$\hat{T}_{-} = \int_{-\infty}^{\infty} dt \int_{-p_m}^{0} dp\, |t(p)\rangle\, t\, \langle t(p)|, \qquad \hat{T}_{+} = \int_{-\infty}^{\infty} dt \int_0^{p_m} dp\, |t(p)\rangle\, t\, \langle t(p)|, \tag{16a}$$


and


$$
\hat{T} = \int\_{-\infty}^{\infty} dt |t\rangle t \langle t| . \tag{16b}
$$

The last construction was also introduced, from another perspective, by Hegerfeldt et al. [4]. Our construction is different from that of Hegerfeldt et al. because it involves all of the energy eigenstates and not only those that are time-reflection invariant. Our time operator thus already exhibits the time-reversal property.

Time operators can be written in three equivalent forms in the energy representation. One form is

$$\begin{split}
\hat{T}_- &= \int_{-\infty}^{\infty} dt \int_{-p_m}^{0} dp\, |t(p)\rangle\, t\, \langle t(p)| = \int_{-\infty}^{\infty} dt \int_{-p_m}^{0} dp \int_0^{E_m} dE'\,dE\, \frac{e^{-itE'/\hbar}}{\sqrt{2\pi\hbar}}\, |E'\rangle\langle E'|p\rangle\, t\, \langle p|E\rangle\langle E|\, \frac{e^{itE/\hbar}}{\sqrt{2\pi\hbar}} \\
&= \frac{1}{2\pi\hbar} \int_{-\infty}^{\infty} dt \int_{-p_m}^{0} dp \int_0^{E_m} dE'\,dE \left( i\hbar\,\frac{\partial}{\partial E'}\,[\Theta(E')-\Theta(E'-E_m)]\, e^{-itE'/\hbar} - i\hbar\,[\delta(E')-\delta(E'-E_m)]\, e^{-itE'/\hbar} \right) |E'\rangle\langle E'|p\rangle\langle p|E\rangle\langle E|\, e^{itE/\hbar} \\
&= \int_{-p_m}^{0} dp \int_0^{E_m} dE'\,dE\, \delta(E-E') \left( -i\hbar\,\frac{\partial}{\partial E'}\, |E'\rangle\langle E'|p\rangle \right) \langle p|E\rangle\langle E| \\
&= \int_{-p_m}^{0} dp \int_0^{E_m} dE \left( -i\hbar\,\frac{\partial}{\partial E}\, |E\rangle\langle E|p\rangle \right) \langle p|E\rangle\langle E|,
\end{split} \tag{17a}$$

where we have performed an integration by parts. We also have that

$$\hat{T}_{+} = \int_{-\infty}^{\infty} dt \int_0^{p_m} dp\, |t(p)\rangle\, t\, \langle t(p)| = \int_0^{p_m} dp \int_0^{E_m} dE \left( -i\hbar\,\frac{\partial}{\partial E}\, |E\rangle\langle E|p\rangle \right) \langle p|E\rangle\langle E|, \tag{17b}$$

and

$$
\hat{T} = \int\_{-\infty}^{\infty} dt |t\rangle \, t \,\langle t| = \int\_{0}^{E\_m} dE \left( -i\hbar \frac{\partial}{\partial E} |E\rangle \right) \langle E| . \tag{17c}
$$

These are the forms in which the time operators act on energy eigenkets, but they take a different form when they act on wave packets, or on both eigenstates and wave packets.

A second energy representation of time operators is

$$\begin{split}
\hat{T}_- &= \frac{1}{2\pi\hbar} \int_{-\infty}^{\infty} dt \int_{-p_m}^{0} dp \int_0^{E_m} dE'\,dE\, e^{-itE'/\hbar}\, |E'\rangle\langle E'|p\rangle\langle p|E\rangle\langle E| \left( -i\hbar\,\frac{\partial}{\partial E}\,[\Theta(E)-\Theta(E-E_m)] + i\hbar\,[\delta(E)-\delta(E-E_m)] \right) e^{itE/\hbar} \\
&= \int_{-p_m}^{0} dp \int_0^{E_m} dE'\,dE\, \frac{1}{2\pi\hbar} \int_{-\infty}^{\infty} dt\, e^{it(E-E')/\hbar}\, |E'\rangle\langle E'|p\rangle\, i\hbar\,\frac{\partial}{\partial E}\, \langle p|E\rangle\langle E| \\
&= \int_{-p_m}^{0} dp \int_0^{E_m} dE'\,dE\, \delta(E-E')\, |E'\rangle\langle E'|p\rangle\, i\hbar\,\frac{\partial}{\partial E}\, \langle p|E\rangle\langle E| \\
&= \int_{-p_m}^{0} dp \int_0^{E_m} dE\, |E\rangle\langle E|p\rangle\, i\hbar\,\frac{\partial}{\partial E}\, \langle p|E\rangle\langle E|,
\end{split} \tag{18a}$$

$$
\hat{T}\_{+} = \int\_{-\infty}^{\infty} dt \int\_{0}^{p\_{m}} dp \, |t(p)\rangle \, t \, \langle t(p)| \\
= \int\_{0}^{p\_{m}} dp \int\_{0}^{E\_{m}} dE \, |E\rangle \langle E | p \rangle \, i\hbar \frac{\partial}{\partial E} \langle p | E \rangle \langle E |, \tag{18b}
$$

Emergence of Classical Distributions from Quantum Distributions: The Continuous Energy Spectra Case http://dx.doi.org/10.5772/109722 133

where we have performed an integration by parts. We also have that


$$
\hat{T} = \int\_{-\infty}^{\infty} dt |t\rangle \, t \,\langle t| = \int\_{0}^{E\_m} dE |E\rangle i\hbar \frac{d}{dE} \langle E|. \tag{18c}
$$

These are the various forms in which the time operators can act on states in the energy representation. The only difference from the forms that act on energy eigenkets is a minus sign.

Other symmetric expressions for the time operators can also be obtained:

$$\begin{split} \hat{T}\_{-} &= \int\_{-\infty}^{\infty} dt \int\_{-p\_m}^{0} dp \, |t(p)\rangle \, t \, \langle t(p)| \\ &= \int\_{-\infty}^{\infty} dt \int\_{-p\_m}^{0} dp \int\_{0}^{E\_m} dE' dE \, \frac{e^{-it E'/\hbar}}{\sqrt{2\pi\hbar}} |E'\rangle \, \langle E'|p\rangle \, t \, \langle p|E\rangle \, \langle E| \frac{e^{itE/\hbar}}{\sqrt{2\pi\hbar}} \\ &= \int\_{-p\_m}^{0} dp \int\_{0}^{E\_m} dE' dE \, |E'\rangle \, \langle E'|p\rangle \, \langle p|E\rangle \, \langle E| \frac{1}{2\pi\hbar} \int\_{-\infty}^{\infty} dt \, t \, e^{it(E-E')/\hbar} \\ &= -i\hbar \int\_{-p\_m}^{0} dp \int\_{0}^{E\_m} dE' dE \, |E'\rangle \, \langle E'|p\rangle \, \langle p|E\rangle \, \langle E| \, \delta'(E'-E), \end{split} \tag{19a}$$

$$\hat{T}\_{+} = -i\hbar \int\_{0}^{p\_m} dp \int\_{0}^{E\_m} dE' dE \, |E'\rangle \, \langle E'|p\rangle \, \langle p|E\rangle \, \langle E| \, \delta'(E'-E), \tag{19b}$$

and

$$
\hat{T} = -i\hbar \int\_0^{E\_m} dE' dE |E'\rangle \,\,\langle E| \delta'(E' - E). \tag{19c}
$$

The domain of our time operators is D, defined in Eq. (2). The convergence of quantities depends on the type of wave packet that these operators act on. A wave packet of type $L^2(0, E_m)$ in the energy representation is a good choice (see Eq. (14)). Thus, the domain is invariant under the action of the time operators, and the commutator between the Hamiltonian and the time operators is valid in the entire domain D.
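As a sanity check of the domain statement, here is a minimal SymPy sketch; the quadratic packet φ(E) = E(E_m − E) is our own illustrative choice, not from the chapter. It vanishes at both endpoints, and applying T̂ = iℏ d/dE keeps it square-integrable on (0, E_m):

```python
import sympy as sp

E, Em, hbar = sp.symbols('E E_m hbar', positive=True)

# A sample wave packet in the energy representation that vanishes
# at E = 0 and E = E_m, so it lies in the domain D.
phi = E * (Em - E)

# The time operator acts as i*hbar d/dE on energy wave functions.
T_phi = sp.I * hbar * sp.diff(phi, E)

# Its squared norm on (0, E_m) is finite, so T|phi> stays in L^2(0, E_m).
norm2 = sp.integrate(sp.expand(T_phi * sp.conjugate(T_phi)), (E, 0, Em))
```

The norm evaluates to the finite value ℏ²E_m³/3, so this packet is mapped back into the domain.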

#### 3.1. Time matrix elements of the Hamiltonian

The matrix elements of the Hamiltonian in the time representation are given by

$$\begin{split} \langle t'|\hat{H}|t\rangle &= \int\_{0}^{E\_m} dE' dE \, \frac{e^{it'E'/\hbar}}{\sqrt{2\pi\hbar}} \langle E'|\hat{H}|E\rangle \frac{e^{-itE/\hbar}}{\sqrt{2\pi\hbar}} = \frac{1}{2\pi\hbar} \int\_{0}^{E\_m} dE' dE \, E \, e^{it'E'/\hbar} e^{-itE/\hbar} \langle E'|E\rangle \\ &= \frac{1}{2\pi\hbar} \int\_{0}^{E\_m} dE \, E \, e^{i(t'-t)E/\hbar} = i\hbar \frac{d}{dt} \langle t'|t\rangle = -i\hbar \frac{d}{dt'} \langle t'|t\rangle. \end{split} \tag{20}$$

This is the Schrödinger equation for time kets in the time representation.
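Under the integral sign, Eq. (20) rests on the one-line identity iℏ ∂/∂t e^{i(t′−t)E/ℏ} = E e^{i(t′−t)E/ℏ}, which can be confirmed symbolically (a SymPy sketch):

```python
import sympy as sp

E, t, tp, hbar = sp.symbols('E t t_prime hbar', positive=True)

# Kernel of <t'|t> in Eq. (20) before the E integration.
ker = sp.exp(sp.I * (tp - t) * E / hbar)

# i*hbar d/dt pulls down a factor of E under the integral sign,
# which is the relation <t'|H|t> = i*hbar (d/dt) <t'|t>.
lhs = sp.I * hbar * sp.diff(ker, t)
rhs = E * ker
```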

#### 3.2. The time ket is the eigenstate of the time operator

We can find the characteristic operator of the commutators $[\cdot, \hat{H}]$ and $[\hat{T}, \cdot]$. Because $[\hat{T}, \hat{H}] = i\hbar$ (see Eq. (7a)), the commutator between the operator $e^{-i\varepsilon \hat{T}/\hbar}$, $\varepsilon \in [0, E_m]$, and the Hamiltonian is

$$[e^{-i\varepsilon\hat{T}/\hbar}, \hat{H}] = \sum\_{n=0}^{\infty} \frac{1}{n!} \left(-i\frac{\varepsilon}{\hbar}\right)^{n} [\hat{T}^{n}, \hat{H}] = \sum\_{n=1}^{\infty} \frac{1}{n!} \left(-i\frac{\varepsilon}{\hbar}\right)^{n} i\hbar \, n \, \hat{T}^{n-1} = \varepsilon \, e^{-i\varepsilon \hat{T}/\hbar}. \tag{21}$$

Similarly, the commutator between the time operator and the time propagator is

$$[\hat{T}, e^{-it\hat{H}/\hbar}] = \sum\_{n=0}^{\infty} \frac{1}{n!} \left(-i\frac{t}{\hbar}\right)^{n} [\hat{T}, \hat{H}^{n}] = \sum\_{n=1}^{\infty} \frac{1}{n!} \left(-i\frac{t}{\hbar}\right)^{n} i\hbar \, n \, \hat{H}^{n-1} = t \, e^{-it\hat{H}/\hbar},\tag{22}$$

where $t \in \mathbb{R}$.
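Both commutators can be verified in the energy representation, where Ĥ acts as multiplication by E and T̂ acts as iℏ d/dE on energy wave functions (a SymPy sketch with an arbitrary wave function φ; this illustrates the algebra only, not the domain questions):

```python
import sympy as sp

E, t, hbar = sp.symbols('E t hbar', real=True)
phi = sp.Function('phi')(E)  # arbitrary energy wave function <E|psi>

# T acts as i*hbar d/dE, H as multiplication by E.
T = lambda f: sp.I * hbar * sp.diff(f, E)
H = lambda f: E * f

# [T, H] phi = i*hbar phi, Eq. (7a).
comm_TH = T(H(phi)) - H(T(phi))

# [T, e^{-itH/hbar}] phi = t e^{-itH/hbar} phi, Eq. (22); in this
# representation the propagator is just the phase e^{-itE/hbar}.
U = sp.exp(-sp.I * t * E / hbar)
comm_TU = T(U * phi) - U * T(phi)
```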

The time ket $|t\rangle$ is the time propagation of a zero-time ket $|0\rangle$,

$$
|t\rangle = \int\_0^{\infty} dE \, \frac{e^{-itE/\hbar}}{\sqrt{2\pi\hbar}} |E\rangle = e^{-it\hat{H}/\hbar} |0\rangle, \qquad |0\rangle = \frac{1}{\sqrt{2\pi\hbar}} \int\_0^{\infty} dE \, |E\rangle. \tag{23}
$$

Thus, according to Eq. (22), we can say that the time ket is an eigenstate of the time operator

$$
\hat{T}|t\rangle = \hat{T} e^{-it\hat{H}/\hbar}|0\rangle = e^{-it\hat{H}/\hbar}\hat{T}|0\rangle + t\, e^{-it\hat{H}/\hbar}|0\rangle = t|t\rangle,\tag{24}
$$

where we have set $\hat{T}|0\rangle = 0$ because $|0\rangle$ is the zero-time state.
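In the same energy representation, ⟨E|t⟩ = e^{−itE/ℏ}/√(2πℏ), and the eigenvalue statement of Eq. (24) becomes a one-line derivative check (SymPy sketch):

```python
import sympy as sp

E, t, hbar = sp.symbols('E t hbar', positive=True)

# Energy representation of the time ket, <E|t>.
ket_t = sp.exp(-sp.I * t * E / hbar) / sp.sqrt(2 * sp.pi * hbar)

# T acts as i*hbar d/dE on <E|psi>; the time ket is its eigenfunction
# with eigenvalue t, which is Eq. (24).
T_ket = sp.I * hbar * sp.diff(ket_t, E)
```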

An "evolution equation" for the energy eigenstate is (see Eq. (15))

$$\begin{split} \hat{T}|E\rangle &= \int\_{-\infty}^{\infty} dt \, \frac{e^{itE/\hbar}}{\sqrt{2\pi\hbar}} \hat{T}|t\rangle = \int\_{-\infty}^{\infty} dt \, \frac{e^{itE/\hbar}}{\sqrt{2\pi\hbar}} \, t \, |t\rangle \\ &= \int\_{-\infty}^{\infty} dt \left( -i\hbar \frac{\partial}{\partial E} [\Theta(E) - \Theta(E - E\_m)] \frac{e^{itE/\hbar}}{\sqrt{2\pi\hbar}} + i\hbar [\delta(E) - \delta(E - E\_m)] \frac{e^{itE/\hbar}}{\sqrt{2\pi\hbar}} \right) |t\rangle \\ &= \left( -i\hbar \frac{d}{dE} [\Theta(E) - \Theta(E - E\_m)] + i\hbar [\delta(E) - \delta(E - E\_m)] \right) |E\rangle \\ &= [\Theta(E) - \Theta(E - E\_m)] \left( -i\hbar \frac{d}{dE} \right) |E\rangle. \end{split} \tag{25}$$

Thus, the time operator is the generator of translations along the energy direction. All quantities are well defined as long as E and t belong to their allowed sets of values. For other values of E and $E + \varepsilon$, we will get a linear combination of the energy eigenstates [14].

#### 3.3. Shifting of operators


The shifting of the Hamiltonian along the energy direction is (see Eq. (21))

$$
\hat{H}(\varepsilon) := e^{-i\varepsilon\hat{T}/\hbar} \, \hat{H} \, e^{i\varepsilon\hat{T}/\hbar} = (\hat{H} \, e^{-i\varepsilon\hat{T}/\hbar} + \varepsilon \, e^{-i\varepsilon\hat{T}/\hbar}) \, e^{i\varepsilon\hat{T}/\hbar} = \hat{H} + \varepsilon,\tag{26}
$$

where $0 \le E + \varepsilon$. For the translation of the time operator (see Eq. (22)), we have

$$\hat{T}(t) := e^{it\hat{H}/\hbar} \, \hat{T} \, e^{-it\hat{H}/\hbar} = e^{it\hat{H}/\hbar} (e^{-it\hat{H}/\hbar} \hat{T} + t \, e^{-it\hat{H}/\hbar}) = \hat{T} + t. \tag{27}$$

These operations are well defined as long as $E + \varepsilon \ge 0$ [6, 14]. The derivative with respect to t of the time-shifted operator (27) is

$$\frac{d}{dt}\widehat{T}(t) = \widehat{I},\tag{28}$$

that is, in the energy-time representations, t is a value that the time operator $\hat{T}$ can take and not simply a parameter. Similarly, in the case of a translation of the Hamiltonian operator by the time operator, i.e., Eq. (26), we find that

$$\frac{d}{d\varepsilon}\hat{H}(\varepsilon) = \hat{I}.\tag{29}$$

Therefore, in the energy-time representations, ε is not simply a parameter; it is related to the values that the Hamiltonian $\hat{H}$ can take.
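Since T̂ = iℏ d/dE generates translations in energy, e^{±iεT̂/ℏ} acts on energy wave functions as a shift of the argument; with that representation (the assumption here), Eq. (26) reduces to a substitution exercise (SymPy sketch):

```python
import sympy as sp

E, eps = sp.symbols('E varepsilon', real=True)
phi = sp.Function('phi')

# With T = i*hbar d/dE, e^{+i eps T/hbar} = e^{-eps d/dE} shifts
# phi(E) -> phi(E - eps); e^{-i eps T/hbar} shifts the other way.
shift_plus = lambda f: f.subs(E, E - eps)   # e^{+i eps T/hbar}
shift_minus = lambda f: f.subs(E, E + eps)  # e^{-i eps T/hbar}

# H(eps) phi = e^{-i eps T/hbar} H e^{+i eps T/hbar} phi = (H + eps) phi, Eq. (26).
H_eps_phi = shift_minus(E * shift_plus(phi(E)))
```

The result is (E + ε)φ(E), so the conjugated Hamiltonian is indeed Ĥ + ε.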

Thus, the use of energy and time eigenkets and operators instead of coordinate and momentum eigenkets and operators is similar to going from a parametric representation of curves, with time being the parameter of evolution, to a nonparametric representation in which time is now one of the coordinates.

#### 4. Approximate expressions

In this section, we make contact with expressions that have been used by other authors, whose works did not make use of the Sa(x;1) factor that appears in our results. The derivations in this section allow a better understanding of those earlier expressions.

#### 4.1. Approximating the integral in an infinite interval

As an approximation, we replace the integral over the infinite interval, $(2\pi\hbar)^{-1} \int\_{-\infty}^{\infty} dt$, with the integral over the finite interval $t \in [-T/2, T/2]$, $\lim\_{T\to\infty} (1/T) \int\_{-T/2}^{T/2} dt$. Then,

$$\begin{split} \hat{T}\_{-} &= \int\_{-\infty}^{\infty} dt \int\_{-p\_m}^{0} dp \, |t(p)\rangle \, t \, \langle t(p)| \cong \frac{1}{T} \int\_{-T/2}^{T/2} dt \int\_{-p\_m}^{0} dp \int\_{0}^{E\_m} dE' dE \, e^{-itE'/\hbar} |E'\rangle \langle E'|p\rangle \, t \, \langle p|E\rangle \langle E| e^{itE/\hbar} \\ &= \int\_{-p\_m}^{0} dp \int\_{0}^{E\_m} dE' dE \, |E'\rangle \langle E'|p\rangle \langle p|E\rangle \langle E| \, \frac{1}{T} \int\_{-T/2}^{T/2} dt \, t \, e^{it(E-E')/\hbar} \\ &= \int\_{-p\_m}^{0} dp \int\_{0}^{E\_m} dE' dE \, |E'\rangle \langle E'|p\rangle \langle p|E\rangle \langle E| \, \frac{i\hbar}{E-E'} \, \mathrm{Sa}\left(\frac{T}{2\hbar}(E-E'); 1\right) \\ &= \int\_{0}^{E\_m} dE' dE \, \frac{i\hbar}{E-E'} \, \mathrm{Sa}\left(\frac{T}{2\hbar}(E-E'); 1\right) |E'\rangle \langle E'| \, \hat{I}\_{-} |E\rangle \langle E|, \end{split} \tag{30a}$$

$$\hat{T}\_{+} \cong \int\_{0}^{E\_m} dE' dE \, \frac{i\hbar}{E-E'} \, \mathrm{Sa}\left(\frac{T}{2\hbar}(E-E'); 1\right) |E'\rangle \langle E'| \, \hat{I}\_{+} |E\rangle \langle E|, \tag{30b}$$

and

$$
\hat{T} \cong \int\_0^{E\_m} dE' dE \, |E'\rangle \, \langle E| \frac{i\hbar}{E - E'} \mathrm{Sa}\left( \frac{T}{2\hbar}(E - E'); 1 \right), \tag{30c}
$$

where the Sa function of type one is defined as

$$\mathrm{Sa}(x;1) := \frac{\sin(x)}{x} - \cos(x). \tag{31}$$

A plot of this function can be found in Figure 1. This function is zero at x = 0 and oscillates between approximately ±1. In the limit $T \to \infty$, the integral of $\mathrm{Sa}(Tx/2;1)/Tx$ times a function f(x) gives an approximation to the derivative of the latter at x = 0.

Figure 1. A plot of the function $\mathrm{Sa}(x;1) := \sin(x)/x - \cos(x)$.
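The closed form used in Eq. (30a), (1/T)∫_{−T/2}^{T/2} dt t e^{itx/ℏ} = (iℏ/x) Sa(Tx/2ℏ; 1) with x = E − E′, together with the stated properties of Sa, can be checked numerically (NumPy sketch; ℏ = 1 and the sample values x = 0.7, T = 20 are arbitrary choices):

```python
import numpy as np

hbar = 1.0

def Sa(x):
    """Sa(x;1) = sin(x)/x - cos(x), Eq. (31); np.sinc(y) = sin(pi y)/(pi y)."""
    x = np.asarray(x, dtype=float)
    return np.sinc(x / np.pi) - np.cos(x)

def time_average(x, T, n=200001):
    """(1/T) * integral_{-T/2}^{T/2} dt  t e^{i t x / hbar}, trapezoidal rule."""
    t = np.linspace(-T / 2.0, T / 2.0, n)
    y = t * np.exp(1j * t * x / hbar)
    return np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0 / T

x, T = 0.7, 20.0
closed_form = 1j * hbar / x * Sa(T * x / (2.0 * hbar))
numeric = time_average(x, T)
```

The quadrature agrees with the closed form, Sa vanishes at the origin, and its oscillations stay bounded near ±1.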

Expressions that resemble Eq. (30c), but without the Sa factor, were used by other authors as a function that gives the sign of time in the continuous energy spectrum case [9–11].

## 5. The free particle


As an example of the time kets provided by our method, let us apply the derived results to the free-particle system. We find expressions for the time eigenkets, including the case in which a distinction of the sign of the momentum is needed. In this model, the momentum operator $\hat{P}$ commutes with the Hamiltonian operator $\hat{H}$, indicating a symmetry and allowing for some simplifications.

A set of energy eigenfunctions, in the coordinate representation, for the free-particle model is

$$\langle q | E\_{\pm} \rangle = \frac{e^{\pm i \sqrt{2mE}\, q / \hbar}}{\sqrt{2\pi \hbar}}, \quad E \in [0, \infty). \tag{32}$$

The subscripts in these functions indicate the sign of the momentum of the particle. Thus, the zero-time eigenstate for the free particle is given as

$$\begin{split} \langle q|0\_{\pm}\rangle &:= \int\_{0}^{\infty} dE \, \frac{1}{\sqrt{2\pi\hbar}} \langle q|E\_{\pm}\rangle = \frac{1}{\sqrt{2\pi\hbar}} \int\_{0}^{\infty} dE \, \frac{e^{\pm i\sqrt{2mE}\, q/\hbar}}{\sqrt{2\pi\hbar}} = \frac{1}{\sqrt{2\pi\hbar}} \int\_{0}^{\infty} \frac{p}{m} \, dp \, \frac{e^{\pm ipq/\hbar}}{\sqrt{2\pi\hbar}} \\ &= \frac{1}{m} \left( \mp i\hbar \frac{d}{dq} \right) \frac{1}{2\pi\hbar} \int\_{0}^{\infty} dp \, e^{\pm ipq/\hbar} = \mp i \frac{\hbar}{m} \frac{d}{dq} \left( \frac{\delta(q)}{2} \pm \frac{i}{2\pi q} \right), \end{split} \tag{33}$$

where we have made the change of variable $E = p^2/2m$. The unit of the last ket is time$^{-1}$. Various other authors have used kets obtained by direct quantization of the classical expression for the time variable and have obtained a time ket with units of time$^{1/2}$. However, our kets exhibit the properties discussed in this chapter.

Figure 2 shows a three-dimensional plot of the approximation of the squared modulus of the time states $\langle q|t\_-\rangle$ and $\langle q|t\_+\rangle$, obtained by integrating not from E = 0 to ∞ but only up to a finite, large value of E. They start highly localized at the origin, and subsequently they move away from it and spread with time. The support of these functions resembles the classical motion curve mq = pt.

Figure 2. Three-dimensional plots of the squared modulus of the approximate time kets $|\langle q|t\_-\rangle|^2$ and $|\langle q|t\_+\rangle|^2$ for the free-particle model. The density is initially highly localized at q = 0, but subsequently it spreads and moves away from the origin. Dimensionless units.
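The qualitative behavior in Figure 2 can be reproduced by a direct quadrature of the truncated integral for ⟨q|t₊⟩, written in the momentum variable with E = p²/2m (NumPy sketch; ℏ = m = 1 and the cutoff p_m = 10 are arbitrary choices):

```python
import numpy as np

hbar = m = 1.0
p_m = 10.0                                  # momentum cutoff (finite maximum energy)
p = np.linspace(0.0, p_m, 3001)
dp = p[1] - p[0]
q = np.linspace(-5.0, 40.0, 601)
dq = q[1] - q[0]

def time_state(t):
    """Approximate <q|t_+>: the integral of e^{-itE/hbar} <q|E_+> over E,
    truncated at E = p_m^2/2m, evaluated by the trapezoidal rule in p."""
    f = (p / m) * np.exp(-1j * t * p**2 / (2.0 * m * hbar))  # (p/m) from dE = (p/m)dp
    phase = np.exp(1j * np.outer(q, p) / hbar)
    y = f[None, :] * phase
    return (y[:, 1:] + y[:, :-1]).sum(axis=1) * dp / 2.0 / (2.0 * np.pi * hbar)

rho0 = np.abs(time_state(0.0))**2
rho3 = np.abs(time_state(3.0))**2

i0 = np.argmin(np.abs(q))                   # grid point closest to q = 0
far = q > 2.0
weight_far0 = rho0[far].sum() * dq          # weight away from the origin at t = 0
weight_far3 = rho3[far].sum() * dq          # same at t = 3
```

The density at the origin drops as t grows while the weight at large q increases, matching the spreading along the classical curve mq = pt.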

For the sake of completeness, we write down the matrix elements of the time operators in the coordinate representation. They are

$$\begin{split} \langle q'|\hat{T}\_{\pm}|q\rangle &= \int\_{0}^{\infty} dE \, \langle q'|E\_{\pm}\rangle \left( i\hbar \frac{\partial}{\partial E} \right) \langle E\_{\pm}|q\rangle = \int\_{0}^{\infty} dE \, \frac{e^{\pm i\sqrt{2mE}\, q'/\hbar}}{\sqrt{2\pi\hbar}} \left( i\hbar \frac{\partial}{\partial E} \right) \frac{e^{\mp i\sqrt{2mE}\, q/\hbar}}{\sqrt{2\pi\hbar}} \\ &= \pm \int\_{0}^{\infty} dE \, \frac{e^{\pm i\sqrt{2mE}\, q'/\hbar}}{\sqrt{2\pi\hbar}} \frac{m}{\sqrt{2mE}} \, q \, \frac{e^{\mp i\sqrt{2mE}\, q/\hbar}}{\sqrt{2\pi\hbar}} = \pm \frac{1}{2\pi\hbar} \, q \int\_{0}^{\infty} dp \, e^{\pm ip(q'-q)/\hbar} \\ &= \pm q \left[ \frac{\delta(q-q')}{2} + \frac{i}{2\pi(q-q')} \right]. \end{split} \tag{34}$$

#### 5.1. Solution to the quantized version of the classical motion of a free particle

The following calculation shows that the time states can also be the solution to the quantized classical expression for the motion of a free particle initially located at q = 0, i.e., the quantization of mq = pt. Let us rewrite the product $mq\langle q|t\_-\rangle$ as follows:

$$\begin{split} mq\langle q|t\_-\rangle &= mq \int\_0^{E\_m} dE \, \frac{e^{-itE/\hbar}}{\sqrt{2\pi\hbar}} \langle q|E\_-\rangle = mq \int\_{-p\_m}^{0} dp \, \frac{p}{m} \frac{e^{-itp^2/2m\hbar}}{\sqrt{2\pi\hbar}} \frac{e^{ipq/\hbar}}{\sqrt{2\pi\hbar}} \\ &= m \int\_{-p\_m}^{0} dp \, \frac{p}{m} \frac{e^{-itp^2/2m\hbar}}{\sqrt{2\pi\hbar}} \left( -i\hbar \frac{\partial}{\partial p} \right) \frac{e^{ipq/\hbar}}{\sqrt{2\pi\hbar}} \\ &= m \int\_{-p\_m}^{0} dp \, \frac{e^{ipq/\hbar}}{\sqrt{2\pi\hbar}} \left( i\hbar \frac{\partial}{\partial p} \right) \frac{p}{m} \frac{e^{-itp^2/2m\hbar}}{\sqrt{2\pi\hbar}} + i\hbar \frac{p\_m}{2\pi\hbar} e^{-itp\_m^2/2m\hbar} e^{ip\_mq/\hbar} \\ &= t \left( -i\hbar \frac{d}{dq} \right) \langle q|t\_-\rangle + i\hbar \int\_{-p\_m}^{0} dp \, \langle q|p\rangle \langle p|E\rangle + i \frac{p\_m}{2\pi} e^{-itp\_m^2/2m\hbar} e^{ip\_mq/\hbar} \\ &= t \langle q|\hat{P}|t\_-\rangle + i\hbar \left( \langle q|\hat{I}\_-|E\rangle + \langle q|\hat{P}|p\_m\rangle \langle p\_m|E\rangle \right), \end{split} \tag{35a}$$

$$mq\langle q|t\_{+}\rangle = t\langle q|\hat{P}|t\_{+}\rangle + i\hbar(\langle q|\hat{I}\_{+}|E\rangle - \langle q|\hat{P}|p\_{m}\rangle\langle p\_{m}|E\rangle). \tag{35b}$$

We can think of the last two terms in the above equations as quantum corrections to the classical trajectory of a free particle. These correction terms seem to vanish when ℏ ! 0.

On the other hand, the straightforward solution to the quantized version of the classical expression for the motion of a free particle gives a quite different function. The solution to the differential equation

$$
mq\, f(q;t) = t \left( -i\hbar \frac{d}{dq} \right) f(q;t) \tag{36}
$$

is


$$f(q;t) = \mathcal{N}\, e^{i m q^2 / 2\hbar t},\tag{37}$$

where N is a normalization constant. The squared modulus of this function is constant for all q and for all t. The squared modulus of the corresponding momentum function,

$$f(p;t) = \mathcal{N} \sqrt{i\, \frac{t}{m}} \; e^{-i t p^2 / 2m\hbar},\tag{38}$$

is not a localized function either; it actually is proportional to the transformation function between energy and time representations, in momentum representation. Thus, the route of forming conjugate states to the energy eigenstates seems to be a better path for obtaining appropriate time eigenstates.
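A quick SymPy check that Eq. (37) solves Eq. (36) and that its squared modulus is constant (all symbols taken positive for simplicity):

```python
import sympy as sp

q, t, m, hbar, N = sp.symbols('q t m hbar N', positive=True)

# Eq. (37): candidate solution of the quantized free motion, Eq. (36).
f = N * sp.exp(sp.I * m * q**2 / (2 * hbar * t))

# Residual of Eq. (36): m q f - t (-i hbar d/dq) f should vanish identically.
residual = sp.simplify(m * q * f - t * (-sp.I * hbar * sp.diff(f, q)))

# The squared modulus is independent of both q and t, as stated in the text.
mod2 = sp.simplify(f * sp.conjugate(f))
```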

## 6. Conclusions

We have introduced time-like states and time-like operators that are conjugate to the energy eigenstates and the Hamiltonian operator, respectively. We have also given an interpretation of the obtained states and operators, and we have found that expressions obtained via other approaches to finding time eigenstates can be related to ours. However, the oscillatory Sa factor that we use resolves many difficulties found in previous treatments. We have found the form of the time states for the free particle and a time operator that is valid for any $L^2$-type wave function.

The approximation to the time operators introduced in this chapter uses expressions that can be adapted to the case of discrete energy spectra; we will explore this possibility in a later paper. From the literature on time operators, it might be believed that the treatment of a continuous energy spectrum differs from that of discrete-spectrum systems, but the results of this study suggest that both types of systems can be addressed in a similar manner.

Finally, we have found that the spectral measure $M(d\tau)$ of $\hat{T}$ is a nonorthogonal resolution of the identity defined by

$$
\langle E' | \hat{M}(d\tau) | E \rangle = \frac{e^{i\tau(E' - E)/\hbar}}{2\pi\hbar} \, d\tau. \tag{39}
$$

This measure exhibits the covariance property, as was previously stated by Holevo [13].

## Author details

Gabino Torres-Vega

Address all correspondence to: gabino@fis.cinvestav.mx

Physics Department, Cinvestav, México

## References

[1] Galapon EA: Only above barrier energy components contribute to barrier traversal time. Phys Rev Lett. 2012;108:170402. DOI: 10.1103/PhysRevLett.108.170402

[2] Eckle P, Pfeiffer AN, Cirelli C, Staudte A, Dörner R, Muller HG, Büttiker M, Keller U: Attosecond ionization and tunnelling delay time measurements in Helium. Science. 2008;322:1524–1529. DOI: 10.1126/science.1163439

[3] Kijowski J: On the time operator in quantum mechanics and the Heisenberg uncertainty relation for energy and time. Rep Math Phys. 1974;6:361–386.

[4] Hegerfeldt GC, Muga JG, Muñoz J: Manufacturing time operators: Covariance, selection criteria, and examples. Phys Rev A. 2010;82:012113. DOI: 10.1103/PhysRevA.82.012113

[5] Weyl H: The Theory of Groups and Quantum Mechanics. 2nd ed. USA: Dover Publications, Inc.; 1950. 447 p.

[6] Galapon EA: Self-adjoint time operator is the rule for discrete semi-bounded Hamiltonians. Proc R Soc Lond A. 2002;458:2671–2690. DOI: 10.1098/rspa.2002.0992

[7] Arai A, Yasumichi M: Time operators of a Hamiltonian with purely discrete spectrum. Rev Math Phys. 2008;20:951–978.

[8] Arai A: Necessary and sufficient conditions for a Hamiltonian with discrete eigenvalues to have time operators. Lett Math Phys. 2009;87:67. DOI: 10.1007/s11005-008-0286-z

[9] Strauss Y, Silman J, Machnes S, Horwitz LP: An arrow of time operator for standard quantum mechanics. 2008. quant-ph 0802.2448.

[10] Strauss Y: Forward and backward time observables for quantum evolution and quantum stochastic processes – I: The time observables. 2007. math-ph 0706.0268v1.

[11] Hall MJW: Comment on "An arrow of time operator for standard quantum mechanics" (a sign of the time!). 2008. quant-ph 0802.2682.

[12] Torres-Vega G: Conjugate dynamical systems: Classical analogue of the quantum energy translation. J Phys A: Math Theor. 2012;45:215302. DOI: 10.1088/1751-8113/45/21/215302

[13] Holevo AS: Estimation of shift parameters of a quantum state. Rep Math Phys. 1978;13:379–399.

[14] Martínez-Pérez A, Torres-Vega G: Translations in quantum mechanics revisited. The point spectrum case. Can J Phys. 2016;94:1365.

## **Recent Fixed Point Techniques in Fractional Set-Valued Dynamical Systems**

Parin Chaipunya and Poom Kumam

http://dx.doi.org/10.5772/67069

Additional information is available at the end of the chapter

#### **Abstract**

In this chapter, we present a recollection of fixed point theorems and their applications in fractional set-valued dynamical systems. In particular, the fractional systems are used in describing many natural phenomena and also vastly used in engineering. We consider mainly two conditions in approaching the problem. The first condition is about the cyclicity of the involved operator and this one takes place in ordinary metric spaces. In the latter case, we develop a new fundamental theorem in modular metric spaces and apply to show solvability of fractional set-valued dynamical systems.

**Keywords:** fractional set-valued dynamical system, fixed point theory, contraction, modular metric space

## **1. Introduction**

Dynamical system is a wide area that deals with a system that changes over time. The two main characteristics of the time domain here are identified with the discrete and continuous manners. In discrete time domain, major considerations turn to the difference equations and generating functions. While in the latter one, which we shall be considering mainly for this chapter, the system is usually represented by differential equations. It might be more influential to talk about the inclusion problems if a set-valued system is to be analyzed.

The very first and fundamental dynamical system is known nowadays under the term Cauchy problem. It is represented with the following *C*<sup>1</sup> initial-valued problem:

© 2017 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


$$\begin{cases} u'(t) = f(t, u(t)),\\ u(0) = u_0. \end{cases}$$

In this case, we assume that $f : [0,T] \times \mathbb{R} \to \mathbb{R}$ is continuous and $u \in C^1([0,T])$. From simple calculus, we may see that this system is equivalent to the following integral equation:

$$
u(t) = u_0 + \int_{[0,t]} f(s, u(s))\,ds \tag{1}
$$

This is where Banach got the idea for solving the problem. He proposed his famous fixed point theorem, known today as the contraction principle, in 1922 [1], mainly to solve this Cauchy problem effectively. Recall that the contraction principle states that if $X$ is a complete metric space and $T : X \to X$ is Lipschitz continuous with constant $0 < L < 1$, then $T$ has a unique fixed point.

Let us consider the map $\Lambda : C^1([0,T]) \to C^1([0,T])$ given by

$$\Lambda(u)(t) := u_0 + \int_{[0,t]} f(s, u(s))\,ds, \quad \forall u \in C^1([0,T]),\ \forall t \in [0,T].$$

One can notice that $u \in C^1([0,T])$ solves Eq. (1) if and only if it is a fixed point of $\Lambda$. With this approach, by considering $C^1([0,T])$ with the supremum norm $\|\cdot\|_\infty$, we end up with the local solvability of the Cauchy problem. To obtain the global solution, we have to apply some techniques to extend the boundary of the local solution.

It is not very obvious that renorming with the $L$-weighted norm $\|f\|_L := \sup_{t \in [0,T]} e^{-Lt}|f(t)|$, with $L > 0$, resolves this difficulty. We shall give a short solvability result for the Cauchy problem via the contraction principle here, to illustrate how a fixed point theorem applies to continuous dynamical systems. Under the assumption that $f$ is Lipschitz in the second variable with constant $L > 0$, we have for any $x, y \in C^1([0,T])$ the following:

$$\begin{aligned} e^{-Lt}|\Lambda(x)(t) - \Lambda(y)(t)| &= e^{-Lt} \left| \int_{[0,t]} f(s, x(s)) - f(s, y(s))\,ds \right| \\ &\le e^{-Lt} \int_{[0,t]} |f(s, x(s)) - f(s, y(s))|\,ds \\ &\le e^{-Lt} \int_{[0,t]} L e^{Ls} e^{-Ls} |x(s) - y(s)|\,ds \\ &\le e^{-Lt} \|x - y\|_L \int_{[0,t]} L e^{Ls}\,ds \\ &= e^{-Lt} (e^{Lt} - 1) \|x - y\|_L \\ &\le (1 - e^{-LT}) \|x - y\|_L. \end{aligned}$$

Taking the supremum over $t \in [0,T]$ yields $\|\Lambda(x) - \Lambda(y)\|_L \le (1 - e^{-LT})\|x - y\|_L$, so $\Lambda$ is a contraction with respect to $\|\cdot\|_L$, and the solvability thus follows.

This is an alternative technique to guarantee the solvability of the Cauchy problem, without obtaining a local solution first. It is important to remark that many mathematicians later adapted different techniques, in different directions, to obtain the solvability of various classes of dynamical systems, all under one unifying idea: applying fixed point theorems.
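The argument above can be mirrored numerically. The following sketch (all identifiers such as `picard`, `steps` and `iters` are ours, not the chapter's) runs the Picard iteration $u \mapsto \Lambda(u)$ on a uniform grid, with the integral in $\Lambda$ approximated by the trapezoidal rule, for the test problem $u' = -u$, $u(0) = 1$, whose exact solution is $e^{-t}$:

```python
import math

# Sketch of the Picard iteration u <- Lambda(u) for the Cauchy problem
# u'(t) = f(t, u(t)), u(0) = u0, on a uniform grid over [0, T].
# The integral in Lambda is approximated by the trapezoidal rule.

def picard(f, u0, T, steps=200, iters=40):
    h = T / steps
    ts = [k * h for k in range(steps + 1)]
    u = [u0] * (steps + 1)                 # initial guess: the constant function u0
    for _ in range(iters):                 # repeatedly apply Lambda
        g = [f(t, x) for t, x in zip(ts, u)]
        new, acc = [u0], 0.0
        for k in range(1, steps + 1):
            acc += 0.5 * h * (g[k - 1] + g[k])   # cumulative trapezoidal integral
            new.append(u0 + acc)
        u = new
    return ts, u

# Test problem: u' = -u, u(0) = 1, exact solution e^{-t} on [0, 1].
ts, u = picard(lambda t, x: -x, 1.0, 1.0)
err = max(abs(x - math.exp(-t)) for t, x in zip(ts, u))
```

The successive iterates contract toward the grid approximation of the fixed point of $\Lambda$, i.e., of the solution of the Cauchy problem.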

It is natural to raise the situation of the set-valued integral, which has proved its importance in practical applications, especially in engineering. In 1965, Aumann [2] introduced the concept of a definite set-valued integral on the real line and on Euclidean spaces. Suppose that $\Psi$ is an interval $[0,T]$, where $T > 0$. Let $F : \Psi \to 2^{\mathbb{R}}$ be a set-valued operator. A selection of $F$ is a function $f : \Psi \to \mathbb{R} \cup \{\pm\infty\}$ such that $f(t) \in F(t)$ a.e. $t \in \Psi$. We write $\mathcal{F}$ to denote the set of all integrable selections of $F$. According to Aumann [2], the set-valued integral is determined by the operator $\mathcal{J}$ as follows:

$$\mathcal{J}_{\Psi} F(t)\,dt := \left\{ \int_{\Psi} f(t)\,dt \; ; \; f \in \mathcal{F} \right\},$$

that is, the set of the integrals of integrable selections of *F*.
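As a concrete illustration (the interval-valued map and all names below are assumptions chosen for this demo), consider $F(t) = [0, t]$ on $\Psi = [0, 1]$. For an interval-valued $F(t) = [g(t), h(t)]$ the Aumann integral equals the interval $[\int g, \int h]$, here $[0, 1/2]$, and the integral of every sampled selection lands inside it:

```python
import random

# Demo: Aumann integral of F(t) = [0, t] over [0, 1]. For F(t) = [g(t), h(t)]
# the Aumann integral equals [integral of g, integral of h] = [0, 1/2];
# the integral of any integrable selection f(t) in F(t) must land inside.

random.seed(0)
steps = 1000
h = 1.0 / steps
mids = [(k + 0.5) * h for k in range(steps)]   # midpoint grid on [0, 1]

lower = 0.0                                    # integral of g(t) = 0
upper = sum(t for t in mids) * h               # integral of h(t) = t, i.e. 1/2

def selection_integral():
    # piecewise-constant random selection: pick f(t) uniformly in [0, t]
    return sum(random.uniform(0.0, t) for t in mids) * h

samples = [selection_integral() for _ in range(200)]
```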

On the other hand, in elementary calculus one deals with derivatives and integrals, including their higher-integer-order iterations. In the fractional integral, one looks at a broader concept where real-order iteration is taken into account. There are many approaches to studying this kind of extension. In our context, we shall use the classical notion introduced by Riemann and Liouville, the latter being the first, in 1832, to point out the possibility of a fractional calculus. Given a function $f \in L^1(\Psi, \mu)$, the fractional integral of order $\alpha > 0$ is given by

$$I_{\Psi}^{\alpha} f(t) := \frac{1}{\Gamma(\alpha)} \int_{\Psi} (t - \tau)^{\alpha - 1} f(\tau)\,d\tau.$$
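A quick numerical sanity check of this definition (the helper name `rl_integral` and the discretization are ours): treating $f$ as piecewise constant and integrating the weakly singular kernel exactly on each cell reproduces the closed form $I^{\alpha} 1 = t^{\alpha}/\Gamma(\alpha+1)$:

```python
import math

# Sanity check of the Riemann-Liouville fractional integral. f is treated
# as piecewise constant (midpoint value) on each cell, and the weakly
# singular kernel (t - tau)^(alpha - 1) is integrated exactly there,
# which avoids the singularity at tau = t.

def rl_integral(f, t, alpha, steps=2000):
    h = t / steps
    total = 0.0
    for k in range(steps):
        a, b = k * h, (k + 1) * h
        w = ((t - a) ** alpha - (t - b) ** alpha) / alpha   # exact kernel mass on [a, b]
        total += w * f((a + b) / 2)
    return total / math.gamma(alpha)

# Closed forms: I^alpha 1 = t^alpha / Gamma(alpha + 1) and
# I^alpha tau = t^(alpha + 1) / Gamma(alpha + 2).
val_const = rl_integral(lambda tau: 1.0, 1.0, 0.5)
val_linear = rl_integral(lambda tau: tau, 1.0, 0.5)
```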

Naturally, we may further consider the following fractional integral:

$$\mathcal{I}^{\alpha}_{\Psi} F(t) := \left\{ I^{\alpha}_{\Psi} f(t) \; ; \; f \in \mathcal{F} \right\}.$$

These two concepts have brought up the study of new systems: set-valued dynamical systems and fractional dynamical systems. Even their combination, fractional set-valued dynamical systems, is an emerging area of research. We shall concentrate on this latter class of systems and give some brief investigations of the problem.

The very concept of a set-valued fractional integral operator was first proposed by El-Sayed and Ibrahim [3–5], and this has opened a new universe of investigation into fractional operator equations. It has been observed that such a theory can better describe nonlinear phenomena, compared to the classical theory of differential and integral equations. The extensive use of this theory lies naturally in automatic control theory, network theory, and dynamical systems (see, e.g. [6–10]).

The central system that we are going to investigate in this chapter is the following delayed system:

$$u(t) - \sum\_{i=1}^{n} \beta\_i(t) u(t - \tau\_i) \in \mathbb{I}^\alpha F(t, u(t)) \; ; \quad \alpha \in (0, 1], \; t \in J := [0, T], \; T > 0 \tag{2}$$

where $\tau_i \in [0, t]$ for all $i \in \{1, 2, \cdots, n\}$, $F : J \times \mathbb{R} \to CB(\mathbb{R})$, and $\mathbb{I}^{\alpha} F(t, u(t))$ is the definite integral of order $\alpha$ given by

$$\mathbb{I}^{\alpha} F(t, u(t)) := \left\{ \frac{1}{\Gamma(\alpha)} \int_0^t (t - \tau)^{\alpha - 1} f(\tau, u(\tau))\,d\tau \; ; \; f \in S_F(u) \right\}$$

and

$$S_F(u) := \{ f \in L^1(J, \mathbb{R}) \; ; \; f(t) \in F(t, u(t)) \text{ a.e. } t \in J \}$$

denotes the set of selections of $F$, and $\beta_i : J \to \mathbb{R}$ is continuous for each $i \in \{1, 2, \cdots, n\}$. Also, set $B := \max_{1 \le i \le n} \sup_{t \in J} |\beta_i(t)|$.

In this chapter, we bring up some recent results in fixed point theory from several approaches and then show how these theorems apply to different classes of dynamical systems. To be precise, in Section 2 we investigate the system (2) in standard metric spaces through a newly developed fixed point theorem. This fixed point theorem deals with an operator that satisfies the so-called implicit contractivity condition only on a portion of the space, where the partial partition is obtained from the cyclicity behavior that we impose. We also note the relation between this cyclicity behavior and the one arising from the partial ordering approach. The solvability of the dynamical system (2) in this section is then obtained via the cyclicity and implicit contractivity assumptions. For further reading related to this topic, consult [11–17]. In Section 3, we consider a newly emerged approach to fixed point theory, namely fixed point theory in modular metric spaces. This theory was introduced to researchers only a few years ago and has already been investigated considerably in such a short duration. We bring up one of the fundamental fixed point theorems in modular metric spaces, give appropriate examples, and then apply it to guarantee the solvability of, again, the system (2). Even though the studies of modular metric spaces are relatively limited at this time, we suggest that further reading of Refs. [18–20] should give some ideas about the theory itself and also about how to develop further dynamical systems in this framework.

#### **2. Cyclic operators in metric spaces**

In this section, we consider a very general class of operators satisfying an implicit contractivity condition. Moreover, we assume the operator to be cyclic over its domain. This cyclicity weakens the contractivity requirement to only a portion of the space. This is more general than contractivity on comparable pairs, as we show later in this chapter. This also allows a coexistence result that goes beyond the exact solution and the sub-/supersolution.

Note that results in this section are based on our paper [21]. Recall the following notion of cyclic operators.

DEFINITION 2.1. Let $X$ be a nonempty set and $A_1, A_2, \cdots, A_p$ be nonempty subsets of $X$. An operator $F : \cup_{k=1}^p A_k \to 2^{\cup_{k=1}^p A_k}$ is called a *set-valued cyclic operator* over $\cup_{k=1}^p A_k$ if $F(A_i) \subseteq A_{i+1}$ for all $i \in \{1, 2, \cdots, p-1\}$ and $F(A_p) \subseteq A_1$.

There is a special property about the location of fixed point of this operator, as illustrated in the following.

PROPOSITION 2.2. *Let $X$ be a nonempty set and $A_1, A_2, \cdots, A_p$ be nonempty subsets of $X$. If $F$ is a set-valued cyclic operator over $\cup_{k=1}^p A_k$, then we have the inclusion $\mathrm{Fix}(F) \subseteq \cap_{k=1}^p A_k$, where $\mathrm{Fix}(F)$ denotes the fixed point set of $F$.*

PROOF. If either $\mathrm{Fix}(F) = \emptyset$ or $\cap_{k=1}^p A_k = \emptyset$, the conclusion is clear. Thus, let $z \in \cup_{k=1}^p A_k$ be a fixed point of $F$. Then, $z \in A_q$ for some $q \in \{1, 2, \cdots, p\}$ and $z \in Fz \subseteq A_{q+1}$. Consequently, we also have $z \in Fz \subseteq A_{q+2}$. It is easy to see that $z \in A_{q+n}$ for all $n \in \mathbb{N}$, with indices taken modulo $p$. Therefore, it is enough to conclude that $z \in \cap_{k=1}^p A_k$.
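Proposition 2.2 can be seen in action on a toy single-valued example (the sets and the map below are assumptions chosen purely for the demo):

```python
# Toy illustration of the fixed-point location for a cyclic operator.
# F(x) = (x + 3) / 3 is a single-valued 1/3-contraction that is cyclic over
# A1 u A2 with A1 = [0, 2] and A2 = [1, 3]: it maps A1 into A2 and A2 into
# A1, so its fixed point must lie in the intersection A1 n A2 = [1, 2].

A1, A2 = (0.0, 2.0), (1.0, 3.0)

def F(x):
    return (x + 3.0) / 3.0

def image(interval):
    lo, hi = interval
    return (F(lo), F(hi))        # F is increasing, so endpoint images suffice

def inside(inner, outer):
    return outer[0] <= inner[0] and inner[1] <= outer[1]

cyclic = inside(image(A1), A2) and inside(image(A2), A1)

x = 0.0                          # arbitrary starting point in A1
for _ in range(60):              # Picard iteration x <- F(x)
    x = F(x)
# fixed point: x = (x + 3) / 3 gives x = 1.5, inside A1 n A2 = [1, 2]
```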

The following classes of functions are necessary to our further contents.

DEFINITION 2.3. Let $\Phi$ be the class of functions $\phi : \mathbb{R}_+ \to \mathbb{R}_+$ satisfying the following conditions:

(Φ1) *ϕ* is right continuous.

(Φ2) *ϕ* (0) = 0.


(Φ3) *ϕ*(*t*) < *t* for all *t >* 0.

DEFINITION 2.4. Let $\Psi$ be the class of functions $\psi : \mathbb{R}_+^6 \to \mathbb{R}$ satisfying the following conditions:


REMARK 2.5. If $\phi \in \Phi$, then $\phi^n(t) \to 0$ as $n \to \infty$.
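This remark can be checked on a concrete member of $\Phi$ (the particular $\phi$ is our choice, not from the chapter): for $\phi(t) = t/(1+t)$, the iterates have the closed form $\phi^n(t_0) = t_0/(1 + n t_0) \to 0$:

```python
# Concrete member of the class Phi: phi(t) = t / (1 + t) satisfies
# phi(0) = 0, phi is continuous, and phi(t) < t for t > 0. Its iterates
# have the closed form phi^n(t0) = t0 / (1 + n * t0), which tends to 0
# as n grows, as Remark 2.5 asserts.

def phi(t):
    return t / (1.0 + t)

t0, n = 5.0, 1000
t = t0
for _ in range(n):
    t = phi(t)
closed_form = t0 / (1.0 + n * t0)
```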

EXAMPLE 2.6 ([22]). The following functions are contained in the class *Ψ*:


**c.** $\psi_3(t_1, t_2, \cdots, t_6) := t_1^2 - t_1(\alpha t_2 + \beta t_3 + \gamma t_4) - \delta t_5 t_6$, where $\alpha > 0$ and $\beta, \gamma, \delta \ge 0$ with $\alpha + \beta + \gamma < 1$ and $\alpha + \delta < 1$.

#### **2.1. Fixed point theorem for cyclic operators**

Now, we give the main fixed point theorem for cyclic implicit contractive operators.

THEOREM 2.7. *Let $(X, d)$ be a complete metric space and let $A_1, A_2, \ldots, A_p$ be nonempty closed subsets of $X$. Suppose that $F$ is a proximal set-valued cyclic operator over $\cup_{k=1}^p A_k$ for which there exists some $\psi \in \Psi$ satisfying*

$$\psi(H(F\mathbf{x}, F\mathbf{y}), d(\mathbf{x}, \mathbf{y}), d(\mathbf{x}, F\mathbf{x}), d(\mathbf{y}, F\mathbf{y}), d(\mathbf{x}, F\mathbf{y}), d(\mathbf{y}, F\mathbf{x})) \le 0$$

*whenever either* $(x, y) \in A_i \times A_{i+1}$ *or* $(x, y) \in A_{i+1} \times A_i$ *holds for some* $i \in \{1, 2, \cdots, p\}$. *Then, we have the following:*

**(I)** *F has at least one fixed point*;

**(II)** *F has no fixed point outside* $\cap_{k=1}^p A_k$.


PROOF. For (I), let $x_0$ be chosen arbitrarily from some $A_j$. Choose any $x_1 \in Fx_0$. Then, we define implicitly a sequence $(x_n)$ by choosing $x_{n+1} \in Fx_n$ satisfying

$$d(\mathfrak{x}\_n, \mathfrak{x}\_{n+1}) = d(\mathfrak{x}\_n, F\mathfrak{x}\_n).$$

Note that this definition is valid since *F* is a proximal operator. Also note that by this definition, we may derive that

$$d(\mathbf{x}\_n, \mathbf{x}\_{n+1}) \le H(F\mathbf{x}\_{n-1}, F\mathbf{x}\_n) \tag{3}$$

Now, since $(x_{n+1}, x_n) \in A_{j+n+1} \times A_{j+n}$, we have

$$\begin{aligned} 0 &\geq \quad \psi \Big( \begin{aligned} &H(F\mathbf{x}\_{n+1},F\mathbf{x}\_{n}),d(\mathbf{x}\_{n+1},\mathbf{x}\_{n}),d(\mathbf{x}\_{n+1},F\mathbf{x}\_{n+1}), \\ &d(\mathbf{x}\_{n},F\mathbf{x}\_{n}),d(\mathbf{x}\_{n+1},F\mathbf{x}\_{n}),d(\mathbf{x}\_{n},F\mathbf{x}\_{n+1}) \end{aligned} \Big) \\ &\geq \quad \psi \Big( \begin{aligned} &H(F\mathbf{x}\_{n},F\mathbf{x}\_{n+1}),d(\mathbf{x}\_{n},\mathbf{x}\_{n+1}),H(F\mathbf{x}\_{n},F\mathbf{x}\_{n+1}), \\ &d(\mathbf{x}\_{n},\mathbf{x}\_{n+1}),0,d(\mathbf{x}\_{n},\mathbf{x}\_{n+1}) + H(F\mathbf{x}\_{n},F\mathbf{x}\_{n+1}) \end{aligned} \Big) \end{aligned}$$

Suppose that $\phi \in \Phi$ is chosen according to ($\Psi$3). Thus, we have

$$H(Fx_n, Fx_{n+1}) \le \phi\left(d(x_n, x_{n+1})\right).$$

At this point, we assume that *xn*≠*xn*þ<sup>1</sup> for all *n*∈ N, otherwise a fixed point is already obtained. Together with Eq. (3), we may deduce that

$$d(x_n, x_{n+1}) \le H(Fx_{n-1}, Fx_n) \le \phi\left(d(x_{n-1}, x_n)\right) \le \cdots \le \phi^{n-1}\left(d(x_0, x_1)\right).$$

Therefore, we have immediately that $d(x_n, x_{n+1}) \to 0$.

Next, we show that $(x_n)$ is Cauchy. Suppose to the contrary that it is not. Then, we may find $\varepsilon_0 > 0$ and two strictly increasing sequences of integers $(m_k)$ and $(n_k)$ for which

$$d(x_{m_k}, x_{n_k}) \ge \varepsilon_0.$$

We can assume, without loss of generality, that $n_k > m_k > k$ and that $n_k$ is minimal in the sense that $d(x_{m_k}, x_r) < \varepsilon_0$ for all $m_k \le r < n_k$.

Consequently, $d(x_{m_k}, x_{n_k-1}) < \varepsilon_0$. Moreover, we obtain that $\varepsilon_0 \le d(x_{m_k}, x_{n_k}) \le d(x_{m_k}, x_{n_k-1}) + d(x_{n_k-1}, x_{n_k}) < \varepsilon_0 + d(x_{n_k-1}, x_{n_k})$. Letting $k \to \infty$, we have $d(x_{m_k}, x_{n_k}) \to \varepsilon_0$.

On the other hand, for each $k \in \mathbb{N}$, we may find $j_k \in \{1, 2, \cdots, p\}$ for which $n_k - m_k + j_k \equiv 1 \pmod{p}$. For $k$ sufficiently large, we may see that $m_k - j_k > 0$. Observe that

$$\begin{aligned} |d(x_{m_k-j_k}, x_{n_k}) - d(x_{m_k}, x_{n_k})| &\le d(x_{m_k-j_k}, x_{m_k}) \\ &\le \sum_{l=0}^{j_k-1} d(x_{m_k-j_k+l}, x_{m_k-j_k+l+1}) \\ &\le \sum_{l=0}^{p-1} d(x_{m_k-j_k+l}, x_{m_k-j_k+l+1}). \end{aligned}$$

Letting $k \to \infty$, we have $d(x_{m_k-j_k}, x_{n_k}) \to \varepsilon_0$. Also consider that


$$|d(x_{m_k-j_k}, x_{n_k}) - d(x_{m_k-j_k}, x_{n_k+1})| \le d(x_{n_k}, x_{n_k+1}).$$

As $k \to \infty$, we have $d(x_{m_k-j_k}, x_{n_k+1}) \to \varepsilon_0$. Similarly, we have

$$|d(x_{m_k-j_k}, x_{n_k}) - d(x_{n_k}, x_{m_k-j_k+1})| \le d(x_{m_k-j_k}, x_{m_k-j_k+1}).$$

So, we get $d(x_{n_k}, x_{m_k-j_k+1}) \to \varepsilon_0$ as $k \to \infty$. Also observe that

$$|d(x_{n_k}, x_{m_k-j_k+1}) - d(x_{n_k+1}, x_{m_k-j_k+1})| \le d(x_{n_k}, x_{n_k+1}).$$

Again, letting $k \to \infty$, we obtain that $d(x_{n_k+1}, x_{m_k-j_k+1}) \to \varepsilon_0$. Finally, by the fact that $(x_{m_k-j_k}, x_{n_k}) \in A_i \times A_{i+1}$ for some $i \in \{1, 2, \cdots, p\}$ and Eq. (3), we may obtain that

$$\begin{aligned} 0 &\ge \psi\left(\begin{array}{l} H(Fx_{m_k-j_k}, Fx_{n_k}), d(x_{m_k-j_k}, x_{n_k}), d(x_{m_k-j_k}, Fx_{m_k-j_k}), \\ d(x_{n_k}, Fx_{n_k}), d(x_{m_k-j_k}, Fx_{n_k}), d(x_{n_k}, Fx_{m_k-j_k}) \end{array}\right) \\ &\ge \psi\left(\begin{array}{l} d(x_{m_k-j_k+1}, x_{n_k+1}), d(x_{m_k-j_k}, x_{n_k}), d(x_{m_k-j_k}, x_{m_k-j_k+1}), \\ d(x_{n_k}, x_{n_k+1}), d(x_{m_k-j_k}, x_{n_k+1}), d(x_{n_k}, x_{m_k-j_k}) + d(x_{m_k-j_k}, Fx_{m_k-j_k}) \end{array}\right) \\ &= \psi\left(\begin{array}{l} d(x_{m_k-j_k+1}, x_{n_k+1}), d(x_{m_k-j_k}, x_{n_k}), d(x_{m_k-j_k}, x_{m_k-j_k+1}), \\ d(x_{n_k}, x_{n_k+1}), d(x_{m_k-j_k}, x_{n_k+1}), d(x_{n_k}, x_{m_k-j_k}) + d(x_{m_k-j_k}, x_{m_k-j_k+1}) \end{array}\right) \end{aligned}$$

By the condition ð*Ψ*4Þ and letting *k* ! *∞*, we may deduce that

$$0 \ge \psi(\varepsilon\_0, \varepsilon\_0, 0, 0, \varepsilon\_0, \varepsilon\_0) > 0$$

which is absurd. Hence, the sequence $(x_n)$ is Cauchy. Since $\cup_{k=1}^p A_k$ is closed, it is complete, and therefore $(x_n)$ converges to some point $x_* \in \cup_{k=1}^p A_k$.

Next, we shall prove that $x_*$ is, in fact, a fixed point of $F$. Let us assume now that $d(x_*, Fx_*) > 0$. Note that for any $n \in \mathbb{N}$, $(x_*, x_n) \in A_i \times A_{i+1}$ for some $i \in \{1, 2, \cdots, p\}$. It then follows that

$$\begin{split} 0 &\geq \; \psi(H(\mathcal{F}\mathbf{x}\_{\*},\mathcal{F}\mathbf{x}\_{n}),d(\mathbf{x}\_{\*},\mathbf{x}\_{n}),d(\mathbf{x}\_{\*},\mathcal{F}\mathbf{x}\_{\*}),d(\mathbf{x}\_{\*},\mathcal{F}\mathbf{x}\_{n}),d(\mathbf{x}\_{\*},\mathcal{F}\mathbf{x}\_{n}),d(\mathbf{x}\_{n},\mathcal{F}\mathbf{x}\_{\*})) \\ &\geq \; \psi\left(\begin{array}{c}d(\mathbf{x}\_{n+1},\mathcal{F}\mathbf{x}\_{\*}),d(\mathbf{x}\_{\*},\mathbf{x}\_{n}),d(\mathbf{x}\_{\*},\mathcal{F}\mathbf{x}\_{\*}),d(\mathbf{x}\_{n},\mathbf{x}\_{n+1}),\\d(\mathbf{x}\_{\*},\mathbf{x}\_{n})+d(\mathbf{x}\_{n},\mathcal{F}\mathbf{x}\_{n}),d(\mathbf{x}\_{n},\mathcal{F}\mathbf{x}\_{\*})\\ \end{array}\right) \\ &=\; \;\psi\left(\begin{array}{c}d(\mathbf{x}\_{n+1},\mathcal{F}\mathbf{x}\_{\*}),d(\mathbf{x}\_{\*},\mathbf{x}\_{n}),d(\mathbf{x}\_{\*},\mathcal{F}\mathbf{x}\_{\*}),d(\mathbf{x}\_{n},\mathbf{x}\_{n+1}),\\d(\mathbf{x}\_{\*},\mathbf{x}\_{n})+d(\mathbf{x}\_{n},\mathbf{x}\_{n+1}),d(\mathbf{x}\_{n},\mathcal{F}\mathbf{x}\_{\*})\end{array}\right)\end{split}$$

Passing to the limit as *n* → *∞*, we obtain that

$$0 \ge \psi\big(d(x_*, Fx_*),\, 0,\, d(x_*, Fx_*),\, 0,\, 0,\, d(x_*, Fx_*)\big) > 0,$$

which is absurd. Therefore, *d*(*x*∗, *Fx*∗) = 0. Since *Fx*<sup>∗</sup> is closed, we conclude that *x*<sup>∗</sup> ∈ *Fx*∗.

To obtain (II), apply Proposition 2.2.

#### **2.2. Ordered spaces as corollaries**

Let *X* be a nonempty set. Recall that a binary relation ⊑ is said to be a (partial) ordering on *X* if it is reflexive, antisymmetric and transitive. By an ordered set, we shall mean the pair (*X*, ⊑), where *X* is nonempty and ⊑ is an ordering on *X*. A (partially) ordered metric space is the triple (*X*, ⊑, *d*), where (*X*, ⊑) is an ordered set and (*X*, *d*) is a metric space.

In this part, we show that an operator which is contractive on comparable pairs is, in particular, a cyclic operator over a single set. The following general assumption on the ordered structure is central in the forthcoming theorems.

DEFINITION 2.8. An ordered metric space (*X*, ⊑, *d*) is said to satisfy the condition (*Θ*) if for every convergent sequence (*xn*) in *X* and every point *z*<sup>0</sup> ∈ *X* such that *z*<sup>0</sup> ⊑ *xn* for all *n* ∈ N, there holds *z*0 ⊑ *x*∗, where *x*<sup>∗</sup> ∈ *X* is the limit of (*xn*).

THEOREM 2.9. *Let* (*X*, ⊑, *d*) *be a complete ordered metric space satisfying the condition* (*Θ*) *and let F* : *X* → *CB*(*X*) *be a nondecreasing proximal operator in the sense that if x*, *y* ∈ *X satisfy x* ⊑ *y, then u* ⊑ *v for all u* ∈ *Fx and v* ∈ *Fy. Suppose that there exists ψ* ∈ *Ψ such that*

$$
\psi(H(F\mathbf{x}, F\mathbf{y}), d(\mathbf{x}, \mathbf{y}), d(\mathbf{x}, F\mathbf{x}), d(\mathbf{y}, F\mathbf{y}), d(\mathbf{x}, F\mathbf{y}), d(\mathbf{y}, F\mathbf{x})) \le 0 \tag{4}
$$

*for all x*, *y* ∈ *X for which we can find some z* ∈ *X satisfying both z* ⊑ *x and z* ⊑ *y*. *If there exists x*<sup>0</sup> ∈ *X such that x*<sup>0</sup> ⊑ *w for all w* ∈ *Fx*0, *then F has at least one fixed point*.

PROOF. By the existence of such a point *x*0, we shall now construct a set

$$C(x_0) := \{ z \in X \; ; \; x_0 \sqsubseteq z \}.$$

Take any sequence (*xn*) in *C*(*x*0). By the condition (*Θ*) with *z*<sup>0</sup> := *x*0, we may see that if (*xn*) converges, its limit is also included in *C*(*x*0). Hence, *C*(*x*0) is closed and therefore complete.

On the other hand, we define an operator *G* : *C*ð*x*0Þ ! *CB*ð*X*Þ by

$$G := F|_{C(x_0)}.$$

For any *z* ∈ *C*(*x*0), observe that *x*0 ⊑ *w* for all *w* ∈ *Gz*. Thus, *G*(*C*(*x*0)) ⊆ *C*(*x*0), so that *G* is cyclic over *C*(*x*0). Moreover, for any *x*, *y* ∈ *C*(*x*0), we have by definition that *x*0 ⊑ *x* and *x*0 ⊑ *y*, so that the inequality (4) holds whenever (*x*, *y*) ∈ *C*(*x*0) × *C*(*x*0). Therefore, we can now apply Theorem 2.7 to obtain that *G* has at least one fixed point. Passing this property to *F*, we have now proved the theorem.

COROLLARY 2.10. *Let* (*X*, ⊑, *d*) *be a complete ordered metric space and let F* : *X* → *CB*(*X*) *be a nondecreasing proximal operator in the sense that if x*, *y* ∈ *X satisfy x* ⊑ *y, then u* ⊑ *v for all u* ∈ *Fx and v* ∈ *Fy. Suppose that there exists ψ* ∈ *Ψ such that*

$$\psi(H(F\mathbf{x}, F\mathbf{y}), d(\mathbf{x}, \mathbf{y}), d(\mathbf{x}, F\mathbf{x}), d(\mathbf{y}, F\mathbf{y}), d(\mathbf{x}, F\mathbf{y}), d(\mathbf{y}, F\mathbf{x})) \le 0$$

*whenever x*, *y* ∈ *X satisfy x* ⊑ *y*. *Also assume that if the sequence* (*xn*) *in X is nondecreasing and converges to x*<sup>∗</sup> ∈ *X*, *then xn* ⊑ *x*<sup>∗</sup> *for all n* ∈ N. *If there exists x*<sup>0</sup> ∈ *X such that x*0 ⊑ *w for all w* ∈ *Fx*0, *then F has at least one fixed point*.

PROOF. Note that if *x*, *y* ∈ *X* are comparable, then, in the setting of Theorem 2.9, we may choose *z* := *x* ∈ *X* so that *z* ⊑ *x* and *z* ⊑ *y*.

On the other hand, let (*yn*) be a sequence in *X* which is both nondecreasing and convergent to *y*<sup>∗</sup> ∈ *X*. Taking *z*<sup>0</sup> := *y*1 in the condition (*Θ*), we may see easily that, in this case, *X* satisfies the condition (*Θ*). We next apply Theorem 2.9 to finish the proof.
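A small numerical sketch may illustrate Corollary 2.10. On *X* = [0, 10] with the usual order and metric, take the single-valued operator *F*(*x*) = {*x*/2 + 1}, which is nondecreasing, and assume (as an illustration only) that *ψ*(*t*1, ⋯, *t*6) := *t*1 − (1/2)*t*2 belongs to the class *Ψ*, in the spirit of Example 2.6. The contractive inequality can then be checked on a grid and the Picard iterates starting from *x*0 = 0 (which satisfies *x*0 ⊑ *w* for the single *w* ∈ *Fx*0) converge to the fixed point *x*∗ = 2:

```python
# Numerical sketch of Corollary 2.10 on X = [0, 10] with the usual order
# and metric d(x, y) = |x - y|.  F(x) = {x/2 + 1} is single-valued and
# nondecreasing; psi(t1,...,t6) = t1 - (1/2) t2 is ASSUMED to lie in Psi.

def F(x):
    return x / 2.0 + 1.0          # Fx is the singleton {x/2 + 1}

def psi(t1, t2, t3, t4, t5, t6):
    return t1 - 0.5 * t2

# Check psi(H(Fx,Fy), d(x,y), d(x,Fx), d(y,Fy), d(x,Fy), d(y,Fx)) <= 0
# on a grid of comparable pairs x <= y.
grid = [i / 10.0 for i in range(101)]
worst = max(
    psi(abs(F(x) - F(y)), abs(x - y), abs(x - F(x)),
        abs(y - F(y)), abs(x - F(y)), abs(y - F(x)))
    for x in grid for y in grid if x <= y
)

# x0 = 0 satisfies x0 <= w for the single w in Fx0, so the corollary
# applies; the iterates converge to the fixed point x* = 2.
x = 0.0
for _ in range(60):
    x = F(x)
print(worst <= 1e-9, abs(x - 2.0) < 1e-9)  # expect: True True
```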

#### **2.3. An example**


We now give an example validating our fixed point theorem.

EXAMPLE 2.11. Consider the Euclidean space *E*<sup>2</sup> with its standard metric *d*. Define

$$\ell\_0 := [0, \frac{1}{2}] \times \{0\}, \quad \ell\_1 := [0, \frac{1}{2}] \times \{\frac{1}{\sqrt{2}}\}, \quad \text{and} \quad \ell\_2 := [0, \frac{1}{2}] \times \{-\frac{1}{\sqrt{2}}\}.$$

Suppose that *A*<sup>1</sup> and *A*<sup>2</sup> are two closed sets defined by

$$A\_1 := \ell\_0 \cup \ell\_1 \quad \text{and} \quad A\_2 := \ell\_0 \cup \ell\_2.$$

Let *F* : *A*1 ∪ *A*2 → 2<sup>*A*1 ∪ *A*2</sup> be an operator defined by

$$F\mathbf{x} := \begin{cases} \{\mathbf{x}\}, & \text{if } \mathbf{x} \in \ell_0, \\ P_{\ell_1}^{-1}(\mathbf{x}) \cap A_2, & \text{if } \mathbf{x} \in \ell_1, \\ P_{\ell_2}^{-1}(\mathbf{x}) \cap A_1, & \text{if } \mathbf{x} \in \ell_2. \end{cases} \tag{5}$$

Note that the notation *P* appearing in Eq. (5) denotes the metric projection onto the corresponding sets *ℓ*1 and *ℓ*2, respectively. The cyclicity of *F* is apparent.
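The operator of Eq. (5) can be sketched on representative points. Here we assume (consistently with the metric projection onto a horizontal segment) that the preimage $P_{\ell_i}^{-1}(\mathbf{x})$ consists of the points sharing the first coordinate of $\mathbf{x}$:

```python
# Sketch of the operator F from Eq. (5).  Assumption: the preimage
# P_{l_i}^{-1}(x), intersected with the opposite set, consists of the
# points of A2 (resp. A1) with the same first coordinate as x.

from math import sqrt

C = 1 / sqrt(2)

def F(p):
    x1, x2 = p
    if x2 == 0:                 # p in l0: fixed
        return {p}
    if x2 == C:                 # p in l1: image inside A2 = l0 u l2
        return {(x1, 0.0), (x1, -C)}
    if x2 == -C:                # p in l2: image inside A1 = l0 u l1
        return {(x1, 0.0), (x1, C)}
    raise ValueError("point not in A1 u A2")

# Cyclicity: F maps A1 into A2 and A2 into A1, and the points of
# l0 = A1 n A2 are exactly the fixed points.
p0, p1 = (0.25, 0.0), (0.25, C)
print(F(p0) == {p0}, all(q[1] in (0.0, -C) for q in F(p1)))  # expect: True True
```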

Claim. The operator *F* satisfies the inequality in Theorem 2.7 with *ψ* defined as in (c) of Example 2.6 when *α* = 9/20, *β* = *γ* = 1/4 and *δ* = 1/2.

The case *x*, *y* ∈ *ℓ*0 is trivial and so we omit it. For the cases *x* ∈ *ℓ*0, *y* ∈ *ℓ*1 and *x* ∈ *ℓ*1, *y* ∈ *ℓ*2, we consider the following calculation.

From **Table 1**(A), we have

$$\begin{aligned}
[H(F\mathbf{x}, F\mathbf{y})]^2 &= (x_1 - y_1)^2 + \tfrac{1}{2} \\
&\le \left(\tfrac{9}{20} + \tfrac{1}{4\sqrt{2}} + \tfrac{1}{2}\right)\left((x_1 - y_1)^2 + \tfrac{1}{2}\right) \\
&\le \tfrac{9}{20}\left((x_1 - y_1)^2 + \tfrac{1}{2}\right) + \tfrac{1}{4\sqrt{2}}\sqrt{(x_1 - y_1)^2 + \tfrac{1}{2}} + \tfrac{1}{2}\left((x_1 - y_1)^2 + \tfrac{1}{2}\right) \\
&= \sqrt{(x_1 - y_1)^2 + \tfrac{1}{2}}\left(\tfrac{9}{20}\sqrt{(x_1 - y_1)^2 + \tfrac{1}{2}} + \tfrac{1}{4\sqrt{2}}\right) + \tfrac{1}{2}\left((x_1 - y_1)^2 + \tfrac{1}{2}\right) \\
&= H(F\mathbf{x}, F\mathbf{y})\left[\alpha d(\mathbf{x}, \mathbf{y}) + \beta d(\mathbf{x}, F\mathbf{x}) + \gamma d(\mathbf{y}, F\mathbf{y})\right] + \delta d(\mathbf{x}, F\mathbf{y})\, d(\mathbf{y}, F\mathbf{x})
\end{aligned}$$

for all *x*∈ *ℓ*<sup>0</sup> and *y*∈*ℓ*1. We can similarly obtain from **Table 1**(B) the following:


| | (A) $x \in \ell_0$, $y \in \ell_1$ | (B) $x \in \ell_1$, $y \in \ell_2$ |
|---|---|---|
| $H(Fx, Fy)$ | $\sqrt{(x_1-y_1)^2 + 1/2}$ | $\sqrt{(x_1-y_1)^2 + 1/2}$ |
| $d(x, y)$ | $\sqrt{(x_1-y_1)^2 + 1/2}$ | $\sqrt{(x_1-y_1)^2 + 2}$ |
| $d(x, Fx)$ | $0$ | $1/\sqrt{2}$ |
| $d(y, Fy)$ | $1/\sqrt{2}$ | $1/\sqrt{2}$ |
| $d(x, Fy)$ | $\sqrt{(x_1-y_1)^2 + 1/2}$ | $\lvert x_1 - y_1 \rvert$ |
| $d(y, Fx)$ | $\sqrt{(x_1-y_1)^2 + 1/2}$ | $\lvert x_1 - y_1 \rvert$ |

**Table 1.** Distances.

$$\begin{aligned}
[H(F\mathbf{x}, F\mathbf{y})]^2 &= (x_1 - y_1)^2 + \tfrac{1}{2} \\
&\le \left(\tfrac{9}{20}\sqrt{\tfrac{5}{2}} + \sqrt{2}\right)\left((x_1 - y_1)^2 + \tfrac{1}{2}\right) \\
&\le \tfrac{9}{20}\sqrt{\tfrac{5}{2}}\left((x_1 - y_1)^2 + \tfrac{1}{2}\right) + \sqrt{2}\sqrt{(x_1 - y_1)^2 + \tfrac{1}{2}} \\
&= \tfrac{9}{20}\sqrt{\left((x_1 - y_1)^2 + \tfrac{1}{2}\right)^2 + \tfrac{3}{2}\left((x_1 - y_1)^2 + \tfrac{1}{2}\right)^2} + \sqrt{2}\sqrt{(x_1 - y_1)^2 + \tfrac{1}{2}} \\
&\le \tfrac{9}{20}\sqrt{\left((x_1 - y_1)^2 + \tfrac{1}{2}\right)^2 + \tfrac{3}{2}\left((x_1 - y_1)^2 + \tfrac{1}{2}\right)} + \sqrt{2}\sqrt{(x_1 - y_1)^2 + \tfrac{1}{2}} \\
&= \tfrac{9}{20}\sqrt{\left((x_1 - y_1)^2 + \tfrac{1}{2}\right)\left((x_1 - y_1)^2 + 2\right)} + \sqrt{2}\sqrt{(x_1 - y_1)^2 + \tfrac{1}{2}} \\
&= \sqrt{(x_1 - y_1)^2 + \tfrac{1}{2}}\left(\tfrac{9}{20}\sqrt{(x_1 - y_1)^2 + 2} + \tfrac{1}{\sqrt{2}} + \tfrac{1}{\sqrt{2}}\right) \\
&= H(F\mathbf{x}, F\mathbf{y})\left[\alpha d(\mathbf{x}, \mathbf{y}) + \beta d(\mathbf{x}, F\mathbf{x}) + \gamma d(\mathbf{y}, F\mathbf{y})\right] \\
&\le H(F\mathbf{x}, F\mathbf{y})\left[\alpha d(\mathbf{x}, \mathbf{y}) + \beta d(\mathbf{x}, F\mathbf{x}) + \gamma d(\mathbf{y}, F\mathbf{y})\right] + \delta d(\mathbf{x}, F\mathbf{y})\, d(\mathbf{y}, F\mathbf{x})
\end{aligned}$$

for all *x* ∈ *ℓ*1 and *y* ∈ *ℓ*2. Therefore, we have now proved our claim. Observe now that *Fix*(*F*) = *ℓ*0 = *A*1 ∩ *A*2, in accordance with Theorem 2.7.
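The claim for case (A) can also be spot-checked numerically. Using the distance values appearing in the displayed computation, namely *H*(*Fx*, *Fy*) = *d*(*x*, *y*) = *d*(*x*, *Fy*) = *d*(*y*, *Fx*) = √((*x*1−*y*1)² + 1/2), *d*(*x*, *Fx*) = 0 and *d*(*y*, *Fy*) = 1/√2, the final inequality of the chain can be verified on a grid of first coordinates:

```python
# Numerical check of Example 2.11, case (A): x in l0, y in l1, using the
# distance values from the displayed computation.  We verify
#   H^2 <= H*(alpha*d(x,y) + beta*d(x,Fx) + gamma*d(y,Fy))
#          + delta*d(x,Fy)*d(y,Fx)
# for alpha = 9/20, beta = gamma = 1/4, delta = 1/2, x1, y1 in [0, 1/2].

from math import sqrt

alpha, beta, gamma, delta = 9 / 20, 1 / 4, 1 / 4, 1 / 2

ok = True
for i in range(51):
    for j in range(51):
        x1, y1 = i / 100.0, j / 100.0          # first coordinates in [0, 1/2]
        H = dxy = dxFy = dyFx = sqrt((x1 - y1) ** 2 + 0.5)
        dxFx, dyFy = 0.0, 1 / sqrt(2)
        lhs = H ** 2
        rhs = H * (alpha * dxy + beta * dxFx + gamma * dyFy) + delta * dxFy * dyFx
        ok = ok and lhs <= rhs + 1e-12
print(ok)  # expect: True
```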

#### **2.4. Fractional set-valued dynamical systems**


For convenience, we shall always consider the nonempty closed and bounded subspace

$$\Omega := C(J, \mathbb{R}) := \{ u : J \to \mathbb{R} \; ; \; u \text{ is continuous} \},$$

endowed with the supremum norm ∥·∥ given by

$$\|u\| := \sup_{t \in J} |u(t)|.$$

Under this setting, the solutions of the problem (2) are assumed to lie in *Ω*. Moreover, we shall need some further notions in order to obtain the existence of solutions of the problem (2).

DEFINITION 2.12. Let (*X*, *d*) be a metric space and let *J* be an interval of R. An operator *F* : *J* → 2<sup>*X*</sup> is said to be measurable if for each *x* ∈ *X*, the mapping *t* ↦ *d*(*x*, *F*(*t*)) is measurable.

Next, we shall define the set-valued operator *Λ* : *Ω* → 2<sup>*Ω*</sup> given by

$$(\Lambda u)(t) := \left\{ w \in \Omega \; ; \; w(t) = \sum_{i=1}^{n} \beta_i(t)\, u(t - \tau_i) + \mathbb{U}^{\alpha} f(t, u(t)), \; f \in S_F(u) \right\}, \tag{6}$$

where U is the ordinary single-valued fractional integral.
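The text does not restate the definition of $\mathbb{U}^{\alpha}$; a common choice, assumed in the sketch below, is the Riemann-Liouville fractional integral $(\mathbb{U}^{\alpha} f)(t) = \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} f(s)\, ds$. A simple midpoint-rule approximation can be checked against the closed form $\mathbb{U}^{\alpha} 1 = t^{\alpha}/\Gamma(\alpha+1)$:

```python
# Sketch of the single-valued fractional integral U^alpha used in Eq. (6),
# ASSUMING the Riemann-Liouville form
#   (U^alpha f)(t) = (1/Gamma(alpha)) * int_0^t (t - s)^(alpha-1) f(s) ds,
# approximated by a midpoint rule (which avoids the endpoint singularity).

from math import gamma

def frac_integral(f, alpha, t, n=20000):
    h = t / n
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * h                      # midpoint of the i-th cell
        total += (t - s) ** (alpha - 1) * f(s)
    return total * h / gamma(alpha)

# Sanity check against the closed form U^alpha 1 = t^alpha / Gamma(alpha+1).
approx = frac_integral(lambda s: 1.0, 0.5, 1.0)
exact = 1.0 / gamma(1.5)
print(abs(approx - exact) < 1e-2)  # expect: True
```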

We shall next illustrate that the operator *Λ* possesses closed values.

LEMMA 2.13. *Suppose that the operator Λ is given as in Eq*. (6); *then Λu is closed for all u* ∈ *Ω*.

PROOF. Let *u* ∈ *Ω* and let (*uk*) be a sequence in *Λu* which converges to some *u*<sup>∗</sup> ∈ *Ω*. We shall prove the statement by showing that the limits of convergent sequences in *Λu* lie in *Λu*. By definition, there exists a sequence (*f k*) in *SF*(*u*) for which

$$u_k(t) = \sum_{i=1}^n \beta_i(t)\, u(t-\tau_i) + \mathbb{U}^{\alpha} f_k(t, u(t)).$$

Also note that the sequence (*f k*) converges to some *f* <sup>∗</sup> ∈ *L*<sup>1</sup>(*J*, R). Since *F*(*t*, *u*(*t*)) is closed, *f* <sup>∗</sup> ∈ *SF*(*u*). Actually, we have

$$u_*(t) = \sum_{i=1}^n \beta_i(t)\, u(t-\tau_i) + \mathbb{U}^{\alpha} f_*(t, u(t)) \in \Lambda u.$$

This completes the proof.

Now, we give the solvability of the system (2).

THEOREM 2.14. *According to Eq*. (2), *assume that there exist nonempty closed subsets Π*1, *Π*2, ⋯, *Πp in Ω such that* $\bigcup_{k=1}^{p} \Pi_k = \Omega$ *and F has the following properties:*

**1.** *t* ↦ *F*(*t*, *u*(*t*)) *is measurable for each u* ∈ *Ω*;

**2.** *there exists a function ξ* : $\mathbb{R}_+^5 \to \mathbb{R}_+$ *such that* *H*(*F*(*t*, *u*(*t*)), *F*(*t*, *v*(*t*))) ≤ *ξ*(∥*u*−*v*∥, *d*(*u*, *Λu*), *d*(*v*, *Λv*), *d*(*u*, *Λv*), *d*(*v*, *Λu*)) *whenever either* (*u*, *v*) ∈ *Πi* × *Πi*+1 *or* (*u*, *v*) ∈ *Πi*+1 × *Πi holds for some i* ∈ {1, 2, ⋯, *p*};

**3.** *Λ is proximal and cyclic over* $\bigcup_{k=1}^{p} \Pi_k = \Omega$.

*If the function ψ* : $\mathbb{R}_+^6 \to \mathbb{R}_+$ *given by*

$$\psi(t\_1, t\_2, \dots, t\_6) := t\_1 - nBt\_2 - \frac{T^{\alpha}}{\Gamma(\alpha + 1)} \xi(t\_2, t\_3, t\_4, t\_5, t\_6)$$

*is in the class Ψ*, *then the problem* (2) *has at least one solution*.

PROOF. Let (*u*, *v*) ∈ *Πi* × *Πi*+1 for some *i* ∈ {1, 2, ⋯, *p*}. By property 2, we may choose some *f* 1(*t*, *u*(*t*)) ∈ *F*(*t*, *u*(*t*)) and *f* 2(*t*, *v*(*t*)) ∈ *F*(*t*, *v*(*t*)) for which

$$|f_1(t, u(t)) - f_2(t, v(t))| \le \xi(\|u - v\|, d(u, \Lambda u), d(v, \Lambda v), d(u, \Lambda v), d(v, \Lambda u)).$$

Consider the two functions

$$w_1(t) = \sum_{i=1}^n \beta_i(t)\, u(t - \tau_i) + \mathbb{U}^{\alpha} f_1(t, u(t)) \in \Lambda u$$

and


$$w\_2(t) = \sum\_{i=1}^n \beta\_i(t)\upsilon(t-\tau\_i) + \mathbb{U}^\alpha f\_2(t, \upsilon(t)) \in \Lambda \upsilon.$$

Next, observe that

$$\begin{aligned}
|w_1(t) - w_2(t)| &\le \sum_{i=1}^{n} \beta_i(t)\, |u(t-\tau_i) - v(t-\tau_i)| + |\mathbb{U}^{\alpha} f_1(t, u(t)) - \mathbb{U}^{\alpha} f_2(t, v(t))| \\
&\le \sum_{i=1}^{n} \beta_i(t)\, |u(t-\tau_i) - v(t-\tau_i)| + \mathbb{U}^{\alpha} |f_1(t, u(t)) - f_2(t, v(t))| \\
&\le nB\|u - v\| + \frac{T^{\alpha}}{\Gamma(\alpha+1)} |f_1(t, u(t)) - f_2(t, v(t))| \\
&\le nB\|u - v\| + \frac{T^{\alpha}}{\Gamma(\alpha+1)} \xi(\|u - v\|, d(u, \Lambda u), d(v, \Lambda v), d(u, \Lambda v), d(v, \Lambda u)).
\end{aligned}$$

It follows that

$$H(\Lambda u, \Lambda v) \le nB\|u - v\| + \frac{T^{\alpha}}{\Gamma(\alpha + 1)} \xi(\|u - v\|, d(u, \Lambda u), d(v, \Lambda v), d(u, \Lambda v), d(v, \Lambda u)).$$

Consequently, we have for each (*u*, *v*) ∈ *Πi* × *Πi*+1, *i* ∈ {1, 2, ⋯, *p*}, that

$$\psi(H(\Lambda u, \Lambda v), \|u - v\|, d(u, \Lambda u), d(v, \Lambda v), d(u, \Lambda v), d(v, \Lambda u)) \le 0.$$

We may deduce similarly that the above inequality also holds in the case (*u*, *v*) ∈ *Πi*+1 × *Πi*. Apply Theorem 2.7 to obtain the desired result.

We next consider the existence of solutions to Eq. (2) in the case when an ordering ⊑ is defined on *Ω* in such a way that for *u*, *v* ∈ *Ω*,

$$u \sqsubseteq v \; \Leftrightarrow \; u(t) \le v(t) \quad \text{a.e. } t \in J.$$

It is easy to see that if (*un*) is a nondecreasing sequence in *Ω* which converges to some *u*<sup>∗</sup> ∈ *Ω*, then *un* ⊑ *u*<sup>∗</sup> for all *n* ∈ N. As a further step, we shall need the existence of a weak solution to Eq. (2) to serve as an initial state.

DEFINITION 2.15. Suppose that (*Ω*, ⊑) is a partially ordered set. A weak solution for the problem (2) (w.r.t. ⊑) is a function *u* ∈ *Ω* such that *u* ⊑ *v* for all *v* ∈ *Λu*.

COROLLARY 2.16. *According to Eq*. (2), *assume that there is an ordering* ⊑ *defined on Ω. Suppose also that we have the following properties:*

**1.** *t* ↦ *F*(*t*, *u*(*t*)) *is measurable for each u* ∈ *Ω*;

**2.** *there exists a function ξ* : $\mathbb{R}_+^5 \to \mathbb{R}_+$ *such that* *H*(*F*(*t*, *u*(*t*)), *F*(*t*, *v*(*t*))) ≤ *ξ*(∥*u*−*v*∥, *d*(*u*, *Λu*), *d*(*v*, *Λv*), *d*(*u*, *Λv*), *d*(*v*, *Λu*)) *whenever u*, *v* ∈ *Ω are comparable*;

**3.** *Λ is proximal and nondecreasing*;

**4.** *a weak solution u*<sup>0</sup> ∈ *Ω to the problem* (2) *exists*.

*If the function ψ* : $\mathbb{R}_+^6 \to \mathbb{R}_+$ *given by*

$$\psi(t\_1, t\_2, \dots, t\_6) := t\_1 - nBt\_2 - \frac{T^{\alpha}}{\Gamma(\alpha + 1)} \xi(t\_2, t\_3, t\_4, t\_5, t\_6)$$

*is in the class Ψ*, *then the problem* (2) *has at least one solution*.

PROOF. As in the proof of the previous theorem, we may similarly derive that

$$\psi(H(\Lambda u, \Lambda v), \|u - v\|, d(u, \Lambda u), d(v, \Lambda v), d(u, \Lambda v), d(v, \Lambda u)) \le 0$$

whenever *u*, *v* ∈ *Ω* are comparable. Therefore, we may apply Corollary 2.10 to obtain the desired result.

#### **3. Fractional set-valued systems in modular metric spaces**

In this section, we shall consider fixed point inclusions studied within modular metric spaces. Under certain conditions, we can successfully extend Nadler's theorem to the context of modular metric spaces. A modular metric space is a relatively new concept; it generalizes and unifies both modular and metric spaces, and it is therefore not necessarily equipped with a linear structure.

Before we go further, let us first give basic definitions and related properties of a modular metric space.

DEFINITION 3.1 ([23]). Let *X* be a nonempty set. A function *w* : (0, *∞*) × *X* × *X* → [0, +*∞*] is said to be a metric modular on *X* if the following conditions are satisfied for any *s*, *t* *>* 0 and *x*, *y*, *z* ∈ *X*:

**1.** *x* = *y* if and only if *wt*(*x*, *y*) = 0 for all *t* *>* 0.

**2.** *wt*(*x*, *y*) = *wt*(*y*, *x*).

**3.** *ws*+*t*(*x*, *y*) ≤ *ws*(*x*, *z*) + *wt*(*z*, *y*).

Here, we use *wt*(·, ·) := *w*(*t*, ·, ·). In this case, we say that (*X*, *w*) is a modular metric space. Notice that the value of a metric modular can be infinite.


Since we are focusing on the generalized metric space approach, we shall not discuss modular space theory here. Suppose that (*X*, *d*) is a metric space; then *wt*(·, ·) := *d*(·, ·) is a metric modular on *X*.

Now, we turn to basic definitions we need in this particular space. We start by giving the topology of the space.

Let (*X*, *w*) be a modular metric space. By defining an open ball $B_w(x; r) := \{ z \in X \; ; \; \sup_{t>0} w_t(x, z) < r \}$, we can define a Hausdorff topology on *X* having the collection of all such open balls as a base. The convergence in this topology can therefore be written as:

$$x_n \to \overline{x} \; \Leftrightarrow \; \sup_{t>0} w_t(x_n, \overline{x}) \to 0,$$

where (*xn*) ⊂ *X* and $\overline{x}$ ∈ *X*. With this characterization, we now have a good hint to define the Cauchy sequence. A sequence (*xn*) ⊂ *X* is said to be Cauchy if for any given *ε* *>* 0, there exists *n*<sup>∗</sup> ∈ N such that

$$\sup\_{t>0} w\_t(\mathfrak{x}\_m, \mathfrak{x}\_n) < \varepsilon$$

whenever *m*, *n* *>* *n*∗. Naturally, *X* is said to be complete if every Cauchy sequence in *X* converges.

We next give another route of investigation of fixed point inclusions in modular metric spaces. This time, we shall rely more on analytical assumptions; briefly said, we shall use contractivity assumptions.

Before we step into the main exploration, we need the following notions concerning the metric modular of sets.

We write *C*(*X*) to denote the set of all nonempty closed subsets of *X*. For any subset *A* ⊂ *X* and point *x* ∈ *X*, we denote $w_t(x, A) := \inf_{y \in A} w_t(x, y)$.

Given two subsets *A*, *B* ∈ *C*(*X*), define $w_t(A, B) := \sup_{x \in A} w_t(x, B)$. Most importantly, the Hausdorff-Pompeiu metric modular is given by $W_t(A, B) := \max\{w_t(A, B), w_t(B, A)\}$.

LEMMA 3.2. *Let* (*X*, *w*) *be a modular metric space, A* ∈ *C*(*X*) *and x* ∈ *X. Then*,

$$w\_t(x, A) = 0 \text{ for all } t > 0 \;\Leftrightarrow\; x \in A.$$

DEFINITION 3.3. Given a modular metric space (*X*, *w*) and an arbitrary point *x* ∈ *X*, a subset *Y* ⊂ *X* is said to be reachable from *x* if

$$\inf\_{y \in Y} \sup\_{t>0} w\_t(x, y) = \sup\_{t>0} w\_t(x, Y) < \infty.$$

The next lemma gives a simple criterion for when reachability holds.

LEMMA 3.4. *Let* (*X*, *w*) *be a modular metric space with w being l.s.c. and Y* ⊂ *X a nonempty compact subset. For a point x* ∈ *X, if either* inf<sub>*y*∈*Y*</sub> sup<sub>*t*>0</sub> *w<sub>t</sub>*(*x*, *y*) < ∞ *or* sup<sub>*t*>0</sub> *w<sub>t</sub>*(*x*, *Y*) < ∞*, then Y is reachable from x*.

The following lemma is essential in showing the solvability of the fixed point inclusion under a contractivity condition.

LEMMA 3.5. *Suppose that Y*, *Z* ∈ *C*(*X*) *are nonempty and z* ∈ *Z. If Y is reachable from z, then for each ε* > 0 *there exists a point y<sub>ε</sub>* ∈ *Y such that* sup<sub>*t*>0</sub> *w<sub>t</sub>*(*z*, *y<sub>ε</sub>*) ≤ sup<sub>*t*>0</sub> *W<sub>t</sub>*(*Z*, *Y*) + *ε*.

#### **3.1. Fixed point inclusion in modular metric spaces**

Now, we state the notions of a contraction and a Kannan contraction. Note that neither of these two concepts generalizes the other.

DEFINITION 3.6. Let (*X*, *w*) be a modular metric space. A set-valued operator *F* : *X* ⇉ *X* is said to be a contraction if there exists a constant *k* ∈ [0, 1) such that

$$W\_t(F(x), F(y)) \le k\, w\_t(x, y),\tag{7}$$

for all *t >* 0 and *x*, *y* ∈ *X*.

If *k* is restricted to [0, 1/2) and Eq. (7) is replaced with the following inequality:

$$W\_t(F(x), F(y)) \le k[w\_t(x, F(x)) + w\_t(y, F(y))],$$

then we call *F* a Kannan contraction.

Now, we present the main existence theorems.

THEOREM 3.7. *Let* (*X*, *w*) *be a complete modular metric space with w being l.s.c. and F a contraction on X having compact values with contraction constant k. Suppose that there exists a pair of points x*<sub>0</sub> ∈ *X and x*<sub>1</sub> ∈ *F*(*x*<sub>0</sub>) *with the following properties:*

*(A) the set* {*x*<sub>0</sub>, *x*<sub>1</sub>} *is bounded*,

*(B) F*(*x*<sub>1</sub>) *is reachable from x*<sub>1</sub>.

*Then*, *F has at least one fixed point*.

PROOF. Since *F*(*x*<sub>1</sub>) is reachable from *x*<sub>1</sub>, by Lemma 3.5 we may choose *x*<sub>2</sub> ∈ *F*(*x*<sub>1</sub>) such that

$$\sup\_{t>0} w\_t(x\_1, x\_2) \le \sup\_{t>0} W\_t(F(x\_0), F(x\_1)) + k.$$

From the above and the hypothesis that {*x*<sub>0</sub>, *x*<sub>1</sub>} is bounded, we arrive at the following inequalities:

$$\begin{aligned} \sup\_{t>0} w\_t(x\_2, F(x\_2)) &\le \sup\_{t>0} W\_t(F(x\_1), F(x\_2)) \\ &\le k \sup\_{t>0} w\_t(x\_1, x\_2) \\ &\le k\left[\sup\_{t>0} W\_t(F(x\_0), F(x\_1)) + k\right] \\ &\le k^2 \sup\_{t>0} w\_t(x\_0, x\_1) + k^2 \\ &< \infty. \end{aligned}$$

By these assumptions, we may apply Lemma 3.4 to guarantee that *F*(*x*<sub>2</sub>) is actually reachable from *x*<sub>2</sub>.

Inductively, by this procedure and with the help of Lemma 3.5, we define a sequence (*x<sub>n</sub>*) in *X* satisfying the following properties for all *n* ∈ N:

$$\begin{cases} x\_n \in F(x\_{n-1}),\\ \sup\_{t>0} w\_t(x\_n, x\_{n+1}) \le \sup\_{t>0} W\_t(F(x\_{n-1}), F(x\_n)) + k^n,\\ F(x\_n) \text{ is reachable from } x\_n. \end{cases}$$

Hence, by the contractivity of *F*, we have

$$\begin{aligned} \sup\_{t>0} w\_t(x\_n, x\_{n+1}) &\le \sup\_{t>0} W\_t(F(x\_{n-1}), F(x\_n)) + k^n \\ &\le k \sup\_{t>0} w\_t(x\_{n-1}, x\_n) + k^n \\ &\le k\left[k \sup\_{t>0} w\_t(x\_{n-2}, x\_{n-1}) + k^{n-1}\right] + k^n \\ &\le k^2 \sup\_{t>0} w\_t(x\_{n-2}, x\_{n-1}) + 2k^n. \end{aligned}$$

Thus, by induction, we have

$$\sup\_{t>0} w\_t(x\_n, x\_{n+1}) \le k^n \sup\_{t>0} w\_t(x\_0, x\_1) + n k^n.$$

Moreover, it follows that

$$\sup\_{t>0} \sum\_{n \in \mathbb{N}} w\_t(x\_n, x\_{n+1}) \le \sup\_{t>0} w\_t(x\_0, x\_1) \sum\_{n \in \mathbb{N}} k^n + \sum\_{n \in \mathbb{N}} n k^n < \infty.$$
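The summability of these step bounds is what drives the Cauchy property below. A quick numerical sanity check (*k* and the initial gap *d*<sub>0</sub> := sup<sub>*t*>0</sub> *w<sub>t</sub>*(*x*<sub>0</sub>, *x*<sub>1</sub>) are arbitrary illustrative values, not taken from the text) compares the partial sums with their closed forms:

```python
# Illustrative check (k and d0 are arbitrary values): the step bounds
# k^n * d0 + n * k^n are summable, which is what makes the sequence
# (x_n) Cauchy in the argument below.
k, d0 = 0.5, 1.0
partial = sum(k**n * d0 + n * k**n for n in range(1, 200))

# Closed forms for n >= 1: sum k^n = k/(1-k), sum n*k^n = k/(1-k)^2
closed = k / (1 - k) * d0 + k / (1 - k) ** 2
print(abs(partial - closed) < 1e-9)  # → True
```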

Without loss of generality, suppose *m*, *n*∈ N and *m > n*. Observe that

$$\begin{aligned} \sup\_{t>0} w\_t(x\_n, x\_m) &\le \sup\_{t>0} \left[ w\_{\frac{t}{m-n}}(x\_n, x\_{n+1}) + \dots + w\_{\frac{t}{m-n}}(x\_{m-1}, x\_m) \right] \\ &\le \sup\_{t>0} w\_t(x\_n, x\_{n+1}) + \dots + \sup\_{t>0} w\_t(x\_{m-1}, x\_m) \\ &\le \sum\_{j = n\_\*}^{\infty} \sup\_{t>0} w\_t(x\_j, x\_{j+1}) \\ &< \varepsilon, \end{aligned}$$

for all *m* > *n* ≥ *n*<sup>∗</sup> for some *n*<sup>∗</sup> ∈ N. Hence, (*x<sub>n</sub>*) is a Cauchy sequence, so the completeness of *X<sub>w</sub>* implies that (*x<sub>n</sub>*) converges to some point *x* ∈ *X<sub>w</sub>*. Consequently, we may conclude from the contractivity of *F* that the sequence (*F*(*x<sub>n</sub>*)) converges to *F*(*x*). Since *x<sub>n</sub>* ∈ *F*(*x*<sub>*n*−1</sub>), we have, for any *t* > 0,

$$0 \le w\_t(x, F(x)) \le w\_{\frac{t}{2}}(x, x\_n) + W\_{\frac{t}{2}}(F(x\_{n-1}), F(x)),$$

which implies that *w<sub>t</sub>*(*x*, *F*(*x*)) = 0 for all *t* > 0. Since *F*(*x*) is closed, it then follows from Lemma 3.2 that *x* ∈ *F*(*x*).

EXAMPLE 3.8. Suppose that *X* = [0, 1] and *w* : (0, +∞) × *X* × *X* → [0, +∞) is defined by

$$w\_t(x, y) = \frac{1}{1+t}|x - y|.$$

Clearly, *w* is an l.s.c. metric modular on *X*. Notice that any two-point subset is bounded. Now, we define a set-valued operator *F* : *X*⇉*X* by

$$F(x) := \left[\frac{x+1}{2}, 1\right]$$

for every *x*∈ *X*.

Observe that *F* has compact values on *X*. Note that for each *t >* 0 and *x*, *y*∈ *X*, we have

$$W\_t(Fx, Fy) = \frac{1}{2(1+t)}|x - y| \le \frac{1}{2} w\_t(x, y).$$

Therefore, *F* is a contraction with contraction constant *k* = 1/2. Moreover, it is easy to see that conditions (A) and (B) hold. Finally, 1 is a fixed point of *F* (and it is unique).
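The convergence mechanism behind Theorem 3.7 can be seen concretely here by iterating a selection of *F*. The sketch below (our illustration, not part of the example) picks from each *F*(*x*) = [(*x*+1)/2, 1] the point nearest to *x* and watches the orbit climb toward the fixed point 1:

```python
# Illustrative orbit (our construction, not from the chapter): from each x
# we select the point of F(x) = [(x+1)/2, 1] closest to x, namely (x+1)/2,
# and iterate; the orbit approaches the fixed point 1.
x = 0.0
for _ in range(60):
    x = (x + 1.0) / 2.0  # nearest-point selection from F(x)

# x is now within 1e-12 of the fixed point 1, and indeed 1 ∈ F(1) = [1, 1]
print(abs(x - 1.0) < 1e-12)  # → True
```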

Next, we show that the fixed point in the above theorem need not be unique, as the following example demonstrates:

EXAMPLE 3.9. Suppose that *X* is defined as in the previous example. Consider the operator *G* : *X*⇉*X* given by

$$G(x) := \left[0, \frac{x+1}{2}\right],$$

for each *x*∈ *X*.

Note that this operator *G* is also a contraction with constant *k* = 1/2 and takes compact values on *X*. Also, conditions (A) and (B) hold. However, every point in *X* is a fixed point of *G*. This shows the nonuniqueness of fixed points for a set-valued contraction.
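A quick numerical sweep (illustrative only) confirms the membership *x* ∈ *G*(*x*) across [0, 1]:

```python
# Every x in [0, 1] satisfies x ∈ G(x) = [0, (x+1)/2], since x <= (x+1)/2
# holds exactly when x <= 1; a sweep over a grid confirms the membership.
def in_G(x):
    return 0.0 <= x <= (x + 1.0) / 2.0

print(all(in_G(i / 100.0) for i in range(101)))  # → True
```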

THEOREM 3.10. *Replacing F in Theorem 3.7 with a Kannan's contraction yields the same existence result*.

PROOF. Since *F*(*x*<sub>1</sub>) is reachable from *x*<sub>1</sub>, by Lemma 3.5 we may choose *x*<sub>2</sub> ∈ *F*(*x*<sub>1</sub>) such that

$$\sup\_{t>0} w\_t(x\_1, x\_2) \le \sup\_{t>0} W\_t(F(x\_0), F(x\_1)) + k.$$

Now, observe that

$$\begin{aligned} \sup\_{t>0} w\_t(x\_2, F(x\_2)) &\le \sup\_{t>0} W\_t(F(x\_1), F(x\_2)) \\ &\le k \sup\_{t>0} w\_t(x\_1, F(x\_1)) + k \sup\_{t>0} w\_t(x\_2, F(x\_2)) \\ &\le k \sup\_{t>0} W\_t(F(x\_0), F(x\_1)) + k \sup\_{t>0} w\_t(x\_2, F(x\_2)) \\ &\le k \sup\_{t>0} w\_t(x\_0, F(x\_0)) + k \sup\_{t>0} w\_t(x\_1, F(x\_1)) + k \sup\_{t>0} w\_t(x\_2, F(x\_2)) \\ &\le k \sup\_{t>0} w\_t(x\_0, x\_1) + k \sup\_{t>0} w\_t(x\_1, F(x\_1)) + k \sup\_{t>0} w\_t(x\_2, F(x\_2)). \end{aligned}$$

Writing *ξ* := *k*/(1 − *k*) < 1, we obtain, from the boundedness of {*x*<sub>0</sub>, *x*<sub>1</sub>} and the reachability of *F*(*x*<sub>1</sub>) from *x*<sub>1</sub>, that

$$\sup\_{t>0} w\_t(x\_2, F(x\_2)) \le \xi \sup\_{t>0} w\_t(x\_0, x\_1) + \xi \sup\_{t>0} w\_t(x\_1, F(x\_1)) < \infty.$$

Thus, from the assumptions and Lemma 3.4, we see that *F*(*x*<sub>2</sub>) is reachable from *x*<sub>2</sub>.

Inductively, we can construct a sequence (*x<sub>n</sub>*) in *X* with exactly the same properties as in the proof of Theorem 3.7.

Now, consider further that

$$\begin{aligned} \sup\_{t>0} w\_t(x\_n, x\_{n+1}) &\le \sup\_{t>0} W\_t(F(x\_{n-1}), F(x\_n)) + k^n \\ &\le k \sup\_{t>0} w\_t(x\_{n-1}, F(x\_{n-1})) + k \sup\_{t>0} w\_t(x\_n, F(x\_n)) + k^n \\ &\le k \sup\_{t>0} w\_t(x\_{n-1}, F(x\_{n-1})) + k \sup\_{t>0} w\_t(x\_n, x\_{n+1}) + k^n. \end{aligned}$$

Moreover, we get

$$\begin{aligned} \sup\_{t>0} w\_t(x\_n, x\_{n+1}) &\le \xi \sup\_{t>0} w\_t(x\_{n-1}, x\_n) + \frac{k^n}{1-k} \\ &\le \xi^2 \sup\_{t>0} w\_t(x\_{n-2}, x\_{n-1}) + \frac{k^n}{(1-k)^2} + \frac{k^n}{(1-k)^2} \\ &= \xi^2 \sup\_{t>0} w\_t(x\_{n-2}, x\_{n-1}) + 2 \cdot \frac{k^n}{(1-k)^2} \\ &\;\;\vdots \\ &\le \xi^n \sup\_{t>0} w\_t(x\_0, x\_1) + n \xi^n. \end{aligned}$$

As in the proof of Theorem 3.7, the sequence (*x<sub>n</sub>*) converges to some *x* ∈ *X*. Observe now that

$$\begin{aligned} \sup\_{t>0} w\_t(x, F(x)) &= \sup\_{t>0} w\_t(\{x\}, F(x)) \\ &\le \sup\_{t>0} w\_t(\{x\}, F(x\_n)) + \sup\_{t>0} w\_t(F(x\_n), F(x)) \\ &= \sup\_{t>0} w\_t(x, F(x\_n)) + \sup\_{t>0} w\_t(F(x\_n), F(x)) \\ &\le \sup\_{t>0} w\_t(x, x\_{n+1}) + \sup\_{t>0} W\_t(F(x\_n), F(x)) \\ &\le \sup\_{t>0} w\_t(x, x\_{n+1}) + k \sup\_{t>0} w\_t(x\_n, F(x\_n)) + k \sup\_{t>0} w\_t(x, F(x)) \\ &\le (1+k) \sup\_{t>0} w\_t(x, x\_{n+1}) + k \sup\_{t>0} w\_t(x, F(x)). \end{aligned}$$

Thus, we have

$$\sup\_{t>0} w\_t(x, F(x)) \le \frac{1+k}{1-k} \sup\_{t>0} w\_t(x, x\_{n+1}).$$

Letting *n* → ∞ concludes the theorem.

#### **3.2. Fractional integral inclusion**

In this subsection, we use notation slightly differently from the earlier sections, following the conventions for variables and functions that are common in integral and differential equations.

Suppose that *Ψ* is the interval mentioned in the previous section. Let us assume throughout the section that the real line R is equipped with the metric modular

$$w\_{\lambda}^{\mathbb{R}}(x, y) := \frac{1}{1+\lambda}|x - y|,$$

for *λ* > 0 and *x*, *y* ∈ R. Thus, for the space *C*(*Ψ*) of all continuous (in the *w*<sup>R</sup>-topology) real-valued functions on *Ψ*, we shall use the metric modular

$$w\_{\lambda}^{C(\Psi)}(\varphi, \psi) := \sup\_{t \in \Psi} w\_{\lambda}^{\mathbb{R}}(\varphi(t), \psi(t)),$$

for *λ* > 0 and *ϕ*, *ψ* ∈ *C*(*Ψ*). Note that both *w*<sup>R</sup> and *w*<sup>*C*(*Ψ*)</sup> satisfy the Fatou property. Also note that the set R is second countable, i.e., it has a countable base, w.r.t. the *w*<sup>R</sup>-topology. Moreover, it is clear that the set {*ϕ*, *ψ*} is bounded w.r.t. *w*<sup>*C*(*Ψ*)</sup> for any *ϕ*, *ψ* ∈ *C*(*Ψ*). Suppose that *F* : *Ψ* × R → 2<sup>R</sup> is a set-valued operator with nonempty compact values and *u* ∈ *C*(*Ψ*). We shall use the following notation for the collection of integrable selections:

$$S\_F(u) := \left\{ f \in L^1(\Psi, \mu)\; ;\; f(t) \in F(t, u(t)) \text{ a.e. } t \in \Psi \right\}.$$

It is clear that *S<sub>F</sub>*(*u*) is closed. Next, for each *i* ∈ {0, 1, ⋯, *N*}, *N* ∈ N, assume that *β<sub>i</sub>* : *Ψ* → R is continuous and *τ<sub>i</sub>* : *Ψ* → R<sup>+</sup> is a function with *τ<sub>i</sub>*(*t*) ≤ *t*. We write *B* := max<sub>0≤*i*≤*N*</sub> sup<sub>*t*∈*Ψ*</sub> *β<sub>i</sub>*(*t*). The main aim of this section is to consider the fractional integral inclusion:

Recent Fixed Point Techniques in Fractional Set-Valued Dynamical Systems http://dx.doi.org/10.5772/67069 163

$$u(t) - \sum\_{i=0}^{N} \beta\_i(t)\, u(t - \tau\_i(t)) \in \mathrm{I}\_{\Psi}^{\alpha} F(t, u(t))\,dt, \quad \alpha \in (0, 1]. \tag{FII}$$

In the above inclusion, the summation is interpreted as the delay term.

We shall define a set-valued operator *Λ* : *C*(*Ψ*) → 2<sup>*C*(*Ψ*)</sup> by

$$\Lambda(u) := \left\{ w \in C(\Psi)\; ;\; w(t) = \sum\_{i=0}^{N} \beta\_i(t)\, u(t - \tau\_i(t)) + \mathrm{I}\_{\Psi}^{\alpha} f(t, u(t))\,dt, \quad f \in S\_F(u) \right\}.$$

Note that for any *ϕ* ∈ *C*(*Ψ*), the set *Λ*(*ϕ*) is reachable from *ϕ* w.r.t. *w*<sup>*C*(*Ψ*)</sup>. To ensure that the operator *Λ* has this nice property, we assume that *S<sub>F</sub>*(*u*) is nonempty.

LEMMA 3.11. *The operator Λ given above is compact-valued if S<sub>F</sub>*(*u*) *is nonempty*.

PROOF. We show compactness via its sequential characterization. Suppose that *u* ∈ *C*(*Ψ*) and that (*w<sub>n</sub>*) is an arbitrary sequence in *Λ*(*u*). By definition, there corresponds a convergent sequence (*f<sub>n</sub>*) in *S<sub>F</sub>*(*u*) ⊂ *F*(⋅, *u*(⋅)) satisfying

$$w\_n(t) = \sum\_{i=0}^{N} \beta\_i(t)\, u(t - \tau\_i(t)) + \mathrm{I}\_{\Psi}^{\alpha} f\_n(t, u(t))\,dt.$$

The conclusion then follows.

We now state the solvability result for the problem (FII). It is clear that *u* ∈ *C*(*Ψ*) solves Eq. (FII) if and only if *u* is a fixed point of *Λ*.

THEOREM 3.12. *Suppose that F defined above is compact-valued and S<sub>F</sub>*(*u*) *is nonempty. Assume further that*

*(F1) for any given u*, *v* ∈ *C*(*Ψ*) *and any selection f* ∈ *S<sub>F</sub>*(*u*) *of F, there corresponds a function f* ′ ∈ *S<sub>F</sub>*(*v*) *such that*

$$\begin{cases} w\_{\lambda}^{\mathbb{R}}(f(t, u(t)), f'(t, v(t))) = w\_{\lambda}^{\mathbb{R}}(f(t, u(t)), F(t, v(t))),\\ w\_{\lambda}^{\mathbb{R}}(f(t, u(t)), f'(t, v(t))) \le L\, w\_{\lambda}^{C(\Psi)}(u, v), \end{cases}$$

*for all t* ∈ *Ψ*;

$$(F2)\quad \frac{(N+1)B\,\Gamma(\alpha) + LT^{\alpha}}{\Gamma(\alpha)} < 1.$$

*Then*, *Λ has a fixed point*.

PROOF. For each *u*, *v* ∈ *C*(*Ψ*), we may choose, from the assumption, functions *f*<sub>1</sub>, *f*<sub>2</sub> such that

$$\begin{cases} f\_1 \in S\_F(u),\\ f\_2 \in S\_F(v),\\ w\_{\lambda}^{\mathbb{R}}(f\_1(t, u(t)), f\_2(t, v(t))) = w\_{\lambda}^{\mathbb{R}}(f\_1(t, u(t)), F(t, v(t))),\\ w\_{\lambda}^{\mathbb{R}}(f\_1(t, u(t)), f\_2(t, v(t))) \le L\, w\_{\lambda}^{C(\Psi)}(u, v), \end{cases}$$

for each *t* ∈ *Ψ*. Consider the two functions *w*<sub>1</sub> ∈ *Λ*(*u*) and *w*<sub>2</sub> ∈ *Λ*(*v*), defined respectively as follows:

$$\begin{cases} w\_1(t) := \sum\_{i=0}^{N} \beta\_i(t)\, u(t - \tau\_i(t)) + \mathrm{I}\_{\Psi}^{\alpha} f\_1(t, u(t))\,dt,\\ w\_2(t) := \sum\_{i=0}^{N} \beta\_i(t)\, v(t - \tau\_i(t)) + \mathrm{I}\_{\Psi}^{\alpha} f\_2(t, v(t))\,dt. \end{cases}$$

Now, consider the following computation:

$$\begin{aligned} w\_{\lambda}^{\mathbb{R}}(w\_1(t), w\_2(t)) &\le \sum\_{i=0}^{N} \beta\_i(t)\, w\_{\lambda}^{\mathbb{R}}(u(t - \tau\_i(t)), v(t - \tau\_i(t))) \\ &\quad + w\_{\lambda}^{C(\Psi)}\!\left(\mathrm{I}\_{\Psi}^{\alpha} f\_1(t, u(t))\,dt,\; \mathrm{I}\_{\Psi}^{\alpha} f\_2(t, v(t))\,dt\right) \\ &\le (N+1)B\, w\_{\lambda}^{C(\Psi)}(u, v) + \mathrm{I}\_{\Psi}^{\alpha}\, w\_{\lambda}^{\mathbb{R}}(f\_1(t, u(t)), f\_2(t, v(t))) \\ &\le (N+1)B\, w\_{\lambda}^{C(\Psi)}(u, v) + \frac{LT^{\alpha}}{\Gamma(\alpha)}\, w\_{\lambda}^{C(\Psi)}(u, v) \\ &= \left[\frac{(N+1)B\,\Gamma(\alpha) + LT^{\alpha}}{\Gamma(\alpha)}\right] w\_{\lambda}^{C(\Psi)}(u, v). \end{aligned}$$

It follows that

$$W\_{\lambda}^{C(\Psi)}(\Lambda(u), \Lambda(v)) \le \left[\frac{(N+1)B\,\Gamma(\alpha) + LT^{\alpha}}{\Gamma(\alpha)}\right] w\_{\lambda}^{C(\Psi)}(u, v).$$

The proof ends here by applying Theorem 3.7.
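In practice, condition (F2) is a simple numerical check on the problem data. The sketch below (all parameter values are hypothetical and NOT taken from the chapter; they only illustrate the test) evaluates the contraction constant of *Λ*:

```python
import math

# Checking condition (F2) numerically. The parameter values N, B, L, T,
# alpha below are hypothetical, chosen only to illustrate how the
# contraction constant of Lambda would be evaluated for concrete data.
N, B, L, T, alpha = 1, 0.2, 0.5, 1.0, 0.8

k = ((N + 1) * B * math.gamma(alpha) + L * T**alpha) / math.gamma(alpha)
print(k < 1)  # → True: the fixed point theorem then applies to Lambda
```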

## **Author details**

Parin Chaipunya and Poom Kumam\*

\*Address all correspondence to: poom.kum@kmutt.ac.th

Department of Mathematics, Faculty of Science, Theoretical and Computational Science Center (TaCS), King Mongkut's University of Technology Thonburi, Bangkok, Thailand

## **References**

[1] Banach S. Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales. Fundamenta Math. 1922; 3: 133–181.

[2] Aumann RJ. Integrals of set-valued functions. Journal of Mathematical Analysis and Applications. 1965; 2: 1–12.


[18] Chaipunya P, Kumam P. An observation on set-valued contraction mappings in modular metric spaces. Thai Journal of Mathematics. 2015; 13: 9–17.

[19] Chaipunya P, Mongkolkeha C, Sintunavarat W, Kumam P. Fixed-point theorems for multivalued mappings in modular metric spaces. Abstract and Applied Analysis. 2012: 2–12.

[20] Chaipunya P, Cho YJ, Kumam P. Geraghty-type theorems in modular metric spaces with an application to partial differential equation. Advances in Difference Equations. 2012: 83.

[21] Chaipunya P, Kumam P. Fixed point theorems for cyclic operators with application in fractional integral inclusions with delays. Dynamical Systems, Differential Equations and Applications AIMS Proceedings. 2015: 248–257.

[22] Nashine HK, Kadelburg Z, Kumam P. Implicit-relation-type cyclic contractive mappings and applications to integral equations. Abstract and Applied Analysis. 2012: 15.

[23] Chistyakov VV. Modular metric spaces. I: basic concepts. Nonlinear Analysis: Theory, Methods & Applications, Series A, Theory Methods. 2010; 72: 1–14.

## **Relationship between Interpolation and Differential Equations: A Class of Collocation Methods**

Francesco Aldo Costabile, Maria Italia Gualtieri and Anna Napoli

http://dx.doi.org/10.5772/66995

Additional information is available at the end of the chapter

#### Abstract

In this chapter, the connection between general linear interpolation and initial, boundary and multipoint value problems is explained. First, a result of a theoretical nature is given, which highlights the relationship between the interpolation problem and the Fredholm integral equation for high-order differential problems. After observing that the given problem is equivalent to a Fredholm integral equation, this relation is used in order to determine a general procedure for the numerical solution of high-order differential problems by means of appropriate collocation methods based on the integration of the Fredholm integral equation. The classical analysis of the class of the obtained methods is carried out. Some particular cases are illustrated. Numerical examples are given in order to illustrate the efficiency of the method.

Keywords: boundary value problem, initial value problem, collocation methods, interpolation, Birkhoff, Lagrange, Peano, Fredholm

## 1. Introduction

The relationship between interpolation and differential equations theories has already been considered. In Ref. ([1], p. 72), Davis observed that the Peano kernel in the interpolation problem

$$y(a) = \alpha, \quad y(b) = \beta, \qquad a, b, \alpha, \beta \in \mathbb{R},\tag{1}$$

is the Green's function of the differential problem

© 2017 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


$$\begin{aligned} \phi''(x) &= f(x) \\ \phi(a) &= \phi(b) = 0 \end{aligned}$$

where $\varphi(x) = y(x) - P_1[y](x)$, $P_1[y](x)$ being the unique interpolating polynomial for Eq. (1).

He observed that "these remarks indicate the close relationship between Peano kernels and Green's functions, and hence between interpolation theory and the theory of linear differential equations. Unfortunately, we shall not be able to pursue this relationship" [1].

Later, Agarwal ([2], p. 2) and Agarwal and Wong ([3], pp. 21, 151, 186) considered some separate boundary value problems and the related Fredholm integral equation, using only polynomial interpolation, without taking the related Peano kernel into account. They used the Fredholm integral equation to obtain existence and uniqueness results for the solution of the considered boundary value problems.

Linear interpolation also plays an important role in the numerical solution of differential problems. For example, finite difference methods (see, for instance, [4–6] and references therein) approximate the solution $y(x)$ of a boundary value problem by a sequence of overlapping polynomials which interpolate $y(x)$ in a set of grid points. This is obtained by replacing the differential equation with finite difference equations on a mesh of points that covers the range of integration. The resulting algebraic system of equations is often solved with iterative processes, such as relaxation methods.

Many authors (see [7–10] and references therein) used linear interpolation with spline functions for the numerical solution of boundary value problems.

Here, we consider a more general class of nonlinear initial/boundary/multipoint value problems for high-order differential equations

$$\begin{cases} y^{(r)}(\mathbf{x}) = f\left(\mathbf{x}, y(\mathbf{x})\right), & \mathbf{x} \in I = [a, b], \ r \ge 1 \\\\ L\_i[y](\mathbf{x}) = w\_i, \ i = 0, \dots, r - 1, \ \mathbf{x} \in I \end{cases} \tag{2}$$

where $\mathbf{y}(x) = \big(y(x), y'(x), \dots, y^{(q)}(x)\big)$, $0 \le q < r$, $y \in C^r(I)$, and the $L_i$ are $r$ linearly independent functionals on $C^r(I)$. Moreover, we suppose that the function $f : [a,b] \times \mathbb{R}^{q+1} \to \mathbb{R}$ is continuous at least in the interior of the domain of interest and satisfies a uniform Lipschitz condition in $\mathbf{y}$, which means that there exists a nonnegative constant $\Lambda$ such that, whenever $(x, y_0, y_1, \dots, y_q)$ and $(x, \overline{y}_0, \overline{y}_1, \dots, \overline{y}_q)$ are in the domain of $f$, the following inequality holds

$$\left| f(\mathbf{x}, y\_0, y\_1, \dots, y\_q) - f(\mathbf{x}, \overline{y}\_0, \overline{y}\_1, \dots, \overline{y}\_q) \right| \le \Lambda \sum\_{k=0}^q |y\_k - \overline{y}\_k|. \tag{3}$$

If $L_i[y] = \Phi\big(\mathbf{y}(a)\big)$, $i = 0, \dots, r-1$, then (2) is an initial value problem (IVP); if $L_i[y] = \Phi\big(\mathbf{y}(a), \mathbf{y}(b)\big)$, $i = 0, \dots, r-1$, then (2) is a boundary value problem (BVP); if $L_i[y] = \Phi\big(\mathbf{y}(x_j)\big)$, $i = 0, \dots, r-1$, $j = 0, \dots, m$, $m \ge 2$, then (2) is a multipoint value problem (MVP).



We prefer collocation methods because of their superior accuracy for problems whose solutions are sufficiently smooth. Recently, Boyd ([11], p. 8) observed that "When many decimal places of accuracy are needed, the contest between pseudospectral algorithms and finite difference and finite element methods is not an even battle but a rout: pseudospectral methods win hands-down."

## 2. The Fredholm integral equation for problem (2)

We consider the general differential problem (2), and we prove that it is equivalent to a Fredholm integral equation.

Proposition 1 [1, p. 35] The linear interpolation problem

$$L\_i[P](\mathbf{x}) = w\_i, \qquad w\_i \in \mathbb{R}, \ i = 0, \dots, r - 1, \ P \in P\_{r - 1}, \ \mathbf{x} \in I \tag{4}$$

with $L_i$, $i = 0, \dots, r-1$, linearly independent functionals on $C^r(I)$, has the unique solution

$$P\_{r-1}(t) = -\frac{1}{G} \begin{vmatrix} 0 & 1 & t & \cdots & t^{r-1} \\ w\_0 & & & \\ \vdots & & L\_i[x^j] & \\ w\_{r-1} & & & \end{vmatrix}, \qquad G = \left|L\_i[x^j]\right|\_{i,j=0,\ldots,r-1}. \tag{5}$$

Proof. Since the $L_i$, $i = 0, \dots, r-1$, are linearly independent, the result follows from the general linear interpolation theory.
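As a concrete illustration, the coefficient form of the interpolant in Eq. (5) can be obtained by solving the linear system $L_i[P] = w_i$ in the monomial basis, whose matrix is exactly the one whose determinant is $G$. The sketch below is our own illustration (the function name and the representation of each functional by its action on the monomials are assumptions, not part of the chapter):

```python
import numpy as np

def general_interpolant(functionals, w):
    """Solve the general linear interpolation problem (4) in the monomial
    basis: each entry of `functionals` returns L_i[t^j] for exponent j,
    i.e. row i of the matrix whose determinant is G in Eq. (5)."""
    r = len(w)
    G = np.array([[L(j) for j in range(r)] for L in functionals])
    # coefficients c_0, ..., c_{r-1} of P_{r-1}(t) = sum_j c_j t^j
    return np.linalg.solve(G, np.array(w, dtype=float))
```

For instance, the two-point conditions $L_0[y] = y(-1)$, $L_1[y] = y(1)$ give rows $((-1)^j)_j$ and $(1)_j$; with data $w = (0, 2)$ the routine returns the coefficients of $P_1(t) = 1 + t$.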

Proposition 2 If $y \in C^r(I)$ and $L_i[y](x) = w_i$, $i = 0, \dots, r-1$, $x \in I$, then

$$y(\mathbf{x}) = P\_{r-1}[y](\mathbf{x}) + \int\_{a}^{b} K\_r^x(\mathbf{x}, t) \, y^{(r)}(t) \, dt, \qquad \forall \mathbf{x} \in I \text{ fixed}, \tag{6}$$

with $L_i[y] = L_i[P_{r-1}]$, $i = 0, \dots, r-1$, $P_{r-1}[y](x) = P_{r-1}(x)$, and

$$K\_r^{\mathbf{x}}(\mathbf{x},t) = \frac{1}{(r-1)!} \left[ (\mathbf{x}-t)\_+^{r-1} - P\_{r-1} \left[ (\mathbf{x}-t)\_+^{r-1} \right](\mathbf{x}) \right],\tag{7}$$

where the index $x$ means that $K_r^x(x, t)$ is considered as a function of $x$.

Proof. It follows by observing that $P_{r-1}\big[(x)_+^j\big](t) = (t)_+^j$, $j = 0, \dots, r-1$, and from the Peano kernel Theorem [1].

Theorem 1 With the above notations and under the mentioned hypothesis, problem (2) is equivalent to the Fredholm integral equation

$$y(\mathbf{x}) = P\_{r-1}[y](\mathbf{x}) + \int\_{a}^{b} K\_r^{\mathbf{x}}(\mathbf{x}, t) f\left(t, y(t)\right) dt. \tag{8}$$

Proof. The result follows from the uniqueness of the Peano kernel and from Propositions 1 and 2.

Corollary 1 It results that $L_i[K_r^x] = 0$, $i = 0, \dots, r-1$.

From Theorem 1, general results on the existence and uniqueness of the solution of problem (2) can be obtained by standard techniques [2, 3]. In the following, we will not linger over them, but we will outline the close relationship between interpolation and differential equations. In particular, we will use linear interpolation to determine a class of collocation methods for the numerical solution of problem (2).

## 3. A class of Birkhoff-Lagrange collocation methods

Integral Eq. (8) allows us to determine a very wide class of numerical methods for Eq. (2), which we call methods of collocation for integration.

Let $\{x_i\}_{i=1}^m$ be $m$ distinct points in $[a, b]$, and denote by $l_i(t)$, $i = 1, \dots, m$, the fundamental Lagrange polynomials on the nodes $x_i$, that is

$$l\_i(t) = \frac{\omega\_m(t)}{(t - \mathbf{x}\_i)\omega\_m'(\mathbf{x}\_i)}, \quad \text{where } \omega\_m(t) = \prod\_{k=1}^m (t - \mathbf{x}\_k). \tag{9}$$
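Eq. (9) can be evaluated directly. The following sketch (our own hypothetical helper, not part of the chapter) uses the equivalent product form $l_i(t) = \prod_{k \ne i} (t - x_k)/(x_i - x_k)$ and returns the fundamental Lagrange polynomials as callables:

```python
def lagrange_basis(nodes):
    """Return the fundamental Lagrange polynomials l_i on the given
    nodes (Eq. (9)), each as a callable of t."""
    m = len(nodes)
    def make(i):
        xi = nodes[i]
        others = [x for j, x in enumerate(nodes) if j != i]
        def li(t):
            # product form, equivalent to omega_m(t) / ((t - x_i) omega_m'(x_i))
            val = 1.0
            for xk in others:
                val *= (t - xk) / (xi - xk)
            return val
        return li
    return [make(i) for i in range(m)]
```

By construction $l_i(x_j) = \delta_{ij}$ and $\sum_i l_i \equiv 1$, which is a convenient sanity check.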

Theorem 2 If the solution $y(x)$ of Eq. (8) is in $C^{r+m}(I)$, then

$$y(\mathbf{x}) = P\_{r-1}[y](\mathbf{x}) + \sum\_{i=1}^{m} p\_{r,i,m}(\mathbf{x}) f\left(\mathbf{x}\_i, \mathbf{y}(\mathbf{x}\_i)\right) + T\_{r,m}(y, \mathbf{x}), \tag{10}$$

where

$$p\_{r,i,m}(\mathbf{x}) = \int\_{a}^{b} K\_r^x(\mathbf{x}, t) l\_i(t) \, dt, \quad i = 1, \ldots, m,\tag{11}$$

and the remainder term $T_{r,m}(y, x)$ is given by:


$$T\_{r,m}(y,\mathbf{x}) = \frac{1}{m!} \int\_{a}^{b} K\_r^x(\mathbf{x}, t) \omega\_m(t) y^{(r+m)}(\xi\_\mathbf{x}) \, dt,\tag{12}$$

where $\xi_x$ is a suitable point of the smallest interval containing $x$ and all the $x_i$, $i = 1, \dots, m$.

Proof. From Lagrange interpolation

$$y^{(r)}(\mathbf{x}) = \sum\_{i=1}^{m} l\_i(\mathbf{x}) y^{(r)}(\mathbf{x}\_i) + \overline{R}\_m(y, \mathbf{x}) \tag{13}$$

where


$$\overline{R}\_m(y,\mathbf{x}) = \frac{1}{m!} \omega\_m(\mathbf{x})\, y^{(r+m)}(\xi\_\mathbf{x}) \tag{14}$$

is the remainder term. From (2), $f(x, \mathbf{y}(x)) = \sum_{i=1}^m l_i(x)\, y^{(r)}(x_i) + \overline{R}_m(y, x)$. Then, from Theorem 1, inserting Eq. (13) into Eq. (8), we obtain Eq. (10).

Theorem 2 suggests considering the implicitly defined polynomial

$$y\_{r,m}(\mathbf{x}) = P\_{r-1}[y\_{r,m}](\mathbf{x}) + \sum\_{i=1}^{m} p\_{r,i,m}(\mathbf{x}) f\left(\mathbf{x}\_i, y\_{r,m}(\mathbf{x}\_i)\right). \tag{15}$$

For polynomial (15), the following theorem holds.

Theorem 3 (The main Theorem). Polynomial (15), of degree $r + m - 1$, satisfies the relations

$$\begin{aligned} L\_i[y\_{r,m}](\mathbf{x}) &= w\_i, \qquad i = 0, \ldots, r-1, \ x \in I, \ w\_i \in \mathbb{R} \\ y\_{r,m}^{(r)}(\mathbf{x}\_j) &= f\left(\mathbf{x}\_j, y\_{r,m}(\mathbf{x}\_j)\right) \qquad j = 1, \ldots, m, \end{aligned} \tag{16}$$

that is, $y_{r,m}(x)$ is a collocation polynomial for Eq. (2) at the nodes $x_j$, $j = 1, \dots, m$.

Proof. From (15), Corollary 1 and the linearity of the operators $L_i$, we get $L_i[y_{r,m}](x) = w_i$, $i = 0, \dots, r-1$. By Theorems 1 and 2, we obtain $y^{(r)}(x_i) = y_{r,m}^{(r)}(x_i)$, and from Eq. (11), $p_{r,i,m}^{(r)}(x) = l_i(x)$. Hence, relations (16) follow.

Remark 1 (Hermite-Birkhoff-type interpolation). Theorem 3 is equivalent to the general Hermite-Birkhoff interpolation problem [12]: given $w_i \in \mathbb{R}$, $i = 0, \dots, r-1$, and $\alpha_j \in \mathbb{R}$, $j = 1, \dots, m$, determine, if it exists, the polynomial $Q(x) \in P_{m+r-1}$ such that

$$\begin{array}{ll} L\_i[Q] = w\_i, & i = 0, \ldots, r - 1 \\ Q^{(r)}(x\_j) = \alpha\_j, & j = 1, \ldots, m, \ x\_j \in I. \end{array} \tag{17}$$

Remark 2 In the case of IVPs, for each method (15), we can derive the corresponding implicit Runge-Kutta method. For example, for $r = 2$, let $b = x_0 + h$ and $x_i = x_0 + c_i h$ with $c_i \in [0, 1]$. With the change of coordinates $x = x_0 + th$, $t \in [0, 1]$, we can write

$$p\_{r,i,m}(\mathbf{x}) = p\_{r,i,m}(x\_0 + th) = h^2 \int\_0^t \int\_0^{\tau} l\_i(s)\, ds\, d\tau, \qquad l\_i(s) = \prod\_{\substack{k=1\\k \neq i}}^m \frac{s - c\_k}{c\_i - c\_k}. \tag{18}$$

Putting $f(x_i, \mathbf{y}_{r,m}(x_i)) = y''_{r,m}(x_i) \equiv K_i$ and $a_{i,j} = p_{r,j}(x_i) = h^2 \int_0^{c_i} (c_i - s)\, l_j(s)\, ds$, we have

$$K\_i = f\left(x\_0 + c\_i h,\; y\_0 + y\_0' c\_i h + \sum\_{j=1}^m a\_{i,j} K\_j\right) \tag{19}$$

and

$$\begin{cases} y\_1(t) \equiv y\_{r,m}(x\_0 + th) = y\_0 + y\_0' t h + h^2 \sum\_{i=1}^m p\_{r,i,m}(x\_0 + th) K\_i \\\\ y\_1'(t) \equiv y\_{r,m}'(x\_0 + th) = y\_0' h + h^2 \sum\_{i=1}^m p\_{r,i,m}'(x\_0 + th) K\_i. \end{cases} \tag{20}$$

Eqs. (19) and (20) are the well-known continuous Runge-Kutta method for second-order differential equations. In particular, for $t = 1$, we obtain the implicit Runge-Kutta-Nyström method.
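As a sketch of Remark 2, the Python code below builds the coefficients $a_{i,j}$ from the Cauchy form of Eq. (18), $\int_0^{c}\int_0^{\tau} l_j = \int_0^{c}(c-s)l_j(s)\,ds$, and performs one implicit Runge-Kutta-Nyström step, i.e. Eqs. (19) and (20) at $t = 1$. The function name, the free choice of nodes $c_i$, and the use of simple fixed-point iteration for the stage system are our assumptions:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def rkn_step(f, x0, y0, dy0, h, c):
    """One implicit Runge-Kutta-Nystrom step for y'' = f(x, y), built from
    Eqs. (18)-(20) at t = 1 (sketch; node vector c in [0, 1] is free)."""
    m = len(c)
    basis = []
    for j in range(m):
        cj = np.array([1.0])                     # Lagrange polynomial l_j on c
        for k in range(m):
            if k != j:
                cj = P.polymul(cj, np.array([-c[k], 1.0]) / (c[j] - c[k]))
        basis.append(cj)
    # a_{i,j} = h^2 * int_0^{c_i} (c_i - s) l_j(s) ds  (double integral of (18))
    A = np.zeros((m, m))
    for j, cj in enumerate(basis):
        A[:, j] = h * h * P.polyval(np.asarray(c), P.polyint(cj, m=2))
    # weights of (20) at t = 1
    b = np.array([P.polyval(1.0, P.polyint(cj, m=2)) for cj in basis])
    bbar = np.array([P.polyval(1.0, P.polyint(cj)) for cj in basis])
    # solve the stage equations (19) by fixed-point iteration
    K = np.zeros(m)
    for _ in range(100):
        K_new = np.array([f(x0 + c[i] * h, y0 + c[i] * h * dy0 + A[i] @ K)
                          for i in range(m)])
        if np.max(np.abs(K_new - K)) < 1e-13:
            break
        K = K_new
    y1 = y0 + h * dy0 + h * h * (b @ K)
    dy1 = dy0 + h * (bbar @ K)
    return y1, dy1
```

Since $\sum_i l_i \equiv 1$, the step reproduces quadratics exactly: for $y'' = 2$, $y(0)=1$, $y'(0)=3$, one step of size $h$ returns $y_1 = 1 + 3h + h^2$.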

#### 3.1. A-priori estimation of error

In what follows, we consider the norm

$$\|f\| = \max\_{a \le t \le b} \sum\_{k=0}^{q} |f^{(k)}(t)|, \qquad \forall f \in C^{(q)}(I). \tag{21}$$

Moreover, we define

$$Q\_m = \sum\_{i=1}^m \| p\_{r,i,m} \|, \qquad F(\mathbf{x}) = \int\_a^b K\_r^x(\mathbf{x}, t)\, dt, \qquad H = \max\_{a \le t \le b} |\overline{R}\_m(y, t)|, \tag{22}$$

where $\overline{R}_m(y, t)$ is defined as in (14).

Theorem 4 With the previous notations, if $\Lambda Q_m < 1$, then

$$\|y - y\_{r,m}\| \le \frac{H\, \|F\|}{1 - \Lambda Q\_m}. \tag{23}$$

Proof. Differentiating Eqs. (10) and (15) $s$ times, $s = 0, \dots, q$, we get

$$y^{(s)}(\mathbf{x}) - y^{(s)}\_{r,m}(\mathbf{x}) = \sum\_{i=1}^{m} p^{(s)}\_{r,i,m}(\mathbf{x}) \left[ f\big(x\_i, \mathbf{y}(x\_i)\big) - f\big(x\_i, \mathbf{y}\_{r,m}(x\_i)\big) \right] + \frac{\partial^{s}}{\partial \mathbf{x}^{s}} \int\_{a}^{b} K\_r^{x}(\mathbf{x}, t)\, \overline{R}\_m(y, t)\, dt. \tag{24}$$

It follows that


$$\begin{split} |y^{(s)}(\mathbf{x}) - y^{(s)}\_{r,m}(\mathbf{x})| \quad \le & \sum\_{i=1}^{m} |p^{(s)}\_{r,i,m}(\mathbf{x})| \Lambda \sum\_{k=0}^{q} |y^{(k)}(\mathbf{x}\_{i}) - y^{(k)}\_{r,m}(\mathbf{x}\_{i})| + H \left| F^{(s)}(\mathbf{x}) \right| \\ \qquad \le & \Lambda \| y - y\_{r,m} \| \sum\_{i=1}^{m} |p^{(s)}\_{r,i,m}(\mathbf{x})| + H |F^{(s)}(\mathbf{x})|. \end{split} \tag{25}$$

From this, we obtain inequality (23).

#### 4. Algorithms and implementation

To calculate the approximate solution of problem (2) by the polynomial $y_{r,m}(x)$ at $x \in I$, we need the values $y^{(s)}_{r,m}(x_k)$, $k = 1, \dots, m$, $s = 0, \dots, q$. In order to get these values, we propose the following algorithm:


$$y\_k^{(s)} = P\_{r-1}^{(s)}[y\_k](\mathbf{x}\_k) + \sum\_{i=1}^{m} p\_{r,i}^{(s)}(\mathbf{x}\_k) f(\mathbf{x}\_i, y\_i),\tag{26}$$

$k = 1, \dots, m$, $s = 0, \dots, q$, where $\mathbf{y}_i = \big(y_i, y_i', \dots, y_i^{(q)}\big)$.

System (26) can be written in the form

$$Y - A\, F(Y) = \mathbb{C} \tag{27}$$

where


$$A = \begin{pmatrix} A\_0 & 0 & \cdots & 0 \\ 0 & \ddots & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & A\_q \end{pmatrix}\_{m(q+1)\times m(q+1)} \tag{28}$$

with

$$A\_s = \begin{pmatrix} \tilde{a}\_{1,1}^{(s)} & \cdots & \tilde{a}\_{1,m}^{(s)} \\ \vdots & & \vdots \\ \tilde{a}\_{m,1}^{(s)} & \cdots & \tilde{a}\_{m,m}^{(s)} \end{pmatrix}\_{m \times m} \qquad \tilde{a}\_{i,j}^{(s)} = p\_{r,j}^{(s)}(x\_i), \quad s = 0, \ldots, q, \tag{29}$$

$$Y = \left(\overline{Y}\_0, \dots, \overline{Y}\_q\right)\_{m(q+1)\times 1}^T, \qquad \overline{Y}\_s = \left(y\_1^{(s)}, \dots, y\_m^{(s)}\right),\tag{30}$$

$$F(Y) = \underbrace{(F\_m, \dots, F\_m)}\_{q+1}^T, \qquad F\_m = (f\_1, \dots, f\_m)^T, \qquad f\_i = f(x\_i, \mathbf{y}\_i), \tag{31}$$

$$B\_s = \left(P\_{r-1}^{(s)}[y\_1](\mathbf{x}\_1), \dots, P\_{r-1}^{(s)}[y\_m](\mathbf{x}\_m)\right), \qquad \mathbb{C} = (B\_0, \dots, B\_q)^T\_{m(q+1)\times 1}.\tag{32}$$

From Eq. (27), we get

$$Y = AF(Y) + \mathbb{C}\,,\tag{33}$$

or, putting GðYÞ ¼ AFðYÞ þ C,

$$Y = \mathcal{G}(Y) \,. \tag{34}$$

For the existence and uniqueness of the solution of system (34), we can prove, by a standard technique, the following theorem.

Theorem 5 If $T = \Lambda \|A\|_{\infty} < 1$, system (34) has a unique solution, which can be calculated by the iterative method

$$\left(Y\_m\right)\_{\nu+1} = \mathcal{G}\Big(\left(Y\_m\right)\_{\nu}\Big), \qquad \nu \ge 0 \tag{35}$$

with a fixed $(Y_m)_0 \in \mathbb{R}^{m(q+1)}$ and $\mathcal{G}(Y_m) = A F(Y_m) + \mathbb{C}$.

Moreover, if Y is the exact solution,

$$\|\left(Y\_m\right)\_{\nu+1} - Y\|\_{\infty} \le \frac{T^{\nu}}{1-T} \|\left(Y\_m\right)\_1 - \left(Y\_m\right)\_0\|\_{\infty}. \tag{36}$$

Remark 3 If $f$ is linear, then system (27) is a linear system which can be solved by a more suitable method.

Remark 4 System (27) can be considered as a discrete method for the numerical solution of (2).
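The iteration (35) of Theorem 5 can be sketched in a few lines (a minimal illustration of ours; the function name is hypothetical, `F` maps $Y$ to the vector $F(Y)$, and convergence is only guaranteed when $\Lambda \|A\|_\infty < 1$):

```python
import numpy as np

def solve_collocation_system(A, C, F, tol=1e-12, max_iter=500):
    """Fixed-point iteration (35) for Y = A F(Y) + C (Eqs. (33)-(34))."""
    Y = C.copy()                                  # take (Y_m)_0 = C
    for _ in range(max_iter):
        Y_new = A @ F(Y) + C
        if np.max(np.abs(Y_new - Y)) < tol:
            return Y_new
        Y = Y_new
    return Y
```

For a toy linear contraction, $A = \tfrac{1}{2}I$, $F(Y) = Y$, $C = (1,1)^T$, the iteration converges to the unique fixed point $Y = 2C$, as Theorem 5 predicts with $T = \tfrac{1}{2}$.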

Remark 5 Method (15) can generate the polynomial sequence

$$(y\_{r,m}(\mathbf{x}))\_{\mathbf{v}} = P\_{r-1}[y\_{r,m}](\mathbf{x}) + \sum\_{i=1}^{m} p\_{r,i,m}(\mathbf{x}) f(\mathbf{x}\_i, (y\_{r,m}(\mathbf{x}\_i))\_{\mathbf{v}-1}), \quad (y\_{r,m})\_0 = P\_{r-1}[y](\mathbf{x}) \tag{37}$$

which is equivalent to the discretization of Picard method for differential equations.

#### 4.1. Numerical computation of the entries of matrix A

To calculate the elements $\tilde{a}^{(s)}_{i,k}$ of the matrix $A$ in Eq. (27), we have to compute the integrals

$$p\_{r,k}^{(s)}(\mathbf{x}) = \frac{d^s}{d\mathbf{x}^s} \int\_a^b K\_r^x(\mathbf{x}, t)\, l\_k(t)\, dt \tag{38}$$

for $x = x_i$. Integrating by parts, it remains to solve the problem of the computation of

$$F\_{i1}(\mathbf{x}\_{j}) = \int\_{a}^{\mathbf{x}\_{j}} l\_{i}(t)dt, \quad F\_{ik}(\mathbf{x}\_{j}) = \int\_{a}^{\mathbf{x}\_{j}} F\_{i,k-1}(t)dt \qquad k = 2, \ldots, n \tag{39}$$

$$M\_{i1}(\mathbf{x}\_j) = \int\_{x\_j}^{b} l\_i(t)\, dt, \quad M\_{ik}(\mathbf{x}\_j) = \int\_{x\_j}^{b} M\_{i,k-1}(t)\, dt \qquad k = 2, \ldots, n \tag{40}$$

$i, j = 1, \dots, m$. To this aim, it suffices to compute


$$\int\_{c}^{x\_j = t\_k} \int\_{c}^{t\_{k-1}} \cdots \int\_{c}^{t\_1} r\_{m,i}(t)\, dt\, dt\_1 \cdots dt\_{k-1} \tag{41}$$

where $c = a$ or $c = b$, $r_{0,0}(t) = 1$,


$$r\_{m,i}(t) = (t - x\_1) \cdots (t - x\_{i-1})(t - x\_{i+1}) \cdots (t - x\_m), \qquad i = 1, 2, \ldots, m. \tag{42}$$

For the computation of the integral (41), we use the recursive algorithm introduced in Ref. [13]: for each $i = 1, \dots, m$, consider the new points $z_j^{(i)} = x_j$ if $j < i$, and $z_j^{(i)} = x_{j+1}$ if $j \ge i$. Moreover, define $g^{(i)}_{0,1,c}(x) = x - c$ and, for $s = 1, \dots, m-1$,

$$g\_{s,j,c}^{(i)}(\mathbf{x}) = \int\_{c}^{\mathbf{x}=t\_j} \int\_{c}^{t\_{j-1}} \cdots \int\_{c}^{t\_1} \left(t - z\_1^{(i)}\right) \left(t - z\_2^{(i)}\right) \cdots \left(t - z\_s^{(i)}\right)\, dt\, dt\_1 \cdots dt\_{j-1}. \tag{43}$$

We can easily compute $g^{(i)}_{0,j,c}(x) = \frac{(x-c)^j}{j!}$. For the computation of Eq. (43), the following recurrence formula [13] holds

$$g\_{s,j,c}^{(i)}(\mathbf{x}) = \left(\mathbf{x} - z\_s^{(i)}\right) g\_{s-1,j,c}^{(i)}(\mathbf{x}) - j\, g\_{s-1,j+1,c}^{(i)}(\mathbf{x}). \tag{44}$$

Thus, if $W_i = \prod_{k=1,\, k \ne i}^m (x_i - x_k)$, then

$$F\_{ik}(\mathbf{x}\_{j}) = \frac{\mathbf{g}\_{m-1,k,a}^{(i)}(\mathbf{x}\_{j})}{W\_{i}}, \qquad M\_{ik}(\mathbf{x}\_{j}) = (-1)^{k} \frac{\mathbf{g}\_{m-1,k,b}^{(i)}(\mathbf{x}\_{j})}{W\_{i}}.\tag{45}$$
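The recurrence (44) and formula (45) translate almost literally into code. The sketch below is our own illustration (function names are hypothetical):

```python
from math import factorial

def g(s, j, c, z, x):
    """Iterated integrals g^{(i)}_{s,j,c}(x) of Eq. (43), computed with the
    recurrence (44); z holds the shifted nodes z_1^{(i)}, ..., z_{m-1}^{(i)}."""
    if s == 0:
        return (x - c) ** j / factorial(j)        # g_{0,j,c}(x) = (x-c)^j / j!
    return (x - z[s - 1]) * g(s - 1, j, c, z, x) - j * g(s - 1, j + 1, c, z, x)

def F_ik(i, k, xj, nodes, a):
    """F_{ik}(x_j) of Eq. (39) via Eq. (45)."""
    m = len(nodes)
    z = [nodes[u] for u in range(m) if u != i]    # the points z^{(i)} of Ref. [13]
    W = 1.0                                       # W_i = prod_{k != i} (x_i - x_k)
    for u in range(m):
        if u != i:
            W *= nodes[i] - nodes[u]
    return g(m - 1, k, a, z, xj) / W
```

For example, with nodes $\{0, 1, 2\}$ one has $l_1(t) = (t-1)(t-2)/2$ and $F_{11}(x_2) = \int_0^1 l_1(t)\,dt = 5/12$, which the routine reproduces.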

Remark 6 An alternative approach for the exact computation of integrals (39) and (40) is to use a quadrature formula with a suitable degree of precision.
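As a minimal sketch of Remark 6 (our illustration; the helper name is hypothetical): since $l_i$ has degree $m-1$, a Gauss-Legendre rule with $\lfloor m/2 \rfloor + 1$ points computes $F_{i1}(x_j)$ exactly:

```python
import numpy as np

def F_i1_quad(i, xj, nodes, a):
    """Compute F_{i1}(x_j) = int_a^{x_j} l_i(t) dt by Gauss-Legendre
    quadrature with enough points to be exact for degree m - 1."""
    m = len(nodes)
    s, w = np.polynomial.legendre.leggauss(m // 2 + 1)
    t = 0.5 * (xj - a) * s + 0.5 * (xj + a)       # map [-1, 1] -> [a, x_j]
    li = np.ones_like(t)                          # evaluate l_i at the quad points
    for k in range(m):
        if k != i:
            li *= (t - nodes[k]) / (nodes[i] - nodes[k])
    return 0.5 * (xj - a) * np.dot(w, li)
```

The same can be done for the iterated integrals (39) and (40) by raising the number of quadrature points with $k$.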

#### 4.2. Outline of the method

Summarizing, the proposed method consists of the following steps: choose the $m$ collocation nodes $x_i$; compute the entries of the matrix $A$ by means of Eqs. (38)–(45); solve system (27) by the iterative method (35); finally, evaluate the collocation polynomial (15) and its derivatives at the points of interest.
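For the simplest case $r = 1$, $q = 0$, i.e. $y' = f(x, y)$, $y(a) = y_0$, where $P_0[y](x) = y_0$ and $p_{1,i,m}(x) = \int_a^x l_i(t)\,dt$ (cf. Eqs. (48), (49) below), the whole procedure fits in a short self-contained Python sketch (our own illustration; the function name and the choice of first-kind Chebyshev nodes are assumptions):

```python
import numpy as np
from numpy.polynomial import polynomial as P

def collocate_ivp(f, a, b, y0, m=10):
    """Collocation-for-integration sketch for r = 1: y' = f(x, y), y(a) = y0.
    Builds a_{k,i} = p_{1,i,m}(x_k) = int_a^{x_k} l_i(t) dt and solves
    system (26)-(27) by the fixed-point iteration (35)."""
    # Chebyshev (first-kind) nodes mapped from [-1, 1] to [a, b]
    k = np.arange(1, m + 1)
    x = a + 0.5 * (b - a) * (1.0 - np.cos((2 * k - 1) * np.pi / (2 * m)))
    A = np.zeros((m, m))
    for i in range(m):
        # coefficients of the fundamental Lagrange polynomial l_i
        ci = np.array([1.0])
        for j in range(m):
            if j != i:
                ci = P.polymul(ci, np.array([-x[j], 1.0]) / (x[i] - x[j]))
        Li = P.polyint(ci)                        # antiderivative of l_i
        A[:, i] = P.polyval(x, Li) - P.polyval(a, Li)
    # fixed-point iteration on y_k = y0 + sum_i a_{k,i} f(x_i, y_i)
    y = np.full(m, float(y0))
    for _ in range(200):
        y_new = y0 + A @ np.array([f(xi, yi) for xi, yi in zip(x, y)])
        if np.max(np.abs(y_new - y)) < 1e-14:
            break
        y = y_new
    return x, y
```

For $y' = y$, $y(0) = 1$ on $[0, 1]$ with $m = 10$ nodes, the computed nodal values agree with $e^{x_i}$ to high accuracy, reflecting the spectral accuracy discussed in Section 1.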


#### 5. Some particular cases

Now we consider some special cases of problem (2), and for each case, we determine $P_{r-1}[y](x)$ and $K_r^x(x, t)$. We also show how the proposed class of methods includes the methods presented in previous works [12–24].

#### 5.1. Initial value problems

In the case of initial value problems, in Refs. [13, 17, 25], problem

$$y^{(r)}(\mathbf{x}) = f(\mathbf{x}, y(\mathbf{x})) \tag{46}$$

has been considered, while in Ref. [23], the authors introduced the more general equation

$$y^{(r)}(\mathbf{x}) = f\left(\mathbf{x}, y(\mathbf{x}), y'(\mathbf{x}), \dots, y^{(q)}(\mathbf{x})\right), \qquad q \le r - 1. \tag{47}$$

In both cases

$$P\_{r-1}[y](\mathbf{x}) = \sum\_{i=0}^{r-1} \frac{(\mathbf{x} - a)^i}{i!}\, y^{(i)}(a) \tag{48}$$

and

$$K\_r^{\mathbf{x}}(\mathbf{x}, t) = \frac{1}{(r-1)!} (\mathbf{x} - t)\_+^{r-1}. \tag{49}$$

If $\{x_i\}_{i=1}^m$ are the zeros of the Chebyshev polynomials of the first or second kind, the explicit expression for the polynomials $p_{r,i,m}(x)$ can be obtained [13, 17, 25] for some values of $r$.

In particular, for $r = 1$ and $r = 2$, in the case of the zeros of Chebyshev polynomials of the first kind, we get

$$\begin{split} p\_{1,i,m}(\mathbf{x}) &= \frac{1}{m} \sum\_{k=2}^{m-1} \left\{ \left[ \frac{T\_{k+1}(\mathbf{x})}{k+1} - \frac{T\_{k-1}(\mathbf{x})}{k-1} + 2 \frac{(-1)^{k-1}}{k^2 - 1} \right] \cos\left(\frac{2i-1}{2m}k\pi\right) \right\} \\ &\quad + \frac{1}{m} \left[ \mathbf{x} + \mathbf{1} + \cos\left(\frac{2i-1}{2m}\pi\right) (\mathbf{x}^2 - \mathbf{1}) \right] \end{split} \tag{50}$$

where $T_{k-1}(x)$ and $T_{k+1}(x)$ are the Chebyshev polynomials of the first kind of degree $k-1$ and $k+1$, respectively, and

$$\begin{aligned} p\_{2,i,m}(\mathbf{x}) = \frac{1}{m} \Bigg\{ & \frac{(\mathbf{x} + 1)^2}{2} + \frac{\mathbf{x}^3 - 3\mathbf{x} - 2}{3} \left( \frac{\cos \frac{\pi(2i - 1)}{2m} + \mathbf{x} \cos \pi(2i - 1)}{m} \right) \\ & + \frac{1}{2} \sum\_{k=3}^{m-1} \cos \frac{k \pi(2i - 1)}{2m} \left[ \frac{T\_{k+2}(\mathbf{x})}{(k+1)(k+2)} - 2 \frac{T\_k(\mathbf{x})}{k^2 - 1} + \frac{T\_{k-2}(\mathbf{x})}{(k-1)(k-2)} \right. \\ & \left. \quad - \frac{12k(-1)^k}{k(k^2 - 1)(k^2 - 4)} - \frac{4(-1)^k}{k^2 - 1}(\mathbf{x} + 1) \right] \Bigg\}. \end{aligned} \tag{51}$$

In the case of zeros of Chebyshev polynomials of second kind


$$p\_{1,i,m}(\mathbf{x}) = \frac{2}{m+1} \sin \frac{\pi i}{m+1} \sum\_{k=0}^{m-1} \sin \frac{(k+1)\pi i}{m+1} \frac{1}{k+1} \left[ T\_{k+1}(\mathbf{x}) + (-1)^k \right] \tag{52}$$

and


$$\begin{aligned} p\_{2,i,m}(\mathbf{x}) &= \frac{1}{m+1} \sin \frac{\pi i}{m+1} \left\{ \sin \frac{\pi i}{m+1} (\mathbf{x} + 1)^2 \\ + \sum\_{k=2}^m \frac{1}{k} \sin \frac{k \pi i}{m+1} \left[ \frac{T\_{k+1}(\mathbf{x})}{k+1} - \frac{T\_{k-1}(\mathbf{x})}{k-1} - 2 \left( \mathbf{x} + \frac{k^2}{k^2 - 1} \right) (-1)^k \right] \right\} \end{aligned} \tag{53}$$

In Refs. [13, 25], the authors presented the corresponding implicit Runge-Kutta methods too.

In Ref. [26], Coleman and Booth also used a polynomial interpolant of degree n for y″, but they started from an identity different from Eq. (8) and derived a collocation method for which the nodes $\{\mathbf{x}\_i\}\_{i=1}^m$ are the zeros of Chebyshev polynomials of second kind.
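As a consistency check, the closed form (52) can be compared against the defining integral $p\_{1,i,m}(\mathbf{x}) = \int\_{-1}^{\mathbf{x}} l\_i(t)\,dt$, with $l\_i$ the Lagrange basis at the second-kind Chebyshev zeros. A minimal Python sketch (the values of m, i and the evaluation point are arbitrary choices, not from the chapter):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

m, i = 8, 3                                              # arbitrary: number of nodes, basis index
nodes = np.cos(np.arange(1, m + 1) * np.pi / (m + 1))    # zeros of U_m (second kind)

def p_integral(x):
    # defining integral: p_{1,i,m}(x) = \int_{-1}^x l_i(t) dt
    others = np.delete(nodes, i - 1)
    li = np.poly1d(np.poly(others) / np.prod(nodes[i - 1] - others))
    P = li.integ()                                       # antiderivative of the Lagrange basis
    return P(x) - P(-1.0)

def p_formula(x):
    # closed form (52)
    s = 0.0
    for k in range(m):
        s += np.sin((k + 1) * np.pi * i / (m + 1)) / (k + 1) \
             * (C.Chebyshev.basis(k + 1)(x) + (-1)**k)
    return 2.0 / (m + 1) * np.sin(np.pi * i / (m + 1)) * s

print(p_integral(0.37), p_formula(0.37))                 # the two values should agree
```

Both sides are polynomials of degree m evaluated exactly, so the agreement is at machine-precision level.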

#### 5.2. Boundary value problems

#### 5.2.1. Case r = 2n

For n = 1, for the exact solution y(x) of the second-order BVP

$$y''(\mathbf{x}) = f(\mathbf{x}, y(\mathbf{x}), y'(\mathbf{x})), \quad y(-1) = y\_0, \ y(1) = y\_1 \tag{54}$$

$\mathbf{x} \in [-1, 1]$, it is known that

$$y(\mathbf{x}) = \frac{y\_1 + y\_0}{2} + \mathbf{x}\frac{y\_1 - y\_0}{2} + \int\_{-1}^{1} K\_2^{\mathbf{x}}(\mathbf{x}, t) f(t, y(t), y'(t))\, dt\tag{55}$$

where

$$K\_2^{\mathbf{x}}(\mathbf{x}, t) = \begin{cases} \frac{(t+1)(\mathbf{x}-1)}{2} & t \le \mathbf{x} \\\\ \frac{(\mathbf{x}+1)(t-1)}{2} & \mathbf{x} < t \end{cases} \tag{56}$$

By applying method (15), we get [16]

$$y\_{2,m}(\mathbf{x}) = \frac{y\_1 + y\_0}{2} + \mathbf{x}\frac{y\_1 - y\_0}{2} + \sum\_{i=1}^m p\_{r,i,m}(\mathbf{x}) f\left(\mathbf{x}\_i, y(\mathbf{x}\_i), y'(\mathbf{x}\_i)\right) \tag{57}$$

with $p\_{r,i,m}(\mathbf{x}) = \int\_{-1}^{1} K\_2^{\mathbf{x}}(\mathbf{x}, t)\, l\_i(t)\, dt$. If $\mathbf{x}\_i = \cos \frac{\pi i}{m+1}$, $i = 1, \ldots, m$, we obtain explicitly the expression of $p\_{r,i,m}(\mathbf{x})$ [18]

$$p\_{r,i,m}(\mathbf{x}) = \frac{1}{m+1} \sin \frac{\pi i}{m+1} \left[ \sum\_{k=2}^{m} \frac{G\_k(\mathbf{x})}{k} \sin \frac{k \pi i}{m+1} + (\mathbf{x}^2 - 1) \sin \frac{\pi i}{m+1} \right] \tag{58}$$

where

$$G\_k(\mathbf{x}) = \frac{T\_{k+1}(\mathbf{x})}{k+1} - \frac{T\_{k-1}(\mathbf{x})}{k-1} + \begin{cases} \frac{2\mathbf{x}}{k^2 - 1} & \text{even } k\\ \frac{2}{k^2 - 1} & \text{odd } k. \end{cases} \tag{59}$$
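Representation (55)-(56) can also be exercised numerically: when the right-hand side does not depend on y, Eq. (55) gives the solution directly, with the integral split at t = x where the kernel changes branch. A short sketch (the test problem is our own choice, not from the chapter):

```python
import numpy as np

# Solve y'' = f(x), y(-1) = y0, y(1) = y1 through the Green's-function formula (55)-(56).
# Chosen test: f = -(pi/2)^2 sin(pi x / 2), whose exact solution is y = sin(pi x / 2).
f = lambda t: -(np.pi / 2)**2 * np.sin(np.pi * t / 2)
y0, y1 = -1.0, 1.0
g, w = np.polynomial.legendre.leggauss(20)     # Gauss-Legendre rule on [-1, 1]

def y_green(x):
    tl = (x + 1) / 2 * g + (x - 1) / 2         # quadrature nodes mapped to [-1, x]
    left = (x + 1) / 2 * np.sum(w * (tl + 1) * (x - 1) / 2 * f(tl))
    tr = (1 - x) / 2 * g + (1 + x) / 2         # quadrature nodes mapped to [x, 1]
    right = (1 - x) / 2 * np.sum(w * (x + 1) * (tr - 1) / 2 * f(tr))
    return (y1 + y0) / 2 + x * (y1 - y0) / 2 + left + right

print(y_green(0.3), np.sin(np.pi * 0.3 / 2))   # the two values should agree
```

Splitting the integral at t = x keeps each quadrature on a smooth integrand, so the Gauss rule converges rapidly.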

The same method has been presented in Ref. [24], where stability has also been studied. Now assume [a, b] = [0, 1] and r > 2. Several types of boundary conditions can be considered.

• Hermite boundary conditions [22]:

$$y^{(h)}(0) = \alpha\_h, \quad y^{(h)}(1) = \beta\_h, \qquad h = 0, \ldots, n-1\tag{60}$$

with α\_h, β\_h, h = 0, …, n−1 real constants.

In this case, P\_{r−1} is the Hermite polynomial of degree 2n−1

$$P\_{2n-1}[y](\mathbf{x}) = \sum\_{i=0}^{n-1} (y^{(i)}(0)H\_{i1}(\mathbf{x}) + y^{(i)}(1)H\_{i2}(\mathbf{x})) \tag{61}$$

with

$$\begin{aligned} H\_{i1}(\mathbf{x}) &= \frac{\mathbf{x}^i (1 - \mathbf{x})^n}{i!} \sum\_{s=0}^{n-i-1} \binom{n+s-1}{n-1} \mathbf{x}^s \\ H\_{i2}(\mathbf{x}) &= \frac{\mathbf{x}^n (1 - \mathbf{x})^i}{i!} \sum\_{s=0}^{n-i-1} \binom{n+s-1}{n-1} (1 - \mathbf{x})^s. \end{aligned} \tag{62}$$
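For n = 2, the basis (62) reduces to the classical cubic Hermite polynomials, and the defining interpolation property $H\_{i1}^{(j)}(0) = \delta\_{ij}$, $H\_{i1}^{(j)}(1) = 0$ (and symmetrically for $H\_{i2}$) can be checked numerically. A small sketch (the choice n = 2 is illustrative):

```python
import numpy as np
from math import comb, factorial

n = 2                                          # illustrative choice

def H(i, which, x):
    # two-point Hermite basis from Eq. (62); which=1 -> data at 0, which=2 -> data at 1
    s_sum = sum(comb(n + s - 1, n - 1) * ((x if which == 1 else 1 - x)**s)
                for s in range(n - i))
    core = x**i * (1 - x)**n if which == 1 else x**n * (1 - x)**i
    return core / factorial(i) * s_sum

h = 1e-5
def dj(f, x, j):
    # j-th derivative by central differences (only j = 0, 1 are needed for n = 2)
    return f(x) if j == 0 else (f(x + h) - f(x - h)) / (2 * h)

print(H(0, 1, 0.0), dj(lambda t: H(1, 1, t), 0.0, 1))   # both values ≈ 1.0
```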

The kernel is

$$K\_{2n}^{\mathbf{x}}(\mathbf{x},t) = \begin{cases} \sum\_{i=0}^{n-1} \frac{(-t)^{2n-i-1}}{(2n-i-1)!} H\_{i1}(\mathbf{x}) & 0 \le t \le \mathbf{x} \\\ -\sum\_{i=0}^{n-1} \frac{(1-t)^{2n-i-1}}{(2n-i-1)!} H\_{i2}(\mathbf{x}) & \mathbf{x} \le t \le 1 \end{cases} \tag{63}$$


• Lidstone boundary conditions:

$$y^{(2h)}(0) = \alpha\_h, \quad y^{(2h)}(1) = \beta\_h, \qquad h = 0, \ldots, n-1\tag{64}$$

where α\_h, β\_h, h = 0, …, n−1 are real constants.

In this case, P\_{r−1} is the Lidstone interpolating polynomial [3] of degree 2n−1

$$P\_{2n-1}[y](\mathbf{x}) = \sum\_{k=0}^{n-1} \left[ y^{(2k)}(\mathbf{0})\Lambda\_k(\mathbf{1}-\mathbf{x}) + y^{(2k)}(\mathbf{1})\Lambda\_k(\mathbf{x}) \right] \tag{65}$$

where $\Lambda\_k(\mathbf{x})$ are the Lidstone polynomials of degree 2k + 1 [3], and the function $K\_{2n}^{\mathbf{x}}(\mathbf{x}, t)$ is

$$K\_{2n}^{\mathbf{x}}(\mathbf{x},t) = \begin{cases} \sum\_{k=0}^{n-1} \frac{t^{2n-2k-1}}{(2n-2k-1)!} \Lambda\_k(1-\mathbf{x}) & t \le \mathbf{x} \\\ \sum\_{k=0}^{n-1} \frac{(1-t)^{2n-2k-1}}{(2n-2k-1)!} \Lambda\_k(\mathbf{x}) & \mathbf{x} \le t. \end{cases} \tag{66}$$

#### 5.2.2. Case r = 2n + 1


If we consider the following boundary conditions

$$y(0) = \alpha\_0, \quad y^{(2h-1)}(0) = \alpha\_h, \quad y^{(2h-1)}(1) = \beta\_h, \qquad h = 1, \ldots, n\tag{67}$$

with α\_0, α\_h, β\_h, h = 1, …, n real constants, then P\_{r−1} is the complementary Lidstone interpolating polynomial [27] of degree 2n [3, 24, 27, 28].

$$P\_{2n}[y](\mathbf{x}) = y(0) + \sum\_{k=1}^{n} \left[ y^{(2k-1)}(0) \left( v\_k(1) - v\_k(1-\mathbf{x}) \right) + y^{(2k-1)}(1) \left( v\_k(\mathbf{x}) - v\_k(0) \right) \right],\tag{68}$$

where $v\_k(\mathbf{x})$ are the complementary Lidstone polynomials of degree 2k [27]. The kernel is

$$K\_{2n+1}^{\mathbf{x}}(\mathbf{x}, t) = \begin{cases} \frac{t^{2n}}{(2n)!} + \sum\_{k=1}^n \frac{t^{2n-2k+1}}{(2n-2k+1)!} \left( v\_k(1-\mathbf{x}) - v\_k(1) \right) & t \le \mathbf{x} \\\ -\sum\_{k=1}^n \frac{(1-t)^{2n-2k+1}}{(2n-2k+1)!} \left( v\_k(\mathbf{x}) - v\_k(0) \right) & \mathbf{x} \le t \,\,. \end{cases} \tag{69}$$

In Ref. [19], the proposed method applied to problem (2) with conditions (64) and (67), respectively, has been examined in detail.

#### 5.2.3. Other special boundary conditions

If r = n−1 and [a, b] = [0, 1], we can consider Bernoulli boundary conditions [21]

$$y(0) = \beta\_0, \quad y(1) = \beta\_1, \quad y^{(k)}(1) - y^{(k)}(0) = \beta\_{k+1}, \qquad k = 1, \ldots, n-2 \tag{70}$$

with β\_k, k = 0, …, n−1 real constants. The method has been examined in Ref. [14].

#### 5.3. Multipoint boundary value problems

Let us now consider [15] the following conditions in I = [−1, 1]

$$y^{(k)}(-1) = \alpha\_k, \quad k = 0, \ldots, s - 1,\qquad y^{(s)}(\mathbf{x}\_i) = \omega\_i \quad i = 1, \ldots, r - s.\tag{71}$$

In this case

$$P\_{r-1}[y](\mathbf{x}) = \sum\_{i=0}^{s-1} \frac{\left(\mathbf{x} + \mathbf{1}\right)^i}{i!} \alpha\_i + \frac{1}{(s-1)!} \sum\_{k=1}^{r-s} \omega\_k p\_{r,k}(\mathbf{x})\,,\tag{72}$$

with

$$p\_{r,k}(\mathbf{x}) = \int\_{-1}^{\mathbf{x}} (\mathbf{x} - t)^{s-1} l\_k(t) dt \tag{73}$$

and $l\_k(t)$ are the fundamental Lagrange polynomials on the points $\mathbf{x}\_j$, j = 1, …, r−s. P\_{r−1}(x) is the unique polynomial of degree ≤ r−1 which satisfies the Birkhoff interpolation problem

$$P\_{r-1}^{(k)}(-1) = \alpha\_k, \quad k = 0, \ldots, s-1, \qquad P\_{r-1}^{(s)}(\mathbf{x}\_i) = \omega\_i, \quad i = 1, \ldots, r-s, \quad s \le r - 1 \tag{74}$$

with $-1 < \mathbf{x}\_1 < \cdots < \mathbf{x}\_{r-s} \le 1$. Hence, the solution of problem (2), with multipoint conditions (71), is

$$y(\mathbf{x}) = P\_{r-1}[y](\mathbf{x}) + \int\_{-1}^{1} K\_r^{\mathbf{x}}(\mathbf{x}, t) y^{(r)}(t) dt,\tag{75}$$

with P\_{r−1}[y](x) given in Eq. (72) and

$$K\_r^{\mathbf{x}}(\mathbf{x}, t) = \frac{1}{(r - 1)!} \left[ (\mathbf{x} - t)\_+^{r - 1} - \binom{r - 1}{s} s \sum\_{i = 1}^{r - s} p\_{r, i, m}(\mathbf{x}) (\mathbf{x}\_i - t)\_+^{r - s - 1} \right]. \tag{76}$$

Observe that Eq. (74) is a special type of Birkhoff interpolation problem with incidence matrix $E = (e\_{ij})$ defined by $e\_{1j} = e\_{is} = 1$, j = 0, ⋯, s−1, i = 2, …, r−s+1; $e\_{ij} = 0$ otherwise, and r ≥ 1.
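Differentiating Eq. (73) s times gives $p\_{r,k}^{(s)}(\mathbf{x}) = (s-1)!\, l\_k(\mathbf{x})$, which is why the expansion (72) matches the derivative conditions $P\_{r-1}^{(s)}(\mathbf{x}\_i) = \omega\_i$ in (74). A numerical sketch with s = 2 and two hypothetical nodes (all parameter values here are our own choices):

```python
import numpy as np

s = 2
nodes = np.array([-0.4, 0.5])                  # hypothetical collocation points
k = 0
others = np.delete(nodes, k)
lk = np.poly1d(np.poly(others) / np.prod(nodes[k] - others))   # Lagrange basis l_k

g, w = np.polynomial.legendre.leggauss(12)
def p(x):
    # p_{r,k}(x) = \int_{-1}^x (x - t)^{s-1} l_k(t) dt by mapped Gauss quadrature
    t = (x + 1) / 2 * g + (x - 1) / 2
    return (x + 1) / 2 * np.sum(w * (x - t)**(s - 1) * lk(t))

h = 1e-4
x0 = 0.2
second = (p(x0 + h) - 2 * p(x0) + p(x0 - h)) / h**2   # numerical p''(x0); (s-1)! = 1
print(second, lk(x0))                          # the two values should agree
```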

In Ref. [23], P\_{r−1}[y](x) is presented in a slightly different form:

$$P\_{r-1}[y](\mathbf{x}) = \sum\_{i=0}^{s-1} \frac{(\mathbf{x}+1)^i}{i!} \alpha\_i + \sum\_{k=1}^{r-s} \omega\_k E\_s(\mathbf{x}, l\_k(\mathbf{x})) \,, \tag{77}$$

where $E\_s(\mathbf{x}, l\_k(\mathbf{x})) = \underbrace{\int\_{-1}^{\mathbf{x}} \cdots \int\_{-1}^{\mathbf{x}}}\_{s} l\_k(t)\, dt \cdots dt$.

Let us now consider the following conditions [12, 20]

$$y(-1) = \omega\_0, \qquad y(1) = \omega\_{r-1} \qquad y''(\mathbf{x}\_i) = \omega\_i \qquad \mathbf{i} = 1, \ldots, r-2. \tag{78}$$

The solution to the Birkhoff interpolation problem


$$P\_{r-1}(-1) = \omega\_0, \qquad P\_{r-1}(1) = \omega\_{r-1}, \qquad P\_{r-1}''(\mathbf{x}\_i) = \omega\_i, \quad i = 1, \ldots, r-2 \tag{79}$$

with $-1 < \mathbf{x}\_1 < \cdots < \mathbf{x}\_{r-2} < 1$ is [12]

$$P\_{r-1}[y](\mathbf{x}) = \frac{\omega\_{r-1} + \omega\_0}{2} + \frac{\omega\_{r-1} - \omega\_0}{2}\mathbf{x} + \sum\_{i=1}^{r-2} q\_{r,i}(\mathbf{x})\omega\_i \tag{80}$$

with


$$q\_{r,i}(\mathbf{x}) = \int\_{-1}^{1} K\_r^{\mathbf{x}}(\mathbf{x}, t) l\_i(t) dt \tag{81}$$

and

$$K\_r^\mathbf{x}(\mathbf{x},t) = \begin{cases} \frac{(t+1)(\mathbf{x}-1)}{2} & t \le \mathbf{x} \\\ \frac{(\mathbf{x}+1)(t-1)}{2} & \mathbf{x} < t. \end{cases} \tag{82}$$

Hence, the solution of problem (2) is

$$y(\mathbf{x}) = P\_{r-1}[y](\mathbf{x}) + \int\_{-1}^{1} K\_r^{\mathbf{x}}(\mathbf{x}, t) y^{(r)}(t) dt,\tag{83}$$

with P\_{r−1}[y](x) given in Eq. (80) and

$$K\_r^{\mathbf{x}}(\mathbf{x}, t) = \frac{1}{(r - 1)!} \left[ (\mathbf{x} - t)\_+^{r - 1} - \frac{(1 - t)^{r - 1}(1 + \mathbf{x})}{2} - (r - 1)(r - 2) \sum\_{i = 1}^{r - 2} q\_{r, i}(\mathbf{x}) (\mathbf{x}\_i - t)\_+^{r - 3} \right]. \tag{84}$$

### 6. Numerical examples

In this section, we present some numerical results obtained by applying method (15), which we call the CGN method, to find numerical approximations $y\_{r,m}(\mathbf{x})$ to the solution of some test problems. In order to solve the nonlinear system (19), we use the so-called modified Newton method [29] (the same Jacobian matrix is used for more than one iteration), and we use algorithm (44) for the computation of the entries of the matrix when the polynomials $p\_{r,i,m}(\mathbf{x})$ are not explicitly known. Since the true solutions of the analyzed problems are known, we consider the error function $e\_m(\mathbf{x}) = |y(\mathbf{x}) - y\_{r,m}(\mathbf{x})|$.
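The modified Newton idea mentioned above can be sketched as follows; this is a generic illustration (the toy system and the refresh period are our own choices, not the authors' implementation):

```python
import numpy as np

def modified_newton(F, J, x0, refresh_every=4, tol=1e-12, maxit=100):
    # Newton iteration that reuses the same Jacobian matrix for several steps [29]
    x = np.asarray(x0, dtype=float)
    for it in range(maxit):
        if it % refresh_every == 0:
            Jx = J(x)                          # refresh the Jacobian only occasionally
        step = np.linalg.solve(Jx, F(x))
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# toy nonlinear system: x^2 + y^2 = 1, x = y  ->  root (1/sqrt(2), 1/sqrt(2))
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [1.0, -1.0]])
root = modified_newton(F, J, [1.0, 0.5])
```

Freezing the Jacobian trades quadratic for linear convergence between refreshes, but avoids re-assembling and re-factorizing the matrix at every iteration.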

The maximum values of $e\_m(\mathbf{x})$ over the interval [a, b] have also been calculated by using Matlab, particularly the built-in solvers

• ode15s, a variable-step, variable-order multistep solver based on the numerical differentiation formulas of orders 1–5;

• ode45, a single-step solver, based on an explicit Runge-Kutta (4, 5) formula, the Dormand-Prince pair

for initial value problems, and the finite difference codes

• bvp4c (with an optional mesh of 200 points) that implements the three-stage Lobatto IIIa formula;

• bvp5c that implements the four-stage Lobatto IIIa formula

for boundary value problems.

All solvers have been used with optional parameters RelTol=AbsTol=1e−17.

Moreover, the powerful tool Chebfun [30] has been used.

Example 1 Consider the following linear ninth-order BVP [28]

$$\begin{cases} y^{(9)}(\mathbf{x}) = -9e^{\mathbf{x}} + y(\mathbf{x}) & \mathbf{x} \in [0, 1] \\ y^{(j)}(0) = 1 - j & j = 0, \ldots, 4 \\ y^{(j)}(1) = -j \, e & j = 0, \ldots, 3 \end{cases} \tag{85}$$

with exact solution $y(\mathbf{x}) = (1-\mathbf{x})e^{\mathbf{x}}$.
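A quick numerical confirmation that $y(\mathbf{x}) = (1-\mathbf{x})e^{\mathbf{x}}$ solves (85): by induction $y^{(j)}(\mathbf{x}) = (1-\mathbf{x}-j)e^{\mathbf{x}}$, hence $y^{(9)} = y - 9e^{\mathbf{x}}$ and the stated boundary values follow. The induction step can be checked by finite differences:

```python
import numpy as np

def deriv_j(t, j):
    # claimed closed form of y^{(j)} for y = (1 - x) e^x
    return (1.0 - t - j) * np.exp(t)

xs = np.linspace(0.0, 1.0, 11)
h = 1e-6
ok = all(
    np.allclose((deriv_j(xs + h, j) - deriv_j(xs - h, j)) / (2 * h),
                deriv_j(xs, j + 1), atol=1e-6)
    for j in range(9)
)
print(ok)   # if True, y^(9) = (1 - x - 9)e^x = y - 9e^x and y^(j)(0) = 1 - j
```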

The unique polynomial $P\_8(\mathbf{x}) = P\_8[y](\mathbf{x})$ of degree 8 satisfying the boundary conditions $P\_8^{(j)}(0) = 1 - j$ for j = 0, …, 4, and $P\_8^{(j)}(1) = -j\,e$ for j = 0, …, 3 is

$$\begin{array}{l} P\_8(\mathbf{x}) = \ 1 - \frac{1}{2}\mathbf{x}^2 - \frac{1}{3}\mathbf{x}^3 - \frac{1}{8}\mathbf{x}^4 + \left(\frac{31}{2}\,\mathbf{e} - \frac{253}{6}\right)\mathbf{x}^5 +\\ \left(\frac{1321}{12} - \frac{81}{2}\,\mathbf{e}\right)\mathbf{x}^6 + \left(\frac{71}{2}\,\mathbf{e} - \frac{193}{2}\right)\mathbf{x}^7 + \left(\frac{685}{24} - \frac{21}{2}\,\mathbf{e}\right)\mathbf{x}^8. \end{array} \tag{86}$$

From Eq. (7), we get

$$K\_{9}^{x}(\mathbf{x},t) = \frac{1}{8!} \begin{cases} 70t^4(\mathbf{x}^4 - 4\mathbf{x}^5 + 6\mathbf{x}^6 - 4\mathbf{x}^7 + \mathbf{x}^8) + 56t^5(-\mathbf{x}^3 + 10\mathbf{x}^5 - 20\mathbf{x}^6 + 15\mathbf{x}^7 - 4\mathbf{x}^8) + \\ 28t^6(\mathbf{x}^2 - 20\mathbf{x}^5 + 45\mathbf{x}^6 - 36\mathbf{x}^7 + 10\mathbf{x}^8) + 8t^7(-\mathbf{x} + 35\mathbf{x}^5 - 84\mathbf{x}^6 + 70\mathbf{x}^7 - 20\mathbf{x}^8) + \\ \quad t^8(1 - 56\mathbf{x}^5 + 140\mathbf{x}^6 - 120\mathbf{x}^7 + 35\mathbf{x}^8) & 0 \le t \le \mathbf{x} \\ -\mathbf{x}^8 + 8t\mathbf{x}^7 - 28t^2\mathbf{x}^6 + 56t^3\mathbf{x}^5 + 70t^4(-4\mathbf{x}^5 + 6\mathbf{x}^6 - 4\mathbf{x}^7 + \mathbf{x}^8) + \\ 56t^5(10\mathbf{x}^5 - 20\mathbf{x}^6 + 15\mathbf{x}^7 - 4\mathbf{x}^8) + 28t^6(-20\mathbf{x}^5 + 45\mathbf{x}^6 - 36\mathbf{x}^7 + 10\mathbf{x}^8) + \\ 8t^7(35\mathbf{x}^5 - 84\mathbf{x}^6 + 70\mathbf{x}^7 - 20\mathbf{x}^8) + \\ \quad t^8(-56\mathbf{x}^5 + 140\mathbf{x}^6 - 120\mathbf{x}^7 + 35\mathbf{x}^8) & \mathbf{x} \le t \le 1. \end{cases} (87)$$

Now we calculate the values of the integrals (39) by using Eq. (45), and we solve system (26). Thus, we obtain the approximate solution (15) to problem (85).

Table 1 shows the numerical results. The absolute errors are compared with those obtained in Ref. [28], where a modified decomposition method is applied for the solution of problem (85). The second and third columns of Table 1 show the error, respectively, in the method in Ref. [28] and in the CGN method, using in both cases polynomials of degree 12. The last column contains the error in the approximation by a polynomial of degree 14 using the CGN method. As collocation points, equidistant nodes in [0, 1] are chosen. Analogous results are obtained by using Chebyshev nodes of first and second kind, and Legendre-Gauss-Lobatto points.

The maximum absolute error $\max\{e\_m(\mathbf{x})\}$ on [0, 1] has also been calculated by using Matlab (Table 2).


Table 1. Absolute error emðxÞ in MDM and CGN methods for problem (85).


Table 2. Maximum absolute error in problem (85) using Matlab built-in functions.


Table 3. Problem (88)—example 2.


Example 2 Consider the fifth-order initial value problem [13]

$$\begin{cases} y^{(5)} + (32x^5 + 120x)y = 160x^3 e^{-x^2} & x \in [0, 1] \\ y(0) = 1, \ y'(0) = 0, \ y''(0) = -2 \\ y'''(0) = 0, \ y^{(4)}(0) = 12 \end{cases} \tag{88}$$

with solution $y(\mathbf{x}) = e^{-\mathbf{x}^2}$.
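That $y(\mathbf{x}) = e^{-\mathbf{x}^2}$ solves the equation in (88) follows from the identity $\frac{d^5}{d\mathbf{x}^5} e^{-\mathbf{x}^2} = -H\_5(\mathbf{x})\, e^{-\mathbf{x}^2}$, with $H\_5$ the physicists' Hermite polynomial; a short numerical check:

```python
import numpy as np
from numpy.polynomial.hermite import Hermite

xs = np.linspace(0.0, 1.0, 21)
w = np.exp(-xs**2)
y5 = -Hermite.basis(5)(xs) * w                 # d^5/dx^5 e^{-x^2} = -H_5(x) e^{-x^2}
residual = y5 + (32 * xs**5 + 120 * xs) * w - 160 * xs**3 * w
print(np.max(np.abs(residual)))                # should be at round-off level
```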

Table 3 shows the absolute error in some points of the interval [0, 1] for the CGN method in the case, respectively, of Chebyshev nodes of first kind (Cheb I), of second kind (Cheb II) and in the case of equidistant nodes (EqPts).

The maximum absolute errors calculated by using Matlab are displayed in Table 4.


| Chebfun | ode15s | ode45 |
|---|---|---|
| 2.11e−11 | 1.35e−13 | 1.33e−15 |

Table 4. Maximum absolute error in problem (88) using Matlab built-in functions.

Example 3 Consider now the following nonlinear problem [31]

$$\begin{cases} y^{(4)}(\mathbf{x}) = \sin \mathbf{x} + \sin^2 \mathbf{x} - \left( y''(\mathbf{x}) \right)^2 & \mathbf{x} \in [0, 1] \\ y(0) = 0 & y'(0) = 1 \\ y(1) = \sin(1) & y'(1) = \cos(1) \end{cases} \tag{89}$$

with exact solution $y(\mathbf{x}) = \sin(\mathbf{x})$.
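Substituting $y = \sin \mathbf{x}$ into (89) confirms the exact solution: $y'' = -\sin \mathbf{x}$, so $(y'')^2 = \sin^2 \mathbf{x}$ cancels and $y^{(4)} = \sin \mathbf{x}$ remains. As a brief sketch:

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 50)
y2 = -np.sin(xs)                               # y''  for y = sin x
y4 = np.sin(xs)                                # y''''
residual = y4 - (np.sin(xs) + np.sin(xs)**2 - y2**2)
print(np.max(np.abs(residual)))                # zero up to round-off
```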

This kind of problem models several nonlinear phenomena such as traveling waves in suspension bridges [32] or the bending of an elastic beam [33].

Suspension bridges are generally susceptible to visible oscillations, due to the forces acting on the bridge (including the force due to the cables, which are considered as a spring with a one-sided restoring force, the gravitational force and the external force due to the wind or other external sources). Here f represents the forcing term, while y represents the vertical displacement when the bridge is bending.

In the case of the elastic beam, f represents the force exerted on the beam by the supports. x measures the position along the beam (x = 0 is the left-hand endpoint of the beam), y and y′ indicate, respectively, the height and the slope of the beam at x. y″ measures the curvature of the graph of y, and, in physical terms, it measures the bending moment of the beam at x, that is, the torque that the load places on the beam at x.

The considered boundary conditions state that the beam has both endpoints simply supported. Moreover, the derivative of the deflection function is not zero at those points, and it indicates that the beam at the wall is not horizontal.

Table 5 shows the comparison between the NMD method presented in Ref. [31] and the CGN method with m = 5 and m = 9, respectively. The approximating polynomial of the NMD method has degree 11, while the polynomial considered in the CGN method for m = 5 has degree 8.


The maximum absolute errors calculated by using Matlab are displayed in Table 6.

Table 5. Error of NMD and CGN methods—problem (89).


Table 6. Maximum absolute error in problem (89) using Matlab built-in functions.

## Author details



<sup>ð</sup>xÞ ¼ sin <sup>x</sup> <sup>þ</sup> sin <sup>2</sup>

forcing term, while y represents the vertical displacement when the bridge is bending.

while the polynomial considered in CGN method for m ¼ 5 has degree 8.

ð0Þ ¼ 1

ð0Þ ¼ 12

Table 3 shows the absolute error in some points of the interval ½0; 1� for CGN method in the case, respectively, of Chebyshev nodes of first kind (Cheb I), of second kind (Cheb II) and in the case of

> x− � y″ðxÞ �2

This kind of problems models several nonlinear phenomena such as traveling waves in suspension

Suspension bridges are generally susceptible to visible oscillations, due to the forces acting on the bridge (including the force due to the cables which are considered as a spring with a one-sided restoring, the gravitation force and the external force due to the wind or other external sources). f represents the

In the case of elastic beam, f represents the force exerted on the beam by the supports. x measures the position along the beam (<sup>x</sup> <sup>¼</sup> <sup>0</sup> is the left-hand endpoint of the beam), <sup>y</sup> and <sup>y</sup>′ indicate, respectively, the height and the slope of the beam at x. y″ measures the curvature of the graph of y, and, in physical terms, it measures the bending moment of the beam at x, that is, the torque that the load

The considered boundary conditions state that the beam has both endpoints simply supported. Moreover, the derivative of the deflection function is not zero at those points, and it indicates that the beam at the

Table 5 shows the comparison between the NMD method presented in Ref. [31] and the CGN method with m ¼ 5 and m ¼ 9, respectively. The approximating polynomial of NMD method has degree 11,

ð1Þ ¼ cos ð1Þ

x ∈½0; 1�

x ∈½0; 1�

(88)

(89)

Francesco Aldo Costabile, Maria Italia Gualtieri and Anna Napoli\*

\*Address all correspondence to: anna.napoli@unical.it

Department of Mathematics and Informatics, University of Calabria, Rende (Cs), Italy

## References


[5] Henrici P. Discrete variable methods in ordinary differential equations. Wiley, New York. 1962.

[6] Strikwerda J. Finite difference schemes and partial differential equations. SIAM, Philadelphia, PA. 2004.

[7] Caglar H, Caglar N, Elfaituri K. B-spline interpolation compared with finite difference, finite element and finite volume methods which applied to two-point boundary value problems. Applied Mathematics and Computation. 2006; 175(1): 72–79.

[8] Chang J, Yang Q, Zhao L. Comparison of B-spline method and finite difference method to solve BVP of linear ODEs. Journal of Computers. 2011; 6(10): 2149–2155.

[9] Costabile F, Gualtieri MI, Serafini G. Cubic Lidstone-Spline for numerical solution of BVPs. Submitted.

[10] Khan A. Parametric cubic spline solution of two point boundary value problems. Applied Mathematics and Computation. 2004; 154(1): 175–182.

[11] Boyd J. Chebyshev and Fourier spectral methods. 2nd edition, Dover, Mineola, NY. 2000.

[12] Costabile F, Longo E. A Birkhoff interpolation problem and application. Calcolo. 2010; 47(1): 49–63.

[13] Costabile F, Napoli A. A class of collocation methods for numerical integration of initial value problems. Computers and Mathematics with Applications. 2011; 62(8): 3221–3235.

[14] Costabile F, Napoli A. Numerical solution of high order Bernoulli boundary value problems. Journal of Applied Mathematics. 2014, Article ID 276585. doi: 10.1155/2014/276585.

[15] Costabile F, Napoli A. A method for high-order multipoint boundary value problems with Birkhoff-type conditions. International Journal of Computer Mathematics. 2015; 92(1): 192–200.

[16] Costabile F, Longo E. A new collocation method for a BVP. Applied and Industrial Mathematics in Italy III. 2009; 3: 289–297. (Ser. Adv. Math. Appl. Sci., 82, World Sci. Publ., Hackensack, NJ. 2010)

[17] Costabile F, Napoli A. A method for global approximation of the solution of second order IVPs. Rendiconti del Circolo Matematico, Ser. II. 2004; 24: 239–260.

[18] Costabile F, Napoli A. A method for polynomial approximation of the solution of general second order BVPs. Far East Journal of Applied Mathematics. 2006; 25(3): 289–305.

[19] Costabile F, Napoli A. Collocation for high-order differential equations with Lidstone boundary conditions. Journal of Applied Mathematics. 2012, Article ID 120792. doi: 10.1155/2012/120792.

[20] Costabile F, Napoli A. A multipoint Birkhoff type boundary value problem. Journal of Numerical Mathematics. 2015; 23(1): 1–11.

[21] Costabile F, Serpe A, Bruzio A. No classic boundary conditions. In: Proceedings of World Congress on Engineering 2007; July 2–4, 2007, London, 918–921.

[22] Costabile F, Napoli A. Collocation for high order differential equations with two-points Hermite boundary conditions. Applied Numerical Mathematics. 2015; 87: 157–167.



#### **Integral-Equation Formulations of Plasmonic Problems in the Visible Spectrum and Beyond**

Abdulkerim Çekinmez, Barişcan Karaosmanoğlu and Özgür Ergül

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/67216

#### Abstract

Computational modeling of nano-plasmonic structures is essential to understand their electrodynamic responses before experimental efforts in measurement setups. Similar to the other ranges of the electromagnetic spectrum, there are alternative methods for the numerical analysis of nano-plasmonic problems, while the optics literature is dominated by differential equations that require discretizations of the host media with artificial truncations. These approaches often need serious assumptions, such as periodicity, infinity, or self-similarity, in order to reduce the computational load. On the other hand, surface integral equations based on integro-differential operators can bring important advantages for accurate and efficient modeling of nano-plasmonic problems with arbitrary geometries. Electrical properties of materials, which may be obtained either experimentally or via physical modeling, can easily be inserted into integral-equation formulations, leading to accurate predictions of electromagnetic responses of complex structures. This chapter presents the implementation of such accurate, efficient, and reliable solvers based on appropriate combinations of surface integral equations, discretizations, numerical integrations, fast algorithms, and iterative techniques. As a case study, nanowire transmission lines are investigated in wide-frequency ranges, demonstrating the capabilities of the developed implementations.

Keywords: surface integral equations, multilevel fast multipole algorithm, surface plasmons, computational electromagnetics

## 1. Introduction

As in all areas of electrodynamics, numerical study of plasmonic problems is essential to understand interactions between electromagnetic waves and matter at the higher range of the spectrum. Applications include nanowires for negative refraction, imaging, and super-resolution

© 2017 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

[1, 2], and nanoantennas for energy harvesting, single-molecule sensing, and optical links [3–9], to name a few. At optical frequencies, some metals are known to possess strong plasmonic properties [10] that are crucial for a majority of such applications, while their accurate analysis requires more than the perfectly conducting models that are common in radio and microwave regimes. In the infrared region, it may not be obvious when perfect conductivity or impedance approximation methods can safely be used. Hence, it is desirable to extend plasmonic-modeling capabilities across wide ranges of frequencies until they converge to the other forms. While, in the literature, experimental studies are often supported by differential solvers, their applicability to complex problems is usually limited to small-scale and/or simplified models due to well-known drawbacks, such as the need for space (host-medium) discretizations that are accompanied by artificial truncations. Major tools of computational electromagnetics, that is, surface integral equations [11, 12] employing integro-differential operators, have recently been applied to plasmonic problems with promising results for realistic simulations of complex structures [13–23]. In fact, surface integral equations need only the discretization of boundaries between different media, which usually corresponds to the surface of the plasmonic object. In addition to homogeneous bodies, they are also applicable to piecewise homogeneous cases, making it possible to analyze structures with coexisting multiple materials [24].

Using surface integral equations, it is possible to solve plasmonic problems involving finite models with arbitrary geometries, without periodicity, self-similarity, and infinity assumptions. When the object is large in terms of the wavelength, fast and efficient methods, such as the multilevel fast multipole algorithm (MLFMA) [25], are available to accelerate solutions [26–28]. For plasmonic modeling, effective permittivity values with negative real parts are required, and they are already available via theoretical and experimental studies [10]. In the phasor domain with time-harmonic sources, which is considered in this chapter, permittivity is a simulation parameter with a fixed value at a given frequency. Then, frequency sweeps can be performed by using the discrete values of the permittivity with respect to frequency. As theoretical models, the Drude (D) or Lorentz-Drude (LD) models are commonly used. While these models (especially the Lorentz-Drude model) provide reliable permittivity values in wide-frequency ranges, they deviate from experimental data at higher frequencies of the optical spectrum. From the perspective of surface integral equations, it does not matter where the permittivity values are obtained from. Besides, there is great flexibility in geometric modeling, allowing sharp edges and corners, tips, and subwavelength details [29]. On top of these, the background of surface integral equations provides self-consistency and accuracy-check mechanisms, such as those based on the equivalence theorem, enabling accuracy analysis without resorting to alternative solvers [30].
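The Drude model mentioned above admits a compact numerical sketch. The closed-form expression $\varepsilon(\omega) = \varepsilon_\infty - \omega_p^2/(\omega^2 + i\gamma\omega)$ (with the exp(−iωt) convention used in this chapter) is standard; the plasma frequency and damping rate below are illustrative placeholders, not fitted material data.

```python
import numpy as np

def drude_permittivity(omega, omega_p, gamma, eps_inf=1.0):
    """Relative Drude permittivity for the exp(-i*omega*t) convention:
    eps(omega) = eps_inf - omega_p**2 / (omega**2 + 1j*gamma*omega)."""
    return eps_inf - omega_p**2 / (omega**2 + 1j * gamma * omega)

# Illustrative (not fitted) Drude parameters, in rad/s:
omega_p = 1.37e16  # plasma frequency
gamma = 7.3e13     # damping (collision) rate

f = 400e12  # 400 THz, red end of the visible spectrum
eps = drude_permittivity(2.0 * np.pi * f, omega_p, gamma)
# Re(eps) < 0 and Im(eps) > 0: the lossy plasmonic regime.
```

With this sign convention, loss corresponds to a positive imaginary part, consistent with writing the permittivity as $\varepsilon_p = \varepsilon_o(-\varepsilon_R + i\varepsilon_I)$ later in the chapter.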

From a numerical point of view, surface integral equations bring their own challenges when they are applied to plasmonic problems. In free space, plasmonic objects are naturally high-contrast problems [15], leading to difficulties in maintaining the accuracy and/or efficiency. Considering the equivalence theorem, the ideal mesh size for surface formulations can be selected based on the wavenumber of the host medium, where the impressed sources are located [26]. Therefore, the source of the inaccuracy is not directly the discretization size, but a combination of geometric deviation (for smooth objects), numerical integration, and imbalanced contributions from inner/outer media. Efficiency of iterative solutions may also deteriorate due to imbalanced matrix blocks that lead to ill-conditioned matrix equations [31]. On the other hand, numerical challenges are not only due to the high contrasts of plasmonic objects. The effective permittivity of a plasmonic medium has a negative real part whose magnitude becomes increasingly large at lower frequencies. In numerical solutions, integro-differential operators become localized with an exponentially decaying Green's function. This localization is responsible for the evolution of plasmonic formulations into perfectly conducting types, while this process may not be achieved smoothly in discrete forms. Some traditional formulations break down due to dominant inner contributions, which are difficult to compute accurately [32], if not impossible. Classical singularity extractions may fail to provide smooth integrands, leading to increasingly inaccurate near-zone interactions. While all formulations may be improved by manipulating integrations into more suitable forms, our focus is to develop new formulations that reduce into perfectly conducting formulations in the limit.
All results presented in this chapter are obtained by such a stabilized integral-equation formulation, namely a modified combined tangential formulation (MCTF), which provides accurate results using the conventional Rao-Wilton-Glisson (RWG) discretizations [33].

The chapter is organized as follows. In Section 2, we present surface integral equations, with the emphasis on MCTF. Discretization is presented in Section 3, including implementation details that may be followed by the readers to develop their own solvers. MLFMA is further discussed in Section 4, demonstrating how to accelerate numerical solutions. Finally, we present an extensive case study, involving nanowire transmission lines in a wide range of frequencies, to illustrate the significant differences between the analytical models and measurement data for the permittivity values. In the following, time-harmonic electrodynamic problems are considered with exp(−iωt) time dependency, where i<sup>2</sup> = −1 and ω = 2πf is the angular frequency.

## 2. Surface integral equations


For deriving surface formulations, we consider a plasmonic object with permittivity/permeability ($\varepsilon_p/\mu_p$) located in unbounded free space with permittivity/permeability ($\varepsilon_o/\mu_o$). Alternative surface integral equations can be obtained by considering the boundary conditions on the surface of the object. In a general form, we have

$$
\begin{bmatrix}
\mathcal{Z}\_{11} & \mathcal{Z}\_{12} \\
\mathcal{Z}\_{21} & \mathcal{Z}\_{22}
\end{bmatrix} \cdot \begin{bmatrix}
\mathbf{J} \\
\mathbf{M}
\end{bmatrix}(\mathbf{r}) = \begin{bmatrix}
a\hat{\mathbf{n}} \times \hat{\mathbf{n}} \times \boldsymbol{E}^{\text{inc}} - e\hat{\mathbf{n}} \times \boldsymbol{H}^{\text{inc}} \\
c\hat{\mathbf{n}} \times \hat{\mathbf{n}} \times \boldsymbol{H}^{\text{inc}} + g\hat{\mathbf{n}} \times \boldsymbol{E}^{\text{inc}}
\end{bmatrix}(\mathbf{r}),\tag{1}
$$

where $\mathbf{J} = \hat{\mathbf{n}} \times \mathbf{H}$ and $\mathbf{M} = -\hat{\mathbf{n}} \times \mathbf{E}$ are the equivalent currents written in terms of the tangential electric field intensity $\mathbf{E}$ and the magnetic field intensity $\mathbf{H}$ on the closed surface ($\mathbf{r} \in S$). In the above, $\hat{\mathbf{n}}$ is the unit normal pointing outward from the object, and $\mathbf{E}^{\text{inc}}$ and $\mathbf{H}^{\text{inc}}$ are the incident electric and magnetic fields, respectively, created by impressed sources located in the host medium. At an observation point on a locally planar surface (solid angle $= 2\pi$), the combined operators can be written as

$$\mathcal{Z}\_{11} = -\hat{\mathbf{n}} \times \hat{\mathbf{n}} \times (a\eta\_o \mathcal{T}\_o + b\eta\_p \mathcal{T}\_p) + \hat{\mathbf{n}} \times (e\mathcal{K}\_{\text{PV},o} - f\mathcal{K}\_{\text{PV},p}) - (e+f)\mathcal{I}/2 \tag{2}$$

$$\mathcal{Z}\_{12} = \hat{\mathbf{n}} \times \hat{\mathbf{n}} \times (a\mathcal{K}\_{\text{PV},o} + b\mathcal{K}\_{\text{PV},p}) - (a-b)\hat{\mathbf{n}} \times \mathcal{I}/2 + \hat{\mathbf{n}} \times (e\eta^{-1}\_o \mathcal{T}\_o - f\eta^{-1}\_p \mathcal{T}\_p) \tag{3}$$

$$\mathcal{Z}\_{21} = -\hat{\mathbf{n}} \times \hat{\mathbf{n}} \times (c\mathcal{K}\_{\text{PV},o} + d\mathcal{K}\_{\text{PV},p}) + (c-d)\hat{\mathbf{n}} \times \mathcal{I}/2 - \hat{\mathbf{n}} \times (g\eta\_o \mathcal{T}\_o - h\eta\_p \mathcal{T}\_p) \tag{4}$$

$$\mathcal{Z}\_{22} = -\hat{\mathbf{n}} \times \hat{\mathbf{n}} \times (c\eta\_o^{-1} \mathcal{T}\_o + d\eta\_p^{-1} \mathcal{T}\_p) + \hat{\mathbf{n}} \times (g\mathcal{K}\_{\text{PV},o} - h\mathcal{K}\_{\text{PV},p}) - (g+h)\mathcal{I}/2, \tag{5}$$

where $\{a, b, c, d, e, f, g, h\}$ are generalized coefficients. In the above, $\eta_o = \sqrt{\mu_o}/\sqrt{\varepsilon_o}$ is the intrinsic impedance of the host medium, whereas $\eta_p = \sqrt{\mu_p}/\sqrt{\varepsilon_p}$ is the complex intrinsic impedance of the plasmonic object. The integro-differential and identity operators are derived as

$$\mathcal{T}\_u\{\mathbf{X}\}(\mathbf{r}) = ik\_u \int\_S d\mathbf{r}' \left[\mathbf{X}(\mathbf{r}') + \frac{1}{k\_u^2} \nabla' \cdot \mathbf{X}(\mathbf{r}') \nabla\right] g\_u(\mathbf{r}, \mathbf{r}') \tag{6}$$

$$\mathcal{K}\_{\text{PV},u}\{\mathbf{X}\}(\mathbf{r}) = \int\_{\text{PV},S} d\mathbf{r}'\, \mathbf{X}(\mathbf{r}') \times \nabla' g\_u(\mathbf{r}, \mathbf{r}') \tag{7}$$

$$\mathcal{I}\{\mathbf{X}\}(\mathbf{r}) = \mathbf{X}(\mathbf{r}) \tag{8}$$

for $\mathbf{r} \in S$, where PV indicates the principal value of the integral, $\nabla = \hat{\mathbf{x}}\,\partial/\partial x + \hat{\mathbf{y}}\,\partial/\partial y + \hat{\mathbf{z}}\,\partial/\partial z$ is the differential operator, $g_u(\mathbf{r}, \mathbf{r}') = \exp(ik_u|\mathbf{r}-\mathbf{r}'|)/(4\pi|\mathbf{r}-\mathbf{r}'|)$ is the homogeneous-space Green's function, and $k_u = 2\pi/\lambda_u = \omega\sqrt{\mu_u\varepsilon_u}$ is the wavenumber for $u = \{o, p\}$.
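The Green's function and wavenumber defined above are straightforward to evaluate numerically. The sketch below assumes a nonmagnetic medium ($\mu_u = \mu_o$) and an illustrative plasmonic permittivity; it is not tied to any particular material model.

```python
import numpy as np

def wavenumber(f, eps_r):
    """k_u = omega*sqrt(mu_u*eps_u), assuming a nonmagnetic medium (mu_u = mu_o)."""
    c0 = 299792458.0  # speed of light in free space, m/s
    return 2.0 * np.pi * f / c0 * np.sqrt(eps_r + 0j)

def green(r, rp, k):
    """Homogeneous-space Green's function exp(ik|r-r'|)/(4*pi*|r-r'|)."""
    R = np.linalg.norm(np.asarray(r, float) - np.asarray(rp, float))
    return np.exp(1j * k * R) / (4.0 * np.pi * R)

k_o = wavenumber(500e12, 1.0)           # host medium (free space)
k_p = wavenumber(500e12, -10.0 + 1.0j)  # illustrative plasmonic permittivity
g = green([0.0, 0.0, 0.0], [100e-9, 0.0, 0.0], k_p)
# Im(k_p) > 0, so g decays exponentially inside the plasmonic medium.
```

The principal branch of the complex square root gives Im($k_p$) > 0, so the exponential decay of $g_p$ (the localization discussed in the introduction) appears automatically.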

The conventional formulations can be obtained by setting the generalized coefficients to suitable values such that the outer and inner problems are coupled while the internal resonances are removed. Using nonzero values for $\{e, f, g, h\}$ while setting $\{a, b, c, d\}$ to zero leads to N-formulations, such as the Müller formulation and the combined normal formulation [12]. These formulations contain the identity operator $\mathcal{I}$, which usually dominates the matrix equations when a Galerkin discretization is used. Therefore, matrix equations derived from N-formulations are generally easier to solve iteratively. On the other hand, T-formulations are obtained by selecting $\{a, b, c, d\}$ nonzero, while inserting zero values for $\{e, f, g, h\}$. The Poggio-Miller-Chang-Harrington-Wu-Tsai formulation [34] and the combined tangential formulation [12] are among the well-known T-formulations. As opposed to N-formulations, T-formulations contain either the rotational identity operator $\hat{\mathbf{n}} \times \mathcal{I}$ or no identity operator at all (when $a = b$ and $c = d$). Hence, using a Galerkin discretization, T-formulations do not contain a dominant identity operator, and they produce matrix equations that are potentially ill-conditioned. Finally, when a mixture of coefficients is used from the sets $\{a, b, c, d\}$ and $\{e, f, g, h\}$, mixed formulations are obtained. For example, the JM combined-field integral equation [35] is a mixed formulation in which all coefficients are nonzero. Obviously, mixed formulations always contain a dominant identity operator (due to either $\mathcal{I}$ or $\hat{\mathbf{n}} \times \mathcal{I}$).
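The N/T/mixed classification described above can be captured in a few lines. The helper `classify` below is a hypothetical illustration of the rule stated in the text, not part of any solver.

```python
def classify(a, b, c, d, e, f, g, h):
    """Classify a generalized coefficient set {a,b,c,d,e,f,g,h}:
    T-formulations use nonzero {a,b,c,d} with {e,f,g,h} = 0,
    N-formulations the opposite, and mixed formulations draw from both sets."""
    tangential = any(x != 0 for x in (a, b, c, d))
    normal = any(x != 0 for x in (e, f, g, h))
    if tangential and not normal:
        return "T"
    if normal and not tangential:
        return "N"
    return "mixed" if (tangential and normal) else "undefined"

print(classify(1, 1, 1, 1, 0, 0, 0, 0))  # "T"     (e.g., a CTF-like choice)
print(classify(0, 0, 0, 0, 1, 1, 1, 1))  # "N"     (e.g., a Muller-like choice)
print(classify(1, 1, 1, 1, 1, 1, 1, 1))  # "mixed" (e.g., a JMCFIE-like choice)
```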

Discretization is an important stage of numerical solutions. All formulations described above can be discretized in different ways such that the derived matrix equations are well conditioned and, at the same time, produce accurate results. On the other hand, using a Galerkin scheme employing the same set of basis and testing functions, N-formulations and mixed formulations usually produce better-conditioned matrix equations than T-formulations, as mentioned above. In addition, when low-order discretizations are used, the existence of a dominant identity operator is critical in terms of accuracy. It is well known that a discretized identity operator acts like a discretized integro-differential operator with a Dirac-delta kernel [36]. Therefore, a low-order discretization of the identity operator may produce large errors, leading to inaccurate results if the operator is directly tested such that it dominates the matrix equation. RWG discretizations of N-formulations and mixed formulations have this serious drawback, making them less preferred (despite their faster iterative solutions) in comparison to T-formulations in many applications. The trade-off between efficiency and accuracy has been addressed in many studies [37] by improving the accuracy of N-formulations and mixed formulations via alternative discretizations and/or by improving the efficiency of T-formulations via preconditioning.


In the context of plasmonic problems, further challenges appear in surface formulations. First, considering that their permittivity values can be written as $\varepsilon_p = \varepsilon_o(-\varepsilon_R + i\varepsilon_I)$, where both $\varepsilon_R$ and $\varepsilon_I$ are positive, plasmonic objects are naturally high-contrast structures in free space (except for very high frequencies, for which $-\varepsilon_R \to 1$). Then, the matrix equations derived from surface formulations can be unbalanced, leading to efficiency and/or accuracy problems. For planar discretizations of curved surfaces, fine discretizations are needed to capture the geometry of the object. At lower frequencies of the optical range, $\varepsilon_R$ can be very large (as large as 1000 and beyond), such that the localization of the operators, that is, $\mathcal{T}_p \to -\mathcal{I}/2$ and $\mathcal{K}_{\text{PV},p} - \mathcal{I}/2 \to -\mathcal{I}/2$ as $\varepsilon_R \to \infty$, leads to numerical problems if the blocks are not weighted properly (as occurs in many conventional formulations). While the well-known perfectly conducting models may be used at lower frequencies, it may not be obvious when the plasmonic model can be omitted for a given structure. Hence, it is desirable to extend the applicability of surface integral equations across wide-frequency ranges until other kinds of approaches can safely be used. In a recent study, we showed that a new tangential formulation, namely MCTF, provides reliable and convergent solutions in wide ranges of the optical spectrum [32]. Considering the general form, MCTF is obtained by using $a = b = 1$ and $c = d = \eta_o\eta_p$, while setting $e = f = g = h = 0$. Therefore, we obtain

$$\mathcal{Z}\_{11}^{\text{MCTF}} = -\hat{\mathbf{n}} \times \hat{\mathbf{n}} \times (\eta\_o \mathcal{T}\_o + \eta\_p \mathcal{T}\_p) \tag{9}$$

$$\mathcal{Z}\_{12}^{\text{MCTF}} = \hat{\mathbf{n}} \times \hat{\mathbf{n}} \times (\mathcal{K}\_{\text{PV},o} + \mathcal{K}\_{\text{PV},p}) \tag{10}$$

$$\mathcal{Z}\_{21}^{\text{MCTF}} = -\hat{\mathbf{n}} \times \hat{\mathbf{n}} \times \eta\_o\eta\_p(\mathcal{K}\_{\text{PV},o} + \mathcal{K}\_{\text{PV},p}) \tag{11}$$

$$\mathcal{Z}\_{22}^{\text{MCTF}} = -\hat{\mathbf{n}} \times \hat{\mathbf{n}} \times (\eta\_p \mathcal{T}\_o + \eta\_o \mathcal{T}\_p). \tag{12}$$

It can be observed that MCTF is completely free of the identity operator, and it can be shown that it smoothly turns into the electric-field integral equation for perfectly conducting objects as the frequency drops and ε<sup>R</sup> goes to infinity. In the following, we consider numerical solutions of plasmonic problems formulated with MCTF.
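This limiting behavior can be checked numerically: for a nonmagnetic plasmonic medium, $\eta_p = \eta_o/\sqrt{\varepsilon_p}$, so $|\eta_p|$ shrinks roughly as $1/\sqrt{\varepsilon_R}$ and the $\eta_p\mathcal{T}_p$ contribution in Eq. (9) fades relative to $\eta_o\mathcal{T}_o$. A minimal sketch, with illustrative permittivity values:

```python
import numpy as np

eta_o = np.sqrt(4e-7 * np.pi / 8.8541878128e-12)  # free-space impedance, ~376.73 ohms

def eta_p(eps_r):
    """Intrinsic impedance of a nonmagnetic medium with relative permittivity eps_r."""
    return eta_o / np.sqrt(eps_r + 0j)

ratio = None
for eps_R in (10.0, 100.0, 1000.0):
    ratio = abs(eta_p(-eps_R + 1.0j)) / eta_o
    print(f"eps_R = {eps_R:6.0f}: |eta_p|/eta_o = {ratio:.4f}")
# The ratio behaves like 1/sqrt(eps_R): the eta_p * T_p term in Eq. (9) becomes
# negligible, and MCTF tends to the electric-field integral equation of a
# perfect conductor, as stated above.
```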

## 3. Discretization

Similar to the diversity of surface integral equations, discretization can be performed in alternative ways. Using a Galerkin scheme, the basis and testing functions are selected as the same set of N functions locally defined on the surface. As a popular choice for triangular discretizations, which is also considered in this chapter, the RWG functions are defined as [33]

$$f\_n(\mathbf{r}) = \begin{cases} \frac{l\_n}{2A\_{n1}}(\mathbf{r} - \mathbf{r}\_{n1}), & \mathbf{r} \in S\_{n1} \\\\ \frac{l\_n}{2A\_{n2}}(\mathbf{r}\_{n2} - \mathbf{r}), & \mathbf{r} \in S\_{n2} \\\\ \mathbf{0}, & \mathbf{r} \notin S\_n \end{cases} \tag{13}$$

Each RWG function is located on a pair of triangles sharing an edge. In the above, ln represents the length of the main edge, An<sup>1</sup> and An<sup>2</sup> are, respectively, the areas of the first (Sn1) and the second (Sn2) triangles, and rn<sup>1</sup> and rn<sup>2</sup> represent the coordinates of the nodes opposite of the edge. The RWG functions are divergence conforming and their divergence is finite everywhere, that is,

$$\nabla \cdot f\_n(r) = \begin{cases} \quad \frac{l\_n}{A\_{n1}}, & r \in S\_{n1} \\\\ \quad -\frac{l\_n}{A\_{n2}}, & r \in S\_{n2} \\ \quad 0, & r \notin S\_n, \end{cases} \tag{14}$$

while charge neutrality is satisfied locally, that is, $A\_{n1}(l\_n/A\_{n1}) - A\_{n2}(l\_n/A\_{n2}) = 0$.
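Equations (13) and (14) can be sketched directly. The helpers below are hypothetical illustrations (the point-in-triangle test is left to the caller), and the final lines verify the local charge neutrality stated above by integrating the piecewise-constant divergence over the triangle pair.

```python
import numpy as np

def rwg(r, ln, A1, A2, r1, r2, tri):
    """Evaluate the RWG function of Eq. (13) at a point r.
    tri selects the triangle (1 or 2) containing r; r1, r2 are the free
    vertices opposite the shared (main) edge of length ln; A1, A2 are the
    triangle areas."""
    r = np.asarray(r, dtype=float)
    if tri == 1:
        return ln / (2.0 * A1) * (r - np.asarray(r1, dtype=float))
    return ln / (2.0 * A2) * (np.asarray(r2, dtype=float) - r)

def rwg_div(ln, A1, A2, tri):
    """Divergence of the RWG function (Eq. 14): piecewise constant."""
    return ln / A1 if tri == 1 else -ln / A2

# Charge neutrality: integrating the divergence over the pair gives zero.
ln, A1, A2 = 1.0, 0.4, 0.6
total_charge = A1 * rwg_div(ln, A1, A2, 1) + A2 * rwg_div(ln, A1, A2, 2)
# total_charge is (numerically) zero for any ln, A1, A2.
```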

By selecting the basis and testing functions ($\mathbf{b}_n$ and $\mathbf{t}_m$ for $\{n, m\} = \{1, 2, \dots, N\}$) as the same set of RWG functions, MCTF can be discretized as

$$
\begin{bmatrix}
\mathbf{Z}\_{11}^{\text{MCTF}} & \mathbf{Z}\_{12}^{\text{MCTF}} \\
\mathbf{Z}\_{21}^{\text{MCTF}} & \mathbf{Z}\_{22}^{\text{MCTF}}
\end{bmatrix} \cdot \begin{bmatrix}
\mathbf{a}\_{J} \\
\mathbf{a}\_{M}
\end{bmatrix} = \begin{bmatrix}
\mathbf{w}\_{1}^{\text{MCTF}} \\
\mathbf{w}\_{2}^{\text{MCTF}}
\end{bmatrix},\tag{15}
$$

where a<sup>J</sup> and a<sup>M</sup> are vectors containing complex coefficients to expand the current densities. The matrix elements and the elements of the right-hand-side vector are derived as

$$\overline{Z}\_{11}^{\text{MCTF}} = \eta\_o \overline{T}\_o^T + \eta\_p \overline{T}\_p^T \tag{16}$$

$$\overline{\mathbf{Z}}\_{12}^{\text{MCTF}} = -\overline{\mathbf{K}}\_{\text{PV},o}^{T} - \overline{\mathbf{K}}\_{\text{PV},p}^{T} \tag{17}$$

$$\overline{\mathbf{Z}}\_{21}^{\text{MCTF}} = \eta\_o \eta\_p (\overline{\mathbf{K}}\_{\text{PV},o}^T + \overline{\mathbf{K}}\_{\text{PV},p}^T) \tag{18}$$

$$\overline{\mathbf{Z}}\_{22}^{\text{MCTF}} = \eta\_p \overline{\mathbf{T}}\_o^T + \eta\_o \overline{\mathbf{T}}\_p^T \tag{19}$$

and

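A minimal sketch of assembling and solving the discretized MCTF system of Eqs. (15)–(19) follows. The operator matrices here are random placeholders standing in for the tested T and K matrices (which a real solver fills via the operators of Eqs. (6) and (7)), and the impedance value for the plasmonic medium is illustrative.

```python
import numpy as np

def assemble_mctf(T_o, T_p, K_o, K_p, eta_o, eta_p):
    """Build the 2N x 2N MCTF matrix from the blocks of Eqs. (16)-(19)."""
    Z11 = eta_o * T_o + eta_p * T_p
    Z12 = -(K_o + K_p)
    Z21 = eta_o * eta_p * (K_o + K_p)
    Z22 = eta_p * T_o + eta_o * T_p
    return np.block([[Z11, Z12], [Z21, Z22]])

# Random placeholder matrices standing in for the tested operator matrices.
N = 4
rng = np.random.default_rng(0)
T_o, T_p, K_o, K_p = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
                      for _ in range(4))

Z = assemble_mctf(T_o, T_p, K_o, K_p, 376.73, 50.0 - 5.0j)  # eta_p value illustrative
w = rng.standard_normal(2 * N) + 1j * rng.standard_normal(2 * N)
a = np.linalg.solve(Z, w)  # a = [a_J; a_M], expansion coefficients of J and M
```

For large N, the direct solve would be replaced by an iterative method accelerated with MLFMA, as discussed in Section 4.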
