3. Linear γ-Hamiltonian systems

Definition 16 If there is a differentiable function, called the Hamiltonian function (energy), $\mathcal{H}(t, \mathbf{x}, \mathbf{y})$, $\mathcal{H} : \mathbb{R} \times \mathbb{R}^n \times \mathbb{R}^n \mapsto \mathbb{R}$, which satisfies

$$\dot{\mathbf{x}} = \left(\frac{\partial \mathcal{H}}{\partial \mathbf{y}}\right)^T \quad \text{and} \quad \dot{\mathbf{y}} = -\left(\frac{\partial \mathcal{H}}{\partial \mathbf{x}}\right)^T,$$

then it is called a Hamiltonian system. If $\mathcal{H}(t, \mathbf{x}, \mathbf{y})$ is a quadratic function with respect to $\mathbf{x}$ and $\mathbf{y}$, then the system is a linear Hamiltonian system.

It is easy to prove that if $\mathcal{H}$ does not depend on $t$, then $\mathcal{H}(\mathbf{x}, \mathbf{y})$ is a first integral. However, this is no longer true in the time-periodic case; there, even for $n = 1$, the equations cannot in general be integrated in closed form. Any linear Hamiltonian system can be written as

$$
\dot{\mathbf{z}} = JH(t)\mathbf{z} \tag{16}
$$

where $H^T(t) = H(t)$ is a symmetric matrix (Hermitian in the complex case). Herein, the variables used in the definition satisfy $\mathbf{z} = \left[\mathbf{x}^T, \mathbf{y}^T\right]^T$. Therefore, the dimension of a real Hamiltonian system is always even. Finally, note that the product $JH$ satisfies the condition for a Hamiltonian matrix. The fundamental property of any linear Hamiltonian system is that the state transition matrix of the system in Eq. (16) is a symplectic matrix (see [9] for more details).
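Both properties can be checked numerically. The sketch below is illustrative only: the random symmetric $H$, the dimension, and the time $t = 1.3$ are my choices, and $H$ is taken constant so that $\Phi(t) = e^{JHt}$. It verifies that $\Phi$ is symplectic and that the quadratic Hamiltonian is a first integral, i.e., $\Phi^T H \Phi = H$.

```python
# Illustrative check (assumed setup: constant symmetric H, t = 1.3):
# for z' = J H z the transition matrix Phi(t) = expm(J H t) satisfies
# Phi^T J Phi = J (symplectic) and Phi^T H Phi = H (energy conservation).
import numpy as np
from scipy.linalg import expm

n = 2
rng = np.random.default_rng(0)
S = rng.standard_normal((2 * n, 2 * n))
H = (S + S.T) / 2                          # arbitrary constant symmetric H
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])

Phi = expm(J @ H * 1.3)                    # state transition matrix at t = 1.3
print(np.allclose(Phi.T @ J @ Phi, J))     # True: Phi is symplectic
print(np.allclose(Phi.T @ H @ Phi, H))     # True: H(z) is a first integral
```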

If $A$ is a $\gamma$-Hamiltonian matrix or, equivalently, $A + \gamma I_{2n}$ is a Hamiltonian matrix for some $\gamma > 0$, then it follows from Eq. (16) that

$$
\dot{\mathbf{x}} = [A + \gamma I_{2n}]\mathbf{x} = JH\mathbf{x}
$$

for some matrix $H = H^T$. From the last equation, $A = JH - \gamma I_{2n}$, and since $-\gamma I_{2n} = J(\gamma J)$ (because $J^2 = -I_{2n}$), we obtain

$$A = J[H + \gamma J]. \tag{17}$$

Any γ-Hamiltonian matrix A may be written as in Eq. (17), which motivates the next definition.

Definition 17 Any linear system that can be written as

$$
\dot{\mathbf{x}} = A(t)\mathbf{x} = J[H(t) + \gamma J]\mathbf{x} \tag{18}
$$

with $\mathbf{x} \in \mathbb{R}^{2n}$, $H^T(t) = H(t)$, and $\gamma \geq 0$ is called a linear $\gamma$-Hamiltonian system.

Lemma 18 The state transition matrix of a linear $\gamma$-Hamiltonian system in Eq. (18) is $\mu$-symplectic with $\mu = e^{-2\gamma t}$.

Proof 19 Let $N(t) = \Phi(t, 0)$ be the state transition matrix of Eq. (18); then

$$
\dot{N}(t) = A(t)N(t).
$$

Differentiating the product $N^T J N$ gives

$$\frac{d}{dt}\left(N^T J N\right) = \dot{N}^T J N + N^T J \dot{N} = (AN)^T J N + N^T J (AN)$$

$$= N^T\left(A^T J + JA\right)N = N^T\left(\left(J(H + \gamma J)\right)^T J + J\left(J(H + \gamma J)\right)\right)N \tag{19}$$

$$= -2\gamma\, N^T J N.$$

Since $N^T(0)\, J\, N(0) = J$, we get¹

$$N^T(t)\, J\, N(t) = e^{-2\gamma t} J = \mu J.$$

Therefore, $N$ is $\mu$-symplectic. ■

Lemma 20 Consider the transformation

$$\mathbf{x} = \mathbf{S}(t)\mathbf{z} \tag{20}$$

with $S(t)$ a symplectic matrix for all $t$. Then the transformation in Eq. (20) preserves the $\gamma$-Hamiltonian form of the system in Eq. (18).

Proof 21 From the definition of a symplectic matrix, $S^T J S = J$, differentiation gives $\dot{S}^T J S + S^T J \dot{S} = 0$, thus $\dot{S}^T J S = -S^T J \dot{S}$; and from Eq. (20)

$$\dot{\mathbf{x}} = S\dot{\mathbf{z}} + \dot{S}\mathbf{z} \;\rightarrow\; S^{-1}\dot{\mathbf{x}} = \dot{\mathbf{z}} + S^{-1}\dot{S}\mathbf{z}$$

then, applying the transformation in Eq. (20) to Eq. (18), one obtains $\dot{\mathbf{z}} + S^{-1}\dot{S}\mathbf{z} = S^{-1}J(H + \gamma J)S\mathbf{z}$; then, from the symplectic-matrix identity $S^{-1} = J^{-1}S^T J$,

$$\dot{\mathbf{z}} = S^{-1}J(H + \gamma J)S\mathbf{z} - S^{-1}\dot{S}\mathbf{z} = \left(J^{-1}S^T J\right)J H S\mathbf{z} - \gamma I\mathbf{z} - \left(J^{-1}S^T J\right)\dot{S}\mathbf{z}$$

$$= J S^T H S\mathbf{z} - \gamma I\mathbf{z} + J S^T J \dot{S}\mathbf{z} = J\left(S^T H S + S^T J \dot{S} + \gamma J\right)\mathbf{z} = J\left(\tilde{H} + \gamma J\right)\mathbf{z}$$

where $\tilde{H} = S^T H S + S^T J \dot{S}$, but $\left(S^T J \dot{S}\right)^T = \dot{S}^T J^T S = -\dot{S}^T J S = S^T J \dot{S}$; therefore $\tilde{H} = \tilde{H}^T$. ■
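Lemma 18 can also be checked numerically. The sketch below is illustrative: the random symmetric $H$ is held constant (so that $N(t) = e^{At}$), and the values $\gamma = 0.3$, $t = 1.7$ are my choices. It verifies $N^T J N = e^{-2\gamma t} J$ for $A = J(H + \gamma J)$.

```python
# Numerical sanity check of Lemma 18 (assumed setup: constant H):
# for A = J(H + gamma*J), N(t) = expm(A t) satisfies N^T J N = e^{-2 gamma t} J,
# i.e., N is mu-symplectic with mu = exp(-2*gamma*t).
import numpy as np
from scipy.linalg import expm

n, gamma, t = 2, 0.3, 1.7
rng = np.random.default_rng(1)
S = rng.standard_normal((2 * n, 2 * n))
H = (S + S.T) / 2                         # arbitrary constant symmetric H
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])

A = J @ (H + gamma * J)                   # gamma-Hamiltonian matrix, Eq. (17)
N = expm(A * t)                           # state transition matrix N(t)
mu = np.exp(-2 * gamma * t)
print(np.allclose(N.T @ J @ N, mu * J))   # True
```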

#### 3.1 Mechanical, linear γ-Hamiltonian system

Consider any mechanical system described by the equation

$$
\tilde{M}\ddot{\mathbf{y}} + \tilde{D}\dot{\mathbf{y}} + \tilde{K}(t)\mathbf{y} = \mathbf{0} \tag{21}
$$

where $\mathbf{y}(t) \in \mathbb{R}^n$, $\tilde{K}(t) = \tilde{K}^T(t) \in \mathbb{R}^{n \times n}$, and the constant matrices $\tilde{M}, \tilde{D} \in \mathbb{R}^{n \times n}$ are such that $\tilde{M} = \tilde{M}^T > 0$ and $\tilde{D} = \tilde{D}^T$. Then there always exists a linear transformation $T$ such that

$$\begin{aligned} T^T \tilde{M} T &= I_n \\ T^T \tilde{D} T &= D = \mathrm{diag}\{d_1, d_2, \dots, d_n\} \\ \sigma\left(\tilde{M}^{-1}\tilde{D}\right) &= \{d_1, d_2, \dots, d_n\} \end{aligned}$$

¹ The matrix $\frac{d}{dt}\left(N^T J N\right)$ commutes with $N^T J N$: $\frac{d}{dt}\left(N^T J N\right)\left(N^T J N\right) = \left(N^T J N\right)\frac{d}{dt}\left(N^T J N\right)$.

(e.g., see [15]). Therefore, applying the transformation $\mathbf{y} = T\mathbf{z}$ yields

$$
\ddot{z} + D\dot{z} + K(t)z = 0,\tag{22}
$$

where $K(t) = T^T \tilde{K}(t) T$. Eq. (22) can be rewritten as a first-order system by introducing the state vector $\mathbf{x} = \left[\mathbf{z}^T, \dot{\mathbf{z}}^T\right]^T$:

$$
\dot{\boldsymbol{x}} = \begin{bmatrix} \mathbf{0}\_{n \times n} & I\_n \\ -K(t) & -D \end{bmatrix} \mathbf{x} \tag{23}
$$

where $\mathbf{x} \in \mathbb{R}^{2n}$. Let

$$Q = \frac{1}{\sqrt{2}} \begin{bmatrix} I\_n & I\_n \\ -I\_n & I\_n \end{bmatrix} \tag{24}$$

be an orthogonal matrix satisfying $QQ^T = Q^TQ = I_{2n}$ and also $JQ = QJ$. Introducing the transformation $\mathbf{w} = Q^T\mathbf{x}$, Eq. (23) gives

$$
\dot{\mathbf{w}} = Q^T \begin{bmatrix} \mathbf{0}_{n \times n} & I_n \\ -K(t) & -D \end{bmatrix} Q \mathbf{w} = \frac{1}{2} \begin{bmatrix} K(t) - I_n - D & K(t) + I_n + D \\ -K(t) + D - I_n & -K(t) + I_n - D \end{bmatrix} \mathbf{w},
$$

or equivalently,

$$\dot{\mathbf{w}} = J\left(\frac{1}{2}\begin{bmatrix} K(t) + I_n - D & K(t) - I_n \\ K(t) - I_n & K(t) + I_n + D \end{bmatrix} + \frac{1}{2}\begin{bmatrix} \mathbf{0}_{n \times n} & D \\ -D & \mathbf{0}_{n \times n} \end{bmatrix}\right)\mathbf{w}. \tag{25}$$

Since $D = \mathrm{diag}\{d_1, d_2, \dots, d_n\}$ and $K = K^T$, the matrix

$$H(t) = \frac{1}{2} \begin{bmatrix} K(t) + I_n - D & K(t) - I_n \\ K(t) - I_n & K(t) + I_n + D \end{bmatrix}$$

is also symmetric, $H(t) = H^T(t)$. Therefore, Eq. (25) can be cast into the $\gamma$-Hamiltonian linear system form $\dot{\mathbf{w}} = J(H + \gamma J)\mathbf{w}$ if $\gamma$ is approximated as $\gamma \approx \frac{1}{2n}\sum_{i=1}^{n} d_i$. In the special case $d = d_1 = d_2 = \cdots = d_n$, $\gamma$ is given exactly by $\gamma = \frac{d}{2}$.
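The whole construction of this subsection can be sketched numerically. The example below is illustrative, not from the source: the matrices, the damping level $d = 0.4$, and all variable names are my choices, and proportional damping $\tilde{D} = d\tilde{M}$ is assumed so that all $d_i$ are equal and $\gamma = d/2$ holds exactly. The transformation $T$ is built via a Cholesky factorization $\tilde{M} = LL^T$ and the eigenvectors of $L^{-1}\tilde{D}L^{-T}$.

```python
# Illustrative sketch: simultaneous diagonalization T of (M~, D~), then a
# check that the rotated first-order system equals J(H + gamma*J) exactly
# when all d_i are equal (gamma = d/2), as stated after Eq. (25).
import numpy as np
from scipy.linalg import cholesky, eigh

n = 3
rng = np.random.default_rng(2)
G = rng.standard_normal((n, n))
Mtil = G @ G.T + n * np.eye(n)           # M~ = M~^T > 0
d = 0.4
Dtil = d * Mtil                          # proportional damping: all d_i = d
G2 = rng.standard_normal((n, n))
Ktil = (G2 + G2.T) / 2                   # K~(t) frozen at one instant

# T = L^{-T} V with M~ = L L^T and V the orthonormal eigenvectors of
# L^{-1} D~ L^{-T}; then T^T M~ T = I and T^T D~ T = diag(d_i).
L = cholesky(Mtil, lower=True)
B = np.linalg.solve(L, np.linalg.solve(L, Dtil).T).T
dvals, V = eigh(B)
T = np.linalg.solve(L.T, V)
assert np.allclose(T.T @ Mtil @ T, np.eye(n))
assert np.allclose(T.T @ Dtil @ T, np.diag(dvals))

K = T.T @ Ktil @ T                       # transformed stiffness, K = K^T
Dd = np.diag(dvals)                      # here Dd = d * I_n
I, Z = np.eye(n), np.zeros((n, n))
A = np.block([[Z, I], [-K, -Dd]])        # first-order form, Eq. (23)
Q = np.block([[I, I], [-I, I]]) / np.sqrt(2)
J = np.block([[Z, I], [-I, Z]])
H = 0.5 * np.block([[K + I - Dd, K - I], [K - I, K + I + Dd]])
gamma = d / 2                            # exact, since all d_i are equal
print(np.allclose(Q.T @ A @ Q, J @ (H + gamma * J)))   # True
```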

#### 3.2 Periodic linear systems

This section summarizes the main results on periodic linear systems. The proofs and details are omitted and can be found in [16, 17]. Consider the linear periodic system:

$$
\dot{\mathbf{x}} = B(t)\mathbf{x} \quad \text{with} \quad B(t) = B(t+\Omega) \tag{26}
$$

where $\mathbf{x} \in \mathbb{R}^n$, $B(t) \in \mathbb{R}^{n \times n}$, and $\Omega$ is the fundamental period.

Theorem 22 (Floquet) The state transition matrix $\Phi(t, t_0)$ of the system in Eq. (26) may be factorized as

$$\Phi(t, t_0) = P^{-1}(t)e^{R(t - t_0)}P(t_0) \tag{27}$$

where

$$P^{-1}(t) = \Phi(t, \mathbf{0})e^{-Rt}.\tag{28}$$

In addition, $P^{-1}(t) = P^{-1}(t + \Omega)$ is a periodic matrix of the same period $\Omega$, and $R$ is in general a complex constant matrix [18].

Definition 23 We define the monodromy matrix $M$ associated with Eq. (26) as

$$M = \Phi(\Omega, \mathbf{0}).\tag{29}$$

The monodromy matrix may also be defined as $M_{t_0} = \Phi(\Omega + t_0, t_0)$, but here we use only the spectrum of the monodromy matrix, $\sigma(M)$. From

$$\Phi(t, t_0) = P^{-1}(t)e^{R(t - t_0)}P(t_0)\Big|_{t = t_0 + \Omega} = \Phi(\Omega + t_0, t_0) = P^{-1}(t_0 + \Omega)e^{R\Omega}P(t_0) = P^{-1}(t_0)e^{R\Omega}P(t_0),$$

because $P$ and $P^{-1}$ are $\Omega$-periodic. This last relation shows that $M$ and $M_{t_0}$ are similar matrices and possess the same spectrum. Moreover, if $t_0 = 0$ in the Floquet theorem, then $\Phi(t, 0) = Q(t)e^{Rt}$ with $Q(t) = Q(t + \Omega)$ and $Q(0) = I_n$; we have

$$M = \Phi(\Omega, 0) = Q(\Omega)e^{R\Omega} = Q(0)e^{R\Omega} = e^{R\Omega}. \tag{30}$$

Definition 24 The eigenvalues $\lambda_i$ of the monodromy matrix are called characteristic multipliers, or simply multipliers. The numbers $\rho_i$, not unique, defined by $\lambda_i = e^{\rho_i \Omega}$, are called characteristic exponents or Floquet exponents.
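In practice the monodromy matrix is computed by integrating the matrix ODE $\dot{\Phi} = B(t)\Phi$, $\Phi(0) = I_n$, over one period. The sketch below is illustrative: it uses the Mathieu equation $\ddot{y} + (a + b\cos t)y = 0$ with $\Omega = 2\pi$, and the parameter values $a = 1.2$, $b = 0.4$ are my choices.

```python
# Illustrative computation of the monodromy matrix, multipliers, and one
# branch of the Floquet exponents for a Mathieu-type periodic system.
import numpy as np
from scipy.integrate import solve_ivp

a, b, Omega = 1.2, 0.4, 2 * np.pi        # example parameters (my choice)

def rhs(t, phi):
    """Matrix ODE Phi' = B(t) Phi, flattened for solve_ivp."""
    Phi = phi.reshape(2, 2)
    B = np.array([[0.0, 1.0], [-(a + b * np.cos(t)), 0.0]])
    return (B @ Phi).ravel()

sol = solve_ivp(rhs, (0.0, Omega), np.eye(2).ravel(),
                rtol=1e-10, atol=1e-12)
M = sol.y[:, -1].reshape(2, 2)           # monodromy matrix M = Phi(Omega, 0)
multipliers = np.linalg.eigvals(M)       # characteristic multipliers
exponents = np.log(multipliers.astype(complex)) / Omega   # one choice of rho_i
print(np.abs(multipliers))
```

Since $B(t)$ here is traceless, Liouville's formula gives $\det M = 1$, so the two multipliers multiply to one; this is a useful consistency check on the integration.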

Corollary 25 (Lyapunov-Floquet Transformation) If we define the change of coordinates

$$z(t) = P(t)\mathbf{x}(t)\tag{31}$$

where P fulfills Eq. (28), then the periodic linear system in Eq. (26) can be transformed into a linear time-invariant system

$$
\dot{z}(t) = Rz(t) \tag{32}
$$

where R is a constant matrix as introduced in the Floquet theorem.

The transformation in Eq. (31) is a Lyapunov transformation, which means that the stability properties of the linear system in Eq. (26) are preserved. Therefore, any periodic system as in Eq. (26) is reducible to a system as in Eq. (32) with constant coefficients² ([16]). However, the matrix $R$ is not always real (e.g., see [10, 20]). In the present discussion, we only use its spectrum $\sigma(R)$.
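Numerically, $R$ can be recovered from the monodromy matrix through a matrix logarithm, $R = \frac{1}{\Omega}\log M$, consistent with Eq. (30). The sketch below is illustrative: the sample monodromy matrix is my choice, and `logm` returns only one branch of the (generally complex, non-unique) logarithm.

```python
# Illustrative recovery of R from M via R = logm(M)/Omega, so that
# expm(R*Omega) = M as in Eq. (30); only sigma(R) is needed for stability.
import numpy as np
from scipy.linalg import expm, logm

Omega = 2 * np.pi
M = np.array([[0.2, 1.0], [0.0, 0.4]])   # sample monodromy matrix (my choice)
R = logm(M) / Omega                      # one branch of the matrix logarithm
print(np.allclose(expm(R * Omega), M))   # True
print(np.linalg.eigvals(R))              # Floquet exponents (one branch)
```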

For analyzing $\mathbf{x}(t)$ as $t \to \infty$, we assume that the initial conditions are given at $t_0 = 0$. Then any $t > 0$ may be expressed as $t = k\Omega + \tau$, where $k \in \mathbb{Z}^+$ and $\tau \in [0, \Omega)$. Applying the well-known properties of the state transition matrix, the solution can be written as

$$\begin{aligned} \mathbf{x}(t) &= \Phi(t, 0)\mathbf{x}_0 = \Phi(k\Omega + \tau, 0)\mathbf{x}_0 \\ &= \Phi(k\Omega + \tau, k\Omega)\underbrace{\Phi(k\Omega, (k-1)\Omega)\Phi((k-1)\Omega, (k-2)\Omega)\cdots\Phi(\Omega, 0)}_{k\ \text{terms}}\,\mathbf{x}_0 \\ &= \Phi(\tau, 0)\left[\Phi(\Omega, 0)\right]^k\mathbf{x}_0 = \Phi(\tau, 0)M^k\mathbf{x}_0 \end{aligned}$$

Analyzing the last expression, the terms $\Phi(\tau, 0)$ and $\mathbf{x}_0$ are bounded, so the following three cases can be distinguished:

² For applying the transformation in Eq. (31), the analytical solution of Eq. (26) is available only in special cases [19]; in general, a numerical solution needs to be calculated.

$$\text{(a)}\quad \mathbf{x}(t) \to \mathbf{0} \iff \lim_{k \to \infty} M^k = \mathbf{0} \iff \sigma(M) \subset D = \{z \in \mathbb{C} : |z| < 1\},$$

$$\text{(b)}\quad \mathbf{x}(t)\ \text{remains bounded} \iff \left\{M^k\right\}_{k \geq 0}\ \text{is bounded} \iff \sigma(M) \subset \overline{D}\ \text{and every}\ \lambda \in \sigma(M)\ \text{with}\ |\lambda| = 1\ \text{is semisimple},$$

$$\text{(c)}\quad \mathbf{x}(t)\ \text{is unbounded for some}\ \mathbf{x}_0 \iff \text{some}\ \lambda \in \sigma(M)\ \text{has}\ |\lambda| > 1,\ \text{or}\ |\lambda| = 1\ \text{with}\ \lambda\ \text{defective}.$$
Theorem 26 (Lyapunov-Floquet) Consider the linear periodic system in Eq. (26). The system is (a) asymptotically stable if and only if condition (a) above is satisfied, (b) stable if and only if condition (b) is satisfied, and (c) unstable if and only if condition (c) is satisfied.
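Theorem 26 translates directly into a numerical test on the multipliers. The sketch below is my own hedged rendering (the function name and the tolerance `tol` are my choices); note that when multipliers lie on the unit circle, distinguishing cases (b) and (c) additionally requires checking semisimplicity, which a plain eigenvalue test cannot decide.

```python
# Hedged sketch of Theorem 26 as a spectral-radius test on the monodromy
# matrix M (thresholds are my choice, not from the source).
import numpy as np

def floquet_stability(M, tol=1e-9):
    """Classify a linear periodic system from its monodromy matrix M."""
    lam = np.linalg.eigvals(M)
    if np.all(np.abs(lam) < 1 - tol):
        return "asymptotically stable"       # all multipliers inside unit disk
    if np.any(np.abs(lam) > 1 + tol):
        return "unstable"                    # a multiplier outside unit disk
    return "stable (marginal) or unstable"   # multipliers on the circle:
                                             # semisimplicity must be checked

print(floquet_stability(np.diag([0.5, 0.8])))   # asymptotically stable
```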

Due to the Lyapunov-Floquet transformation in Eq. (31), the stability of the periodic linear system in Eq. (26) can be determined by analyzing the system in Eq. (32).

Corollary 27 The system in Eq. (26) is:

