**1. Introduction**

Discrete-time linear control systems are applied in a wide range of areas such as engineering, economics, and biology. Systems of this type have been studied intensively in the control literature in both the deterministic and the stochastic framework. The stability and optimal control of stochastic differential equations with Markovian switching have recently received a lot of attention; see Freiling and Hochhaus [8], Costa, Fragoso, and Marques [2], and Dragan and Morozan [4, 5]. The equilibrium of these discrete-time stochastic systems can be found via the maximal solution of the corresponding set of discrete-time Riccati equations.

We consider a set of discrete-time generalized Riccati equations that arise in the quadratic optimal control of discrete-time stochastic systems subject to both state-dependent noise and Markovian jumps, i.e. discrete-time Markovian jump linear systems (MJLS). An iterative method to compute the maximal and stabilizing solution of a wide class of discrete-time nonlinear equations was derived by Dragan, Morozan and Stoica [6, 7].

We study the problem of computing the maximal symmetric solution to the following set of discrete-time generalized algebraic Riccati equations (DTGAREs):

$$\begin{split} X(i) = \mathcal{P}(i, \mathbf{X}) := &\sum\_{l=0}^{r} A\_l(i)^T \mathcal{E}\_i(\mathbf{X}) A\_l(i) + C^T(i) C(i) \\ &- \left(\sum\_{l=0}^{r} A\_l(i)^T \mathcal{E}\_i(\mathbf{X}) B\_l(i) + L(i)\right) R(i, \mathbf{X})^{-1} \left(\sum\_{l=0}^{r} B\_l(i)^T \mathcal{E}\_i(\mathbf{X}) A\_l(i) + L(i)^T\right), \end{split} \qquad i = 1, \dots, N, \tag{1}$$

where $R(i, \mathbf{X}) = R(i) + \sum\_{l=0}^{r} B\_l(i)^T \mathcal{E}\_i(\mathbf{X}) B\_l(i)$ and $\mathcal{E}(\mathbf{X}) = (\mathcal{E}\_1(\mathbf{X}), \dots, \mathcal{E}\_N(\mathbf{X}))$ with $\mathbf{X} = (X(1), \dots, X(N))$ and

$$\mathcal{E}\_i(\mathbf{X}) = \sum\_{j=1}^{N} p\_{ij}\, X(j)\,, \qquad X(j) \text{ is an } n \times n \text{ matrix, for } i = 1, \dots, N.$$
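For orientation, the coupling operator $\mathcal{E}\_i(\cdot)$ is simply a transition-probability-weighted sum of the mode matrices. A minimal numpy sketch with made-up data (the function name `coupling` and the 0-based mode indexing are illustrative, not from the text):

```python
import numpy as np

def coupling(P, X, i):
    """E_i(X) = sum_j p_{ij} X(j); P is the N x N transition matrix,
    X is a list of N symmetric n x n matrices, i is a 0-based mode index."""
    return sum(P[i, j] * X[j] for j in range(len(X)))

# toy data: N = 2 modes, n = 2
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
X = [np.eye(2), 2.0 * np.eye(2)]
E0 = coupling(P, X, 0)   # 0.7*I + 0.3*(2I) = 1.3*I
```

Note that $\mathcal{E}\_i$ is linear and positive: if every $X(j) \ge 0$, then $\mathcal{E}\_i(\mathbf{X}) \ge 0$, since the $p\_{ij}$ are nonnegative.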

©2012 Ivanov, licensee InTech. This is an open access chapter distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

*Stochastic Modeling and Control* | Iterations for a General Class of Discrete-Time Riccati-Type Equations: A Survey and Comparison

In addition, the operator

$$\Pi\left(i,\mathbf{X}\right) = \begin{pmatrix} \sum\_{l=0}^{r} A\_{l}(i)^{T}\mathcal{E}\_{i}(\mathbf{X})A\_{l}(i) & \sum\_{l=0}^{r} A\_{l}(i)^{T}\mathcal{E}\_{i}(\mathbf{X})B\_{l}(i) + L(i) \\\\ \sum\_{l=0}^{r} B\_{l}(i)^{T}\mathcal{E}\_{i}(\mathbf{X})A\_{l}(i) + L(i)^{T} & \sum\_{l=0}^{r} B\_{l}(i)^{T}\mathcal{E}\_{i}(\mathbf{X})B\_{l}(i) \end{pmatrix}$$


is assumed to be linear and positive, i.e. **X** ≥ 0 implies Π (*i*, **X**) ≥ 0 for *i* = 1, . . . , *N*. That is a natural assumption (see assumption *H*1, [6]). The notation **X** ≥ 0 means that *X*(*i*) ≥ 0, *i* = 1, . . . , *N*.

Such systems of discrete-time Riccati equations *X*(*i*) = P(*i*, **X**), *i* = 1, . . . , *N* are used to determine the solutions of linear-quadratic optimization problems for a discrete-time MJLS [5]. More precisely, these optimization problems are described by controlled systems of the type:

$$x(t+1) = \left[A\_0(\eta\_t) + \sum\_{l=1}^r w\_l(t)A\_l(\eta\_t)\right]x(t) + \left[B\_0(\eta\_t) + \sum\_{l=1}^r w\_l(t)B\_l(\eta\_t)\right]u(t) \tag{2}$$

for *x*(0) = *x*<sup>0</sup> and the output

$$y(t) = C(\eta\_t)\, x(t) + D(\eta\_t)\, u(t)\,,$$

where {*ηt*}*t*≥<sup>0</sup> is a Markov chain taking values in {1, 2, . . . , *N*} with transition probability matrix (*pij*)*<sup>N</sup> <sup>i</sup>*,*j*=1. Moreover, {*w*(*t*)}*t*≥<sup>0</sup> is a sequence of independent random vectors (*w*(*t*) = (*w*1(*t*),..., *wr*(*t*))*T*), for details see e.g. [5–7].
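The controlled system (2) is straightforward to simulate. The following sketch generates one trajectory with made-up coefficients, a fixed control, and standard normal noise; all names and numerical values are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, k, r = 2, 2, 1, 1            # modes, state dim, input dim, noise terms

# made-up mode-dependent coefficients A[l][i] = A_l(i), B[l][i] = B_l(i)
A = [[0.5 * np.eye(n), 0.3 * np.eye(n)],
     [0.1 * np.eye(n), 0.05 * np.eye(n)]]
B = [[np.ones((n, k)), 0.5 * np.ones((n, k))],
     [0.1 * np.ones((n, k)), np.zeros((n, k))]]
P = np.array([[0.9, 0.1], [0.2, 0.8]])   # transition probability matrix

x = np.ones(n)                      # x(0) = x_0
eta = 0                             # initial mode of the Markov chain
for t in range(50):
    u = -0.1 * np.ones(k)           # a fixed control, for illustration only
    w = rng.standard_normal(r)      # i.i.d. noise vector w(t)
    Ad = A[0][eta] + sum(w[l - 1] * A[l][eta] for l in range(1, r + 1))
    Bd = B[0][eta] + sum(w[l - 1] * B[l][eta] for l in range(1, r + 1))
    x = Ad @ x + Bd @ u             # state update (2)
    eta = rng.choice(N, p=P[eta])   # Markov chain transition
```

A stabilizing feedback in the sense of Definition 1.1 below would replace the fixed `u` by `F[eta] @ x`.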

We define the matrices *A<sub>l</sub>*, *B<sub>l</sub>* by *A<sub>l</sub>* = (*A<sub>l</sub>*(1), …, *A<sub>l</sub>*(*N*)) and *B<sub>l</sub>* = (*B<sub>l</sub>*(1), …, *B<sub>l</sub>*(*N*)), where *A<sub>l</sub>*(*i*) is an *n* × *n* matrix and *B<sub>l</sub>*(*i*) is an *n* × *k* matrix for *l* = 0, 1, …, *r* and *i* = 1, …, *N*, and we set **A** = (*A*<sub>0</sub>, *A*<sub>1</sub>, …, *A<sub>r</sub>*) and **B** = (*B*<sub>0</sub>, *B*<sub>1</sub>, …, *B<sub>r</sub>*). We recall Definition 4.1 of [7] in the following form:

**Definition 1.1.** *We say that the couple* (**A**, **B**) *is stabilizable if for some* **F** = (*F*(1),..., *F*(*N*)) *the closed loop system:*

$$\mathbf{x}(t+1) = \left[A\_0(\eta\_t) + B\_0(\eta\_t)F(\eta\_t) + \sum\_{l=1}^r w\_l(t)(A\_l(\eta\_t) + B\_l(\eta\_t)F(\eta\_t))\right] \mathbf{x}(t)$$

*is exponentially stable in mean square (ESMS).*

The matrix **F** involved in the above definition is called stabilizing feedback gain.

We will investigate the computation of the maximal solution of a set of equations (1). A solution **<sup>X</sup>**˜ of (1) is called maximal if **<sup>X</sup>**˜ <sup>≥</sup> **<sup>X</sup>** for any solution **<sup>X</sup>**.

We will consider three cases. In the first case the weighting matrices *R*(*i*) = *D<sup>T</sup>*(*i*) *D*(*i*), *i* = 1, …, *N* are assumed to be positive definite and *Q*(*i*) = *C<sup>T</sup>*(*i*) *C*(*i*), *i* = 1, …, *N* are positive semidefinite. Thus the matrices *R*(*i*, **X**) = *R*(*i*) + ∑*<sup>r</sup><sub>l=0</sub> B<sub>l</sub>*(*i*)*<sup>T</sup>*E*<sub>i</sub>*(**X**)*B<sub>l</sub>*(*i*), *i* = 1, …, *N* are positive definite. We present an overview of several computational methods [6, 10] for computing the maximal and stabilizing solutions of the considered class of discrete-time Riccati equations. In addition, we apply a new approach in which the variables are changed and an equivalent set of Riccati equations of the same type is obtained. A new iteration for the maximal solution to this equivalent set of nonlinear equations is proposed. This is the subject of section 2.

In the second case we investigate the applicability of the existing methods for the maximal solution to (1) when the weighting matrices *R*(*i*), *Q*(*i*) are indefinite. Although these weighting matrices are indefinite, the matrices *R*(*i*, **X**), *i* = 1, …, *N* are still positive definite. Similar investigations have been carried out by Rami and Zhou [12, 14] in the infinite-time-horizon case. The important tool for finding the maximal solution is semidefinite programming associated with linear matrix inequalities (LMIs). The method that computes the maximal solution via an LMI optimization problem is called the LMI approach or the LMI method. Rami and Zhou [14] have described a technique for applying the LMI method to the indefinite linear-quadratic problem in the infinite time horizon. Here we extend their findings and modify their technique for the indefinite linear-quadratic problem of Markovian jump linear systems. We propose a new optimization problem suitable for this setting. The investigation is accompanied by comparisons of the LMI approach on different numerical examples. This is the subject of section 3.
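To fix ideas, the LMI approach characterizes the maximal solution as the optimizer of a semidefinite program. Schematically, in the spirit of Rami and Zhou [14] and written with the notations $Q(i,\cdot)$, $S(i,\cdot)$, $R(i,\cdot)$ introduced in section 2 (the precise MJLS constraints and assumptions are developed in section 3):

$$\max\_{\mathbf{X} = (X(1), \dots, X(N))} \; \sum\_{i=1}^{N} \operatorname{trace} X(i) \quad \text{subject to} \quad \begin{pmatrix} Q(i, \mathbf{X}) - X(i) & S(i, \mathbf{X}) \\ S(i, \mathbf{X})^T & R(i, \mathbf{X}) \end{pmatrix} \succeq 0, \qquad i = 1, \dots, N.$$

By a Schur-complement argument, when $R(i, \mathbf{X}) > 0$ the constraint is equivalent to $X(i) \le \mathcal{P}(i, \mathbf{X})$, so maximizing the trace drives **X** toward the maximal solution of (1).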

The third case is considered in section 4. Here the solution of (1) is analysed under the assumption that at least one of the matrices *R*(*i*, **X**), *i* = 1, …, *N* is only positive semidefinite. In this case the set of equations (1) can be written as

$$X(i) = \sum\_{l=0}^{r} A\_l(i)^T \mathcal{E}\_i(\mathbf{X}) A\_l(i) + C^T(i) C(i) - S(i, \mathbf{X}) \left(R(i, \mathbf{X})\right)^\dagger S(i, \mathbf{X})^T \tag{3}$$

for *i* = 1, . . . , *N* with the additional conditions


$$R(i, \mathbf{X}) \ge 0, \quad \text{and} \quad \left(I - \left(R(i, \mathbf{X})\right)^{\dagger} R(i, \mathbf{X})\right) S(i, \mathbf{X})^{T} = 0. \tag{4}$$

This type of generalized Riccati equation was introduced in [15]. The notation *Z*† stands for the Moore-Penrose inverse of a matrix *Z*. We derive a suitable iteration formula for computing the maximal solution of (3)-(4), and the convergence properties of the induced matrix sequence are proved. In addition, the LMI approach is modified and applied to the case of semidefinite matrices *R*(*i*, **X**) (*i* = 1, …, *N*). Numerical simulations comparing the derived methods are presented in the section.
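Condition (4) is a range condition: it states that the columns of $S(i,\mathbf{X})^T$ lie in the range of $R(i,\mathbf{X})$, so the Moore-Penrose inverse in (3) behaves like a true inverse on the relevant subspace. A small numpy check of this condition on made-up matrices (`condition4_holds` is a hypothetical helper name, not from the text):

```python
import numpy as np

def condition4_holds(R, S, tol=1e-10):
    """Check (I - pinv(R) @ R) @ S.T == 0, i.e. range(S^T) is contained in range(R)."""
    Rp = np.linalg.pinv(R)                       # Moore-Penrose inverse R^dagger
    residual = (np.eye(R.shape[0]) - Rp @ R) @ S.T
    return np.linalg.norm(residual) < tol

# R is positive semidefinite but singular (rank 1)
R = np.diag([1.0, 0.0])
S_ok  = np.array([[2.0, 0.0]])   # S^T = (2, 0)^T lies in range(R): condition holds
S_bad = np.array([[0.0, 2.0]])   # S^T = (0, 2)^T does not: condition fails
```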

We carry out a number of numerical experiments in this investigation, and on the basis of the results the considered methods are compared in all cases. In the examples we consider an MJLS with three operation modes describing an economic system, adapted from [17], which studies a time-variant macroeconomic model where some of the parameters are allowed to fluctuate in an exogenous form, according to a Markov chain. The operation modes are interpreted as the general situation: "neutral", "bad" or "good" (*N* = 3). See [17] and the references therein for more details. Our experiments are carried out in MATLAB on a 1.7 GHz Pentium computer, using suitable MATLAB procedures.

The notation H*<sup>n</sup>* stands for the linear space of symmetric matrices of size *n* over the field of real numbers. For any *X*, *Y* ∈ H*<sup>n</sup>*, we write *X* > *Y* or *X* ≥ *Y* if *X* − *Y* is positive definite or positive semidefinite, respectively. The notation **X** = (*X*(1), *X*(2), …, *X*(*N*)) ∈ H*<sup>n</sup>* and the inequality **Y** ≥ **Z** mean that *X*(*i*) ∈ H*<sup>n</sup>* and *Y*(*i*) ≥ *Z*(*i*) for *i* = 1, …, *N*, respectively. The linear space H*<sup>n</sup>* is a Hilbert space with the Frobenius inner product ⟨*X*, *Y*⟩ = trace(*XY*). Let ‖·‖ denote the spectral matrix norm.


## **2. The positive definite case**

Let us assume that the weighting matrices *R*(*i*), *i* = 1, …, *N* are positive definite and *Q*(*i*), *i* = 1, …, *N* are positive semidefinite. Thus the matrices *R*(*i*, **X**), *i* = 1, …, *N* are positive definite. In this section, we consider the set of equations (1) where the matrix **X** belongs to the domain:

$$\operatorname{Dom} \mathcal{P} = \left\{ \mathbf{X} \in \mathcal{H}^n \,\middle|\, R(i, \mathbf{X}) = R(i) + \sum\_{l=0}^r B\_l(i)^T \mathcal{E}\_i(\mathbf{X}) B\_l(i) > 0, \; i = 1, 2, \dots, N \right\}.$$

Note that **X** ∈ *Dom* P implies **Y** ∈ *Dom* P for all **Y** ≥ **X**, and that *Dom* P is open and convex. We consider the map P : *Dom* P → H*<sup>n</sup>*. We investigate some iterations for finding the maximal solution to (1). For the matrix function P(*i*, **X**) we introduce the notations

$$\begin{split} Q(i,\mathbf{Z}) &= \sum\_{l=0}^{r} A\_l(i)^T \mathcal{E}\_i(\mathbf{Z}) A\_l(i) + C^T(i) C(i)\,; \\ S(i,\mathbf{Z}) &= \sum\_{l=0}^{r} A\_l(i)^T \mathcal{E}\_i(\mathbf{Z}) B\_l(i) + L(i)\,; \\ F(i,\mathbf{Z}) &= -R(i,\mathbf{Z})^{-1} S(i,\mathbf{Z})^T, \quad \text{(note that } S(i,\mathbf{Z}) = -F(i,\mathbf{Z})^T R(i,\mathbf{Z})\text{)} \\ T(i,\mathbf{Z}) &= C^T(i)C(i) + F(i,\mathbf{Z})^T L(i)^T + L(i)\, F(i,\mathbf{Z}) + F(i,\mathbf{Z})^T R(i)\, F(i,\mathbf{Z}) \\ &= \begin{pmatrix} I & F(i,\mathbf{Z})^T \end{pmatrix} \begin{pmatrix} C^T(i)C(i) & L(i) \\ L(i)^T & R(i) \end{pmatrix} \begin{pmatrix} I \\ F(i,\mathbf{Z}) \end{pmatrix} \end{split}$$

and we present the set of equations (1) as follows:

$$X(i) = Q(i, \mathbf{X}) - S(i, \mathbf{X}) \, R(i, \mathbf{X})^{-1} \, S(i, \mathbf{X})^T \,,$$

with *i* = 1, . . . , *N* .
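As an illustration, the right-hand side $Q(i,\mathbf{X}) - S(i,\mathbf{X})\, R(i,\mathbf{X})^{-1} S(i,\mathbf{X})^T$ can be evaluated directly from the problem data. A numpy sketch with made-up toy data (0-based mode index; `riccati_rhs` is a hypothetical name, not from the text):

```python
import numpy as np

def riccati_rhs(A, B, C, L, R0, P, X, i):
    """Evaluate P(i, X) = Q(i, X) - S(i, X) R(i, X)^{-1} S(i, X)^T.
    A[l][i], B[l][i] are the mode coefficients, R0[i] = R(i), C[i], L[i]
    the weights, P the transition matrix; i is a 0-based mode index."""
    E = sum(P[i, j] * X[j] for j in range(len(X)))                    # E_i(X)
    Q = sum(A[l][i].T @ E @ A[l][i] for l in range(len(A))) + C[i].T @ C[i]
    S = sum(A[l][i].T @ E @ B[l][i] for l in range(len(A))) + L[i]
    R = R0[i] + sum(B[l][i].T @ E @ B[l][i] for l in range(len(B)))
    return Q - S @ np.linalg.solve(R, S.T)

# toy data (made up): N = 2 modes, n = k = 1, a single term l = 0
A  = [[np.array([[0.5]]), np.array([[0.3]])]]
B  = [[np.array([[1.0]]), np.array([[1.0]])]]
C  = [np.array([[1.0]]), np.array([[1.0]])]
L  = [np.zeros((1, 1)), np.zeros((1, 1))]
R0 = [np.eye(1), np.eye(1)]
P  = np.array([[0.5, 0.5], [0.5, 0.5]])
X  = [np.zeros((1, 1)), np.zeros((1, 1))]
val = riccati_rhs(A, B, C, L, R0, P, X, 0)   # with X = 0 this reduces to C^T C
```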

Then the matrix function P(*i*, **X**) can be rewritten as

$$\mathcal{P}(i, \mathbf{X}) = Q(i, \mathbf{X}) - F(i, \mathbf{X})^T R(i, \mathbf{X}) \, F(i, \mathbf{X}).$$

We will study the system *X*(*i*) = P(*i*, **X**) for *i* = 1, …, *N*. We start with some useful properties of P(*i*, **X**). For brevity we write *Ã<sub>l</sub>*(*i*, **Z**) = *A<sub>l</sub>*(*i*) + *B<sub>l</sub>*(*i*)*F*(*i*, **Z**), *l* = 0, 1, …, *r*, for some **Z** ∈ H*<sup>n</sup>*.

**Lemma 2.1.** *[10] Assume that* **Y** ∈ H*<sup>n</sup>* *and* **Z** ∈ H*<sup>n</sup>* *are symmetric matrices. Then the following properties of* P(*i*, **X**), *i* = 1, …, *N*

$$\begin{aligned} \mathcal{P}\_{\mathbf{Z}}(i, \mathbf{Y}) &= \sum\_{l=0}^{r} \tilde{A}\_{l}(i, \mathbf{Z})^{\mathrm{T}} \mathcal{E}\_{i}(\mathbf{Y}) \tilde{A}\_{l}(i, \mathbf{Z}) + T(i, \mathbf{Z}) \\ &- \left( F(i, \mathbf{Y})^{\mathrm{T}} - F(i, \mathbf{Z})^{\mathrm{T}} \right) R(i, \mathbf{Y}) \left( F(i, \mathbf{Y}) - F(i, \mathbf{Z}) \right) \\\\ \mathcal{P}\_{\mathbf{Z}}(i, \mathbf{Z}) - \mathcal{P}\_{\mathbf{Z}}(i, \mathbf{Y}) &= \sum\_{l=0}^{r} \tilde{A}\_{l}(i, \mathbf{Z})^{\mathrm{T}} \mathcal{E}\_{i}(\mathbf{Z} - \mathbf{Y}) \tilde{A}\_{l}(i, \mathbf{Z}) \\ &+ \left( F(i, \mathbf{Y})^{\mathrm{T}} - F(i, \mathbf{Z})^{\mathrm{T}} \right) R(i, \mathbf{Y}) \left( F(i, \mathbf{Y}) - F(i, \mathbf{Z}) \right) \end{aligned}$$

*hold.*


Dragan, Morozan and Stoica [6] have proposed an iterative procedure for computing the maximal solution of the set of nonlinear equations (1). The proposed iteration [6, iteration (4.7)] is:

$$\begin{split} X(i)^{(k)} &= \mathcal{P}\_{\mathbf{X}^{(k-1)}}(i, \mathbf{X}^{(k-1)}) + \frac{\varepsilon}{k} I\_n \\ &= \sum\_{l=0}^{r} \left[ \tilde{A}\_{l}(i, \mathbf{X}^{(k-1)}) \right]^{T} \mathcal{E}\_{i}(\mathbf{X}^{(k-1)}) \left[ \tilde{A}\_{l}(i, \mathbf{X}^{(k-1)}) \right] \\ &\quad + T(i, \mathbf{X}^{(k-1)}) + \frac{\varepsilon}{k} I\_n \,, \end{split} \tag{5}$$
  $\text{where} \qquad \tilde{A}\_{l}(i, \mathbf{X}^{(k-1)}) = A\_{l}(i) + B\_{l}(i)F(i, \mathbf{X}^{(k-1)}) \,,$ 

*k* = 1, 2, 3, …, and *ε* is a small positive number. Note that iteration (5) is a special case of the general iterative method given in [6, Theorem 3.3]. Based on the Gauss-Seidel technique, the following modification was proposed by Ivanov [10]:

$$\begin{split} X(i)^{(k)} &= \sum\_{l=0}^{r} \left[ \tilde{A}\_{l}(i, \mathbf{X}^{(k-1)}) \right]^{T} \\ &\quad \times \left( \mathcal{E}\_{i1}(\mathbf{X}^{(k)}) + p\_{ii} X(i)^{(k-1)} + \mathcal{E}\_{i2}(\mathbf{X}^{(k-1)}) \right) \\ &\quad \times \left[ \tilde{A}\_{l}(i, \mathbf{X}^{(k-1)}) \right] + T(i, \mathbf{X}^{(k-1)}) \,, \\ i &= 1, 2, \ldots, N, \; k = 1, 2, 3 \ldots \end{split} \tag{6}$$

where

$$\mathcal{E}\_{i1}(\mathbf{Z}) = \sum\_{j=1}^{i-1} p\_{ij} \, Z(j), \quad \text{and} \quad \mathcal{E}\_{i2}(\mathbf{Z}) = \sum\_{j=i+1}^{N} p\_{ij} \, Z(j)\,.$$

The convergence properties of matrix sequences defined by (5) and (6) are derived in the corresponding papers.
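A direct transcription of iteration (5) into code is a useful reference point. The numpy sketch below (hypothetical function name `iterate5`, made-up scalar test data) iterates the closed-loop form $\sum\_l \tilde{A}\_l^T \mathcal{E}\_i \tilde{A}\_l + T + (\varepsilon/k)I$:

```python
import numpy as np

def iterate5(A, B, C, L, R0, P, X0, eps=1e-8, iters=60):
    """Iteration (5): X(i)^(k) = sum_l Al~^T E_i(X^(k-1)) Al~ + T(i, X^(k-1)) + (eps/k) I.
    A[l][i], B[l][i] mode coefficients; R0[i] = R(i); 0-based mode indices."""
    N, n, r1 = len(X0), X0[0].shape[0], len(A)
    X = [x.copy() for x in X0]
    for k in range(1, iters + 1):
        Xnew = []
        for i in range(N):
            E = sum(P[i, j] * X[j] for j in range(N))            # E_i(X^(k-1))
            R = R0[i] + sum(B[l][i].T @ E @ B[l][i] for l in range(r1))
            S = sum(A[l][i].T @ E @ B[l][i] for l in range(r1)) + L[i]
            F = -np.linalg.solve(R, S.T)                         # F(i, X^(k-1))
            T = C[i].T @ C[i] + F.T @ L[i].T + L[i] @ F + F.T @ R0[i] @ F
            acc = T + (eps / k) * np.eye(n)
            for l in range(r1):
                Atl = A[l][i] + B[l][i] @ F                      # A~_l(i, X^(k-1))
                acc += Atl.T @ E @ Atl
            Xnew.append(acc)
        X = Xnew
    return X

# scalar sanity example (single mode, l = 0 only): the maximal solution of
# x = 0.25 x + 1 - 0.25 x^2 / (1 + x) is the positive root of x^2 - 0.25 x - 1 = 0
A  = [[np.array([[0.5]])]]
B  = [[np.array([[1.0]])]]
C  = [np.array([[1.0]])]
L  = [np.zeros((1, 1))]
R0 = [np.eye(1)]
P  = np.array([[1.0]])
Xmax = iterate5(A, B, C, L, R0, P, [np.array([[10.0]])])
```

The starting point `X0 = 10` satisfies condition (c) of Theorem 2.1 below in spirit: the sequence decreases monotonically toward the maximal solution.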

The method can be applied under the assumption that the matrix inequalities P(*i*, **Z**) ≥ *Z*(*i*) and P(*i*, **Z**) ≤ *Z*(*i*), (*i* = 1, . . . , *N*) are solvable. Under these conditions the convergence of (6) takes place if the algorithm starts at any suitable initial point **X**(**0**). The new iteration (6) can be considered as an accelerated modification to iteration (5). The convergence result is given by the following theorem:

**Theorem 2.1.** *[10] Suppose there exist symmetric matrices* $\hat{\mathbf{X}} = (\hat{X}(1), \dots, \hat{X}(N)) \in \operatorname{Dom}\mathcal{P}$ *and* $\mathbf{X}^{(0)} = (X(1)^{(0)}, \dots, X(N)^{(0)})$ *such that (a)* $\mathcal{P}(i, \hat{\mathbf{X}}) \ge \hat{X}(i)$*; (b)* $\mathbf{X}^{(0)} \ge \hat{\mathbf{X}}$*; (c)* $\mathcal{P}(i, \mathbf{X}^{(0)}) \le X(i)^{(0)}$ *for* $i = 1, \dots, N$*. Then for the matrix sequences* $\{X(1)^{(k)}\}\_{k=1}^{\infty}, \dots, \{X(N)^{(k)}\}\_{k=1}^{\infty}$ *defined by (6) the following properties are satisfied:*

(i) *We have* **<sup>X</sup>**(*k*) <sup>≥</sup> **Xˆ** , **<sup>X</sup>**(*k*) <sup>≥</sup> **<sup>X</sup>**(*k*+1) *and*

$$\mathcal{P}(i, \mathbf{X}^{(k)}) = X(i)^{(k+1)} + \sum\_{l=0}^{r} \tilde{A}\_l(i, \mathbf{X}^{(k)})^T \mathcal{E}\_{i1} (\mathbf{X}^{(k)} - \mathbf{X}^{(k+1)})\, \tilde{A}\_l(i, \mathbf{X}^{(k)}),$$

*where i* = 1, 2, . . . , *N*, *k* = 0, 1, 2, . . .*;*

(ii) *the sequences* {*X*(1)(*k*)},..., {*X*(*N*)(*k*)} *converge to the maximal solution* **X˜** *of the set of equations X*(*i*) = <sup>P</sup>(*i*, **<sup>X</sup>**) *and* **X˜** <sup>≥</sup> **Xˆ** *.*
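The Gauss-Seidel modification (6) reuses, within the current sweep, the blocks $X(j)^{(k)}$ already updated for $j < i$, while the feedback $F(i, \mathbf{X}^{(k-1)})$ and the term $T(i, \mathbf{X}^{(k-1)})$ are still formed from the previous sweep. A numpy sketch (hypothetical name `iterate6`, made-up test data):

```python
import numpy as np

def iterate6(A, B, C, L, R0, P, X0, iters=60):
    """Gauss-Seidel iteration (6); 0-based mode indices, same data layout as (5)."""
    N, r1 = len(X0), len(A)
    X = [x.copy() for x in X0]
    for _ in range(iters):
        Xprev = [x.copy() for x in X]                # X^(k-1)
        for i in range(N):
            # F(i, X^(k-1)) and T(i, X^(k-1)) come entirely from the previous sweep
            Eprev = sum(P[i, j] * Xprev[j] for j in range(N))
            R = R0[i] + sum(B[l][i].T @ Eprev @ B[l][i] for l in range(r1))
            S = sum(A[l][i].T @ Eprev @ B[l][i] for l in range(r1)) + L[i]
            F = -np.linalg.solve(R, S.T)
            T = C[i].T @ C[i] + F.T @ L[i].T + L[i] @ F + F.T @ R0[i] @ F
            # mixed coupling: updated blocks for j < i, previous blocks for j >= i
            Emix = (sum(P[i, j] * X[j] for j in range(i))
                    + sum(P[i, j] * Xprev[j] for j in range(i, N)))
            X[i] = T + sum((A[l][i] + B[l][i] @ F).T @ Emix @ (A[l][i] + B[l][i] @ F)
                           for l in range(r1))
    return X

# same made-up scalar example as for iteration (5); with N = 1 the mixed
# coupling collapses to the previous iterate, so both iterations agree
A  = [[np.array([[0.5]])]]
B  = [[np.array([[1.0]])]]
C  = [np.array([[1.0]])]
L  = [np.zeros((1, 1))]
R0 = [np.eye(1)]
P  = np.array([[1.0]])
Xmax6 = iterate6(A, B, C, L, R0, P, [np.array([[10.0]])])
```

For $N > 1$, using the freshly updated blocks typically accelerates convergence, which is the point of the modification.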


Iterations for a General Class of Discrete-Time Riccati-Type Equations: A Survey and Comparison 153


In this section we prove that iteration (5) has a linear rate of convergence.

**Theorem 2.2.** *Assume that conditions (a), (b), (c) of Theorem 2.1 are fulfilled for a symmetric solution* $\hat{\mathbf{X}} \in \mathrm{Dom}\,\mathcal{P}$ *of the set of equations (1). Then the sequence* $\{\mathbf{X}^{(k)}\}_{k=1}^{\infty}$ *defined by (5) converges to the maximal solution* $\tilde{\mathbf{X}}$. *If*

$$\max\_{1 \le i \le N} \|X(i)^{(0)} - \tilde{X}(i)\| < \frac{2 - \sum\_{l=0}^{r} \|\tilde{A}\_l(i, \tilde{\mathbf{X}})\|^2}{a} = \frac{2 - b}{a}$$

*where*

$$a = \left\| \mathcal{R}(i, \mathbf{X}^{(0)}) \right\| \left\| \mathcal{R}(i, \tilde{\mathbf{X}})^{-1} \right\|^2 \left( \left\| \mathcal{R}(i, \tilde{\mathbf{X}})^{-1} \right\| \left\| \mathcal{S}(i, \tilde{\mathbf{X}}) \right\| \sum_{l=0}^r \left\| B_l(i) \right\|^2 + \sum_{l=0}^r \left\| B_l(i) \right\| \left\| A_l(i) \right\| \right)^2,$$

*then*

$$\max_{1 \le i \le N} \left\| X(i)^{(k)} - \tilde{X}(i) \right\| < \max_{1 \le i \le N} \left\| X(i)^{(k-1)} - \tilde{X}(i) \right\|.$$

*Proof.* Following the course of the proof of Theorem 2.1, it has been proved that $\mathbf{X}^{(k)} \ge \hat{\mathbf{X}}$ for all $k$. Therefore, for $i = 1, \dots, N$ we conclude $R(i, \mathbf{X}^{(k)}) \ge R(i, \hat{\mathbf{X}}) > 0$. It follows that

$$\lim\_{k \to \infty} R(i, \mathbf{X}^{(k)}) = R(i, \mathbf{\tilde{X}})$$

and then, the limit *<sup>F</sup>*(*i*, **X˜**) = lim*k*→<sup>∞</sup> *<sup>F</sup>*(*i*, **<sup>X</sup>**(*k*)) exists and

$$F(i, \tilde{\mathbf{X}}) = -\mathcal{R}(i, \tilde{\mathbf{X}})^{-1} \mathcal{S}(i, \tilde{\mathbf{X}})^T.$$

Based on the proof of Theorem 2.1 and the properties of Lemma 2.1, the following equalities are established:

$$X(i)^{(k)} = \mathcal{P}_{\mathbf{X}^{(k-1)}}(i, \mathbf{X}^{(k-1)}) + \frac{\varepsilon}{k} I_n \quad \text{and} \quad \tilde{X}(i) = \mathcal{P}_{\tilde{\mathbf{X}}}(i, \tilde{\mathbf{X}}).$$

Moreover

$$\begin{split} \mathcal{P}(\boldsymbol{i},\mathbf{X}^{(k-1)}) &= \sum\_{l=0}^{r} \tilde{A}\_{l}(\boldsymbol{i},\mathbf{\tilde{X}})^{T} \left( \mathcal{E}\_{\boldsymbol{i}}(\mathbf{X}^{(k-1)}) \right) \tilde{A}\_{l}(\boldsymbol{i},\mathbf{\tilde{X}}) + T(\boldsymbol{i},\mathbf{X}^{(k-1)}) \\ &- \left( F(\boldsymbol{i},\mathbf{X}^{(k-1)})^{T} - F(\boldsymbol{i},\mathbf{\tilde{X}})^{T} \right) R(\boldsymbol{i},\mathbf{X}^{(k-1)}) \left( F(\boldsymbol{i},\mathbf{X}^{(k-1)}) - F(\boldsymbol{i},\mathbf{\tilde{X}}) \right) . \end{split}$$

and

$$X(i)^{(k)} - \tilde{X}(i) = \mathcal{P}_{\mathbf{X}^{(k-1)}}(i, \mathbf{X}^{(k-1)}) - \mathcal{P}_{\tilde{\mathbf{X}}}(i, \tilde{\mathbf{X}}) + \frac{\varepsilon}{k} I_n$$

$$\begin{split} X(i)^{(k)} - \tilde{X}(i) &= \sum_{l=0}^{r} \tilde{A}_l(i, \tilde{\mathbf{X}})^T \mathcal{E}_i (\mathbf{X}^{(k-1)} - \tilde{\mathbf{X}}) \tilde{A}_l(i, \tilde{\mathbf{X}}) + \frac{\varepsilon}{k} I_n \\ &\quad - \left( F(i, \mathbf{X}^{(k-1)})^T - F(i, \tilde{\mathbf{X}})^T \right) R(i, \mathbf{X}^{(k-1)}) \left( F(i, \mathbf{X}^{(k-1)}) - F(i, \tilde{\mathbf{X}}) \right). \end{split}$$

Consider the difference


$$\begin{split} &F(i,\mathbf{X}^{(k-1)}) - F(i,\tilde{\mathbf{X}}) \\ &= -R(i,\mathbf{X}^{(k-1)})^{-1} S(i,\mathbf{X}^{(k-1)})^{T} + R(i,\tilde{\mathbf{X}})^{-1} S(i,\tilde{\mathbf{X}})^{T} \\ &= -R(i,\mathbf{X}^{(k-1)})^{-1} S(i,\mathbf{X}^{(k-1)} \pm \tilde{\mathbf{X}})^{T} + R(i,\tilde{\mathbf{X}})^{-1} S(i,\tilde{\mathbf{X}})^{T} \\ &= \left[ R(i,\tilde{\mathbf{X}})^{-1} - R(i,\mathbf{X}^{(k-1)})^{-1} \right] S(i,\tilde{\mathbf{X}})^{T} - R(i,\mathbf{X}^{(k-1)})^{-1} \sum_{l=0}^{r} B_{l}(i)^{T} \mathcal{E}_{i} (\mathbf{X}^{(k-1)} - \tilde{\mathbf{X}}) A_{l}(i) \\ &= R(i,\tilde{\mathbf{X}})^{-1} \left[ R(i,\mathbf{X}^{(k-1)}) - R(i,\tilde{\mathbf{X}}) \right] R(i,\mathbf{X}^{(k-1)})^{-1} S(i,\tilde{\mathbf{X}})^{T} \\ &\quad - R(i,\mathbf{X}^{(k-1)})^{-1} \sum_{l=0}^{r} B_{l}(i)^{T} \mathcal{E}_{i} (\mathbf{X}^{(k-1)} - \tilde{\mathbf{X}}) A_{l}(i) \,. \end{split}$$

Then

$$\begin{aligned} &F(i, \mathbf{X}^{(k-1)}) - F(i, \tilde{\mathbf{X}}) \\ &= R(i, \tilde{\mathbf{X}})^{-1} \left[ \sum_{l=0}^{r} B_{l}(i)^{T} \mathcal{E}_{i} (\mathbf{X}^{(k-1)} - \tilde{\mathbf{X}}) B_{l}(i) \right] R(i, \mathbf{X}^{(k-1)})^{-1} S(i, \tilde{\mathbf{X}})^{T} \\ &\quad - R(i, \mathbf{X}^{(k-1)})^{-1} \sum_{l=0}^{r} B_{l}(i)^{T} \mathcal{E}_{i} (\mathbf{X}^{(k-1)} - \tilde{\mathbf{X}}) A_{l}(i) \,. \end{aligned}$$

Thus

$$\begin{split} \left\| F(i, \mathbf{X}^{(k-1)}) - F(i, \tilde{\mathbf{X}}) \right\| &\leq \left\| R(i, \tilde{\mathbf{X}})^{-1} \right\|^2 \left\| S(i, \tilde{\mathbf{X}}) \right\| \sum_{l=0}^{r} \left\| B_l(i) \right\|^2 \left\| \mathcal{E}_i(\mathbf{X}^{(k-1)} - \tilde{\mathbf{X}}) \right\| \\ &\quad + \left\| R(i, \tilde{\mathbf{X}})^{-1} \right\| \sum_{l=0}^{r} \left\| B_l(i) \right\| \left\| A_l(i) \right\| \left\| \mathcal{E}_i(\mathbf{X}^{(k-1)} - \tilde{\mathbf{X}}) \right\|, \end{split}$$

that is,

$$\left\| F(i, \mathbf{X}^{(k-1)}) - F(i, \tilde{\mathbf{X}}) \right\| \leq \tau_{i,2} \left\| \mathcal{E}_i(\mathbf{X}^{(k-1)} - \tilde{\mathbf{X}}) \right\|,$$

where

$$\tau_{i,2} = \left\| R(i, \tilde{\mathbf{X}})^{-1} \right\| \left( \left\| R(i, \tilde{\mathbf{X}})^{-1} \right\| \left\| S(i, \tilde{\mathbf{X}}) \right\| \sum_{l=0}^{r} \left\| B_l(i) \right\|^2 + \sum_{l=0}^{r} \left\| B_l(i) \right\| \left\| A_l(i) \right\| \right).$$

In addition, using **<sup>X</sup>**(0) <sup>≥</sup> **<sup>X</sup>**(*k*) we in fact have

$$\left\| R(i, \mathbf{X}^{(k-1)}) \right\| \le \left\| R(i, \mathbf{X}^{(0)}) \right\|, \quad i = 1, \dots, N.$$


Furthermore, we estimate for *i* = 1, . . . , *N*

$$\|\tilde{X}(i) - X(i)^{(k)}\| \le \sum\_{l=0}^{r} \|\tilde{A}\_l(i, \tilde{\mathbf{X}})\|^2 \left\|\mathcal{E}\_i(\mathbf{X}^{(k-1)} - \tilde{\mathbf{X}})\right\| + (\tau\_{i,2})^2 \left\|\mathcal{E}\_i(\mathbf{X}^{(k-1)} - \tilde{\mathbf{X}})\right\|^2 \left\|\mathcal{R}(i, \mathbf{X}^{(0)})\right\|.$$


Note that

$$\left\| \mathcal{E}_i(\mathbf{X}^{(k-1)} - \tilde{\mathbf{X}}) \right\| \le \sum_{j=1}^N p_{ij} \left\| X(j)^{(k-1)} - \tilde{X}(j) \right\| \le \max_{1 \le j \le N} \left\| X(j)^{(k-1)} - \tilde{X}(j) \right\|, \quad \forall i.$$

Further on,

$$\max\_{1 \le i \le N} \|\tilde{X}(i) - X(i)^{(k)}\| \le b \left( \max\_{1 \le j \le N} \|X(j)^{(k-1)} - \tilde{X}(j)\| \right) + a \left( \max\_{1 \le j \le N} \|X(j)^{(k-1)} - \tilde{X}(j)\| \right)^2.$$

Now assume that the inequality

$$\max\_{1 \le i \le N} \|X(i)^{(s)} - \tilde{X}(i)\| < \frac{2 - b}{a}$$

holds for *s* = 0, . . . , *k* − 1. Then

$$\begin{aligned} \max_{1 \le i \le N} \left\| \tilde{X}(i) - X(i)^{(k)} \right\| &\le \max_{1 \le j \le N} \left\| X(j)^{(k-1)} - \tilde{X}(j) \right\| \\ &\quad \times \left( a \max_{1 \le j \le N} \left\| X(j)^{(k-1)} - \tilde{X}(j) \right\| + b - 1 \right) < \max_{1 \le j \le N} \left\| X(j)^{(k-1)} - \tilde{X}(j) \right\|. \end{aligned}$$

Thus, the proof of the theorem is complete.

Let us consider the following example in order to compare iterations (5) and (6).

**Example 2.1.** *We take the following weighting matrices:*

$$\begin{aligned} R(1) &= \operatorname{diag}(0.0126,\ 0.024), \quad R(2) = \operatorname{diag}(0.09,\ 0.012), \quad R(3) = \operatorname{diag}(0.12,\ 0.105), \\ Q(1) &= 0.75 * eye(n, n), \quad Q(2) = 0.25 * eye(n, n), \quad Q(3) = 0.05 * eye(n, n). \end{aligned}$$

*The coefficient matrices A*0(*i*), *A*1(*i*), *B*0(*i*), *B*1(*i*), *L*(*i*), *i* = 1, 2, 3 *for system (1) are given through formulas (using the* MATLAB *notations):*

$$\begin{aligned} A_0(1) &= randn(n, n)/6; \quad A_0(2) = randn(n, n)/6; \quad A_0(3) = randn(n, n)/6; \\ A_1(1) &= randn(n, n)/7; \quad A_1(2) = randn(n, n)/7; \quad A_1(3) = randn(n, n)/7; \\ B_0(1) &= randn(n, 2)/8; \quad B_0(2) = randn(n, 2)/8; \quad B_0(3) = randn(n, 2)/8; \\ B_1(1) &= randn(n, 2)/8; \quad B_1(2) = randn(n, 2)/8; \quad B_1(3) = randn(n, 2)/8; \\ L(1) &= randn(n, 2)/8; \quad L(2) = randn(n, 2)/8; \quad L(3) = randn(n, 2)/8, \end{aligned}$$

*and the following transition probability matrix*

$$\left( p_{ij} \right) = \begin{pmatrix} 0.67 & 0.17 & 0.16 \\ 0.30 & 0.47 & 0.23 \\ 0.26 & 0.30 & 0.44 \end{pmatrix}.$$
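For readers without MATLAB, here is a NumPy transcription of the Example 2.1 data generator (the function name and the optional seed are our additions; the scalings mirror the formulas above):

```python
import numpy as np

def make_example_21(n, seed=None):
    """One random instance of Example 2.1: N = 3 modes, noise terms l = 0, 1."""
    rng = np.random.default_rng(seed)
    N = 3
    # A[l][i] is n x n, scaled by 6 (l = 0) or 7 (l = 1); B, L are n x 2, scaled by 8.
    A = [[rng.standard_normal((n, n)) / d for _ in range(N)] for d in (6, 7)]
    B = [[rng.standard_normal((n, 2)) / 8 for _ in range(N)] for _ in range(2)]
    L = [rng.standard_normal((n, 2)) / 8 for _ in range(N)]
    R = [np.diag([0.0126, 0.024]), np.diag([0.09, 0.012]), np.diag([0.12, 0.105])]
    Q = [c * np.eye(n) for c in (0.75, 0.25, 0.05)]
    p = np.array([[0.67, 0.17, 0.16],
                  [0.30, 0.47, 0.23],
                  [0.26, 0.30, 0.44]])   # rows sum to one
    return A, B, L, R, Q, p
```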

We have executed one hundred examples for each value of *n*. We report the maximal number of iteration steps *mIt* and the average number of iteration steps *avIt*, over all examples of each size, needed to achieve an accuracy of 1.e−10. The results are listed in Table 1. The average number of iteration steps for method (6) is smaller than the corresponding average for method (5). The last column of Table 1 shows the acceleration achieved by method (6).


| *n* | method (5) *mIt* | method (5) *avIt* | method (6) *mIt* | method (6) *avIt* | speed up |
|---|---|---|---|---|---|
|  | 212 | 79.2 | 166 | 60.1 | 0.75 |
|  | 141 | 87.1 | 112 | 65.3 | 0.75 |
|  | 216 | 104.6 | 165 | 77.7 | 0.74 |
|  | 235 | 132.9 | 177 | 97.0 | 0.73 |
|  | 389 | 195.8 | 288 | 143.5 | 0.73 |
|  | 1882 | 311.8 | 900 | 221.5 | 0.71 |

**Table 1.** Results for Example 2.1. Comparison between iterations for 100 runs.

Further on, we execute some matrix manipulations on system (1) to derive new recurrence equations. We are going to prove the convergence properties of the proposed new iteration under new assumptions. Following the substitution

$$\mathbf{Y} = (Y(1), \dots, Y(N)), \text{ where } Y(i) = \mathcal{E}_i(\mathbf{X}) \text{ for } i = 1, \dots, N,$$

the equivalent system of equations is derived

$$Y(i) = \mathcal{P}(i, \mathbf{Y}) \, , \tag{7}$$

where

$$\begin{split} \mathcal{P}(i, \mathbf{Y}) &= \sum_{l=0}^{r} \hat{A}_l(i)^T Y(i) \hat{A}_l(i) + \hat{C}^T(i)\hat{C}(i) - \hat{S}(i, Y(i)) \\ &\quad \times \left[ R(i) + \sum_{l=0}^{r} B_l(i)^T Y(i) B_l(i) \right]^{-1} \hat{S}(i, Y(i))^T + \sum_{j=1}^{N} \gamma_{ij} Y(j), \end{split} \tag{8}$$

with appropriate transformations on the matrix coefficients *A*ˆ *<sup>l</sup>*(*i*), *C*ˆ(*i*), *L*ˆ(*i*) and

$$\hat{S}(i, Y(i)) = \sum_{l=0}^{r} \hat{A}_l(i)^T Y(i) B_l(i) + \hat{L}(i),$$

$$\hat{A}_l(i) = \sqrt{\frac{p_{ii}}{1 - \delta_{ii}}}\, A_l(i), \quad \hat{C}(i) = \sqrt{\frac{p_{ii}}{1 - \delta_{ii}}}\, C(i), \quad l = 0, \dots, r,$$


$$\hat{L}(i) = \sqrt{\frac{p_{ii}}{1 - \delta_{ii}}}\, L(i),$$


for *i* = 1, . . . , *N* , and

$$\Gamma = (\gamma_{ip})_{1}^{N} = \begin{cases} \gamma_{ii} = 0, \\ \gamma_{ip} = \dfrac{\delta_{ip}}{1 - \delta_{ii}}, & \text{if } i \neq p, \end{cases}$$

and assume that Γ is nonnegative (*γip* ≥ 0). We introduce the notations

$$\begin{aligned} \mathcal{G}(i, \mathbf{Y}) &= \sum_{p \neq i} \gamma_{ip}\, Y(p), \\ \mathbf{Y}_i|Z &= (Y(1), \dots, Y(i-1), Z, Y(i+1), \dots, Y(N)). \end{aligned}$$

The new iteration scheme applied to the equivalent system (7) is:

$$\begin{aligned} Y^{(k+1)}(i) &= \sum\_{l=0}^{r} \hat{A}\_{l}(i, Y^{(k)}(i))^{T} Y^{(k)}(i) \, \hat{A}\_{l}(i, Y^{(k)}(i)) \\ &+ T(i, Y^{(k)}(i)) + \mathcal{G}(i, 1, \mathbf{Y}^{(k+1)}) + \mathcal{G}(i, 2, \mathbf{Y}^{(k)}), \\ &i = 1, \ldots, N. \end{aligned} \tag{9}$$

where

$$\mathcal{G}(i, 1, \mathbf{Z}) = \sum\_{j=1}^{i-1} \gamma\_{ij} Z(j) \,, \quad \text{and} \quad \mathcal{G}(i, 2, \mathbf{Z}) = \sum\_{j=i+1}^{N} \gamma\_{ij} Z(j) \,.$$
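A minimal sketch of the Gauss–Seidel-style splitting behind (9): already-updated blocks enter through $\mathcal{G}(i, 1, \cdot)$ and not-yet-updated blocks through $\mathcal{G}(i, 2, \cdot)$. Here `local_update` is a hypothetical placeholder for the mode-local part $\sum_l \hat{A}_l(\cdot)^T Y \hat{A}_l(\cdot) + T(i, \cdot)$ of (9):

```python
import numpy as np

def G1(i, Z, gamma):
    """G(i, 1, Z) = sum_{j < i} gamma[i, j] Z[j]  (fresh values)."""
    return sum((gamma[i, j] * Z[j] for j in range(i)), np.zeros_like(Z[0]))

def G2(i, Z, gamma):
    """G(i, 2, Z) = sum_{j > i} gamma[i, j] Z[j]  (old values)."""
    return sum((gamma[i, j] * Z[j] for j in range(i + 1, len(Z))), np.zeros_like(Z[0]))

def gauss_seidel_sweep(Y, local_update, gamma):
    """One sweep of (9): mode i sees Y^(k+1)(j) for j < i and Y^(k)(j) for j > i."""
    Ynew = list(Y)
    for i in range(len(Y)):
        Ynew[i] = local_update(i, Y[i]) + G1(i, Ynew, gamma) + G2(i, Y, gamma)
    return Ynew
```

Reusing fresh blocks within the sweep is exactly what makes (9) an accelerated modification of the simultaneous update.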

The convergence properties of (9) are investigated. We will prove that the convergence of (9) takes place if the algorithm starts at any suitable initial point $\mathbf{Y}^{(0)}$. The new iteration (9) can be considered as an accelerated modification of iteration (5). The convergence result is given by the following theorem:

**Theorem 2.3.** *[11] We assume that* $\Gamma$ *is a nonnegative matrix and* $\frac{\lambda_{ii}}{1 - \delta_{ii}}$ *are positive numbers for all values of* $i$. *Let there exist symmetric matrices* $\hat{\mathbf{Y}} = (\hat{Y}(1), \dots, \hat{Y}(N))$ *and* $\mathbf{Y}^{(0)} = (Y^{(0)}(1), \dots, Y^{(0)}(N))$ *such that (a)* $\mathcal{P}(i, \hat{\mathbf{Y}}) \ge \hat{Y}(i)$; *(b)* $\mathbf{Y}^{(0)} \ge \hat{\mathbf{Y}}$; *(c)* $\mathcal{P}(i, \mathbf{Y}^{(0)}) \le Y^{(0)}(i)$ *for* $i = 1, \dots, N$. *Then for the matrix sequences* $\{Y(1)^{(k)}\}_{k=1}^{\infty}, \dots, \{Y(N)^{(k)}\}_{k=1}^{\infty}$ *defined by (9) the following properties hold:*

(i) *We have* **<sup>Y</sup>**(*k*) <sup>≥</sup> **Yˆ** , **<sup>Y</sup>**(*k*) <sup>≥</sup> **<sup>Y</sup>**(*k*+1) *and*

$$\mathcal{P}(i, \mathbf{Y}^{(k)}) = Y^{(k+1)}(i) + \mathcal{G}(i, 1, \mathbf{Y}^{(k)} - \mathbf{Y}^{(k+1)}),$$

*where k* = 0, 1, 2, . . .*;*

(ii) *The sequences* {*Y*(1)(*k*)},..., {*Y*(*N*)(*k*)} *converge to the solution* **Y˜** *of the equations Y*(*i*) = <sup>P</sup>(*i*, **<sup>Y</sup>**) *and* **Y˜** <sup>≥</sup> **Yˆ** *.*

#### **3. The LMI approach**

There exists an increasing interest to consider a computational approach to stochastic algebraic Riccati equations via a semidefinite programming problem over linear matrix inequalities. Similar studies can be found in [12–14]. The main result from such type studies is the equivalence between the feasibility of the LMI and the solvability of the corresponding stochastic Riccati equation. Moreover, the maximal solution of a given stochastic algebraic Riccati equation can be obtained by solving a corresponding convex optimization problem (an LMI approach).


Further on, following the classical linear quadratic theory [13, 14], we know that the optimization problem associated with (1) has the form (see, for example, [1, 7]):

$$\begin{aligned} \max & \sum_{i=1}^{N} \langle I, X(i) \rangle \\ \text{subject to } & i = 1, \dots, N \\ & \begin{pmatrix} -X(i) + Q(i, \mathbf{X}) & S(i, \mathbf{X}) \\ S(i, \mathbf{X})^T & R(i, \mathbf{X}) \end{pmatrix} \ge 0 \\ & R(i, \mathbf{X}) > 0, \quad X(i) = X(i)^T. \end{aligned} \tag{10}$$
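A NumPy-only sketch of the constraint side of (10): it checks whether a given $\mathbf{X}$ is feasible (an SDP solver would still be needed to maximize $\sum_i \langle I, X(i) \rangle$). We assume $Q(i, \mathbf{X})$ denotes the quadratic part $\sum_{l=0}^r A_l(i)^T \mathcal{E}_i(\mathbf{X}) A_l(i) + C^T(i)C(i)$ of $\mathcal{P}$, consistent with (1):

```python
import numpy as np

def lmi_block(i, X, A, B, C, L, R, p):
    """Block matrix of constraint (10) for mode i:
    [[-X(i) + Q(i, X), S(i, X)], [S(i, X)^T, R(i, X)]]."""
    Ei = sum(p[i, j] * X[j] for j in range(len(X)))
    Q = sum(A[l][i].T @ Ei @ A[l][i] for l in range(len(A))) + C[i].T @ C[i]
    S = sum(A[l][i].T @ Ei @ B[l][i] for l in range(len(A))) + L[i]
    Ri = R[i] + sum(B[l][i].T @ Ei @ B[l][i] for l in range(len(B)))
    return np.block([[-X[i] + Q, S], [S.T, Ri]])

def feasible(X, A, B, C, L, R, p, eps=0.0):
    """True when every mode's block matrix is positive semidefinite
    (up to -eps, to absorb rounding); the strict condition
    R(i, X) > 0 must be checked separately."""
    return all(np.linalg.eigvalsh(lmi_block(i, X, A, B, C, L, R, p)).min() >= -eps
               for i in range(len(X)))
```

By the Schur complement, any solution of (1) with $R(i, \mathbf{X}) > 0$ makes the block's Schur complement zero and is therefore feasible for (10); the maximal solution attains the maximum.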

However, we can apply the same approach to the equivalent system (7). As a result we formulate a new optimization problem assigned to (7) and use it to find the maximal solution to (7).

The corresponding optimization problem, associated with the maximal solution to (7), is given by:

$$\begin{aligned} & \max \sum_{i=1}^{N} \langle I, Y(i) \rangle \\ & \text{subject to } i = 1, \dots, N \\ & \begin{pmatrix} -Y(i) + \hat{C}^T(i)\hat{C}(i) + \sum_{j=1}^{N} \gamma_{ij} Y(j) + \sum_{l=0}^{r} \hat{A}_l(i)^T Y(i) \hat{A}_l(i) & \hat{S}(i, Y(i)) \\ \hat{S}(i, Y(i))^T & R(i) + \sum_{l=0}^{r} B_l(i)^T Y(i) B_l(i) \end{pmatrix} \ge 0 \\ & R(i) + \sum_{l=0}^{r} B_l(i)^T Y(i) B_l(i) > 0, \quad Y(i) = Y(i)^T. \end{aligned} \tag{11}$$

It is very important, in the field of linear quadratic stochastic models, to analyze the case where the weighting matrices $R(i)$, $Q(i)$, $i = 1, \dots, N$ are indefinite. This case has practical importance, and there are studies where the cost matrices are allowed to be indefinite (see [3, 16] and the references therein). In this paper we investigate this special case for the considered general discrete-time Riccati equations (1). We will interpret iterations (6) and (9) in the case where the matrices $R(i)$, $i = 1, \dots, N$ are indefinite; nevertheless, we will look for a maximal solution from $\mathrm{Dom}\, \mathcal{P}$.

Based on the next example, we compare the LMI approaches (through optimization problems (10) and (11)) for computing the maximal solution to the set of nonlinear equations (1).

**Example 3.1.** *The coefficient matrices A*0(*i*), *A*1(*i*), *B*0(*i*), *B*1(*i*), *L*(*i*), *i* = 1, 2, 3 *for system (1) are given through the formulas (using the* MATLAB *notations):*

*A*0(1) = *randn*(*n*, *n*)/6; *A*0(2) = *randn*(*n*, *n*)/6; *A*0(3) = *randn*(*n*, *n*)/6;
*A*1(1) = *randn*(*n*, *n*)/7; *A*1(2) = *randn*(*n*, *n*)/7; *A*1(3) = *randn*(*n*, *n*)/7;
*B*0(1) = *sprandn*(*n*, 2, 0.3); *B*0(2) = *sprandn*(*n*, 2, 0.3); *B*0(3) = *sprandn*(*n*, 2, 0.3);
*B*1(1) = *randn*(*n*, 2)/8; *B*1(2) = *randn*(*n*, 2)/8; *B*1(3) = *randn*(*n*, 2)/8;
*L*(1) = *randn*(*n*, 2)/8; *L*(2) = *randn*(*n*, 2)/8; *L*(3) = *randn*(*n*, 2)/8 .

We take the *n* × *n* matrices *Q*(1), *Q*(2) and *Q*(3) as follows:

*Q*(1) = *diag*[0.0, 0.5, . . . , 0.5] , *Q*(2) = *diag*[0.0, 1, . . . , 1] , *Q*(3) = *diag*[0.0, 0.05, . . . , 0.05] ,

and the same probability matrix as in Example 2.1. The weighting matrices *R*(*i*) in the four tests are:

Test 3.1.1. *R*(1) = *diag*[0.02, 0.04] , *R*(2) = *diag*[0.085, 0.2] , *R*(3) = *diag*[0.125, 0.1] ;
Test 3.1.2. *R*(1) = *zeros*(2, 2) , *R*(2) = *zeros*(2, 2) , *R*(3) = *zeros*(2, 2) ;
Test 3.1.3. *R*(1) = *diag*[−0.002, 0.005] , *R*(2) = *diag*[−0.003, 0.010] , *R*(3) = *diag*[0.02, −0.0004] ;
Test 3.1.4. *R*(1) = *diag*[−0.00025, −0.00005] , *R*(2) = *diag*[−0.00035, −0.00010] , *R*(3) = *diag*[−0.0002, −0.00005] .

The MATLAB function mincx is applied with a relative accuracy of 1.*e* − 10 for solving the corresponding optimization problems. Our numerical experiments confirm the effectiveness of the LMI approach applied to the optimization problems (10) and (11). We have compared the results of these experiments with respect to the number of iterations and the execution time.

| *n* | Test 3.1.1, LMI for (10): *mIt* / *avIt* | Test 3.1.1, LMI for (11): *mIt* / *avIt* | Test 3.1.2, LMI for (10): *mIt* / *avIt* | Test 3.1.2, LMI for (11): *mIt* / *avIt* |
| --- | --- | --- | --- | --- |
|  | 47 / 34.6 | 46 / 42.5 | 45 / 37.0 | 49 / 45.0 |
|  | 43 / 37.5 | 49 / 42.4 | 44 / 37.0 | 47 / 41.8 |
|  | 43 / 34.8 | 50 / 41.7 | 48 / 36.5 | 50 / 42.4 |
|  | 52 / 40.3 | 51 / 43.7 | 50 / 39.3 | 48 / 43.7 |
|  | 41 / 35.3 | 51 / 40.6 | 45 / 38.8 | 49 / 43.0 |
|  | 45 / 36.8 | 47 / 40.6 | 46 / 37.0 | 52 / 44.3 |
| CPU time, 10 runs (s) | 1332.4 | 341.1 | 1431 | 338.67 |

**Table 2.** Comparison between iterations for Example 3.1.

| *n* | Test 3.1.3, LMI for (10): *mIt* / *avIt* | Test 3.1.3, LMI for (11): *mIt* / *avIt* | Test 3.1.4, LMI for (10): *mIt* / *avIt* | Test 3.1.4, LMI for (11): *mIt* / *avIt* |
| --- | --- | --- | --- | --- |
|  | 45 / 36.8 | 50 / 44.4 | 46 / 36.7 | 51 / 43.9 |
|  | 47 / 37.5 | 50 / 43.6 | 48 / 37.8 | 52 / 43.9 |
|  | 49 / 38.7 | 57 / 46.4 | 48 / 38.6 | 55 / 46.3 |
|  | 61 / 42.2 | 59 / 46.4 | 53 / 41.4 | 61 / 48.2 |
|  | 46 / 35.4 | 50 / 43.6 | 46 / 35.6 | 53 / 43.0 |
|  | 46 / 38.6 | 50 / 43.3 | 44 / 39.2 | 49 / 43.0 |
| CPU time, 10 runs (s) | 1401 | 355.66 | 1441.2 | 350.8 |

**Table 3.** Comparison between iterations for Example 3.1.

Iterations for a General Class of Discrete-Time Riccati-Type Equations: A Survey and Comparison 159

The four executed tests demonstrate that the LMI problem (10) needs more computational work than the LMI problem (11); thus, the LMI method based on (11) is faster than the one based on (10) (see the CPU times displayed in Tables 2 and 3).

**4. The positive semidefinite case**

We will investigate new iterations for computing the maximal solution to the set of Riccati equations (1) in the case where the matrices *R*(*i*, **X**), *i* = 1, . . . , *N* are positive semidefinite. A well-known application is a special linear quadratic stochastic model in finance [18], where the cost matrix *R* is zero and the corresponding matrix *R* + *B<sup>T</sup>XB* is singular. This special case, in which a singular matrix has to be inverted (in the generalized sense), is therefore important for the financial modelling process. Without loss of generality we assume that all matrices *R*(*i*, **X**), *i* = 1, . . . , *N* in (1) are positive semidefinite. Thus, we will investigate the set of equations (3)-(4) for the existence of a maximal solution. Similar types of generalized equations have been investigated by many authors (see [8, 9] and the literature therein).

We introduce the following new iteration:

$$\begin{split} X(i)^{(k)} &= \sum\_{l=0}^{r} \tilde{A}\_{l}(i, \mathbf{X}^{(k-1)})^{T} \left( \mathcal{E}\_{i1}(\mathbf{X}^{(k)}) + p\_{ii} X(i)^{(k-1)} + \mathcal{E}\_{i2}(\mathbf{X}^{(k-1)}) \right) \\ &\quad \times \tilde{A}\_{l}(i, \mathbf{X}^{(k-1)}) + T(i, \mathbf{X}^{(k-1)}), \quad i = 1, 2, \dots, N, \ k = 1, 2, 3, \dots, \\ \text{where} \quad \tilde{A}\_{l}(i, \mathbf{Z}) &= A\_{l}(i) + B\_{l}(i) F(i, \mathbf{Z}), \quad F(i, \mathbf{Z}) = -\left( R(i, \mathbf{Z}) \right)^{\dagger} S(i, \mathbf{Z})^{T}, \\ T(i, \mathbf{Z}) &= C^{T}(i) C(i) + F(i, \mathbf{Z})^{T} L(i)^{T} + L(i) \, F(i, \mathbf{Z}) + F(i, \mathbf{Z})^{T} R(i) F(i, \mathbf{Z}) \, . \end{split} \tag{12}$$

We will prove that the matrix sequence defined by (12) converges to the maximal solution **X˜** of (3)-(4), and that *R*(*i*, **X˜**), *i* = 1, . . . , *N* are positive semidefinite. Thus, iteration (12) constructs a convergent matrix sequence.
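To make the structure of (12) concrete, here is a minimal numpy sketch for the reduced case *N* = 1, *r* = 0, *L* = 0, where E*<sub>i</sub>*(**X**) = *X* and iteration (12) collapses to the fixed-point iteration *X*<sup>(*k*)</sup> = P(*X*<sup>(*k*−1)</sup>). The matrices are toy data of our own, chosen so that the large starting matrix satisfies condition (*d*) of Theorem 4.1 below; the pseudoinverse stands in for the inverse as in the text.

```python
import numpy as np

A = np.array([[0.5, 0.1], [0.0, 0.4]])   # a stable mode matrix (toy data)
B = np.array([[1.0], [0.5]])
R = np.array([[1.0]])
CtC = np.eye(2)                          # C^T C

def P(X):
    """Riccati operator P(X) for N = 1, r = 0, L = 0, with the pseudoinverse of R(X)."""
    S = A.T @ X @ B
    RX = R + B.T @ X @ B
    return A.T @ X @ A + CtC - S @ np.linalg.pinv(RX) @ S.T

X = 10.0 * np.eye(2)                     # X^(0) large enough that P(X^(0)) <= X^(0)
for _ in range(200):                     # X^(k) = P(X^(k-1)), monotonically decreasing
    X = P(X)

residual = np.linalg.norm(X - P(X))
print(residual)
```

Under these assumptions the iterates decrease monotonically and stay bounded below by zero, so the loop settles at the maximal solution of the corresponding discrete-time algebraic Riccati equation.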

Let us construct P(**X**) = (P(1, **X**),...,P(*N*, **X**)) and define the set

$$\operatorname{dom} \mathcal{P}^\dagger = \left\{ \mathbf{X} \in \mathcal{H}^n \; : \; R(i, \mathbf{X}) \ge 0 \text{ and } \operatorname{Ker} R(i, \mathbf{X}) \subseteq \operatorname{Ker} \mathbf{S}(i, \mathbf{X}), \; i = 1, \dots, N \right\}.$$
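Membership in *dom* P† is straightforward to test numerically: positive semidefiniteness of *R*(*i*, **X**) is an eigenvalue check, and the kernel inclusion *Ker R*(*i*, **X**) ⊆ *Ker S*(*i*, **X**) is equivalent to *S*(*i*, **X**) = *S*(*i*, **X**) *R*(*i*, **X**)† *R*(*i*, **X**). A hedged single-mode sketch with toy data of our own (the function name is ours); *R* = 0 mimics the zero cost matrix from the financial model:

```python
import numpy as np

def in_dom_P_dagger(A, B, L, R, X, tol=1e-10):
    """Check X in dom P†: R(X) = R + B^T X B >= 0 and Ker R(X) ⊆ Ker S(X),
    where S(X) = A^T X B + L; the kernel test uses the identity S = S R(X)^+ R(X)."""
    RX = R + B.T @ X @ B
    SX = A.T @ X @ B + L
    psd = np.linalg.eigvalsh((RX + RX.T) / 2.0)[0] >= -tol
    ker = np.linalg.norm(SX - SX @ np.linalg.pinv(RX) @ RX) <= tol
    return bool(psd and ker)

# Toy single-mode data.
A = np.eye(2)
B = np.array([[1.0], [0.0]])
L = np.ones((2, 1))
R = np.zeros((1, 1))

print(in_dom_P_dagger(A, B, L, R, np.eye(2)))        # R(X) = 1 > 0, kernel trivial
print(in_dom_P_dagger(A, B, L, R, np.zeros((2, 2)))) # R(X) = 0 but S(X) = L != 0
```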

Consider the rational operator <sup>P</sup> : *dom* <sup>P</sup>† → H*<sup>n</sup>* given by

$$\mathcal{P}(i, \mathbf{X}) = \sum\_{l=0}^{r} A\_l(i)^T \mathcal{E}\_i(\mathbf{X}) A\_l(i) + C^T(i) C(i) - S(i, \mathbf{X}) \left(R(i, \mathbf{X})\right)^\dagger S(i, \mathbf{X})^T,$$
 
$$i = 1, \dots, N$$


which has been investigated before, and for which some useful lemmas have been proved. We first present some preliminary results from matrix analysis.

**Lemma 4.1.** *[8, Lemma 4.2] Assume that Z is an m* × *n matrix and W is a p* × *n matrix. Then the following statements are equivalent:*

(i) *Ker Z* ⊆ *Ker W;* (ii) *W* = *WZ*†*Z;* (iii) *W*† = *Z*†*ZW*†*.*

**Lemma 4.2.** *[8, Lemma 4.3(i)] Let H be a Hermitian matrix of size n* + *m with* $H = \begin{pmatrix} L & N \\ N^* & M \end{pmatrix}$*, where L is n* × *n and M is m* × *m. Then, H is positive semidefinite if and only if M* ≥ 0*, L* − *NM*†*N*<sup>∗</sup> ≥ 0 *and Ker M* ⊆ *Ker N.*

The next lemma generalizes Lemma 3.1 derived in [9]:

**Lemma 4.3.** *If* **Xˆ** ∈ *dom* P† *and KerR*(*i*, **Xˆ**) ⊆ *KerS*(*i*, **Xˆ**) *for i* = 1, . . . , *N, then* **X** ∈ *dom* P† *for all* **X** ≥ **Xˆ** *.*

*Proof.* For **<sup>X</sup>** <sup>≥</sup> **Xˆ** we have

$$R(i, \mathbf{X}) \ge R(i, \hat{\mathbf{X}}) \ge 0$$

and

$$
\operatorname{Ker} \mathbf{R}(i, \mathbf{X}) \subseteq \operatorname{Ker} \mathbf{R}(i, \hat{\mathbf{X}}) \subseteq \operatorname{Ker} \mathbf{S}(i, \hat{\mathbf{X}}).\tag{13}
$$

We apply Lemma 4.2 to $H = \Pi(i, \mathbf{X} - \hat{\mathbf{X}}) \ge 0$ and conclude that $\sum\_{l=0}^{r} B\_l(i)^T \mathcal{E}\_i(\mathbf{X} - \hat{\mathbf{X}}) B\_l(i) \ge 0$ and

$$\text{Ker } \sum\_{l=0}^{r} B\_l(i)^T \mathcal{E}\_i(\mathbf{X} - \mathbf{\hat{X}}) B\_l(i) \subseteq \text{Ker } \sum\_{l=0}^{r} A\_l(i)^T \mathcal{E}\_i(\mathbf{X} - \mathbf{\hat{X}}) B\_l(i) \text{ .}$$

Moreover,

$$\begin{aligned} 0 &\le R(i, \hat{\mathbf{X}}) \\ &= R(i) + \sum\_{l=0}^{r} B\_{l}(i)^{T} \mathcal{E}\_{i}(\hat{\mathbf{X}} \pm \mathbf{X}) B\_{l}(i) \\ &= R(i, \mathbf{X}) - \sum\_{l=0}^{r} B\_{l}(i)^{T} \mathcal{E}\_{i}(\mathbf{X} - \hat{\mathbf{X}}) B\_{l}(i) \, . \end{aligned}$$

Thus

$$R(i, \mathbf{X}) \ge \sum\_{l=0}^{r} B\_l(i)^T \mathcal{E}\_i(\mathbf{X} - \hat{\mathbf{X}}) B\_l(i)$$

and

$$\text{Ker}R(i, \mathbf{X}) \subseteq \text{Ker}\,\sum\_{l=0}^{r} B\_l(i)^T \mathcal{E}\_i(\mathbf{X} - \hat{\mathbf{X}}) B\_l(i) \subseteq \text{Ker}\,\sum\_{l=0}^{r} A\_l(i)^T \mathcal{E}\_i(\mathbf{X} - \hat{\mathbf{X}}) B\_l(i) \,. \tag{14}$$

Combining (13) and (14) we obtain

$$\operatorname{Ker} R(i, \mathbf{X}) \subseteq \operatorname{Ker} \left[ S(i, \mathbf{\hat{X}}) + \sum\_{l=0}^{r} A\_l(i)^T \mathcal{E}\_i(\mathbf{X} - \mathbf{\hat{X}}) B\_l(i) \right] = \operatorname{Ker} S(i, \mathbf{X}) \,.$$
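Both preliminary lemmas are easy to sanity-check numerically. The toy matrices below are our own illustrations: *Z* is chosen so that *Ker Z* = span{*e*<sub>3</sub>}, and *M* is singular so that the pseudoinverse in the Schur-complement criterion is essential.

```python
import numpy as np

# Lemma 4.1: Ker Z ⊆ Ker W  <=>  W = W Z^+ Z.
Z = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])  # Ker Z = span{e3}
W = np.array([[2.0, 3.0, 0.0]])                   # e3 lies in Ker W as well
print(np.allclose(W, W @ np.linalg.pinv(Z) @ Z))

# Lemma 4.2: H = [[L, N], [N^*, M]] >= 0 iff M >= 0, L - N M^+ N^* >= 0, Ker M ⊆ Ker N.
Lm = np.array([[2.0]])
N = np.array([[1.0, 0.0]])                        # Ker M = span{e2} ⊆ Ker N
M = np.diag([1.0, 0.0])                           # singular, so M^+ is a true pseudoinverse
H = np.block([[Lm, N], [N.T, M]])
schur = Lm - N @ np.linalg.pinv(M) @ N.T
print(np.linalg.eigvalsh(H)[0] >= -1e-12, schur[0, 0] >= 0)
```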

We define

$$\widehat{W}\_{\mathbf{X}}(i,\mathbf{H}) = \left[F(i,\mathbf{X}) - F(i,\mathbf{H})\right]^T R(i,\mathbf{H}) \left[F(i,\mathbf{X}) - F(i,\mathbf{H})\right]$$

for **X** ∈ H*<sup>n</sup>* and **H** ∈ H*<sup>n</sup>*. Obviously $\widehat{W}\_{\mathbf{X}}(i,\mathbf{H}) \ge 0$ and $\widehat{W}\_{\mathbf{X}}(i,\mathbf{X}) = 0$.

**Lemma 4.4.** *Let* **Y** ∈ H*<sup>n</sup>* *and* **Z** ∈ H*<sup>n</sup>* *be symmetric matrices with KerR*(*i*, **Y**) ⊆ *KerS*(*i*, **Y**) *and KerR*(*i*, **Z**) ⊆ *KerS*(*i*, **Z**) *for i* = 1, . . . , *N. Then, the following identities hold:*

$$\begin{aligned} \mathcal{P}\_{\mathbf{Z}}(i, \mathbf{Y}) &= \sum\_{l=0}^{r} \tilde{A}\_{l}(i, \mathbf{Z})^{T} \mathcal{E}\_{i}(\mathbf{Y}) \tilde{A}\_{l}(i, \mathbf{Z}) + T(i, \mathbf{Z}) - \widehat{W}\_{\mathbf{Z}}(i, \mathbf{Y}), \\ \text{where} \qquad \tilde{A}\_{l}(i, \mathbf{Z}) &= A\_{l}(i) + B\_{l}(i) \, F(i, \mathbf{Z}), \end{aligned} \tag{15}$$

*and*

$$\mathcal{P}\_{\mathbf{Z}}(i,\mathbf{Z}) - \mathcal{P}\_{\mathbf{Z}}(i,\mathbf{Y}) = \sum\_{l=0}^{r} \tilde{A}\_{l}(i,\mathbf{Z})^{T} \mathcal{E}\_{i}(\mathbf{Z}-\mathbf{Y})\tilde{A}\_{l}(i,\mathbf{Z}) + \widehat{W}\_{\mathbf{Z}}(i,\mathbf{Y})$$

*for i* = 1, . . . , *N .*

*Proof.* Let us consider the difference

$$\begin{split} &\mathcal{P}(i,\mathbf{Y}) - T(i,\mathbf{Z}) \\ &= \sum\_{l=0}^{r} A\_{l}(i)^{T} \mathcal{E}\_{i}(\mathbf{Y}) A\_{l}(i) - F(i,\mathbf{Y})^{T} R(i,\mathbf{Y}) \, F(i,\mathbf{Y}) \\ & \quad - F(i,\mathbf{Z})^{T} L(i)^{T} - L(i) \, F(i,\mathbf{Z}) - F(i,\mathbf{Z})^{T} R(i) \, F(i,\mathbf{Z}) \\ &= \sum\_{l=0}^{r} A\_{l}(i)^{T} \mathcal{E}\_{i}(\mathbf{Y}) A\_{l}(i) - F(i,\mathbf{Y})^{T} R(i,\mathbf{Y}) \, F(i,\mathbf{Y}) \pm F(i,\mathbf{Z})^{T} R(i,\mathbf{Y}) \, F(i,\mathbf{Y}) \\ & \quad - F(i,\mathbf{Z})^{T} L(i)^{T} - L(i) \, F(i,\mathbf{Z}) \pm F(i,\mathbf{Y})^{T} R(i,\mathbf{Y}) \, F(i,\mathbf{Z}) \\ & \quad - F(i,\mathbf{Z})^{T} \left( R(i) \pm \sum\_{l=0}^{r} B\_{l}(i)^{T} \mathcal{E}\_{i}(\mathbf{Y}) B\_{l}(i) \right) \, F(i,\mathbf{Z}) . \end{split}$$

According to Lemma 4.1 we obtain $F(i,\mathbf{Z})^T R(i,\mathbf{Z}) = -S(i,\mathbf{Z})$ and $F(i,\mathbf{Y})^T R(i,\mathbf{Y}) = -S(i,\mathbf{Y})$. We derive

$$\begin{aligned} &-F(i,\mathbf{Z})^T\left(L(i)^T + R(i,\mathbf{Y})\,F(i,\mathbf{Y})\right) \\ &= -F(i,\mathbf{Z})^T\left(L(i)^T - S(i,\mathbf{Y})^T\right) = F(i,\mathbf{Z})^T \sum\_{l=0}^r B\_l(i)^T \mathcal{E}\_i(\mathbf{Y}) A\_l(i), \end{aligned}$$

and

$$\begin{aligned} &-\left(L(i) + F(i, \mathbf{Y})^T R(i, \mathbf{Y})\right) F(i, \mathbf{Z})\\ &= -\left(L(i) - S(i, \mathbf{Y})\right) F(i, \mathbf{Z}) = \sum\_{l=0}^{r} A\_l(i)^T \mathcal{E}\_i(\mathbf{Y}) B\_l(i)\, F(i, \mathbf{Z})\,. \end{aligned}$$

Then

$$\mathcal{P}(i,\mathbf{Y}) - T(i,\mathbf{Z}) = \sum\_{l=0}^{r} \tilde{A}\_{l}(i,\mathbf{Z})^{T} \mathcal{E}\_{i}(\mathbf{Y}) \tilde{A}\_{l}(i,\mathbf{Z}) - \widehat{W}\_{\mathbf{Z}}(i,\mathbf{Y})$$

and

$$\mathcal{P}(i,\mathbf{Y}) = \mathcal{P}\_{\mathbf{Z}}(i,\mathbf{Y}) = \sum\_{l=0}^{r} \tilde{A}\_{l}(i,\mathbf{Z})^{T} \mathcal{E}\_{i}(\mathbf{Y}) \tilde{A}\_{l}(i,\mathbf{Z}) + T(i,\mathbf{Z}) - \widehat{W}\_{\mathbf{Z}}(i,\mathbf{Y}),$$

i.e. the identity (15) holds for all values of *i*.

Further on, taking **Y** = **Z** in (15) we obtain:

$$\mathcal{P}(i,\mathbf{Y}) = \sum\_{l=0}^{r} \tilde{A}\_{l}(i,\mathbf{Y})^{T} \mathcal{E}\_{i}(\mathbf{Y}) \tilde{A}\_{l}(i,\mathbf{Y}) + T(i,\mathbf{Y})\,.$$

Combining the last two equations we obtain

$$\mathcal{P}\_{\mathbf{Z}}(i,\mathbf{Z}) - \mathcal{P}\_{\mathbf{Z}}(i,\mathbf{Y}) = \sum\_{l=0}^{r} \tilde{A}\_{l}(i,\mathbf{Z})^{T}\mathcal{E}\_{i}(\mathbf{Z}-\mathbf{Y})\tilde{A}\_{l}(i,\mathbf{Z}) + \widehat{W}\_{\mathbf{Z}}(i,\mathbf{Y}) \,.$$
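Identity (15) can also be checked numerically. The sketch below uses a single mode (*N* = 1, so E*<sub>i</sub>*(**Y**) = *Y*) with *r* = 1, toy data of our own, and keeps *R*(*i*, ·) positive definite so that the pseudoinverse is an ordinary inverse; it verifies that P(**Y**) computed directly agrees with the right-hand side of (15) for an arbitrary **Z**.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2
A = [rng.standard_normal((n, n)) / 3 for _ in range(2)]   # A_0, A_1
B = [rng.standard_normal((n, m)) / 3 for _ in range(2)]   # B_0, B_1
Lm = rng.standard_normal((n, m)) / 5                       # L
R0 = np.eye(m)                                             # R > 0 keeps R(X) invertible
CtC = np.eye(n)                                            # C^T C

def Rop(X):  # R(X) = R + sum_l B_l^T X B_l
    return R0 + sum(b.T @ X @ b for b in B)

def Sop(X):  # S(X) = sum_l A_l^T X B_l + L
    return sum(a.T @ X @ b for a, b in zip(A, B)) + Lm

def F(X):    # F(X) = -R(X)^{-1} S(X)^T
    return -np.linalg.solve(Rop(X), Sop(X).T)

def P(X):    # Riccati operator P(X), computed directly
    return sum(a.T @ X @ a for a in A) + CtC - Sop(X) @ np.linalg.solve(Rop(X), Sop(X).T)

def P_Z(Y, Z):   # right-hand side of identity (15)
    FZ, FY = F(Z), F(Y)
    Atil = [a + b @ FZ for a, b in zip(A, B)]              # A_l + B_l F(Z)
    T = CtC + FZ.T @ Lm.T + Lm @ FZ + FZ.T @ R0 @ FZ
    W = (FY - FZ).T @ Rop(Y) @ (FY - FZ)                   # W_Z(Y)
    return sum(at.T @ Y @ at for at in Atil) + T - W

Y = np.eye(n)
Z = 2.0 * np.eye(n)
err = np.linalg.norm(P(Y) - P_Z(Y, Z))
print(err)
```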

Now we are ready to investigate the recurrence equation (12), where **X**(**0**) is a suitable matrix. We will prove some properties of the matrix sequence {**X**(*k*)}<sup>∞</sup><sub>*k*=0</sub> defined by the above recurrence equation. The limit of this matrix sequence is a solution to (3)-(4). We derive the following theorem:

**Theorem 4.1.** *Let there exist symmetric matrices* **Xˆ** = (*X*ˆ(1), . . . , *X*ˆ(*N*)) *and* **X**(**0**) = (*X*(1)(0), . . . , *X*(*N*)(0)) *such that (for i* = 1, . . . , *N):*

(*a*) **Xˆ** ∈ *dom* P† *with KerR*(*i*, **Xˆ**) ⊆ *KerS*(*i*, **Xˆ**)*;*
(*b*) P(*i*, **Xˆ**) ≥ *X*ˆ(*i*)*;*
(*c*) **X**(**0**) ≥ **Xˆ** *;*
(*d*) P(*i*, **X**(0)) ≤ *X*(*i*)(0)*.*

*Then for the matrix sequences* {*X*(1)(*k*)}<sup>∞</sup> *<sup>k</sup>*=1,..., {*X*(*N*)(*k*)}<sup>∞</sup> *<sup>k</sup>*=<sup>1</sup> *defined by (12) the following properties are satisfied:*

(i) *We have* **<sup>X</sup>**(*k*) <sup>≥</sup> **Xˆ** , **<sup>X</sup>**(*k*) <sup>≥</sup> **<sup>X</sup>**(*k*+1) *and*

$$\mathcal{P}(i, \mathbf{X}^{(k)}) = X(i)^{(k+1)} + \sum\_{l=0}^{r} \tilde{A}\_l (i, \mathbf{X}^{(k)})^T \mathcal{E}\_{i1} (\mathbf{X}^{(k)} - \mathbf{X}^{(k+1)}) \tilde{A}\_l (i, \mathbf{X}^{(k)}),$$

*where i* = 1, 2, . . . , *N*, *k* = 0, 1, 2, . . .*;*

(ii) *the sequences* {*X*(1)(*k*)},..., {*X*(*N*)(*k*)} *converge to the maximal solution* **X˜** *of set of equations (3)-(4) and* **X˜** <sup>≥</sup> **Xˆ** *.*

*Proof.* Let *<sup>k</sup>* <sup>=</sup> 0. We will prove the inequality **<sup>X</sup>**(0) <sup>≥</sup> **<sup>X</sup>**(1). From iteration (12) for *<sup>k</sup>* <sup>=</sup> 1 with **<sup>X</sup>**(**0**) <sup>∈</sup> *dom* <sup>P</sup>† and for each *<sup>i</sup>* we get:

$$\begin{split} X(i)^{(1)} &= \sum\_{l=0}^{r} \tilde{A}\_{l}(i, \mathbf{X}^{(0)})^{T} \left( \mathcal{E}\_{i1}(\mathbf{X}^{(1)}) + p\_{ii} X(i)^{(0)} + \mathcal{E}\_{i2}(\mathbf{X}^{(0)}) \right) \\ &\quad \times \tilde{A}\_{l}(i, \mathbf{X}^{(0)}) + T(i, \mathbf{X}^{(0)}) \, . \end{split}$$

We will derive an expression for *X*(*i*)^{(0)} − *X*(*i*)^{(1)}. We obtain

$$X(i)^{(0)} - X(i)^{(1)} = \sum_{l=0}^{r} \tilde{A}_l(i, \mathbf{X}^{(0)})^T \, \mathcal{E}_{i1}(\mathbf{X}^{(0)} - \mathbf{X}^{(1)}) \, \tilde{A}_l(i, \mathbf{X}^{(0)}) + X(i)^{(0)} - \mathcal{P}(i, \mathbf{X}^{(0)}) .$$

We conclude *X*(*i*)^{(0)} − *X*(*i*)^{(1)} ≥ 0 for *i* = 1, 2, . . . , *N* under assumption (*d*) of the theorem. Beginning with **X**^{(0)} and using iteration (12), we construct the matrix sequences {*F*(*i*, **X**^{(k)})}_{k≥0} and {*X*(*i*)^{(k)}}_{k≥1}. We will prove by induction the following statements for *i* = 1, . . . , *N*: *X*(*i*)^{(k)} ≥ *X̂*(*i*), *X*(*i*)^{(k)} ≥ *X*(*i*)^{(k+1)} and **X**^{(k)} ∈ *dom* P†. Assume they hold up to *k* = *p* − 1; we establish them for *k* = *p*.


We will prove *X*(*i*)^{(p)} ≥ *X̂*(*i*) for *i* = 1, . . . , *N*. Using (15) with **Y** = **X̂** and **Z** = **X**^{(p−1)}, we form the difference

$$X(i)^{(p)} - \hat{X}(i) = \sum_{l=0}^{r} \tilde{A}_l(i, \mathbf{X}^{(p-1)})^T \left( \mathcal{E}_{i1}(\mathbf{X}^{(p)}) + p_{ii} X(i)^{(p-1)} + \mathcal{E}_{i2}(\mathbf{X}^{(p-1)}) \right) \tilde{A}_l(i, \mathbf{X}^{(p-1)}) + T(i, \mathbf{X}^{(p-1)}) - \hat{X}(i).$$

Based on identity (15) in the form

$$\mathcal{P}(i, \hat{\mathbf{X}}) = \mathcal{P}_{\mathbf{X}^{(p-1)}}(i, \hat{\mathbf{X}}) = \sum_{l=0}^{r} \tilde{A}_{l}(i, \mathbf{X}^{(p-1)})^{T} \, \mathcal{E}_{i}(\hat{\mathbf{X}}) \, \tilde{A}_{l}(i, \mathbf{X}^{(p-1)}) + T(i, \mathbf{X}^{(p-1)}) - \hat{W}_{\mathbf{X}^{(p-1)}}(i, \hat{\mathbf{X}}) ,$$

we derive

$$\begin{split} &X(i)^{(p)} - \hat{X}(i) - \mathcal{P}(i, \hat{\mathbf{X}}) \\ &= \sum_{l=0}^{r} \tilde{A}_{l}(i, \mathbf{X}^{(p-1)})^{T} \left( \mathcal{E}_{i1}(\mathbf{X}^{(p)} - \hat{\mathbf{X}}) + p_{ii}(X(i)^{(p-1)} - \hat{X}(i)) + \mathcal{E}_{i2}(\mathbf{X}^{(p-1)} - \hat{\mathbf{X}}) \right) \\ &\quad \times \tilde{A}_{l}(i, \mathbf{X}^{(p-1)}) - \hat{X}(i) + \hat{W}_{\mathbf{X}^{(p-1)}}(i, \hat{\mathbf{X}}) \,. \end{split}$$


Iterations for a General Class of Discrete-Time Riccati-Type Equations: A Survey and Comparison 165


We know that <sup>P</sup>(*i*, **<sup>X</sup>**ˆ) <sup>−</sup> *<sup>X</sup>*ˆ(*i*) <sup>≥</sup> 0. Then *<sup>X</sup>*(*i*)(*p*) <sup>−</sup> *<sup>X</sup>*ˆ(*i*) <sup>≥</sup> 0 for all *<sup>i</sup>* <sup>=</sup> 1, . . . , *<sup>N</sup>*.

Lemma 4.3 confirms that **X**^{(p)} ∈ *dom* P†. We compute *F*(*i*, **X**^{(p)}) = −*R*(*i*, **X**^{(p)})† *S*(*i*, **X**^{(p)})^T. Next, we obtain the matrices *X*(*i*)^{(p+1)} from (12) and we will prove *X*(*i*)^{(p)} ≥ *X*(*i*)^{(p+1)} for *i* = 1, . . . , *N*. After some matrix manipulations we derive

$$\begin{split} &X(i)^{(p)} - X(i)^{(p+1)} \\ &= \sum_{l=0}^{r} \tilde{A}_{l}(i, \mathbf{X}^{(p)})^{T} \, \mathcal{E}_{i1}(\mathbf{X}^{(p)} - \mathbf{X}^{(p+1)}) \, \tilde{A}_{l}(i, \mathbf{X}^{(p)}) \\ &\quad + \sum_{l=0}^{r} \tilde{A}_{l}(i, \mathbf{X}^{(p-1)})^{T} \left( p_{ii}(X(i)^{(p-1)} - X(i)^{(p)}) + \mathcal{E}_{i2}(\mathbf{X}^{(p-1)} - \mathbf{X}^{(p)}) \right) \\ &\quad \times \tilde{A}_{l}(i, \mathbf{X}^{(p-1)}) + \hat{W}_{\mathbf{X}^{(p-1)}}(i, \mathbf{X}^{(p)}) \,. \end{split}$$

It is easy to see that *<sup>X</sup>*(*i*)(*p*) <sup>−</sup> *<sup>X</sup>*(*i*)(*p*+1) <sup>≥</sup> 0 for *<sup>i</sup>* <sup>=</sup> 1, 2, . . . , *<sup>N</sup>* from the last equation. Further on, we have to show that

$$\mathcal{P}(i, \mathbf{X}^{(p)}) = X(i)^{(p+1)} + \sum_{l=0}^{r} \tilde{A}_l(i, \mathbf{X}^{(p)})^T \, \mathcal{E}_{i1}(\mathbf{X}^{(p)} - \mathbf{X}^{(p+1)}) \, \tilde{A}_l(i, \mathbf{X}^{(p)})$$

for *i* = 1, . . . , *N*.

We have

$$\mathcal{P}(i, \mathbf{X}^{(p)}) = \sum_{l=0}^{r} \tilde{A}_l(i, \mathbf{X}^{(p)})^T \, \mathcal{E}_i(\mathbf{X}^{(p)}) \, \tilde{A}_l(i, \mathbf{X}^{(p)}) + T(i, \mathbf{X}^{(p)})$$

and

$$\begin{split} X(i)^{(p+1)} &= T(i, \mathbf{X}^{(p)}) + \sum_{l=0}^{r} \tilde{A}_{l}(i, \mathbf{X}^{(p)})^T \\ &\quad \times \left( \mathcal{E}_{i1}(\mathbf{X}^{(p+1)}) + p_{ii} X(i)^{(p)} + \mathcal{E}_{i2}(\mathbf{X}^{(p)}) \right) \tilde{A}_{l}(i, \mathbf{X}^{(p)}) \,. \end{split}$$

Subtracting the last two equations, we obtain

$$\mathcal{P}(i, \mathbf{X}^{(p)}) = X(i)^{(p+1)} + \sum_{l=0}^{r} \tilde{A}_l(i, \mathbf{X}^{(p)})^T \, \mathcal{E}_{i1}(\mathbf{X}^{(p)} - \mathbf{X}^{(p+1)}) \, \tilde{A}_l(i, \mathbf{X}^{(p)}) \,,$$

for *i* = 1, . . . , *N*.

Thus we obtain a nonincreasing sequence {**X**^{(k)}}_{k=1}^{∞} of symmetric matrices, bounded below by **X̂**, which converges to **X̃** with **X̃** ≥ **X̂**. Hence, **X̃** ∈ *dom* P† by Lemma 4.3.

The theorem is proved.
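Iteration (12) itself is not reproduced in this chunk; as a minimal numerical sketch, one can apply the plain fixed-point map X^{(k+1)}(i) = P(i, X^{(k)}), with P(i, X), E_i(X) and R(i, X) coded directly from equation (1). All data below are illustrative (N = 2 modes, n = 2, r = 1, tiny coefficient norms so that the map is contractive), not taken from the chapter:

```python
import numpy as np

# Illustrative data: N modes, n states, r+1 noise terms (all made up here).
n, N, r = 2, 2, 1
p = np.array([[0.7, 0.3], [0.4, 0.6]])                 # transition probabilities p_ij
A = {(l, i): 0.1 * np.eye(n) + 0.05 * (l + i + 1) * np.ones((n, n))
     for l in range(r + 1) for i in range(N)}
B = {(l, i): 0.1 * np.ones((n, 1)) for l in range(r + 1) for i in range(N)}
C = {i: np.eye(n) for i in range(N)}
L = {i: np.zeros((n, 1)) for i in range(N)}
R = {i: np.eye(1) for i in range(N)}

def E(i, X):
    """E_i(X) = sum_j p_ij X(j)."""
    return sum(p[i, j] * X[j] for j in range(N))

def P(i, X):
    """Right-hand side of the DTGARE (1) for mode i."""
    Ei = E(i, X)
    S = sum(A[l, i].T @ Ei @ B[l, i] for l in range(r + 1)) + L[i]
    Ri = R[i] + sum(B[l, i].T @ Ei @ B[l, i] for l in range(r + 1))
    Q = sum(A[l, i].T @ Ei @ A[l, i] for l in range(r + 1)) + C[i].T @ C[i]
    return Q - S @ np.linalg.solve(Ri, S.T)

# Fixed-point sweep X(i) <- P(i, X), started from X(i) = C(i)^T C(i).
X = [C[i].T @ C[i] for i in range(N)]
for _ in range(60):
    X = [P(i, X) for i in range(N)]

residual = max(np.linalg.norm(X[i] - P(i, X)) for i in range(N))
```

With these small coefficient matrices the sweep converges geometrically; the residual measures how well the computed tuple satisfies (1) in every mode.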


Thus, following the above theorem, we can compute the maximal solution **X̃** of the set of equations (3)-(4) by applying iteration (12). The next question is: how can the LMI method be applied in this case?

Let us consider the modified optimization problem:

$$\begin{array}{l}
\max \sum_{i=1}^{N} \langle I, X(i) \rangle \\
\text{subject to } (i = 1, \dots, N) \\
\begin{pmatrix} -X(i) + Q(i, \mathbf{X}) & S(i, \mathbf{X}) \\ S(i, \mathbf{X})^T & R(i, \mathbf{X}) \end{pmatrix} \ge 0 \\
X(i) = X(i)^T .
\end{array} \tag{16}$$

**Theorem 4.2.** *Assume that* (**A**, **B**) *is stabilizable and there exists a solution to the inequalities* <sup>P</sup>(*i*, **<sup>X</sup>**) <sup>−</sup> *<sup>X</sup>*(*i*) <sup>≥</sup> <sup>0</sup> *for i* <sup>=</sup> 1, . . . , *N. Then there exists a maximal solution* **<sup>X</sup>**<sup>+</sup> *of (3)-(4) if and only if there exists a solution* **<sup>X</sup>**˜ *for the above convex programming problem (16) with* **<sup>X</sup>**<sup>+</sup> <sup>≡</sup> **<sup>X</sup>**˜ *.*

*Proof.* Note that the matrix **X** = (*X*(1),..., *X*(*N*)) satisfies the restrictions of optimization problem (16) if and only if

$$\begin{cases}
-X(i) + Q(i, \mathbf{X}) - S(i, \mathbf{X}) \, R(i, \mathbf{X})^{\dagger} \, S(i, \mathbf{X})^{T} = \mathcal{P}(i, \mathbf{X}) - X(i) \ge 0, \\
R(i, \mathbf{X}) \ge 0, \\
\operatorname{Ker} R(i, \mathbf{X}) \subseteq \operatorname{Ker} S(i, \mathbf{X}) \iff S(i, \mathbf{X}) \left( I - R(i, \mathbf{X}) \, R(i, \mathbf{X})^{\dagger} \right) = 0 \ \ \text{(Lemma 4.1 (iii))}, \\
\text{for } i = 1, \dots, N.
\end{cases} \tag{17}$$

The last statement follows immediately by Lemma 4.2.
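The equivalence (17) can be illustrated numerically. The sketch below uses ad hoc 2×2 matrices (a singular R, to exercise the pseudoinverse case) and checks that all three conditions of (17) hold together with positive semidefiniteness of the block matrix from (16); the matrix M plays the role of −X(i) + Q(i, **X**):

```python
import numpy as np

R = np.array([[2.0, 0.0], [0.0, 0.0]])        # PSD but singular
S = np.array([[1.0, 0.0], [0.0, 0.0]])        # Ker R ⊆ Ker S by construction
Rp = np.linalg.pinv(R)                        # Moore-Penrose pseudoinverse R†
M = S @ Rp @ S.T + np.eye(2)                  # plays the role of -X + Q

# Block matrix from the LMI constraint in (16).
block = np.block([[M, S], [S.T, R]])

def psd(T, tol=1e-10):
    """Positive semidefiniteness via the smallest eigenvalue."""
    return np.linalg.eigvalsh(T).min() >= -tol

kernel_ok = np.allclose(S @ (np.eye(2) - R @ Rp), 0.0)   # S(I - R R†) = 0
schur_ok = psd(M - S @ Rp @ S.T)                         # generalized Schur complement
```

Here `psd(block)`, `psd(R)`, `kernel_ok` and `schur_ok` all evaluate to true, matching the characterization (17); dropping the kernel condition (e.g. putting a nonzero in the second column of S) breaks positive semidefiniteness of the block.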

Assume that **X**+ is the maximal solution of (3)-(4). Then **X**+ ≥ **X**, and hence *tr X*+(1) + . . . + *tr X*+(*N*) ≥ *tr X*(1) + . . . + *tr X*(*N*), for any matrix **X** satisfying the restrictions of (16) (by Theorem 4.1 applied with **X̂** = **X**); therefore **X**+ is a solution of optimization problem (16).

Further on, suppose that **X̃** is a solution of optimization problem (16). The inequalities (17) are fulfilled for **X̃**, i.e. **X̃** ∈ *dom* P†, and the assumptions of Theorem 4.1 hold with **X̂** = **X̃**. Thus, there exists the maximal solution **X**+ with **X**+ ≥ **X̃**. Moreover, the optimality of **X̃** means that

$$\operatorname{tr}\left(X^+(1) - \tilde{X}(1)\right) + \dots + \operatorname{tr}\left(X^+(N) - \tilde{X}(N)\right) \le 0$$

and then *X*+(*j*) − *X̃*(*j*) = 0 for *j* = 1, . . . , *N*, i.e. **X**+ ≡ **X̃**. The theorem is proved.
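The last step rests on the elementary fact that a positive semidefinite matrix with nonpositive trace must vanish; in the notation of the proof:

```latex
D_j := X^+(j) - \tilde{X}(j) \succeq 0
  \;\Longrightarrow\;
  \operatorname{tr} D_j = \sum_{s=1}^{n} \lambda_s(D_j) \ge 0 ,
\qquad
\sum_{j=1}^{N} \operatorname{tr} D_j \le 0
  \;\Longrightarrow\;
  \operatorname{tr} D_j = 0
  \;\Longrightarrow\;
  D_j = 0 .
% All eigenvalues of D_j are nonnegative with zero sum, hence zero.
```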

Let us consider the set of equations (7)-(8) under the assumption that *R*(*i*) + ∑_{l=0}^{r} *B_l*(*i*)^T *Y*(*i*) *B_l*(*i*) ≥ 0, *i* = 1, . . . , *N*. Then optimization problem (11) is transformed into the new optimization problem:

$$\begin{array}{l}
\max \sum_{i=1}^{N} \langle I, Y(i) \rangle \\
\text{subject to } (i = 1, \dots, N) \\
\begin{pmatrix}
-Y(i) + \hat{C}^{T}(i)\hat{C}(i) + \sum_{j=1}^{N} \gamma_{ij} \, Y(j) + \sum_{l=0}^{r} \hat{A}_{l}(i)^{T} \, Y(i) \, \hat{A}_{l}(i) & \hat{S}(i, Y(i)) \\
\hat{S}(i, Y(i))^{T} & R(i) + \sum_{l=0}^{r} B_{l}(i)^{T} \, Y(i) \, B_{l}(i)
\end{pmatrix} \ge 0 \\
Y(i) = Y(i)^{T} .
\end{array} \tag{18}$$



It is easy to verify that the solution of (18) is the maximal solution to (7)-(8) under the positive semidefinite assumption on the matrices *R*(*i*) + ∑_{l=0}^{r} *B_l*(*i*)^T *Y*(*i*) *B_l*(*i*), *i* = 1, . . . , *N*.

We investigate the numerical behavior of the LMI approach applied to the described optimization problems LMI: (16) and LMI(Y): (18) for finding the maximal solution to the set of discrete-time generalized Riccati equations (3)-(4). In addition, we compare these LMI solvers with the derived recurrence equations (12) for the maximal solution to the same set of equations. We carry out several experiments for this purpose. In the experiments in this section we construct a family of examples (*N* = 3, *k* = 3) with the weighting matrices

$$\begin{aligned} Q(1) &= \text{diag}[0;\, 0.5;\, \dots;\, 0.5] \,, \quad Q(2) = \text{diag}[0;\, 1;\, 1;\, \dots;\, 1] \,, \\ Q(3) &= \text{diag}[0;\, 0.05;\, 0.05;\, \dots;\, 0.05] \,, \\ R(1) &= R(2) = R(3) = zeros(3, 3) \,, \end{aligned}$$

zero matrices *L*(1), *L*(2), *L*(3), and the transition probability matrix introduced in Example 2.1.

**Example 4.1.** *We consider the case of r* = 1, *n* = 6, 7*, where the coefficient real matrices A*_0(*i*), *A*_1(*i*), *B*_0(*i*), *B*_1(*i*), *L*(*i*), *i* = 1, 2, 3 *are given as follows (using the* MATLAB *notation):*

$$\begin{array}{l} A_{0}(1) = randn(n, n)/10; \; A_{0}(2) = randn(n, n)/5; \; A_{0}(3) = randn(n, n)/5; \\ A_{1}(1) = randn(n, n)/100; \; A_{1}(2) = randn(n, n)/50; \; A_{1}(3) = randn(n, n)/100; \\ B_{0}(1) = 100 * full(sprand(n, k, 0.07)); \; B_{0}(2) = 100 * full(sprand(n, k, 0.07)); \\ B_{0}(3) = 100 * full(sprand(n, k, 0.07)); \\ B_{1}(1) = 100 * full(sprand(n, k, 0.07)); \; B_{1}(2) = 100 * full(sprand(n, k, 0.07)); \\ B_{1}(3) = 100 * full(sprand(n, k, 0.07)). \end{array}$$

*In our definitions the functions randn(p,k) and sprand(q,m,0.3) return a p-by-k matrix of pseudorandom scalar values and a q-by-m sparse matrix, respectively (for more information see the* MATLAB *description).*

Results from the experiments are given in Table 4. The parameters *mIt* and *avIt* have the same meaning as in the previous tables. In addition, the CPU time in seconds is included. The optimization problems (16) and (18) need equal numbers of iteration steps (the column *avIt*) for finding the maximal solution to the set of equations (3)-(4). However, the executed examples demonstrate that LMI problem (18) is faster than LMI problem (16). Moreover, iterative method (12) is much faster than the LMI approaches and it achieves the same accuracy.

| | (12) *mIt* | (12) *avIt* | LMI for (16) *mIt* | LMI for (16) *avIt* | LMI for (18) *mIt* | LMI for (18) *avIt* |
|---|---|---|---|---|---|---|
| *n* = 6 | 23 | 15.9 | 59 | 37.9 | 59 | 37.9 |
| *n* = 7 | 27 | 16.9 | 64 | 39.2 | 63 | 37.5 |
| CPU time, 20 runs (s) | 0.41 | | 72.12 | | 14.48 | |


**Table 4.** Comparison between methods for the maximal solution in Example 4.1.
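The random coefficient families of Example 4.1 can be reproduced approximately in NumPy. Note the hedge: the generators below only mimic MATLAB's `randn` and `100*full(sprand(n,k,0.07))`; in particular, `sprand` fixes roughly `0.07*n*k` nonzeros, while the mask used here matches that density only in expectation:

```python
import numpy as np

rng = np.random.default_rng(0)   # seeded for reproducibility
n, k = 6, 3

def coeff_A(scale):
    """Analogue of MATLAB randn(n, n)/scale."""
    return rng.standard_normal((n, n)) / scale

def coeff_B(density=0.07):
    """Rough analogue of 100*full(sprand(n, k, density)):
    each entry is nonzero independently with probability `density`."""
    mask = rng.random((n, k)) < density
    return 100.0 * mask * rng.random((n, k))

# One sample family for the three operating modes i = 1, 2, 3.
A0 = [coeff_A(s) for s in (10, 5, 5)]
A1 = [coeff_A(s) for s in (100, 50, 100)]
B0 = [coeff_B() for _ in range(3)]
B1 = [coeff_B() for _ in range(3)]
```

Any such sample can then be fed to an implementation of iteration (12) or to the LMI solvers for (16) and (18).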



We introduce an additional example where the above optimization problems are compared.

**Example 4.2.** *The parameters of this system are presented as follows. The coefficient matrices are* (*r* = 1, *k* = 3, *n* = 6)*:*

$$A_0(1) = 0.001 \cdot \begin{bmatrix} 58 & 20 & 66 & 60 & 45 & 13 \\ 7 & 33 & 45 & 3 & 33 & 45 \\ 21 & 19 & 36 & 20 & 11 & 42 \\ 58 & 34 & 26 & 38 & 28 & 20 \\ 7 & 51 & 53 & 31 & 59 & 59 \\ 40 & 16 & 56 & 17 & 27 & 29 \end{bmatrix}, \quad A_0(2) = 0.001 \cdot \begin{bmatrix} 66 & 29 & 54 & 58 & 19 & 36 \\ 20 & 52 & 10 & 22 & 39 & 17 \\ 62 & 42 & 4 & 44 & 32 & 63 \\ 18 & 0 & 4 & 11 & 5 & 17 \\ 21 & 27 & 48 & 47 & 49 & 11 \\ 33 & 8 & 58 & 14 & 64 & 41 \end{bmatrix},$$

$$A_0(3) = 0.001 \cdot \begin{bmatrix} 63 & 54 & 24 & 17 & 46 & 13 \\ 27 & 34 & 44 & 63 & 65 & 61 \\ 8 & 18 & 11 & 64 & 46 & 33 \\ 56 & 51 & 6 & 12 & 65 & 3 \\ 45 & 61 & 16 & 11 & 22 & 14 \\ 22 & 3 & 48 & 6 & 39 & 9 \end{bmatrix}, \quad A_1(1) = 0.001 \cdot \begin{bmatrix} 5 & 7 & 1 & 5 & 8 & 3 \\ 1 & 6 & 7 & 5 & 8 & 6 \\ 1 & 3 & 7 & 10 & 9 & 5 \\ 2 & 6 & 7 & 3 & 9 & 7 \\ 6 & 7 & 9 & 8 & 3 & 0 \\ 2 & 7 & 8 & 4 & 7 & 1 \end{bmatrix},$$

$$A_1(2) = 0.001 \cdot \begin{bmatrix} 9 & 8 & 5 & 5 & 8 & 9 \\ 9 & 9 & 10 & 3 & 3 & 1 \\ 8 & 6 & 9 & 0 & 9 & 2 \\ 6 & 7 & 3 & 8 & 10 & 7 \\ 10 & 7 & 9 & 7 & 4 & 8 \\ 6 & 7 & 6 & 9 & 6 & 2 \end{bmatrix}, \quad A_1(3) = 0.001 \cdot \begin{bmatrix} 7 & 6 & 1 & 6 & 2 & 6 \\ 5 & 3 & 7 & 5 & 6 & 9 \\ 5 & 9 & 2 & 8 & 5 & 1 \\ 6 & 3 & 7 & 3 & 5 & 7 \\ 8 & 8 & 2 & 9 & 4 & 8 \\ 7 & 4 & 8 & 5 & 3 & 9 \end{bmatrix}.$$

*Coefficient matrices B*0(*i*), *B*1(*i*), *i* = 1, 2, 3 *are* 6 × 3 *zero matrices with nonzero elements:*

$$B\_0(1)(5,3) = 10.07; \quad B\_0(1)(3,1) = 2.56; \; B\_0(2)(1,2) = 6.428; \quad B\_0(2)(4,2) = 5.48;$$


*This choice of the matrices B*_0(*i*), *B*_1(*i*), *i* = 1, 2, 3 *guarantees that the matrices R*(*i*, **X**) *are positive semidefinite, i.e. there are symmetric matrices* **X** *which belong to dom* P†*. The remaining coefficient matrices are as introduced above.*

We find the maximal solution to (3)-(4) for the constructed example with iterative method (12) and with the LMI approach applied to optimization problems (16) and (18). The results are the following. Iteration (12) needs 15 iteration steps to achieve the maximal solution. The computed maximal solution **W** has the eigenvalues

$$\begin{aligned} \text{Eig}\, W(1) &= (4.9558e{-}5;\ 0.50058;\ 0.50004;\ 0.50002;\ 0.50001;\ 0.5), \\ \text{Eig}\, W(2) &= (0.00019562;\ 1.0007;\ 1.0001;\ 1;\ 1;\ 1), \\ \text{Eig}\, W(3) &= (9.33e{-}5;\ 0.05041;\ 0.05005;\ 0.050018;\ 0.050011;\ 0.050003). \end{aligned} \tag{19}$$

The LMI approach for optimization problem (16) does not give a satisfactory result. After 32 iteration steps the calculations stop with the computed maximal solution **V**. However, the norm of the difference between the two solutions **W** and **V** is ‖*W*(1) − *V*(1)‖ = 1.0122e−9, ‖*W*(2) − *V*(2)‖ = 2.8657e−6, ‖*W*(3) − *V*(3)‖ = 3.632e−6.

The LMI approach for optimization problem (18) needs 28 iteration steps to compute the maximal solution to (1). This solution **Z** has the same eigenvalues as in (19). The norm of the difference between the two solutions **W** and **Z** is ‖*W*(1) − *Z*(1)‖ = 7.4105e−12, ‖*W*(2) − *Z*(2)‖ = 2.8982e−11, ‖*W*(3) − *Z*(3)‖ = 3.0796e−11.

The results from this example show that the LMI approach applied to optimization problem (18) gives more accurate results than the LMI method for (16); the results obtained for problem (16) are not reliable. A researcher has to be careful when applying the LMI approach for solving a set of general discrete-time equations in the positive semidefinite case.

**6. References**

[1] Costa, O.L.V. & Marques, R.P. (1999). Maximal and Stabilizing Hermitian Solutions for Discrete-Time Coupled Algebraic Riccati Equations. *Mathematics of Control, Signals and Systems*, Vol. 12, 167-195, ISSN: 0932-4194.

[2] Costa, O.L.V., Fragoso, M.D. & Marques, R.P. (2005). *Discrete-Time Markov Jump Linear Systems*, Springer-Verlag, Berlin, ISBN: 978-1-85233-761-2.

[3] Costa, O.L.V. & de Paulo, W.L. (2007). Indefinite quadratic with linear costs optimal control of Markov jump with multiplicative noise systems, *Automatica*, Vol. 43, 587-597, ISSN: 0005-1098.

[4] Dragan, V. & Morozan, T. (2008). Discrete-time Riccati type Equations and the Tracking Problem, *ICIC Express Letters*, Vol. 2, 109-116, ISSN: 1881-803X.

[5] Dragan, V. & Morozan, T. (2010). A Class of Discrete Time Generalized Riccati Equations, *Journal of Difference Equations and Applications*, Vol. 16, No. 4, 291-320, ISSN: 1563-5120.

[6] Dragan, V., Morozan, T. & Stoica, A.M. (2010a). Iterative algorithm to compute the maximal and stabilising solutions of a general class of discrete-time Riccati-type equations, *International Journal of Control*, Vol. 83, No. 4, 837-847, ISSN: 1366-5820.

[7] Dragan, V., Morozan, T. & Stoica, A.M. (2010b). *Mathematical Methods in Robust Control of Discrete-time Linear Stochastic Systems*, Springer, ISBN: 978-1-4419-0629-8.

[8] Freiling, G. & Hochhaus, A. (2003). Properties of the Solutions of Rational Matrix Difference Equations, *Comput. Math. Appl.*, Vol. 45, 1137-1154, ISSN: 0898-1221.

[9] Ivanov, I. (2007). Properties of Stein (Lyapunov) iterations for solving a general Riccati equation, *Nonlinear Analysis Series A: Theory, Methods & Applications*, Vol. 67, 1155-1166, ISSN: 0362-546X.

[10] Ivanov, I. (2011). An Improved Method for Solving a System of Discrete-Time Generalized Riccati Equations, *Journal of Numerical Mathematics and Stochastics*, Vol. 3, No. 1, 57-70, http://www.jnmas.org/jnmas3-7.pdf, ISSN: 2151-2302.

[11] Ivanov, I. & Netov, N. (2012). A new iteration to discrete-time coupled generalized Riccati equations, submitted to *Computational & Applied Mathematics*, ISSN: 0101-8205.

[12] Li, X., Zhou, X. & Rami, M. (2003). Indefinite stochastic linear quadratic control with Markovian jumps in infinite time horizon. *Journal of Global Optimization*, Vol. 27, 149-175, ISSN: 0925-5001.

[13] Rami, M. & Ghaoui, L. (1996). LMI Optimization for Nonstandard Riccati Equations Arising in Stochastic Control. *IEEE Transactions on Automatic Control*, Vol. 41, 1666-1671, ISSN: 0018-9286.

[14] Rami, M. & Zhou, X. (2000). Linear matrix inequalities, Riccati equations, and indefinite stochastic linear quadratic controls. *IEEE Transactions on Automatic Control*, Vol. 45, 1131-1143, ISSN: 0018-9286.

[15] Rami, M.A., Zhou, X.Y. & Moore, J.B. (2000). Well-posedness and attainability of indefinite stochastic linear quadratic control in infinite time horizon, *Systems & Control Letters*, Vol. 41, 123-133, ISSN: 0167-6911.

[16] Song, X., Zhang, H. & Xie, L. (2009). Stochastic linear quadratic regulation for discrete-time linear systems with input delay, *Automatica*, Vol. 45, 2067-2073, ISSN: 0005-1098.
