**3. System optimization with parameter estimation**

Now an expanded optimal control problem, referred to as Problem (E), is introduced; it combines the real plant and the cost function of Problem (P) with the model in Problem (M):

$$\begin{aligned} \min_{u(k)} J_2(u) &= \frac{1}{2} x(N)^{\mathrm{T}} S(N) x(N) + \gamma(N) + \sum_{k=0}^{N-1} \Big[ \frac{1}{2}\left( x(k)^{\mathrm{T}} Q x(k) + u(k)^{\mathrm{T}} R u(k) \right) + \gamma(k) \\ &\quad + \frac{1}{2} r_1 \|u(k) - v(k)\|^2 + \frac{1}{2} r_2 \|x(k) - z(k)\|^2 \Big] \end{aligned}$$

subject to

these algorithms were introduced by Robert [9–11], and Robert and Becerra [12–14], respectively. The basic idea of DISOPE is to apply model-based optimal control, whose structure and parameters differ from those of the original optimal control problem, so as to obtain the correct optimal solution of the original problem in spite of model-reality differences. Recently, this algorithm has been extended to cover both deterministic and stochastic versions, and it is known as the integrated optimal control and parameter estimation (IOCPE) algorithm [15, 16]. On the other hand, the application of optimization techniques, particularly the conjugate gradient method, to solving the optimal control problem [17–19] has also been studied, where the open-loop control strategy

In this chapter, the conjugate gradient approach [17, 19] is employed to solve the linear model-based optimal control problem in order to obtain the optimal solution of the original optimal control problem. In our approach, the simplified model, to which the adjusted parameters are added, is formulated first. Then, an expanded optimal control problem, which combines the system dynamics and the cost function of the original optimal control problem with the simplified model, is introduced. By defining the Hamiltonian function and the augmented cost function, the

corresponding necessary conditions for optimality are derived. Among these necessary conditions, one set is for the modified model-based optimal control problem, one set defines the parameter estimation problem, and one set calculates the multipliers [15]. By virtue of the modified model-based optimal control problem, an equivalent optimization problem is defined, and the related gradient function is determined. With an initial control sequence, the initial gradient and the initial search direction are computed. Then, the control sequences are updated through the line search technique, where the gradient and the search direction satisfy the conjugacy condition [17, 18]. During the iterations, the state and the costate are updated by the control sequence obtained from the conjugate gradient approach. When convergence is achieved within a given tolerance, the iterative solution approximates the correct optimal solution of the original optimal control problem, in spite of model-reality differences. For illustration, examples of linear and nonlinear cases, which are the damped harmonic oscillator [7] and the continuous stirred-tank chemical reactor [8], are studied.

The chapter is organized as follows. In Section 2, the problem statement is described in detail, where the original optimal control problem and the simplified model are discussed. In Section 3, the methodology used is further explained. The necessary conditions for optimality are derived, and the use of the conjugate gradient method in solving the equivalent optimization problem is presented. In Section 4, examples of a damped harmonic oscillator and a continuous stirred-tank chemical reactor are studied. The results show the efficiency of the algorithm proposed. Finally, concluding remarks are made.

**2. Problem statement**

Consider a general class of the discrete-time nonlinear optimal control problem [3, 8], given by

$$\min_{u(k)} J_0(u) = \varphi(x(N), N) + \sum_{k=0}^{N-1} L(x(k), u(k), k) \tag{1}$$

subject to

$$x(k+1) = f(x(k), u(k), k), \quad x(0) = x_0$$

*Control Theory in Engineering*



$$\begin{aligned} x(k+1) &= Ax(k) + Bu(k) + \alpha(k), \quad x(0) = x_0 \\ \frac{1}{2} z(N)^{\mathrm{T}} S(N) z(N) + \gamma(N) &= \varphi(z(N), N) \\ \frac{1}{2}\left( z(k)^{\mathrm{T}} Q z(k) + v(k)^{\mathrm{T}} R v(k) \right) + \gamma(k) &= L(z(k), v(k), k) \\ Az(k) + Bv(k) + \alpha(k) &= f(z(k), v(k), k) \\ v(k) &= u(k) \\ z(k) &= x(k) \end{aligned} \tag{3}$$

where $v(k) \in \Re^m$, $k = 0, 1, \cdots, N-1$, and $z(k) \in \Re^n$, $k = 0, 1, \cdots, N$, are introduced to separate the control and state sequences of the optimization problem from the respective signals in the parameter estimation problem, and $\|\cdot\|$ denotes the usual Euclidean norm. The terms $\frac{1}{2} r_1 \|u(k) - v(k)\|^2$ and $\frac{1}{2} r_2 \|x(k) - z(k)\|^2$ with $r_1, r_2 \in \Re$ are introduced to improve the convexity and to facilitate the convergence of the resulting iterative algorithm. Here, we clarify that the algorithm is designed such that the constraints $v(k) = u(k)$ and $z(k) = x(k)$ are satisfied upon termination of the iterations, assuming that convergence is achieved. Moreover, the state $z(k)$ and the control $v(k)$ are used in the computation of the parameter estimation and matching scheme, while the corresponding state $x(k)$ and control $u(k)$ are reserved for optimizing the model-based optimal control problem. Therefore, system optimization and parameter estimation are separated and mutually integrated.
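To make the role of the penalty terms concrete, the expanded cost of Problem (E) can be evaluated numerically for given sequences. The following Python sketch is illustrative only; the function name and the array layout are assumptions, not part of the original formulation.

```python
import numpy as np

def cost_J2(x, u, v, z, Q, R, S, gamma, r1, r2):
    """Evaluate the expanded cost of Problem (E) for given sequences.

    x has shape (N+1, n); u and v have shape (N, m); z has shape (N, n)
    (only z(0), ..., z(N-1) enter the penalty terms here);
    gamma holds the adjusted parameters gamma(0), ..., gamma(N).
    """
    N = len(u)
    J = 0.5 * x[N] @ S @ x[N] + gamma[N]            # terminal cost
    for k in range(N):
        J += 0.5 * (x[k] @ Q @ x[k] + u[k] @ R @ u[k]) + gamma[k]
        J += 0.5 * r1 * np.sum((u[k] - v[k]) ** 2)  # convexification terms
        J += 0.5 * r2 * np.sum((x[k] - z[k]) ** 2)
    return J
```

When $v(k) = u(k)$ and $z(k) = x(k)$, the penalty terms vanish and the cost reduces to the model-based cost with the adjusted parameters.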


*DOI: http://dx.doi.org/10.5772/intechopen.89711*


*Conjugate Gradient Approach for Discrete Time Optimal Control Problems with Model-Reality…*


#### **3.1 Necessary conditions for optimality**

Define the Hamiltonian function for Problem (E), given by:

$$\begin{aligned} H_2(k) &= \frac{1}{2}\left( x(k)^{\mathrm{T}} Q x(k) + u(k)^{\mathrm{T}} R u(k) \right) + \gamma(k) + \frac{1}{2} r_1 \|u(k) - v(k)\|^2 \\ &\quad + \frac{1}{2} r_2 \|x(k) - z(k)\|^2 + p(k+1)^{\mathrm{T}}\left( Ax(k) + Bu(k) + \alpha(k) \right) - \lambda(k)^{\mathrm{T}} u(k) \\ &\quad - \beta(k)^{\mathrm{T}} x(k) \end{aligned} \tag{4}$$

where $\lambda(k) \in \Re^m$, $k = 0, 1, \cdots, N-1$, $\beta(k) \in \Re^n$, $k = 0, 1, \cdots, N$, and $p(k) \in \Re^n$, $k = 0, 1, \cdots, N$, are modifiers. Using the Hamiltonian function in (4), the cost function in (3) is rewritten as the augmented cost function, that is,

$$\begin{aligned} J_2'(u) &= \frac{1}{2} x(N)^{\mathrm{T}} S(N) x(N) + \gamma(N) + p(0)^{\mathrm{T}} x(0) - p(N)^{\mathrm{T}} x(N) \\ &\quad + \xi(N)\left( \varphi(z(N), N) - \frac{1}{2} z(N)^{\mathrm{T}} S(N) z(N) - \gamma(N) \right) + \Gamma^{\mathrm{T}}\left( x(N) - z(N) \right) \\ &\quad + \sum_{k=0}^{N-1} \Big[ H_2(k) - p(k)^{\mathrm{T}} x(k) + \lambda(k)^{\mathrm{T}} v(k) + \beta(k)^{\mathrm{T}} z(k) \\ &\qquad + \xi(k)\left( L(z(k), v(k), k) - \frac{1}{2}\left( z(k)^{\mathrm{T}} Q z(k) + v(k)^{\mathrm{T}} R v(k) \right) - \gamma(k) \right) \\ &\qquad + \mu(k)^{\mathrm{T}}\left( f(z(k), v(k), k) - Az(k) - Bv(k) - \alpha(k) \right) \Big] \end{aligned} \tag{5}$$

where $p(k)$, $\xi(k)$, $\lambda(k)$, $\beta(k)$, $\mu(k)$, and $\Gamma$ are the appropriate multipliers to be determined later.

Applying the calculus of variations [7, 9, 11, 13, 15] to the augmented cost function in (5), the following necessary conditions for optimality are obtained:

(a) Stationary condition:

$$Ru(k) + B^{\mathrm{T}} p(k+1) - \lambda(k) + r_1\left( u(k) - v(k) \right) = 0 \tag{6}$$

(b) Co-state equation:

$$p(k) = Qx(k) + A^{\mathrm{T}} p(k+1) - \beta(k) + r_2\left( x(k) - z(k) \right) \tag{7}$$


(c) State equation:

$$x(k+1) = Ax(k) + Bu(k) + \alpha(k) \tag{8}$$

(d) Boundary conditions:

$$p(N) = S(N)x(N) + \Gamma \quad \text{and} \quad x(0) = x_0 \tag{9}$$

(e) Adjusted parameter equations:

$$\varphi(z(N), N) = \frac{1}{2} z(N)^{\mathrm{T}} S(N) z(N) + \gamma(N) \tag{10}$$

$$L(z(k), v(k), k) = \frac{1}{2} \left( z(k)^T Q z(k) + v(k)^T R v(k) \right) + \gamma(k) \tag{11}$$

$$f(z(k), v(k), k) = Az(k) + Bv(k) + \alpha(k) \tag{12}$$

(f) Modifier equations:

$$
\Gamma = \nabla\_{\mathbf{z}(N)} \varphi - \mathbf{S}(N)\mathbf{z}(N) \tag{13}
$$

$$\lambda(k) = -\left( \nabla_{v(k)} L - Rv(k) \right) - \left( \frac{\partial f}{\partial v(k)} - B \right)^{\mathrm{T}} \hat{p}(k+1) \tag{14}$$

$$\beta(k) = -\left( \nabla_{z(k)} L - Qz(k) \right) - \left( \frac{\partial f}{\partial z(k)} - A \right)^{\mathrm{T}} \hat{p}(k+1) \tag{15}$$

with $\xi(k) = 1$ and $\mu(k) = \hat{p}(k+1)$.

(g) Separable variables:

$$v(k) = u(k), \quad z(k) = x(k), \quad \hat{p}(k) = p(k). \tag{16}$$

Notice that the necessary conditions for optimality obtained above are divided into three sets. The first set, (6)–(9), gives the necessary conditions for the system optimization problem. The second set, (10)–(12), defines the parameter estimation problem. The third set, (13)–(15), provides the computation of the multipliers. In fact, the necessary conditions in (6)–(9) are the optimality conditions for the modified model-based optimal control problem, and the adjusted parameters, which are calculated from the necessary conditions in (10)–(12), measure the differences between the real plant and the model used.

#### **3.2 Modified model-based optimal control problem**

As a consequence, the modified model-based optimal control problem, which is referred to as Problem (MM), is defined by

$$\begin{aligned} \min_{u(k)} J_3(u) &= \frac{1}{2} x(N)^{\mathrm{T}} S(N) x(N) + \Gamma^{\mathrm{T}} x(N) + \gamma(N) + \sum_{k=0}^{N-1} \Big[ \frac{1}{2}\left( x(k)^{\mathrm{T}} Q x(k) + u(k)^{\mathrm{T}} R u(k) \right) \\ &\quad + \gamma(k) + \frac{1}{2} r_1 \|u(k) - v(k)\|^2 + \frac{1}{2} r_2 \|x(k) - z(k)\|^2 - \lambda(k)^{\mathrm{T}} u(k) - \beta(k)^{\mathrm{T}} x(k) \Big] \end{aligned}$$

subject to

$$x(k+1) = Ax(k) + Bu(k) + \alpha(k), \quad x(0) = x_0 \tag{17}$$

with the specified $\alpha(k)$, $\gamma(k)$, $\lambda(k)$, $\beta(k)$, $\Gamma$, $v(k)$, and $z(k)$, where the boundary conditions are given by $x_0$ and $p(N)$ with the specified multiplier $\Gamma$.

It is obvious that Problem (MM), which is derived from Problem (E), is a modified optimal control problem, also known as a modified linear quadratic regulator problem. Importantly, the necessary conditions in (6)–(9) for Problem (E) are exactly the necessary conditions satisfied by Problem (MM). In addition, because of the quadratic form of the objective function, the conjugate gradient method [17, 18], which is one of the numerical optimization techniques, can be applied to solve Problem (MM).
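Because the objective is quadratic, the conjugate gradient method terminates, in exact arithmetic, after at most as many iterations as there are decision variables. A minimal Python sketch on a generic quadratic $\frac{1}{2} u^{\mathrm{T}} H u - c^{\mathrm{T}} u$ illustrates the method; the function name and interface are illustrative, not from the chapter.

```python
import numpy as np

def cg_quadratic(H, c, tol=1e-10):
    """Minimize 0.5*u'Hu - c'u for symmetric positive definite H.

    Fletcher-Reeves conjugate gradient with exact line search; in exact
    arithmetic it terminates in at most len(c) iterations.
    """
    u = np.zeros_like(c)
    g = H @ u - c                      # gradient of the quadratic
    d = -g                             # initial search direction
    for _ in range(len(c)):
        if np.linalg.norm(g) < tol:
            break
        Hd = H @ d
        a = -(g @ d) / (d @ Hd)        # exact step size along d
        u = u + a * d
        g_new = H @ u - c
        b = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves coefficient
        d = -g_new + b * d
        g = g_new
    return u
```

For Problem (MM), $u$ would stack the controls $u(0), \cdots, u(N-1)$, and $H$ is the (implicit) Hessian of $J_3$ with respect to this stacked vector.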

#### **3.3 Conjugate gradient algorithm**

For simplicity [19], establish Problem (MM) as a nonlinear optimization problem with the initial control given by $u^{(0)} = u(k)^0$ as follows:

$$\min\_{u(k)} J\_3(u) \text{ subject to } u = u(k) \in \mathfrak{R}^m \text{ for } k = 0, 1, \dots, N - 1 \tag{18}$$

Denote this problem as Problem (Q). Moreover, the Hamiltonian function defined in (4) is taken into consideration as an equivalent objective function. Hence, this Hamiltonian function allows the evaluation of the gradient function, which is the stationary condition in (6), by using the iterative solution $u^{(i)} = u(k)^i$ to satisfy the state Eq. (8), which is solved forward in time, and the costate Eq. (7), which is solved backward in time.

Define the gradient function $g : \Re^m \to \Re^m$ as

$$g\left( u^{(i)} \right) = \nabla_u J_3\left( u^{(i)} \right) \tag{19}$$


which is represented by the stationary condition in (6). For an arbitrary initial control $u^{(0)} \in \Re^m$, the initial gradient and the initial search direction are calculated from

$$\mathbf{g}^{(0)} = \mathbf{g}\left(\boldsymbol{u}^{(0)}\right) \tag{20}$$

$$d^{(0)} = -\mathbf{g}^{(0)}.\tag{21}$$

The following line search equation is applied to update the control sequence:

$$u^{(i+1)} = u^{(i)} + a\_i \cdot d^{(i)}\tag{22}$$

where *ai* ∈ *ℜ* is the step size, and its value can be determined from

$$a_i = \arg\min_{a \ge 0} J_3\left( u^{(i)} + a \cdot d^{(i)} \right). \tag{23}$$

After that, the gradient and the search direction are updated by

$$g^{(i+1)} = g\left( u^{(i+1)} \right) \tag{24}$$

$$d^{(i+1)} = -\mathbf{g}^{(i+1)} + b\_i \cdot d^{(i)}\tag{25}$$

with


$$b\_i = \frac{\mathbf{g}^{(i+1)\mathbf{T}} \cdot \mathbf{g}^{(i+1)}}{\mathbf{g}^{(i)\mathbf{T}} \cdot \mathbf{g}^{(i)}}\tag{26}$$

where $i = 0, 1, 2, \ldots$ is the iteration number.

From the discussion above, we present the result as the following proposition:

**Proposition 1.** *Consider Problem (Q). The control sequence $u^{(i)}$, which is defined in (22) and is represented by*

$$u^{(i)} = \left[ \left( u(0) \right)^{\mathrm{T}}, \left( u(1) \right)^{\mathrm{T}}, \cdots, \left( u(N-1) \right)^{\mathrm{T}} \right],$$

*is generated through a set of search direction vectors $d^{(i)}$ whose components are linearly independent. Moreover, the search directions $d^{(i)}$ are mutually conjugate.*

The conjugate gradient algorithm is summarized below:

#### **Conjugate gradient algorithm**

Data: Choose an arbitrary initial control $u^{(0)}$ and a tolerance $\varepsilon$.

Step 0: Compute the initial gradient $g^{(0)}$ from (20) and the initial search direction $d^{(0)}$ from (21), respectively. Set $i = 0$.

Step 1: Solve the state Eq. (8) forward in time from $k = 0$ to $k = N$ with the initial condition in (9) to obtain $x(k)^i$, $k = 0, 1, \cdots, N$.

Step 2: Solve the costate Eq. (7) backward in time from $k = N$ to $k = 0$ with the boundary condition in (9) to obtain $p(k)^i$.

Step 3: Calculate the value of the cost functional $J_3(u^{(i)})$ from (17).

Step 4: Solve (23) to obtain the step size $a_i$.

Step 5: Calculate the control $u^{(i+1)}$ from (22).

Step 6: Evaluate the gradient $g^{(i+1)}$ and the search direction $d^{(i+1)}$, respectively, from (24) and (25), computing $b_i$ from (26). If $g^{(i+1)} = g^{(i)}$ within the given tolerance, stop; else set $i = i + 1$ and go to Step 1.
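The steps above can be sketched in Python for the linear model of Problem (MM). This is an illustrative implementation under assumed array shapes; the function and variable names are not from the chapter, and the exact line search in Step 4 exploits the quadratic cost (a Hessian-vector product obtained from two gradient evaluations) instead of a numerical search.

```python
import numpy as np

def solve_mm_cg(A, B, Q, R, S, N, x0, alpha, lam, beta, Gamma,
                v, z, r1=0.0, r2=0.0, tol=1e-8, max_iter=200):
    """Conjugate gradient solution of Problem (MM), Steps 0-6.

    alpha: (N, n) adjusted parameters; lam: (N, m), beta: (N, n) modifiers;
    Gamma: (n,) terminal modifier; v: (N, m), z: (N, n) separable variables.
    These are quantities supplied by the outer iterative algorithm.
    """
    n, m = B.shape

    def forward(u):                     # state Eq. (8)
        x = np.zeros((N + 1, n))
        x[0] = x0
        for k in range(N):
            x[k + 1] = A @ x[k] + B @ u[k] + alpha[k]
        return x

    def backward(x):                    # costate Eq. (7), boundary (9)
        p = np.zeros((N + 1, n))
        p[N] = S @ x[N] + Gamma
        for k in range(N - 1, -1, -1):
            p[k] = Q @ x[k] + A.T @ p[k + 1] - beta[k] + r2 * (x[k] - z[k])
        return p

    def grad(u):                        # stationary condition (6) as gradient
        p = backward(forward(u))
        return np.array([R @ u[k] + B.T @ p[k + 1] - lam[k]
                         + r1 * (u[k] - v[k]) for k in range(N)])

    u = np.zeros((N, m))
    g = grad(u)                         # Step 0: (20)
    d = -g                              # Step 0: (21)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:     # Step 6 stopping test
            break
        Hd = grad(u + d) - g            # Hessian-vector product (exact: quadratic)
        a = -(g * d).sum() / (d * Hd).sum()   # Step 4: exact line search (23)
        u = u + a * d                   # Step 5: update (22)
        g_new = grad(u)                 # Step 6: (24)
        b = (g_new * g_new).sum() / (g * g).sum()   # (26), Fletcher-Reeves
        d = -g_new + b * d              # (25)
        g = g_new
    return u, g
```

With all modifiers and separable variables set to zero, the sketch reduces to a plain discrete-time linear quadratic regulator solved by conjugate gradients.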

**Remark 1:**

a. Step 0 is the preliminary step for setting the initial search direction based on the gradient direction in using the conjugate gradient algorithm.

b. Steps 1, 2, and 3 are performed to solve the system optimization by using the corresponding control sequence $u^{(i)}$.

c. Steps 4, 5, and 6 are the computation steps in implementing the conjugate gradient algorithm, where the control sequence, the gradient, and the search direction are updated.

#### **3.4 Iterative calculation procedure**

Accordingly, Problem (Q) is solved by using the conjugate gradient algorithm. Indeed, the solution procedure for system optimization with parameter estimation is obtained by combining the conjugate gradient algorithm with the parameter estimation. A summary of the calculation procedure, including the principle of model-reality differences, is listed as follows:

#### **Iterative algorithm based on model-reality differences**

Data: $A$, $B$, $Q$, $R$, $S(N)$, $N$, $x_0$, $r_1$, $r_2$, $k_v$, $k_z$, $k_p$, $f$, $L$. Note that $A$ and $B$ could be determined based on the linearization of $f$ at $x_0$ or from the linear terms of $f$.

Step 0: Compute a nominal solution. Assume that $\alpha(k) = 0$, $k = 0, 1, \cdots, N-1$, and $r_1 = r_2 = 0$. Solve Problem (M) defined by (2) to obtain $u(k)^0$, $k = 0, 1, \cdots, N-1$, and $x(k)^0$, $p(k)^0$, $k = 0, 1, \cdots, N$. Then, with $\alpha(k) = 0$, $k = 0, 1, \cdots, N-1$, and using $r_1$, $r_2$ from the data, set $i = 0$, $v(k)^0 = u(k)^0$, $z(k)^0 = x(k)^0$, and $\hat{p}(k)^0 = p(k)^0$.

Step 1: Compute the parameters *γ*(*k*)<sup>*i*</sup>, *k* = 0, 1, ⋯, *N*, and *α*(*k*)<sup>*i*</sup>, *k* = 0, 1, ⋯, *N* − 1, from (10)–(12). This is called the parameter estimation step.

Step 2: Compute the modifiers Γ<sup>*i*</sup>, *λ*(*k*)<sup>*i*</sup>, and *β*(*k*)<sup>*i*</sup>, *k* = 0, 1, ⋯, *N* − 1, from (13)–(15). Notice that this step requires taking the derivatives of *f* and *L* with respect to *v*(*k*)<sup>*i*</sup> and *z*(*k*)<sup>*i*</sup>.

Step 3: With *γ*(*k*)<sup>*i*</sup>, *α*(*k*)<sup>*i*</sup>, Γ<sup>*i*</sup>, *λ*(*k*)<sup>*i*</sup>, *β*(*k*)<sup>*i*</sup>, *v*(*k*)<sup>*i*</sup>, and *z*(*k*)<sup>*i*</sup>, solve Problem (Q) using the conjugate gradient algorithm. This is called the system optimization step.

Step 4: Test the convergence and update the optimal solution of Problem (P). In order to provide a mechanism for regulating convergence, a simple relaxation method is employed:

$$\boldsymbol{v}(k)^{i+1} = \boldsymbol{v}(k)^{i} + k_{v}\left(\boldsymbol{u}(k)^{i} - \boldsymbol{v}(k)^{i}\right) \tag{27}$$

$$\boldsymbol{z}(k)^{i+1} = \boldsymbol{z}(k)^{i} + k_{z}\left(\boldsymbol{x}(k)^{i} - \boldsymbol{z}(k)^{i}\right) \tag{28}$$

$$\hat{\boldsymbol{p}}(k)^{i+1} = \hat{\boldsymbol{p}}(k)^{i} + k_{p}\left(\boldsymbol{p}(k)^{i} - \hat{\boldsymbol{p}}(k)^{i}\right) \tag{29}$$

where *k*<sub>*v*</sub>, *k*<sub>*z*</sub>, *k*<sub>*p*</sub> ∈ (0, 1] are scalar gains. If *v*(*k*)<sup>*i*+1</sup> = *v*(*k*)<sup>*i*</sup>, *k* = 0, 1, ⋯, *N* − 1, and *z*(*k*)<sup>*i*+1</sup> = *z*(*k*)<sup>*i*</sup>, *k* = 0, 1, ⋯, *N*, within a given tolerance, stop; else set *i* = *i* + 1, and repeat the procedure starting from Step 1.

#### **Example 1**

Consider the damped oscillator described by the continuous-time state equation

$$\dot{\boldsymbol{x}} = \begin{pmatrix} 0 & 1 \\ -\omega^{2} & -2\delta\omega \end{pmatrix} \boldsymbol{x} + \begin{pmatrix} 0 \\ 1 \end{pmatrix} u \tag{30}$$

with the natural frequency *ω* = 0.8, the damping ratio *δ* = 0.1, and the initial state *x*<sub>0</sub> = (10 10)<sup>T</sup>. Define the state *x* = (*x*<sub>1</sub> *x*<sub>2</sub>)<sup>T</sup>, where *x*<sub>1</sub> is the displacement and *x*<sub>2</sub> is the velocity. For the purpose of controlling this oscillator, the following objective function

$$J(u) = \frac{1}{2}\int_{0}^{9.4} \left( (x_{1}(t))^{2} + (x_{2}(t))^{2} + (u(t))^{2} \right) \mathrm{d}t \tag{31}$$

is minimized. This problem is a continuous-time linear optimal control problem, and the equivalent discrete-time optimal control problem, which is regarded as Problem (P), is given by:

$$\min_{u} \; J(u) = \sum_{k=0}^{10} \frac{1}{2}\,\Delta t \left( x_{1}(k)^{2} + x_{2}(k)^{2} + u(k)^{2} \right)$$

subject to

$$\boldsymbol{x}(k+1) = \begin{pmatrix} 1.00 & 0.94 \\ -0.60 & 0.85 \end{pmatrix} \boldsymbol{x}(k) + \begin{pmatrix} 0.00 \\ 0.94 \end{pmatrix} u(k) \tag{32}$$

with the initial state *x*<sub>0</sub> = (10 10)<sup>T</sup>, where the sampling time Δ*t* = 0.94 s is taken for the discretization transform.

Consider the model-based optimal control problem, which is regarded as Problem (M), given by:

$$\min_{u} \; J(u) = \sum_{k=0}^{10} \left( \frac{1}{2}\left( x_{1}(k)^{2} + x_{2}(k)^{2} + u(k)^{2} \right) + \gamma(k) \right) \Delta t$$

subject to

$$\boldsymbol{x}(k+1) = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \boldsymbol{x}(k) + \begin{pmatrix} 1 \\ 0 \end{pmatrix} \left( u(k) + \alpha(k) \right) \tag{33}$$

with the initial state *x*<sub>0</sub> = (10 10)<sup>T</sup>, where the adjusted parameters *γ*(*k*), *k* = 0, 1, ⋯, *N*, and *α*(*k*), *k* = 0, 1, ⋯, *N* − 1, are supplied to the model used.

By using the algorithm proposed, the simulation result is shown in **Table 1**. Notice that the minimum cost for Problem (M) is 546.05 units without adding the adjusted parameters. Once the adjusted parameters are taken into consideration, the iterative solution approximates the true optimal solution of the original optimal control problem, in spite of model-reality differences. It is highlighted that there is a 99% cost reduction, which gives the final cost of 128.50 units.

| Number of iterations | Initial cost | Final cost | Elapsed time (s) |
| --- | --- | --- | --- |
| 20 | 17053.11 | 128.50 | 1.38021 |

**Table 1.**
*Simulation result, Example 1.*

**Figures 1** and **2** show the trajectories of control and state, respectively. With this control effort, the state reaches the steady state after 4 units of time, which shows that the oscillator has stopped moving. **Figure 3** shows the changes of the

*Conjugate Gradient Approach for Discrete Time Optimal Control Problems with Model-Reality…*
*DOI: http://dx.doi.org/10.5772/intechopen.89711*
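The discrete plant (32) in Example 1 is consistent with a simple forward-Euler discretization of the oscillator dynamics (30) with Δ*t* = 0.94 s. The following short check, with variable names chosen for illustration, reproduces the matrix entries to two decimal places under that assumption:

```python
import numpy as np

# Continuous-time oscillator (30): x_dot = Ac x + Bc u,
# with natural frequency w = 0.8 and damping ratio d = 0.1.
w, d, dt = 0.8, 0.1, 0.94
Ac = np.array([[0.0, 1.0],
               [-w**2, -2.0 * d * w]])
Bc = np.array([[0.0],
               [1.0]])

# Forward-Euler discretization over the sampling time dt:
Ad = np.eye(2) + dt * Ac   # -> [[1.00, 0.94], [-0.60, 0.85]] to 2 d.p.
Bd = dt * Bc               # -> [[0.00], [0.94]]
print(np.round(Ad, 2))
print(np.round(Bd, 2))
```

The rounded entries match the coefficient matrices of the discrete-time Problem (P), which is why Δ*t* = 0.94 is described as the discretization transform for the 9.4 s horizon.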

#### **Remark 2:**

