**3. Necessary optimality conditions**

Consider the following conservation relations [8]:

$$\dot{\mathbf{x}}\_t = f\left(\mathbf{x}\_t, u\_t\right), \quad \mathbf{x}\_{t=0} = \mathbf{x}(0) \tag{20}$$

where *x<sub>t</sub>* is the composition and concentration of the pollutant at time *t*, *u<sub>t</sub>* denotes the controls that enter on the boundary of the problem at time *t*, *f* is a set of nonlinear functions representing the conservation relations, and *x<sub>t=0</sub>* denotes the initial condition of *x*. Every change in the control function changes the solution of Eq. (20). Thus, for a given objective functional to be maximized, a piecewise continuous control policy *u<sub>t</sub>* and the corresponding state variable *x<sub>t</sub>* have to be obtained. The principal technique is to determine the necessary conditions that define an optimal control policy *u*(*t*) that would cause the system to follow a path *x*(*t*), such that the performance functional

$$J\left(u\right) = \int\_{0}^{T} F\left(\mathbf{x}, u, t\right) dt\tag{21}$$

would be optimized.
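To make Eqs. (20) and (21) concrete, the following sketch integrates a hypothetical scalar model with forward Euler and accumulates the performance functional for a candidate piecewise continuous control. The dynamics f(x, u) = -(a + b·u)·x (cleanup effort u accelerating removal of the pollutant concentration x), the quadratic integrand F = x² + u², and all parameter values are illustrative assumptions, not taken from the chapter.

```python
def simulate_cost(u_policy, x0=1.0, a=0.1, b=0.5, T=10.0, n=1000):
    """Integrate Eq. (20) by forward Euler and accumulate Eq. (21).

    Hypothetical model: x_dot = f(x, u) = -(a + b*u) * x, so the control u
    speeds up removal of the pollutant; running cost F(x, u) = x**2 + u**2.
    """
    dt = T / n
    x, J = x0, 0.0
    for k in range(n):
        u = u_policy(k * dt)          # piecewise continuous control policy
        J += (x ** 2 + u ** 2) * dt   # accumulate the performance functional
        x += -(a + b * u) * x * dt    # Euler step of the conservation relation
    return x, J

# Comparing two candidate policies: no effort versus constant effort.
x_off, J_off = simulate_cost(lambda t: 0.0)
x_on, J_on = simulate_cost(lambda t: 0.3)
```

For these particular parameters the constant-effort policy yields both a lower terminal concentration and a lower total cost, illustrating how different admissible controls change the solution of Eq. (20) and hence the value of J.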


Consider also the Lagrangian

$$L = F\left(\mathbf{x}, u, t\right) + \lambda'\left(f\left(\cdot\right) - \dot{\mathbf{x}}\right) \tag{22}$$

where *λ* denotes the dynamic Lagrange multipliers or costate variables, with *λ*′ denoting its transpose. To simplify further, an augmented functional with the same optimum as (21) can be derived as

$$J = \int\_0^T L\left(\mathbf{x}, \dot{\mathbf{x}}, u, \lambda, t\right) dt \tag{23}$$

and by introducing the variations *δ*(*x*), *δ*(*x*˙), *δ*(*u*), *δ*(*λ*), *δ*(*T* ), the first variation of the functional would be

$$\begin{split} \delta J &= \int\_{0}^{T} \left[ \frac{\partial L}{\partial \mathbf{x}} - \frac{d}{dt} \frac{\partial L}{\partial \dot{\mathbf{x}}} \right]' \delta \left( \mathbf{x} \right) dt + \left[ \frac{\partial L}{\partial \dot{\mathbf{x}}} (T) \right]' \delta \left( \mathbf{x}\_{T} \right) \\ &+ \left[ L \left( T \right) - \left( \frac{\partial L}{\partial \dot{\mathbf{x}}} (T) \right)' \dot{\mathbf{x}} (T) \right] \delta \left( T \right) \\ &+ \int\_{0}^{T} \left( \frac{\partial L}{\partial \lambda} \right)' \delta \left( \lambda \right) dt + \int\_{0}^{T} \left( \frac{\partial L}{\partial u} \right)' \delta \left( u \right) dt \end{split} \tag{24}$$

Notice that, by the fundamental theorem of variational calculus, for *x*(*t*) to be an optimum of the functional *J*, it is necessary that *δJ* = 0. Because the controls and states are unbounded, the variations *δ*(*x*), *δ*(*λ*), and *δ*(*u*) are free and unconstrained. Thus, the following are the necessary conditions for optimality:

#### (i) *Existence and uniqueness: Euler-Lagrange equations*

Because the variation *δ*(*x*) was not bounded (i.e., it was free), we have

Sequential Optimization Model for Marine Oil Spill Control http://dx.doi.org/10.5772/63050 139

$$
\frac{\partial L}{\partial \mathbf{x}} - \frac{d}{dt} \frac{\partial L}{\partial \dot{\mathbf{x}}} = 0 \tag{25}
$$

Using Eq. (22), we obtain


138 Robust Control - Theoretical Models and Case Studies


$$\frac{\partial L}{\partial \dot{\mathbf{x}}} = -\lambda \tag{26}$$

Combining Eqs. (25) and (26), the Euler-Lagrange equations can be rewritten as

$$\dot{\lambda} = -\frac{\partial L}{\partial \mathbf{x}} \tag{27}$$

and by the definition of the Lagrangian in Eq. (22), Eq. (27) becomes

$$\dot{\lambda} = -\left(\frac{\partial f}{\partial \mathbf{x}}\right)' \lambda - \frac{\partial F}{\partial \mathbf{x}} \tag{28}$$

Eq. (28) shows that the Euler-Lagrange equations are the equations that specify the dynamic Lagrange multipliers.
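As a numerical companion to the costate equations, the sketch below integrates the dynamic Lagrange multipliers backward in time for a hypothetical scalar model with f(x, u) = -(a + b·u)·x and F = x² + u² (for which Eq. (28) reads λ̇ = (a + b·u)·λ − 2x), using the terminal condition λ(T) = 0 appropriate for a free terminal state. The model and all parameter values are assumptions for illustration only.

```python
import math

def costate_sweep(x_traj, u_traj, dt, a=0.1, b=0.5):
    """Backward-Euler integration of the costate (Euler-Lagrange) equation.

    For the hypothetical model f(x, u) = -(a + b*u)*x, F = x**2 + u**2,
    differentiating L = F + lam'*(f - x_dot) gives the costate equation
    lam_dot = (a + b*u)*lam - 2*x, with terminal condition lam(T) = 0.
    """
    n = len(x_traj)
    lam = [0.0] * n                  # lam[-1] = 0: free terminal state
    for k in range(n - 1, 0, -1):    # sweep backward in time
        lam_dot = (a + b * u_traj[k]) * lam[k] - 2.0 * x_traj[k]
        lam[k - 1] = lam[k] - lam_dot * dt
    return lam

# Costates along the uncontrolled trajectory x(t) = exp(-a*t) on [0, 10].
dt, n = 0.01, 1001
xs = [math.exp(-0.1 * k * dt) for k in range(n)]
lams = costate_sweep(xs, [0.0] * n, dt)
```

Because the costates carry the terminal condition rather than an initial one, they must be swept backward from *T*, which is what makes the state and costate systems a two-point boundary value problem.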

#### (ii) *Constraint relations*

Because the variation *δ*(*λ*) is free, we have

$$\frac{\partial L}{\partial \lambda} = 0 \tag{29}$$

which is equivalent to (20). This implies that, along the optimal trajectory, the state differential equations must hold.

#### (iii) *Optimal control*

Also, because the variation *δ*(*u*) is free, it follows that the optimal control policy must be consistent with

$$\frac{\partial L}{\partial u} = 0 \tag{30}$$

that is,

$$\frac{\partial F}{\partial u} + \left(\frac{\partial f}{\partial u}\right)' \lambda = 0 \,, \text{ and} \tag{31}$$

#### (iv) *Transversality boundary conditions*

$$-\lambda'(T)\,\delta\left(\mathbf{x}\_T\right) + \left[F\left(T\right) + \lambda'(T)\, f\left(T\right)\right]\delta\left(T\right) = 0 \tag{32}$$

The necessary conditions (i) to (iv) can be simplified further by introducing the Hamiltonian

$$H = F\left(\mathbf{x}, u, t\right) + \lambda' f\left(\mathbf{x}, u, t\right) \tag{33}$$

such that

i. Euler's equation:

$$\dot{\lambda} = -\frac{\partial H}{\partial \mathbf{x}} \tag{34}$$

ii. Constraint relations:

$$\dot{\mathbf{x}} = f\left(\cdot\right) = \frac{\partial H}{\partial \lambda} \tag{35}$$

iii. Optimal control:

$$\frac{\partial H}{\partial u} = 0, \text{and} \tag{36}$$

iv. Boundary conditions:

$$-\lambda'(T)\,\delta\left(\mathbf{x}\_T\right) + H(T)\,\delta\left(T\right) = 0 \tag{37}$$
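As a worked scalar illustration of conditions (i) to (iii) (the model below, with *F* = *x*² + *u*² and *f* = −(*a* + *bu*)*x*, is a hypothetical example rather than the chapter's spill model), the Hamiltonian and the resulting conditions read

$$H = x^2 + u^2 - \lambda \left(a + b u\right) x$$

$$\dot{\lambda} = -\frac{\partial H}{\partial x} = \left(a + b u\right) \lambda - 2 x, \qquad \dot{x} = \frac{\partial H}{\partial \lambda} = -\left(a + b u\right) x$$

$$\frac{\partial H}{\partial u} = 2 u - b \lambda x = 0 \quad \Rightarrow \quad u = \tfrac{1}{2}\, b \lambda x$$

so the stationarity condition determines the control pointwise from the state and costate, and substituting it back reduces conditions (i) and (ii) to a two-point boundary value problem in *x* and *λ*.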

Furthermore, assuming that the necessary conditions for optimality hold and are sufficient for a unique optimal control, a sequential decision process for an optimal response strategy can be developed.
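One such sequential procedure is the forward-backward sweep: integrate the state forward under the current control (condition ii), integrate the costate backward from the transversality condition (condition i), update the control from the stationarity condition (condition iii), and repeat until the iterates settle. The sketch below applies this to a hypothetical scalar linear-quadratic model; the dynamics f(x, u) = -a·x - b·u, the cost F = x² + u², and all parameters and the relaxation scheme are illustrative assumptions, not the chapter's method or data.

```python
def forward_backward_sweep(x0=1.0, a=1.0, b=0.5, T=5.0, n=1000,
                           iters=50, relax=0.5):
    """Iterate conditions (i)-(iii) for a hypothetical scalar LQ model:

    state:        x_dot = f(x, u) = -a*x - b*u            (condition ii)
    costate:      lam_dot = -dH/dx = a*lam - 2*x          (condition i)
    stationarity: dH/du = 2*u - b*lam = 0 -> u = b*lam/2  (condition iii)

    with F = x**2 + u**2 and lam(T) = 0 for a free terminal state.
    """
    dt = T / n
    u = [0.0] * (n + 1)                      # initial control guess
    x = [0.0] * (n + 1)
    lam = [0.0] * (n + 1)
    for _ in range(iters):
        x[0] = x0                            # forward sweep of the state
        for k in range(n):
            x[k + 1] = x[k] + (-a * x[k] - b * u[k]) * dt
        lam[n] = 0.0                         # transversality condition
        for k in range(n, 0, -1):            # backward sweep of the costate
            lam[k - 1] = lam[k] - (a * lam[k] - 2.0 * x[k]) * dt
        # relaxed control update from the stationarity condition
        u = [relax * (b * l / 2.0) + (1.0 - relax) * v
             for l, v in zip(lam, u)]
    J = sum((x[k] ** 2 + u[k] ** 2) * dt for k in range(n))
    return x, u, lam, J

x_traj, u_traj, lam_traj, J = forward_backward_sweep()
```

The relaxed update damps the oscillation that a naive substitution of the stationarity condition can produce; for this well-conditioned scalar example the sweep settles to a control that costs less than applying no control at all.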
