Wind Solar Hybrid Renewable Energy System

#### 2.2 Modeling of the asynchronous generator

For the induction generator, the Park model is the model that is commonly used [16]. After applying the synchronously rotating reference frame transformation to the stator and rotor flux equations of the generator, the following differential equations describe the dynamics of the stator and rotor fluxes [17]:

$$\begin{cases} \dot{\Phi}_{ds} = w_b\left(v_{ds} + w_s\,\Phi_{qs} - R_s\,i_{ds}\right) \\ \dot{\Phi}_{qs} = w_b\left(v_{qs} - w_s\,\Phi_{ds} - R_s\,i_{qs}\right) \end{cases} \tag{5}$$

$$\begin{cases} \dot{\Phi}_{dr} = w_b\left(v_{dr} + (w_s - w_r)\,\Phi_{qr} - R_r\,i_{dr}\right) \\ \dot{\Phi}_{qr} = w_b\left(v_{qr} - (w_s - w_r)\,\Phi_{dr} - R_r\,i_{qr}\right) \end{cases} \tag{6}$$

where $w_s = 1$ is the synchronous angular speed in the synchronous frame and $w_b = 2\pi f$ rad/s is the base angular speed, with $f = 60$ Hz. With the additional variables $\Phi_{dm}$ and $\Phi_{qm}$ (the stator-rotor mutual fluxes), the rotor currents $i_{dr}$ and $i_{qr}$ and the stator currents $i_{ds}$ and $i_{qs}$ can be expressed as:

$$\begin{cases} i_{dr} = \dfrac{\Phi_{dr} - \Phi_{dm}}{L_{lr}}, \quad i_{qr} = \dfrac{\Phi_{qr} - \Phi_{qm}}{L_{lr}} \\[2mm] i_{ds} = \dfrac{\Phi_{ds} - \Phi_{dm}}{L_{ls}}, \quad i_{qs} = \dfrac{\Phi_{qs} - \Phi_{qm}}{L_{ls}} \end{cases} \tag{7}$$

where the constants $L_{ad}$ and $L_{aq}$ are the (d-q) mutual flux factors, expressed as:

$$L_{ad} = L_{aq} = \frac{1}{\dfrac{1}{L_m} + \dfrac{1}{L_{ls}} + \dfrac{1}{L_{lr}}} \tag{8}$$

and the stator-rotor mutual fluxes are:

$$\Phi_{dm} = L_{ad}\left(\frac{\Phi_{dr}}{L_{lr}} + \frac{\Phi_{ds}}{L_{ls}}\right), \qquad \Phi_{qm} = L_{aq}\left(\frac{\Phi_{qr}}{L_{lr}} + \frac{\Phi_{qs}}{L_{ls}}\right) \tag{9}$$

The relationship between the mechanical torque $T_m$, the electrical torque $T_e$ and the rotor speed $w_r$ is given by the following differential equation:

$$\dot{w}_r = \frac{1}{2H}\left(T_m - T_e - F\,w_r\right) \tag{10}$$

where the constant $F$ is the friction factor, $H$ is the generator inertia, and $T_e$ is the electrical torque, which can be expressed as:

$$T_e = \Phi_{ds}\,i_{qs} - \Phi_{qs}\,i_{ds} \tag{11}$$

These equations are derived in [4] and all parameters are defined in per unit based on the generator ratings and synchronous speed.
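The torque balance of Eqs. (10) and (11) can be sketched numerically. The following minimal Python sketch integrates Eq. (10) with explicit Euler; the values of $H$, $F$ and the torques below are illustrative assumptions, not the chapter's generator ratings.

```python
import numpy as np

# Illustrative per-unit values; actual values come from the generator ratings.
H, F = 3.5, 0.01      # inertia constant and friction factor (assumed)

def electrical_torque(phi_ds, phi_qs, i_ds, i_qs):
    """Eq. (11): Te = phi_ds * i_qs - phi_qs * i_ds (per unit)."""
    return phi_ds * i_qs - phi_qs * i_ds

def rotor_speed_step(wr, Tm, Te, dt):
    """One explicit-Euler step of Eq. (10): dwr/dt = (Tm - Te - F*wr) / (2H)."""
    return wr + dt * (Tm - Te - F * wr) / (2.0 * H)

# An accelerating torque imbalance (Tm > Te + F*wr) speeds the rotor up.
wr = 1.0
for _ in range(1000):                 # 1 s of simulation at dt = 1 ms
    wr = rotor_speed_step(wr, Tm=0.9, Te=0.8, dt=1e-3)
```

With a constant imbalance the speed rises toward the equilibrium $w_r^* = (T_m - T_e)/F$ with time constant $2H/F$.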

#### 3.1 High gain observer

This observer class applies to nonlinear systems of the form Eq. (12), and its range of applications is wide [18, 19]. We briefly present the survey developed in [20], which points up the synthesis of observers adapted to observable nonlinear systems. Consider the following nonlinear system:

$$\begin{cases} \dot{\mathbf{x}} = f(\mathbf{x}) + \mathbf{g}(\mathbf{x})u \\ \mathbf{y} = h(\mathbf{x}) \end{cases} \tag{12}$$

where $x \in \mathbb{R}^n$, $u \in \mathbb{R}^m$, $y \in \mathbb{R}^p$.

First, the system Eq. (12) must be uniformly locally observable; it is then possible to make the change of variables $z = \Gamma(x)$ that transforms the system Eq. (12) into the following form:

$$\begin{cases} \dot{z} = Az + \varphi(u, z) \\ y = Cz \end{cases} \tag{13}$$

The observer must satisfy the following theorem [20]:

i. The function $\varphi$ is globally Lipschitz, uniformly in $u$.

ii. Let $K = \begin{bmatrix} K_1 & & \\ & \ddots & \\ & & K_p \end{bmatrix}$ be a block-diagonal matrix of adequate size such that, for every block $K_k$, the matrix $A_k - K_k C_k$ has all its eigenvalues with negative real part.

Suppose that there exist two integer sets $\{\sigma_1, \cdots, \sigma_n\} \in \mathbb{Z}$ and $\{\delta_1 > 0, \cdots, \delta_p > 0\} \in \mathbb{N}^*$ such that:

iii. $\sigma_{\mu_k + v} = \sigma_{\mu_k + v - 1} + \delta_k$, $\; k = 1, \cdots, p$, $\; v = 1, \cdots, \eta_k - 1$

iv. $\dfrac{\partial \varphi_i}{\partial z_j} \neq 0 \Rightarrow \sigma_i \geq \sigma_j$, $\; i, j = 1, \cdots, n$, $\; j \neq \mu_k$, $\; k = 1, \cdots, p$

Then,

$$\dot{\hat{z}} = A\hat{z} + \varphi(\hat{z}, u) - S_\theta^{-1} K \left(C\hat{z} - y\right) \tag{14}$$

is an exponential observer for the system Eq. (13); moreover, there exists $T_1$ such that this holds for all $T$ with $0 < T < T_1$. Here,

$$S(\theta, \delta) = \begin{bmatrix} \theta^{\delta_1}\Delta\big(\theta^{\delta_1}\big) & & \\ & \ddots & \\ & & \theta^{\delta_p}\Delta\big(\theta^{\delta_p}\big) \end{bmatrix}, \qquad \Delta(\theta) = \begin{bmatrix} 1 & & & \\ & \theta & & \\ & & \ddots & \\ & & & \theta^{\eta_k - 1} \end{bmatrix}$$

By applying the inverse change of variables to return to the initial nonlinear system, the observer for the system Eq. (12) is given by:

$$\dot{\hat{x}} = f(\hat{x}) + g(\hat{x})u - \left(\frac{\partial \Gamma}{\partial \hat{x}}(\hat{x}(t))\right)^{-1} S_\theta^{-1} \left(h(\hat{x}) - y\right) \tag{15}$$


Advanced Monitoring of Wind Turbine DOI: http://dx.doi.org/10.5772/intechopen.84840


$\hat{x}$: estimated value of $x$. $\Gamma$: a map $\mathbb{R}^n \to \mathbb{R}^n$, with

$$\Gamma = \begin{bmatrix} h\_1, L\_f h\_1, L\_f^2 h\_1, \dots, L\_f^{\delta\_1} h\_1, h\_2, L\_f h\_2, L\_f^2 h\_2, \dots, L\_f^{\delta\_2} h\_2, \dots, h\_p, L\_f h\_p, L\_f^2 h\_p, \dots, L\_f^{\delta\_p} h\_p \end{bmatrix}^T$$

where $L_f^{\delta_k}$ is the $\delta_k$-th order Lie derivative along $f$.

$p$: number of outputs.
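To illustrate how $\Gamma$ is built from Lie derivatives, the following symbolic sketch (for a hypothetical two-state, single-output system of our choosing, not one from the chapter) computes $L_f h$ and stacks $\Gamma = [h, L_f h]^T$:

```python
import sympy as sp

# Hypothetical two-state, single-output system: x' = f(x), y = h(x) = x1.
x1, x2 = sp.symbols('x1 x2')
states = [x1, x2]
f = sp.Matrix([x2, -sp.sin(x1)])
h = x1

def lie_derivative(expr, f_vec, states):
    """L_f expr = (d expr / dx) . f, the directional derivative along f."""
    return (sp.Matrix([expr]).jacobian(states) * f_vec)[0]

Lfh = lie_derivative(h, f, states)   # first-order Lie derivative of h along f
Gamma = sp.Matrix([h, Lfh])          # z = Gamma(x) = [h, L_f h]^T
```

For this system $L_f h = x_2$, so $\Gamma$ is simply the identity map; in general, repeated Lie derivatives up to the orders $\delta_k$ populate the coordinate change.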

And Sθ satisfies the following Lyapunov relation:

$$\dot{\mathbf{S}} = -\theta \mathbf{S}\_{\theta} - \mathbf{A}^{T} \mathbf{S}\_{\theta} - \mathbf{S}\_{\theta} \mathbf{A} + \mathbf{C}^{T} \mathbf{C} = \mathbf{0} \tag{16}$$

The proof is given in [20].
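A minimal numerical sketch of the observer Eq. (14), for a hypothetical two-state system already in observability form with a globally Lipschitz nonlinearity (condition i). The gain vector $[2\theta, \theta^2]$ is the classical single-output high-gain choice and stands in here for $S_\theta^{-1}K$; both it and $\theta$ are illustrative, not values from the chapter.

```python
import numpy as np

# Hypothetical system in the form of Eq. (13): z1' = z2, z2' = phi(z), y = z1,
# with phi globally Lipschitz.
def phi(z):
    return -np.sin(z[0])

theta = 10.0                             # high-gain tuning parameter
gains = np.array([2 * theta, theta**2])  # places both error poles at -theta

dt, steps = 1e-3, 5000
z = np.array([1.0, 0.0])            # true state
zhat = np.array([0.0, 0.0])         # observer state, wrong initialization

for _ in range(steps):
    y = z[0]                        # measured output
    innov = y - zhat[0]             # -(C zhat - y), the correction term
    z = z + dt * np.array([z[1], phi(z)])
    zhat = zhat + dt * (np.array([zhat[1], phi(zhat)]) + gains * innov)
# after the transient, the estimation error has decayed exponentially
```

Increasing $\theta$ speeds up convergence at the price of a larger transient (peaking) and higher noise sensitivity.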

#### 3.2 The unscented Kalman filter

The unscented Kalman filter (UKF) was essentially designed for state estimation problems and has been applied in some nonlinear control applications [11]. It compensates for the approximation issues of the extended Kalman filter (EKF). The state distribution is represented by a Gaussian random variable, specified using a carefully chosen set of sample points [12]. The unscented transformation (UT) is a method for calculating the statistics of a random variable that undergoes a nonlinear transformation [11]. In stochastic estimation problems, a common assumption is that the process and measurement noise terms are additive, as in:

$$\begin{aligned} x_k &= f(x_{k-1}, u_{k-1}) + w_{k-1} \\ y_k &= h(x_k, u_k) + v_k \end{aligned} \tag{17}$$

The dimension of the sigma-points is the same as that of the state vector, that is, $L = n_x$. The UKF is executed recursively, starting from the assumed initial conditions $\hat{x}_0$ and $P_0$. First, a set of sigma-points is generated from the prior state estimate $\hat{x}_{k-1}$ and covariance $P_{k-1}$ at each discrete-time step, as in:

$$\boldsymbol{\chi}\_{k-1} = \begin{bmatrix} \hat{\boldsymbol{x}}\_{k-1} \ \hat{\boldsymbol{x}}\_{k-1} + \sqrt{\boldsymbol{L} + \boldsymbol{\lambda}} \sqrt{\boldsymbol{P}\_{k-1}} \ \hat{\boldsymbol{x}}\_{k-1} - \sqrt{\boldsymbol{L} + \boldsymbol{\lambda}} \sqrt{\boldsymbol{P}\_{k-1}} \end{bmatrix} \tag{18}$$

Next, each sigma-point is passed through the nonlinear state prediction function $f$:

$$\chi\_{k,k-1}^{(i)} = f\left(\chi\_{k-1}^{(i)}, u\_{k-1}\right), \quad i = 0, 1, 2, \dots, 2L \tag{19}$$

The notation $\chi_{k,k-1}$ indicates the predicted value of the sigma-point based on information from the prior time step. Once the sigma-points have been transformed, the post-transformation mean and covariance are computed using weighted averages of the transformed sigma-points [21]:


$$
\hat{\boldsymbol{x}}\_{k,k-1} = \sum\_{i=0}^{2L} \eta\_i^m \boldsymbol{\chi}\_{k,k-1}^{(i)} \tag{20}
$$

$$P_{k,k-1} = Q_{k-1} + \sum_{i=0}^{2L} \eta_i^{c} \left( \chi_{k,k-1}^{(i)} - \hat{x}_{k,k-1} \right) \left( \chi_{k,k-1}^{(i)} - \hat{x}_{k,k-1} \right)^T \tag{21}$$

where $\eta_0^m = \lambda/(L+\lambda)$ and $\eta_0^c = \lambda/(L+\lambda) + 1 - \alpha^2 + \beta$. The measurement noise is likewise omitted from the observation function, as it was for the prediction:

$$
\psi\_{k,k-1}^{(i)} = h\left(\chi\_{k,k-1}^{(i)}, u\_k\right) \tag{22}
$$

where $\psi_{k,k-1}$ is the matrix of output sigma-points. The output sigma-points are used to calculate the output covariance matrix, the predicted output, and the cross-covariance:

$$\begin{aligned} \hat{y}_{k,k-1} &= \sum_{i=0}^{2L} \eta_i^{m} \psi_{k,k-1}^{(i)} \\ P_k^{yy} &= R_k + \sum_{i=0}^{2L} \eta_i^{c} \left( \psi_{k,k-1}^{(i)} - \hat{y}_{k,k-1} \right) \left( \psi_{k,k-1}^{(i)} - \hat{y}_{k,k-1} \right)^T \\ P_k^{xy} &= \sum_{i=0}^{2L} \eta_i^{c} \left( \chi_{k,k-1}^{(i)} - \hat{x}_{k,k-1} \right) \left( \psi_{k,k-1}^{(i)} - \hat{y}_{k,k-1} \right)^T \end{aligned} \tag{23}$$

Due to the additive noise assumption, $R$ is added to the output covariance matrix. The covariance matrices are then used to calculate the Kalman gain matrix $K_k$:

$$K_k = P_k^{xy} \left( P_k^{yy} \right)^{-1} \tag{24}$$

This Kalman gain matrix is then used to update the covariance estimate and the state, as in:

$$\begin{aligned} \hat{x}_k &= \hat{x}_{k,k-1} + K_k \left( y_k - \hat{y}_{k,k-1} \right) \\ P_k &= P_{k,k-1} - K_k P_k^{yy} K_k^T \end{aligned} \tag{25}$$

where $y_k$ is the measurement vector, $\hat{x}_k$ is the a posteriori state estimate, and $P_k$ is its covariance estimate.
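The prediction-correction cycle of Eqs. (18)–(25) can be sketched as a minimal numpy implementation under the additive-noise assumption of Eq. (17). Here $\alpha = 1$ is chosen so all sigma-point weights stay benign, and the two-state test system at the end is ours, not the chapter's generator model.

```python
import numpy as np

def ukf_step(xhat, P, u, y, f, h, Q, R, alpha=1.0, beta=2.0, kappa=0.0):
    """One UKF prediction-correction cycle, Eqs. (18)-(25), additive noise."""
    L = xhat.size
    lam = alpha**2 * (L + kappa) - L
    # weights: eta_0^m = lam/(L+lam), eta_0^c = eta_0^m + 1 - alpha^2 + beta,
    # and eta_i^m = eta_i^c = 1/(2(L+lam)) for i > 0
    wm = np.full(2 * L + 1, 1.0 / (2.0 * (L + lam)))
    wc = wm.copy()
    wm[0] = lam / (L + lam)
    wc[0] = lam / (L + lam) + 1.0 - alpha**2 + beta

    # sigma-points around the prior estimate, Eq. (18)
    S = np.linalg.cholesky((L + lam) * P)
    chi = np.column_stack([xhat, xhat[:, None] + S, xhat[:, None] - S])

    # propagate through f; predicted mean and covariance, Eqs. (19)-(21)
    chi_p = np.array([f(chi[:, i], u) for i in range(2 * L + 1)]).T
    x_p = chi_p @ wm
    P_p = Q + sum(wc[i] * np.outer(chi_p[:, i] - x_p, chi_p[:, i] - x_p)
                  for i in range(2 * L + 1))

    # output sigma-points, predicted output and covariances, Eqs. (22)-(23)
    psi = np.array([h(chi_p[:, i], u) for i in range(2 * L + 1)]).T
    y_p = psi @ wm
    Pyy = R + sum(wc[i] * np.outer(psi[:, i] - y_p, psi[:, i] - y_p)
                  for i in range(2 * L + 1))
    Pxy = sum(wc[i] * np.outer(chi_p[:, i] - x_p, psi[:, i] - y_p)
              for i in range(2 * L + 1))

    # gain and a posteriori update, Eqs. (24)-(25)
    K = Pxy @ np.linalg.inv(Pyy)
    return x_p + K @ (y - y_p), P_p - K @ Pyy @ K.T

# usage sketch on a toy two-state system with position-only measurements
f = lambda x, u: np.array([x[0] + 0.1 * x[1], 0.95 * x[1]])
h = lambda x, u: np.array([x[0]])
Q, R = 0.01 * np.eye(2), np.array([[0.01]])
x_true, xhat, P = np.array([1.0, 0.5]), np.zeros(2), np.eye(2)
for _ in range(50):
    x_true = f(x_true, None)
    xhat, P = ukf_step(xhat, P, None, h(x_true, None), f, h, Q, R)
```

For a linear model this reduces exactly to the Kalman filter, so the estimate converges to the true state when the measurements are noise-free.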

#### 3.3 The moving horizon estimation

The moving horizon estimation is a powerful means of estimating the states, with in particular the possibility of constraining the outputs, states, and noises. It can be described as a least-squares optimization that yields a state estimate while working with a limited amount of information. Its particularity is that it avoids the recursive structure characteristic of the extended Kalman filter. Several researchers [22–28] have studied it under different approaches that nevertheless present many similarities. The moving and full state estimations follow almost the same steps, but in the moving state estimation the set of handled variables is bounded, contrary to the full state estimation. In the full state estimation, at current time k, all variables from initial time n = 0 to n = k are used in the calculation. With a horizon H, the moving state estimation uses in the calculation only the concerned variables (measured outputs, manipulated inputs, and estimated states) from n = k + 1 − H to n = k, collected in moving vectors.

First of all, consider the full state estimation problem. Assume that the process can be represented by the following continuous-time model [29–31]:

$$\dot{\boldsymbol{x}}(t) = \boldsymbol{f}(\boldsymbol{x}(t), \boldsymbol{u}(t)) + \boldsymbol{G}\boldsymbol{w}(t) \tag{26}$$


where $w$ is the process (control) noise, assumed to be Gaussian with zero mean. We can describe the measured outputs $y$ by the discrete-time model:

$$y_k = h(x_k) + v_k \tag{27}$$

where vk is the observation noise.

The equivalent linear discrete model is given by:

$$\mathbf{x}\_{k+1} = A\mathbf{x}\_k + Bu\_k + Gw\_k \tag{28}$$

where $A$ and $B$ are the Jacobian matrices of $f$ with respect to $x_k$ and $u_k$, respectively. The measurement model is linearized as:

$$y_{k+1} = C x_{k+1} + v_{k+1} \tag{29}$$

where $C$ is the Jacobian matrix of $h$ with respect to $x_k$. In the full state estimation problem, we have to minimize the following criterion with respect to the sequence of noises $\{w_0, \ldots, w_{k-1}\}$ and to the initial state $x_0$; the states $\hat{x}_i$ are then obtained by using Eq. (28).

$$J\_k = (\mathbf{x}\_0 - \hat{\mathbf{x}}\_0)^T \Pi\_0^{-1} (\mathbf{x}\_0 - \hat{\mathbf{x}}\_0) + \sum\_{i=0}^{k-1} \left( v\_{i+1}^T \mathbf{R}^{-1} v\_{i+1} + w\_i^T \mathbf{Q}^{-1} w\_i \right) \tag{30}$$

The weighting matrices $\Pi_0^{-1}$, $Q^{-1}$, and $R^{-1}$ symbolize, respectively, the confidence in the initial estimate, in the dynamic model, and in the measurements. The main disadvantage of full state estimation is that the size of the optimization problem grows as time increases, which would likely cause the optimization to fail. The favorable solution to this increasing size is to set the problem according to a moving-horizon approach.

Let us consider the problem of moving state estimation. The criterion Eq. (30) is split into two parts [24, 25]:

$$J_k = J_{k-H} + \sum_{i=k-H}^{k-1} \left( v_{i+1}^T R^{-1} v_{i+1} + w_i^T Q^{-1} w_i \right) = J_{k-H} + J^{mhe} \tag{31}$$

The second term $J^{mhe}$ of the criterion Eq. (31) depends on the sequence of noises $\{w_{k-H}, \ldots, w_{k-1}\}$ and on the state $x_{k-H}$. Assume that $k > H$ and define the optimized criterion:

$$J_{k-H}^* = \min_{x_0,\, w_0, \ldots, w_{k-H-1}} J_{k-H} \tag{32}$$

The fully optimized criterion then becomes:

$$J_k^* = \min_{x_0,\, w_0, \ldots, w_{k-1}} J_k \tag{33}$$


$$J_k^* = \min_{z,\, w_{k-H}, \ldots, w_{k-1}} \left[ \sum_{i=k-H}^{k-1} \left( v_{i+1}^T R^{-1} v_{i+1} + w_i^T Q^{-1} w_i \right) \right] + J_{k-H}^*(z) \tag{34}$$

where $z$ is the arrival state $x_{k-H}$ obtained from the optimized variables $w_0^*, \ldots, w_{k-H-1}^*$ and $x_0$.

In practice, it is very complicated and almost impossible to truly minimize $J_{k-H}(z)$ when $k$ becomes large, as this would again be a full estimation problem. The recommended solution is to retain the previous values of the optimized criterion $J_k^*$ obtained by moving horizon estimation, denoted by $J_k^{mhe}(z)$ along time $k$, and to approximate $J_{k-H}(z)$ as:

$$J_{k-H}(z) \approx \left(z - \hat{x}_{k-H}^{mhe}\right)^T \Pi_{k-H}^{-1} \left(z - \hat{x}_{k-H}^{mhe}\right) + J_{k-H}^{mhe}(z) \tag{35}$$

where $\hat{x}_{k-H}^{mhe}$ is the state estimated by the moving horizon observer at time $(k-H)$. Under these assumptions, the criterion Eq. (31) becomes:

$$J_k = \sum_{i=k-H}^{k-1} \left( v_{i+1}^T R^{-1} v_{i+1} + w_i^T Q^{-1} w_i \right) + \left( z - \hat{x}_{k-H}^{mhe} \right)^T \Pi_{k-H}^{-1} \left( z - \hat{x}_{k-H}^{mhe} \right) + J_{k-H}^{mhe}(z) \tag{36}$$

The discrete Riccati equation used for the covariance matrix of the Kalman filter is employed to update $\Pi_k$:

$$\Pi_k = A\Pi_{k-1}A^T + GQG^T - A\Pi_{k-1}C^T\left[C\Pi_{k-1}C^T + R\right]^{-1}C\Pi_{k-1}A^T \tag{37}$$

Figure 2. Moving horizon estimation algorithm.

with $\Pi_0$ given. The moving horizon estimation algorithm is described by the diagram in Figure 2.
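For the linear-Gaussian case, the criterion Eq. (36) is quadratic, so the whole moving-horizon problem can be solved as one stacked weighted least-squares problem. The sketch below is a minimal numpy illustration assuming $G = I$ and no input; the model matrices, horizon, and weights are illustrative choices, not values from the chapter.

```python
import numpy as np

def mhe_linear(A, C, Q, R, Pi, xbar, ys):
    """Moving horizon estimate for x_{k+1} = A x_k + w_k, y_k = C x_k + v_k.

    Minimizes the quadratic criterion of Eq. (36) (linear case, G = I, no
    input) over the arrival state z and the noises w_i by stacking every
    residual into one weighted linear least-squares problem.
    ys: measurements y_{k-H+1}, ..., y_k. Returns the estimate of x_k.
    """
    n, H = xbar.size, len(ys)
    nvar = n * (H + 1)                            # unknowns: [z, w_0..w_{H-1}]
    Pi_s = np.linalg.cholesky(np.linalg.inv(Pi))  # Pi^{-1} = Pi_s Pi_s^T
    Q_s = np.linalg.cholesky(np.linalg.inv(Q))
    R_s = np.linalg.cholesky(np.linalg.inv(R))

    rows, rhs = [], []
    # arrival-cost residual: (z - xbar)^T Pi^{-1} (z - xbar)
    blk = np.zeros((n, nvar)); blk[:, :n] = Pi_s.T
    rows.append(blk); rhs.append(Pi_s.T @ xbar)
    for j in range(1, H + 1):
        # x_j = A^j z + sum_i A^{j-1-i} w_i, expressed as a linear map T
        T = np.zeros((n, nvar))
        T[:, :n] = np.linalg.matrix_power(A, j)
        for i in range(j):
            T[:, n*(1+i):n*(2+i)] = np.linalg.matrix_power(A, j - 1 - i)
        rows.append(R_s.T @ C @ T); rhs.append(R_s.T @ ys[j - 1])  # v residual
    for i in range(H):                                             # w residual
        blk = np.zeros((n, nvar)); blk[:, n*(1+i):n*(2+i)] = Q_s.T
        rows.append(blk); rhs.append(np.zeros(n))

    theta, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs),
                                rcond=None)
    x = theta[:n]
    for i in range(H):                   # roll the model forward to time k
        x = A @ x + theta[n*(1+i):n*(2+i)]
    return x

def riccati_update(Pi, A, C, G, Q, R):
    """Eq. (37): propagate the arrival-cost weighting matrix Pi_k."""
    S = C @ Pi @ C.T + R
    return (A @ Pi @ A.T + G @ Q @ G.T
            - A @ Pi @ C.T @ np.linalg.inv(S) @ C @ Pi @ A.T)

# usage sketch with noise-free data and a weak (large-Pi) prior
A = np.array([[1.0, 0.1], [0.0, 0.95]])
C = np.array([[1.0, 0.0]])
Q, R, Pi = 1e-6 * np.eye(2), 1e-6 * np.eye(1), 100.0 * np.eye(2)
x = np.array([1.0, 0.5])
ys = []
for _ in range(10):
    x = A @ x
    ys.append(C @ x)
xhat_k = mhe_linear(A, C, Q, R, Pi, xbar=np.zeros(2), ys=ys)
```

With noise-free data and an observable pair (A, C), the stacked solve recovers the true state almost exactly, because the exact trajectory makes every measurement and process residual vanish; `riccati_update` then propagates the arrival-cost weight for the next window.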
