*Dynamic Data Assimilation - Beating the Uncertainties*

It is seen that using the structure (22) this optimization procedure does not require information on the ME statistics.

**4. Joint estimation of state and model error in AF**

The previous section shows how the AF is designed to deal with the difficulty of specifying the covariances of the ME and ObE. This is done without exploiting the possibility of determining, more or less correctly, a subspace for the ME. If such a subspace can be determined without major difficulty, it is beneficial for better estimating the AF gain and improving the filter performance. In [19], a hypothesis on the structure of the ME has been introduced and a number of experiments have been successfully conducted.

There is a long history of joint estimation of state and ME in filtering algorithms, in particular with bias and covariance estimation. One of the most original approaches, dealing with the treatment of bias in recursive filtering (known as bias-separated estimation, BSE), is carried out by Friedland in [20]. He has shown that the MMSE *state estimator* for a linear dynamical system augmented with bias states can be decomposed into three parts: (1) a bias-free state estimator; (2) a bias estimator; and (3) a blender. This BSE approach has the advantage that it requires fewer numerical operations than the traditional augmented-state implementation and avoids the numerical ill-conditioning of bias-separated estimation by the filtering technique.

It is common to treat the bias as part of the system state and then to estimate the bias as well as the system state. There are two types of ME: deterministic (DME) and stochastic (SME). Generally speaking, a suitable equation can be introduced for the ME. In the presence of bias, under the assumption of a constant *b*, instead of (1) one has

$$x(t+1) = \Phi x(t) + b(t) + w(t), \quad b(t+1) = b(t), \quad t = 0, 1, 2, \dots \tag{23}$$

To introduce a subspace for the variables $w(t)$, $b(t)$, the SME and DME in (23), let

$$w = G_w \rho, \quad b = G_b d,$$

$$G_w \in R^{n \times n_w}, \quad G_b \in R^{n \times n_b}, \quad n \ge n_w, \quad n \ge n_b \tag{24}$$

Generally speaking, $G_w$, $G_b$ are unknown, and finding reasonable hypotheses for them is desirable but not self-evident. In [19], one hypothesis for $G_w$, $G_b$ has been introduced (it will be referred to as the Hypothesis on model error, HME).

The information on $G_w$, $G_b$ given above allows the DME *b* and SME *w* to be better estimated, improving the filter performance, especially for $n_b < n$, $n_w < n$ in a HdS setting. The difficulty encountered in the practice of operational forecasting systems is that (practically) nothing is given a priori on the space of the ME values. To overcome this difficulty, one simple hypothesis has been introduced in [19]. This hypothesis is postulated by taking into consideration the fact that, for a large number of data assimilation problems in HdSs, the model time step $\delta t$ (chosen for ensuring stability of the numerical scheme and for guaranteeing a high precision of the discrete solution) is much smaller than $\Delta t$, the assimilation window (time interval between two successive observation arrivals). Suppose that $\Delta t = n_a \delta t$, where $n_a$ is a positive integer.

*Hypothesis* (on the subspace of ME—HME) [19]. Under the condition that $n_a$ is relatively large, the ME belongs to the subspace spanned by all unstable and neutral EiVecs (or SchVecs) of the system dynamics $\Phi$.

**5. Simple numerical examples**

#### **5.1 One-dimensional system**

To see the difference between the AF and the KF in dealing with ME uncertainties, introduce the one-dimensional system

$$x(t+1) = \Phi x(t) + w(t), \quad z(t+1) = h x(t+1) + v(t+1), \quad t = 0, 1, \dots \tag{25}$$

In (25), *Φ* is the unique eigenvalue (also the singular value) of the system dynamics.

i. For simplicity, let $\Phi = 1$, $h = 1$. This corresponds to the situation when the system is *neutrally stable*. The filter fundamental matrix (18) now is $L(K) = (1 - K)$, which is stable if $K \in (0, 2)$. For the KF gain (4)–(8), as $K_{kf}(t) = \frac{M_{kf}(t)}{M_{kf}(t) + R}$, we have $K_{kf}(t) \in (0, 1)$, where $M_{kf}(t)$ is the solution of (7). That is true for any $M_{kf}(t) \ge 0$, $R > 0$. It means that the KF is stable. Mention that if $Q > 0$, always $M_{kf}(t) > 0$. In general, $K_{kf}(t) = M_{kf}(t)\left[M_{kf}(t) + R\right]^{+}$, where $[A]^{+}$ is the pseudo-inverse of $A$ [21].

For the AF, we have in this case $P_r = 1$. Consider the gain $K_{af}(\theta) := P_r \theta K_e$, where $K_e$ is a gain of the form $K_e = \frac{M_e}{M_e + R}$, $M_e > 0$, $R > 0$, with $M_e$ constant. We have then for the NAF ($\theta = 1$) $0 < K_e < 1$ and $K_{naf} = K_e$.

For the AF, the transition matrix (18) reads $L_{af}(\theta) = (1 - \theta K_e)$. For $\theta \in (0, 2)$, $|L_{af}(\theta)| \in [0, 1)$, $K_{af}(\theta) \in (0, 2)$ and the AF is stable. It is evident that there is a larger margin for varying the gain in the AF than in the KF, since $K_{kf}(t) \in (0, 1)$. One sees that the stationary KF is a member of the class of stable AFs (19). The performance of the AF is optimized by solving the problem (9) using the procedure (11) and (12) or SPSA algorithms (*Comment 2.2*).


For *Φ* >1 we have

$$\frac{\Phi - 1}{K_e \Phi} < \theta < \frac{\Phi + 1}{K_e \Phi} \tag{26}$$

In particular, when $\Phi \to 1$, approximately $\theta \in \left(0, \frac{2}{K_e}\right)$. When $Q \gg R$ (as is usual in practice), approximately $\theta \in (0, 2)$ as in the situation (i). For large $\Phi \gg 1$, $\frac{\Phi - 1}{K_e \Phi} \to \frac{1}{K_e}$ (left bound) and $\frac{\Phi + 1}{K_e \Phi} \to \frac{1}{K_e}$ (right bound), so there remains no margin for varying $\theta$; for $Q \gg R$, $K_{af} \to 1$. It is important to emphasize that, as $K_e$ is chosen by the designer, we can define the interval for varying $\theta$ if the amplitude of $\Phi$ is more or less known. In practice, one can vary $\theta \in [\epsilon, 2 + \epsilon]$ with small $\epsilon > 0$ for $\Phi$ close to 1, and with $\epsilon$ close to 1 for large $\Phi$.
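As a quick numerical check of the interval (26), one can iterate the scalar error transition $e(t+1) = (1 - \theta K_e)\Phi\, e(t)$ for a $\theta$ inside and a $\theta$ outside the bounds. A minimal sketch, where the values of $\Phi$, $M_e$, $R$ are illustrative assumptions and not taken from the chapter:

```python
def af_error_growth(Phi, Ke, theta, n_steps=200):
    """Iterate the scalar AF error transition e(t+1) = (1 - theta*Ke) * Phi * e(t)."""
    L = (1.0 - theta * Ke) * Phi
    e = 1.0
    for _ in range(n_steps):
        e = L * e
    return abs(e)

Phi = 1.5                        # unstable scalar dynamics, Phi > 1 (assumed value)
Me, R = 1.0, 0.5                 # illustrative values
Ke = Me / (Me + R)               # K_e = M_e / (M_e + R)
lo = (Phi - 1.0) / (Ke * Phi)    # left bound of (26)
hi = (Phi + 1.0) / (Ke * Phi)    # right bound of (26)

inside = af_error_growth(Phi, Ke, 0.5 * (lo + hi))   # theta inside (26): error decays
outside = af_error_growth(Phi, Ke, hi + 0.5)         # theta outside (26): error blows up
print(lo, hi, inside, outside)
```

For any $\theta$ strictly inside the interval the error magnitude shrinks toward 0, while just outside it grows without bound.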

For *Φ* < � 1 we have

$$\frac{\Phi + 1}{K_e \Phi} < \theta < \frac{\Phi - 1}{K_e \Phi} \tag{27}$$


*Adaptive Filter as Efficient Tool for Data Assimilation under Uncertainties*


*DOI: http://dx.doi.org/10.5772/intechopen.92194*


It is seen from (27) that when $\Phi \to -1$, approximately $\theta \in \left(0, \frac{2}{K_e}\right)$. As for the situation $\Phi \ll -1$, $\frac{\Phi + 1}{K_e \Phi} \to \frac{1}{K_e}$ (left bound) and $\frac{\Phi - 1}{K_e \Phi} \to \frac{1}{K_e}$ (right bound); when $Q \gg R$, approximately $\theta \to \frac{1}{K_e}$, hence $K_{af} \to 1$.

It is important to stress that the KF gain is computed on the basis of *Q* and *R* (under the condition that the statistics of the initial state will be forgotten as *t* becomes large), whereas the gain of the AF is updated on the basis of samples of the innovation vector. It means that the KF is optimal in the MMSE sense (under the condition of exact knowledge of the required statistics), whereas the AF is optimized during the assimilation process using PE realizations of the system output (innovation vector). The KF gain can be computed in an offline fashion, whereas the AF gain is a function of the observations and is computed online.

#### **5.2 Two-dimensional system: specification of covariances**

#### *5.2.1 Stable filter*

To see the role of the correction subspace $R[P_r]$ in ensuring stability of the AF, let us consider the system (1) and (2) s.t.

$$\Phi = \mathrm{diag}(\Phi_{11}, \Phi_{22}), \quad H = \mathrm{diag}(1, 1) \tag{28}$$

Consider the AF with the gain (19) s.t. $P_r = I_d$, i.e., the two columns of $P_r$ are in fact the EiVecs associated with the two EiVs $\Phi_{11}$ and $\Phi_{22}$. Let us denote the AF gain $K_{af} = P_r \Theta M H^T\left(HMH^T + R\right)^{-1} = \Theta K_e$ with $K_e := M_e H^T\left(HM_eH^T + R\right)^{-1}$, which is structurally identical to that of the KF. For the nonadaptive filter $K_{naf} = K_e$ and, with the choice $M_e = \mathrm{diag}(M_{11}, M_{22})$, taking into account (28) one gets

$$K_{naf} = \mathrm{diag}(K_{11}, K_{22}), \quad K_{ii} = \frac{M_{ii}}{M_{ii} + R_i}, \quad i = 1, 2 \tag{29}$$

$$L_{naf} = \mathrm{diag}(l_{11}, l_{22}), \quad l_{ii} = \left(1 - \frac{M_{ii}}{M_{ii} + R_i}\right)\Phi_{ii}, \quad i = 1, 2 \tag{30}$$

The filter transition matrix (30) is obtained on the basis of $L_{naf} = \left(I - K_{naf}H\right)\Phi$ and the assumption (29). It is easy to see that $L_{naf}$ has two EiVs, $\lambda_i = l_{ii}$, $i = 1, 2$, where $l_{ij}$ is the $(ij)$ element of $L_{naf}$.

Stability of the filter depends on the condition $|l_{ii}| < 1$, $i = 1, 2$. For $M_{ii} > 0$, $R_i > 0$, if $\Phi_{ii}$ is *stable* or *neutrally stable*, i.e., $|\Phi_{ii}| \le 1$, $i = 1, 2$, we have $|l_{ii}| < 1$. For *unstable* $\Phi_{ii}$ ($|\Phi_{ii}| > 1$, $i = 1, 2$), the filter is unstable if $\Phi_{ii} > \frac{M_{ii} + R_i}{R_i}$ (situation $\Phi_{ii} > 1$) or $\Phi_{ii} < -\frac{M_{ii} + R_i}{R_i}$ (situation $\Phi_{ii} < -1$). These conditions should be taken into account when the EiVs of $\Phi$ are large.

For the AF gain (19) (*Pr* ¼ *I*),

$$l_{ii} = \left(1 - \frac{\theta_i M_{ii}}{M_{ii} + R_i}\right)\Phi_{ii} \tag{31}$$


From (31), conditions for $|l_{ii}| < 1$ can be obtained as done in Section 5.1 with the one-dimensional system, since $l_{ii}$, $i = 1, 2$, are independent of one another. The length of the interval $I_i$ for varying $\theta_i$ depends on the value of $\Phi_{ii}$ (see (26)).

This example shows that for $P_r = I_d$ it is always possible to construct a stable AF whatever the EiVs of $\Phi$ (stable or unstable). There are some constraints on $M_{ii}$ (positive) and on $R_i$ (small positive). Optimality of the AF is obtained by searching recursively (in time) for the optimal $\theta_i$ during the assimilation process. Thus, in the AF, a correct specification of the ME and ObE statistics (second order) is not as important as it is in the KF.
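The stability threshold $(M_{ii} + R_i)/R_i$ implied by (29)–(30) can be verified directly. A small sketch, with numerical values that are illustrative assumptions:

```python
import numpy as np

def naf_eigs(M, R, Phi_diag):
    """EiVs of L_naf = (I - K_naf*H)*Phi for the diagonal system (28)-(30), H = I."""
    K = M / (M + R)                # gains K_ii, eq. (29)
    return (1.0 - K) * Phi_diag    # eigenvalues l_ii, eq. (30)

M = np.array([1.0, 1.0])           # M_ii > 0
R = np.array([0.2, 0.2])           # R_i > 0, so (M_ii + R_i)/R_i = 6
l_stable = naf_eigs(M, R, np.array([0.9, 3.0]))      # |Phi_ii| below the threshold
l_unstable = naf_eigs(M, R, np.array([10.0, -8.0]))  # |Phi_ii| above the threshold
print(np.abs(l_stable), np.abs(l_unstable))
```

The moduli $|l_{ii}|$ fall below 1 exactly when $|\Phi_{ii}| < (M_{ii} + R_i)/R_i$, in line with the conditions stated above.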

#### *5.2.2 Unstable filter*


Consider the situation when $P_r$ is constructed from only one vector. Let $P_r = (1, 0)^T$, the EiVec associated with $\Phi_{11}$ (the results remain the same if we choose $P_r = (0, 1)^T$, the EiVec associated with $\Phi_{22}$).

$$\Phi = \mathrm{diag}(\Phi_{11}, \Phi_{22}), \quad |\Phi_{11}| > 1, \quad |\Phi_{22}| > 1, \quad H = \mathrm{diag}(1, 1) \tag{32}$$

We show now that the filter with the gain (19) is unstable. We have (for *Θ* ¼ *Id*),

$$H_e = HP_r = P_r, \quad K = P_r K_e, \quad K_e = M_e H_e^T\left[H_e M_e H_e^T + R\right]^{-1} = M_e P_r^T\left[P_r M_e P_r^T + \alpha I\right]^{-1} \tag{33}$$

if we put $R = \alpha I$. For $L_{naf} = \left[I - KH\right]\Phi$, taking into account (32) and (33), it implies

$$L_{naf} = \begin{bmatrix} \dfrac{\alpha\Phi_{11}}{m_e + \alpha} & 0 \\ 0 & \Phi_{22} \end{bmatrix} \tag{34}$$

As $\frac{\alpha}{m_e + \alpha}$ can be made as small as desired by choosing small $\alpha > 0$, the first EiV $l_{11} = \frac{\alpha\Phi_{11}}{m_e + \alpha}$ can be made stable. However, the second EiV in (34), $l_{22} = \Phi_{22}$ with $|\Phi_{22}| > 1$, is unstable. It implies that the filter with the gain (19) s.t. $P_r = (1, 0)^T$ is unstable. This happens even for $\Theta \ne I_d$. It means that when the projection subspace $R[P_r]$ does not contain all unstable and neutral EiVecs of the system dynamics, it is impossible to guarantee stability of the filter.
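The instability (34) is easy to reproduce numerically. A sketch with illustrative, assumed values of $\Phi_{11}$, $\Phi_{22}$, $m_e$ and $\alpha$ (not taken from the chapter):

```python
import numpy as np

Phi = np.diag([1.5, 1.2])            # |Phi_11| > 1, |Phi_22| > 1, as in (32)
H = np.eye(2)
Pr = np.array([[1.0], [0.0]])        # projection onto the first EiVec only
me, alpha = 1.0, 1e-3                # reduced covariance m_e and R = alpha*I

He = H @ Pr
Ke = me * He.T @ np.linalg.inv(me * (He @ He.T) + alpha * np.eye(2))  # eq. (33)
K = Pr @ Ke
L_naf = (np.eye(2) - K @ H) @ Phi                                     # eq. (34)

eigs = np.abs(np.linalg.eigvals(L_naf))
print(eigs)  # one EiV ~ alpha*Phi_11/(me + alpha) (small), the other = Phi_22
```

However small $\alpha$ is made, the eigenvalue associated with the uncorrected direction stays at $\Phi_{22}$, so the filter spectral radius remains above 1.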

#### **5.3 Two-dimensional system: estimation of ME**

Consider the filtering problem (1) and (2); the dynamical system (1) describes a sequence of system states at the time instants $t = 0, 1, \dots$ when observations are available. It means that $\Phi$ represents the transition of the system state over the (observation) time window $\Delta t$ separating the arrivals of two successive observations. In practice, the interval $\Delta t$ is much larger than the *model time step* $\delta t$, which is the step size used in approximating the temporal derivative. The choice of $\delta t$ is important for guaranteeing stability of the discretized scheme and a high precision of the discrete solution (wrt the continuous solution). We have then $\Delta t = n_a \delta t$, where $n_a$ is a relatively large positive integer. For example, in the HYCOM model at SHOM (the French Navy's hydrographic and oceanographic service) for the Bay of Biscay configuration, the interval $\Delta t$ between two observation arrivals is 7 days, which is equivalent to integrating 1200 model time

steps *δt*. It means *na* ¼ 1200. Symbolically we have then the equations for model time step integration

$$x'(\tau+1) = \Phi' x'(\tau) + \psi'(\tau), \quad \psi'(\tau) := b'(\tau) + w'(\tau) \tag{35}$$

In (35), $\Phi'$ represents the integration of the numerical model over one model time step $\delta t$. Hence

$$\Phi = \left[\Phi'\right]^{n_a} \tag{36}$$


The contribution of $\psi'(\tau)$ over the assimilation window $[t-1, t]$ (for simplicity and without loss of generality, one supposes $t-1 := 0$, $t := n_a$) is

$$\psi(t) = b(t) + w(t), \quad b(t) := \sum_{\tau=0}^{n_a} \nu_1(\tau), \quad \nu_1(\tau) := \left[\Phi'\right]^{n_a-1-\tau} b'(\tau),$$

$$w(t) := \sum_{\tau=0}^{n_a} \nu_2(\tau), \quad \nu_2(\tau) := \left[\Phi'\right]^{n_a-1-\tau} w'(\tau), \quad \left[\Phi'\right]^{-1} := 0 \tag{37}$$

The HME in Section 4 says that the SME $w(t)$ and the DME $b(t)$, as functions of $n_a$, belong to the subspace spanned by the leading EiVecs (or SchVecs) of $\Phi$ for relatively large $n_a$. The initial filtering problem now has the form (1) and (2) s.t. (36) and (37), where $t = \tau/n_a$.

To illustrate this HME, continue with the two-dimensional system in Section 5.2.2 and suppose that $|\Phi'_{11}| > 1$, $|\Phi'_{22}| < 1$. Applying the HME in this case is equivalent to saying that the values of the MEs $b(t)$, $w(t)$ approximately belong to the subspace $R[u_1]$ spanned by the first EiVec $u_1 = col(1, 0)$, associated with the EiV $\Phi'_{11}$. Here $y = col(y_1, \dots, y_n)$ denotes the column vector with components $y_1, \dots, y_n$. It follows that the covariance matrix of $w(t)$ is assumed to be of the form $Q = \sigma_w^2 u_1 u_1^T$, and $b(t)$ of the structure $b(t) = cu_1$, where $c$ is a scalar to be estimated. For the algorithm of joint estimation of state and bias (in terms of $c$), see [19].

#### **6. Simulation results**

#### **6.1 One-dimensional system**

In this section, the filtering problem (25) in Section 5.1 is considered s.t.

$$\Phi = 0.99, H = 1.02, Q = 0.09, R = 0.01.$$

The true system states and observations are simulated using the initial state $x(0) = 1$; $w(t)$, $v(t)$ are zero-mean Gaussian, mutually uncorrelated and temporally uncorrelated sequences.

To see the performance of the AF, the unknown system states are estimated on the basis of the AF algorithm. To obtain a reference, the standard KF is also implemented for solving this filtering problem. In the filtering algorithms, the estimate of the initial state is $\hat{x}(0) = 2$. The gain $K_{naf}$ in the NAF is taken as that of the KF at $t = 0$, i.e., $K_{naf} = K_{kf}(0)$.
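A minimal twin-experiment sketch of this setup compares the KF (exact statistics) with a NAF whose gain is frozen at its first computed value; the adaptive update of $\theta_m(t)$ is omitted, and the initial FE variance is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
Phi, H, Q, R = 0.99, 1.02, 0.09, 0.01
T = 500

# Twin experiment: simulate truth and observations with x(0) = 1
x, z = np.empty(T + 1), np.empty(T + 1)
x[0] = 1.0
for t in range(T):
    x[t + 1] = Phi * x[t] + rng.normal(0.0, np.sqrt(Q))
    z[t + 1] = H * x[t + 1] + rng.normal(0.0, np.sqrt(R))

# Scalar KF (exact Q, R) and NAF with a constant gain
xh_kf = xh_naf = 2.0               # initial estimate xhat(0) = 2
M = 1.0                            # initial FE variance (assumption)
K_naf = None
e_kf, e_naf = [], []
for t in range(T):
    M = Phi * M * Phi + Q          # forecast error variance
    K = M * H / (H * M * H + R)    # KF gain
    M = (1.0 - K * H) * M          # analysis error variance
    if K_naf is None:
        K_naf = K                  # freeze the NAF gain at its first value
    xf_kf, xf_naf = Phi * xh_kf, Phi * xh_naf
    xh_kf = xf_kf + K * (z[t + 1] - H * xf_kf)
    xh_naf = xf_naf + K_naf * (z[t + 1] - H * xf_naf)
    e_kf.append(xh_kf - x[t + 1])
    e_naf.append(xh_naf - x[t + 1])

rms_kf = float(np.sqrt(np.mean(np.square(e_kf))))
rms_naf = float(np.sqrt(np.mean(np.square(e_naf))))
print(rms_kf, rms_naf)
```

Both filters track the state; with exact $Q$, $R$ the KF RMS is expected to be slightly smaller, consistent with the figures discussed below.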

**Figure 1** shows the temporal evolution of the parameter $\theta_m(t)$ during the assimilation process.

The gains in the KF and AF during the assimilation process are displayed in **Figure 2**. Mention that the KF gain is computed s.t. the true statistics $Q$, $R$. In the AF, $\theta_m(t)$ has been used for the computation of the AF gain, i.e., $K_{af} = \theta_m(t)K$. From **Figure 2**, one


sees that, initialized with the same value, the two gains become different during the assimilation process. The KF gain reaches a stationary regime very quickly.

The mean temporal RMS (root mean square) of the innovation is shown in **Figure 3**. It is interesting to remark that no significant difference is observed between the two curves, with a slightly better performance produced by the KF.

In **Figure 4**, we show the RMS of the state FE produced by the KF and the AF under the condition that the variance $Q$ is known exactly. One sees that the KF, as expected, produces the best results.

**Figure 1.** *Temporal evolution of the parameter $\theta_m(t)$ in the AF gain.*

**Figure 2.** *Temporal evolution of gains in KF and AF during data assimilation.*


**Figure 3.** *RMS of innovation produced by the KF and AF.*

**Figure 4.** *RMS of the state FE produced by the KF and AF under the condition that the variance of ME is known exactly. It is seen that when the ME is correctly specified, the KF behaves better than the AF.*

**Figure 5** shows the RMS of FE as a function of the variance *Q*. Here, the value of *Q* varies from 0.1 to 1.9. Note that the true value of *Q* is 0.1. The red curve represents the RMS of FE produced by the KF at the end of the assimilation period (as a function of *Q*). The green curve has the same meaning, but for the FE

*Adaptive Filter as Efficient Tool for Data Assimilation under Uncertainties DOI: http://dx.doi.org/10.5772/intechopen.92194*

#### **Figure 5.**

*RMS of FE as a function of Q. The true value of Q is equal to 0. It is noted that the KF behaves better than the AF s.t. true Q but is more and more degraded as the ME becomes greater and greater. At the same time, the FE of the AF remains very robust.*

produced by the AF. It is interesting to note that when *Q* is correctly specified, the KF behaves better than the AF, but misspecification of *Q* leads to growth of the error in the KF. The AF is robust wrt the error in the specification of *Q*. This speaks in favor of the AF as an efficient tool for overcoming uncertainties in the ME.
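The sensitivity of the KF to a misspecified ME variance can be reproduced on a toy scalar system. This is only an illustrative sketch, not the chapter's exact 1D configuration: the transition coefficient *φ* = 0.9 and the run length are assumed values; only *R* = 0.16 and the range of *Q* follow the text.

```python
import numpy as np

def kf_rmse(q_filter, q_true=0.1, phi=0.9, r=0.16, T=2000, seed=0):
    """Run a scalar KF that assumes ME variance q_filter against data
    generated with the true variance q_true; return the RMS of the
    filtered error over the second half of the run."""
    rng = np.random.default_rng(seed)
    x = 0.0           # true state
    xh, p = 0.0, 1.0  # filter estimate and its variance
    errs = []
    for t in range(T):
        x = phi * x + rng.normal(0.0, np.sqrt(q_true))   # truth
        y = x + rng.normal(0.0, np.sqrt(r))              # observation (H = 1)
        # forecast step
        xh, p = phi * xh, phi * p * phi + q_filter
        # analysis step
        k = p / (p + r)
        xh, p = xh + k * (y - xh), (1.0 - k) * p
        errs.append(xh - x)
    return float(np.sqrt(np.mean(np.square(errs[T // 2:]))))

rmse_true = kf_rmse(q_filter=0.1)  # Q correctly specified
rmse_bad = kf_rmse(q_filter=1.9)   # Q strongly overestimated
```

With the matched *Q* the KF is optimal, so `rmse_true` comes out smaller; the mismatched filter is suboptimal but remains stable, which mirrors the qualitative behavior of the red curve in **Figure 5**.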

#### **6.2 Two-dimensional system**

#### *6.2.1 Illustration of hypothesis HME*

According to the notations in Section 5.3, consider the two-dimensional system (1) with *Φ*<sup>0</sup> = (*Φ*<sup>0</sup><sub>ij</sub>), *Φ*<sup>0</sup><sub>11</sub> = 1.02, *Φ*<sup>0</sup><sub>12</sub> = 0.1, *Φ*<sup>0</sup><sub>21</sub> = 0, *Φ*<sup>0</sup><sub>22</sub> = 0.9, *H* = (1, 1), with the true DME *b*<sup>0</sup>(*τ*) = col(0.1, 0.1). Thus the first EiV is unstable, the second stable [19].
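The stability claim can be checked directly: the matrix entries below are taken from the text, and since *Φ*<sup>0</sup> is upper triangular its eigenvalues are simply the diagonal entries.

```python
import numpy as np

# Transition matrix of the two-dimensional example (entries from the text).
phi0 = np.array([[1.02, 0.1],
                 [0.0,  0.9]])

# phi0 is upper triangular, so its eigenvalues are the diagonal entries:
# 1.02 (modulus > 1: unstable EiV) and 0.9 (modulus < 1: stable EiV).
eigvals = np.linalg.eigvals(phi0)
```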

Numerically one finds that the first SchVec is equal to *u*1 = −(1, −7.0E−7)<sup>T</sup>.

**Figure 6** [19] shows the simulation results obtained on the basis of (37). One sees that, for *na* > 10, the second component of *w*(*t*) is close to 0, whereas the first component becomes larger and larger (in absolute value) as *na* increases. Here, *w*<sup>0</sup>(*τ*) is a sequence of independent two-dimensional Gaussian random vectors of zero mean and unit variance. This means that the values of *w*(*t*) become closer and closer to the subspace *R*[*u*1] spanned by *u*1; hence the HME is practically valid for *na* > 10 in this example. Note that, as a rule, in ocean numerical models *na* is of order *O*(100) (*na* = 800 for the MICOM model in the experiment in Section 6.3). See also [22].
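A rough way to see why the accumulated ME collapses onto the dominant direction is to apply *Φ*<sup>0</sup> repeatedly; the rule (37) is not reproduced here, so the sketch below only assumes that the accumulated error is dominated by powers of *Φ*<sup>0</sup>.

```python
import numpy as np

phi0 = np.array([[1.02, 0.1],
                 [0.0,  0.9]])

def column_directions(na):
    """Unit directions of the columns of phi0**na: for large na both columns
    align with the dominant direction u1 ~ (1, 0)^T, so any accumulated
    error phi0**na @ w lies (almost) in the subspace spanned by u1."""
    m = np.linalg.matrix_power(phi0, na)
    return m / np.linalg.norm(m, axis=0)

d5, d40 = column_directions(5), column_directions(40)
# Second components of d40's columns are already tiny: the subspace R[u1]
# captures essentially all of the accumulated error.
```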

#### *6.2.2 Assimilation results*

First, a simulation of the sequence of true system states *x*<sup>0</sup>(*τ*), *τ* = 1, …, 390 has been carried out (see (35)). The observations are picked at *τ* = 15, 30, …, 390.
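A minimal sketch of this setup, assuming (35) has the biased form of (23) with *Q*<sup>0</sup> = *Id* (the exact form of (35) is not reproduced here, so this is only an illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
phi0 = np.array([[1.02, 0.1], [0.0, 0.9]])
b0 = np.array([0.1, 0.1])
H = np.array([1.0, 1.0])
R = 0.16

# Generate the true trajectory x0(tau), tau = 1..390 (assumed form of (35)).
x = np.zeros(2)
traj = []
for tau in range(1, 391):
    x = phi0 @ x + b0 + rng.standard_normal(2)  # Q0 = Id
    traj.append(x.copy())
traj = np.array(traj)

# Observations are picked every 15 steps: tau = 15, 30, ..., 390.
obs_times = np.arange(15, 391, 15)
obs = traj[obs_times - 1] @ H + rng.normal(0.0, np.sqrt(R), obs_times.size)
```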


**Figure 3.**
*RMS of innovation produced by the KF and AF.*

*Dynamic Data Assimilation - Beating the Uncertainties*

**Figure 6.** *Two components of w(t) as functions of na.*

In terms of *x*(*t*), the filtering problem then is of the form (1) and (2) with *t* = *τ*/15, *Φ* = (*Φ*<sup>0</sup>)<sup>15</sup> (if no bias exists). When there is a bias, *w*(*t*) is replaced by *ψ*(*t*) ≔ *b*(*t*) + *w*(*t*).

In the experiment, the true system states *x*<sup>0</sup>(*t*) are generated by (35) with *b*<sup>0</sup>(*τ*) = col(0.1, 0.1) and *w*<sup>0</sup>(*t*) of zero mean with the covariance *Q*<sup>0</sup> = *Id*. The observation error is of zero mean and covariance *R* = 0.16. For the state *x*(*t*) in the KF and NAF, the forecast is obtained at each assimilation instant *t* as *x̂*(*t* + 1/*t*) = *Φx̂*(*t*) + *b̂*(*t*). The simulation yields *b̂*(*t*) = (0.2296, 2.0589E−02), which results from applying (37) with *b*<sup>0</sup>(*τ*) = col(0.1, 0.1).
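The treatment of the bias as an extra constant state can be sketched with a standard augmented-state KF. This is not the chapter's NAF, only a textbook baseline on the same toy system; for simplicity observations are assumed at every step, and the dimensions and noise levels are those of the example above.

```python
import numpy as np

rng = np.random.default_rng(3)
phi = np.array([[1.02, 0.1], [0.0, 0.9]])
b0 = np.array([0.1, 0.1])             # true (unknown) constant bias
H = np.array([[1.0, 1.0, 0.0, 0.0]])  # only H @ x is observed, never b
R = 0.16
Q = np.eye(2)

# Augmented model for z = (x, b): z(t+1) = F @ z(t) + (w(t), 0), b(t+1) = b(t).
F = np.block([[phi, np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])

x = np.zeros(2)      # true state
z_hat = np.zeros(4)  # filter estimate of (x, b)
P = np.eye(4)
for t in range(390):
    # simulate truth and observation
    x = phi @ x + b0 + rng.standard_normal(2)
    y = H[0, :2] @ x + rng.normal(0.0, np.sqrt(R))
    # forecast: the x part receives process noise, the bias part does not
    z_hat = F @ z_hat
    P = F @ P @ F.T
    P[:2, :2] += Q
    # analysis
    s = (H @ P @ H.T).item() + R
    K = (P @ H.T / s)[:, 0]
    z_hat = z_hat + K * (y - (H @ z_hat).item())
    P = (np.eye(4) - np.outer(K, H[0])) @ P

b_hat = z_hat[2:]  # bias estimate; its uncertainty shrinks over the run
```

Since the unstable mode amplifies the bias signature in the observations, the bias block of the covariance contracts well below its initial value during the run.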


**Figure 7** depicts the time evolution of the KF and AF gains. One sees here, as in the experiment with the 1D system (**Figure 2**), that the KF gain stabilizes very quickly compared with the AF gain.

**Figure 8** (from [19]) shows the sample time average RMS of the state FE produced by the three filters NAF, KF, and AF. One sees that the AF outperforms the NAF and KF.

**Figure 7.**

*Gain coefficients in KF and AF: The gains in KF and AF are identical at the beginning of the assimilation process.*


**Figure 8.** *Sample time average RMS of the state filtered error produced by the NAF, KF, and AF.*

#### **6.3 Data assimilation in the high-dimensional ocean model**

To illustrate the effectiveness of the AF in dealing with uncertainties in HdSs, this section presents results on data assimilation in the oceanic numerical model MICOM (Miami Isopycnic Coordinate Ocean Model) [19]. MICOM describes the oceanic circulation in the North Atlantic. The model has four vertical layers, with the state consisting of three variables *x* = (*h*, *u*, *v*), where *h* is the layer thickness and (*u*, *v*) are the two velocity components. The horizontal grid is 140 × 180. In total, at each time instant *t* the state *x*(*t*) has dimension 302400 (140 × 180 × 4 *layers* × 3 *variables*). For more details on the configuration of this experiment, see [22].

The experiment is carried out on estimating the oceanic circulation using sea surface height (SSH) measurements. The SSH observation is available every 7 days (*ds*) (hence the observation window Δ*T* = 7 *ds*). Note that simulating the circulation over 7 *ds* requires 800 model time steps (*δt*) of integration.

#### *6.3.1 AF with optimal initial gain*

First, in order to examine whether the method of optimal gain initialization described in Section 3.2 is really useful for improving the filter performance, the optimization problem (21) has been solved. Symbolically, in the gain (20), *Pr* = [*Id*, (*QG*)<sup>T</sup>]<sup>T</sup>, where *Id* is the identity operator on the space of the layer thickness *h* and *QG* is the quasi-geostrophy operator computing the correction for the velocity from the SSH innovation *ζ*. The gain *Ke* computes the correction for *h* from *ζ*. The ECM is *M* = *Mv* ⊗ *Mh*, the Kronecker product of the vertical ECM *Mv* with the horizontal ECM *Mh* (see below for details). The two parameter matrices Θ and Λ are related to the parameterization of *Mv*. The problem (21) is solved s.t. Θ = *Id*, the identity operator.
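The computational appeal of the Kronecker structure *M* = *Mv* ⊗ *Mh* is that *M* never has to be formed explicitly: the identity (*Mv* ⊗ *Mh*) vec(*X*) = vec(*Mh X Mv*<sup>T</sup>) lets one work with the small factors. A toy-sized sketch follows; the factor matrices here are random SPD stand-ins, not the model's actual ECMs.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy sizes: nh horizontal points, nv vertical layers (the real model has
# nh = 140 * 180 and nv = 4, far too large to form M explicitly).
nh, nv = 6, 4
a = rng.standard_normal((nh, nh)); Mh = a @ a.T + nh * np.eye(nh)  # horizontal ECM stand-in
b = rng.standard_normal((nv, nv)); Mv = b @ b.T + nv * np.eye(nv)  # vertical ECM stand-in

M = np.kron(Mv, Mh)  # full ECM, shape (nv * nh, nv * nh)

# Applying M via the factors only: (Mv kron Mh) vec(X) = vec(Mh @ X @ Mv.T),
# where X holds the state with one column per layer and vec stacks columns.
X = rng.standard_normal((nh, nv))
lhs = M @ X.reshape(-1, order="F")
rhs = (Mh @ X @ Mv.T).reshape(-1, order="F")
```

The matching of `lhs` and `rhs` is exactly what makes the factored storage (one 140·180-sized matrix and one 4 × 4 matrix instead of a 302400² matrix) practical.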

The optimal parameters *λi*, *i* = 1, …, 4 are found by solving the minimization problem (21) using the SPSA algorithm. **Figure 9** shows the averaged values (see *Comment 2.1*) of *λi*, *i* = 1, …, 4 during the optimization process. All *λi* are initialized as *λi* = 1, *i* = 1, …, 4.
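The SPSA gradient approximation needs only two cost evaluations per iteration regardless of the number of parameters, which is what makes it attractive for tuning the four *λi*. Below is a minimal sketch on a toy quadratic cost standing in for (21); the gain sequences follow standard SPSA practice and the target values are assumed for illustration.

```python
import numpy as np

def spsa_minimize(loss, theta0, n_iter=400, a=0.1, c=0.1,
                  alpha=0.602, gamma=0.101, seed=5):
    """Minimal SPSA: two loss evaluations per iteration, whatever the dimension."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float).copy()
    for k in range(1, n_iter + 1):
        ak = a / k ** alpha  # step-size sequence
        ck = c / k ** gamma  # perturbation-size sequence
        # simultaneous Bernoulli +/-1 perturbation of all components
        delta = rng.choice([-1.0, 1.0], size=theta.size)
        g_hat = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2.0 * ck * delta)
        theta -= ak * g_hat
    return theta

# Toy quadratic standing in for the PE cost (21); the minimum is at an
# assumed target vector, not the chapter's actual optimal lambda values.
target = np.array([2.0, 0.5, 1.0, 1.0])
loss = lambda lam: float(np.sum((lam - target) ** 2))
lam_opt = spsa_minimize(loss, theta0=np.ones(4))  # all lambda_i initialized to 1
```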

Two NAFs are performed: one (denoted NAFI) with the gain (20) s.t. Θ = *Id*, Λ = *Id*, and the other (denoted NAFOI) s.t. Θ = *Id* and *λi*, *i* = 1, …, 4 obtained by solving (21) (their values are those displayed at the end of the optimization process in **Figure 9**). The performances of these two NAFs are shown in **Figure 10**. One sees that the NAFOI considerably improves the quality of the estimates of the velocity *u*-component compared with the NAFI. This result shows that the offline optimization (21) is an interesting strategy for finding the optimal initial gain in the NAF.

**Figure 9.**
*Control parameters λi, i = 1, …, 4 during the optimization process.*


#### *6.3.2 Estimating the ECM of ME*

In practice, for real operational systems, information on the space of the ME is not available or is very poorly known. Usually, there is a big difference between the model and the real physical process; if the ME statistics are accounted for, even approximately, in the filtering algorithm, one can improve the filter performance and reduce the estimation error.

This idea is tested here by applying the HME of Section 4: we carry out the procedure for estimating the ECM of the ME by first constructing the subspace for the ME. For more details on the structure of the ECM *M* in the AF, see [23]. According to [23], the ECM *M* is assumed to have the structure *M* = *Mv* ⊗ *Mh*, the Kronecker product of the horizontal ECM *Mh* with the vertical ECM *Mv*.

#### **Figure 10.**

*Temporal average RMS of FE for the total velocity u-component produced by the NAF s.t. the initial gain (red curve) and the NAF s.t. the optimal initial gain resulting from solving (21) (green curve).*

**Figure 11** displays the RMS of FE for the *u* velocity component at the surface resulting from two AFs. The curve *AF*0*U* corresponds to the AF whose nonadaptive version has the gain computed on the basis of the ECM *M* using an ensemble of PE samples (generated by the PeSP in [18]). The curve *AF*3*U* shows the performance of the AF with the modified ECM (obtained by adding the vertical ME covariance *Qv* to the vertical ECM *Mv*). More precisely, *Qv* is assumed to belong to the subspace spanned by the three leading EiVs of *Mv*. This choice is justified by the fact that the eigenvalue decomposition of *Mv* has first three EiVs with explained variances of 67, 17, and 15%, respectively. As the fourth EiVec explains only 0.7E-07% of the variance, it is dropped from the subspace constructed for the vertical ME. The better performance of *AF*3*U*, in comparison with *AF*0*U*, is apparent in **Figure 11**.

#### **Figure 11.**

*Performance of the AF: (i) AF*0*U—no ME ECM has been taken into account; (ii) AF*3*U—with ME ECM computed in accordance with the HME.*
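The construction of *Qv* from the leading EiVs of *Mv* amounts to truncating the eigendecomposition. A toy sketch follows; the 4 × 4 *Mv* here is a random SPD stand-in, not the model's actual vertical ECM.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy 4x4 SPD matrix standing in for the vertical ECM Mv (4 layers).
a = rng.standard_normal((4, 4))
Mv = a @ a.T + 0.1 * np.eye(4)

vals, vecs = np.linalg.eigh(Mv)        # eigh returns ascending order
vals, vecs = vals[::-1], vecs[:, ::-1] # reorder: leading EiVs first
explained = vals / vals.sum()          # explained-variance fractions

# Keep the three leading EiVs; Qv is Mv restricted to their span,
# dropping the last EiVec exactly as in the experiment above.
U = vecs[:, :3]
Qv = U @ np.diag(vals[:3]) @ U.T
```

By construction the discarded part, *Mv* − *Qv*, is the rank-one contribution of the last EiVec, so almost no explained variance is lost when that EiV is negligible.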

The above experiment shows in detail how, on the basis of the HME, the subspace for the ME can be constructed and how one estimates the ECM of the model error. The superior performance of *AF*3*U* over *AF*0*U* validates the usefulness of the HME, which can serve as an important tool for estimating the ME and improving the performance of the AF in solving data assimilation problems with HdSs.

#### **7. Conclusions**

One of the key assumptions ensuring the optimal performance of the KF is that a priori knowledge of the system model is given without any uncertainty. This assumption, however, is never valid in practice for the dynamical systems under consideration. Uncertainty enters everywhere in modeling a real process: structural uncertainty, model parameterization, model resolution, model bias, or ME statistics. For HdSs, order reduction, introduced either in the original numerical model or in the filtering algorithm, inevitably leads to uncertainty in the ME, especially in geophysical numerical models.

Our focus in this chapter has been to show how the AF efficiently solves filtering problems for systems operating in an uncertain environment.

As seen from this chapter, the AF has proven efficient in dealing with uncertainties in the specification of the ME statistics, system bias, or model reduction. The reasons for the success of the AF are that (i) it belongs to the class of parametrized stable filters; (ii) it is defined as the best member of that class, minimizing the mean PE of the system outputs; and (iii) the tuning parameters are chosen as elements of the stabilizing gain and need not carry any physical meaning.

It is obvious from this chapter that the performance of the AF is comparable with that of the KF when perfect knowledge of all the ME statistics is given, and that it outperforms the KF in the presence of uncertainties. This happens because the AF acquires knowledge during the assimilation process, regardless of the uncertainties existing in the filtering problem. From the computational point of view, implementation of the AF consumes much less memory and computational time than the KF or other assimilation methods.

Simple numerical examples and simulation results, presented in Sections 5 and 6, clearly demonstrate the advantages gained through application of the AF in dealing with uncertainties. These positive results encourage a wide application of the AF in different fields of technology and applied sciences such as automatic control, finance, aerospace, space exploration, meteorology, and oceanography. More in-depth research on the capacity of the AF to deal with uncertainties is surely a challenge for the near future.

**References**

[1] Kalman RE. A new approach to linear filtering and prediction problems. Journal of Basic Engineering. 1960;**82**:35-45. DOI: 10.1115/1.3662552

[2] Kailath T, Sayed AH, Hassibi B. Linear Estimation. Upper Saddle River, NJ: Prentice-Hall; 2000

[3] Liptser RS, Shiryaev AN. Statistics of Random Processes—I. General Theory. Berlin and Heidelberg: Springer-Verlag; 2001

[4] Sayed AH. Fundamentals of Adaptive Filtering. NJ: Wiley; 2003

[5] Kucera V. The discrete Riccati equation of optimal control. Kybernetika. 1972;**8**(5):430-447

[6] Hoang HS, Talagrand O, Baraille R. On the design of a stable adaptive filter for high dimensional systems. Automatica. 2001;**37**:341-359

[7] Simon D. Optimal State Estimation. Hoboken, NJ: John Wiley and Sons; 2006. ISBN: 978-0-471-70858-2

[8] Gustafsson F, Hendeby G. Some relations between extended and unscented Kalman filters. IEEE Transactions on Signal Processing. 2012;**60**(2):545-555

[9] Evensen G. The ensemble Kalman filter: Theoretical formulation and practical implementation. Ocean Dynamics. 2003;**53**(4):343-367. DOI: 10.1007/s10236-003-0036-9

[10] Chen Y, Snyder C. Assimilating vortex position with an ensemble Kalman filter. Monthly Weather Review. 2007;**135**(5):1828-1845. DOI: 10.1175/MWR3351.1

[11] Del Moral P. Non linear filtering: Interacting particle solution. Markov Processes and Related Fields. 1996;**2**(4):555-580

[12] Fitzgerald R. Divergence of the Kalman filter. IEEE Transactions on Automatic Control. 1971;**16**(6):736-747

[13] Hoang HS, De Mey P, Talagrand O, Baraille R. A new reduced-order adaptive filter for state estimation in high dimensional systems. Automatica. 1997;**33**(8):1475-1498

[14] Polyak BT. New method of stochastic approximation type. Automation and Remote Control. 1990;**51**(7):937-946

[15] Spall JC. Introduction to Stochastic Search and Optimization. New Jersey: Wiley; 2003. ISBN 978-0-471-33052-3

[16] Hoang HS, Baraille R. Stochastic simultaneous perturbation as powerful method for state and parameter estimation in high dimensional systems. In: Baswell AR, editor. Advances in Mathematics Research. Nova Science Publishers; 2015;**20**:117-148. ISBN: 978-1-63482-741-6

[17] Hoang HS, Baraille R. A comparison study on performance of an adaptive filter with other estimation methods for state estimation in high-dimensional system. In: Hokimoto T, editor. Advances in Statistical Methodologies and their Application to Real Problems. Rijeka, Croatia: IntechOpen; 2017. pp. 29-52. DOI: 10.5772/67005

[18] Hoang HS, Baraille R. Prediction error sampling procedure based on dominant Schur decomposition. Application to state estimation in high dimensional oceanic model. Journal of Applied Mathematics and Computing. 2011;**218**(7):3689-3709

## **Author details**

Hong Son Hoang\* and Remy Baraille REC/HOM/SHOM, Toulouse, France

\*Address all correspondence to: hhoang@shom.fr

© 2020 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/ by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

