1. Introduction

The study of high-dimensional systems (HdS) today constitutes one of the most important research subjects, thanks to the exponential increase in the power and speed of computers: according to Moore's law, the number of transistors in a dense integrated circuit doubles approximately every 2 years (see Myhrvold [1]). However, this exponential increase is still far from sufficient to meet the great demand on computational and memory resources involved in implementing optimal data assimilation algorithms (like the Kalman filter (KF) [2], for example) for operational forecasting systems (OFS).

© 2018 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

On Optimal and Simultaneous Stochastic Perturbations with Application to Estimation of High-Dimensional…
http://dx.doi.org/10.5772/intechopen.77273

This chapter is devoted to the role of perturbations as an efficient tool for the predictability of dynamical systems and ensemble forecasting, and for overcoming the difficulties in the design of data assimilation algorithms, in particular of optimal adaptive filtering for extremely HdS. In [3, 4], Lorenz studied the problem of predictability of the atmosphere. He found that the atmosphere is a chaotic system and that the predictability limit of a numerical forecast is about 2 weeks. This barrier of predictability has to be overcome in order to extend the forecast period further. The fact that estimates of the current state are inaccurate and that numerical models have inadequacies leads to forecast errors that grow with increasing forecast lead time. Ensemble forecasting aims at quantifying this flow-dependent forecast uncertainty. Today, a medium-range forecast has become a standard product. In the 1990s, the ensemble forecasting (EnF) technique was introduced in operational centers such as the European Centre for Medium-Range Weather Forecasts (ECMWF) (see Palmer et al. [5]) and the NCEP (US National Center for Environmental Prediction) (Toth and Kalnay [6]). It is found that a single forecast can depart rapidly from the real atmosphere. The idea of ensemble forecasting is to add perturbations around the control forecast so as to produce a collection of forecasts that better simulate the possible uncertainties in a numerical forecast. The ensemble mean can then act as a nonlinear filter whose skill is higher than that of the individual members in a statistical sense (Toth and Kalnay [6]).

The chapter is organized as follows. Section 2 first outlines optimal perturbation (OP) theory and shows how the OP plays an important role in seeking the fastest-growing direction of the prediction error (PE). The predictability theory of a dynamical system, as well as the stability of the filtering algorithm, are developed on the basis of OPs. The definition of the optimal deterministic perturbation (ODP) and some theoretical results on the ODP are introduced; it is found that the ODP is associated with the leading right singular vector (SV) of the system dynamics. In Section 3, two other classes of ODPs are presented: the leading eigenvector (EV) and the leading real Schur vector (SchV) of the system dynamics. We mention that the first EV is the ODP in the eigen invariant subspace (EI-InS) of the system dynamics. As to the leading SchV, it is the ODP in the Schur invariant subspace (Sch-InS), which is closely related to the EI-InS in the sense that the subspace of leading SchVs generated by the sampling procedure (Sampling-P, Section 3) converges to the EI-InS. In Section 4, we present another type of OP, called the optimal stochastic perturbation (OSP). The OSP is a natural extension of the ODP which gives insight into what the fastest-growing PE represents and how one can produce it by stochastically perturbing the initial state. One important class of perturbations, known as the simultaneous stochastic perturbation (SSP), is presented in Section 5; it will be shown that the SSP is very efficient for solving optimization problems in a high-dimensional (Hd) setting. Different algorithms for estimating, decomposing, … Hd matrices are also presented there. Numerical examples illustrating the theoretical results and the efficiency of the OPs in solving data assimilation problems are given in Section 6. An experiment on data assimilation in the Hd ocean model MICOM with filters constructed on the basis of the Schur ODPs and SSPs is presented in Section 7. The concluding remarks are presented in Section 8.

2. Optimal perturbations: predictability and filter stability

The behavior of the atmosphere or ocean is recognized as highly sensitive to initial conditions: a small change in an initial condition can strongly alter the trajectory of the system. It is therefore important to know the directions of rapid growth of the system state. Research on OPs is aimed precisely at finding methods to better capture these rapidly growing directions of the system dynamics, in order to optimize the predictability of the physical process under consideration.

To explain this phenomenon more clearly, consider a standard linear filtering problem

x(k+1) = Φ x(k) + w(k+1), z(k+1) = H x(k+1) + v(k+1), (1)

where Φ ∈ R^{n×n} is the state transition matrix and H ∈ R^{p×n} is an observation matrix. Under standard conditions on the model and observation noises w(k), v(k), the minimum mean squared (MMS) estimate x̂(k) can be obtained by the well-known KF [2],

x̂(k+1) = x̂(k+1/k) + K ζ(k+1), x̂(k+1/k) = Φ x̂(k), (2)

where ζ(k+1) = z(k+1) − H x̂(k+1/k) is the innovation vector, x̂(k+1) is the filtered (or analysis) estimate, and x̂(k+1/k) is the one-step-ahead prediction of x(k+1). The KF gain K is given by

K = M H^T (H M H^T + R)^{−1}, (3)

where M is the prediction error covariance matrix and R is the observation noise covariance matrix.

2.1. Stability of the filter

From Eq. (2), it can be shown that the transition matrix of the filtered estimate equation is L = (I − KH) Φ.

For HdS, the KF gain (3) is impossible to compute. In a study by Hoang et al. [7], it is suggested to find the gain with the structure

K = Pr Ke, (4)

with Pr ∈ R^{n×ne} an operator projecting a vector from the reduced space R^{ne} to the full system space R^n, and Ke ∈ R^{ne×p} the gain of the reduced filter. One very important question arising here is how one can choose the subspace of projection and the structure of Ke so as to make L stable. It is found in the work of Hoang et al. [8] that detectability of the input-output system (1) is sufficient for the existence of a stabilizing gain K, and that this gain can be constructed with Pr consisting of all unstable EVs (or unstable SVs or SchVs; see Section 3) of the system dynamics.

2.2. Singular value decomposition and optimal perturbations

Consider the singular value decomposition (SVD) of Φ [9],
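The reduced-gain construction of Section 2.1 can be sketched numerically. The following minimal NumPy example uses an invented 3-state system (the matrices Φ, H and the gain value Ke are illustrative assumptions, not taken from the chapter): Pr spans the unstable eigenvector subspace, and the structured gain K = Pr Ke pulls the unstable mode of L = (I − KH)Φ inside the unit circle.

```python
import numpy as np

# Illustrative 3-state system with one unstable mode (all values invented
# for this sketch; they are not from the chapter).
Phi = np.diag([1.2, 0.5, 0.3])   # state transition; eigenvalue 1.2 is unstable
H = np.array([[1.0, 0.0, 0.0]])  # observe only the unstable component (p = 1)

# Pr spans the unstable eigenvector subspace (here, the first coordinate axis).
Pr = np.array([[1.0], [0.0], [0.0]])   # n x ne, with ne = 1

# Reduced gain Ke chosen so that the unstable mode of L = (I - KH) Phi
# is damped: (1 - Ke) * 1.2 must have modulus < 1.
Ke = np.array([[0.5]])                 # ne x p
K = Pr @ Ke                            # full gain with the structure (4)

L = (np.eye(3) - K @ H) @ Phi          # transition matrix of the filtered estimate
rho_open = max(abs(np.linalg.eigvals(Phi)))   # spectral radius without feedback
rho_closed = max(abs(np.linalg.eigvals(L)))   # spectral radius of the filter

print(rho_open, rho_closed)  # approx. 1.2 (unstable) vs 0.6 (stable)
```

Only the one-dimensional reduced gain Ke had to be designed here, while the full gain K acts on the whole state space; this is the point of the structure (4) for HdS.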
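The link between the SVD of Φ and optimal perturbations introduced in Section 2.2 can also be illustrated numerically: the leading right singular vector is the unit perturbation that Φ amplifies the most, and its growth equals the largest singular value. A small NumPy sketch with an invented non-normal matrix (the values are assumptions for illustration only):

```python
import numpy as np

# Illustrative, non-normal transition matrix (values invented for the sketch).
# Both eigenvalues (0.9 and 0.7) are stable, yet transient growth is possible.
Phi = np.array([[0.9, 0.8],
                [0.0, 0.7]])

U, s, Vt = np.linalg.svd(Phi)  # Phi = U diag(s) V^T
v1 = Vt[0]                     # leading right singular vector (unit norm)

# The optimal perturbation v1 grows by exactly s[0] (approx. 1.31 here),
# which exceeds the spectral radius 0.9: transient error growth.
growth_opt = np.linalg.norm(Phi @ v1)

# Any other unit perturbation grows by at most s[0].
rng = np.random.default_rng(0)
x = rng.standard_normal(2)
x /= np.linalg.norm(x)
growth_rand = np.linalg.norm(Phi @ x)

print(growth_opt, growth_rand)
```

This is exactly the mechanism the OP theory exploits: even for a stable Φ, the leading right SV identifies the direction in which the prediction error grows fastest over one step.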
