**5. Kernel-based identification of dynamical systems**

In the case of a strongly nonlinear dynamical system, the linear model (19) may not be accurate enough. We then have to include suitable nonlinear observables in the data and the model. In this section, the observables are selected from kernel-based approximations [15]. We then use the kernel-DMD (*k*-DMD, [4]) variant to identify *F*.

A real-valued function *k* on ℝ<sup>*d*</sup> × ℝ<sup>*d*</sup> is called a positive definite kernel function if it is symmetric and if the following property holds:

$$\forall m \in \mathbb{N}^{\star}, \ \forall \{\mathbf{z}\_{i}\}\_{i=1}^{m} \in \left(\mathbb{R}^{d}\right)^{m}, \forall \{a\_{i}\}\_{i=1}^{m} \in \mathbb{R}^{m}, \quad \sum\_{i=1}^{m} \sum\_{j=1}^{m} a\_{i}a\_{j}k(\mathbf{z}\_{i}, \mathbf{z}\_{j}) \ge 0. \tag{20}$$

*Space-Time-Parameter PCA for Data-Driven Modeling with Application to Bioengineering DOI: http://dx.doi.org/10.5772/intechopen.103756*

In other words, the square matrix *K* = (*k*(*z<sub>i</sub>*, *z<sub>j</sub>*))<sub>*i*=1,…,*m*, *j*=1,…,*m*</sub> is positive semidefinite. A standard kernel function is for example the Gaussian one

$$k(\mathbf{z}, \mathbf{z}') = \exp\left(-\frac{1}{2} \frac{\left\|\mathbf{z} - \mathbf{z}'\right\|^2}{\sigma^2}\right) \tag{21}$$

for a given parameter *σ* > 0.
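As a quick numerical sanity check, property (20) says that the kernel matrix built from any sample cloud must have only nonnegative eigenvalues. A minimal sketch for the Gaussian kernel (21), with arbitrary sample points (NumPy assumed):

```python
import numpy as np

def gaussian_kernel(z, zp, sigma=1.0):
    # Gaussian kernel of Eq. (21).
    return np.exp(-0.5 * np.sum((z - zp) ** 2) / sigma ** 2)

# Kernel matrix K_ij = k(z_i, z_j) for m sample points in R^d.
rng = np.random.default_rng(0)
m, d = 20, 3
Z = rng.standard_normal((m, d))
K = np.array([[gaussian_kernel(Z[i], Z[j]) for j in range(m)] for i in range(m)])

# Property (20): K is symmetric positive semidefinite.
assert np.allclose(K, K.T)
assert np.linalg.eigvalsh(K).min() >= -1e-10
```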

## **5.1 Kernel-based interpolation**

Kernel functions can be used for interpolation in spaces of arbitrary dimension. Let *g* : ℝ<sup>*d*</sup> → ℝ be a continuous function and assume that we know the values *g*(*z<sub>i</sub>*) at particular points *z<sub>i</sub>*, *i* = 1, …, *m*. Then one can define an interpolator I*g* of *g* as

$$\mathcal{I}g(\mathbf{z}) = \sum\_{j=1}^{m} \alpha\_j k(\mathbf{z}, \mathbf{z}\_j) \tag{22}$$

where the coefficient vector *α* = (*α*<sub>1</sub>, …, *α<sub>m</sub>*)<sup>*T*</sup> is determined such that the interpolation property

$$\mathcal{I}g(\mathbf{z}\_i) = g(\mathbf{z}\_i) \quad \forall i \in \{1, \ldots, m\} \tag{23}$$

holds. The interpolation conditions lead to the symmetric linear system of size *m*

$$\mathbf{K}\boldsymbol{\alpha} = \mathbf{b} \tag{24}$$

where *b* = (*g*(*z*<sub>1</sub>), …, *g*(*z<sub>m</sub>*))<sup>*T*</sup>. Assuming that *K* is positive definite, Eq. (24) has a unique solution. Let

$$k\_i(\mathbf{z}) = k(\mathbf{z}, \mathbf{z}\_i), \quad i = 1, \ldots, m \tag{25}$$

and *V* = span(*k*<sub>1</sub>, …, *k<sub>m</sub>*). For any function *g̃* ∈ *V*, it is easy to check that *g̃* = I*g̃*. One can derive the interpolation error

$$\begin{aligned} \|g - \mathcal{I}g\|\_{L^{\infty}} &= \|g - \tilde{g} + \tilde{g} - \mathcal{I}g\|\_{L^{\infty}} \\ &= \|g - \tilde{g} + \mathcal{I}\tilde{g} - \mathcal{I}g\|\_{L^{\infty}} \\ &\le \|g - \tilde{g}\|\_{L^{\infty}} + |||\mathcal{I}|||\, \|g - \tilde{g}\|\_{L^{\infty}} \end{aligned} \tag{26}$$

so that

$$\|g - \mathcal{I}g\|\_{L^{\infty}} \le \left(1 + |||\mathcal{I}|||\right) \inf\_{\tilde{g} \in V} \|g - \tilde{g}\|\_{L^{\infty}}. \tag{27}$$

The interpolation error is controlled by the best approximation error multiplied by a stability constant depending on the norm of the interpolation operator.
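The whole procedure of Section 5.1, assembling *K*, solving Eq. (24) for *α*, and evaluating Eq. (22), can be sketched as follows. The test function and the grid of nodes are hypothetical choices for illustration (NumPy assumed):

```python
import numpy as np

def k_gauss(Z1, Z2, sigma=0.7):
    # Pairwise Gaussian kernel matrix between two point sets.
    d2 = np.sum((Z1[:, None, :] - Z2[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / sigma ** 2)

# Interpolation nodes z_i on a small grid in R^2 and a smooth test function g.
t = np.linspace(-1.0, 1.0, 4)
Z = np.array([[x, y] for x in t for y in t])     # m = 16 nodes
g = lambda Z: np.sin(Z[:, 0]) * np.cos(Z[:, 1])

K = k_gauss(Z, Z)                                # K_ij = k(z_i, z_j)
alpha = np.linalg.solve(K, g(Z))                 # Eq. (24): K alpha = b

def interp(Znew):
    # Eq. (22): Ig(z) = sum_j alpha_j k(z, z_j)
    return k_gauss(Znew, Z) @ alpha

# Interpolation property (23) holds at the nodes.
assert np.allclose(interp(Z), g(Z), atol=1e-6)
```

Note that the Gram matrix of a Gaussian kernel becomes ill-conditioned when nodes get close to each other or *σ* grows; in practice the system (24) may require regularization.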

## **5.2 Use of kernel features and** *k***-DMD**

Let us go back to the parameterized dynamical system of interest (14) and consider a point cloud (*a*<sub>(*j*)</sub>)<sub>*j*=1</sub><sup>*m*</sup> of sample states in the admissible reduced state space X ⊂ ℝ<sup>*K*</sup>. The functions

$$k\_j(\mathfrak{a}) = k\left(\mathfrak{a}, \mathfrak{a}\_{(j)}\right) \tag{28}$$

can be seen as features and thus used as suitable nonlinear observables to approximate the Koopman operator. From any known full state vector *a<sub>μ</sub><sup>n</sup>* at time *t<sup>n</sup>*, we build the vector of observables

$$\kappa\left(a\_{\mu}^{n}\right) = \left(k\_{1}\left(a\_{\mu}^{n}\right), k\_{2}\left(a\_{\mu}^{n}\right), \dots, k\_{m}\left(a\_{\mu}^{n}\right)\right)^{T}.\tag{29}$$

By definition of the Koopman operator, we have *k<sub>i</sub>*(*a<sub>μ</sub><sup>n+1</sup>*) = (*k<sub>i</sub>* ∘ *F*)(*a<sub>μ</sub><sup>n</sup>*) = (K*k<sub>i</sub>*)(*a<sub>μ</sub><sup>n</sup>*). Then we look for a finite-dimensional approximation *A<sub>μ</sub>* of the Koopman operator K in the sense

$$\kappa\left(\mathfrak{a}\_{\mu}^{n+1}\right) \approx \mathsf{A}\_{\mu}\kappa\left(\mathfrak{a}\_{\mu}^{n}\right) \quad \forall n \in \{0, \ldots, N-1\} \tag{30}$$

The matrix *A<sub>μ</sub>* is sought as the minimizer of the least squares problem

$$\min\_{A \in \mathcal{M}\_m(\mathbb{R})} \quad \frac{1}{2} \|Y\_\mu - AX\_\mu\|\_F^2 \tag{31}$$

where the input and output data matrices are now

$$X\_{\mu} = \left[\kappa\left(a\_{\mu}^{0}\right), \kappa\left(a\_{\mu}^{1}\right), \ldots, \kappa\left(a\_{\mu}^{N-1}\right)\right], \quad Y\_{\mu} = \left[\kappa\left(a\_{\mu}^{1}\right), \kappa\left(a\_{\mu}^{2}\right), \ldots, \kappa\left(a\_{\mu}^{N}\right)\right].$$

We get the dynamical system *κ*(*a<sub>μ</sub><sup>n+1</sup>*) = *A<sub>μ</sub> κ*(*a<sub>μ</sub><sup>n</sup>*) with the specified initial condition *κ*(*a<sub>μ</sub><sup>0</sup>*).
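The feature-space identification can be sketched as follows. The two-dimensional toy map *F*, the trajectory, and the choice of centers among the snapshots are hypothetical illustration choices; the pseudoinverse yields the least squares minimizer of (31):

```python
import numpy as np

# Hypothetical nonlinear reduced dynamics a^{n+1} = F(a^n) in R^2.
F = lambda a: np.array([0.9 * a[0] + 0.1 * a[1] ** 2, 0.95 * a[1]])

# Trajectory snapshots a^0, ..., a^N.
N = 40
A = np.empty((2, N + 1))
A[:, 0] = [1.0, 1.0]
for n in range(N):
    A[:, n + 1] = F(A[:, n])

# Kernel features kappa(a) of Eq. (29); centers a_(j) = first m snapshots.
m, sigma = 10, 1.0
C = A[:, :m].T
kappa = lambda a: np.exp(-0.5 * np.sum((C - a) ** 2, axis=1) / sigma ** 2)

# Data matrices and the least squares minimizer of (31).
X = np.column_stack([kappa(A[:, n]) for n in range(N)])
Y = np.column_stack([kappa(A[:, n + 1]) for n in range(N)])
A_mu = Y @ np.linalg.pinv(X)

# One-step prediction residual in feature space, Eq. (30).
err = np.linalg.norm(A_mu @ kappa(A[:, 0]) - kappa(A[:, 1]))
```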

Let us emphasize that the computational variables are now the *κ*(*a<sub>μ</sub><sup>n</sup>*). But we still need the full state variables *a<sup>n</sup>* to determine the displacements or the capsule shapes. The full state can be retrieved, for example, by interpolation: taking *g<sub>i</sub>*(*a*) = *a* · *e<sub>i</sub>*, we get

$$\left(a\_{\mu}\right)\_{i}^{n+1} \approx \mathcal{I} g\_{i}\left(a\_{\mu}^{n+1}\right) = \sum\_{j=1}^{m} \left(\alpha\_{j}\right)\_{i} k\_{j}\left(a\_{\mu}^{n+1}\right) \tag{32}$$

with interpolation coefficient vectors (*α<sub>j</sub>*)<sub>*j*=1</sub><sup>*m*</sup> such that

$$a\_{(i)} = \sum\_{j=1}^{m} \alpha\_{j} k\_{j}\left(a\_{(i)}\right) \quad \forall i \in \{1, \ldots, m\} \tag{33}$$

(a linear system of dimension *m* × *K*). The coefficient vectors *α<sub>j</sub>* can be computed once and for all.
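This once-for-all computation amounts to a single symmetric *m* × *m* solve shared by all *K* state components. A minimal sketch, with hypothetical random centers and a Gaussian kernel:

```python
import numpy as np

# Hypothetical centers a_(j) in the reduced state space R^K.
rng = np.random.default_rng(3)
m, Kdim, sigma = 12, 3, 1.5
C = rng.standard_normal((m, Kdim))

# Feature map kappa(a) = (k_1(a), ..., k_m(a))^T, Gaussian kernel, Eq. (28).
kappa = lambda a: np.exp(-0.5 * np.sum((C - a) ** 2, axis=1) / sigma ** 2)

# Eq. (33): a_(i) = sum_j alpha_j k_j(a_(i)).  With G_ij = k_j(a_(i)) this is
# one symmetric m x m solve shared by all K components.
G = np.array([kappa(C[i]) for i in range(m)])
Alpha = np.linalg.solve(G, C).T          # column j is alpha_j (shape K x m)

# Full-state recovery as in Eq. (32): a = sum_j alpha_j k_j(a) at the centers.
a = C[4]
assert np.allclose(Alpha @ kappa(a), a, atol=1e-6)
```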

## **5.3 Including the full state vector and the constants into the features**

The *k*-DMD approach can be improved by including the full state vector *a<sub>μ</sub>* itself in the input feature vector. Moreover, if the kernel functions cannot exactly reproduce the constant functions, for accuracy reasons it is justified to add the constants to the features. Considering the augmented vector *η*(*a<sub>μ</sub>*) of observables


$$\eta(a\_{\mu}) = \begin{bmatrix} a\_{\mu} \\ \kappa(a\_{\mu}) \\ \mathbf{1} \end{bmatrix} \in \mathbb{R}^{K+m+1},\tag{34}$$

we look for a dynamical system in the form

$$a\_{\mu}^{n+1} = A\_{\mu} \eta \left( a\_{\mu}^{n} \right) \approx \mathbf{F}\_{\mu} \left( a\_{\mu}^{n} \right) \tag{35}$$

with a constant rectangular matrix *A<sub>μ</sub>* of size *K* × (*K* + *m* + 1) to identify. The first advantage is that the output vector is the full state itself. The second is that the number of entries of *A<sub>μ</sub>* is less than *m*<sup>2</sup> as soon as *K*(*K* + *m* + 1) ≤ *m*<sup>2</sup>; in this case, there are fewer coefficients to identify. The input and output matrices now are

$$X\_{\mu} = \left[ \eta \left( a\_{\mu}^{0} \right), \eta \left( a\_{\mu}^{1} \right), \dots, \eta \left( a\_{\mu}^{N-1} \right) \right], \quad Y\_{\mu} = \left[ a\_{\mu}^{1}, a\_{\mu}^{2}, \dots, a\_{\mu}^{N} \right]. \tag{36}$$

Assuming *N* ≥ *K* + *m* + 1, the rectangular matrix *A<sub>μ</sub>* solution of the least squares problem

$$\min\_{A \in \mathcal{M}\_{K, K+m+1}(\mathbb{R})} \quad \frac{1}{2} \|Y\_{\mu} - AX\_{\mu}\|\_{F}^{2} \tag{37}$$

is computed as

$$A\_{\mu} = Y\_{\mu}X\_{\mu}^{T}\left(X\_{\mu}X\_{\mu}^{T}\right)^{-1}.$$
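A sketch of this augmented variant on hypothetical toy data follows. The pseudoinverse is used to evaluate the closed-form solution robustly; it coincides with *Y<sub>μ</sub>X<sub>μ</sub><sup>T</sup>*(*X<sub>μ</sub>X<sub>μ</sub><sup>T</sup>*)<sup>−1</sup> when *X<sub>μ</sub>* has full row rank:

```python
import numpy as np

# Hypothetical mildly nonlinear reduced dynamics in R^2 (K = 2).
F = lambda a: np.array([0.9 * a[0] + 0.1 * a[1] ** 2, 0.95 * a[1]])
N = 30
A = np.empty((2, N + 1))
A[:, 0] = [1.0, 0.5]
for n in range(N):
    A[:, n + 1] = F(A[:, n])

# Augmented observables eta(a) of Eq. (34): state, kernel features, constant.
m, sigma = 6, 1.0
C = A[:, :m].T
kappa = lambda a: np.exp(-0.5 * np.sum((C - a) ** 2, axis=1) / sigma ** 2)
eta = lambda a: np.concatenate([a, kappa(a), [1.0]])   # size K + m + 1 = 9

# Data matrices of Eq. (36) and the solution of (37).
X = np.column_stack([eta(A[:, n]) for n in range(N)])
Y = A[:, 1:]
A_mu = Y @ np.linalg.pinv(X)   # = Y X^T (X X^T)^{-1} for full row rank X

# Relative training residual of the rectangular model (35).
rel = np.linalg.norm(Y - A_mu @ X) / np.linalg.norm(Y)
```

The output of the identified model is the full state directly, so no interpolation step back from feature space is needed.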

## **5.4 Summary and whole algorithm**

We give a summary of the model-order reduction algorithm:


$$\bar{\mathcal{T}}\_x = A^0 \otimes \mathbf{e}^N + \sum\_{k=1}^K \sum\_{\ell=1}^L \sum\_{m=1}^M a\_{k\ell m} \Phi\_k \otimes \Psi\_{\ell} \otimes w\_m \tag{38}$$

3. Online stage: choose a parameter vector *μ*. Compute the data coefficients *a<sub>μ</sub>*(*t<sup>n</sup>*) = *a<sub>μ</sub><sup>n</sup>* from Eq. (13) and the data (11). Choose a kernel function *k*(·, ·), choose *m* and the centers *a*<sub>(*j*)</sub>, *j* = 1, …, *m*. Assemble the observables *η*(*a<sub>μ</sub><sup>n</sup>*) and the matrices *X<sub>μ</sub>* and *Y<sub>μ</sub>* (Eq. (36)). Compute the *k*-DMD matrix *A<sub>μ</sub>* = *Y<sub>μ</sub>X<sub>μ</sub><sup>T</sup>*(*X<sub>μ</sub>X<sub>μ</sub><sup>T</sup>*)<sup>−1</sup>. Get the reduced-order dynamical system (35) with *a<sub>μ</sub><sup>0</sup>* = **0** as initial value.
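The online stage can be sketched end to end on hypothetical toy data. A linear two-dimensional map stands in for the true reduced dynamics, so an exact representation exists within the augmented features and the identified model should reproduce the trajectory almost exactly:

```python
import numpy as np

# Hypothetical linear toy dynamics standing in for the reduced model.
F = lambda a: np.array([0.9 * a[0] + 0.05 * a[1], -0.05 * a[0] + 0.9 * a[1]])
N = 25
A = np.empty((2, N + 1))
A[:, 0] = [1.0, -0.5]
for n in range(N):
    A[:, n + 1] = F(A[:, n])

# Online stage: kernel, centers, observables, data matrices, k-DMD fit.
m, sigma = 5, 1.0
C = A[:, :m].T
kappa = lambda a: np.exp(-0.5 * np.sum((C - a) ** 2, axis=1) / sigma ** 2)
eta = lambda a: np.concatenate([a, kappa(a), [1.0]])
X = np.column_stack([eta(A[:, n]) for n in range(N)])
Y = A[:, 1:]
A_mu = Y @ np.linalg.pinv(X)                 # k-DMD matrix of Eq. (37)

# Rollout of the identified model (35) from the initial state.
a = A[:, 0].copy()
traj = [a]
for n in range(N):
    a = A_mu @ eta(a)
    traj.append(a)
traj = np.column_stack(traj)

rel_err = np.linalg.norm(traj - A) / np.linalg.norm(A)
```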
