**2. Matrices, moments, quadrature, and PDE**

To provide context for the latest advancements with KSS methods, we begin with an overview of KSS methods as described in [3], applied to the 1-D parabolic PDE

$$u_t + Lu = 0, \quad 0 < x < 2\pi, \quad t > 0, \quad Lu \equiv -(p(x)u_x)_x + q(x)u,\tag{3}$$

$$u(x,0) = f(x), \quad 0 < x < 2\pi,\tag{4}$$

$$u(0,t) = u(2\pi, t), \quad t > 0, \tag{5}$$

where *p*(*x*) > 0 and *q*(*x*) ≥ 0 on [0, 2*π*], and *q*(*x*) is not identically zero. In KSS methods, each Fourier coefficient of the solution $\tilde{u}(x, t_{n+1})$ is obtained by applying the exact solution operator of the PDE to $\tilde{u}(x, t_n)$. For a given wave number *ω*, such a Fourier coefficient is given by

$$
\hat{u}(\omega, t_{n+1}) = \left\langle \frac{1}{\sqrt{2\pi}} e^{i\omega x}, e^{-L\Delta t} \tilde{u}(x, t_n) \right\rangle,\tag{6}
$$

where


$$\langle f, g \rangle = \int_0^{2\pi} \overline{f(x)}\, g(x)\, dx$$

is the standard inner product on [0, 2*π*], and $e^{-L\Delta t}$ is the solution operator of the PDE (3).

#### **2.1. Matrices, moments, and quadrature**



The spatial discretization of (6) yields the bilinear form

$$\mathbf{u}^H \boldsymbol{\varphi}(A)\mathbf{v},\tag{7}$$

where $\mathbf{u} = \frac{1}{\sqrt{2\pi}} e^{i\omega x}$ and $\mathbf{v} = \tilde{u}(x, t_n)$ are *N*-vectors, $A = L_N$, where $L_N$ is a spectral discretization of *L*, and $\varphi(\lambda) = e^{-\lambda \Delta t}$.
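As a purely illustrative example of such a discretization, the following sketch builds a dense Fourier spectral approximation of *L* on an *N*-point periodic grid using NumPy; the function name `spectral_discretization` and the sample coefficients are hypothetical, and this is only one simple way of forming $L_N$, not necessarily the construction used in [3].

```python
import numpy as np

def spectral_discretization(p, q, N):
    """Dense Fourier spectral discretization L_N of Lu = -(p(x)u_x)_x + q(x)u
    on the periodic grid x_j = 2*pi*j/N (one simple illustrative construction)."""
    x = 2 * np.pi * np.arange(N) / N
    k = np.fft.fftfreq(N, d=1.0 / N)                 # integer wave numbers
    F = np.fft.fft(np.eye(N), axis=0)                # DFT matrix
    D = np.fft.ifft(1j * k[:, None] * F, axis=0)     # spectral differentiation matrix
    # Hermitian, positive definite for p > 0 and q >= 0 (q not identically zero)
    return x, -D @ np.diag(p(x)) @ D + np.diag(q(x))

# Hypothetical smooth coefficients p(x) > 0 and q(x) >= 0
x, L_N = spectral_discretization(lambda x: 1 + 0.5 * np.sin(x),
                                 lambda x: 1 + 0.25 * np.cos(x), N=64)
```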

Because the Sturm-Liouville operator *L* is self-adjoint and positive definite, it follows that the matrix *A* is symmetric positive definite. As such, it has real eigenvalues $b = \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_N = a > 0$, and corresponding orthonormal eigenvectors $\mathbf{q}_j$, $j = 1, \ldots, N$. From the spectral decomposition of *A*, we obtain the following representation of (7):

$$\mathbf{u}^H \boldsymbol{\varphi}(\boldsymbol{A}) \mathbf{v} = \sum\_{j=1}^N \boldsymbol{\varphi}(\boldsymbol{\lambda}\_j) \mathbf{u}^H \mathbf{q}\_j \mathbf{q}\_j^H \mathbf{v}. \tag{8}$$

As described by Golub and Meurant in [12], (8) can be viewed as a Riemann-Stieltjes integral

$$\mathbf{u}^H \boldsymbol{\varphi}(A)\mathbf{v} = \int\_a^b \boldsymbol{\varphi}(\lambda)d\boldsymbol{\alpha}(\lambda),\tag{9}$$

where the measure *α*(*λ*) is defined by

$$\alpha(\lambda) = \begin{cases} 0, & \text{if } \lambda \le a, \\ \displaystyle\sum_{j=l}^{N} \mu_j \nu_j, & \text{if } \lambda_l \le \lambda < \lambda_{l-1}, \\ \displaystyle\sum_{j=1}^{N} \mu_j \nu_j, & \text{if } b \le \lambda, \end{cases} \qquad \mu_j = \mathbf{u}^H \mathbf{q}_j, \quad \nu_j = \mathbf{q}_j^H \mathbf{v}.$$

This allows approximation of (6) using Gaussian quadrature rules, where the nodes and weights are obtained by applying the Lanczos algorithm to *A* with initial vectors **u** and **v** [12].
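To illustrate this connection in the symmetric case $\mathbf{u} = \mathbf{v}$, the following sketch (assuming NumPy and SciPy, with a small random symmetric positive definite matrix standing in for *A*) runs *K* steps of the Lanczos algorithm and evaluates the resulting Gauss rule, which amounts to the well-known identity $\mathbf{u}^H \varphi(A)\mathbf{u} \approx \|\mathbf{u}\|_2^2\, \mathbf{e}_1^T \varphi(T_K)\mathbf{e}_1$.

```python
import numpy as np
from scipy.linalg import expm, eigh_tridiagonal

def lanczos(A, u, K):
    """K steps of the (unblocked) Lanczos algorithm applied to A with starting
    vector u; returns the diagonal and off-diagonal of the K-by-K tridiagonal
    matrix T_K.  (Sketch: no reorthogonalization.)"""
    alpha, beta = np.zeros(K), np.zeros(K - 1)
    q_prev, b_prev = np.zeros_like(u), 0.0
    q = u / np.linalg.norm(u)
    for k in range(K):
        w = A @ q - b_prev * q_prev
        alpha[k] = q @ w
        w = w - alpha[k] * q
        if k < K - 1:
            beta[k] = np.linalg.norm(w)
            q_prev, q, b_prev = q, w / beta[k], beta[k]
    return alpha, beta

# Small SPD stand-in for A; phi(lambda) = exp(-lambda*dt) as in (7)
rng = np.random.default_rng(0)
n, dt, K = 200, 0.1, 5
B = rng.standard_normal((n, n))
A = B @ B.T / n + np.eye(n)
u = rng.standard_normal(n)

exact = u @ expm(-dt * A) @ u                       # u^H phi(A) u
nodes, S = eigh_tridiagonal(*lanczos(A, u, K))      # Gauss nodes and eigenvectors
approx = (u @ u) * np.sum(S[0, :] ** 2 * np.exp(-dt * nodes))
print(exact, approx)
```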

**Figure 1** demonstrates why integrals of the form (9) can be approximated accurately with a small number of nodes in the case where *A* is a discretization of a differential operator and the vector **u** is used to extract a particular Fourier coefficient of *φ*(*A*)**v**. We examine the distribution *dα*(*λ*) in the case where $\mathbf{u} = \mathbf{v} = e^{i\omega x}$ for small and large values of *ω*, and for *A* discretizing a differential operator of the form $-\partial_x\, a(x)\, \partial_x$, with *a*(*x*) > 0 being either a smooth function or a piecewise constant function. In either case, *dα*(*λ*) is mostly concentrated within a portion of the interval of integration [*a*, *b*]. Gaussian quadrature rules for such integrals naturally target these relevant portions [3, 13].

**Figure 1.** The distribution *dα*(*λ*) from (9), where the matrix *A* represents a spectral discretization of a 1-D, second-order differential operator with smooth leading coefficient (top plot) and discontinuous leading coefficient (bottom plot), and where **u** = **v** is a discretization of $e^{2ix}$ (solid curve) or $e^{64ix}$ (dashed curve).

#### **Block Lanczos algorithm**

$$
\begin{aligned}
& X_0 = 0, \quad R_0 = \begin{bmatrix} \mathbf{u} & \mathbf{v} \end{bmatrix}, \quad R_0 = X_1 B_0 \ (QR\text{ factorization})\\
& \textbf{for } n = 1, 2, \ldots, K\\
& \qquad V = A X_n\\
& \qquad M_n = X_n^H V\\
& \qquad R_n = V - X_{n-1} B_{n-1}^H - X_n M_n\\
& \qquad R_n = X_{n+1} B_n \ (QR\text{ factorization})\\
& \textbf{end}
\end{aligned}
$$
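A direct transcription of this recursion (assuming NumPy and SciPy, with no reorthogonalization or breakdown handling, and with the hypothetical name `block_lanczos`) might look as follows; it returns $B_0$ and the block tridiagonal matrix $T_K$ of (12).

```python
import numpy as np
from scipy.linalg import qr

def block_lanczos(A, u, v, K):
    """K steps of block Lanczos applied to A with starting block R_0 = [u v].
    Returns the 2K-by-2K block tridiagonal matrix T_K of (12) and B_0 from the
    QR factorization R_0 = X_1 B_0.  (Sketch: no breakdown checks.)"""
    X, B0 = qr(np.column_stack([u, v]).astype(complex), mode='economic')
    n, b = X.shape                                    # block size b = 2
    T = np.zeros((b * K, b * K), dtype=complex)
    X_prev = np.zeros_like(X)
    B_prev = np.zeros((b, b), dtype=complex)
    for k in range(K):
        W = A @ X                                     # V = A X_n
        M = X.conj().T @ W                            # M_n = X_n^H V
        T[b*k:b*(k+1), b*k:b*(k+1)] = M
        R = W - X_prev @ B_prev.conj().T - X @ M      # R_n
        X_next, B = qr(R, mode='economic')            # R_n = X_{n+1} B_n
        if k < K - 1:
            T[b*(k+1):b*(k+2), b*k:b*(k+1)] = B       # subdiagonal block B_n
            T[b*k:b*(k+1), b*(k+1):b*(k+2)] = B.conj().T
        X_prev, X, B_prev = X, X_next, B
    return T, B0
```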

In the case where **u** ≠ **v**, the presence of a negative weight would destabilize the quadrature rule [14]. Alternatively, we consider the approximation of the 2 × 2 matrix integral

$$\begin{bmatrix} \mathbf{u} & \mathbf{v} \end{bmatrix}^H \boldsymbol{\varphi}(\boldsymbol{A}) \begin{bmatrix} \mathbf{u} & \mathbf{v} \end{bmatrix} . \tag{10}$$

We use the most general *K*-node quadrature formula, as described in [12], to obtain an approximation of (10) of the form


$$\int_a^b \rho(\lambda)\, d\mu(\lambda) = \sum_{j=1}^{2K} \rho(\lambda_j)\, \mathbf{v}_j \mathbf{v}_j^H + \text{error},\tag{11}$$

where, for each *j*, *λ<sup>j</sup>* is a scalar and **v***<sup>j</sup>* is a 2-vector. Each node *λ<sup>j</sup>* is an eigenvalue of the matrix

$$T\_K = \begin{bmatrix} M\_1 & B\_1^H \\ B\_1 & M\_2 & B\_2^H \\ & \ddots & \ddots & \ddots \\ & & B\_{K-1} & M\_K \end{bmatrix},\tag{12}$$

which is a block-tridiagonal matrix of order 2*K*. The vector $\mathbf{v}_j$ consists of the first two elements of the corresponding normalized eigenvector. The matrices $M_j$ and $B_j$ are computed using the block Lanczos algorithm [15], described above.
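Given $T_K$ and $B_0$ from the block Lanczos sketch above, the quadrature approximation (11) of the matrix integral (10) can be evaluated directly from the eigenpairs of $T_K$; a minimal sketch (NumPy assumed, hypothetical name `block_gauss_quadrature`):

```python
import numpy as np

def block_gauss_quadrature(T_K, B0, phi):
    """Evaluate the block Gauss rule (11): the nodes are the eigenvalues of T_K,
    and each weight v_j v_j^H is formed from the first two entries of the
    corresponding normalized eigenvector.  Returns the 2x2 approximation
    to [u v]^H phi(A) [u v]."""
    lam, Y = np.linalg.eigh(T_K)          # T_K is Hermitian block tridiagonal
    Vj = Y[:2, :]                         # the 2-vectors v_j
    Q = (Vj * phi(lam)) @ Vj.conj().T     # sum_j phi(lambda_j) v_j v_j^H
    return B0.conj().T @ Q @ B0           # account for R_0 = X_1 B_0

# Example: phi = lambda s: np.exp(-s * dt) corresponds to the solution operator of (3)
```

Since $\sum_j \varphi(\lambda_j)\mathbf{v}_j \mathbf{v}_j^H$ equals $E_{12}^H \varphi(T_K) E_{12}$ with $E_{12} = [\mathbf{e}_1 \ \mathbf{e}_2]$, this is precisely the quantity that appears, together with the $B_0$ factors, in (17) below.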

#### **2.2. Krylov subspace spectral methods**

The block KSS method [3, 4] for (3) begins by defining

$$R_0 = \begin{bmatrix} \hat{\mathbf{e}}_\omega & \mathbf{u}^n \end{bmatrix},\tag{13}$$

where $\hat{\mathbf{e}}_\omega$ is a discretization of $\frac{1}{\sqrt{2\pi}} e^{i\omega x}$ and $\mathbf{u}^n$ is a discretization of the approximate solution *u*(*x*, *t*) at time $t_n = n\Delta t$. Next, we compute the *QR* factorization of $R_0$,

$$R_0 = X_1(\omega) B_0(\omega),$$

which yields

$$X_1(\omega) = \begin{bmatrix} \hat{\mathbf{e}}_\omega & \dfrac{\mathbf{u}^n_\omega}{\|\mathbf{u}^n_\omega\|_2} \end{bmatrix}\tag{14}$$

and


$$B_0(\omega) = \begin{bmatrix} 1 & \hat{\mathbf{e}}_\omega^H \mathbf{u}^n \\ 0 & \|\mathbf{u}^n_\omega\|_2 \end{bmatrix},\tag{15}$$

where

$$\mathbf{u}^n_\omega = \mathbf{u}^n - \hat{\mathbf{e}}_\omega \hat{\mathbf{e}}_\omega^H \mathbf{u}^n = \mathbf{u}^n - \hat{\mathbf{e}}_\omega\, \hat{u}(\omega, t_n).\tag{16}$$

Then we apply the block Lanczos algorithm [15] to the matrix $L_N$, which comes from the discretization of *L*, with initial block $X_1(\omega)$. This produces a block tridiagonal matrix $T_K$ of the form (12), where every entry of $T_K$ is a function of *ω*. Then, at time $t_{n+1}$, each Fourier coefficient of the solution is

$$\left[\hat{\mathbf{u}}^{n+1}\right]_\omega = \left[ B_0(\omega)^H E_{12}^H \exp[-T_K(\omega)\Delta t]\, E_{12}\, B_0(\omega) \right]_{12}, \quad E_{12} \equiv \begin{bmatrix} \mathbf{e}_1 & \mathbf{e}_2 \end{bmatrix}.\tag{17}$$

An inverse FFT applied to the vector of Fourier coefficients $\hat{\mathbf{u}}^{n+1}$ yields the vector $\mathbf{u}^{n+1}$, which consists of the values of the approximate solution $u(x, t_{n+1})$ at the grid points.
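Putting these pieces together, a bare-bones block KSS time step for (3) might look like the following sketch (NumPy and SciPy assumed). It reuses the hypothetical `block_lanczos` routine from Section 2.1; for simplicity, the discrete Fourier basis vectors are normalized to be orthonormal in the Euclidean inner product, so the $1/\sqrt{2\pi}$ scaling is absorbed and the inverse FFT is written as an explicit sum over modes.

```python
import numpy as np
from scipy.linalg import expm

def kss_step(L_N, u_n, dt, K=2):
    """One block KSS time step for u_t + Lu = 0, following (13)-(17):
    each Fourier coefficient of the new solution is obtained from its own
    2K-by-2K block tridiagonal matrix produced by block Lanczos."""
    N = len(u_n)
    x = 2 * np.pi * np.arange(N) / N
    omegas = np.fft.fftfreq(N, d=1.0 / N)        # integer wave numbers
    E12 = np.eye(2 * K, 2)                       # E_12 = [e_1 e_2]
    u_next = np.zeros(N, dtype=complex)
    for w in omegas:
        e_w = np.exp(1j * w * x) / np.sqrt(N)    # orthonormal Fourier basis vector
        T_K, B0 = block_lanczos(L_N, e_w, u_n, K)            # (13)-(16), T_K(omega)
        G = B0.conj().T @ E12.T @ expm(-dt * T_K) @ E12 @ B0
        u_next += e_w * G[0, 1]                  # coefficient (17) times basis vector
    return u_next

# Example use (with the earlier spectral_discretization and block_lanczos sketches):
# x, L_N = spectral_discretization(p, q, 64);  u = f(x)
# for _ in range(10):  u = kss_step(L_N, u, dt=0.01)
```

Note that this naive sketch applies block Lanczos separately to every wave number; removing that per-frequency cost without sacrificing accuracy is the subject of the acceleration reviewed in Section 3.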

This algorithm has temporal accuracy $O(\Delta t^{2K-1})$ for parabolic problems [3]. Even higher-order accuracy, $O(\Delta t^{4K-2})$, is obtained for the second-order wave equation [4], due to the second-order time derivative. Furthermore, under appropriate assumptions on the coefficients of the PDE, the 1-node KSS method is unconditionally stable [3, 4]. More generally, the temporal order of accuracy is $O(\Delta t^{(2K-1)d})$, where *d* is the highest order of a time derivative in the PDE; this order of accuracy has been observed with the Schrödinger equation [16] and Maxwell's equations [18].
