**1. Introduction**

Consider an autonomous system of ordinary differential equations (ODEs) of the form

$$\mathbf{y}' = F(\mathbf{y}), \quad \mathbf{y}(t_0) = \mathbf{y}_0, \tag{1}$$

such as one that would naturally arise from spatial discretization of a partial differential equation (PDE). Such systems are generally stiff when the underlying PDE has significant diffusive or advective terms, and this stiffness is exacerbated by increasing the spatial resolution. Stiffness leads to significantly increased computational expense for both explicit and implicit time-stepping methods. For explicit methods, the time step is severely restricted by the CFL condition, while implicit methods require the solution of ill-conditioned systems of linear equations during each time step. Such systems are generally sparse and therefore best suited for iterative methods, but for these stiff systems of ODEs, iterative methods require many iterations or a specially developed preconditioner [1].
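As a concrete illustration of how refinement exacerbates stiffness (a sketch not taken from the paper; the one-dimensional model problem and Dirichlet boundary conditions are assumed for illustration), consider the standard second-order finite-difference Laplacian on (0, 1). The spread of its eigenvalues, and hence the stiffness of the resulting ODE system, grows like O(*n*²) with the number of grid points *n*:

```python
import numpy as np

def laplacian_1d(n):
    """Second-order finite-difference Laplacian on (0, 1) with
    homogeneous Dirichlet boundary conditions, on n interior points."""
    h = 1.0 / (n + 1)
    return (np.diag(-2.0 * np.ones(n)) +
            np.diag(np.ones(n - 1), 1) +
            np.diag(np.ones(n - 1), -1)) / h**2

# The ratio |lambda|_max / |lambda|_min grows like O(n^2) as the
# grid is refined, so the semidiscrete system becomes stiffer:
for n in [16, 64, 256]:
    lam = np.linalg.eigvalsh(laplacian_1d(n))
    print(n, abs(lam).max() / abs(lam).min())
```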

Exponential propagation iterative (EPI) methods, introduced by Tokman et al. [1, 2], are Runge-Kutta-like time-stepping methods that are designed to minimize the number of matrix function-vector products of the form **w** = *φ*(*Aτ*)**b**, where *φ* is a smooth function, *A* is a square matrix, *τ* is a parameter determined by the time step, and **b** is a column vector. The approach used by EPI methods to compute **w** for a given symmetric matrix *A* is to apply the Lanczos algorithm to *A* with the initial vector **b**, until we obtain a matrix *X<sub>j</sub>* with orthonormal columns and a tridiagonal matrix *T<sub>j</sub>* such that *X<sub>j</sub><sup>T</sup>AX<sub>j</sub>* = *T<sub>j</sub>*. Then, we can compute the approximation

$$\mathbf{w}_j = \|\mathbf{b}\|_2 \, X_j \, \varphi(T_j \tau) \, \mathbf{e}_1, \tag{2}$$

where **e**<sub>1</sub> = [1 0 ⋯ 0]<sup>*T*</sup>. Since each column **x**<sub>*k*</sub> of the matrix *X<sub>j</sub>* is of the form **x**<sub>*k*</sub> = *p*<sub>*k*−1</sub>(*A*)**b**, where *p*<sub>*n*</sub>(*A*) is a polynomial of degree *n* in *A*, **w**<sub>*j*</sub> is the product of a polynomial in *A* of degree *j* − 1 and **b**. Since the matrix *A* arises from a stiff PDE, the eigenvalues of *A* are distributed over several orders of magnitude. As it is generally not possible to accurately approximate *φ*(*λ*) on such a large interval with a low-degree polynomial, a large number of Lanczos iterations is generally necessary in order to obtain a sufficiently accurate approximation of **w**.
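The Lanczos-based approximation (2) can be sketched as follows (a minimal Python illustration assuming a dense symmetric *A*, with SciPy's `expm` standing in for *φ*; reorthogonalization and breakdown handling are omitted):

```python
import numpy as np
from scipy.linalg import expm

def lanczos_phi(A, b, tau, j, phi=expm):
    """Approximate w = phi(A*tau) b for symmetric A using j Lanczos steps:
    w_j = ||b||_2 * X_j phi(T_j tau) e_1, as in equation (2)."""
    n = b.size
    X = np.zeros((n, j))
    alpha = np.zeros(j)
    beta = np.zeros(j)                 # beta[0] is unused
    X[:, 0] = b / np.linalg.norm(b)
    for k in range(j):
        v = A @ X[:, k]
        alpha[k] = X[:, k] @ v         # diagonal entry of T_j
        v -= alpha[k] * X[:, k]
        if k > 0:
            v -= beta[k] * X[:, k - 1]
        if k + 1 < j:
            beta[k + 1] = np.linalg.norm(v)   # off-diagonal entry of T_j
            X[:, k + 1] = v / beta[k + 1]
    T = np.diag(alpha) + np.diag(beta[1:], 1) + np.diag(beta[1:], -1)
    # phi(T*tau)[:, 0] is phi(T_j tau) e_1
    return np.linalg.norm(b) * (X @ phi(T * tau)[:, 0])
```

With *j* equal to the dimension of *A*, the approximation reproduces *φ*(*Aτ*)**b** (up to rounding); the point of the discussion above is that for stiff *A*, an accurate result typically requires *j* to be impractically large.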

The difficulty that time-stepping methods have with stiffness stems from the fact that low- and high-frequency components of the solution, which change at widely varying speeds, are coupled and therefore must be evolved using a common time step. While the low-frequency components generally make the most significant contribution to the solution, the high-frequency components change most rapidly, and therefore constrain the time step. The greater the spatial resolution, the greater the bandwidth of the solution, thus constraining the time step even further.

This coupling, however, is not the only cause of the greater computational expense incurred by time-stepping methods applied to stiff systems. A key contributing factor is the use of the same polynomial or rational function to approximate all of these components of *φ*(*Aτ*)**b**, when such a function cannot effectively approximate *φ*(*λτ*) on a large interval except at high degree, which increases computational expense. An alternative is to use Krylov subspace spectral (KSS) methods [3, 4], which employ a component-wise approach to these matrix function-vector products. In KSS methods, each Fourier coefficient of the solution is computed using an interpolating polynomial with frequency-dependent interpolation points. This individualized approach to computing each Fourier component yields high-order accuracy and stability like that of implicit methods, even though KSS methods themselves are explicit.
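A small numerical illustration of this motivation (a hypothetical example, not from the paper): compare a single low-degree interpolant of *φ*(*λ*) = *e*<sup>*λ*</sup> forced to cover a wide, stiff spectrum against one tailored to a narrow band of eigenvalues, as the frequency-dependent approach permits:

```python
import numpy as np

def interp_error(f, nodes, eval_pts):
    """Maximum error of the polynomial interpolating f at the given nodes."""
    coeffs = np.polyfit(nodes, f(nodes), len(nodes) - 1)
    return np.max(np.abs(np.polyval(coeffs, eval_pts) - f(eval_pts)))

# One degree-3 polynomial covering the whole stiff spectrum [-100, 0]:
wide_err = interp_error(np.exp, np.linspace(-100.0, 0.0, 4),
                        np.linspace(-100.0, 0.0, 1000))

# A degree-3 polynomial tailored to a narrow band of eigenvalues near 0:
narrow_err = interp_error(np.exp, np.linspace(-2.0, 0.0, 4),
                          np.linspace(-2.0, 0.0, 1000))

print(wide_err, narrow_err)  # the wide-interval error is far larger
```

The same degree that is hopeless over the full spectrum is quite accurate on a narrow subinterval, which is the advantage the component-wise approach exploits.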

To date, KSS methods have been applied mostly to linear PDEs on *d*-dimensional boxes, for *d* = 1, 2, 3, with either periodic or homogeneous Dirichlet or Neumann boundary conditions. A first-order KSS method was applied to nonlinear diffusion equations for image processing by Guidotti et al. [5], but for that problem, a straightforward linearization was used during each time step. In order to compute solutions of nonlinear PDEs with higher-order accuracy in time, it is necessary to treat the nonlinearity more carefully. This can be accomplished by combining KSS methods with another high-order time-stepping method, such as EPI methods.

In this paper, this combination is presented, for the purpose of solving systems of ODEs of the form (1) that are obtained through spatial discretization of nonlinear PDEs defined on rectangular domains with periodic, homogeneous Dirichlet, or homogeneous Neumann boundary conditions. The proposed algorithm, first described in [6, 7], uses the component-wise approach of KSS methods to more efficiently compute matrix function-vector products of the form **w** = *φ*(*Aτ*)**b**. The following features distinguish the combined approach from previous work on either EPI or KSS methods:


**•** The frequency-dependent interpolation points are obtained from asymptotic analysis of the recursion coefficients produced by block Lanczos iteration, rather than by explicitly applying block Lanczos iteration to *A* for *each* Fourier component, without sacrificing accuracy [11].

**•** Once the frequency-dependent interpolation points are obtained, it is necessary to construct and apply frequency-dependent interpolating polynomial approximations *p<sub>ω</sub>*(*λ*) of *φ* to *A*, for each wave number *ω*; the Fourier coefficients of the solution are then inner products of the form *ê<sub>ω</sub><sup>H</sup> p<sub>ω</sub>*(*Aτ*)**b**, where *ê<sub>ω</sub>* is a discretization of an appropriate Fourier basis function. This paper provides implementation details for this task, and explains how it can be accomplished with as few Fourier transforms as possible [6, 7].

The outline of the paper is as follows. Section 2 gives an overview of KSS methods. Section 3 reviews their acceleration based on asymptotic analysis of recursion coefficients, first presented in [11], and discusses extension to non-self-adjoint operators. Section 4 provides a brief overview of EPI methods and demonstrates the spurious high-frequency oscillations that can occur when using standard Krylov projection within an EPI method. Section 5 describes how KSS and EPI methods are combined. Numerical results are presented in Section 6. Concluding remarks and directions for future work are given in Section 7.
