Wavelet Transform and Complexity

2.1.2 Function space discretization

Another class of techniques discretizes the function space $H(\Omega)$ by approximating it with an n-dimensional space $H_n$; that is, the unknown function u is approximated as

$$u \approx \sum_{i=1}^{n} \alpha_i b_i, \qquad \alpha_i \in \mathbb{R} \tag{6}$$

where $\{b_i\}_{i=1}^{n}$ is a basis of $H_n$.

By exploiting approximation (6), one can transform PDE (1)–(3) into a finite-dimensional problem. The different solution techniques differ in how (6) is used and in the way of choosing the space $H_n$ and its basis $\{b_i\}_{i=1}^{n}$.

One possibility is to choose functions $b_i$ that are infinitely differentiable and nonvanishing on the whole $\Omega$. This gives rise to the so-called spectral methods. Typical choices for the basis functions are complex exponential/sinusoidal functions (if the solution is expected to be periodic), Chebyshev polynomials (for separable domains, e.g., d-dimensional cubes), and spherical harmonics (for systems with spherical symmetry). Spectral methods can work very well if the solution is expected to be smooth; they can even converge exponentially fast. However, their spatial localization is poor, and if the functions involved are not smooth (e.g., they are discontinuous), they lose most of their interest.

Another, very popular, approach is the finite element method (FEM), which chooses the functions $b_i$ by first partitioning the domain $\Omega$ into a set of elements (triangles and their multidimensional counterparts are a popular choice) and assigning to every element a suitable finite-dimensional vector space. The final approximation of u is constructed in a piecewise fashion by gluing, so to say, the approximations of u over every single element.

In a typical implementation of FEM, all the elements are affine images of a single reference element $T^0$. This simplifies the implementation since it suffices to choose only the vector space of the reference element $T^0$. Another popular choice is to take as the space associated with the elements a space of polynomials. The basis is selected by choosing a set of control points $q_1, q_2, \ldots \in T^0$ and choosing as basis vectors $b_i$ the polynomials that satisfy the interpolation property

$$b_i(q_j) = \delta_{i,j} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases} \tag{7}$$

Remark 2.1 (generalized collocation method).
A generalization of this idea is to choose a set of functionals $\sigma_j$ mapping functions defined over $T^0$ to $\mathbb{R}$ and requiring

$$\sigma_j(b_i) = \delta_{i,j} \tag{8}$$

Eq. (8) gives back (7) if $\sigma_j$ is defined as the functional that evaluates its argument at $q_j$. Eq. (8) is, however, more general than (7) since it can be used, for example, to control the flow through a face of the element.

An issue with FEM is that creating the grid of elements can be expensive. This is especially true in those problems where the geometry is not fixed but needs to be updated. An example of this type of system is free-surface fluid flow, where the interface between air and fluid changes with time, requiring a continuous update of the mesh. In order to solve this problem, meshless methods have been developed.

2.1.3 Exploiting the discretization
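As a concrete sketch of the interpolation property (7), the fragment below builds, on a one-dimensional reference element, polynomials $b_i$ that equal 1 at their own control point and 0 at the others. The element $[0,1]$, the control points, and the polynomial degree are illustrative choices, not prescribed by the text.

```python
# Sketch: polynomial basis on a 1-D reference element T0 = [0, 1]
# satisfying the interpolation property (7): b_i(q_j) = delta_ij.
# Control points and degree are illustrative choices.
import numpy as np

q = np.array([0.0, 0.5, 1.0])            # control points q_j in T0
n = len(q)

# Solve V C = I with V[j, k] = q_j**k: column i of C holds the monomial
# coefficients (lowest degree first) of the basis polynomial b_i.
V = np.vander(q, n, increasing=True)
C = np.linalg.solve(V, np.eye(n))

def b(i, x):
    """Evaluate the basis polynomial b_i at x."""
    return np.polyval(C[::-1, i], x)     # np.polyval wants highest degree first

# Verify the interpolation property (7): B should be the identity matrix.
B = np.array([[b(i, qj) for qj in q] for i in range(n)])
```

For this choice of points, the $b_i$ are exactly the quadratic Lagrange polynomials of classical $P_2$ elements.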

After expressing u as a linear combination of the $b_i$, we are left with the problem of determining the coefficients of the combination. Several approaches are possible; the easiest way to present them briefly is to rewrite the differential equation as

$$\mathcal{R}u \coloneqq \mathcal{D}u - f = 0 \tag{10}$$

where the operator $\mathcal{R} : H(\Omega) \to H(\Omega)$ is called the residual.

If we restrict u to be a linear combination of the $b_i$, most probably we will not be able to make the residual (10) exactly zero; therefore, we aim to make it as small as possible. Since the result of the residual operator is a function, there are many possible approaches to minimizing it.

With the collocation approach, we choose a number of points of the domain $p_1, p_2, \ldots, p_n \in \Omega$ and require that the residual vanishes at the chosen points, that is,

$$0 = [\mathcal{R}u](p_j) = [\mathcal{D}u](p_j) - f(p_j) \qquad j = 1, \ldots, n \tag{11}$$

Eq. (11) represents a system of n equations in the unknown coefficients $\alpha_i$, $i = 1, \ldots, n$. For example, if $\mathcal{D}$ is linear, (11) becomes

$$f(p_j) = \left[\mathcal{D}\sum_{i=1}^n \alpha_i b_i\right](p_j) = \sum_{i=1}^n \alpha_i \left[\mathcal{D}b_i\right](p_j) = \sum_{i=1}^n \alpha_i A_{j,i} \qquad j = 1, \ldots, n \tag{12}$$

where, clearly, $A_{j,i} = [\mathcal{D}b_i](p_j)$. Note that (12) is a linear system in the unknowns $\alpha_i$.
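The assembly of the collocation system (12) can be sketched in a few lines. Everything concrete below is an illustrative choice, not taken from the text: the operator $\mathcal{D}u = u' + u$, the monomial basis, the collocation points, and f, which is chosen so that the exact solution $x^2$ lies in $H_n$.

```python
# A minimal sketch of the collocation system (12).
# D u = u' + u, monomial basis, points p_j, and f are illustrative choices.
import numpy as np

n = 3                                         # dimension of H_n

def Db(i, x):
    """[D b_i](x) for b_i(x) = x**i, i = 0..n-1, with D u = u' + u."""
    return (i * x**(i - 1) if i > 0 else 0.0) + x**i

f = lambda x: x**2 + 2 * x                    # chosen so that u(x) = x**2 solves D u = f
p = np.array([0.0, 0.5, 1.0])                 # collocation points p_j

# Assemble A[j, i] = [D b_i](p_j) and solve (12): sum_i alpha_i A[j, i] = f(p_j).
A = np.array([[Db(i, pj) for i in range(n)] for pj in p])
alpha = np.linalg.solve(A, f(p))
# alpha ≈ [0, 0, 1], i.e., the recovered solution is u(x) = x**2
```

Note that a first-order operator was picked on purpose: with $\mathcal{D} = d^2/dx^2$ and no boundary conditions, the matrix A would be singular because constants and linear functions are annihilated.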

Remark 2.2.
With reference to Remark 2.1, one can generalize the collocation method by using a set of linear functionals $\sigma_j : H(\Omega) \to \mathbb{R}$. In this case one obtains a generalized version of (12), namely,

$$\sigma_j f = \sum_{i=1}^{n} \alpha_i \underbrace{\sigma_j(\mathcal{D}b_i)}_{A_{j,i}} \qquad j = 1, \ldots, n \tag{13}$$
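System (13) can be sketched with $\sigma_j$ chosen as the integral over the j-th subinterval of $[0,1]$, in the spirit of "controlling the flow" through part of an element. The operator $\mathcal{D}u = u' + u$, the monomial basis, f, and the midpoint quadrature are all illustrative choices.

```python
# Sketch of the generalized collocation system (13), with sigma_j the
# integral over the j-th cell of [0, 1]. D u = u' + u, basis, and f are
# illustrative; f is chosen so the exact solution x**2 lies in H_n.
import numpy as np

n = 3

def Db(i, x):
    """[D b_i](x) for b_i(x) = x**i with D u = u' + u."""
    return (i * x**(i - 1) if i > 0 else 0.0) + x**i

f = lambda x: x**2 + 2 * x

def sigma(j, g):
    """sigma_j(g): integral of g over [j/n, (j+1)/n], composite midpoint rule."""
    m = 1000
    xs = j / n + (np.arange(m) + 0.5) / (n * m)   # midpoints of m subintervals
    return sum(g(x) for x in xs) / (n * m)

# Assemble A[j, i] = sigma_j(D b_i) and the right-hand side sigma_j(f).
A = np.array([[sigma(j, lambda x: Db(i, x)) for i in range(n)] for j in range(n)])
rhs = np.array([sigma(j, f) for j in range(n)])
alpha = np.linalg.solve(A, rhs)
# alpha ≈ [0, 0, 1]: the recovered solution is again u(x) = x**2
```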

Another approach is to solve $\mathcal{R}u = 0$ in a least squares sense, that is, to search for the coefficients $\alpha_i$ that minimize

$$\|\mathcal{R}u\|^2 = \langle \mathcal{R}u, \mathcal{R}u \rangle \tag{14}$$

Standard algebra shows that (14) is minimized when $\mathcal{R}u$ is orthogonal to $\partial \mathcal{R}u / \partial \alpha_i$ for every i, that is,

$$\left\langle \mathcal{R}u, \frac{\partial \mathcal{R}u}{\partial \alpha_i} \right\rangle = 0 \qquad i = 1, \ldots, n \tag{15}$$
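A discrete stand-in for the least squares idea (14): sample the residual at more points than there are unknowns and minimize the resulting sum of squares. Replacing the $L^2$ inner product by a plain sum over sample points is a simplification for illustration only, as are the operator $\mathcal{D}u = u' + u$, the monomial basis, and f.

```python
# Discrete least squares sketch of (14): minimize the sum of squared
# residual samples over the coefficients alpha. All concrete choices
# (operator D u = u' + u, basis, f, sample grid) are illustrative.
import numpy as np

n = 3

def Db(i, x):
    """[D b_i](x) for b_i(x) = x**i with D u = u' + u."""
    return (i * x**(i - 1) if i > 0 else 0.0) + x**i

f = lambda x: x**2 + 2 * x                    # exact solution u(x) = x**2 lies in H_n
xs = np.linspace(0.0, 1.0, 20)                # m = 20 > n sample points

# Overdetermined system: minimize || A alpha - f(xs) ||^2 over alpha.
A = np.array([[Db(i, x) for i in range(n)] for x in xs])
alpha, *_ = np.linalg.lstsq(A, f(xs), rcond=None)
# the residual can be driven to zero here, so alpha ≈ [0, 0, 1]
```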

If $\mathcal{D}$ is linear,

$$\frac{\partial \mathcal{R}u}{\partial \alpha_j} = \frac{\partial}{\partial \alpha_j}\left[\mathcal{D}\sum_{i=1}^{n} \alpha_i b_i - f\right] = \mathcal{D}b_j \tag{16}$$

and we get

$$\begin{aligned} 0 &= \left\langle \mathcal{R}u, \frac{\partial \mathcal{R}u}{\partial \alpha_j} \right\rangle \\ &= \left\langle \mathcal{D}\sum_{i=1}^n \alpha_i b_i - f,\ \mathcal{D}b_j \right\rangle \\ &= \sum_{i=1}^n \alpha_i \langle \mathcal{D}b_i, \mathcal{D}b_j \rangle - \langle f, \mathcal{D}b_j \rangle \end{aligned} \tag{17}$$

which is still a linear system.

The Galerkin method is inspired by the idea that in a least squares approximation, the error is orthogonal to the space where the approximating function lives. We would like to approximate the solution of the PDE with a vector of $H_n$; however, we do not know the solution, so we ask for the residual to be orthogonal to $H_n$, that is,

$$\langle \mathcal{R}u, v \rangle = 0 \qquad \forall v \in H_n \tag{18}$$

Eq. (18) is equivalent to

$$\langle \mathcal{D}u, v \rangle = \langle f, v \rangle \qquad \forall v \in H_n \tag{19}$$

which can be interpreted as the original differential equation $\mathcal{D}u = f$ in weak form. Form (19) is often exploited by integrating by parts the left-hand side scalar product, moving one differentiation from the unknown function u to the test function v. This is often useful when a piecewise linear approximation is employed and $\mathcal{D}$ contains second-order differential operators (which cannot be applied to piecewise linear functions). Eq. (19) is verified for all $v \in H_n$ if and only if it is verified for every vector in a basis of $H_n$; that is, (19) is equivalent to

$$\langle \mathcal{D}u, b_j \rangle = \langle f, b_j \rangle \qquad j = 1, \ldots, n \tag{20}$$

If $\mathcal{D}$ is linear, from (20) one can easily derive the linear system in the $\alpha_i$

$$\langle f, b_j \rangle = \sum_{i=1}^{n} \alpha_i \langle \mathcal{D}b_i, b_j \rangle \qquad j = 1, \ldots, n \tag{21}$$

Finally, it is worth citing the method of weighted residuals, which can be seen as a generalization of the Galerkin method. The idea is that instead of asking for the residual to be orthogonal to the space $H_n$ used to approximate u, we ask for the residual to be orthogonal to a different n-dimensional space $K_n = \operatorname{span}\{\beta_1, \ldots, \beta_n\}$, where $\{\beta_i\}_{i=1}^{n}$ is clearly a basis of $K_n$. One obtains

$$\langle f, \beta_j \rangle = \sum_{i=1}^{n} \alpha_i \langle \mathcal{D}b_i, \beta_j \rangle \qquad j = 1, \ldots, n \tag{22}$$

Remark 2.3.
It is worth observing that from the weighted residual method, the Galerkin and least squares methods can be derived by a suitable choice of the $\beta_i$; even the collocation method can be derived if we allow $\beta_i$ to be a delta function (so that the scalar product needs to be interpreted as a distribution pairing). Moreover, since for every v the map $x \mapsto \langle x, v \rangle$ is a functional, it is easy to recognize that every method can be considered a generalized collocation method, as described in Remark 2.1.

Wavelets for Differential Equations and Numerical Operator Calculus
DOI: http://dx.doi.org/10.5772/intechopen.82820

3. Wavelets

The idea of multiresolution analysis is to approximate vectors of $L^2(\mathbb{R})$ with variable degrees of resolution. This is achieved through a multiresolution analysis scheme defined by means of some axioms. The first axiom is the existence of a sequence $\{V_n\}_{n \in \mathbb{Z}}$ of subspaces of $L^2(\mathbb{R})$ nested one inside the other, that is,

$$\cdots \subset V_{-2} \subset V_{-1} \subset V_0 \subset V_1 \subset V_2 \subset \cdots \tag{23}$$

The idea is that if one approximates (in a least squares sense) a function f with vectors belonging to $V_n$, the approximation error gets smaller as n increases, since every vector of $V_n$ also belongs to $V_{n+1}$. Note, however, that (23) does not grant that we will be able to approximate f with an error as small as desired; in order to grant this, we need another axiom,

$$\overline{\bigcup_{n \in \mathbb{Z}} V_n} = L^2(\mathbb{R}) \tag{24}$$

where the overline denotes set closure (in the topology induced by the norm on $L^2(\mathbb{R})$). Axiom (24) requires that every vector of $L^2(\mathbb{R})$ is in the closure of the union on the left-hand side; this means that given any $\epsilon > 0$ and $f \in L^2(\mathbb{R})$, it is possible to find an element of the union whose distance from f is less than $\epsilon$. In other words, (24) means that whatever $f \in L^2(\mathbb{R})$ and whatever the chosen maximum allowed approximation error $\epsilon$, one can find a space $V_n$ that approximates f with the required precision.

An axiom dual to (24) is

$$\bigcap_{n \in \mathbb{Z}} V_n = \{0\} \tag{25}$$

which requires that there is only one "lowest resolution vector," that is, the null vector.

Remark 3.1.
In order to see that axiom (24) is not obvious, it is more convenient to work with the Hilbert space $L^2([0,1])$. Recall that the functions $x \mapsto \cos(2\pi n x)$, $x \mapsto \sin(2\pi n x)$, $n \in \mathbb{N}$, and the constant 1 are an orthogonal basis of $L^2([0,1])$. Define $S_0 = \{\sin(2\pi(2k)t);\ k \in \mathbb{N}\}$ as the set of all the even-numbered sines, and define $V_0$ as the space generated by $S_0$, that is,

$$V_0 := \operatorname{span} S_0 \tag{26}$$

Now define the spaces $V_n$, $n < 0$, by removing one vector at a time from the basis of $V_0$, and the spaces $V_n$, $n > 0$, by adding one odd harmonic at a time. More precisely, define
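The nesting axiom (23) and the completeness axiom (24) can be illustrated numerically with the classical Haar-type multiresolution, in which $V_n$ consists of functions that are constant on each of $2^n$ dyadic subintervals. This is a standard textbook example, not the sine construction of Remark 3.1; the restriction to $[0,1]$, the test function, and the sampling grid are illustrative choices.

```python
# Illustration of axioms (23)-(24) with Haar-type spaces: V_n = piecewise
# constants on 2**n dyadic intervals of [0, 1]. Test function and grid
# are illustrative choices ([0, 1] instead of the text's L2(R) keeps the
# computation finite).
import numpy as np

f = lambda x: np.sin(2 * np.pi * x)
xs = np.linspace(0.0, 1.0, 2**12, endpoint=False)
fx = f(xs)

def project(fx, n):
    """Discrete L2 projection onto V_n: average over each dyadic cell."""
    cells = fx.reshape(2**n, -1)          # samples grouped by cell
    return np.repeat(cells.mean(axis=1), cells.shape[1])

# Since V_n is contained in V_{n+1}, the projection error is decreasing
# in n and tends to zero as the resolution grows.
errors = [np.sqrt(np.mean((fx - project(fx, n)) ** 2)) for n in range(6)]
```

The computed errors shrink roughly by a factor of 2 per level, the rate expected for piecewise-constant approximation of a smooth function.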
