#### **2. Differential equations associated with linear systems of algebraic equations**

We start with simple vector differential equations whose solutions are related to the solution of general systems of linear algebraic equations. Later, in Section 5 we construct differential equations that allow an efficient numerical integration in order to obtain the solution of these systems.

#### **2.1 Matrix series solution of some vector differential equations**

Consider a first order vector differential equation of the form

$$\frac{dx}{dv} = f(v)(Ax - b) \tag{1}$$


where $A \in \mathbb{R}^{n\times n}$ is a general nonsingular matrix, $b \in \mathbb{R}^{n}$ is a given $n$-dimensional vector, $x = (x_1, x_2, \ldots, x_n)^T$ is the unknown $n$-dimensional vector, $T$ indicates the transpose, and $f(v)$ is a continuous function over some interval of the real variable $v$. With the condition

$$x(v_o) = x_o \tag{2}$$

$x_o$ being a given vector, (1) has a unique solution over the interval considered [4]. Integrating both sides of (1) from $v_o$ to $v$ yields

$$Ax(v) - b = e^{A[g(v) - g(v_o)]}(Ax_o - b) \tag{3}$$

where

$$g(v) \equiv \int f(v)\,dv \tag{4}$$

is a primitive for $f(v)$, i.e., $f(v) = dg/dv$. Thus, (1) can be written in the form

$$\frac{dx}{dv} = f(v)\,e^{A[g(v) - g(v_o)]}(Ax_o - b) \tag{5}$$

Integrating again both sides from *vo* to *v* gives

$$x(v) = x_o + e^{-Ag(v_o)}\sum_{k=0}^{\infty}\frac{A^k}{(k+1)!}\Big[\left(g(v)\right)^{k+1} - \left(g(v_o)\right)^{k+1}\Big](Ax_o - b) \tag{6}$$

Two particular cases are presented.

*New Matrix Series Formulae for Matrix Exponentials and for the Solution of Linear Systems… DOI: http://dx.doi.org/10.5772/intechopen.89342*

If $f(v)$ is chosen to be $f(v) = -1$ then $g(v) = -v$. Choosing $v_o = 0$ gives $g(v_o) = 0$ and (6) becomes

$$x(v) = e^{-Av}x_o + v\sum_{k=0}^{\infty}(-1)^k\frac{(Av)^k}{(k+1)!}\,b \tag{7}$$

If $f(v) = 1/v$ then $g(v) = \ln v$. With $v_o = 1$ and $g(v_o) = 0$, (6) now becomes

$$x(v) = x_o + (\ln v)\sum_{k=0}^{\infty}\frac{(A\ln v)^k}{(k+1)!}(Ax_o - b) \tag{8}$$

The solution of (1) with $f(v) = 1/v$ can also be obtained by employing a Taylor series expansion about $v_o = 1$. Indeed,

$$x(v) = x(1) + \frac{(v-1)}{1!}x^{(1)}(1) + \frac{(v-1)^2}{2!}x^{(2)}(1) + \cdots + \frac{(v-1)^k}{k!}x^{(k)}(1) + \cdots \tag{9}$$

and evaluating the derivatives yields

$$x(v) = x_o - (1-v)\left[I + \sum_{k=1}^{\infty}\frac{(1-v)^k}{k+1}(I-A)\left(I-\frac{A}{2}\right)\cdots\left(I-\frac{A}{k}\right)\right](Ax_o - b) \tag{10}$$

where $I$ is the identity matrix. The series in (10) converges for $v \in (0, 2)$, but its rate of convergence is, in general, very low for $v$ close to zero.
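As a quick numerical check, the bracketed series in (10) can be evaluated with a running matrix product. The sketch below (Python, with made-up test data) compares it against the closed-form solution $x(v) = A^{-1}\big(b + e^{A\ln v}(Ax_o - b)\big)$ that follows from (3) with $g(v) = \ln v$:

```python
import numpy as np
from scipy.linalg import expm

def x_series(A, b, x0, v, n_terms=200):
    """Evaluate the series (10): x(v) = x0 - (1-v)[I + sum ...](A x0 - b)."""
    r = A @ x0 - b
    prod = r.copy()          # running product (I - A)(I - A/2)...(I - A/k) r
    acc = r.copy()           # k = 0 contribution is the identity term
    for k in range(1, n_terms + 1):
        prod = prod - (A @ prod) / k          # multiply by (I - A/k)
        acc = acc + (1 - v) ** k / (k + 1) * prod
    return x0 - (1 - v) * acc

# small positive definite example (hypothetical test data)
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T / 10 + np.eye(4)
b = rng.standard_normal(4)
x0 = rng.standard_normal(4)

v = 0.5
# exact solution of dx/dv = (1/v)(Ax - b): A x(v) - b = v^A (A x0 - b)
x_exact = np.linalg.solve(A, b + expm(A * np.log(v)) @ (A @ x0 - b))
print(np.allclose(x_series(A, b, x0, v), x_exact))   # True
```

Since all the factors $(I - A/k)$ commute, the running product can be applied directly to the vector $Ax_o - b$, so each term costs one matrix-vector multiplication.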

#### **2.2 Relationship with linear systems of algebraic equations**

Consider now a system of equations written in matrix form as

$$Ax = b \tag{11}$$

and assume that $x$ is a continuous function of the real variable $v$ over a certain interval. The solution of (11) can be obtained from (3) for a $v = v_S$ for which

$$Ax(v_S) - b = 0 \tag{12}$$

To satisfy this condition, $g(v)$ in (3) must be chosen such that

$$e^{A[g(v_S) - g(v_o)]} = 0 \tag{13}$$

For positive definite matrices $A$ and a finite $g(v_o)$ this is achieved if $g(v) \to -\infty$ for $v \to v_S$. Thus, the solution of (11) cannot be computed directly from (6). On the other hand, in the particular case of $f(v) = 1/v$, $g(v) = \ln v$, the solution of (11) can be obtained from (10) with $v = v_S = 0$,

$$x_S = x_o - \left[I + \sum_{k=1}^{\infty}\frac{1}{k+1}(I-A)\left(I-\frac{A}{2}\right)\cdots\left(I-\frac{A}{k}\right)\right](Ax_o - b) \tag{14}$$

The expression in the brackets is just the inverse of *A*,


$$A^{-1} = I + \sum_{k=1}^{\infty}\frac{1}{k+1}(I-A)\left(I-\frac{A}{2}\right)\cdots\left(I-\frac{A}{k}\right) \tag{15}$$
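The partial sums of (15) can be evaluated with a short script. The sketch below (Python, a hypothetical $2\times 2$ example whose eigenvalues lie inside $(0, 2)$, where the series converges) shows how slowly the error decays with the number of terms:

```python
import numpy as np

def inv_series(A, n_terms):
    """Partial sum of (15): I + sum_{k=1}^{N} (I-A)(I-A/2)...(I-A/k)/(k+1)."""
    I = np.eye(A.shape[0])
    prod = I.copy()
    acc = I.copy()
    for k in range(1, n_terms + 1):
        prod = prod - prod @ A / k      # multiply the running product by (I - A/k)
        acc = acc + prod / (k + 1)
    return acc

A = np.array([[1.0, 0.3], [0.3, 0.8]])  # eigenvalues ~0.58 and ~1.22, inside (0, 2)
exact = np.linalg.inv(A)
errors = [np.abs(inv_series(A, N) - exact).max() for N in (10, 100, 1000)]
print(errors)   # decays roughly like N^(-lambda_min): very slow
```

Even a thousand terms leave an error on the order of $10^{-2}$ here, which illustrates why (14) and (15) are not usable in practice for arbitrary matrices.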


The rate of convergence of the series in (14) and (15) is very low, and they cannot practically be used for numerical computation with arbitrary matrices.

We note that the solution of (11) can be formally expressed from (6) as

$$x_S = x_o + e^{Ag(v_o)}\left(e^{Ag(v_o)} - e^{Ag(v)}\right)^{-1}\left(x(v) - x_o\right) \tag{16}$$

which is valid for any $v \neq v_o$ in the interval considered. To compute $x_S$ with (16) would require the computation of $\left(e^{Ag(v_o)} - e^{Ag(v)}\right)^{-1}$. Equation (16) will be used in Subsection 4.1.

**Note.** The matrix product in the terms of the series in (10), (14) and (15) can be expressed as a matrix polynomial using the relationship with the Stirling numbers of the first kind $S_{k+1}^{(m)}$ [5]

$$(I - A) \left( I - \frac{A}{2} \right) \cdots \left( I - \frac{A}{k} \right) = (-1)^k \frac{1}{k!} \sum\_{m=1}^{k+1} S\_{k+1}^{(m)} A^{m-1} \tag{17}$$

While each new term in series such as those in (10), (14) and (15) is calculated through a multiplication with a matrix that becomes better and better conditioned as $k$ increases, computation with the expression in (17) would require successive multiplications with the same original matrix and, for each $k$, a new polynomial would have to be constructed and new Stirling numbers generated. The formulae derived in the next two sections contain the same type of series and are therefore simpler and more efficient for numerical computation.

#### **3. New formulae for computing matrix exponentials**

In this section, we derive matrix exponential expressions which contain highly convergent infinite series that allow accurate and stable numerical computations in numerous applications. They shall also be used in the next section.

#### **3.1 Series expressions for matrix exponentials**

Consider the matrix function $v^A \equiv e^{A\ln v}$, where $v$ is a positive real variable and $A$ a general square matrix with real entries. By integration,

$$\int_1^v v^{A-I}\,dv = \frac{v^{A}}{A}\bigg|_{v=1}^{v} = A^{-1}\left(e^{A\ln v} - I\right) \tag{18}$$

For $v \in (0, 2)$ the integrand can be expanded in a power series as

$$\begin{split}
v^{A-I} &= \left[1-(1-v)\right]^{A-I} = I - \frac{(1-v)}{1!}(A-I) + \frac{(1-v)^2}{2!}(A-I)(A-2I)\\
&\quad - \frac{(1-v)^3}{3!}(A-I)(A-2I)(A-3I) + \cdots
\end{split} \tag{19}$$


that can be integrated term by term. From (18) and (19)

$$e^{A\ln v} = I - (1-v)A\left[I + \sum_{k=1}^{\infty}\frac{(1-v)^k}{k+1}(I-A)\left(I-\frac{A}{2}\right)\cdots\left(I-\frac{A}{k}\right)\right] \tag{20}$$

which is valid for any $A$, positive definite or not, of arbitrary condition number. One can see that, for a positive definite $A$ and for $v \to 0$, the expression in the brackets of (20) gives a series expansion for the inverse $A^{-1}$ (see (14) and (15)). Various expressions for the matrix exponential are obtained by giving particular values to $v$ in (20). For example, for any $v = e^{-q}$, $q > -\ln 2$, we have

$$e^{-qA} = I - (1-e^{-q})A\left[I + \sum_{k=1}^{\infty}\frac{(1-e^{-q})^k}{k+1}(I-A)\left(I-\frac{A}{2}\right)\cdots\left(I-\frac{A}{k}\right)\right] \tag{21}$$

the series becoming less convergent as $q$ increases above $q = 0$. On the other hand, with $v = 1/e$ in (20) and then $A$ replaced with $qA$, we obtain, for any real number $q$,

$$e^{-qA} = I - (1-e^{-1})\,qA\left[I + \sum_{k=1}^{\infty}\frac{(1-e^{-1})^k}{k+1}(I-qA)\left(I-\frac{qA}{2}\right)\cdots\left(I-\frac{qA}{k}\right)\right] \tag{22}$$

this series being more convergent than that in (21) for greater values of *q*.

For any $v \in (1, 2)$, the terms in the series expressions derived from (20) have coefficients that are alternately positive and negative. With $v = e^{1/2} = 1.64872127\ldots$, e.g., and then replacing $A$ with $2A$, we have

$$e^{A} = I + 2\left(e^{1/2}-1\right)A\left[I + \sum_{k=1}^{\infty}\frac{(-1)^k\left(e^{1/2}-1\right)^k}{k+1}(I-2A)\left(I-\frac{2A}{2}\right)\cdots\left(I-\frac{2A}{k}\right)\right] \tag{23}$$

As well, by replacing $A$ by $-qA$ in (23), one obtains, instead of (22), an expression whose series coefficients alternate in sign.

#### **3.2 Rapidly convergent series formulae**

From the basic Eq. (20) we now derive formulae containing series with a higher rate of convergence than those presented in the previous subsection.

Firstly, it is obvious that for values of $v$ close to 1 the series in (20) has a high rate of convergence. For instance, with $v = 1 + 10^{-q}$, $10^{-q} \ll 1$, and replacing $A\ln\left(1+10^{-q}\right)$ by $A$, we obtain

$$e^{A} = I + 10^{-q}c_q A\left[I + \sum_{k=1}^{\infty}\frac{(-1)^k\,10^{-qk}}{k+1}\left(I - c_q A\right)\left(I-\frac{c_q A}{2}\right)\cdots\left(I-\frac{c_q A}{k}\right)\right] \tag{24}$$

where $c_q \equiv 1/\ln\left(1+10^{-q}\right)$. This series is rapidly convergent.
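Formula (24) is straightforward to implement; folding the factor $10^{-q}$ into the running product keeps the intermediate quantities well scaled. The sketch below (Python, with a hypothetical $2\times 2$ matrix) compares it with `scipy.linalg.expm`:

```python
import numpy as np
from scipy.linalg import expm

def expm_series(A, q=6, n_terms=40):
    """Matrix exponential via the rapidly convergent series (24),
    with c_q = 1/ln(1 + 10^{-q})."""
    I = np.eye(A.shape[0])
    cq = 1.0 / np.log1p(10.0 ** (-q))
    # P = (-1)^k 10^{-qk} (I - cq A)(I - cq A/2)...(I - cq A/k),
    # with the scaling folded in so nothing overflows
    P = I.copy()
    acc = I.copy()
    for k in range(1, n_terms + 1):
        P = 10.0 ** (-q) * ((cq / k) * (A @ P) - P)
        acc = acc + P / (k + 1)
    return I + 10.0 ** (-q) * cq * (A @ acc)

A = np.array([[0.5, 0.2], [-0.1, 0.3]])
print(np.allclose(expm_series(A), expm(A)))   # True
```

With the scaling folded in, the $k$-th term behaves essentially like $A^k/k!$, so for a matrix of moderate norm a few dozen terms already give machine-precision agreement.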

Secondly, the convergence of the series in the expressions derived from (20) can further be improved by successive integrations. Indeed, integrating both sides of (20) from 1 to *v*, we have

$$\begin{split}
v\,e^{A\ln v} &= I + (1-v)(I+A)\left[-I + \frac{1-v}{2}A\right.\\
&\quad\left. + A\sum_{k=1}^{\infty}\frac{(1-v)^{k+1}}{(k+1)(k+2)}(I-A)\left(I-\frac{A}{2}\right)\cdots\left(I-\frac{A}{k}\right)\right]
\end{split} \tag{25}$$


Integrating repeatedly we obtain the identity

$$\begin{split}
v^{p}e^{A\ln v} &= I + p!\sum_{k=1}^{p}\frac{(-1)^k(1-v)^k}{k!\,(p-k)!}\left(I+\frac{A}{p}\right)\left(I+\frac{A}{p-1}\right)\cdots\left(I+\frac{A}{p-k+1}\right)\\
&\quad + (-1)^{p+1}(1-v)^{p+1}A(I+A)\left(I+\frac{A}{2}\right)\cdots\left(I+\frac{A}{p}\right)\left[\frac{1}{p+1}\,I\right.\\
&\quad\left. + \,p!\sum_{k=1}^{\infty}\frac{(1-v)^{k}}{(k+1)(k+2)\cdots(k+p+1)}(I-A)\left(I-\frac{A}{2}\right)\cdots\left(I-\frac{A}{k}\right)\right],
\end{split} \tag{26}$$

$$p = 1, 2, \ldots, \qquad v \in (0, 2)$$
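Since the matrices in these identities may be replaced by real numbers and $I$ by 1, identity (26) is easy to sanity-check numerically in the scalar case. The sketch below (Python, with arbitrary sample values of $a$, $v$, and $p$) evaluates both sides:

```python
import math

def lhs26(a, v, p):
    return v ** p * math.exp(a * math.log(v))

def rhs26(a, v, p, n_terms=400):
    """Scalar instance of identity (26), with I -> 1 and A -> a."""
    total = 1.0
    for k in range(1, p + 1):                     # finite sum
        prod = 1.0
        for j in range(p - k + 1, p + 1):
            prod *= 1 + a / j
        total += (math.factorial(p) * (-1) ** k * (1 - v) ** k
                  / (math.factorial(k) * math.factorial(p - k))) * prod
    front = a                                     # a(1+a)(1+a/2)...(1+a/p)
    for j in range(1, p + 1):
        front *= 1 + a / j
    inner = 1.0 / (p + 1)
    prod = 1.0
    for k in range(1, n_terms + 1):               # infinite series, truncated
        prod *= 1 - a / k
        denom = 1.0
        for m in range(k + 1, k + p + 2):         # (k+1)(k+2)...(k+p+1)
            denom *= m
        inner += math.factorial(p) * (1 - v) ** k / denom * prod
    return total + (-1) ** (p + 1) * (1 - v) ** (p + 1) * front * inner

for (a, v, p) in [(0.7, 0.6, 1), (0.7, 0.6, 2), (-0.4, 1.3, 3)]:
    print(abs(lhs26(a, v, p) - rhs26(a, v, p)))   # all tiny (rounding level)
```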


The same result is obtained by replacing $A$ with $pI + A$ in (20). This identity contains an infinite series whose coefficients decrease rapidly as $p$ increases. Obviously, for a given $A$ and $v$, (26) generates more efficient computational formulae than those in the previous subsection. As before, for $v \in (1, 2)$ the infinite series have coefficients that alternate in sign. For example, with $v = e^{1/2}$ in (26) and then replacing $A$ by $2A$ and $p$ by $2p$, we have, instead of (23),

$$\begin{split}
e^{A} &= e^{-p}\left[I + (2p)!\sum_{k=1}^{2p}\frac{\left(e^{1/2}-1\right)^k}{k!\,(2p-k)!}\left(I+\frac{2A}{2p}\right)\left(I+\frac{2A}{2p-1}\right)\cdots\left(I+\frac{2A}{2p-k+1}\right)\right.\\
&\quad + 2\left(e^{1/2}-1\right)^{2p+1}A(I+2A)\left(I+\frac{2A}{2}\right)\cdots\left(I+\frac{2A}{2p}\right)\left[\frac{1}{2p+1}\,I\right.\\
&\quad\left.\left. + \,(2p)!\sum_{k=1}^{\infty}\frac{(-1)^k\left(e^{1/2}-1\right)^k}{(k+1)(k+2)\cdots(k+2p+1)}(I-2A)\left(I-\frac{2A}{2}\right)\cdots\left(I-\frac{2A}{k}\right)\right]\right],\\
&\qquad p = 1, 2, \ldots
\end{split} \tag{27}$$

Taking $v = 1 + 10^{-q}$, with $q > 0$ and $c_q \equiv 1/\ln\left(1+10^{-q}\right)$, (26) gives (compare with (24))

(28)

Notice that, in the new formulae derived from (26), the infinite series are very rapidly convergent, their rate of convergence increasing as the parameter $p$ increases. Highly accurate numerical results can be generated with only a small number of terms retained in the infinite series of these formulae (see Section 4).

**Note.** All the formulae presented in this section remain valid if $A$ is changed into $-A$. Obviously, in all these expressions $A$ can be replaced by a real number and the identity matrix $I$ by 1, yielding a few novel identities and summation formulae for series of real numbers. Also, the expression in the brackets of (26) for $v \to 0$ is just $\left(I + A/p\right)^{-1}/p$ if $I + A/p$ is positive definite and, thus, we obtain another identity, i.e.,


$$\begin{split}
&p!\sum_{k=1}^{p}\frac{(-1)^{k+1}}{k!\,(p-k)!}\left(I+\frac{A}{p}\right)\left(I+\frac{A}{p-1}\right)\cdots\left(I+\frac{A}{p-k+1}\right)\\
&\quad = I + \frac{(-1)^{p+1}}{p}\,A(I+A)\left(I+\frac{A}{2}\right)\cdots\left(I+\frac{A}{p-1}\right), \qquad p = 2, 3, \ldots
\end{split} \tag{29}$$

which reduces for $A = 0$ to an elementary binomial sum.
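Identity (29) can likewise be checked in the scalar case ($I \to 1$, $A \to a$). The sketch below (Python, with an arbitrary sample value of $a$) verifies it for a few values of $p$:

```python
import math

def lhs29(a, p):
    """Left-hand side of (29) in the scalar case."""
    s = 0.0
    for k in range(1, p + 1):
        prod = 1.0
        for j in range(p - k + 1, p + 1):
            prod *= 1 + a / j
        s += (-1) ** (k + 1) / (math.factorial(k) * math.factorial(p - k)) * prod
    return math.factorial(p) * s

def rhs29(a, p):
    """Right-hand side of (29): 1 + (-1)^{p+1}/p * a(1+a)(1+a/2)...(1+a/(p-1))."""
    prod = a * (1 + a)
    for j in range(2, p):
        prod *= 1 + a / j
    return 1 + (-1) ** (p + 1) / p * prod

for p in (2, 3, 7):
    print(abs(lhs29(0.9, p) - rhs29(0.9, p)))   # each ~0 up to rounding
```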

#### **4. Solution of general linear systems**

In what follows, we apply the matrix exponential formulae from the previous section and present a new iteration procedure and a matrix product formula for the solution of large systems of linear algebraic equations.

#### **4.1 An iterative method**


Equation (16) can be written in the form

$$x(v) = x_o + \left(I - e^{A[g(v) - g(v_o)]}\right)(x_S - x_o) \tag{30}$$

where $x_S = x(v_S)$ is the solution of (11), $x_o = x(v_o)$, $A$ is a positive definite matrix, and $g(v)$ is a function of the real variable $v$ such that $g(v) \to -\infty$ for $v \to v_S$ (see Subsection 2.2). To get $x(v)$ for values of $v$ very close to $v_S$, we choose an adequate $g(v)$ and a formula for the matrix exponential from Section 3. When $g(v_o) = 0$, applying (22) for instance gives

$$\begin{split}
x(v) &= x_o + (1-e^{-1})\,g(v)\left[I + \sum_{k=1}^{\infty}\frac{(1-e^{-1})^k}{k+1}\,(I + A\,g(v))\right.\\
&\quad\left.\times\left(I + \frac{A}{2}\,g(v)\right)\cdots\left(I + \frac{A}{k}\,g(v)\right)\right](Ax_o - b)
\end{split} \tag{31}$$

If $g(v) = \ln v$, with $v_o = 1$ and $v_S = 0$, we compute $x\left(e^{-N}\right)$ for $N \gg 1$, which is closer to the solution $x_S$,

$$\begin{split}
x_S^{(1)} &= x_S^{(0)} - N(1-e^{-1})\left[I + \sum_{k=1}^{\infty}\frac{(1-e^{-1})^k}{k+1}\,(I - NA)\right.\\
&\quad\left.\times\left(I - \frac{N}{2}A\right)\cdots\left(I - \frac{N}{k}A\right)\right]\left(Ax_S^{(0)} - b\right)
\end{split} \tag{32}$$

where $x_S^{(0)} \equiv x_o$. This equation is applied iteratively, replacing $x_S^{(1)}$ and $x_S^{(0)}$, respectively, with $x_S^{(i)}$ and $x_S^{(i-1)}$, $i = 2, 3, \ldots$, until $x_S^{(i)}$ satisfies (11) with the desired accuracy.

To evaluate the amount of computation necessary to obtain the solution of (11) with a certain accuracy, let us take $N$ such that $N\|A\| = 10$, when one needs about 30 terms in the infinite series, i.e., 30 matrix-vector multiplications. The number of iterations increases with the condition number of $A$. To see this and to determine the corresponding number of iterations, consider (11) with $b = 0$ and $A$ replaced with a diagonal matrix whose entries are positive numbers, the greatest of these being 1, and whose condition number is the same as that of $A$. The solution of this system is $x_S = 0$ and the components of the solution of (1), with $f(v) = 1/v$, are $x_{ok}v^{\lambda_k}$, where $\lambda_k$, $k = 1, 2, \ldots, n$, are the entries of the diagonal matrix. In order to reduce the magnitudes of all these components to at most 1%, e.g., of the corresponding initial magnitudes, one needs no iteration if the condition number is less than 2, but 5 and, respectively, 46 iterations are needed if the condition number is 10 or 100.

What is remarkable in the iterative method based on (32) is that, for matrices with the same condition number and the same norm, the number of iterations required is the same, independently of the size of the matrices. Considering approximately $2n^2$ arithmetic operations for one matrix-vector multiplication, where $n$ is the number of equations and unknowns in (11), the total number of arithmetic operations required is thus proportional to only $n^2$. In the examples given above one has to perform, respectively, $60n^2$, $300n^2$ and $2760n^2$ arithmetic operations. Assuming only $2n^3/3$ arithmetic operations for the Gaussian elimination procedure, the method presented in this subsection requires less computation for the same examples if, respectively, $n > 90$, $n > 450$ and $n > 4140$. One can also notice that the application of Eq. (32) leads to the actual solution of (11) independently of the small error introduced in the computation at each iteration.
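A minimal implementation of the iteration based on (32) might look as follows (Python sketch; the test matrix, its normalization, and the truncation at 40 series terms are illustrative choices, not prescriptions from the text):

```python
import numpy as np

def solve_iterative(A, b, N=10.0, n_terms=40, max_iter=100, tol=1e-10):
    """Iterative solution of Ax = b based on (32), for positive definite A.
    N is chosen so that N * ||A|| is about 10, as in the text."""
    x = np.zeros(A.shape[0])              # x_S^(0) = x_o = 0
    c = 1 - np.exp(-1)
    for _ in range(max_iter):
        r = A @ x - b
        term = r.copy()                   # running product applied to r
        acc = r.copy()                    # k = 0 (identity) contribution
        for k in range(1, n_terms + 1):
            term = term - (N / k) * (A @ term)    # multiply by (I - (N/k)A)
            acc = acc + c ** k / (k + 1) * term
        x = x - N * c * acc
        if np.linalg.norm(A @ x - b) <= tol * np.linalg.norm(b):
            break
    return x

rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6))
A = M @ M.T + 6 * np.eye(6)               # positive definite test matrix
A /= np.linalg.norm(A, 2)                 # normalize so that ||A|| = 1
b = rng.standard_normal(6)
x = solve_iterative(A, b, N=10.0)
print(np.allclose(A @ x, b))              # True
```

Each iteration contracts the error roughly by $e^{-N\lambda_{\min}}$, so for this well-conditioned example only a handful of iterations are needed, and the truncation error of the inner series does not bias the fixed point.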

#### **4.2 A matrix product formula**

The original general system (11) is replaced with an equivalent system such that its solution is obtained in terms of matrix exponentials for which highly convergent and accurate series formulae have been derived in Section 3.

Namely, instead of (11) we use the system

$$\left(I - e^{-\alpha A}\right)\mathbf{x} = b_{\alpha} \tag{33}$$


where $\alpha$ is a real scalar to be chosen, $\alpha \neq 0$, and

$$b_{\alpha} = \alpha \left[ I - \frac{\alpha A}{2!} + \frac{(\alpha A)^2}{3!} - \frac{(\alpha A)^3}{4!} + \dotsb \right] b \tag{34}$$

Assuming $A$ to be positive definite, $\alpha$ is taken positive. Then, since $\left\| e^{-\alpha A} \right\| < 1$ for a normal matrix, the solution can be expressed as

$$\mathbf{x}_S = \left(I - e^{-\alpha A}\right)^{-1} b_{\alpha} = \left[\sum_{k=0}^{\infty} e^{-k\alpha A}\right] b_{\alpha} \tag{35}$$

with the norm of the matrix exponentials decreasing when *k* increases [6]

$$\left\| e^{-k\alpha A} \right\| \le e^{-k\alpha\lambda} \tag{36}$$

where $\lambda$ is the smallest eigenvalue of $A$. $b_{\alpha}$ can be accurately computed by using, instead of (34), an equivalent expression, for instance (see (22))

$$b_{\alpha} = \left(1 - e^{-1}\right)\alpha \left[ I + \sum_{k=1}^{\infty} \frac{\left(1 - e^{-1}\right)^k}{k+1} \left(I - \alpha A\right)\left(I - \frac{\alpha A}{2}\right) \cdots \left(I - \frac{\alpha A}{k}\right) \right] b \tag{37}$$
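In the scalar case the series in (37), as reconstructed here, amounts to the identity $(1-e^{-x})/x = (1-e^{-1})\bigl[1 + \sum_{k\ge1} \frac{(1-e^{-1})^k}{k+1}\prod_{j=1}^{k}(1-x/j)\bigr]$, which can be checked numerically (a sketch, not the chapter's own code):

```python
import math

# Scalar form of (37): (1 - e^{-x})/x, x = alpha*a > 0
def b_alpha_series(x, terms=200):
    c = 1.0 - math.exp(-1.0)          # ratio (1 - e^{-1}) of the series
    s, prod = 1.0, 1.0
    for k in range(1, terms + 1):
        prod *= 1.0 - x / k           # running product (1 - x)(1 - x/2)...(1 - x/k)
        s += c ** k / (k + 1) * prod
    return c * s

for x in (0.5, 2.0, 2.7, 10.0):
    exact = (1.0 - math.exp(-x)) / x  # scalar value of (I - e^{-alpha A})A^{-1}
    assert abs(b_alpha_series(x) - exact) < 1e-12
print("series (37) check passed")
```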


If the infinite series in (35) is truncated at $k = N_S$, the rest of the series has a norm

$$\left\| R_{N_S} \right\| < \frac{e^{-(N_S+1)\alpha\lambda}}{1 - e^{-\alpha\lambda}} \tag{38}$$

Much less numerical computation (see below) is needed if the infinite series in (35) is transformed into an infinite product using the identity [5]

$$(1-u)^{-1} = \prod_{k=0}^{\infty} \left(1 + u^{2^k}\right), \quad 0 < u < 1 \tag{39}$$

which is also valid for matrices whose norm is less than 1. Thus, (35) becomes

$$\mathbf{x}_S = \left[\prod_{k=0}^{\infty} \left(I + e^{-2^k \alpha A}\right)\right] b_{\alpha} \tag{40}$$

with the norm of the exponentials $e^{-2^k \alpha A}$ decreasing very rapidly when $k$ increases. Truncating the infinite product to $k = N_P$, i.e., $N_P + 1$ factors, leaves a remaining factor

$$F_{N_P} = \prod_{k=N_P+1}^{\infty} \left(I + e^{-2^k \alpha A}\right) \tag{41}$$

whose norm is


$$\left\| F_{N_P} \right\| < \prod_{k=N_P+1}^{\infty} \left(1 + e^{-2^k \alpha\lambda}\right) = \frac{1}{1 - e^{-\alpha\lambda}} \cdot \frac{1}{\prod_{k=0}^{N_P} \left(1 + e^{-2^k \alpha\lambda}\right)} \tag{42}$$
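The equality of the series (35) and the product (40) can be illustrated on a small symmetric positive definite system; the matrix and right-hand side below are illustrative (not from the chapter), and the matrix exponential is formed by eigendecomposition since $A$ is symmetric:

```python
import numpy as np

A = np.array([[2.0, 0.5], [0.5, 1.0]])      # illustrative SPD matrix
b = np.array([1.0, 2.0])
alpha = 1.0
n = A.shape[0]

w, V = np.linalg.eigh(A)                     # A symmetric: A = V diag(w) V^T
expmA = lambda t: (V * np.exp(t * w)) @ V.T  # matrix exponential e^{tA}

# b_alpha = (I - e^{-alpha A}) A^{-1} b, cf. (33)-(34)
b_alpha = (np.eye(n) - expmA(-alpha)) @ np.linalg.solve(A, b)

# Truncated series (35): sum_{k=0}^{NS} e^{-k alpha A} b_alpha
x_series = sum(expmA(-k * alpha) for k in range(60)) @ b_alpha

# Truncated product (40): prod_{k=0}^{NP} (I + e^{-2^k alpha A}) b_alpha
M = np.eye(n)
for k in range(6):
    M = M @ (np.eye(n) + expmA(-(2 ** k) * alpha))
x_product = M @ b_alpha

x_exact = np.linalg.solve(A, b)
print(np.allclose(x_series, x_exact), np.allclose(x_product, x_exact))  # True True
```

Note how few factors the product needs compared with the number of terms in the series, which is the point of the transformation (39)-(40).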

Let us compare the maximum value of the norm of the truncated matrix in the brackets of (35) and (40) with that of the corresponding untruncated matrix, in order to get a rough estimate of the numbers $N_S$ and $N_P$ of matrix exponentials involved in the numerical computation to achieve a certain accuracy. This will also allow us to estimate the total number of matrix-vector multiplications necessary to obtain the solution. The ratio of the maximum norm of the truncated matrix to the maximum norm of the untruncated matrix in the brackets of (35) and (40) is, respectively (see (38) and (42)),

$$\rho = 1 - e^{-(N_S+1)\alpha\lambda} \tag{43}$$

and

$$\rho = \left(1 - e^{-\alpha\lambda}\right) \prod_{k=0}^{N_P} \left(1 + e^{-2^k \alpha\lambda}\right) \tag{44}$$
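The estimates (43) and (44) can be turned into a small search for the smallest $N_S$ and $N_P$ reaching a prescribed ratio $\rho$; this sketch uses $\alpha = 20$ and the telescoped form of (44), $\rho = 1 - e^{-2^{N_P+1}\alpha\lambda}$:

```python
import math

def smallest_NS(alpha, lam, rho=0.99):
    # smallest NS with 1 - exp(-(NS+1)*alpha*lam) >= rho, from (43)
    NS = 0
    while 1.0 - math.exp(-(NS + 1) * alpha * lam) < rho:
        NS += 1
    return NS

def smallest_NP(alpha, lam, rho=0.99):
    # the product ratio (44) telescopes to 1 - exp(-2**(NP+1)*alpha*lam)
    NP = 0
    while 1.0 - math.exp(-(2 ** (NP + 1)) * alpha * lam) < rho:
        NP += 1
    return NP

for lam in (1e-1, 1e-2, 1e-3):
    print(lam, smallest_NS(20.0, lam), smallest_NP(20.0, lam))
```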

To illustrate the computation complexity when using (35) or (40), assume that $\|A\| = 1$ and $\alpha = 20$. If $\rho$ is imposed to be $\approx 0.99$, for example, one needs $N_S = 2$, i.e., three terms in (35), and $N_P = 1$, i.e., two factors in (40), if $\lambda = 10^{-1}$. If $\lambda = 10^{-2}$ these numbers increase to $N_S = 23$ and $N_P = 4$, and when $\lambda = 10^{-3}$ one gets $N_S = 230$, but $N_P$ only increases to 7. It is clear that, by applying the formula (40), the number of exponentials needed in the numerical computation is much smaller than if formula (35) were applied. For all the matrix exponentials involved in the numerical computation we use the formula (28), containing a highly convergent series, such that


$$\begin{aligned} e^{-\alpha A} = \left(1 + 10^{-q}\right)^{-p} &\left\{ I + p! \sum_{k=1}^{p} \frac{10^{-qk}}{k!(p-k)!} \left(I - \frac{c_q \alpha A}{p}\right)\left(I - \frac{c_q \alpha A}{p-1}\right) \cdots \left(I - \frac{c_q \alpha A}{p-k+1}\right) \right. \\ &\quad - 10^{-(p+1)q}\, c_q \alpha A \left(I - c_q \alpha A\right)\left(I - \frac{c_q \alpha A}{2}\right) \cdots \left(I - \frac{c_q \alpha A}{p}\right) \left[ \frac{I}{p+1} \right. \\ &\quad \left.\left. +\, p! \sum_{k=1}^{\infty} \frac{(-1)^k\, 10^{-qk}\, k!}{(k+p+1)!} \left(I + c_q \alpha A\right)\left(I + \frac{c_q \alpha A}{2}\right) \cdots \left(I + \frac{c_q \alpha A}{k}\right) \right] \right\}, \\ & \qquad p = 1, 2, \ldots \end{aligned} \tag{45}$$

where $q > 0$ and $c_q \equiv 1/\ln\left(1 + 10^{-q}\right)$. With $\alpha\|A\| = 20$ and choosing $q = 1$ and $p = 10$, for instance, $e^{-\alpha A}$ is determined accurately by retaining 50 terms in the infinite series, and thus to multiply $e^{-\alpha A}$ with a vector one needs 71 matrix-vector multiplications. To compute $\mathbf{x}_S$ from (40) one has to use repeatedly the multiplication of $e^{-20A}$ with a vector. For a matrix $A$ with $\lambda = 10^{-1}$ one has to retain two factors in (40), and thus to multiply $e^{-20A}$ and $e^{-2 \cdot 20A}$ with a vector. This amounts to using 3 times the multiplication of $e^{-20A}$ with a vector, which requires $3 \times 71 = 213$ matrix-vector multiplications. When $\lambda = 10^{-2}$ the infinite product in (40) is truncated at $k = N_P = 4$, and this requires the multiplication of $e^{-2^k \cdot 20A}$, $k = 0, 1, 2, 3, 4$, with a vector, i.e., using 31 times the multiplication of $e^{-20A}$ with a vector, for a total of $31 \times 71 = 2201$ matrix-vector multiplications. We also have to add the matrix-vector multiplications required to compute $b_{\alpha}$ in (37). A very accurate result for $b_{\alpha}$ when $\alpha = 20$ can be achieved by applying four times the series in the brackets of (37) for $\alpha = 5$, each time retaining 30 terms. This requires a total of about 120 multiplications of a matrix of the form $I - 5A/k$ with a vector. In all the matrix-vector multiplications involved when applying (45), the matrices are of the form $I - c_q \alpha A/k$ and become better and better conditioned as $k$ increases.
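Formula (45), as reconstructed here, can be sanity checked in the scalar case. This is a sketch only: for a large argument $\alpha a$ the large oscillating terms make a naive floating-point evaluation inaccurate, so a moderate argument is used.

```python
import math

# Scalar evaluation of (45): e^{-x}, with x = alpha*a, u = 10^{-q}, c_q = 1/ln(1+u)
def exp_neg(x, q=1, p=10, terms=50):
    u = 10.0 ** (-q)
    y = x / math.log1p(u)               # y = c_q * alpha * a
    s = 1.0
    for k in range(1, p + 1):           # finite sum, k = 1..p
        prod = 1.0
        for j in range(p, p - k, -1):   # (1 - y/p)(1 - y/(p-1))...(1 - y/(p-k+1))
            prod *= 1.0 - y / j
        s += math.factorial(p) * u ** k / (math.factorial(k) * math.factorial(p - k)) * prod
    head = y                             # scalar c_q aA (1 - c_q aA)...(1 - c_q aA/p)
    for j in range(1, p + 1):
        head *= 1.0 - y / j
    tail = 1.0 / (p + 1)                 # bracket: 1/(p+1) plus the infinite series
    for k in range(1, terms + 1):
        prod = 1.0
        for i in range(1, k + 1):
            prod *= 1.0 + y / i
        tail += (-1) ** k * u ** k * math.factorial(p) * math.factorial(k) \
                / math.factorial(k + p + 1) * prod
    return (1.0 + u) ** (-p) * (s - u ** (p + 1) * head * tail)

print(abs(exp_neg(2.0) - math.exp(-2.0)) < 1e-10)  # True
```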

Adding up the number of arithmetic operations involved shows that, with respect to the classical Gaussian elimination method, the procedure presented in this subsection is advantageous for very large systems (11). Namely, assuming the same accuracy and only $2n^3/3$ arithmetic operations for the Gaussian elimination, with the data given above, one has to have $n > 3 \times (213 + 120) = 999$ equations and unknowns if $\lambda = 10^{-1}$, and $n > 3 \times (2201 + 120) = 6963$ equations and unknowns if $\lambda = 10^{-2}$, for the proposed method to be more advantageous. For a given $\alpha\|A\|$, one application of $e^{-\alpha A}$ requires a determined finite number of matrix-vector multiplications, independently of the size of $A$. It is remarkable, as for the iterative method in the previous subsection, that for a given condition of $A$ one has to apply $e^{-\alpha A}$ a well-determined number of times, and thus the total number of arithmetic operations necessary to compute the solution with an imposed accuracy is proportional to only $n^2$.

It should be noted that, since the infinite series in the expression (45) is truncated and thus determined with a finite accuracy, the accuracy of the solution $\mathbf{x}_S$ becomes compromised after too large a number of matrix exponential-vector multiplications. This is why the worse conditioned systems (11) should be appropriately preconditioned. Practically, the computation with (40) is continued factor by factor and the accuracy of $\mathbf{x}$ is checked after each step.

### **5. Solution of general linear systems by numerical integration of differential equations**

In this section, we introduce first order differential equations whose numerical integration allows one to find efficiently the solution of linear systems of algebraic equations. Differential equations of the type of those in (1), with $f(v) = -1$ or $f(v) = 1/v$, cannot be used for this purpose because the first and higher order derivatives of $\mathbf{x}(v)$ tend to infinite values as $\mathbf{x}$ tends to the solution $\mathbf{x}_S$ of (11) (see Section 2).

Below, we construct ordinary differential equations for $\mathbf{x}(v)$ which satisfy the condition that the first few derivatives are finite when $\mathbf{x}(v)$ tends to $\mathbf{x}_S$ and which, therefore, are particularly useful for an accurate computation of $\mathbf{x}_S$. Let us consider the system (11) with a symmetric positive definite matrix. A quadratic functional

$$F(\mathbf{x}) = \frac{1}{2}\mathbf{x}^T A \mathbf{x} - \mathbf{x}^T b + \frac{1}{2} b^T A^{-1} b \tag{46}$$

is associated with (11) [6], whose minimum value is $F(\mathbf{x}_S) = 0$. Define now a real variable $v$, $v \ge 0$, such that

$$F(\mathbf{x}) = v^r \tag{47}$$

where $r$ is a real number to be chosen, $r > 0$, with $v = 0$ corresponding to the solution $\mathbf{x} = \mathbf{x}_S$ and $v = v_o$ to an initial point $\mathbf{x}_o$, $F(\mathbf{x}_o) = v_o^r$. Then,

$$\frac{dF}{dv} = r v^{r-1} = (A\mathbf{x} - b)^T \frac{d\mathbf{x}}{dv} \tag{48}$$

and, thus,


$$\frac{d\mathbf{x}}{dv} = r v^{r-1} \frac{A\mathbf{x} - b}{\left\| A\mathbf{x} - b \right\|^2} \tag{49}$$

This is the differential equation to be integrated from $v = v_o$ to $v = v_S = 0$. The second derivative of $\mathbf{x}$ is obtained in the form

$$\begin{aligned} \frac{d^2\mathbf{x}}{dv^2} &= r(r-1)v^{r-2} \frac{A\mathbf{x} - b}{\left\| A\mathbf{x} - b \right\|^2} \\ &\quad + \left(r v^{r-1}\right)^2 \frac{1}{\left\| A\mathbf{x} - b \right\|^4} \left\{ A - 2\,\frac{(A\mathbf{x} - b)^T A (A\mathbf{x} - b)}{\left\| A\mathbf{x} - b \right\|^2}\, I \right\} (A\mathbf{x} - b) \end{aligned} \tag{50}$$

Higher order derivatives can be worked out if needed.

In order to see the behaviour of the derivatives close to the solution *xS*, Eqs. (49) and (50) are rewritten as


$$\frac{d\mathbf{x}}{dv} = r\, \left(F(\mathbf{x})\right)^{1-\frac{1}{r}} \frac{A\mathbf{x} - b}{\left\| A\mathbf{x} - b \right\|^2} \tag{51}$$


and

$$\begin{aligned} \frac{d^2\mathbf{x}}{dv^2} &= r(r-1)\left(F(\mathbf{x})\right)^{1-\frac{2}{r}} \frac{A\mathbf{x} - b}{\left\| A\mathbf{x} - b \right\|^2} \\ &\quad + r^2 \left(F(\mathbf{x})\right)^{2-\frac{2}{r}} \frac{1}{\left\| A\mathbf{x} - b \right\|^4} \left\{ A - 2\,\frac{(A\mathbf{x} - b)^T A (A\mathbf{x} - b)}{\left\| A\mathbf{x} - b \right\|^2}\, I \right\} (A\mathbf{x} - b) \end{aligned} \tag{52}$$

with *F x*ð Þ in (46) put in the form

$$F(\mathbf{x}) = \frac{1}{2}(A\mathbf{x} - b)^T (\mathbf{x} - \mathbf{x}_S) \tag{53}$$

Notice that, as $\mathbf{x}$ tends to $\mathbf{x}_S$, when $\left\| A\mathbf{x} - b \right\| \equiv \varepsilon$ tends to zero,

$$\left\| \frac{d\mathbf{x}}{dv} \right\| \sim K_1(v)\, \varepsilon^{1-\frac{2}{r}}, \qquad \left\| \frac{d^2\mathbf{x}}{dv^2} \right\| \sim K_2(v)\, \varepsilon^{1-\frac{4}{r}} \tag{54}$$

where $K_1(v)$ and $K_2(v)$ are finite when $v \to 0$. Therefore, as $\mathbf{x} \to \mathbf{x}_S$, $\left\| d\mathbf{x}/dv \right\| \to 0$ if $r > 2$ and $\left\| d^2\mathbf{x}/dv^2 \right\| \to 0$ if $r > 4$.
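The exponent $1 - 2/r$ in (54) can be verified numerically: along a fixed direction through $\mathbf{x}_S$, $\varepsilon = \|A\mathbf{x}-b\|$ is proportional to the distance $t$, and the norm of the right-hand side of (51) must scale as $\varepsilon^{1-2/r}$. The matrix and vectors below are illustrative:

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])   # illustrative SPD matrix
b = np.array([1.0, 1.0])
x_s = np.linalg.solve(A, b)
u = np.array([1.0, -0.5])                 # fixed approach direction

def dxdv_norm(t, r):
    x = x_s + t * u
    g = A @ x - b
    F = 0.5 * g @ np.linalg.solve(A, g)   # F(x) = (1/2)(Ax-b)^T A^{-1}(Ax-b)
    return r * F ** (1.0 - 1.0 / r) / np.linalg.norm(g)   # norm of (51)

for r in (2, 4, 8):
    # log-log slope of ||dx/dv|| versus eps between two small distances t
    slope = np.log(dxdv_norm(1e-6, r) / dxdv_norm(1e-5, r)) / np.log(0.1)
    assert abs(slope - (1 - 2.0 / r)) < 1e-6
print("exponents in (54) confirmed")
```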

Another differential equation we present here is

$$\frac{d\mathbf{x}}{ds} = \frac{A\mathbf{x} - b}{\left\| A\mathbf{x} - b \right\|} \tag{55}$$

with the second derivative

$$\frac{d^2\mathbf{x}}{ds^2} = \frac{1}{\left\| A\mathbf{x} - b \right\|^2} \left\{ A - \frac{(A\mathbf{x} - b)^T A (A\mathbf{x} - b)}{\left\| A\mathbf{x} - b \right\|^2}\, I \right\} (A\mathbf{x} - b) \tag{56}$$

In this case, always

$$\left\| \frac{d\mathbf{x}}{ds} \right\| = 1 \tag{57}$$

even for *x* ! *xS*, but the second derivative tends to an infinite value when *x* ! *xS*

$$\left\| \frac{d^2\mathbf{x}}{ds^2} \right\| \sim K(s)\, \varepsilon^{-1} \tag{58}$$

where $K(s)$ is finite and $\varepsilon \equiv \left\| A\mathbf{x} - b \right\|$. The relationship between the differentials of the variables $v$ and $s$ in (49) and (55) is


$$ds = \frac{r \left(F(\mathbf{x})\right)^{1-\frac{1}{r}}}{\left\| A\mathbf{x} - b \right\|}\, dv \tag{59}$$

and for *x* ! *xS* we have (see (54))


$$\frac{ds}{dv} \sim K_3(v)\, \varepsilon^{1-\frac{2}{r}} \tag{60}$$

The differential Eqs. (49) in $v$ and (55) in $s$ require practically the same amount of computation for their right-hand sides, i.e., one matrix-vector multiplication. The first derivatives $d\mathbf{x}/dv$ for $r = 2$ and $d\mathbf{x}/ds$ remain finite when $\mathbf{x}$ tends to $\mathbf{x}_S$, while the second derivatives increase to infinite values as in (54) and (58). For $r = 4$ the second derivative in (52) remains finite when $\mathbf{x}$ tends to $\mathbf{x}_S$ (see (54)), while the first derivative and the ratio $ds/dv$ tend to zero as in (54) and (60), respectively. If $r > 4$, even $d^2\mathbf{x}/dv^2$ tends to zero as in (54).

Equations (49) and (55) can be integrated by classical numerical methods. Since we are not looking for an accurate solution of these equations all along the path from $\mathbf{x}_o$ to $\mathbf{x}_S$, but for finding accurately the final value $\mathbf{x} = \mathbf{x}_S$, we can use a lower order method, for instance even Euler's method [7]. This yields an approximate value of $\mathbf{x}_S$ which is then used as the initial point for repeating the numerical integration procedure. As we get closer to the solution $\mathbf{x}_S$, we decrease the step size in order to reduce the error; in the case of Euler's method the error is determined in terms of the norm of the second derivative. Higher order numerical integration methods can also be used in order to increase the computation efficiency.
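A minimal sketch of this procedure for (49) with $r = 2$, using Euler's method with repeated integration sweeps; the $2\times2$ system is illustrative, not from the chapter:

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])   # illustrative SPD system
b = np.array([1.0, 1.0])
x_exact = np.linalg.solve(A, b)

def F(x):                                 # quadratic functional (46)
    g = A @ x - b
    return 0.5 * g @ np.linalg.solve(A, g)

r, x = 2, np.zeros(2)                     # initial point x_o = 0
for sweep in range(6):                    # repeat the integration from the last point
    v = F(x) ** (1.0 / r)                 # current v_o, from (47)
    if v == 0.0:
        break
    dv = -v / 50                          # integrate from v = v_o down to v = 0
    for _ in range(50):
        g = A @ x - b
        gg = g @ g
        if gg == 0.0:
            break
        x = x + dv * r * v ** (r - 1) * g / gg   # Euler step on (49)
        v += dv

rel_err = np.linalg.norm(x - x_exact) / np.linalg.norm(x_exact)
print(rel_err)
```

Each sweep restarts from the last approximation with a smaller $v_o$ (hence a smaller step), which is the step-size reduction described above.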

To find a starting point for the integration procedure which is reasonably close to the solution point, one can minimize $F(\mathbf{x})$ in (46) along the normal direction, followed by a minimization of the distance to the solution point $\mathbf{x}_S$ along the direction of the normal to $F$ [8]. These two preliminary steps are repeated a few times, as needed.

Numerical experiments have been performed applying Euler's method to (49) for $r = 2$, $r = 4$ and $r = 8$, and to (55). Systems (11) of various sizes have been generated automatically; the differential Eqs. (49) for $r = 2$ and $r = 4$, and (55), have produced results with the least amount of computation when imposing an accuracy of 1%.

For matrices which are not symmetric positive definite, (46) is replaced with $F(\mathbf{x}) = \frac{1}{2}(A\mathbf{x} - b)^T (A\mathbf{x} - b)$.

#### **6. Conclusions**

A special type of matrix series is used in Section 2 to express the relationship between some first-order ordinary differential equations and systems of linear algebraic equations and, also, in Section 3 to derive efficient formulae for matrix exponentials that allow accurate and stable numerical computations in various applications. The main feature of these series is that, starting with their first term, which is already a matrix substantially better conditioned than the original problem matrix, each of the subsequent terms is obtained through multiplication by a better and better conditioned matrix that tends to the identity matrix. The new matrix exponential formulae contain very rapidly convergent series and can be applied to general, arbitrarily conditioned matrices, positive definite or not. They are used in Section 4 for two new methods of solution for general linear algebraic systems. One is an iterative method which corresponds to the solution of the differential Eq. (1) with $f(v) = 1/v$. It is based on the exact analytical expressions (30)–(32), which always yield results converging to the exact solution of the system (11). In the second method, the original algebraic system (11) is replaced with an equivalent system containing a matrix exponential $e^{-\alpha A}$, such that instead of inverting the system matrix $A$ we now have to invert $I - e^{-\alpha A}$. The exact analytical solution is obtained in the form of a series of matrix exponentials, which is transformed into an infinite matrix product in order to reduce substantially the necessary amount of computation. It should be remarked that the number of matrix-vector multiplications required to apply one matrix exponential to a vector depends only on the norm of the matrix, while the number of matrix exponential-vector multiplications depends on the condition of the system matrix; consequently, the total number of arithmetic operations needed to achieve an imposed accuracy with either method is practically proportional to $n^2$, where $n$ is the dimension of the matrix. The two methods require a comparable total amount of computation. It is also remarkable that for both methods the necessary amount of computation can be roughly predicted beforehand in terms of the system size, the system condition and the desired accuracy.
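The exact expressions of Section 4 are not reproduced in this excerpt; the sketch below illustrates the ingredients of the second method under stated assumptions. With $E = e^{-\alpha A}$ and $c = \int_0^\alpha e^{-sA}\,b\,ds$ (both computable by rapidly convergent series when $\|\alpha A\|$ is small), one has $A^{-1}b = (I - E)^{-1}c = \sum_k E^k c$, and the partial sums of this series of matrix exponentials collapse into the finite product $\prod_j (I + E^{2^j})$, which needs far fewer multiplications. The matrix, $\alpha$, and truncation depths are illustrative assumptions.

```python
import numpy as np

def expm_taylor(M, terms=30):
    """Taylor series for e^M; rapidly convergent when ||M|| is small."""
    E, T = np.eye(len(M)), np.eye(len(M))
    for m in range(1, terms + 1):
        T = T @ M / m
        E = E + T
    return E

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # small SPD example (assumed)
b = np.array([1.0, 2.0])
alpha = 0.1                               # chosen so ||alpha*A|| is small

E = expm_taylor(-alpha * A)               # e^{-alpha A}

# c = integral_0^alpha e^{-sA} b ds = sum_m (-1)^m alpha^(m+1) A^m / (m+1)!  b
C, T = np.zeros_like(A), alpha * np.eye(len(A))
for m in range(30):
    C = C + T
    T = T @ (-alpha * A) / (m + 2)
c = C @ b

# x = (I - E)^{-1} c = (sum_k E^k) c, evaluated as the product prod_j (I + E^{2^j})
P, Epow = np.eye(len(A)), E
for _ in range(7):                        # 7 factors cover 2^7 = 128 series terms
    P = P @ (np.eye(len(A)) + Epow)
    Epow = Epow @ Epow                    # E, E^2, E^4, ... by repeated squaring
x = P @ c
print(np.allclose(x, np.linalg.solve(A, b)))  # True
```

Each extra factor doubles the number of series terms accounted for, which is why the product form reduces the computation so substantially: $2^p$ terms cost only $p$ factor applications.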

In Section 5, a powerful method is presented based on the numerical integration of specially constructed ordinary differential equations.

**References**

[1] Ciric IR. Rapidly convergent matrix series for solving linear systems of equations. In: Proceedings of the 17th International Symposium on Antenna Technology and Applied Electromagnetics (ANTEM); 10-16 July 2016; Montréal, Canada: IEEE; 2016. ISBN: 978-1-4673-8478-0. DOI: 10.1109/ANTEM.2016.755022/

[2] Lanczos C. Applied Analysis. New York: Dover Publications, Inc.; 1988. ISBN: 0-486-65656-X (Paperback). CIP: 88-3961

[3] Moler CB, Van Loan CF. Nineteen dubious ways to compute the exponential of a matrix. SIAM Review. 1978;**20**:801-836. DOI: 10.1137/S00361445024180

[4] Hurewicz W. Lectures on Ordinary Differential Equations. New York: Dover; 1990. ISBN: 0-486-66420-1 (Paperback). QA372.H93

[5] Gradshteyn IS, Ryzhik IM. Table of Integrals, Series and Products. 5th ed. New York: Academic Press; 1994. ISBN: 0-12-294755-X. QA55.G6613

[6] Golub GH, Van Loan CF. Matrix Computations. 2nd ed. Baltimore: Johns Hopkins; 1989. ISBN: 0-8018-3772-3, 0-8018-3739-1 (Paperback). 88-4504

[7] Burden RL, Faires JD. Numerical Analysis. 5th ed. Boston: PWS-KENT; 1993. ISBN: 0-534-93219-3. CIP: 92-32192

[8] Ciric IR. A geometric property of the functional associated with general systems of algebraic equations. In: Proceedings of the International Symposium on Antenna Technology and Applied Electromagnetics and Canadian Radio Sciences Conference (ANTEM/URSI); 17-19 July 2006; Montréal, Canada: IEEE; 2006. p. 311-315. ISBN: 978-0-9738425-1-7
