**1. Introduction**

The problem of solving equations and systems of nonlinear equations is among the most important in theory and practice, not only in applied mathematics, but also in many branches of science and engineering: physics, computer science, astronomy, finance, etc. A glance at the literature shows a high level of contemporary interest.

The search for solutions of systems of nonlinear equations is an old, frequent, and important problem for many applications in mathematics and engineering (for example, see Refs. [1–5]).

The main goal of this chapter is to describe different methods for approximating a solution *ξ* of a system of nonlinear equations *F*(*x*) = 0, where *F* : *Ω* ⊆ ℝ*<sup>n</sup>* → ℝ*<sup>n</sup>* is a sufficiently differentiable function on the convex set *Ω* ⊆ ℝ*<sup>n</sup>*. The most commonly used techniques are iterative methods,

which build, from an initial guess, a sequence of iterates that converges to the solution of the problem under some conditions. Although not as many as in the case of scalar equations, some publications have appeared in recent years proposing different iterative methods for solving nonlinear systems (see, for example, Refs. [6–8], among others). Their authors have made several modifications to the classical methods to accelerate the convergence and to reduce the number of operations and functional evaluations per step of the iterative method. Newton's method is the most used iterative technique for solving this kind of problem (see Ref. [9]), whose iterative expression is

$$\mathbf{x}^{(k+1)} = \mathbf{x}^{(k)} - [F'(\mathbf{x}^{(k)})]^{-1} F(\mathbf{x}^{(k)}), \quad k = 0, 1, \dots \tag{1}$$

where *F*′(*x*<sup>(*k*)</sup>) denotes the Jacobian matrix associated to the function *F*, evaluated at *x*<sup>(*k*)</sup>.
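As a concrete illustration, iteration (1) can be sketched in a few lines of Python with NumPy. The test system, initial guess, and tolerances below are illustrative choices, not part of the original text; note that each step solves a linear system rather than inverting the Jacobian.

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-10, max_iter=50):
    """Newton's method (1) for F(x) = 0.  Each step solves the linear
    system F'(x) dx = F(x) instead of inverting the Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), F(x))
        x = x - dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Illustrative system: x1^2 + x2^2 = 1, x1 = x2 (root at (sqrt(2)/2, sqrt(2)/2))
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0*v[0], 2.0*v[1]], [1.0, -1.0]])
root = newton_system(F, J, [1.0, 0.5])
```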

We recall the concepts of order of convergence and efficiency index of an iterative scheme.

**Definition 1.1.** *Let* {*x*<sup>(*k*)</sup>}<sub>*k*≥0</sub> *be a sequence in* ℝ*<sup>n</sup>* *convergent to ξ*. *Then, the convergence is said to be linear if there exist M*, 0 < *M* < 1, *and k*<sub>0</sub> ∈ ℕ *such that* ‖*x*<sup>(*k*+1)</sup> − *ξ*‖ ≤ *M*‖*x*<sup>(*k*)</sup> − *ξ*‖, ∀*k* ≥ *k*<sub>0</sub>, *and of order p*, *p* > 1, *if there exist M* > 0 *and k*<sub>0</sub> ∈ ℕ *such that* ‖*x*<sup>(*k*+1)</sup> − *ξ*‖ ≤ *M*‖*x*<sup>(*k*)</sup> − *ξ*‖*<sup>p</sup>*, ∀*k* ≥ *k*<sub>0</sub>.

On the other hand, Ostrowski [10] introduced the *efficiency index* of an iterative method as *p*<sup>1/*d*</sup>, where *p* is the order of convergence and *d* is the number of functional evaluations per iteration. In the multidimensional case, it is more useful to define the efficiency index as *p*<sup>1/(*d*+*op*)</sup>, where *op* is the number of products-quotients per iteration.
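For instance, the two indices can be compared for Newton's method on a system of size *n*. The operation count used below (*n* + *n*² functional evaluations, about *n*³/3 + *n*² products-quotients for an LU-based linear solve) is a standard estimate, stated here as an assumption.

```python
def efficiency_index(p, d):
    """Ostrowski's index p^(1/d)."""
    return p ** (1.0 / d)

def computational_efficiency_index(p, d, op):
    """Multidimensional index p^(1/(d + op))."""
    return p ** (1.0 / (d + op))

# Newton's method on an n x n system: order p = 2, d = n + n^2 evaluations
# (n for F, n^2 for F'), op ~ n^3/3 + n^2 products-quotients per step.
n = 5
I = efficiency_index(2, n + n**2)
CI = computational_efficiency_index(2, n + n**2, n**3 / 3 + n**2)
```

As expected, accounting for the cost of the linear algebra (CI) lowers the index with respect to Ostrowski's definition (I).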

The most direct technique is to adapt the methods designed for solving nonlinear equations to the multidimensional case. This process is easy only if no evaluation of the nonlinear function that describes the system appears in the denominators of the iterative expression. This is the case of Newton's schemes and of Newton-type methods coming from quadrature formulas, such as those described in Refs. [11–18]. In Refs. [19] and [20], the authors designed a general procedure called *pseudo-composition* that allows one to obtain predictor-corrector methods of high order of convergence. These multipoint schemes use any method as a predictor and a corrector step in which Gaussian quadrature is used.

On the other hand, other multipoint schemes have been developed by using different techniques: Adomian decomposition [21–23], the replacement of the second derivative in one-point schemes by some approximation that yields multipoint iterative methods [8, 9], and Steffensen-type methods adapted to the multidimensional case (see, for instance, Refs. [17, 24], among others). The references in all these papers are also relevant.

A generally used technique for constructing iterative schemes is the composition of known methods. This technique was introduced by Traub [9]: if two iterative methods with orders of convergence *p*<sub>1</sub> and *p*<sub>2</sub>, respectively, are composed, the resulting scheme has order *p*<sub>1</sub>*p*<sub>2</sub>. The use of this technique greatly increases the number of functional evaluations of *F* and *F*′. Therefore, it is necessary to avoid as many of these new evaluations as possible by means of approximation techniques. In Ref. [25], the authors composed Newton's scheme twice with itself, 'freezing' the derivatives, and, by means of the method of undetermined coefficients, obtained the following fifth-order iterative scheme

$$z^{(k)} = \mathbf{y}^{(k)} - 5[F'(\mathbf{x}^{(k)})]^{-1}F(\mathbf{y}^{(k)}),$$

$$\mathbf{x}^{(k+1)} = z^{(k)} + \frac{1}{5}[F'(\mathbf{x}^{(k)})]^{-1}[16\,F(\mathbf{y}^{(k)}) - F(z^{(k)})],$$

where *y*<sup>(*k*)</sup> is a Newton step. Let us observe that it only needs three functional evaluations of *F* and one Jacobian evaluation per iteration. Moreover, all the linear systems involved have the same coefficient matrix. As a consequence, the efficiency index of this method is the best one, as far as we know. A similar procedure is used by Cordero et al. [26], obtaining Newton-Jarratt-type methods of fifth and sixth order. In addition, the following method, described by Arroyo et al. [27], belongs to the class of Jarratt-type methods, but it has order of convergence five,
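The composed scheme above can be sketched as follows, taking the coefficients 5 and 16 as printed in the display here (they should be checked against Ref. [25]). Every linear solve reuses the same matrix *F*′(*x*<sup>(*k*)</sup>); in an efficient implementation it would be LU-factorized once per step and the factorization reused for the three right-hand sides. The test system is illustrative.

```python
import numpy as np

def frozen_jacobian_step(F, J, x):
    """One step of the frozen-Jacobian composition: a Newton predictor
    followed by two corrections, all using the same matrix J(x)."""
    J0 = J(x)                                # evaluated (and factorizable) once
    y = x - np.linalg.solve(J0, F(x))        # Newton step
    z = y - 5.0 * np.linalg.solve(J0, F(y))
    return z + 0.2 * np.linalg.solve(J0, 16.0 * F(y) - F(z))

F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0*v[0], 2.0*v[1]], [1.0, -1.0]])
x = np.array([0.8, 0.6])
for _ in range(5):
    x = frozen_jacobian_step(F, J, x)
```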

$$\mathbf{x}^{(k+1)} = \mathbf{y}^{(k)} + [F'(\mathbf{x}^{(k)}) - 5F'(\mathbf{y}^{(k)})]^{-1}[3F'(\mathbf{x}^{(k)}) + F'(\mathbf{y}^{(k)})][F'(\mathbf{x}^{(k)})]^{-1}F(\mathbf{y}^{(k)}),$$

where *y*<sup>(*k*)</sup> is a Newton step.


88 Nonlinear Systems - Design, Analysis, Estimation and Control


In recent years, the technique of weight functions has also been developed, mainly for scalar equations. Weight functions are introduced in the iterative scheme to increase the order of convergence without increasing the number of functional evaluations. Among others, Sharma et al. [28] constructed a fourth-order scheme by using this procedure and, more recently, Artidiello et al. [7, 29] presented different families of high-order iterative methods by using matrix weight functions.

As we have previously mentioned, most of the iterative methods for nonlinear equations are not directly extendable to systems. However, in Refs. [6, 30], the authors present a general procedure to transform any scalar iterative method to the multidimensional case.

On the other hand, the dynamical analysis of an iterative method is becoming a trend in recent publications on iterative methods for scalar equations, because it allows us to classify the different iterative formulas not only from the point of view of their order of convergence, but also by analyzing how these formulas behave as a function of the initial estimate taken. Another advantage of this analysis is the possibility of selecting the more stable elements of a parametric family whose members have the same order of convergence (see, for example, Ref. [31]). A first step in this direction for nonlinear systems was given by Cordero et al. [32], and a deeper analysis was made by Cordero et al. [33], studying the behavior of several methods on particular polynomial systems. In any case, the dynamical study in the multidimensional case is an emerging research topic with a promising future.

The rest of the chapter is organized as follows. In Section 2, we describe different techniques for designing iterative methods for nonlinear systems. In Section 3, we give a brief overview of the dynamical study of an iterative method in the scalar case and its extension to the multidimensional case. In order to check the introduced methods and compare them with other classical ones, in Section 4 we apply them to different test problems.

## **2. Design of the methods**

In this section, we introduce three techniques for designing iterative methods for solving nonlinear systems of equations: pseudo-composition, weight function procedure, and a technique for extending scalar methods to the multidimensional case, in a non-trivial way.

The convergence results are going to be demonstrated by means of the *n*-dimensional Taylor expansion of the functions involved. Let *F* : *Ω* ⊆ ℝ*<sup>n</sup>* → ℝ*<sup>n</sup>* be a sufficiently Fréchet differentiable function in *Ω*. By using the notation introduced by Cordero et al. [26], the *q*th derivative of *F* at *u* ∈ ℝ*<sup>n</sup>*, *q* ≥ 1, is the *q*-linear function $F^{(q)}(u) : \mathbb{R}^n \times \cdots \times \mathbb{R}^n \to \mathbb{R}^n$ such that $F^{(q)}(u)(v_1, \dots, v_q) \in \mathbb{R}^n$. It is easy to observe that $F^{(q)}(u)(v_1, \dots, v_{q-1}, \cdot) \in \mathcal{L}(\mathbb{R}^n)$ and $F^{(q)}(u)(v_{\sigma(1)}, \dots, v_{\sigma(q)}) = F^{(q)}(u)(v_1, \dots, v_q)$, for every permutation *σ* of {1, 2, …, *q*}. We will use the notation:

$$F^{(q)}(u)(v_1, v_2, \dots, v_q) = F^{(q)}(u)\,v_1 v_2 \cdots v_q, \quad \text{and}$$

$$F^{(q)}(u)\,v^{q-1}\,F^{(p)}(u)\,v^{p} = F^{(q)}(u)F^{(p)}(u)\,v^{q+p-1}.$$

For *ξ* + *h* ∈ ℝ*<sup>n</sup>* lying in a neighborhood of a solution *ξ* of the nonlinear system *F*(*x*) = 0, and assuming that the Jacobian matrix *F*′(*ξ*) is nonsingular, Taylor's expansion can be applied, obtaining

$$F(\xi + h) = F'(\xi) \left[ h + \sum_{q=2}^{p-1} C_q h^q \right] + O[h^p],$$

where $C_q = \frac{1}{q!}[F'(\xi)]^{-1}F^{(q)}(\xi)$, *q* ≥ 2. We observe that $C_q h^q \in \mathbb{R}^n$ since $F^{(q)}(\xi) \in \mathcal{L}(\mathbb{R}^n \times \cdots \times \mathbb{R}^n, \mathbb{R}^n)$. In addition, we can express the Jacobian matrix of *F*, *F*′, as

$$F'(\xi + h) = F'(\xi) \left[ I + \sum_{q=2}^{p-1} q C_q h^{q-1} \right] + O[h^{p-1}],$$

where *I* is the identity matrix. Note that $q C_q h^{q-1} \in \mathcal{L}(\mathbb{R}^n)$. From this expansion, we can conjecture that

$$[F'(\xi+h)]^{-1} = [I + X_2 h + X_3 h^2 + X_4 h^3 + \cdots][F'(\xi)]^{-1} + O[h^p],$$

and, taking into account that $[F'(\xi+h)]^{-1}F'(\xi+h) = F'(\xi+h)[F'(\xi+h)]^{-1} = I$, we obtain

$$X_2 = -2C_2, \quad X_3 = 4C_2^2 - 3C_3, \quad X_4 = -8C_2^3 + 6C_2C_3 + 6C_3C_2 - 4C_4, \ \dots$$

We denote by $e_k = x^{(k)} - \xi$ the error in the *k*th iteration. The equation $e_{k+1} = L e_k^p + O[e_k^{p+1}]$, where *L* is a *p*-linear function, $L \in \mathcal{L}(\mathbb{R}^n \times \cdots \times \mathbb{R}^n, \mathbb{R}^n)$, is called the *error equation*, and *p* is the *order of convergence*.
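In numerical experiments, the order *p* in the error equation is usually estimated from consecutive error norms. The helper below is a standard approximation (sometimes called the computational order of convergence), shown here on a synthetic quadratically convergent sequence; it is an illustration, not part of the original text.

```python
import numpy as np

def estimated_orders(errors):
    """p_k ~ log(||e_{k+1}|| / ||e_k||) / log(||e_k|| / ||e_{k-1}||),
    obtained from ||e_{k+1}|| ~ C ||e_k||^p."""
    e = np.asarray(errors, dtype=float)
    return np.log(e[2:] / e[1:-1]) / np.log(e[1:-1] / e[:-2])

# A synthetic Newton-like sequence e_{k+1} = e_k^2:
errs = [1e-1, 1e-2, 1e-4, 1e-8]
orders = estimated_orders(errs)   # both entries equal 2
```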

#### **2.1. Pseudo-composition technique**


We use the generic formulas of the Gaussian quadrature and develop families of predictor-corrector iterative methods, variants of Newton's scheme, for solving nonlinear systems. Starting with any method of order *p* as a predictor and correcting by means of Gaussian quadrature, we will show that the final order of the obtained method depends, among other things, on the orders of the last two steps of the predictor. Let

$$y^{(k)} = \xi + \sum_{j=q}^{5q-1} M_j e_k^j + O[e_k^{5q}], \qquad z^{(k)} = \xi + \sum_{j=p}^{5p-1} N_j e_k^j + O[e_k^{5p}],$$

be the penultimate and last steps of any iterative method, with orders of convergence *q* and *p*, respectively. Taking this scheme as a predictor, we introduce the Gaussian quadrature as a corrector and we get four cases with the following iterative formulas:

$$\begin{aligned} \text{(a)} \quad \mathbf{x}^{(k+1)} &= \mathbf{y}^{(k)} - 2K^{-1}F(\mathbf{y}^{(k)}), \qquad \text{(b)} \quad \mathbf{x}^{(k+1)} = z^{(k)} - 2K^{-1}F(z^{(k)}), \\\\ \text{(c)} \quad \mathbf{x}^{(k+1)} &= \mathbf{y}^{(k)} - 2K^{-1}F(z^{(k)}), \qquad \text{(d)} \quad \mathbf{x}^{(k+1)} = z^{(k)} - 2K^{-1}F(\mathbf{y}^{(k)}), \end{aligned} \tag{2}$$

where, in all cases, $K = \sum_{i=1}^{m} \omega_i F'(\eta_i^{(k)})$, with $\eta_i^{(k)} = \frac{1}{2}[(1+\tau_i)z^{(k)} + (1-\tau_i)y^{(k)}]$, and $\omega_i$ and $\tau_i$ are the weights and nodes, respectively, of the orthogonal polynomial of degree *m* which defines the corresponding Gaussian quadrature. Then, $\eta_i^{(k)}$ is calculated by using the points obtained in the last two steps of the predictor.

To simplify the calculations, we use the following notation: $\sum_{j=q}^{5q-1} M_j e_k^j = A_{1(q)}$ and $\sum_{j=p}^{5p-1} N_j e_k^j = A_{2(p)}$, where the subscripts in parentheses denote the value of the smallest power of $e_k$ appearing in the sum. By using this notation, $\eta_i^{(k)}$ can be expressed as

$\eta_i^{(k)} = \xi + \frac{1}{2}(A + \tau_i B)$, where $A = A_{2(p)} + A_{1(q)}$ and $B = A_{2(p)} - A_{1(q)}$. By expanding $F(y^{(k)})$, $F(z^{(k)})$ and $F'(\eta_i^{(k)})$ in Taylor series around *ξ*, we obtain

$$\begin{aligned} F(y^{(k)}) &= F'(\xi)[A_{1(q)} + C_2 A_{1(2q)}^2 + C_3 A_{1(3q)}^3 + C_4 A_{1(4q)}^4] + O[e_k^{5q}], \\ F(z^{(k)}) &= F'(\xi)[A_{2(p)} + C_2 A_{2(2p)}^2 + C_3 A_{2(3p)}^3 + C_4 A_{2(4p)}^4] + O[e_k^{5p}], \\ F'(\eta_i^{(k)}) &= F'(\xi)[I + \tilde{B} + \tilde{C}\tau_i + \tilde{D}\tau_i^2 + \tilde{E}\tau_i^3] + O[e_k^{4q}], \end{aligned}$$

where $\tilde{B} = C_2 A + \frac{3}{4}C_3 A^2 + \frac{1}{2}C_4 A^3$, $\tilde{C} = C_2 B + \frac{3}{4}C_3 (AB + BA) + \frac{1}{2}C_4 (A^2 B + ABA + BA^2)$, $\tilde{D} = \frac{3}{4}C_3 B^2 + \frac{1}{2}C_4 (AB^2 + BAB + B^2 A)$ and $\tilde{E} = \frac{1}{2}C_4 B^3$. We also introduce the following notation: $\sum_{i=1}^{m} \omega_i = \sigma$ and $\frac{1}{\sigma}\sum_{i=1}^{m} \omega_i \tau_i^j = \sigma_j$, with *j* = 1, 2, …, which will allow us to simplify the analysis of the convergence conditions of the described methods.

Now, we develop the expression $K = \sum_{i=1}^{m} \omega_i F'(\eta_i^{(k)})$ appearing in Eq. (2) and we obtain $K = \sigma F'(\xi)[I + K_{1(q)} + K_{2(2q)} + K_{3(3q)}] + O[e_k^{4q}]$, where $K_{1(q)} = C_2(A + \sigma_1 B)_{(q)}$, $K_{2(2q)} = \frac{3}{4}C_3(A^2 + \sigma_1(AB + BA) + \sigma_2 B^2)_{(2q)}$ and $K_{3(3q)} = \frac{1}{2}C_4(A^3 + \sigma_1(A^2B + ABA + BA^2) + \sigma_2(AB^2 + BAB + B^2A) + \sigma_3 B^3)_{(3q)}$. Recalling that $K K^{-1} = I$, we get $K^{-1} = \frac{1}{\sigma}[I + K'_{1(q)} + K'_{2(2q)} + K'_{3(3q)}][F'(\xi)]^{-1} + O[e_k^{4q}]$, where $K'_{1(q)} = -K_{1(q)}$, $K'_{2(2q)} = (K_1^2 - K_2)_{(2q)}$, and $K'_{3(3q)} = (-K_1^3 + K_2K_1 + K_1K_2 - K_3)_{(3q)}$. Therefore, considering case (a), we obtain for $L = 2K^{-1}F(y^{(k)})$ the following expression:

$$\begin{aligned} L &= \frac{2}{\sigma} A_{1(q)} + \frac{2}{\sigma} [(C_2 A_1 + K'_1) A_1]_{(2q)} + \frac{2}{\sigma} [(C_3 A_1^2 + K'_1 C_2 A_1 + K'_2) A_1]_{(3q)} \\ &\quad + \frac{2}{\sigma} [(C_4 A_1^3 + K'_1 C_3 A_1^2 + K'_2 C_2 A_1 + K'_3) A_1]_{(4q)} + O[e_k^{5q}]. \end{aligned}$$

Since $x^{(k+1)} = y^{(k)} - L$, the error equation can be expressed as

#### Design, Analysis, and Applications of Iterative Methods for Solving Nonlinear Systems http://dx.doi.org/10.5772/64106 93

$$\begin{aligned} e_{k+1} &= A_{1(q)} - \frac{2}{\sigma} A_{1(q)} - \frac{2}{\sigma} [(C_2 A_1 + K'_1) A_1]_{(2q)} - \frac{2}{\sigma} [(C_3 A_1^2 + K'_1 C_2 A_1 + K'_2) A_1]_{(3q)} \\ &\quad - \frac{2}{\sigma} [(C_4 A_1^3 + K'_1 C_3 A_1^2 + K'_2 C_2 A_1 + K'_3) A_1]_{(4q)} + O[e_k^{5q}]. \end{aligned}$$


We note that if *σ* = 2, we obtain order of convergence at least 2*q*. The possibility of obtaining a convergence order greater than 2*q* depends on the expression $[(C_2 A_1 + K'_1) A_1]$. Developing it, we get $(C_2 A_1 + K'_1) A_1 = \sigma_1 C_2 (A_1^2)_{(2q)} - (1 + \sigma_1) C_2 (A_2 A_1)_{(p+q)}$. Then, if $\sigma_1 = 0$, the error equation is:

$$\begin{aligned} e_{k+1} &= C_2 (A_2 A_1)_{(p+q)} - \frac{2}{\sigma} [(C_3 A_1^2 + K'_1 C_2 A_1 + K'_2) A_1]_{(3q)} \\ &\quad - \frac{2}{\sigma} [(C_4 A_1^3 + K'_1 C_3 A_1^2 + K'_2 C_2 A_1 + K'_3) A_1]_{(4q)} + O[e_k^{5q}]. \end{aligned} \tag{3}$$

It is clear that the highest order of convergence in case (a) is min{*p* + *q*, 3*q*}. Finally, it is easy to prove that in case (b) the order of convergence of the resulting method is at least *p*. If *σ* = 2, the convergence order is *p* + *q*, unless *σ*<sub>1</sub> = 2 or *C*<sub>2</sub> = 0, in which case the order of convergence is at least 2*q* + *p*. In cases (c) and (d), the convergence order is *q*, which is lower than the order of the predictor. It should be noticed that in case (b), two further functional evaluations are required and a new linear system per step must be solved. This makes the obtained method inefficient from the standpoint of computational efficiency. In addition, we would like to remark that the order of convergence can be greater than *p* + *q*, depending on the expressions *A*<sub>1</sub> and *A*<sub>2</sub>, which represent the errors of the penultimate and last steps of the predictor method, and also on *σ*<sub>1</sub> in case (b). The comments above allow us to state the following result, whose proof can be found in Cordero et al. [19], which establishes the order of convergence of the schemes obtained by using any method as a predictor of order *p* and correcting it by the use of Gaussian quadrature, in case (a).

**Theorem 2.1.** *Let F* : *Ω* ⊂ ℝ*<sup>n</sup>* → ℝ*<sup>n</sup>* *be a sufficiently differentiable function in Ω and F*′(*x*) *continuous and nonsingular at ξ* ∈ *Ω*, *solution of the nonlinear system. Let y*<sup>(*k*)</sup> *and z*<sup>(*k*)</sup> *be the penultimate and last steps, of orders q and p, respectively, of a certain iterative method. Taking this scheme as a predictor, we get a new approximation x*<sup>(*k*+1)</sup> *of ξ given by Eq. (2). Then,*

1. *the obtained method has order of convergence at least q*;
2. *if σ* = 2, *the order of convergence is at least* 2*q*;
3. *if, in addition, σ*<sub>1</sub> = 0, *the order of convergence is at least* min{*p* + *q*, 3*q*}.
In **Table 1**, we show the values of *σ* and *σ*<sub>1</sub> for some Gaussian quadratures.

In terms of computational efficiency, the most efficient methods are those which use fewer nodes and functional evaluations, so we only consider case (a). Also, Theorem 2.1 shows that the order of convergence does not depend on the number of nodes; it only depends on the orders of convergence of the penultimate and last steps of the predictor method. Therefore, it is computationally more efficient to use one or two nodes. We note that the Gauss-Chebyshev quadrature does not fulfill the second condition of Theorem 2.1; then, its order of convergence is *q*. The method obtained by using the Gauss-Radau quadrature with one node does not fulfill the third condition but verifies the second one; hence, its order of convergence is at least 2*q*. The remaining quadratures, with 1, 2, and 3 nodes, satisfy the conditions of Theorem 2.1, and the order of convergence is at least min{*p* + *q*, 3*q*}.


| Number of nodes | Chebyshev *σ* | Chebyshev *σ*<sub>1</sub> | Legendre *σ* | Legendre *σ*<sub>1</sub> | Lobatto *σ* | Lobatto *σ*<sub>1</sub> | Radau *σ* | Radau *σ*<sub>1</sub> |
|---|---|---|---|---|---|---|---|---|
| 1 | *π* | 0 | 2 | 0 | 2 | 0 | 2 | −1 |
| 2 | *π* | 0 | 2 | 0 | 2 | 0 | 2 | 0 |
| 3 | *π* | 0 | 2 | 0 | 2 | 0 | 2 | 0 |

**Table 1.** Quadrature formulas used.
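The entries of **Table 1** can be checked directly from the node-weight pairs of each rule on [−1, 1], under the normalization $\sigma = \sum_i \omega_i$ and $\sigma_1 = \frac{1}{\sigma}\sum_i \omega_i \tau_i$ assumed in the text; the one- and two-node pairs below are the standard ones.

```python
import math

# (nodes, weights) on [-1, 1]; one- and two-node rules only, for illustration
rules = {
    "Chebyshev-1": ([0.0], [math.pi]),
    "Legendre-1":  ([0.0], [2.0]),
    "Lobatto-2":   ([-1.0, 1.0], [1.0, 1.0]),
    "Radau-1":     ([-1.0], [2.0]),
    "Radau-2":     ([-1.0, 1.0 / 3.0], [0.5, 1.5]),
}

def sigmas(tau, w):
    """sigma = sum of weights; sigma_1 = weighted mean of the nodes."""
    sigma = sum(w)
    sigma1 = sum(wi * ti for wi, ti in zip(w, tau)) / sigma
    return sigma, sigma1

for name, (tau, w) in rules.items():
    print(name, sigmas(tau, w))
```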

If we use case (a) and the Gauss-Legendre quadrature with one node, or the Gauss-Lobatto quadrature with one node, as corrector, we obtain the midpoint method, where $K = 2F'\big(\frac{x^{(k)} + y^{(k)}}{2}\big)$. In case of using the Gauss-Radau quadrature with one node, we obtain Newton's method ($K = F'(x^{(k)})$). Finally, if we use the Gauss-Lobatto quadrature with two nodes or the Gauss-Radau quadrature with two nodes as corrector, we obtain the trapezoidal method, with $K = F'(x^{(k)}) + F'(y^{(k)})$, and Noor's scheme, where $K = \frac{1}{2}\big[F'(x^{(k)}) + 3F'\big(\frac{x^{(k)} + 2y^{(k)}}{3}\big)\big]$, respectively.
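For example, the midpoint variant just mentioned is a classical cubically convergent Newton-type scheme. A minimal sketch, with an illustrative test system (not from the original text):

```python
import numpy as np

def midpoint_step(F, J, x):
    """Midpoint variant of Newton's method: with y the Newton step and
    K = 2 F'((x + y)/2), case (a) gives x - [F'((x + y)/2)]^{-1} F(x)."""
    y = x - np.linalg.solve(J(x), F(x))
    return x - np.linalg.solve(J(0.5 * (x + y)), F(x))

F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0*v[0], 2.0*v[1]], [1.0, -1.0]])
x = np.array([0.9, 0.4])
for _ in range(6):
    x = midpoint_step(F, J, x)
```

At the price of one extra Jacobian evaluation per step, the order rises from two to three without any evaluation of *F* beyond Newton's.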

#### **2.2. Weight function procedure**

The different methods obtained in the previous section are not optimal in the sense of the Kung-Traub conjecture [34] when they are applied to scalar equations. By using the weight function technique, we can increase the order of convergence of the designed methods without adding new functional evaluations.

We denote by $X = \mathbb{R}^{n \times n}$ the Banach space of all *n* × *n* real square matrices. The weight function in this context is a Fréchet differentiable function $H : X \to X$ satisfying:

**(i)** $H'(u)(v) = H_1 v$, $H'$ being the first derivative of $H$, $H' : X \to \mathcal{L}(X)$, $H_1 \in \mathbb{R}$, where $\mathcal{L}(X)$ denotes the space of linear mappings from *X* to itself.

**(ii)** $H''(u, v)(w) = H_2 vw$, $H''$ being the second derivative of $H$, $H'' : X \times X \to \mathcal{L}(X)$, $H_2 \in \mathbb{R}$.

Then, the Taylor expansion of *H* around the identity matrix *I*, of size *n* × *n*, gives


$$H(\psi^{(k)}) \approx H_0(I) + H_1(\psi^{(k)} - I) + \frac{1}{2}H_2(\psi^{(k)} - I)^2.$$

Now, by using a relaxed Newton's method as a predictor and a weight function procedure in the corrector step, we design the following families of two-point schemes:

$$\begin{aligned} z^{(k)} &= x^{(k)} - \beta [F'(x^{(k)})]^{-1} F(x^{(k)}),\\ x^{(k+1)} &= z^{(k)} - 2H(\psi^{(k)}) \Big[\sum_{i=1}^{m} \omega_i F'(\eta_i^{(k)})\Big]^{-1} F(x^{(k)}), \end{aligned} \tag{4}$$

where *β* is a nonzero real parameter and $\psi^{(k)} = \Big[\sum_{i=1}^{m} \omega_i F'(x^{(k)})\Big]^{-1} \sum_{i=1}^{m} \omega_i F'(\eta_i^{(k)})$. Here, we have used the same notation previously introduced. The following result establishes the order of convergence of this iterative scheme under some conditions on the function *H*.

**Theorem 2.2.** *Let ξ* ∈ *Ω be a zero of a sufficiently differentiable function F* : *Ω* ⊆ ℝ*<sup>n</sup>* → ℝ*<sup>n</sup>*. *Let us also suppose that the initial estimation x*<sup>(0)</sup> *is close enough to the solution ξ and that F*′(*ξ*) *is nonsingular. The iterative methods (Eq. (4)) have order of convergence four if*

$$\beta = \frac{4(1 + \sigma_1)}{3(1 + 2\sigma_1 + \sigma_2)}$$

*and a sufficiently differentiable function H is chosen satisfying the conditions*

$$H_0(I) = \frac{\sigma}{2}I, \quad H_1 = \frac{\sigma(1 + 2\sigma_1 + 4\sigma_1^2 + 3\sigma_2)}{8(1 + \sigma_1)^2},$$

$$H\_2 = -\frac{3\sigma(-1 - 4\sigma\_1 - 2\sigma\_1^2 + 4\sigma\_1^3 - 4\sigma\_2 - 8\sigma\_1\sigma\_2 + 2\sigma\_1^2\sigma\_2 - 3\sigma\_2^2)}{8(1 + \sigma\_1)^4}$$

*and H*‴ *is a bounded operator, where σ and σ<sub>j</sub>, j* = 1, 2, *depend on the Gaussian quadrature used* (see Ref. [35]).

**Table 2** shows the values of *σ*<sub>1</sub>, *σ*<sub>2</sub>, and *β*, depending on the weights and nodes of the orthogonal polynomials used in the Gaussian quadrature, together with the corresponding values of the weight function and its derivatives, which are used in the iterative scheme.


| Quadrature | No. of nodes | *σ* | *σ*<sub>1</sub> | *σ*<sub>2</sub> | *β* | *H*<sub>0</sub>(*I*) | *H*<sub>1</sub> | *H*<sub>2</sub> |
|---|---|---|---|---|---|---|---|---|
| Gauss-Chebyshev | 1 | *π* | 0 | 0 | 4/3 | (*π*/2)*I* | *π*/8 | 3*π*/8 |
| Gauss-Legendre | 1 | 2 | 0 | 0 | 4/3 | *I* | 1/4 | 3/4 |
| Gauss-Lobatto | 1 | 2 | 0 | 0 | 4/3 | *I* | 1/4 | 3/4 |
| Gauss-Radau | 1 | 2 | −1 | 0 | 0 | (1/2)*I* | − | − |

**Table 2.** Nodes, *H*<sub>0</sub>(*I*), *H*<sub>1</sub> and *H*<sub>2</sub>, for different Gaussian quadratures.
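The closed-form expressions of Theorem 2.2 reproduce the entries of **Table 2**. The helper below evaluates them for the one-node rules; for Gauss-Radau with one node, *σ*<sub>1</sub> = −1 and the formulas for *H*<sub>1</sub> and *H*<sub>2</sub> degenerate, matching the empty entries of the table.

```python
import math

def theorem_2_2(sigma, s1, s2):
    """beta, H1 and H2 from Theorem 2.2 as functions of sigma, sigma_1,
    sigma_2 (H0(I) is (sigma/2) I)."""
    beta = 4.0 * (1.0 + s1) / (3.0 * (1.0 + 2.0 * s1 + s2))
    H1 = sigma * (1.0 + 2.0*s1 + 4.0*s1**2 + 3.0*s2) / (8.0 * (1.0 + s1)**2)
    H2 = -3.0 * sigma * (-1.0 - 4.0*s1 - 2.0*s1**2 + 4.0*s1**3 - 4.0*s2
                         - 8.0*s1*s2 + 2.0*s1**2*s2 - 3.0*s2**2) / (8.0 * (1.0 + s1)**4)
    return beta, H1, H2

print(theorem_2_2(2.0, 0.0, 0.0))      # Gauss-Legendre/Lobatto, 1 node: (4/3, 1/4, 3/4)
print(theorem_2_2(math.pi, 0.0, 0.0))  # Gauss-Chebyshev, 1 node: (4/3, pi/8, 3*pi/8)
```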

Let us remark that the class of methods designed allows the use of any Gaussian quadrature. In fact, when the technique of pseudo-composition was defined in Section 2.1, based also on Gaussian quadrature, the Chebyshev orthogonal polynomials could not be employed, as they do not verify the hypotheses of the main theorem. Nevertheless, with this new design, all the orthogonal polynomials can be applied, and all of them yield optimal methods in the scalar case. In practice, we only use quadrature formulas with one and two nodes because, according to Theorem 2.2, the order of convergence is independent of the number of nodes.

For the one-dimensional case, according to the Kung-Traub's conjecture [34], the obtained fourth-order methods are optimal (in case of Gauss-Radau with one node, classical Newton's method is obtained). Other new methods are obtained in the rest of cases.


| Quadrature | No. of nodes | Name |
|---|---|---|
| Gauss-Chebyshev | 1 | GC1 |
| Gauss-Legendre | 1 | GLe1 |
| Gauss-Lobatto | 2 | GLo2 |
| Gauss-Radau | 2 | GR2 |

**Table 3.** Notation and weight functions for different Gaussian quadrature formulas.

In **Table 3**, we show the notation used for each iterative scheme coming from the respective Gaussian quadrature rules. Let us note that the iterative method coming from Gauss-Lobatto with one node is the same as the one resulting from the application of Gauss-Legendre, also with one node. In this case, both coincide with the fourth-order procedure recently published by Sharma et al. [28]. Let us also note that other weight functions would lead to other new schemes. For example, the expression of the method obtained by using Gauss-Legendre quadrature with one node is

Design, Analysis, and Applications of Iterative Methods for Solving Nonlinear Systems http://dx.doi.org/10.5772/64106 97

$$\mathbf{y}^{(k)} = \mathbf{x}^{(k)} - \frac{4}{3} [F'(\mathbf{x}^{(k)})]^{-1} F(\mathbf{x}^{(k)}), \quad \mathbf{z}^{(k)} = \frac{1}{2} (\mathbf{x}^{(k)} + \mathbf{y}^{(k)}),$$

$$\mathbf{x}^{(k+1)} = \mathbf{x}^{(k)} - \frac{9}{8} [F'(\mathbf{z}^{(k)})]^{-1} F(\mathbf{x}^{(k)}) + \left(\frac{1}{2}I - \frac{3}{8} [F'(\mathbf{x}^{(k)})]^{-1} F'(\mathbf{z}^{(k)})\right) [F'(\mathbf{x}^{(k)})]^{-1} F(\mathbf{x}^{(k)}).$$
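The steps above translate directly into code. The following is a minimal NumPy sketch of the GLe1 scheme; the test system *F*(*x*, *y*) = (*x*² + *y*² − 1, *x* − *y*), its Jacobian and the starting point are illustrative choices, not taken from the chapter.

```python
import numpy as np

def gle1_step(F, J, x):
    """One iteration of the GLe1 scheme: y = x - (4/3)[F'(x)]^{-1}F(x),
    z = (x + y)/2, followed by the fourth-order correction."""
    Fx, Jx = F(x), J(x)
    u = np.linalg.solve(Jx, Fx)            # [F'(x)]^{-1} F(x)
    z = x - (2.0 / 3.0) * u                # midpoint of x and y
    Jz = J(z)
    t1 = np.linalg.solve(Jz, Fx)           # [F'(z)]^{-1} F(x)
    t2 = np.linalg.solve(Jx, Jz @ u)       # [F'(x)]^{-1} F'(z) [F'(x)]^{-1} F(x)
    return x - 9.0 / 8.0 * t1 + 0.5 * u - 3.0 / 8.0 * t2

# Illustrative system with root (sqrt(2)/2, sqrt(2)/2)
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])

x = np.array([0.8, 0.6])
for _ in range(5):
    x = gle1_step(F, J, x)
print(np.allclose(x, np.sqrt(2.0) / 2.0))   # True
```

Note that each iteration uses two Jacobian evaluations and one residual evaluation, in agreement with the two-point structure of the scheme.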

#### **2.3. Divided difference operator**


96 Nonlinear Systems - Design, Analysis, Estimation and Control


In this section, we present a design published by Cordero et al. [30] of some families of parametric iterative methods for solving nonlinear equations by means of some known schemes and afterwards extend one of them to systems of nonlinear equations. For this purpose, we use Ostrowski's [10] and Chun's [36] methods with iterative schemes

$$x_{k+1} = y_k - \frac{f(x_k)}{f(x_k) - 2f(y_k)} \frac{f(y_k)}{f'(x_k)} \quad \text{and} \quad x_{k+1} = y_k - \frac{f(x_k) + 2f(y_k)}{f(x_k)} \frac{f(y_k)}{f'(x_k)}, \text{ respectively, where}$$

*y*<sub>*k*</sub> is a Newton step. We propose a new family as a generalization of the previous methods in the following form:

$$y_{k} = x_{k} - \alpha \frac{f(x_{k})}{f'(x_{k})}$$

$$x_{k+1} = y_{k} - \left[ \frac{f(x_{k})}{a_1 f(x_{k}) + a_2 f(y_{k})} + \frac{b_1 f(x_{k}) + b_2 f(y_{k})}{f(x_{k})} \right] \frac{f(y_{k})}{f'(x_{k})},\tag{5}$$

where *α*, *a*1, *a*2, *b*1, and *b*2 are real parameters. In the following result, we show which values of these parameters are necessary to guarantee at least order of convergence 4.
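Family (5) is easy to explore numerically. The sketch below implements one generic step and runs the member *α* = 1, *a*1 = 1, *a*2 = −2, *b*1 = *b*2 = 0, which reproduces Ostrowski's method; the equation *x*³ − 2 = 0 and the initial guess are illustrative choices.

```python
def family5_step(f, df, x, alpha, a1, a2, b1, b2):
    """One step of the parametric family (Eq. 5)."""
    fx = f(x)
    y = x - alpha * fx / df(x)
    fy = f(y)
    weight = fx / (a1 * fx + a2 * fy) + (b1 * fx + b2 * fy) / fx
    return y - weight * fy / df(x)

f  = lambda x: x ** 3 - 2.0        # illustrative equation, root 2**(1/3)
df = lambda x: 3.0 * x ** 2

x = 1.5
for _ in range(4):                 # Ostrowski member: alpha=1, a1=1, a2=-2, b1=b2=0
    if abs(f(x)) < 1e-14:          # stop before dividing by a vanishing f(x)
        break
    x = family5_step(f, df, x, 1.0, 1.0, -2.0, 0.0, 0.0)
print(abs(x - 2.0 ** (1.0 / 3.0)) < 1e-12)   # True
```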

**Theorem 2.3.** *Let f* : *I* ⊆ ℝ → ℝ *be a sufficiently differentiable function in an open interval I, such that ξ* ∈ *I is a simple root of the nonlinear equation f*(*x*) = 0. *If α* = 1, *a*<sub>2</sub> = *a*<sub>1</sub><sup>2</sup>(*b*<sub>2</sub> − 2) *and b*<sub>1</sub> = 1 − 1/*a*<sub>1</sub>, *for all a*<sub>1</sub> *and b*<sub>2</sub> ∈ ℝ *with a*<sub>1</sub> ≠ 0, *then the sequence* {*x*<sub>*k*</sub>}<sub>*k*≥0</sub> *obtained from (Eq. 5) converges to ξ with local order of convergence at least four. In this case, the error equation is*

$$e_{k+1} = ((5 - a_1(b_2 - 2)^2)c_2^3 - c_2c_3)e_k^4 + O[e_k^5],$$

*where e*<sub>*k*</sub> = *x*<sub>*k*</sub> − *ξ and c*<sub>*j*</sub> = (1/*j*!) *f*<sup>(*j*)</sup>(*ξ*)/*f*′(*ξ*), *j* ≥ 2.


It is easy to prove this result by using any symbolic software, such as Wolfram Mathematica. To extend family (Eq. 5) to the multivariate case, we need to rewrite its iterative expression in such a way that no functional evaluations of *f* remain in the denominator, as they become vectors in the multidimensional case. So, let us note that the first step of (Eq. 5) gives *y*<sub>*k*</sub> − *x*<sub>*k*</sub> = −*α f*(*x*<sub>*k*</sub>)/*f*′(*x*<sub>*k*</sub>). By using this, we can rewrite the quotient *f*(*y*<sub>*k*</sub>)/*f*(*x*<sub>*k*</sub>) as

$$\frac{f(y_k)}{f(x_k)} = 1 - \alpha \frac{f[x_k, y_k]}{f'(x_k)},$$

where *f*[*x*<sub>*k*</sub>, *y*<sub>*k*</sub>] = (*f*(*x*<sub>*k*</sub>) − *f*(*y*<sub>*k*</sub>))/(*x*<sub>*k*</sub> − *y*<sub>*k*</sub>) is the first-order divided difference. By using this transformation, the proposed family (Eq. 5) is fully extensible to several variables in the following way

$$\begin{aligned} \mathbf{y}^{(k)} &= \mathbf{x}^{(k)} - \alpha [F'(\mathbf{x}^{(k)})]^{-1} F(\mathbf{x}^{(k)}),\\ \mathbf{x}^{(k+1)} &= \mathbf{y}^{(k)} - (G_1(\mathbf{x}^{(k)}, \mathbf{y}^{(k)}) + G_2(\mathbf{x}^{(k)}, \mathbf{y}^{(k)})) [F'(\mathbf{x}^{(k)})]^{-1} F(\mathbf{y}^{(k)}),\\ G_1(\mathbf{x}^{(k)}, \mathbf{y}^{(k)}) &= [(a_1 + a_2)I - \alpha a_2 [F'(\mathbf{x}^{(k)})]^{-1} [\mathbf{x}^{(k)}, \mathbf{y}^{(k)}; F]]^{-1},\\ G_2(\mathbf{x}^{(k)}, \mathbf{y}^{(k)}) &= (b_1 + b_2)I - \alpha b_2 [F'(\mathbf{x}^{(k)})]^{-1} [\mathbf{x}^{(k)}, \mathbf{y}^{(k)}; F],\end{aligned} \tag{6}$$

where [*x*<sup>(*k*)</sup>, *y*<sup>(*k*)</sup>; *F*] denotes the divided difference operator of *F* on *x*<sup>(*k*)</sup> and *y*<sup>(*k*)</sup>, *I* is the identity matrix, and *F*′(*x*<sup>(*k*)</sup>) is the Jacobian matrix of the system.
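A sketch of scheme (6) in NumPy follows. The divided difference operator is computed columnwise by the usual construction satisfying [*x*, *y*; *F*](*x* − *y*) = *F*(*x*) − *F*(*y*); the test system and the parameter choice (the Ostrowski-type member *α* = 1, *a*1 = 1, *a*2 = −2, *b*1 = *b*2 = 0) are illustrative.

```python
import numpy as np

def divided_difference(F, x, y):
    """Columnwise first-order divided difference [x, y; F]; it satisfies
    [x, y; F](x - y) = F(x) - F(y) by a telescoping argument."""
    n = len(x)
    M = np.zeros((n, n))
    for j in range(n):
        u = np.concatenate([y[:j], x[j:]])          # (y_1..y_{j-1}, x_j..x_n)
        v = np.concatenate([y[:j + 1], x[j + 1:]])  # (y_1..y_j, x_{j+1}..x_n)
        M[:, j] = (F(u) - F(v)) / (x[j] - y[j])
    return M

def family6_step(F, J, x, alpha, a1, a2, b1, b2):
    """One step of the multidimensional family (Eq. 6)."""
    n = len(x)
    Jx = J(x)
    y = x - alpha * np.linalg.solve(Jx, F(x))
    T = np.linalg.solve(Jx, divided_difference(F, x, y))   # [F'(x)]^{-1}[x,y;F]
    G1 = np.linalg.inv((a1 + a2) * np.eye(n) - alpha * a2 * T)
    G2 = (b1 + b2) * np.eye(n) - alpha * b2 * T
    return y - (G1 + G2) @ np.linalg.solve(Jx, F(y))

F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 1.0, v[0] - v[1]])  # illustrative
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])

x = np.array([0.8, 0.6])
for _ in range(5):
    if np.linalg.norm(F(x)) < 1e-13:     # stop once the residual is negligible
        break
    x = family6_step(F, J, x, 1.0, 1.0, -2.0, 0.0, 0.0)
print(np.allclose(x, np.sqrt(2.0) / 2.0))   # True
```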

Since the analysis of the local convergence is based on Taylor series expansion around the solution, we need to obtain the corresponding development of the divided difference operator. Let us denote by [*x*, *y*; *F*] the divided difference operator defined by Ortega and Rheinboldt [37] as the function [·, ·; *F*] : *Ω* × *Ω* ⊆ ℝ*<sup>n</sup>* × ℝ*<sup>n</sup>* → *L*(ℝ*<sup>n</sup>*) that satisfies [*x*, *y*; *F*](*x* − *y*) = *F*(*x*) − *F*(*y*), ∀*x*, *y* ∈ *Ω*. To achieve this, we use the Genocchi-Hermite formula (see [37])

$$[\mathbf{x}, \mathbf{x} + h; F] = \int_0^1 F'(\mathbf{x} + th)\,dt$$

and, by developing ′( + ℎ) in Taylor series around *x*, we obtain

$$\int_0^1 F'(\mathbf{x} + th)\,dt = F'(\mathbf{x}) + \frac{1}{2}F''(\mathbf{x})h + \frac{1}{6}F'''(\mathbf{x})h^2 + O[h^3].\tag{7}$$
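The Genocchi-Hermite formula can be checked numerically: applied to *h*, the integral of the Jacobian along the segment must reproduce *F*(*x* + *h*) − *F*(*x*), by the fundamental theorem of calculus. The function below is an illustrative choice.

```python
import numpy as np

F = lambda v: np.array([v[0] ** 2 * v[1], np.sin(v[1]) + v[0]])   # illustrative
Jac = lambda v: np.array([[2.0 * v[0] * v[1], v[0] ** 2],
                          [1.0, np.cos(v[1])]])

x = np.array([0.3, -0.2])
h = np.array([0.05, 0.08])

# 20-node Gauss-Legendre quadrature, mapped from [-1, 1] to [0, 1]
nodes, weights = np.polynomial.legendre.leggauss(20)
t, w = 0.5 * (nodes + 1.0), 0.5 * weights
dd = sum(wi * Jac(x + ti * h) for wi, ti in zip(w, t))   # approximates [x, x+h; F]
print(np.allclose(dd @ h, F(x + h) - F(x)))              # True
```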

Assuming that


$$\begin{aligned} F(\mathbf{x}) &= F'(\xi)(e_k + C_2 e_k^2 + C_3 e_k^3 + C_4 e_k^4) + O[e_k^5], \\ F'(\mathbf{x}) &= F'(\xi)(I + 2C_2 e_k + 3C_3 e_k^2 + 4C_4 e_k^3) + O[e_k^4], \\ F''(\mathbf{x}) &= F'(\xi)(2C_2 + 6C_3 e_k + 12C_4 e_k^2) + O[e_k^3], \\ F'''(\mathbf{x}) &= F'(\xi)(6C_3 + 24C_4 e_k) + O[e_k^2], \end{aligned} \tag{8}$$

and replacing these developments in the formula of Genocchi-Hermite, denoting the second point of the divided difference by *y* = *x* + *h* and the errors at both points by *e*<sub>*x*</sub> = *x* − *ξ* and *e*<sub>*y*</sub> = *y* − *ξ*, we have [*x*, *y*; *F*] = *F*′(*ξ*)(*I* + *C*2(*e*<sub>*x*</sub> + *e*<sub>*y*</sub>) + *C*3(*e*<sub>*x*</sub><sup>2</sup> + *e*<sub>*x*</sub>*e*<sub>*y*</sub> + *e*<sub>*y*</sub><sup>2</sup>)) + ⋯. In particular, if *y* is the approximation of the solution provided by Newton's method, that is, *h* = −[*F*′(*x*)]<sup>−1</sup>*F*(*x*), we obtain [*x*, *y*; *F*] = *F*′(*ξ*)(*I* + *C*2*e*<sub>*k*</sub> + (*C*2<sup>2</sup> + *C*3)*e*<sub>*k*</sub><sup>2</sup>) + *O*[*e*<sub>*k*</sub><sup>3</sup>]. These tools allow us to prove the following result.

**Theorem 2.4.** *Let F* : *Ω* ⊆ ℝ*<sup>n</sup>* → ℝ*<sup>n</sup>* *be a sufficiently differentiable function in a convex set Ω*, *and ξ* ∈ *Ω be a solution of the nonlinear system of equations F*(*x*) = 0. *Then, the sequence* {*x*<sup>(*k*)</sup>}<sub>*k*≥0</sub> *obtained by using expression (6) converges to ξ with local order of convergence at least four if α* = 1, *a*<sub>2</sub> = *a*<sub>1</sub><sup>2</sup>(*b*<sub>2</sub> − 2) *and b*<sub>1</sub> = 1 − 1/*a*<sub>1</sub>, *for all a*<sub>1</sub> *and b*<sub>2</sub> ∈ ℝ *with a*<sub>1</sub> ≠ 0. *The error equation is*

$$e_{k+1} = ((5 - a_1(b_2 - 2)^2)C_2^3 - C_2C_3)e_k^4 + O[e_k^5],$$

*where e*<sub>*k*</sub> = *x*<sup>(*k*)</sup> − *ξ and C*<sub>*j*</sub> = (1/*j*!)[*F*′(*ξ*)]<sup>−1</sup>*F*<sup>(*j*)</sup>(*ξ*), *j* ≥ 2.


*Proof:* By using Taylor expansion around *ξ*, we obtain:

$$F(\mathbf{x}^{(k)}) = F'(\xi)(e_k + C_2e_k^2 + C_3e_k^3 + C_4e_k^4) + O[e_k^5],$$

$$F'(\mathbf{x}^{(k)}) = F'(\xi)(I + 2C_2e_k + 3C_3e_k^2 + 4C_4e_k^3) + O[e_k^4].$$

Let us consider [*F*′(*x*<sup>(*k*)</sup>)]<sup>−1</sup> = (*I* + *X*2*e*<sub>*k*</sub> + *X*3*e*<sub>*k*</sub><sup>2</sup> + *X*4*e*<sub>*k*</sub><sup>3</sup>)[*F*′(*ξ*)]<sup>−1</sup> + *O*[*e*<sub>*k*</sub><sup>4</sup>]. Forcing [*F*′(*x*<sup>(*k*)</sup>)]<sup>−1</sup>*F*′(*x*<sup>(*k*)</sup>) = *I*, we get *X*2 = −2*C*2, *X*3 = 4*C*2<sup>2</sup> − 3*C*3 and *X*4 = −4*C*4 + 6*C*3*C*2 + 6*C*2*C*3 − 8*C*2<sup>3</sup>. These expressions allow us to obtain, for the first step of iterative formula (6),

$$\mathbf{y}^{(k)} = \xi + (1 - \alpha)e_k - \alpha(A_2e_k^2 + A_3e_k^3 + A_4e_k^4) + O[e_k^5],\tag{9}$$

where *A*2 = −*C*2, *A*3 = 2*C*2<sup>2</sup> − 2*C*3 and *A*4 = −4*C*2<sup>3</sup> + 4*C*2*C*3 + 3*C*3*C*2 − 3*C*4. By using these results and the Taylor series expansion around *ξ*, we obtain

$$F(\mathbf{y}^{(k)}) = F'(\xi)(B_1e_k + B_2e_k^2 + B_3e_k^3 + B_4e_k^4) + O[e_k^5],$$

where *B*1 = *β*, *B*2 = (*α* + *β*<sup>2</sup>)*C*2, *B*3 = −*αA*3 + 2*αβC*2<sup>2</sup> + *β*<sup>3</sup>*C*3, *B*4 = −*αA*4 + *α*<sup>2</sup>*C*2<sup>3</sup> − 2*αβC*2*A*3 + 3*αβ*<sup>2</sup>*C*3*C*2 + *β*<sup>4</sup>*C*4 and *β* = 1 − *α*. We calculate the Taylor expansion of [*x*<sup>(*k*)</sup>, *y*<sup>(*k*)</sup>; *F*] by using Eq. (9): [*x*<sup>(*k*)</sup>, *y*<sup>(*k*)</sup>; *F*] = *F*′(*ξ*)(*I* + *D*2*e*<sub>*k*</sub> + *D*3*e*<sub>*k*</sub><sup>2</sup> + *D*4*e*<sub>*k*</sub><sup>3</sup>) + *O*[*e*<sub>*k*</sub><sup>4</sup>], where *D*2 = (2 − *α*)*C*2, *D*3 = *αC*2<sup>2</sup> + (3 − 3*α* + *α*<sup>2</sup>)*C*3 and *D*4 = 2*αC*2*C*3 + *α*(3 − 2*α*)*C*3*C*2 − 2*αC*2<sup>3</sup> + (4 − 6*α* + 4*α*<sup>2</sup> − *α*<sup>3</sup>)*C*4.

Then,

$$M = (a_1 + a_2)I - \alpha a_2[F'(\mathbf{x}^{(k)})]^{-1}[\mathbf{x}^{(k)}, \mathbf{y}^{(k)}; F] = a_1 I + E_2 e_k + E_3 e_k^2 + E_4 e_k^3 + O[e_k^4],$$

where *E*2 = *αa*2*C*2, *E*3 = 3*αC*2<sup>2</sup> + *α*(*α* − 3)*C*3 and *E*4 = 6*αC*2*C*3 − 2*αC*2<sup>3</sup> − 4*C*4 + 5*α*(2 − *α*)*C*3*C*2 + (4 − 6*α* + 4*α*<sup>2</sup> − *α*<sup>3</sup>)*C*4.

Thus, we obtain *G*1(*x*<sup>(*k*)</sup>, *y*<sup>(*k*)</sup>) as the inverse of the matrix *M*: *G*1(*x*<sup>(*k*)</sup>, *y*<sup>(*k*)</sup>) = (1/*a*1)*I* + *Y*2*e*<sub>*k*</sub> + *Y*3*e*<sub>*k*</sub><sup>2</sup> + *Y*4*e*<sub>*k*</sub><sup>3</sup> + *O*[*e*<sub>*k*</sub><sup>4</sup>], where *Y*2 = −*E*2/*a*1<sup>2</sup>, *Y*3 = *E*2<sup>2</sup>/*a*1<sup>3</sup> − *E*3/*a*1<sup>2</sup> and *Y*4 = −*E*2<sup>3</sup>/*a*1<sup>4</sup> + (*E*2*E*3 + *E*3*E*2)/*a*1<sup>3</sup> − *E*4/*a*1<sup>2</sup>. Analogously, *G*2(*x*<sup>(*k*)</sup>, *y*<sup>(*k*)</sup>) = *b*1*I* + *F*2*e*<sub>*k*</sub> + *F*3*e*<sub>*k*</sub><sup>2</sup> + *F*4*e*<sub>*k*</sub><sup>3</sup> + *O*[*e*<sub>*k*</sub><sup>4</sup>], where *F*2 = *αb*2*C*2, *F*3 = −*αb*2[2*C*2<sup>2</sup> − (*α* − 3)*C*3] and *F*4 = *b*2[*α*(6 − 4*α* + *α*<sup>2</sup>)*C*4 − 6*α*(2 − *α*)*C*3*C*2 + 4(*α* + 1)*C*2<sup>3</sup> − 6(*α* + 1)*C*2*C*3].

Finally, we obtain the error equation of the proposed method

$$e_{k+1} = H_1e_k + H_2e_k^2 + H_3e_k^3 + H_4e_k^4 + O[e_k^5],$$

where *H*1 = (1/*a*1 + *b*1 − 1)(*α* − 1). If *α* = 1, then *H*1 = 0 and the error equation takes the form *e*<sub>*k*+1</sub> = *H*2′*e*<sub>*k*</sub><sup>2</sup> + *H*3′*e*<sub>*k*</sub><sup>3</sup> + *H*4′*e*<sub>*k*</sub><sup>4</sup> + *O*[*e*<sub>*k*</sub><sup>5</sup>], where *H*2′ = −(1/*a*1 + *b*1 − 1)*C*2. We note that if *b*1 = 1 − 1/*a*1, then *H*2′ = 0. We introduce this value of *b*1 and obtain the new form of the error equation *e*<sub>*k*+1</sub> = *H*3″*e*<sub>*k*</sub><sup>3</sup> + *H*4″*e*<sub>*k*</sub><sup>4</sup> + *O*[*e*<sub>*k*</sub><sup>5</sup>], where *H*3″ = (1/*a*1<sup>2</sup>)(*a*2 − *a*1<sup>2</sup>(*b*2 − 2))*C*2<sup>2</sup>. Finally, if *a*2 = *a*1<sup>2</sup>(*b*2 − 2), the error equation is:

$$e_{k+1} = -[(a_1(b_2 - 2)^2 - 5)C_2^3 + C_2C_3]e_k^4 + O[e_k^5]$$

and the proof is finished.□
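The fourth order stated in Theorems 2.3 and 2.4 can also be observed computationally. The sketch below iterates a member that is neither Ostrowski's nor Chun's method (*a*1 = 2, *b*2 = 1, hence *a*2 = *a*1²(*b*2 − 2) = −4 and *b*1 = 1 − 1/*a*1 = 1/2) on the illustrative equation *x*³ − 2 = 0, using high-precision arithmetic so that the very small errors remain representable, and estimates the computational order of convergence from |*f*(*x*<sub>*k*</sub>)| ∝ |*e*<sub>*k*</sub>|.

```python
from decimal import Decimal, getcontext

getcontext().prec = 600              # enough digits to resolve errors near 1e-350

f = lambda x: x ** 3 - 2             # illustrative equation, root 2**(1/3)
df = lambda x: 3 * x ** 2

def step(x, alpha, a1, a2, b1, b2):  # one step of family (5)
    fx = f(x)
    y = x - alpha * fx / df(x)
    fy = f(y)
    w = fx / (a1 * fx + a2 * fy) + (b1 * fx + b2 * fy) / fx
    return y - w * fy / df(x)

# Member a1 = 2, b2 = 1, so a2 = a1^2(b2 - 2) = -4 and b1 = 1 - 1/a1 = 1/2
x, logs = Decimal("1.3"), []
for _ in range(4):
    x = step(x, 1, 2, -4, Decimal("0.5"), 1)
    logs.append(abs(f(x)).ln())

order = logs[-1] / logs[-2]          # approximate computational order of convergence
print(Decimal("3.5") < order < Decimal("4.5"))   # True
```

The ratio of consecutive logarithms of the residuals stabilizes near 4, in agreement with the theorems.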


By using the same hypothesis as in Theorem 2.4, the iterative scheme of the class (Eq. 6) takes the form:

$$\mathbf{y}^{(k)} = \mathbf{x}^{(k)} - [F'(\mathbf{x}^{(k)})]^{-1} F(\mathbf{x}^{(k)}),$$

$$\mathbf{x}^{(k+1)} = \mathbf{y}^{(k)} - (G_1(\mathbf{x}^{(k)}, \mathbf{y}^{(k)}) + G_2(\mathbf{x}^{(k)}, \mathbf{y}^{(k)}))[F'(\mathbf{x}^{(k)})]^{-1} F(\mathbf{y}^{(k)}),$$

$$G_1(\mathbf{x}^{(k)}, \mathbf{y}^{(k)}) = \frac{1}{a_1} \left[ (1 + a_1b_2 - 2a_1)I - a_1(b_2 - 2)[F'(\mathbf{x}^{(k)})]^{-1} [\mathbf{x}^{(k)}, \mathbf{y}^{(k)}; F] \right]^{-1}, \tag{10}$$

$$G_2(\mathbf{x}^{(k)}, \mathbf{y}^{(k)}) = \frac{1}{a_1} \left[ (a_1 + a_1b_2 - 1)I - a_1b_2[F'(\mathbf{x}^{(k)})]^{-1} [\mathbf{x}^{(k)}, \mathbf{y}^{(k)}; F] \right].$$

In the following, we propose some particular cases:

**1.** When *a*1 = 1, the iterative expression, with *G*(*x*<sup>(*k*)</sup>, *y*<sup>(*k*)</sup>) = *G*1(*x*<sup>(*k*)</sup>, *y*<sup>(*k*)</sup>) + *G*2(*x*<sup>(*k*)</sup>, *y*<sup>(*k*)</sup>), takes the form:

$$\begin{aligned} G(\mathbf{x}^{(k)}, \mathbf{y}^{(k)}) &= \left[ (b_2 - 1)I - (b_2 - 2)[F'(\mathbf{x}^{(k)})]^{-1}[\mathbf{x}^{(k)}, \mathbf{y}^{(k)}; F] \right]^{-1} \\ &\quad + b_2I - b_2[F'(\mathbf{x}^{(k)})]^{-1}[\mathbf{x}^{(k)}, \mathbf{y}^{(k)}; F] \end{aligned}$$

and we have a family of schemes with interesting particular cases, among others,

**a)** If *b*2 = 2, Chun's method transferred to systems is obtained

$$\mathbf{x}^{(k+1)} = \mathbf{y}^{(k)} - \left(3I - 2[F'(\mathbf{x}^{(k)})]^{-1}[\mathbf{x}^{(k)}, \mathbf{y}^{(k)}; F]\right)[F'(\mathbf{x}^{(k)})]^{-1}F(\mathbf{y}^{(k)}).$$

**b)** If *b*2 = 0, we get Ostrowski's scheme transferred to systems

$$\mathbf{x}^{(k+1)} = \mathbf{y}^{(k)} - \left(-I + 2[F'(\mathbf{x}^{(k)})]^{-1}[\mathbf{x}^{(k)}, \mathbf{y}^{(k)}; F]\right)^{-1}[F'(\mathbf{x}^{(k)})]^{-1}F(\mathbf{y}^{(k)}).$$

**2.** When *b*2 = 0 for any *a*1 ≠ 0, the iterative expression of the parametric family is

$$G(\mathbf{x}^{(k)}, \mathbf{y}^{(k)}) = \left[(1 - 2a_1)I + 2a_1[F'(\mathbf{x}^{(k)})]^{-1}[\mathbf{x}^{(k)}, \mathbf{y}^{(k)}; F]\right]^{-1}\left[(3 - 2a_1)I + 2(a_1 - 1)[F'(\mathbf{x}^{(k)})]^{-1}[\mathbf{x}^{(k)}, \mathbf{y}^{(k)}; F]\right].$$

If we express −2(*a*1 − 1) = *β*, we get King's family transferred to systems

$$\mathbf{x}^{(k+1)} = \mathbf{y}^{(k)} - \left[(\beta - 1)I - (\beta - 2)[F'(\mathbf{x}^{(k)})]^{-1}[\mathbf{x}^{(k)}, \mathbf{y}^{(k)}; F]\right]^{-1}\left[(1 + \beta)I - \beta[F'(\mathbf{x}^{(k)})]^{-1}[\mathbf{x}^{(k)}, \mathbf{y}^{(k)}; F]\right][F'(\mathbf{x}^{(k)})]^{-1}F(\mathbf{y}^{(k)}).$$

**3.** When *b*2 = 1, for any *a*1 ≠ 0 and *a*1 ≠ 1, the last step of the method (Eq. 10) takes the form:

$$\begin{split} \mathbf{x}^{(k+1)} = \mathbf{y}^{(k)} - \Big(\frac{1}{a_1}\left[(1 - a_1)I + a_1[F'(\mathbf{x}^{(k)})]^{-1}[\mathbf{x}^{(k)}, \mathbf{y}^{(k)}; F]\right]^{-1} \\ + \frac{1}{a_1}\left[(2a_1 - 1)I - a_1[F'(\mathbf{x}^{(k)})]^{-1}[\mathbf{x}^{(k)}, \mathbf{y}^{(k)}; F]\right]\Big)[F'(\mathbf{x}^{(k)})]^{-1}F(\mathbf{y}^{(k)}). \end{split}$$
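Scheme (10) can be run directly. The sketch below codes its three steps, with the divided difference computed columnwise, and applies the Chun-type member *a*1 = 1, *b*2 = 2 to an illustrative system (both the system and the starting point are our own choices).

```python
import numpy as np

def divided_difference(F, x, y):
    """Columnwise first-order divided difference [x, y; F]."""
    n = len(x)
    M = np.zeros((n, n))
    for j in range(n):
        u = np.concatenate([y[:j], x[j:]])
        v = np.concatenate([y[:j + 1], x[j + 1:]])
        M[:, j] = (F(u) - F(v)) / (x[j] - y[j])
    return M

def eq10_step(F, J, x, a1, b2):
    """One step of (Eq. 10), i.e. (Eq. 6) with alpha=1, a2=a1^2(b2-2), b1=1-1/a1."""
    n = len(x)
    I = np.eye(n)
    Jx = J(x)
    y = x - np.linalg.solve(Jx, F(x))
    T = np.linalg.solve(Jx, divided_difference(F, x, y))
    G1 = np.linalg.inv((1 + a1 * b2 - 2 * a1) * I - a1 * (b2 - 2) * T) / a1
    G2 = ((a1 + a1 * b2 - 1) * I - a1 * b2 * T) / a1
    return y - (G1 + G2) @ np.linalg.solve(Jx, F(y))

F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 1.0, v[0] - v[1]])  # illustrative
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])

x = np.array([0.8, 0.6])
for _ in range(5):
    if np.linalg.norm(F(x)) < 1e-13:
        break
    x = eq10_step(F, J, x, 1.0, 2.0)      # Chun-type member (a1 = 1, b2 = 2)
print(np.allclose(x, np.sqrt(2.0) / 2.0))  # True
```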

## **3. Dynamic studies of some methods**

In recent years, a new branch of the analysis of iterative methods for solving nonlinear equations or systems has gained relevance: the dynamical analysis of the rational functions associated with the fixed point operator of the iterative scheme applied to polynomials. By using complex or real dynamics techniques, the stability and reliability of a method can be checked. Indeed, if a parametric family of iterative procedures is considered, this kind of analysis allows selecting those elements of the class with better stability and also knowing which ones behave chaotically even on the simplest functions, such as low-degree polynomials.

The use of these dynamical tools is very frequent for scalar iterative methods; see, for example, Refs. [38–45] and the references therein. In the multidimensional case, however, it is an emerging area of research.

Now, let us recall some basic concepts on complex dynamics. Given a rational function *R* : Ĉ → Ĉ, where Ĉ is the Riemann sphere, the *orbit of a point* *z*0 ∈ Ĉ is defined as {*z*0, *R*(*z*0), *R*<sup>2</sup>(*z*0), …, *R*<sup>*n*</sup>(*z*0), …}. A point *z*\* ∈ Ĉ is called a *fixed point* of *R*(*z*) if it verifies that *R*(*z*\*) = *z*\*. Moreover, *z*\* is called a *periodic point* of period *p* > 1 if it is a point such that *R*<sup>*p*</sup>(*z*\*) = *z*\* but *R*<sup>*k*</sup>(*z*\*) ≠ *z*\*, for each *k* < *p*. Moreover, a point *z*\* is called *pre-periodic* if it is not periodic but there exists a *k* > 0 such that *R*<sup>*k*</sup>(*z*\*) is periodic.

There exist different types of fixed points depending on their associated multiplier |*R*′(*z*\*)|. Taking the associated multiplier into account, a fixed point *z*\* is called: *superattracting* if |*R*′(*z*\*)| = 0, *attracting* if |*R*′(*z*\*)| < 1, *repulsive* if |*R*′(*z*\*)| > 1, and *parabolic* if |*R*′(*z*\*)| = 1. The fixed point operator of any iterative method on an arbitrary polynomial *p*(*z*) is a rational function. The fixed points of this rational function that do not correspond to the roots of the polynomial *p*(*z*) are called *strange fixed points*. On the other hand, a *critical point* *z*\* is a point satisfying *R*′(*z*\*) = 0.
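These notions are straightforward to compute. As an illustration outside the chapter's family, consider Newton's fixed point operator on *p*(*z*) = *z*² − 1, that is, *R*(*z*) = (*z*² + 1)/(2*z*): the roots *z* = ±1 are fixed points with multiplier *R*′(±1) = 0, hence superattracting.

```python
def classify(multiplier, tol=1e-12):
    """Type of a fixed point z* from its associated multiplier |R'(z*)|."""
    m = abs(multiplier)
    if m < tol:
        return "superattracting"
    if m < 1.0:
        return "attracting"
    if m > 1.0:
        return "repulsive"
    return "parabolic"

# Newton's operator on p(z) = z^2 - 1 and its derivative
R = lambda z: (z * z + 1.0) / (2.0 * z)
dR = lambda z: (z * z - 1.0) / (2.0 * z * z)

for zstar in (1.0, -1.0):
    assert abs(R(zstar) - zstar) < 1e-14    # z* is a fixed point of R
    print(classify(dR(zstar)))              # prints "superattracting" twice
```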

The basin of attraction of an attractor *α* is defined as the set 𝒜(*α*) = {*z*0 ∈ Ĉ : *R*<sup>*n*</sup>(*z*0) → *α*, *n* → ∞}. The Fatou set of the rational function *R*, ℱ(*R*), is the set of points whose orbits tend to an attractor (fixed point, periodic orbit, or infinity). Its complement in Ĉ is the Julia set, 𝒥(*R*). That means that the basin of attraction of any fixed point belongs to the Fatou set, and the boundaries of these basins of attraction belong to the Julia set.

Some other concepts are key in this kind of analysis, such as the immediate basin of attraction of an attracting fixed point *α* (considered as a periodic point of period 1), defined as the connected component of the basin containing *α*. This concept is directly related to the existence of critical points, as can be seen in the following classical result, due to Fatou and Julia.

**Theorem 3.1.** *Let R be a rational function. The immediate basin of attraction of any attracting periodic point holds, at least, one critical point.*

If a scaling theorem can be established, the qualitative behavior of the class of iterative schemes is analyzed on a generic quadratic polynomial *p*(*z*) = *z*² − *c*, its dynamics being analytically conjugated by affine transformations. Then, the fixed points of the associated rational function (being roots of the polynomial or not) are calculated, and it is studied whether they are stable (that is, whether the successive iterations of the method converge to them) or unstable, repelling the iterations near them. Finally, the calculation of critical points and their use as starting points of the iterative process allow us, by applying Theorem 3.1, to find all the values of the parameter (that is, the methods of the family) that do not converge to the roots of the polynomial, resulting in unstable behavior (attracting periodic orbits, chaos, etc.).

This kind of analysis has been developed by Cordero et al. [46] on the fourth-order biparametric Ostrowski-Chun family, whose iterative expression was shown in Eq. (5). In this manuscript, the strange fixed points and free independent critical points of the fixed point rational operator

$$\begin{aligned} &O\_p(z, a\_1, b\_2) \\ &= -z^4 \frac{(z+1)^2(z^2+4z+5) - a\_1(b\_2^2 - b\_2(z^3+4z^2+5z+4) + 2(z+1)^2(z+2))}{z^4(a\_1(b\_2-2)^2 - 5) + z^3(-5a\_1(b\_2-2)-14) - 2z^2(2a\_1(b\_2-2)+7) - z(a\_1(b\_2-2)+6) - 1} \end{aligned}$$

have been identified. As the behavior of the elements of the family depends on two parameters, only real values have been considered in the plane (*a*1, *b*2) in order to calculate the multipliers of the strange fixed points: *z = 1* and

$$\begin{aligned} ex_1 &= \frac{1}{12}\left(A - \sqrt{3B} - \sqrt{6(C+D)}\right), \\ ex_2 &= \frac{1}{12}\left(A - \sqrt{3B} + \sqrt{6(C+D)}\right), \\ ex_3 &= \frac{1}{12}\left(A + \sqrt{3B} - \sqrt{6(C+D)}\right), \\ ex_4 &= \frac{1}{12}\left(A + \sqrt{3B} + \sqrt{6(C+D)}\right), \end{aligned}$$

where *A, B, C,* and *D* depend on both parameters *a*1 and *b*2. In **Figure 1**, the stability function of the strange fixed point *z* = 1, |*O*′<sub>*p*</sub>(1, *a*1, *b*2)|, is represented. The orange region corresponds to real values of parameters *a*1 and *b*2 where *z* = 1 is attracting, while the gray region includes values of *a*1 and *b*2 where this strange fixed point is repulsive and, therefore, where the corresponding iterative methods have better behavior. Similar studies can be made on the rest of the strange fixed points, searching for the most stable regions of the plane (*a*1, *b*2).

**Figure 1.** Stability function of *z* = 1 as fixed point of operator *Op*(*z*, *a*1, *b*2).
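The stability of *z* = 1 can also be checked directly from the operator *Op*. For the Ostrowski member (*a*1, *b*2) = (1, 0) both numerator and denominator reduce to multiples of (*z* + 1)⁴, so the operator collapses to *Op*(*z*) = *z*⁴ and the multiplier at *z* = 1 is 4, i.e., the point is repulsive. The sketch below verifies this with a central finite difference (the step size is an illustrative choice).

```python
def Op(z, a1, b2):
    """Fixed point rational operator of the Ostrowski-Chun family on quadratics."""
    num = (z + 1) ** 2 * (z ** 2 + 4 * z + 5) - a1 * (
        b2 ** 2 - b2 * (z ** 3 + 4 * z ** 2 + 5 * z + 4)
        + 2 * (z + 1) ** 2 * (z + 2))
    den = (z ** 4 * (a1 * (b2 - 2) ** 2 - 5)
           + z ** 3 * (-5 * a1 * (b2 - 2) - 14)
           - 2 * z ** 2 * (2 * a1 * (b2 - 2) + 7)
           - z * (a1 * (b2 - 2) + 6) - 1)
    return -z ** 4 * num / den

a1, b2, h = 1.0, 0.0, 1e-6
assert abs(Op(1.0, a1, b2) - 1.0) < 1e-9           # z = 1 is a strange fixed point
mult = abs(Op(1.0 + h, a1, b2) - Op(1.0 - h, a1, b2)) / (2.0 * h)
print(abs(mult - 4.0) < 1e-4)                      # True: |Op'(1)| = 4 > 1, repulsive
```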

On the other hand, as the dynamics of the critical points could lead to any Fatou component, their associated parameter planes are represented to see the behavior of the method when the initial estimate is a critical point. In this case, *z* = −1 and *cr*<sub>*i*</sub>, *i* = 1, 2, 3, 4, are free critical points of *Op*(*z*, *a*1, *b*2), satisfying *cr*1 = 1/*cr*2 and *cr*3 = 1/*cr*4, that will be used to calculate the different parameter planes associated with the fixed point operator. As *z* = −1 is a pre-image of *z* = 1, only two critical points can be considered as free and independent: *cr*2 and *cr*4.

The parameter space of a free critical point is obtained by associating each point of the parameter plane with real values of the free parameters *a*1 and *b*2, so every point of the plane represents a different member of the iterative family. These parameter planes (one of them can be seen in **Figure 2**) have been created using a vectorized version of the MATLAB code programs presented by Chicharro et al. [47], with 800 × 800 different combinations of *a*1 and *b*2. Black points correspond to the parameter values for which the associated iterative method does not converge to the conjugated values of the zeros of polynomial *p*(*z*) with a tolerance of 10<sup>−3</sup> after 500 iterations, taking the same free independent critical point as the initial estimate.

**Figure 2.** Parameter plane of operator *Op*(*z*, *a*1, *b*2).
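The construction of such a parameter plane can be sketched in a few lines. Since the chapter's operator *Op*(*z*, *a*1, *b*2) is not reproduced here, the sketch below (in Python rather than the MATLAB of Ref. [47]) uses a hypothetical two-parameter perturbed Newton family on *p*(*z*) = *z*² − 1 as a stand-in, and a coarse grid instead of the 800 × 800 mesh; all names (`parameter_plane`, the toy `op`) are illustrative.

```python
import numpy as np

def parameter_plane(op, z0, a_vals, b_vals, roots, tol=1e-3, max_iter=500):
    """For each parameter pair (a1, b2), iterate op from z0 and mark 1
    if the orbit reaches one of `roots` within `tol` (a non-black point)."""
    grid = np.zeros((len(b_vals), len(a_vals)), dtype=int)
    for i, b2 in enumerate(b_vals):
        for j, a1 in enumerate(a_vals):
            z = complex(z0)
            for _ in range(max_iter):
                try:
                    z = op(z, a1, b2)
                except ZeroDivisionError:
                    break                       # orbit hit a pole
                if not np.isfinite(abs(z)) or abs(z) > 1e8:
                    break                       # orbit escapes
                if min(abs(z - r) for r in roots) < tol:
                    grid[i, j] = 1              # converged to a root
                    break
    return grid

# Hypothetical two-parameter family on p(z) = z^2 - 1, used only to
# illustrate the procedure (not the chapter's Op(z, a1, b2)):
p  = lambda z: z * z - 1
dp = lambda z: 2 * z
op = lambda z, a1, b2: z - (a1 + b2 * p(z)) * p(z) / dp(z)

a_vals = np.linspace(-2, 2, 41)
b_vals = np.linspace(-2, 2, 41)
# grid == 0 plays the role of the black points of Figure 2.
grid = parameter_plane(op, 0.5 + 0.5j, a_vals, b_vals, roots=(1, -1))
```

Note that (*a*1, *b*2) = (1, 0) recovers Newton's method in this toy family, so that cell of the grid is marked as convergent.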

104 Nonlinear Systems - Design, Analysis, Estimation and Control

Points shown in red in **Figure 2** (and also simultaneously red in the rest of the parameter planes corresponding to other free independent critical points of the rational function) correspond to the most stable methods of the family. In these terms, **Figure 3** shows the dynamical plane associated with one of these stable elements of the family: each point of an 800 × 800 mesh in the complex plane is an initial estimation for the iterative method. This point has an associated color depending on the convergence of the method after 80 iterations: if it converges to a fixed point that corresponds to a root of *p*(*z*), then a color (orange or blue) is assigned (see **Figure 3a**). If not, then it diverges or converges to some other element (attracting strange fixed point, periodic orbit, etc.), and it is painted in black (see **Figure 3b**). In the latter case, the orbit of an initial point in a black region is marked in yellow.

**Figure 3.** Dynamical plane of operator *Op*(*z*, *a*1, *b*2).

Selected values of parameters *a*1 and *b*2 in these red regions were tested numerically, not only on scalar equations but also on multidimensional problems. These tests showed that the good performance of the members of the family observed in the dynamical study could also be noticed in the multidimensional case.

However, very recently, real multivariate dynamical tools have also proven to be a reliable instrument for analyzing the stability of iterative methods specially designed for solving nonlinear systems. In order to study the stability of the fixed points of vectorial rational functions associated with iterative schemes for solving nonlinear systems, a new tool was presented by Cordero et al. [33]. In it, the authors checked the consistency of this tool by applying it to known methods, such as Newton's and Traub's schemes, and also to the family of parametric iterative procedures

$$\begin{aligned} \mathbf{y}^{(k)} &= \mathbf{x}^{(k)} - \theta [F'(\mathbf{x}^{(k)})]^{-1} F(\mathbf{x}^{(k)}), \\ \mathbf{z}^{(k)} &= \mathbf{x}^{(k)} - [F'(\mathbf{x}^{(k)})]^{-1} (\theta F(\mathbf{x}^{(k)}) + F(\mathbf{y}^{(k)})), \\ \mathbf{x}^{(k+1)} &= \mathbf{x}^{(k)} - [F'(\mathbf{x}^{(k)})]^{-1} (\theta F(\mathbf{x}^{(k)}) + F(\mathbf{y}^{(k)}) + F(\mathbf{z}^{(k)})), \end{aligned}$$

presented by Hueso et al. [48] and denoted by HMT.

In order to analyze, from a multidimensional point of view, the qualitative behavior of this family, Cordero et al. [33] introduced some concepts of multidimensional real discrete dynamics that we recall in the following; some of them are natural extensions of the concepts defined in complex dynamics, while others are defined differently.

Let us denote by *G(x)* the vectorial fixed-point function associated with the iterative method or family on polynomial *p(x)*. The dynamical behavior of the orbit of a point of ℝ*<sup>n</sup>* can be classified depending on its asymptotic behavior. In this way, a point *x\** is a fixed point of *G* if *G*(*x*\*) = *x*\*.

**Theorem 3.2.** [43, page 558] *Let* $G : \mathbb{R}^n \rightarrow \mathbb{R}^n$ *be* $C^2$. *Assume* $\mathbf{x}^*$ *is a period-k point. Let* $\lambda_1, \lambda_2, \ldots, \lambda_n$ *be the eigenvalues of* $G'(\mathbf{x}^*)$.

**(a)** *If all the eigenvalues* $\lambda_j$ *satisfy* $|\lambda_j| < 1$, *then* $\mathbf{x}^*$ *is attracting.*

**(b)** *If one eigenvalue* $\lambda_{j_0}$ *satisfies* $|\lambda_{j_0}| > 1$, *then* $\mathbf{x}^*$ *is unstable, that is, repelling or a saddle.*

**(c)** *If all the eigenvalues* $\lambda_j$ *satisfy* $|\lambda_j| > 1$, *then* $\mathbf{x}^*$ *is repelling.*
Moreover, a fixed point is called *hyperbolic* if all the eigenvalues $\lambda_j$ of $G'(\mathbf{x}^*)$ satisfy $|\lambda_j| \neq 1$. In particular, if there exists an eigenvalue $\lambda_i$ such that $|\lambda_i| < 1$ and also an eigenvalue $\lambda_j$ such that $|\lambda_j| > 1$, the hyperbolic point is called a *saddle point*.

To avoid the calculation of the spectrum of *G′(x\*)*, Cordero et al. [33] proposed a result that, being consistent with the previous theorem, provides a practical tool to classify the stability of fixed points in many multidimensional cases.

**Proposition 3.1.** *Let x\* be a fixed point of G. Then,*




**(a)** *if* $\left|\frac{\partial g_i(\mathbf{x}^*)}{\partial x_j}\right| < \frac{1}{n}$, *for all* $i, j \in \{1, 2, \ldots, n\}$, *then* $\mathbf{x}^*$ *is attracting;*

**(b)** *if* $\frac{\partial g_i(\mathbf{x}^*)}{\partial x_j} = 0$, *for all* $i, j \in \{1, 2, \ldots, n\}$, *then* $\mathbf{x}^*$ *is superattracting;*

**(c)** *if* $\left|\frac{\partial g_i(\mathbf{x}^*)}{\partial x_j}\right| > \frac{1}{n}$, *for all* $i, j \in \{1, 2, \ldots, n\}$, *then* $\mathbf{x}^*$ *is unstable and lies in the Julia set;*

$g_i$, $i = 1, 2, \ldots, n$, *being the coordinate functions of the fixed point multivariate function G.*
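The entrywise bounds of Proposition 3.1 lend themselves to a direct numerical check. The following sketch (the function name `classify_fixed_point` and the tolerance `zero_tol` are illustrative choices, not from the chapter) estimates the partial derivatives by central differences and applies the three bounds; it is tried on Newton's operator for *F*(*x*) = (*x*1² − 1, *x*2² − 1), whose roots should be detected as superattracting.

```python
def classify_fixed_point(G, x_star, h=1e-6, zero_tol=1e-6):
    """Apply the entrywise bounds of Proposition 3.1 to a fixed point x*
    of G, estimating |dg_i/dx_j| by central differences."""
    n = len(x_star)
    mags = []
    for j in range(n):
        xp = list(x_star); xm = list(x_star)
        xp[j] += h; xm[j] -= h
        gp, gm = G(xp), G(xm)
        mags.extend(abs(gp[i] - gm[i]) / (2 * h) for i in range(n))
    if all(m < zero_tol for m in mags):
        return "superattracting"
    if all(m < 1.0 / n for m in mags):
        return "attracting"
    if all(m > 1.0 / n for m in mags):
        return "unstable"
    return "inconclusive"

# Newton's operator for F(x) = (x1^2 - 1, x2^2 - 1); all partial
# derivatives vanish at the root (1, 1):
G = lambda x: [(x[0]**2 + 1) / (2 * x[0]), (x[1]**2 + 1) / (2 * x[1])]
label = classify_fixed_point(G, [1.0, 1.0])
```

The "inconclusive" branch reflects that the proposition gives sufficient, not necessary, conditions: when the entries fall on both sides of 1/*n*, the eigenvalues of *G′(x\*)* must be examined directly via Theorem 3.2.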

Let us remark that if the order of convergence of the iterative method is at least two, then the roots of the nonlinear function are superattracting fixed points of the vectorial rational function corresponding to the iterative scheme. If a fixed point is not a root of the nonlinear function, it is called a strange fixed point, and its character can be analyzed in the same manner.

The concept of critical point can be defined following the idea of multivariate convergence of iterative methods.

**Definition 3.1.** *A point* $\mathbf{x}^*$ *is a critical point of G if its coordinate functions* $g_i$ *satisfy* $\frac{\partial g_i(\mathbf{x}^*)}{\partial x_j} = 0$, *for all* $i, j \in \{1, 2, \ldots, n\}$.

From this definition, a superattracting fixed point is also a critical point of the operator and, from a numerical point of view, the iterative scheme has, at least, quadratic order of convergence. A critical point that is not a root of *p(x)* is called a free critical point.

The stability of this family is studied on two systems of real quadratic polynomials, *p(x) = 0* and *q(x) = 0*, where

$$p(x_1, x_2) = \begin{cases} x_1^2 - 1, \\ x_2^2 - 1, \end{cases} \quad and \quad q(x_1, x_2) = \begin{cases} x_1^2 - 1, \\ x_2^2 - 1. \end{cases}$$

The components of the iteration function *G* are, in case of *p(x)*,

$$g_1(x_1, x_2) = -\frac{8\theta^2 x_1^2 (x_1^2 - 1)^3 + \theta^4 (x_1^2 - 1)^4 - 16 x_1^4 (3 x_1^4 + 6 x_1^2 - 1)}{128 x_1^7},$$

$$g_2(x_1, x_2) = -\frac{8\theta^2 x_2^2 (x_2^2 - 1)^3 + \theta^4 (x_2^2 - 1)^4 - 16 x_2^4 (3 x_2^4 + 6 x_2^2 - 1)}{128 x_2^7}.$$
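As a sanity check of these components, the HMT step itself is easy to implement; here is a minimal Python sketch. One HMT step with *θ* = 0 reduces, in each component, to the closed form (3*x*⁴ + 6*x*² − 1)/(8*x*³), which gives an independent check. The final iteration with *θ* = 1 and initial guess (2, 0.5) is purely illustrative (neither value is taken from the chapter).

```python
def hmt_step(x, theta):
    """One HMT step on p(x) = (x1^2 - 1, x2^2 - 1); the Jacobian is
    diagonal here, so applying [F'(x)]^{-1} is a componentwise division."""
    F = lambda u: [u[0]**2 - 1, u[1]**2 - 1]
    Fx = F(x)
    J = [2 * x[0], 2 * x[1]]                    # diagonal of F'(x)
    y = [x[i] - theta * Fx[i] / J[i] for i in range(2)]
    Fy = F(y)
    z = [x[i] - (theta * Fx[i] + Fy[i]) / J[i] for i in range(2)]
    Fz = F(z)
    return [x[i] - (theta * Fx[i] + Fy[i] + Fz[i]) / J[i] for i in range(2)]

# For theta = 0 each component reduces to (3x^4 + 6x^2 - 1)/(8x^3):
g1_theta0 = lambda t: (3 * t**4 + 6 * t**2 - 1) / (8 * t**3)
assert abs(hmt_step([1.7, -0.6], 0.0)[0] - g1_theta0(1.7)) < 1e-10

# Illustrative iteration towards the root (1, 1) of p:
x = [2.0, 0.5]
for _ in range(20):
    x = hmt_step(x, 1.0)
```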

It was proved by Cordero et al. [33] that the number of fixed points of the vectorial rational function *G*(*x*) associated with the HMT iterative method on *p*(*x*) is 64; four of them (those corresponding to the roots of *p*(*x*)) are superattractive for any value of *θ*. Moreover, there are 48 unstable strange fixed points, and the character of the remaining 12 real strange fixed points is simultaneously attractive for two ranges of values of *θ*, [−0.3847551, −0.3838109] and [0.3838109, 0.3847551], being superattractive if *θ* ≈ −0.3838109 or *θ* ≈ 0.3838109.

In **Figure 4**, we can see the dynamical plane of the HMT method for *θ ≈ −0.3838109*, where those 12 attracting strange fixed points appear as white circles, with small basins of attraction. The roots of *p*(*x*) appear as white stars, in their own basins of attraction.

**Figure 4.** Dynamical plane of multivariate HMT method on *p*(*x*) for *θ ≈ −0.3838109*.

The parametric plots, as the extension of parameter planes to the multivariable case, were revealed to be an interesting procedure that allows detecting the most stable and the most unstable elements of a family of iterative methods. By using the information given by the iteration of the elements of the family on the different free critical points, we can know the global behavior of the family depending on the value of the parameter. Orbits of each free critical point are shown in **Figure 5**. In each one of these pictures, a different free critical point is used as the initial guess of each member of the class of iterative schemes, taking values of the parameter *θ* in [−5, 5]. The values of *θ* corresponding to the members of the family are placed on the abscissa axis, and the ordinate is 0.1, 0.2, 0.3, or 0.4 if the iterative process has converged to the corresponding one of the four solutions of the quadratic polynomial real system *p(x)*. Moreover, the ordinate of a point is −0.1 if the process diverges, and it is null in any other case.

**Figure 5.** Different parameter plots of HMT family on *p*(*x*).
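A coarse version of such a parameter plot can be sketched as follows. Since the free critical points of the multivariate operator are not reproduced here, a fixed illustrative initial guess stands in for them; the routine records which of the four roots (±1, ±1) of *p(x)*, if any, the orbit reaches for each *θ* (the function name `hmt_limit` and all tolerances are illustrative).

```python
def hmt_limit(theta, x0, iters=40, tol=1e-3):
    """Iterate the HMT family on p(x) = (x1^2 - 1, x2^2 - 1) from x0 and
    report which of the four roots (+/-1, +/-1) is reached, if any."""
    F = lambda u: (u[0]**2 - 1, u[1]**2 - 1)
    x = list(x0)
    for _ in range(iters):
        try:
            Fx = F(x)
            J = (2 * x[0], 2 * x[1])
            y = [x[i] - theta * Fx[i] / J[i] for i in range(2)]
            Fy = F(y)
            z = [x[i] - (theta * Fx[i] + Fy[i]) / J[i] for i in range(2)]
            Fz = F(z)
            x = [x[i] - (theta * Fx[i] + Fy[i] + Fz[i]) / J[i]
                 for i in range(2)]
        except (ZeroDivisionError, OverflowError):
            return None                          # breakdown / divergence
        if abs(abs(x[0]) - 1) < tol and abs(abs(x[1]) - 1) < tol:
            return (round(x[0]), round(x[1]))    # one of the four roots
    return None

# Scan theta in [-5, 5]; None plays the role of the -0.1 ordinate.
thetas = [-5 + k / 20 for k in range(201)]
plot = [hmt_limit(t, (2.0, 0.5)) for t in thetas]
```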


**Figure 6.** 4-Periodic orbits in HMT family.

The goal of these graphics is to show the elements of the family HMT (that is, the values of the parameter *θ*) that present unstable behavior (attracting strange fixed points, periodic orbits, etc.). These plots were obtained by using 20,000 subintervals, a maximum of 40 iterations, and an error estimate of 10<sup>−3</sup> when the iterates tend to a fixed point.

Thanks to this analysis, some non-desirable behaviors were detected, such as attracting periodic orbits of different periods, as can be observed in **Figure 6**, where two periodic orbits of period four are shown in yellow.

## **4. Numerical results**

In this section, we are going to check the numerical performance of some elements of family (Eq. 10), compared with the multidimensional versions of some other known methods of the same order and also with Newton's method.

**Definition 4.1.** *Let ξ be a zero of function F and suppose that x*(*k*−2), *x*(*k*−1), *x*(*k*) *and x*(*k*+1) *are four consecutive iterations close to ξ. Then, the computational order of convergence p can be approximated using the formula* (see Ref. [12])

$$p \approx \frac{\ln(||\,\mathbf{x}^{(k+1)} - \mathbf{x}^{(k)}\,||\,/\,||\,\mathbf{x}^{(k)} - \mathbf{x}^{(k-1)}\,||\,)}{\ln(||\,\mathbf{x}^{(k)} - \mathbf{x}^{(k-1)}\,||\,/\,||\,\mathbf{x}^{(k-1)} - \mathbf{x}^{(k-2)}\,||\,)}$$
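This estimate needs the last four iterates. The sketch below (the function name `acoc` is illustrative) applies the formula to Newton's method on the scalar equation *x*² − 1 = 0, for which the estimate should be close to the theoretical order *p* = 2.

```python
import math

def acoc(xs):
    """Approximated computational order of convergence from the last four
    iterates in xs (scalars here; use norms of differences for systems)."""
    d = [abs(xs[i + 1] - xs[i]) for i in range(len(xs) - 1)]
    return math.log(d[-1] / d[-2]) / math.log(d[-2] / d[-3])

# Newton's method on f(x) = x^2 - 1 from x0 = 2:
xs = [2.0]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 1) / (2 * x))
p_est = acoc(xs)   # close to 2 for this quadratically convergent method
```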

Numerical computations have been carried out in MATLAB, with variable precision arithmetic that uses a floating point representation of 1000 decimal digits of mantissa. If *n* > 1, every iterate *x*(*k*+1) is obtained from the previous one, *x*(*k*), by adding one term of the form −*A*<sup>−1</sup>*b*. Matrix *A* and vector *b* are different according to the method used but, in any case, −*A*<sup>−1</sup>*b* is computed by solving the linear system *Ay* = −*b* by Gaussian elimination with partial pivoting. Nevertheless, if several systems with the same matrix *A* must be solved in an iteration, the *LU* factorization is computed once and the corresponding triangular systems are solved by substitution. The stopping criterion used is ‖*F*(*x*(*k*+1))‖ < 10<sup>−700</sup> or ‖*x*(*k*+1) − *x*(*k*)‖ < 10<sup>−700</sup>.

#### **4.1. Molecular interaction problem**

We are going to solve the equation of molecular interaction,

$$u_{xx} + u_{yy} = u^2, \quad (x, y) \in [0,1] \times [0,1],$$

with the boundary conditions *u*(*x*, 0) = 2*x*<sup>2</sup> − *x* + 1, *u*(*x*, 1) = 2, *u*(0, *y*) = 2*y*<sup>2</sup> − *y* + 1 and *u*(1, *y*) = 2, for all (*x*, *y*) in their domain.

For approximating its solution, we are going to use the central divided differences procedure to transform the problem into a nonlinear system of equations. This system is solved by means of the proposed methods of order four and other known ones.

The discretization process yields the nonlinear system of equations,

$$u_{i+1,j} - 4u_{i,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1} - h^2 u_{i,j}^2 = 0, \quad i = 1, 2, \ldots, n_x - 1, \; j = 1, 2, \ldots, n_y - 1,$$

where *ui,j* denotes the estimation of the unknown *u*(*xi*, *yj*), where *xi* = *ih*, *i* = 1, 2, …, *nx*, and *yj* = *jk*, *j* = 1, 2, …, *ny*, are the nodes in both variables, with *h* = 1/*nx*, *k* = 1/*ny* and *nx* = *ny*.

For checking purposes, we consider *nx* = *ny* = 4, getting a mesh of 5 × 5 points. Applying the boundary conditions we have only nine unknowns, which we rename as:

$$\begin{aligned} x_1 &= u_{1,1}, & x_2 &= u_{2,1}, & x_3 &= u_{3,1}, \\ x_4 &= u_{1,2}, & x_5 &= u_{2,2}, & x_6 &= u_{3,2}, \\ x_7 &= u_{1,3}, & x_8 &= u_{2,3}, & x_9 &= u_{3,3}. \end{aligned}$$

Then, the nonlinear system associated with the partial differential equation, including the boundary conditions, can be expressed as

$$F(\mathbf{x}) = A\mathbf{x} + \varphi(\mathbf{x}) - b = \mathbf{0},$$

where


$$A = \begin{pmatrix} M & -I & 0 \\ -I & M & -I \\ 0 & -I & M \end{pmatrix}, \quad M = \begin{pmatrix} 4 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 4 \end{pmatrix}.$$

*I* being the 3 × 3 identity matrix, $\varphi(\mathbf{x}) = h^2 \left( x_1^2, x_2^2, \ldots, x_9^2 \right)^T$ and $b = \left( \frac{7}{4}, 1, \frac{27}{8}, 1, 0, 2, \frac{27}{8}, 2, 4 \right)^T$.

In this case,

$$F'(\mathbf{x}) = A + 2h^2 \, \text{diag}(x_1, x_2, \ldots, x_9).$$
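The assembly of this 9 × 9 system can be checked with a short sketch. The chapter's computations use 1000-digit variable precision arithmetic in MATLAB; the double-precision Python sketch below only verifies the construction of *A*, *b*, φ and *F′*, applying plain Newton's method and, as described above, solving a linear system per step instead of inverting the Jacobian.

```python
import numpy as np

h = 0.25                                        # h = 1/nx with nx = 4
M = np.array([[4., -1., 0.], [-1., 4., -1.], [0., -1., 4.]])
I = np.eye(3)
Z = np.zeros((3, 3))
A = np.block([[M, -I, Z], [-I, M, -I], [Z, -I, M]])
b = np.array([7/4, 1, 27/8, 1, 0, 2, 27/8, 2, 4])

F  = lambda x: A @ x + h**2 * x**2 - b          # F(x) = Ax + phi(x) - b
Fp = lambda x: A + 2 * h**2 * np.diag(x)        # Jacobian F'(x)

# Newton's method from the null vector, solving F'(x) y = -F(x) each step:
x = np.zeros(9)
for _ in range(10):
    x = x + np.linalg.solve(Fp(x), -F(x))
residual = np.linalg.norm(F(x))                 # tiny at the solution
```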

For solving this system, we apply the extension for systems of Ostrowski's method (OM), Chun's scheme (CM), Jarratt's method (JM), Newton's method (NM), and three elements of family (Eq. 10) obtained by selecting stable values of the parameters.

$$\begin{aligned} MA: a\_1 &= \frac{\mathfrak{F}}{4}, \; b\_2 = 0, \\ MB: a\_1 &= 1, \; b\_2 = 1, \\ MC: a\_1 &= 1, \; b\_2 = 3. \end{aligned}$$

In **Table 4**, the approximated computational order of convergence (ACOC), the number of iterations, the difference between the last two iterates, and the residual of the function at the last iterate are shown for each one of the methods. In all cases, the initial estimation is the null vector and the Euclidean norm is used in the calculation of the residuals.


**Table 4.** Numerical results for molecular interaction problem.

All the checked schemes provide the same solution of the nonlinear system. In **Table 4**, we can observe that all the fourth-order methods have a similar performance, but we note that the lowest error corresponds to method MA, which roughly doubles the number of exact digits with respect to the other ones.
