**4.2. Numerical procedure for calculating the first and second derivatives of the matrix determinant**

**Theorem**. *If the elements of a square matrix* **D**(λ) *are differentiable functions of a parameter* λ, *then for any* λ *the derivatives of the determinant* $f(\lambda) \equiv \det \mathbf{D}(\lambda)$ *of the matrix* **D**(λ) *satisfy the relations*

$$f'(\lambda) \equiv \left[ \det \mathbf{D}(\lambda) \right]' = \sum\_{k=1}^{n} v\_{kk}(\lambda) \prod\_{i=1, i\neq k}^{n} u\_{ii}(\lambda), \tag{30}$$

$$f''(\lambda) \equiv \left[ \det \mathbf{D}(\lambda) \right]'' = \sum\_{k=1}^{n} w\_{kk}(\lambda) \prod\_{i=1, i \neq k}^{n} u\_{ii}(\lambda) + \sum\_{k=1}^{n} v\_{kk}(\lambda) \left( \sum\_{j=1, j \neq k}^{n} v\_{jj}(\lambda) \prod\_{i=1, i \neq k, i \neq j}^{n} u\_{ii}(\lambda) \right), \tag{31}$$

*where* $u\_{ii}(\lambda)$, $v\_{ii}(\lambda)$ *and* $w\_{ii}(\lambda)$ *are, respectively, the diagonal elements of the upper triangular matrices* **U**(λ), **V**(λ) *and* **W**(λ) *in the decompositions*

$$\mathbf{D}(\lambda) = \mathbf{L}(\lambda)\mathbf{U}(\lambda),\tag{32}$$

$$\mathbf{B}(\lambda) = \mathbf{M}(\lambda)\mathbf{U}(\lambda) + \mathbf{L}(\lambda)\mathbf{V}(\lambda),\tag{33}$$

$$\mathbf{C}(\lambda) = \mathbf{N}(\lambda)\mathbf{U}(\lambda) + 2\mathbf{M}(\lambda)\mathbf{V}(\lambda) + \mathbf{L}(\lambda)\mathbf{W}(\lambda),\tag{34}$$

*and* **L**(λ) *is a lower triangular matrix with unit diagonal elements.*

**Proof**. It is known that a matrix **D**(λ) of order *n* whose leading principal minors of all orders from 1 to (*n* − 1) are, for any λ, different from zero can be written, by means of the *LU*-decomposition, in the form (32), where **L**(λ) is a lower triangular matrix with unit diagonal elements and **U**(λ) is an upper triangular matrix. Then

$$f(\lambda) = \det \mathbf{L}(\lambda) \det \mathbf{U}(\lambda) = \prod\_{i=1}^{n} u\_{ii}(\lambda).$$

Since the elements of the square matrix **D**(λ) (and hence of the matrix **U**(λ)) are differentiable functions of λ, for any λ we obtain

$$f'(\lambda) = \sum\_{k=1}^{n} u'\_{kk}(\lambda) \prod\_{i=1, i \neq k}^{n} u\_{ii}(\lambda), \tag{35}$$

Numerical Algorithms of Finding the Branching Lines

and Bifurcation Points of Solutions for One Class of Nonlinear Integral Equations 299


$$f''(\lambda) = \sum\_{k=1}^{n} u''\_{kk}(\lambda) \prod\_{i=1, i\neq k}^{n} u\_{ii}(\lambda) + \sum\_{k=1}^{n} u'\_{kk}(\lambda) \left( \sum\_{j=1, j\neq k}^{n} u'\_{jj}(\lambda) \prod\_{i=1, i\neq k, i\neq j}^{n} u\_{ii}(\lambda) \right). \tag{36}$$

To find the values $u'\_{ii}(\lambda)$ we differentiate (32) with respect to λ and obtain (33), i.e.

$$\mathbf{B}(\lambda) = \mathbf{M}(\lambda)\mathbf{U}(\lambda) + \mathbf{L}(\lambda)\mathbf{V}(\lambda),$$

where $\mathbf{B}(\lambda) = \mathbf{D}'(\lambda)$, $\mathbf{M}(\lambda) = \mathbf{L}'(\lambda)$, $\mathbf{V}(\lambda) = \mathbf{U}'(\lambda)$, and $v\_{ii}(\lambda) = u'\_{ii}(\lambda)$ are the diagonal elements of the matrix **V**(λ). Now, differentiating the last equality with respect to λ, we obtain (34), namely:

$$\mathbf{C}(\lambda) = \mathbf{N}(\lambda)\mathbf{U}(\lambda) + 2\mathbf{M}(\lambda)\mathbf{V}(\lambda) + \mathbf{L}(\lambda)\mathbf{W}(\lambda),$$

where $\mathbf{C}(\lambda) = \mathbf{B}'(\lambda) = \mathbf{D}''(\lambda)$, $\mathbf{N}(\lambda) = \mathbf{M}'(\lambda)$, $\mathbf{W}(\lambda) = \mathbf{V}'(\lambda) = \mathbf{U}''(\lambda)$, and $w\_{ii}(\lambda) = v'\_{ii}(\lambda) = u''\_{ii}(\lambda)$ are the diagonal elements of the matrix **W**(λ). Thus, from (33) and (34) we obtain (35) and (36), i.e. (30) and (31). The theorem is proved.

Therefore, to calculate $f(\lambda\_m)$, $f'(\lambda\_m)$ and $f''(\lambda\_m)$ it is necessary to compute the decompositions

$$\mathbf{D} = \mathbf{L}\mathbf{U},$$

$$\mathbf{B} = \mathbf{M}\mathbf{U} + \mathbf{L}\mathbf{V}, \tag{37}$$

$$\mathbf{C} = \mathbf{N}\mathbf{U} + 2\mathbf{M}\mathbf{V} + \mathbf{L}\mathbf{W}$$

at a fixed $\lambda = \lambda\_m$, from which

$$f(\lambda\_m) = \prod\_{i=1}^n u\_{ii}, \quad f'(\lambda\_m) = \sum\_{k=1}^n v\_{kk} \prod\_{i=1, i\neq k}^n u\_{ii}, \tag{38}$$

$$f''(\lambda\_m) = \sum\_{k=1}^n w\_{kk} \prod\_{i=1, i \neq k}^n u\_{ii} + \sum\_{k=1}^n v\_{kk} \left( \sum\_{j=1, j \neq k}^n v\_{jj} \prod\_{i=1, i \neq k, i \neq j}^n u\_{ii} \right).$$


Elements of the matrices in the decompositions (37) can be calculated using the recurrence relations

$$\begin{aligned}
r &= 1, 2, \dots, n: \\\\
u\_{rk} &= d\_{rk} - \sum\_{j=1}^{r-1} l\_{rj} u\_{jk}, \quad k = r, \dots, n, \\\\
l\_{ir} &= \Big( d\_{ir} - \sum\_{j=1}^{r-1} l\_{ij} u\_{jr} \Big) \Big/ u\_{rr}, \quad i = r+1, \dots, n, \\\\
v\_{rk} &= b\_{rk} - \sum\_{j=1}^{r-1} \left( m\_{rj} u\_{jk} + l\_{rj} v\_{jk} \right), \quad k = r, \dots, n, \\\\
m\_{ir} &= \Big[ b\_{ir} - \sum\_{j=1}^{r-1} \left( m\_{ij} u\_{jr} + l\_{ij} v\_{jr} \right) - l\_{ir} v\_{rr} \Big] \Big/ u\_{rr}, \quad i = r+1, \dots, n, \\\\
w\_{rk} &= c\_{rk} - \sum\_{j=1}^{r-1} \left( n\_{rj} u\_{jk} + 2 m\_{rj} v\_{jk} + l\_{rj} w\_{jk} \right), \quad k = r, \dots, n, \\\\
n\_{ir} &= \Big[ c\_{ir} - \sum\_{j=1}^{r-1} \left( n\_{ij} u\_{jr} + 2 m\_{ij} v\_{jr} + l\_{ij} w\_{jr} \right) - 2 m\_{ir} v\_{rr} - l\_{ir} w\_{rr} \Big] \Big/ u\_{rr}, \quad i = r+1, \dots, n.
\end{aligned}$$
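As a sanity check, the recurrences above together with the formulas (38) can be prototyped in a few lines of NumPy. This is only an illustrative sketch, not the chapter's code: the function names and the test matrix below are ours. Given **D**, **B** = **D**′ and **C** = **D**″ at a fixed λ, it builds **L**, **U**, **M**, **V**, **N**, **W** and then evaluates *f*, *f*′, *f*″.

```python
import numpy as np

def luvw(D, B, C):
    """Decompositions (37): D = L U, B = M U + L V, C = N U + 2 M V + L W,
    with L unit lower triangular and M, N lower triangular with zero diagonal."""
    n = D.shape[0]
    L = np.eye(n, dtype=D.dtype)
    U, M, V, N, W = (np.zeros((n, n), dtype=D.dtype) for _ in range(5))
    for r in range(n):
        for k in range(r, n):  # rows r of U, V, W
            U[r, k] = D[r, k] - L[r, :r] @ U[:r, k]
            V[r, k] = B[r, k] - M[r, :r] @ U[:r, k] - L[r, :r] @ V[:r, k]
            W[r, k] = (C[r, k] - N[r, :r] @ U[:r, k]
                       - 2 * M[r, :r] @ V[:r, k] - L[r, :r] @ W[:r, k])
        for i in range(r + 1, n):  # columns r of L, M, N
            L[i, r] = (D[i, r] - L[i, :r] @ U[:r, r]) / U[r, r]
            M[i, r] = (B[i, r] - M[i, :r] @ U[:r, r] - L[i, :r] @ V[:r, r]
                       - L[i, r] * V[r, r]) / U[r, r]
            N[i, r] = (C[i, r] - N[i, :r] @ U[:r, r] - 2 * M[i, :r] @ V[:r, r]
                       - L[i, :r] @ W[:r, r] - 2 * M[i, r] * V[r, r]
                       - L[i, r] * W[r, r]) / U[r, r]
    return L, U, M, V, N, W

def det_derivatives(D, B, C):
    """f, f', f'' of det D(lambda) at a fixed lambda via the formulas (38)."""
    _, U, _, V, _, W = luvw(D, B, C)
    u, v, w = np.diag(U), np.diag(V), np.diag(W)
    n = len(u)
    f = np.prod(u)
    fp = sum(v[k] * np.prod(np.delete(u, k)) for k in range(n))
    fpp = (sum(w[k] * np.prod(np.delete(u, k)) for k in range(n))
           + sum(v[k] * sum(v[j] * np.prod(np.delete(u, [k, j]))
                            for j in range(n) if j != k) for k in range(n)))
    return f, fp, fpp
```

For example, for D(λ) = A − λI with A = [[2, 1], [1, 3]] one has f(λ) = λ² − 5λ + 5, so at λ = 0 the exact values are f = 5, f′ = −5, f″ = 2, which the sketch reproduces with B = −I and C = 0.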

If some of the leading principal minors of the matrix, of order *j* ≤ *n* − 1, are zero, then the decomposition (32) may not exist or, if it exists, it is not unique.

In practice, the best way to establish whether the *LU*-decomposition exists is to try to compute it. It may happen that $u\_{rr} = 0$ (where *r* is the order of the leading principal minor of the matrix that vanishes). To avoid this, in the process of decomposition one may use a series of permutations of rows (and/or columns) of the matrix **D**, choosing a pivot element. In this case the decomposition (37) can be written as

$$\mathbf{PD} = \mathbf{LU},\tag{39}$$

$$\mathbf{PB} = \mathbf{MU} + \mathbf{LV},\tag{40}$$

$$\mathbf{PC} = \mathbf{NU} + 2\mathbf{MV} + \mathbf{LW},$$

where **P** is a permutation matrix with $\det \mathbf{P} = (-1)^q$, where *q* is the number of permutations (for example, of rows). In this case the relations (38) take the form

$$f(\lambda\_m) = (-1)^q \prod\_{i=1}^n u\_{ii}, \quad f'(\lambda\_m) = (-1)^q \sum\_{k=1}^n v\_{kk} \prod\_{i=1, i \neq k}^n u\_{ii}, \tag{41}$$

$$f''(\lambda\_m) = (-1)^q \sum\_{k=1}^n w\_{kk} \prod\_{i=1, i \neq k}^n u\_{ii} + (-1)^q \sum\_{k=1}^n v\_{kk} \left( \sum\_{j=1, j \neq k}^n v\_{jj} \prod\_{i=1, i \neq k, i \neq j}^n u\_{ii} \right).$$
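The pivoted variant can be sketched for the value of *f* alone: Gaussian elimination with row pivoting, tracking the number *q* of row swaps, returns $(-1)^q \prod u\_{rr}$ as in (41). The function name and the in-place storage of the multipliers are ours, a minimal illustration rather than the chapter's implementation:

```python
import numpy as np

def det_with_pivoting(D):
    """det D = (-1)^q * prod(u_rr) from the pivoted decomposition PD = LU,
    where q is the number of row swaps performed."""
    A = np.array(D, dtype=float)              # work on a copy
    n = A.shape[0]
    q = 0
    for r in range(n):
        p = r + np.argmax(np.abs(A[r:, r]))   # choose the pivot (principal element)
        if p != r:
            A[[r, p]] = A[[p, r]]             # row permutation
            q += 1
        A[r + 1:, r] /= A[r, r]               # multipliers l_ir, stored in place
        A[r + 1:, r + 1:] -= np.outer(A[r + 1:, r], A[r, r + 1:])
    return (-1) ** q * np.prod(np.diag(A))
```

The test matrix [[0, 1, 2], [3, 4, 5], [6, 7, 9]] has a vanishing first leading principal minor, so the unpivoted recurrences would divide by zero, while the pivoted version returns its determinant −3.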

Since in the relations (38) the values of the function and its derivatives are calculated only on the boundary of the region *G*, i.e. at the given points $\lambda\_j$, $j = 1, \dots, N$, we use for their calculation the same numerical procedure of decomposition of the matrices (37). As a result, for the values $s\_k$, $k = 0, 1, 2, \dots$, we obtain the relation

$$s\_k = \frac{1}{N} \sum\_{j=1}^{N} \left( \lambda\_j \right)^k \rho\_t \exp\left( i \frac{2\pi j}{N} \right) \sum\_{r=1}^n \frac{v\_{rr}}{u\_{rr}}, \tag{42}$$


where $u\_{rr}$, $v\_{rr}$ are the diagonal elements of the matrices **U**, **V** in the decompositions (37) at a fixed $\lambda\_j$.

Now, if we know some approximation $\lambda\_l$ to the eigenvalue, then the correction $\Delta\lambda\_l = f(\lambda\_l)/f'(\lambda\_l)$ used to construct the successive approximations in Newton's method (28) takes the form

$$\Delta \lambda\_l = 1 \Big/ \sum\_{k=1}^{n} \frac{v\_{kk}}{u\_{kk}}, \quad l = 0, 1, \dots, \tag{43}$$

and the bilateral analogue of Newton's method (29) takes the form

$$\begin{cases} \lambda\_{2l+1} = \lambda\_{2l} - \left( \sum\_{k=1}^{n} \frac{\overline{v}\_{kk}}{\overline{u}\_{kk}} \right) \Big/ \sum\_{k=1}^{n} \left[ \left( \frac{\overline{v}\_{kk}}{\overline{u}\_{kk}} \right)^{2} - \frac{\overline{w}\_{kk}}{\overline{u}\_{kk}} \right], \\\\ \lambda\_{2l+2} = \lambda\_{2l+1} - 1 \Big/ \sum\_{k=1}^{n} \frac{\tilde{v}\_{kk}}{\tilde{u}\_{kk}}, \end{cases} \quad l = 0, 1, 2, \dots, \tag{44}$$

where $\overline{u}\_{kk}$, $\overline{v}\_{kk}$, $\overline{w}\_{kk}$ are the elements of the matrices **U**, **V** and **W** in the decompositions (37) at fixed $\lambda\_{2l}$, and $\tilde{u}\_{kk}$, $\tilde{v}\_{kk}$ are the elements of the matrices **U**, **V** in the decompositions (37) at fixed $\lambda\_{2l+1}$.
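The iteration (43) only needs the ratio $\sum\_k v\_{kk}/u\_{kk} = f'/f$, so it can be sketched with a reduced version of the recurrences (**U** and **V** only). A hypothetical illustration, assuming D(λ) and B(λ) = D′(λ) are supplied as callables; all names are ours:

```python
import numpy as np

def fprime_over_f(D, B):
    """sum_k v_kk / u_kk = f'(lambda)/f(lambda), from D = LU and B = MU + LV."""
    n = D.shape[0]
    L = np.eye(n, dtype=complex)
    U, M, V = (np.zeros((n, n), dtype=complex) for _ in range(3))
    for r in range(n):
        for k in range(r, n):
            U[r, k] = D[r, k] - L[r, :r] @ U[:r, k]
            V[r, k] = B[r, k] - M[r, :r] @ U[:r, k] - L[r, :r] @ V[:r, k]
        for i in range(r + 1, n):
            L[i, r] = (D[i, r] - L[i, :r] @ U[:r, r]) / U[r, r]
            M[i, r] = (B[i, r] - M[i, :r] @ U[:r, r] - L[i, :r] @ V[:r, r]
                       - L[i, r] * V[r, r]) / U[r, r]
    return np.sum(np.diag(V) / np.diag(U))

def newton_eigenvalue(Dfun, Bfun, lam, iters=50, tol=1e-12):
    """Newton iterations (43): lam <- lam - 1 / sum_k v_kk/u_kk."""
    for _ in range(iters):
        step = 1.0 / fprime_over_f(Dfun(lam), Bfun(lam))
        lam = lam - step
        if abs(step) < tol:
            break
    return lam
```

For D(λ) = A − λI with A = [[2, 1], [1, 3]], starting from λ = 1 the iteration converges to the smaller eigenvalue (5 − √5)/2 ≈ 1.381966.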

Thus, the algorithm of finding the eigenvalue curves of the problem (19) consists of the following steps.

**Algorithm 1.**

**Step 1.** Determine the interval $[c, d]$ where we seek the eigenvalues of problem (21). This can be a single interval or a sequence of intervals $[c\_t, d\_t]$ such that $\bigcup\_t [c\_t, d\_t] = [c, d]$. For this purpose we enclose the interval $[c\_t, d\_t]$ in a circle (the area *G*), setting the center of the circle $\lambda\_0^t = (c\_t + d\_t)/2$ and the radius $\rho\_t = (d\_t - c\_t)/2$, and also the number *N* of points of the partition of the boundary of the region *G*, i.e. of the circle.


**Step 2.** Using the decomposition (37) for the complex values $\lambda\_j$ on the boundary of *G*, determine the number *m* of eigenvalues that lie in the selected area *G* by the formula

$$m = s\_0 = \frac{1}{N} \sum\_{j=1}^{N} \rho\_t \exp\left(i\frac{2\pi j}{N}\right) \sum\_{r=1}^{n} \frac{v\_{rr}}{u\_{rr}}.$$

**Step 3.** The approximate values of the eigenvalues we find by solving the system of equations (24), after calculating its right-hand sides by the formula (42).
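The boundary sums $s\_k$ of (42) can be illustrated as follows. To keep the sketch self-contained, a central-difference approximation of $f'/f$ stands in for the LU-based sum $\sum\_r v\_{rr}/u\_{rr}$ that the chapter's procedure uses; the function name and parameters are illustrative only:

```python
import numpy as np

def boundary_sums(Dfun, center, radius, N=256, kmax=2):
    """s_k ~ (1/N) sum_j (lam_j)^k * rho * exp(i 2 pi j/N) * f'(lam_j)/f(lam_j),
    with lam_j = center + radius*exp(i 2 pi j/N); s_0 counts eigenvalues inside."""
    h = 1e-6
    f = lambda z: np.linalg.det(Dfun(z))
    # central difference stands in for the LU-based sum of v_rr/u_rr in (42)
    ratio = lambda z: (f(z + h) - f(z - h)) / (2 * h) / f(z)
    s = np.zeros(kmax + 1, dtype=complex)
    for j in range(1, N + 1):
        w = radius * np.exp(2j * np.pi * j / N)
        lam = center + w
        contrib = w * ratio(lam)
        for k in range(kmax + 1):
            s[k] += lam ** k * contrib
    return s / N
```

For D(λ) = diag(λ − 1, λ − 3) on the circle |λ| = 2, the sketch gives s₀ ≈ 1 (one eigenvalue inside the circle) and s₁ ≈ 1 (its value), since the second eigenvalue λ = 3 lies outside.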

**Step 4.** Using the decomposition (37) for real values λ, we refine all eigenvalues that fall in the area *G* using the Newton method

$$\lambda\_{l+1} = \lambda\_l - 1 \Big/ \sum\_{r=1}^{n} \frac{v\_{rr}}{u\_{rr}},$$

or the bilateral analogue (44) of Newton's method. As initial approximations we take the approximate values obtained in *Step 3*.

**Step 5.** Go to Step 2.

**Step 6.** If necessary, we correct the area *G* by changing its center and/or radius and go to Step 2; otherwise go to Step 7.

**Step 7.** The end.
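Algorithm 1 can be sketched end to end under simplifying assumptions: `np.linalg.det` plus central differences stand in for the decompositions (37), and instead of solving the moment system (24) we refine a single root starting from the centroid $s\_1/s\_0$ (adequate when *m* = 1). All names here are ours, not the chapter's:

```python
import numpy as np

def eigenvalues_in_circle(Dfun, center, radius, N=256):
    """Sketch of Algorithm 1: Step 1 fixes the circle, Steps 2-3 count the zeros of
    f(lam) = det D(lam) inside it and form an initial guess, Step 4 refines by Newton."""
    h = 1e-6
    f = lambda z: np.linalg.det(Dfun(z))
    ratio = lambda z: (f(z + h) - f(z - h)) / (2 * h) / f(z)   # ~ f'/f
    s0 = s1 = 0.0 + 0.0j
    for j in range(1, N + 1):                                  # Steps 2-3: boundary sums
        w = radius * np.exp(2j * np.pi * j / N)
        contrib = w * ratio(center + w)
        s0 += contrib
        s1 += (center + w) * contrib
    m = int(round((s0 / N).real))                              # number of zeros inside G
    if m == 0:
        return []
    lam = (s1 / N) / m                                         # centroid of the zeros
    for _ in range(50):                                        # Step 4: Newton refinement
        step = 1.0 / ratio(lam)
        lam = lam - step
        if abs(step) < 1e-12:
            break
    return [lam]
```

With D(λ) = diag(λ − 1, λ − 3) and the circle |λ| < 2, the sketch finds the single enclosed eigenvalue λ = 1.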

300 Nonlinearity, Bifurcation and Chaos – Theory and Applications

Application of a modification of the algorithm to linear two-parameter problems was considered in [16].

Note that if two eigenvalue curves intersect at some point, this point is called a bifurcation point (or branch point). A sufficient criterion for the existence of such points has been known for a long time (see, for example, [8]): the point $(\lambda\_b, \mu\_b)$ is a bifurcation point of the equation $f(\lambda, \mu) \equiv \det \mathbf{D}\_n(\lambda, \mu) = 0$ if the conditions

$$f'\_{\lambda}(\lambda, \mu) = 0, \quad f'\_{\mu}(\lambda, \mu) = 0 \tag{45}$$

are satisfied, and the second-order partial derivatives are different from zero. But this criterion was not often used in practical calculations, because it requires the calculation of derivatives of the determinant of a matrix.

**5. Algorithm of finding the bifurcation points of eigenvalue curves**