**Theorem 2.2.1** All matrices $A_i$ generated by Algorithm 2.2.1 are unitarily similar.

**Proof**

Since $Q_i^T A_i = R_i$, we have $A_{i+1} = R_i Q_i = Q_i^T A_i Q_i$. Applying this relationship repeatedly, it follows that

$$A_{i+1} = Q_i^T Q_{i-1}^T \cdots Q_0^T A_0 Q_0 \cdots Q_{i-1} Q_i.$$

The matrix $Q := Q_0 \cdots Q_i$ is obviously orthogonal, and the theorem is proved.
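To make the iteration concrete, here is a minimal pure-Python sketch of the basic (unshifted) QR iteration; the Gram-Schmidt-based factorization, the helper names, and the test matrix are our own illustrative choices, not the book's implementation.

```python
def qr_decompose(A):
    """QR factorization of a square matrix via classical Gram-Schmidt."""
    n = len(A)
    cols = [[A[i][j] for i in range(n)] for j in range(n)]  # columns of A
    Q_cols = []                                # orthonormal columns of Q
    R = [[0.0] * n for _ in range(n)]
    for j, v in enumerate(cols):
        w = v[:]
        for i, q in enumerate(Q_cols):
            R[i][j] = sum(q[k] * v[k] for k in range(n))
            w = [w[k] - R[i][j] * q[k] for k in range(n)]
        R[j][j] = sum(x * x for x in w) ** 0.5
        Q_cols.append([x / R[j][j] for x in w])
    Q = [[Q_cols[j][i] for j in range(n)] for i in range(n)]
    return Q, R

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def qr_iteration(A, steps=50):
    """A_{i+1} = R_i Q_i; every iterate is orthogonally similar to A_0."""
    for _ in range(steps):
        Q, R = qr_decompose(A)
        A = matmul(R, Q)
    return A

A0 = [[2.0, 1.0], [1.0, 2.0]]   # symmetric test matrix, eigenvalues 3 and 1
Ak = qr_iteration(A0)           # converges toward diag(3, 1)
```

For this matrix the off-diagonal entries shrink like $(\lambda_2/\lambda_1)^i = (1/3)^i$, which already illustrates the slow linear convergence discussed next.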

Let us look briefly at some properties of the QR algorithm:

**a.** The QR factorization of a full matrix requires $O(n^3)$ flops per factorization, so the whole QR iteration needs $O(n^4)$ flops, which is objectively very slow. The slowness of the basic form of the algorithm can be overcome with two strategies: transforming the matrix to Hessenberg form and introducing shifts.

Let $A \in \mathbb{R}^{(n,n)}$. Then $Q_i, R_i \in \mathbb{R}^{(n,n)}$ for every $i$, and the whole algorithm is performed in real arithmetic. If all eigenvalues of $A$ have distinct moduli, then they are all real and the algorithm converges. However, if $|\lambda_{i+1}/\lambda_i| \approx 1$ for at least one pair of eigenvalues, the QR algorithm converges very slowly. If the matrix $A$ has complex eigenvalues, the basic form of the QR algorithm does not converge. We have already seen the need for the Hessenberg form of a matrix, which the following definition introduces.

**Definition 2.2.2.** A matrix $A$ is said to be in **upper Hessenberg form** if

$$a_{ij} = 0 \text{ for } i - j \geq 2,$$

i.e. it is an upper triangular matrix with one additional subdiagonal below the main diagonal.
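The definition translates directly into a small predicate; this is an illustrative sketch, not code from the text (0-based indices give the same condition $i - j \geq 2$):

```python
def is_upper_hessenberg(A, tol=0.0):
    """True iff a_ij = 0 whenever i - j >= 2 (below the first subdiagonal)."""
    n = len(A)
    return all(abs(A[i][j]) <= tol
               for i in range(n) for j in range(n) if i - j >= 2)

H = [[1, 2, 3],
     [4, 5, 6],
     [0, 7, 8]]                  # one nonzero subdiagonal is allowed
print(is_upper_hessenberg(H))                       # True
print(is_upper_hessenberg([[1, 0, 0],
                           [1, 1, 0],
                           [1, 1, 1]]))             # False: entry (3,1) != 0
```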

The reduction to Hessenberg form can be performed using Householder reflectors or Givens rotations. Let us look at the reduction to Hessenberg form using Householder reflectors.

$$\text{Let } A = \begin{pmatrix} a_{11} & \mathbf{c}^{T} \\ \mathbf{b} & B \end{pmatrix} \text{ with } \mathbf{b} \neq \mathbf{0}.$$

Our goal is to determine $\boldsymbol{\omega} \in \mathbb{C}^{n-1}$ with $\|\boldsymbol{\omega}\|_2 = 1$ and

$$Q_1 \mathbf{b} := \left( I_{n-1} - 2 \boldsymbol{\omega} \boldsymbol{\omega}^H \right) \mathbf{b} = k \mathbf{e}^1,$$

where $I_{n-1}$ is the identity matrix of order $n-1$ and $\mathbf{e}^1$ is the first column of $I_{n-1}$. We define the Householder reflector as follows:

$$P_1 := \begin{pmatrix} 1 & \mathbf{0}^{T} \\ \mathbf{0} & Q_1 \end{pmatrix}.$$

Now it holds that

64 Applied Linear Algebra in Action
$$A_1 := P_1 A P_1 = \begin{pmatrix} a_{11} & \mathbf{c}^T Q_1 \\ k\,\mathbf{e}^1 & Q_1 B Q_1 \end{pmatrix}.$$

Obviously, the first column of the matrix $A_1$ now has the form required by the upper Hessenberg structure. In this way we have shown that the first column is converted into a suitable form. An analogous procedure can be applied to columns $2, 3, \ldots, n-1$.
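The column-by-column reduction described above can be sketched in pure Python for a real matrix; the sign convention for $k$ (chosen to avoid cancellation) and the $4 \times 4$ test matrix are our own assumptions, not taken from the text.

```python
def householder_hessenberg(A):
    """Reduce a real square matrix to upper Hessenberg form by a similarity
    transformation built from Householder reflectors."""
    n = len(A)
    H = [row[:] for row in A]
    for j in range(n - 2):
        # b: part of column j below the diagonal, to be mapped onto k*e^1
        b = [H[i][j] for i in range(j + 1, n)]
        norm_b = sum(x * x for x in b) ** 0.5
        if norm_b == 0.0:
            continue                             # column already reduced
        k = -norm_b if b[0] >= 0 else norm_b     # sign avoids cancellation
        w = b[:]
        w[0] -= k
        norm_w = sum(x * x for x in w) ** 0.5
        w = [x / norm_w for x in w]              # now ||w||_2 = 1
        # left multiplication by P = diag(I, I - 2 w w^T) ...
        for col in range(n):
            s = sum(w[i] * H[j + 1 + i][col] for i in range(len(w)))
            for i in range(len(w)):
                H[j + 1 + i][col] -= 2.0 * s * w[i]
        # ... and right multiplication by the same P (similarity transform)
        for row in range(n):
            s = sum(w[i] * H[row][j + 1 + i] for i in range(len(w)))
            for i in range(len(w)):
                H[row][j + 1 + i] -= 2.0 * s * w[i]
    return H

A = [[4.0, 1.0, 2.0, 3.0],
     [1.0, 3.0, 0.0, 1.0],
     [2.0, 0.0, 2.0, 1.0],
     [3.0, 1.0, 1.0, 1.0]]
H = householder_hessenberg(A)  # entries below the first subdiagonal ~ 0
```

Since this test matrix is symmetric, the resulting Hessenberg matrix is in fact tridiagonal, which anticipates the symmetric case treated in Section 2.3.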

For the implementation of the QR algorithm it is important that each iteration preserves the structure of the matrix. For a matrix in upper Hessenberg form the following theorem applies.

**Theorem 2.2.2** If $A$ is an upper Hessenberg matrix, then the factor $Q$ in its QR factorization $A = QR$ is also an upper Hessenberg matrix.

By the above theorem, $RQ$ is also an upper Hessenberg matrix, being the product of an upper triangular matrix and an upper Hessenberg matrix.

Preserving the structure of the matrix is very important for the efficiency of the algorithm. Namely, if $A$ is an upper Hessenberg matrix, its QR factorization requires only $O(n^2)$ operations instead of the $O(n^3)$ needed for the QR factorization (decomposition) of a full matrix.
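The $O(n^2)$ cost is easy to see with Givens rotations: only the $n-1$ subdiagonal entries have to be annihilated, and each rotation touches two rows. Below is a sketch of one QR step $H \to RQ$ for an upper Hessenberg matrix; the function name and test matrix are illustrative assumptions.

```python
def hessenberg_qr_step(H):
    """One QR step H -> RQ for an upper Hessenberg H, using n-1 Givens
    rotations: O(n^2) work instead of O(n^3) for a full matrix."""
    n = len(H)
    R = [row[:] for row in H]
    rots = []
    for j in range(n - 1):
        a, b = R[j][j], R[j + 1][j]
        r = (a * a + b * b) ** 0.5
        c, s = (1.0, 0.0) if r == 0.0 else (a / r, b / r)
        rots.append((c, s))
        for k in range(j, n):                # rotate rows j and j+1
            t1, t2 = R[j][k], R[j + 1][k]
            R[j][k] = c * t1 + s * t2
            R[j + 1][k] = -s * t1 + c * t2
        R[j + 1][j] = 0.0                    # annihilated by construction
    A = [row[:] for row in R]                # form RQ, Q = G_0^T ... G_{n-2}^T
    for j, (c, s) in enumerate(rots):
        for k in range(n):                   # rotate columns j and j+1
            t1, t2 = A[k][j], A[k][j + 1]
            A[k][j] = c * t1 + s * t2
            A[k][j + 1] = -s * t1 + c * t2
    return A

H = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 4.0]]
H1 = hessenberg_qr_step(H)   # still upper Hessenberg, same eigenvalues
```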

Let us look at one more advantage of transforming the matrix to Hessenberg form. Namely, $A_i \to R$ $(i \to \infty)$, where $R$ is an upper triangular matrix. Because each $A_i$ is an upper Hessenberg matrix, convergence means that the subdiagonal elements tend to zero, i.e.

$$a_{j+1,j}^{(i)} \to 0 \quad (i \to \infty), \qquad j = 1, 2, \cdots, n-1,$$

where $a_{j+1,j}^{(i)}$ are the elements of the matrix $A_i$. It is now clear that, for sufficiently large $i$, the eigenvalues of the initial matrix $A$ can be read off, on the basis of Theorem 2.1.3, as the diagonal elements of $A_i$.

For a further improvement of the algorithm, shifts are used. The idea is based on the simple fact that if the eigenvalues of $A$ are $\lambda_i$, then the eigenvalues of the matrix $A - \sigma I$ are $\lambda_i - \sigma$. If the shift $\sigma$ is chosen close to an eigenvalue, the algorithm accelerates strongly.

Let $A_0 := A$.

**Algorithm 2.2.2.** (QR algorithm with shift)

For *i* = 0, 1, ⋯ until convergence

Choose a shift $\sigma_i$ near an eigenvalue

Decompose $A_i - \sigma_i I = Q_i R_i$ (QR decomposition)

$$A_{i+1} = R_i Q_i + \sigma_i I$$

End

It is easy to prove that all matrices generated by Algorithm 2.2.2 are unitarily similar, just as in Algorithm 2.2.1. From the above it is clear that, in the case of real matrices with real eigenvalues, the best choice for the shift parameter is $\sigma_i = a_{n,n}^{(i)}$.
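A sketch of Algorithm 2.2.2 with this choice $\sigma_i = a_{n,n}^{(i)}$, reusing a classical Gram-Schmidt factorization; the $2 \times 2$ test matrix and the simple stopping test are our own illustrative assumptions (a production code would also deflate converged eigenvalues).

```python
def qr_decompose(A):
    """QR factorization via classical Gram-Schmidt (small dense matrices)."""
    n = len(A)
    cols = [[A[i][j] for i in range(n)] for j in range(n)]
    Q_cols, R = [], [[0.0] * n for _ in range(n)]
    for j, v in enumerate(cols):
        w = v[:]
        for i, q in enumerate(Q_cols):
            R[i][j] = sum(q[k] * v[k] for k in range(n))
            w = [w[k] - R[i][j] * q[k] for k in range(n)]
        R[j][j] = sum(x * x for x in w) ** 0.5
        Q_cols.append([x / R[j][j] for x in w])
    return [[Q_cols[j][i] for j in range(n)] for i in range(n)], R

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def shifted_qr(A, steps=5, tol=1e-12):
    """QR iteration with the shift sigma_i = a_nn^(i) (Algorithm 2.2.2)."""
    n = len(A)
    for _ in range(steps):
        if abs(A[n - 1][n - 2]) < tol:   # last subdiagonal entry converged
            break
        s = A[n - 1][n - 1]
        shifted = [[A[i][j] - (s if i == j else 0.0) for j in range(n)]
                   for i in range(n)]
        Q, R = qr_decompose(shifted)
        RQ = matmul(R, Q)
        A = [[RQ[i][j] + (s if i == j else 0.0) for j in range(n)]
             for i in range(n)]
    return A

A = [[4.0, 1.0], [1.0, 2.0]]     # eigenvalues 3 +/- sqrt(2)
Ak = shifted_qr(A)
```

After a handful of steps the diagonal carries the eigenvalues $3 \pm \sqrt{2}$; the shift makes the subdiagonal entry shrink far faster than the linear rate of the unshifted iteration.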

For a further analysis of the shift parameters we will need the concept of an unreduced upper Hessenberg matrix as well as the implicit Q theorem.

**Definition 2.2.3** An upper Hessenberg matrix $H$ is an **unreduced upper Hessenberg matrix** if its first subdiagonal contains not a single zero.

**Theorem 2.2.3.** Let $Q^T A Q = H$ be an unreduced upper Hessenberg matrix with positive subdiagonal elements $h_{k+1,k}$, where $Q$ is a unitary matrix. The columns of the matrix $Q$ and of the matrix $H$, from the second to the $n$-th, are uniquely determined by the first column of $Q$.

**Proof**

Let $Q = (\mathbf{q}^1, \mathbf{q}^2, \cdots, \mathbf{q}^n)$ and suppose $\mathbf{q}^1, \mathbf{q}^2, \cdots, \mathbf{q}^k$ and the first $k-1$ columns of the matrix $H$ have been determined. The proof is carried out by mathematical induction on $k$. For $k = 1$, $\mathbf{q}^1$ is determined and the process can start. Because $QH = AQ$ and $H = (h_{ij})$ is an upper Hessenberg matrix, it holds that

$$h_{k+1,k}\mathbf{q}^{k+1} + h_{kk}\mathbf{q}^k + \cdots + h_{1k}\mathbf{q}^{1} = A\mathbf{q}^k.$$

If we multiply the last equality by $(\mathbf{q}^i)^H$, we get

$$h_{ik} = \left(\mathbf{q}^i\right)^H A \mathbf{q}^k \quad (i = 1, 2, \cdots, k).$$

From here the $k$-th column of $H$ is determined, except for the element $h_{k+1,k}$.

Because $h_{k+1,k} \neq 0$, we have

$$\mathbf{q}^{k+1} = \frac{1}{h_{k+1,k}} \left( A \mathbf{q}^k - \sum_{i=1}^k h_{ik}\, \mathbf{q}^i \right).$$

From $(\mathbf{q}^{k+1})^H \mathbf{q}^{k+1} = 1$ and the positivity of $h_{k+1,k}$, we obtain $h_{k+1,k}$ uniquely.

The theorem is proved.


**Remark 2.2.3** The condition $h_{k+1,k} > 0$ in the previous theorem is needed only to ensure the uniqueness of the matrices $Q$ and $H$.

With the help of the implicit Q theorem we discuss the selection of the shift when the real matrix $A = A_0$ has complex eigenvalues. Then one has to perform a double shift, with $\sigma$ and $\bar{\sigma}$. Namely,

$$\begin{aligned} A_0 - \sigma I &= Q_1 R_1, & A_1 &= R_1 Q_1 + \sigma I, \\ A_1 - \overline{\sigma} I &= Q_2 R_2, & A_2 &= R_2 Q_2 + \overline{\sigma} I. \end{aligned}$$

From there it is easy to get $A_2 = Q_2^T Q_1^T A_0 Q_1 Q_2$. The matrices $Q_1$ and $Q_2$ can be chosen so that $Q_1 Q_2$ is a real matrix, and therefore the matrix $A_2$ is real. Applying the two decompositions, we obtain

$$\begin{split} Q_1 Q_2 R_2 R_1 &= Q_1 \left( A_1 - \overline{\sigma} I \right) R_1 = Q_1 \left( R_1 Q_1 + (\sigma - \overline{\sigma}) I \right) R_1 = Q_1 R_1 Q_1 R_1 + (\sigma - \overline{\sigma}) Q_1 R_1 \\ &= \left( A_0 - \sigma I \right)^2 + \left( \sigma - \overline{\sigma} \right) \left( A_0 - \sigma I \right) = A_0^2 - \left( \sigma + \overline{\sigma} \right) A_0 + \left| \sigma \right|^2 I =: M. \end{split}$$

Because $\sigma + \overline{\sigma} \in \mathbb{R}$ and $|\sigma|^2 \in \mathbb{R}$, the matrix $M$ is real. Then $Q_1 Q_2 R_2 R_1$ is a QR factorization of a real matrix, which means that $Q_1 Q_2$ and $R_2 R_1$ can be chosen as real matrices. The first column of the matrix $Q_1 Q_2$ is proportional to the first column of the matrix $M$, and the other columns are calculated by applying the implicit Q theorem.

#### **2.3. Mathematical background for the Hermitian (symmetric) case**

In this section we look at the problem of eigenvalues in the case of a symmetric or Hermitian matrix.

**Definition 2.3.1.** A matrix $A \in \mathbb{R}^{(n,n)}$ is called **symmetric** if $A = A^T$.

**Definition 2.3.2.** A matrix $A \in \mathbb{C}^{(n,n)}$ is called **Hermitian** if $A = A^H$.

**Remark 2.3.1** Symmetric matrices are just the special case of Hermitian matrices in which the matrix elements are real numbers. Therefore, we will formulate the theorems for Hermitian matrices.

**Remark 2.3.2** Hermitian and symmetric matrices are normal matrices, which means that they can be diagonalized.

The following theorem gives important information on the reality of the eigenvalues of Hermitian (symmetric) matrices. This feature greatly facilitates the treatment of the eigenvalue problem for this class of matrices, which makes this class of matrices applicable in practice.

**Theorem 2.3.1.** If $A$ is a Hermitian (symmetric) matrix, then:

**a.** The eigenvalues of $A$ are all real numbers.

**b.** Eigenvectors from different eigenspaces are orthogonal.
Since all the eigenvalues are real, they can be compared. Therefore, we assume that the eigenvalues are ordered by size, i.e. $\lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_n$, and that $\mathbf{x}^1, \ldots, \mathbf{x}^n$ are the corresponding orthonormal eigenvectors.

If the matrix $A$ is symmetric, the symmetry leads to a significant acceleration of the algorithms compared with the unsymmetric case. We demonstrate this for the QR algorithm presented in Section 2.2 for the unsymmetric case. In the symmetric case it is important to note that the upper Hessenberg form of a symmetric matrix is a tridiagonal matrix, whose QR decomposition needs only $O(n)$ operations. It is also important that the QR algorithm preserves the structure of the matrix, i.e. all matrices $A_i$ are tridiagonal. For the shift in this case one usually takes the Wilkinson shift, which is defined as the eigenvalue of the matrix

$$\begin{pmatrix} a_{n-1,n-1} & a_{n-1,n} \\ a_{n-1,n} & a_{n,n} \end{pmatrix}$$

that is closest to $a_{n,n}$.

For the QR algorithm with Wilkinson shifts the following theorem applies, whose proof is given in [2].

**Theorem 2.3.2** (Wilkinson) The QR algorithm with Wilkinson shifts for a symmetric tridiagonal matrix converges globally and at least linearly. For almost all matrices it converges asymptotically cubically.
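For the trailing $2 \times 2$ block the Wilkinson shift has a simple closed form; the function below is an illustrative sketch (using the variant of the formula that avoids cancellation), not code from the text.

```python
def wilkinson_shift(a, b, c):
    """Eigenvalue of [[a, b], [b, c]] closest to the trailing entry c."""
    d = (a - c) / 2.0
    if d == 0.0 and b == 0.0:
        return c                       # block already diagonal
    disc = (d * d + b * b) ** 0.5
    if d == 0.0:
        return c - abs(b)              # both roots equally close; pick one
    sign = 1.0 if d > 0.0 else -1.0
    return c - sign * b * b / (abs(d) + disc)

# trailing block of a symmetric tridiagonal matrix: [[4, 1], [1, 2]]
mu = wilkinson_shift(4.0, 1.0, 2.0)    # eigenvalues are 3 +/- sqrt(2)
```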

Now we introduce the very important concept of the Rayleigh quotient, because it gives the best estimate of an eigenvalue for a given vector $\mathbf{x} \in \mathbb{C}^n$, $\mathbf{x} \neq \mathbf{0}$.

**Definition 2.3.3** Let $A$ be a Hermitian (symmetric) matrix. For a given vector $\mathbf{x} \in \mathbb{C}^n$, $\mathbf{x} \neq \mathbf{0}$, the **Rayleigh quotient** is defined as $R(\mathbf{x}) := \dfrac{\mathbf{x}^H A \mathbf{x}}{\mathbf{x}^H \mathbf{x}}$.
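For a real symmetric matrix the definition translates directly into code (an illustrative sketch with our own test matrix):

```python
def rayleigh_quotient(A, x):
    """R(x) = (x^T A x) / (x^T x) for a real symmetric matrix A."""
    n = len(x)
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    return (sum(x[i] * Ax[i] for i in range(n))
            / sum(x[i] * x[i] for i in range(n)))

A = [[2.0, 1.0], [1.0, 2.0]]             # eigenvalues 1 and 3
print(rayleigh_quotient(A, [1.0, 1.0]))  # 3.0: x is an eigenvector
print(rayleigh_quotient(A, [1.0, 0.0]))  # 2.0: always between 1 and 3
```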

The importance of the Rayleigh quotient is seen in the following theorems.

**Theorem 2.3.3.** (Features of the Rayleigh quotient)

**a.** For all $\mathbf{x} \in \mathbb{C}^n$, $\mathbf{x} \neq \mathbf{0}$, it holds that $\lambda_1 \leq R(\mathbf{x}) \leq \lambda_n$.

**b.** $\lambda_1 = \min_{\mathbf{x} \neq \mathbf{0}} R(\mathbf{x})$, $\lambda_n = \max_{\mathbf{x} \neq \mathbf{0}} R(\mathbf{x})$.

**c.** If $\mathbf{x} \neq \mathbf{0}$ with $\lambda_1 = R(\mathbf{x})$, respectively $\lambda_n = R(\mathbf{x})$, then $\mathbf{x}$ is an eigenvector corresponding to $\lambda_1$, respectively $\lambda_n$.

**d.** $\lambda_i = \min \{R(\mathbf{x}) : \mathbf{x}^H \mathbf{x}^j = 0,\ j = 1, \cdots, i-1,\ \mathbf{x} \neq \mathbf{0}\} = \max \{R(\mathbf{x}) : \mathbf{x}^H \mathbf{x}^j = 0,\ j = i+1, \cdots, n,\ \mathbf{x} \neq \mathbf{0}\}$, where $\mathbf{x}^j$ are the orthonormal eigenvectors.
Paragraph (d) in the previous theorem is known as the Rayleigh principle. However, it is numerically worthless, because, for example, to determine $\lambda_2$ we need the eigenvector $\mathbf{x}^1$ corresponding to the eigenvalue $\lambda_1$. To overcome this disadvantage, the min max principle of Poincaré, stated in the following theorem, was introduced.

**Theorem 2.3.4.** (min max principle of Poincaré)


$$\lambda_i = \min_{\dim V = i} \ \max_{\mathbf{x} \in V \setminus \{\mathbf{0}\}} R\left(\mathbf{x}\right) = \max_{\dim V = n-i+1} \ \min_{\mathbf{x} \in V \setminus \{\mathbf{0}\}} R\left(\mathbf{x}\right)$$

The following formulation is known as the min max principle of Courant-Fischer and is often more convenient to use.

**Theorem 2.3.5.** (min max principle of Courant-Fischer)

$$\lambda_i = \min_{\{\mathbf{p}^1, \cdots, \mathbf{p}^{i-1}\}} \max \left\{ R\left(\mathbf{x}\right) : \mathbf{x}^H \mathbf{p}^j = 0, \ j = 1, \cdots, i-1, \ \mathbf{x} \neq \mathbf{0} \right\} = \max_{\{\mathbf{p}^1, \cdots, \mathbf{p}^{n-i}\}} \min \left\{ R\left(\mathbf{x}\right) : \mathbf{x}^H \mathbf{p}^j = 0, \ j = 1, \cdots, n-i, \ \mathbf{x} \neq \mathbf{0} \right\}.$$

From the above it is clear that these theorems are important for the localization of eigenvalues.

The following algorithm, known in linear algebra as the Rayleigh quotient iteration, reads as follows.

Let $A \in \mathbb{R}^{(n,n)}$ be a symmetric matrix and $\mathbf{x}^0$ a normalized initial vector, i.e. $\|\mathbf{x}^0\|_2 = 1$.

**Algorithm 2.3.1.** (Rayleigh quotient iteration)

$$\sigma_0 = \left(\mathbf{x}^0\right)^T A \mathbf{x}^0$$

For $i = 1, 2, \cdots$ until convergence

Solve $\left(A - \sigma_{i-1} I\right) \mathbf{y}^i = \mathbf{x}^{i-1}$

$$\mathbf{x}^i = \frac{\mathbf{y}^i}{\|\mathbf{y}^i\|_2}$$

$$\sigma_i = \left(\mathbf{x}^i\right)^T A \mathbf{x}^i$$

End

**Theorem 2.3.6.** The Rayleigh quotient iteration converges cubically.
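A self-contained pure-Python sketch of Algorithm 2.3.1; the Gaussian-elimination solver, the stopping criterion, and the test matrix are our own illustrative assumptions. Note that near convergence $A - \sigma_{i-1} I$ becomes nearly singular, so a robust code must stop (or deflate) before solving an exactly singular system; here a residual test does that.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(b)
    M = [A[i][:] + [b[i]] for i in range(n)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k]
                              for k in range(r + 1, n))) / M[r][r]
    return x

def rayleigh_quotient_iteration(A, x0, steps=20, tol=1e-6):
    """sigma_i = (x^i)^T A x^i; solve (A - sigma_{i-1} I) y^i = x^{i-1}."""
    n = len(x0)
    nrm = sum(v * v for v in x0) ** 0.5
    x = [v / nrm for v in x0]
    sigma = sum(x[i] * sum(A[i][j] * x[j] for j in range(n))
                for i in range(n))
    for _ in range(steps):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        res = sum((Ax[i] - sigma * x[i]) ** 2 for i in range(n)) ** 0.5
        if res < tol:                  # x is (numerically) an eigenvector
            break
        shifted = [[A[i][j] - (sigma if i == j else 0.0) for j in range(n)]
                   for i in range(n)]
        y = solve(shifted, x)
        nrm = sum(v * v for v in y) ** 0.5
        x = [v / nrm for v in y]
        sigma = sum(x[i] * sum(A[i][j] * x[j] for j in range(n))
                    for i in range(n))
    return sigma, x

A = [[2.0, 1.0], [1.0, 2.0]]           # eigenvalues 1 and 3
sigma, x = rayleigh_quotient_iteration(A, [1.0, 0.5])
```

Starting from $\mathbf{x}^0 = (1, 0.5)^T$ the iteration locks onto $\lambda = 3$ within a few steps, consistent with the cubic convergence of Theorem 2.3.6.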

Finally, we point out that the most effective method for symmetric matrices is the Divide-and-Conquer method. This method was introduced by Cuppen [3], and the first effective implementation is the work of Gu and Eisenstat [4]. More information about this method can be found in [4].
