2. A matrix with the exponential function as an eigenvector

Here, we consider the N × N antisymmetric, tridiagonal matrix

$$\mathbf{D}\_{N} = \begin{pmatrix} \frac{-e^{-v\Delta}}{2\chi(v,\Delta)} & \frac{1}{2\chi(v,\Delta)} & 0 & \dots & 0 & 0 & 0\\ \frac{-1}{2\chi(v,\Delta)} & 0 & \frac{1}{2\chi(v,\Delta)} & \dots & 0 & 0 & 0\\ 0 & \frac{-1}{2\chi(v,\Delta)} & 0 & \dots & 0 & 0 & 0\\ \vdots & & & & & &\\ 0 & 0 & 0 & \dots & 0 & \frac{1}{2\chi(v,\Delta)} & 0\\ 0 & 0 & 0 & \dots & \frac{-1}{2\chi(v,\Delta)} & 0 & \frac{1}{2\chi(v,\Delta)}\\ 0 & 0 & 0 & \dots & 0 & \frac{-1}{2\chi(v,\Delta)} & \frac{e^{v\Delta}}{2\chi(v,\Delta)} \end{pmatrix},\tag{1}$$

where $v \in \mathbb{C}$ (it can be purely real or purely imaginary), $\Delta \in \mathbb{R}^{+}$, and $\chi(v,\Delta) \coloneqq \sinh(v\Delta)/v \approx \Delta + v^{2}\Delta^{3}/6 + O(\Delta^{5})$. This function $\chi(v,\Delta)$ is well defined for $v = 0$, with value $\chi(0,\Delta) = \Delta$. This matrix is interesting because, as we will see below, it represents a derivation on a partition. A rescaled matrix $\overline{\mathbf{D}}_N$ is defined as

$$
\overline{\mathbf{D}}\_{N} \coloneqq \begin{pmatrix}
-1/z & 1 & 0 & \dots & 0 & 0 & 0 \\
-1 & 0 & 1 & \dots & 0 & 0 & 0 \\
0 & -1 & 0 & \dots & 0 & 0 & 0 \\
\vdots & & & & & & \\
0 & 0 & 0 & \dots & 0 & 1 & 0 \\
0 & 0 & 0 & \dots & -1 & 0 & 1 \\
0 & 0 & 0 & \dots & 0 & -1 & z
\end{pmatrix}, \tag{2}
$$

where $z = e^{v\Delta}$, and

$$\mathbf{D}\_{N} \coloneqq \frac{\overline{\mathbf{D}}\_{N}}{2\chi(v,\Delta)}.\tag{3}$$

We are mainly interested in finding the eigenvalues and the corresponding eigenvectors of these matrices.
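Before doing so, definitions (1)-(3) can be realized numerically. The following NumPy sketch (the helper names `chi`, `D_bar`, and `D` are ours, not from the text) builds both matrices and checks that for $v = 0$, where $\chi(0,\Delta) = \Delta$, the interior rows of $\mathbf{D}_N$ reduce to the familiar centered difference with spacing $\Delta$:

```python
import numpy as np

def chi(v, delta):
    """chi(v, Delta) := sinh(v Delta)/v, with the limiting value Delta at v = 0."""
    return delta if v == 0 else np.sinh(v * delta) / v

def D_bar(N, v, delta):
    """Rescaled matrix of Eq. (2): -1 on the subdiagonal, +1 on the
    superdiagonal, and corner entries -1/z and z, with z = exp(v Delta)."""
    z = np.exp(v * delta)
    M = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)).astype(complex)
    M[0, 0] = -1.0 / z
    M[-1, -1] = z
    return M

def D(N, v, delta):
    """Derivative matrix of Eqs. (1) and (3): D_N = D_bar_N / (2 chi(v, Delta))."""
    return D_bar(N, v, delta) / (2.0 * chi(v, delta))

# For v = 0 we have chi(0, Delta) = Delta, and the interior rows of D_N
# reduce to the standard centered difference (f_{j+1} - f_{j-1}) / (2 Delta).
M = D(5, 0.0, 0.1)
assert abs(M[1, 2] - 1.0 / (2 * 0.1)) < 1e-12
assert abs(M[1, 0] + 1.0 / (2 * 0.1)) < 1e-12
```

A complex dtype is used so that the same code also handles purely imaginary $v$.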

We start our study with a result about the determinant of $\overline{\mathbf{D}}_N - \lambda\mathbf{I}_N$,

$$\begin{aligned} \left| \overline{\mathbf{D}}\_{N} - \lambda \mathbf{I}\_{N} \right| &= \left| \overline{\mathbf{D}}\_{N} + \alpha \mathbf{I}\_{N} \right| \\ &= \begin{vmatrix} \alpha - 1/z & 1 & 0 & \dots & 0 & 0 & 0 \\ -1 & \alpha & 1 & \dots & 0 & 0 & 0 \\ 0 & -1 & \alpha & \dots & 0 & 0 & 0 \\ \vdots & & & & & & \\ 0 & 0 & 0 & \dots & \alpha & 1 & 0 \\ 0 & 0 & 0 & \dots & -1 & \alpha & 1 \\ 0 & 0 & 0 & \dots & 0 & -1 & \alpha + z \end{vmatrix} \\ &= \left( \alpha - \frac{1}{z} \right) A\_{N-1}(\alpha) + A\_{N-2}(\alpha), \end{aligned} \tag{4}$$

where $\lambda = -\alpha$, and the determinant $A_j(\alpha)$ is defined as

$$A\_{j}(\alpha) \coloneqq \begin{vmatrix} \alpha & 1 & 0 & \dots & 0 & 0 & 0 \\ -1 & \alpha & 1 & \dots & 0 & 0 & 0 \\ 0 & -1 & \alpha & \dots & 0 & 0 & 0 \\ \vdots & & & & & & \\ 0 & 0 & 0 & \dots & \alpha & 1 & 0 \\ 0 & 0 & 0 & \dots & -1 & \alpha & 1 \\ 0 & 0 & 0 & \dots & 0 & -1 & \alpha + z \end{vmatrix} \tag{5}$$

$$= (\alpha + z) B\_{j-1}(\alpha) + B\_{j-2}(\alpha),$$

and

$$B\_{j}(\alpha) = \begin{vmatrix} \alpha & 1 & 0 & \dots & 0 & 0 \\ -1 & \alpha & 1 & \dots & 0 & 0 \\ 0 & -1 & \alpha & \dots & 0 & 0 \\ \vdots & & & & & \\ 0 & 0 & \dots & \alpha & 1 & 0 \\ 0 & 0 & \dots & -1 & \alpha & 1 \\ 0 & 0 & \dots & 0 & -1 & \alpha \end{vmatrix} . \tag{6}$$

Strikingly, we recognize the determinant $B_j(\alpha)$ as the Fibonacci polynomial of index $j+1$ [10, 11], i.e., $B_j(\alpha) = F_{j+1}(\alpha)$. Fibonacci polynomials are defined as

$$F\_0(\mathbf{x}) = \mathbf{0}, \quad F\_1(\mathbf{x}) = \mathbf{1}, \quad F\_j(\mathbf{x}) = \mathbf{x} F\_{j-1}(\mathbf{x}) + F\_{j-2}(\mathbf{x}), \quad j \ge 2. \tag{7}$$
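The identification $B_j(\alpha) = F_{j+1}(\alpha)$ can be spot-checked numerically. This minimal sketch (function names are ours) evaluates $F_j$ via the recursion (7) and compares it with the determinant (6):

```python
import numpy as np

def fibonacci_poly(j, x):
    """Evaluate the Fibonacci polynomial F_j at x via the recursion (7)."""
    if j == 0:
        return 0.0
    f_prev, f = 0.0, 1.0   # F_0, F_1
    for _ in range(j - 1):
        f_prev, f = f, x * f + f_prev
    return f

def B(j, alpha):
    """The j x j determinant of Eq. (6): alpha on the diagonal, +1 just above,
    -1 just below."""
    M = alpha * np.eye(j) + np.diag(np.ones(j - 1), 1) - np.diag(np.ones(j - 1), -1)
    return np.linalg.det(M)

# Check B_j(alpha) = F_{j+1}(alpha) for several sizes.
for j in range(1, 8):
    assert abs(B(j, 0.7) - fibonacci_poly(j + 1, 0.7)) < 1e-9
```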


Matrices Which are Discrete Versions of Linear Operations, in *Matrix Theory-Applications and Theorems*, http://dx.doi.org/10.5772/intechopen.74356

Since we have that $B_j(\alpha) = F_{j+1}(\alpha)$, and using the recursion relationship for Fibonacci polynomials, we also have that

$$A\_{j}(\alpha) = (\alpha + z)F\_{j}(\alpha) + F\_{j-1}(\alpha) = zF\_{j}(\alpha) + F\_{j+1}(\alpha),\tag{8}$$

and then

$$\begin{aligned} &|\overline{\mathbf{D}}\_{N} + \alpha \mathbf{I}\_{N}| \\ &= \left(\alpha - \frac{1}{z}\right) [zF\_{N-1}(\alpha) + F\_{N}(\alpha)] + zF\_{N-2}(\alpha) + F\_{N-1}(\alpha) \\ &= z[\alpha F\_{N-1}(\alpha) + F\_{N-2}(\alpha)] + \left(\alpha - \frac{1}{z}\right) F\_{N}(\alpha) \\ &= \left(\alpha + z - \frac{1}{z}\right) F\_{N}(\alpha). \end{aligned} \tag{9}$$

Then, the eigenvalues of the matrix $\overline{\mathbf{D}}_N$ are $\lambda_1 = z - 1/z = e^{v\Delta} - e^{-v\Delta} = 2\sinh(v\Delta)$ and $\lambda_m = -\alpha_m$, where $\alpha_m$ is the $m$-th root of the $N$-th Fibonacci polynomial, which is a polynomial of degree $N - 1$ [10, 11].
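This spectral characterization is easy to verify directly for a small example. The sketch below (helper names are ours) rebuilds $\overline{\mathbf{D}}_N$, obtains the coefficients of $F_N$ from the recursion (7), and compares the two spectra as unordered sets:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def D_bar(N, v, delta):
    """Rescaled derivative matrix of Eq. (2)."""
    z = np.exp(v * delta)
    M = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)).astype(complex)
    M[0, 0] = -1.0 / z
    M[-1, -1] = z
    return M

def fibonacci_coeffs(n):
    """Coefficients of F_n (lowest degree first), built from the recursion (7)."""
    f_prev, f = np.array([0.0]), np.array([1.0])  # F_0, F_1
    for _ in range(n - 1):
        f_prev, f = f, P.polyadd(P.polymulx(f), f_prev)
    return f

N, v, delta = 6, 0.4, 0.2
z = np.exp(v * delta)
spec = np.linalg.eigvals(D_bar(N, v, delta))
# Expected spectrum: lambda_1 = z - 1/z plus the negatives of the roots of F_N.
expected = np.append(-np.roots(fibonacci_coeffs(N)[::-1]), z - 1.0 / z)
key = lambda w: (round(w.real, 6), round(w.imag, 6))
assert np.allclose(sorted(spec, key=key), sorted(expected, key=key))
```

For real $v$ the spectrum consists of the single real eigenvalue $2\sinh(v\Delta)$ plus purely imaginary eigenvalues, since the roots of $F_N$ are purely imaginary.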

The system of simultaneous equations for the eigenvector $\mathbf{e}_m^T = (e_{m,1}, e_{m,2}, \dots, e_{m,N})$ corresponding to $\lambda_m$ can be put in a form similar to the recursion relationship for the Fibonacci polynomials, i.e.,

$$
e\_{m,2} = \lambda\_m e\_{m,1} + \frac{e\_{m,1}}{z},
\tag{10}
$$

$$
e\_{m,j+1} = \lambda\_m e\_{m,j} + e\_{m,j-1}, \quad 1 < j < N,\tag{11}
$$

$$
z\, e\_{m,N} = \lambda\_m e\_{m,N} + e\_{m,N-1} \,. \tag{12}
$$

This set of recursion relationships can be written as the matrix equation

$$
\begin{pmatrix} e\_{m,j} \\ e\_{m,j+1} \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & \lambda\_m \end{pmatrix} \begin{pmatrix} e\_{m,j-1} \\ e\_{m,j} \end{pmatrix}, \quad j = 1, \ldots, N,\tag{13}
$$

where $e_{m,0} = e_{m,1}/z$ and $e_{m,N+1} = z\,e_{m,N}$. Thus

$$
\begin{pmatrix} e\_{m,j} \\ e\_{m,j+1} \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & \lambda\_{m} \end{pmatrix}^{j} \begin{pmatrix} e\_{m,0} \\ e\_{m,1} \end{pmatrix}, \quad j = 1, \ldots, N,\tag{14}
$$

but

$$
\begin{pmatrix} 0 & 1 \\ 1 & \lambda\_m \end{pmatrix}^j = \begin{pmatrix} F\_{j-1}(\lambda\_m) & F\_j(\lambda\_m) \\ F\_j(\lambda\_m) & F\_{j+1}(\lambda\_m) \end{pmatrix} \tag{15}
$$

and then,


$$
\begin{pmatrix} e\_{m,j} \\ e\_{m,j+1} \end{pmatrix} = \begin{pmatrix} F\_{j-1}(\lambda\_m) & F\_j(\lambda\_m) \\ F\_j(\lambda\_m) & F\_{j+1}(\lambda\_m) \end{pmatrix} \begin{pmatrix} e\_{m,0} \\ e\_{m,1} \end{pmatrix}, \quad j = 1, \ldots, N. \tag{16}
$$

i.e., the j-th component of the m-th eigenvector is

$$e\_{m,j} = \left[ F\_j(\lambda\_m) + \frac{F\_{j-1}(\lambda\_m)}{z} \right] e\_{m,1} \text{ for } j = 1, 2, \dots, N. \tag{17}$$
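Relation (15), and with it the component formula (17), can be checked mechanically. A short sketch (names are ours):

```python
import numpy as np

def F(j, x):
    """Fibonacci polynomial F_j(x) from the recursion (7)."""
    if j == 0:
        return 0.0
    f_prev, f = 0.0, 1.0
    for _ in range(j - 1):
        f_prev, f = f, x * f + f_prev
    return f

lam = 0.3  # a stand-in value for lambda_m
T = np.array([[0.0, 1.0], [1.0, lam]])
for j in range(1, 7):
    expected = np.array([[F(j - 1, lam), F(j, lam)],
                         [F(j, lam), F(j + 1, lam)]])
    assert np.allclose(np.linalg.matrix_power(T, j), expected)  # Eq. (15)
```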

For the case of the eigenvalue $\lambda_1 = z - 1/z$, we can rewrite Eq. (17) by noticing that if we let $x = w - w^{-1}$ ($w \in \mathbb{C}$), then $F_n(x) + F_{n-1}(x)/w = w^{n-1}$ for $n = 1, 2, \dots$. This can be proved by induction as follows. For $n = 1$, the equality is immediately verified. Suppose now that it holds for all $n \le k$, and compute the left-hand side for $k+1$. Substituting $F_{k-1} = w\left(w^{k-1} - F_k\right)$, which follows from the induction hypothesis, and using the recursion relationship of the Fibonacci polynomials, we obtain

$$F\_{k+1}(\mathbf{x}) + \frac{F\_k(\mathbf{x})}{w} = \mathbf{x}F\_k(\mathbf{x}) + F\_{k-1}(\mathbf{x}) + \frac{F\_k(\mathbf{x})}{w}$$

$$= \mathbf{x}F\_k(\mathbf{x}) + w^k - wF\_k(\mathbf{x}) + \frac{F\_k(\mathbf{x})}{w} \tag{18}$$

$$= w^k.$$
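The identity proved in (18), namely $F_n(x) + F_{n-1}(x)/w = w^{n-1}$ for $x = w - w^{-1}$, can also be spot-checked numerically (a sketch; `F` is our helper implementing the recursion (7)):

```python
def F(j, x):
    """Fibonacci polynomial F_j(x) from the recursion (7)."""
    if j == 0:
        return 0.0
    f_prev, f = 0.0, 1.0
    for _ in range(j - 1):
        f_prev, f = f, x * f + f_prev
    return f

w = 1.37                # any nonzero w; a complex value works equally well
x = w - 1.0 / w
for n in range(1, 10):
    assert abs(F(n, x) + F(n - 1, x) / w - w ** (n - 1)) < 1e-9
```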

Therefore, according to Eqs. (17) and (18), the eigenvector for the eigenvalue $\lambda_1 = 2\sinh(v\Delta)$ takes the form $\mathbf{e}_1 = c\left(1, z, \dots, z^{N-1}\right)^T$, where $c$ is a normalization constant. We can take advantage of the normalization constant and write

$$\mathbf{e}\_1 = c\left(e^{vq\_1}, e^{vq\_2}, \dots, e^{vq\_N}\right)^T,\tag{19}$$

with eigenvalue $\lambda_1 = v$ (in the original scaling, i.e., the eigenvalue of the matrix $\mathbf{D}_N$), where $q_1$ is an arbitrary constant and $q_j = q_1 + (j-1)\Delta$. This means that the exponential function is an eigenvector of the derivative matrix, which is a global representation of the derivative on the partition $\{q_1, q_2, \dots, q_N\}$. Recall that the exponential function is an eigenfunction of the derivative for functions of a continuous variable.

The remaining eigenvectors have eigenvalues equal to the negatives of the roots of the $N$-th Fibonacci polynomial, $\lambda_m = -x_m$, $m = 1, 2, \dots, N-1$, and have the form

$$\mathbf{e}\_m = c \begin{pmatrix} 1 \\ F\_2(\lambda\_m) + e^{-v\Delta} \\ F\_3(\lambda\_m) + e^{-v\Delta} F\_2(\lambda\_m) \\ \vdots \\ F\_{N-1}(\lambda\_m) + e^{-v\Delta} F\_{N-2}(\lambda\_m) \\ e^{-v\Delta} F\_{N-1}(\lambda\_m) \end{pmatrix} \tag{20}$$

The vector that we will be interested in is the one corresponding to the exponential function, Eq. (19), with eigenvalue $v$.
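Putting everything together, the central claim of this section, that the sampled exponential is an exact eigenvector of $\mathbf{D}_N$ with eigenvalue $v$, can be confirmed in a few lines (a sketch with our own helper names):

```python
import numpy as np

def chi(v, delta):
    """chi(v, Delta) = sinh(v Delta)/v, with value Delta at v = 0."""
    return delta if v == 0 else np.sinh(v * delta) / v

def D(N, v, delta):
    """Derivative matrix D_N of Eqs. (1)-(3)."""
    z = np.exp(v * delta)
    M = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)).astype(complex)
    M[0, 0] = -1.0 / z
    M[-1, -1] = z
    return M / (2.0 * chi(v, delta))

N, v, delta, q1 = 8, 0.5, 0.1, -0.3
q = q1 + delta * np.arange(N)   # partition q_j = q_1 + (j - 1) Delta
e1 = np.exp(v * q)              # sampled exponential, Eq. (19) with c = 1
# The eigen-relation holds to machine precision, not merely to O(Delta^2):
assert np.allclose(D(N, v, delta) @ e1, v * e1, atol=1e-12)
```

The corner entries of (1) are exactly what makes this relation exact at the boundary rows as well, rather than only in the interior.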
