**1. Introduction**

Inverse problems appear in a wide variety of disciplines, and they may be of many different kinds. Inverse eigenvalue problems, for instance, constitute an important subclass of inverse problems arising in mathematical modeling and parameter identification; a simple application is the construction of Leontief models in economics. An inverse eigenvalue problem asks for the construction of a matrix from prescribed spectral information. Associated with any inverse eigenvalue problem, there are two important issues: the existence of a solution and the construction of a solution matrix. The structure of the solution matrix (usually it is not unique) plays a fundamental role in the study of inverse eigenvalue problems. It is necessary to formulate the problem properly; otherwise, it could become trivial. Chu and Golub [1] say that "an inverse eigenvalue problem should always be a structured problem." In this chapter, we study the *Nonnegative Inverse Elementary Divisors Problem* (hereafter, the *NIEDP*), which is the problem of finding necessary and sufficient conditions for the existence of a nonnegative matrix with prescribed elementary divisors.

© 2016 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Let *A* be an *n* × *n* complex matrix, and let

$$J\left(A\right) = S^{-1}AS = \begin{bmatrix} J\_{n\_1}(\lambda\_1) & & & \\ & J\_{n\_2}(\lambda\_2) & & \\ & & \ddots & \\ & & & J\_{n\_k}(\lambda\_k) \end{bmatrix}$$

be its *Jordan canonical form* (hereafter, *JCF*). The *n*<sub>*i*</sub> × *n*<sub>*i*</sub> submatrices

$$J\_{n\_i}\left(\lambda\_i\right) = \begin{bmatrix} \lambda\_i & 1 & & \\ & \lambda\_i & \ddots & \\ & & \ddots & 1 \\ & & & \lambda\_i \end{bmatrix}, \quad i = 1, 2, \dots, k,$$

are called the *Jordan blocks* of *J*(*A*). The *elementary divisors* of *A* are the polynomials (*λ* − *λ*<sub>*i*</sub>)<sup>*n*<sub>*i*</sub></sup>, that is, the characteristic polynomials of *J*<sub>*n*<sub>*i*</sub></sub>(*λ*<sub>*i*</sub>), *i* = 1, …, *k*. The *inverse elementary divisors problem* (*IEDP*) is the problem of determining necessary and sufficient conditions under which the polynomials (*λ* − *λ*<sub>1</sub>)<sup>*n*<sub>1</sub></sup>, (*λ* − *λ*<sub>2</sub>)<sup>*n*<sub>2</sub></sup>, …, (*λ* − *λ*<sub>*k*</sub>)<sup>*n*<sub>*k*</sub></sup>, with *n*<sub>1</sub> + ⋯ + *n*<sub>*k*</sub> = *n*, are the elementary divisors of an *n* × *n* matrix *A*. It is clear that for any arbitrarily prescribed Jordan canonical form *J* and any nonsingular matrix *S*, there exists a matrix *A* = *SJS*<sup>−1</sup> with *J* as its *JCF*. For the problem to be meaningful, the matrix *A* is required to have a particular structure. When *A* is required to be an entrywise nonnegative matrix, the problem is called the *nonnegative inverse elementary divisors problem* (*NIEDP*) (see [2–4]). The *NIEDP* is strongly related to another inverse problem, the *nonnegative inverse eigenvalue problem* (hereafter, the *NIEP*), which is the problem of determining necessary and sufficient conditions for a list of complex numbers *Λ* = {*λ*<sub>1</sub>, *λ*<sub>2</sub>, …, *λ*<sub>*n*</sub>} to be the spectrum of an *n* × *n* entrywise nonnegative matrix. If there exists a nonnegative matrix *A* with spectrum *Λ*, we say that *Λ* is realizable and that *A* is the realizing matrix. The *NIEDP* contains the *NIEP*, and the two problems are equivalent when the prescribed eigenvalues are all distinct. Both problems remain unsolved (the *NIEP* is solved only for *n* ≤ 4). A number of sufficient conditions, or realizability criteria, for the *NIEP* are known in the literature (see [5] and the references therein). In contrast, only a few works on the *NIEDP* are known (see [3, 4, 6–12]). According to Minc [2], the *NIEDP* seeks to answer the question: which matrices are similar to a nonnegative matrix?
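To make the role of *S* concrete, here is a small numerical sketch (ours, in Python with NumPy; not part of the original chapter) that builds a matrix *A* = *SJS*<sup>−1</sup> with prescribed elementary divisors (*λ* − 2)<sup>2</sup> and (*λ* − 5), using an arbitrarily chosen nonsingular *S*, and confirms the Jordan structure via ranks:

```python
import numpy as np

# Prescribed elementary divisors (lambda - 2)^2 and (lambda - 5),
# i.e. the Jordan form J = diag(J_2(2), J_1(5)).
J = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])

# Any nonsingular S (this one is an arbitrary example) gives a matrix
# A = S J S^{-1} with J as its JCF; without a structural requirement
# on A, the inverse problem is trivial.
S = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
A = S @ J @ np.linalg.inv(S)

# rank(A - 2I) = 2 and rank((A - 2I)^2) = 1 confirm a single Jordan
# block of size 2 for the eigenvalue 2.
N = A - 2.0 * np.eye(3)
print(np.linalg.matrix_rank(N), np.linalg.matrix_rank(N @ N))
```

Different choices of *S* change the entries of *A* (and, in general, its sign pattern) while leaving the elementary divisors fixed, which is exactly why the *NIEDP* asks for a nonnegative realization.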

A matrix *A* = (*a*<sub>*ij*</sub>)<sub>*i*,*j*=1</sub><sup>*n*</sup> is said to have *constant row sums* if all its rows sum to the same constant, say *α*, i.e.,

$$\sum\_{j=1}^{n} a\_{ij} = \alpha, \quad i = 1, \dots, n.$$

The set of all matrices with constant row sums equal to *α* is denoted by CS<sub>*α*</sub>. It is clear that any matrix in CS<sub>*α*</sub> has the eigenvector **e** = (1, 1, …, 1)<sup>*T*</sup> corresponding to the eigenvalue *α*. A nonnegative matrix *A* is called *stochastic* if *A* ∈ CS<sub>1</sub>, and it is called *doubly stochastic* if *A*, *A*<sup>*T*</sup> ∈ CS<sub>1</sub>. For simplicity, we shall call a nonnegative matrix *A* ∈ CS<sub>*α*</sub> *nonnegative generalized stochastic*, and a nonnegative matrix *A* with *A*, *A*<sup>*T*</sup> ∈ CS<sub>*α*</sub> *nonnegative generalized doubly stochastic*. The relevance of matrices with constant row sums is due to the well-known fact that if *Λ* = {*λ*<sub>1</sub>, *λ*<sub>2</sub>, …, *λ*<sub>*n*</sub>} is the spectrum of a nonnegative matrix, then *Λ* is also the spectrum of a nonnegative matrix with constant row sums equal to its Perron eigenvalue (spectral radius).
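These definitions are easy to check numerically. The following sketch (our own Python/NumPy illustration, with an arbitrarily chosen matrix) verifies membership in CS<sub>*α*</sub> and the associated eigenpair:

```python
import numpy as np

# A nonnegative matrix whose rows all sum to alpha = 6, i.e. A in CS_6;
# the entries are an arbitrary example.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 0.0, 2.0],
              [2.0, 2.0, 2.0]])
e = np.ones(3)
alpha = 6.0

# e is an eigenvector of A for the eigenvalue alpha
print(A @ e)  # each entry equals 6

# dividing by alpha gives a stochastic matrix, i.e. a member of CS_1
P = A / alpha
print(P.sum(axis=1))  # each row sums to 1
```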

Denote by **e**<sub>*k*</sub> the vector with one in the *k*-th position and zeros elsewhere, and let *S* be a nonsingular matrix such that *S*<sup>−1</sup>*AS* = *J*(*A*) is the *JCF* of *A*. If *A* ∈ CS<sub>*λ*<sub>1</sub></sub>, then *S* can be chosen so that *S***e**<sub>1</sub> = **e**, and in this case, it is easy to see that the rows of *S*<sup>−1</sup> = (*ŝ*<sub>*ij*</sub>) satisfy:

$$\sum\_{j=1}^{n} \hat{s}\_{1j} = 1 \quad \text{and} \quad \sum\_{j=1}^{n} \hat{s}\_{ij} = 0, \ i = 2, \dots, n. \tag{1}$$

If *T* is an *n* × *n* matrix of the form

$$T = \begin{bmatrix} \lambda\_1 & \* & \cdots & \* \\ 0 & \* & \cdots & \* \\ \vdots & \vdots & & \vdots \\ 0 & \* & \cdots & \* \end{bmatrix}, \text{ and } \ S = \begin{bmatrix} 1 & s\_{12} & \cdots & s\_{1n} \\ 1 & s\_{22} & \cdots & s\_{2n} \\ \vdots & \vdots & & \vdots \\ 1 & s\_{n2} & \cdots & s\_{nn} \end{bmatrix}$$

is nonsingular, then *STS*<sup>−1</sup>**e** = *λ*<sub>1</sub>**e**, that is, *STS*<sup>−1</sup> ∈ CS<sub>*λ*<sub>1</sub></sub>. We shall denote by *E*<sub>*ij*</sub> the *n* × *n* matrix with 1 in the (*i*, *j*)-th position and zeros elsewhere. The following simple perturbation allows us to join two or more Jordan blocks corresponding to the same eigenvalue *λ*<sub>*p*</sub> into one Jordan block of bigger size: let *A* be an *n* × *n* matrix with *JCF J*(*A*), and let the Jordan blocks *J*<sub>*m*<sub>1</sub></sub>(*λ*<sub>*p*</sub>), *J*<sub>*m*<sub>2</sub></sub>(*λ*<sub>*p*</sub>), …, *J*<sub>*m*<sub>*q*</sub></sub>(*λ*<sub>*p*</sub>) correspond to the eigenvalue *λ*<sub>*p*</sub>. Let *ξ* = ∑<sub>*j*=1</sub><sup>*p*−1</sup> *m*<sub>*j*</sub>, 1 ≤ *p* ≤ *k*, with *ξ* = 0 if *p* = 1, and let *E* = ∑<sub>*i*∈*K*</sub> *E*<sub>*i*,*i*+1</sub>. Then, by using the perturbation *J*(*A*) + *E*, with

$$K = \left\{ \xi + m\_1, \xi + m\_1 + m\_2, \dots, \xi + \sum\_{i=1}^{q-1} m\_i \right\},$$

we obtain a Jordan block of bigger size *J*<sub>*γ*</sub>(*λ*<sub>*p*</sub>), corresponding to the elementary divisor (*λ* − *λ*<sub>*p*</sub>)<sup>*γ*</sup>, *γ* = *m*<sub>1</sub> + *m*<sub>2</sub> + ⋯ + *m*<sub>*q*</sub>.
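The block-joining perturbation can be verified numerically. The sketch below (ours, in Python/NumPy) takes *J*(*A*) = diag(*J*<sub>2</sub>(3), *J*<sub>2</sub>(3)), so that *ξ* = 0 and *K* = {2}, and checks that *J*(*A*) + *E*<sub>2,3</sub> is the single block *J*<sub>4</sub>(3):

```python
import numpy as np

def jordan_block(lam, n):
    """n x n Jordan block with eigenvalue lam."""
    return lam * np.eye(n) + np.eye(n, k=1)

# J(A) = diag(J_2(3), J_2(3)): two blocks for lambda_p = 3,
# so m1 = m2 = 2, xi = 0 and K = {xi + m1} = {2}.
J = np.zeros((4, 4))
J[:2, :2] = jordan_block(3.0, 2)
J[2:, 2:] = jordan_block(3.0, 2)

E = np.zeros((4, 4))
E[1, 2] = 1.0  # E_{2,3} in the 1-based indexing of the text

JE = J + E  # equals the single Jordan block J_4(3)

# (JE - 3I)^k has rank 4 - k, exactly the rank profile of a single
# 4 x 4 Jordan block for the eigenvalue 3.
N = JE - 3.0 * np.eye(4)
ranks = [np.linalg.matrix_rank(np.linalg.matrix_power(N, k))
         for k in range(1, 5)]
print(ranks)  # [3, 2, 1, 0]
```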

Observe that if *E* = ∑<sub>*i*∈*K*</sub> *E*<sub>*i*,*i*+1</sub> with *K* ⊂ {1, 2, …, *n* − 1}, and *A* is an *n* × *n* complex matrix with *JCF J*(*A*) = *S*<sup>−1</sup>*AS*, then for an appropriate set *K*,

$$J\left(A\right) + E = S^{-1}AS + E = S^{-1}\left(A + SES^{-1}\right)S$$

is the *JCF* of *A* + *SES*<sup>−1</sup>. If *A* ∈ CS<sub>*λ*<sub>1</sub></sub> and *S* = [**e** | ∗ | ⋯ | ∗], then

$$\left(A + SES^{-1}\right) \in \text{CS}\_{\lambda\_1}.$$

The first works on the *NIEDP* are due to H. Minc [3, 4]. Minc studied the problem for non‐ negative and doubly stochastic matrices, modulo the *NIEP*. In particular, he proved the following two results, which we collect as:

**Theorem 1** *Minc* [3] *Let Λ* = {*λ*<sub>1</sub>, *λ*<sub>2</sub>, …, *λ*<sub>*n*</sub>} *be a list of complex numbers which is realizable by a diagonalizable positive (diagonalizable positive doubly stochastic) matrix A. Then, for each JCF J*<sub>*Λ*</sub> *associated with Λ, there exists a positive (positive doubly stochastic) matrix B with the same spectrum as A and with JCF J*(*B*) = *J*<sub>*Λ*</sub>.

According to Minc, the positivity condition is essential in his proof, and it is not known if the result holds without this condition (see [2]). Specifically, it is not known: *i*) whether for every positive matrix, there exists a diagonalizable positive matrix with the same spectrum, *ii*) whether for every nonnegative diagonalizable matrix with spectrum *Λ* = {*λ*1, …, *λn*}, there exists a nonnegative matrix for each *JCF* associated with *Λ*.

Usually, in the *NIEDP* we are given a list of complex numbers *Λ* = {*λ*<sub>1</sub>, *λ*<sub>2</sub>, …, *λ*<sub>*n*</sub>}, from which we want to construct a nonnegative or positive matrix with spectrum *Λ* and with prescribed elementary divisors. In this sense, we mention two matrix perturbation results, which have been employed in connection with the *NIEP* and the *NIEDP* to derive sufficient conditions for the existence and construction of nonnegative matrices with prescribed spectrum and prescribed elementary divisors. The first result, due to Brauer [13, Theorem 27], shows how to change a single eigenvalue of an *n* × *n* matrix, via a rank-one perturbation, without changing any of the remaining *n* − 1 eigenvalues. The second result, due to R. Rado and introduced by Perfect in [14], is an extension of Brauer's result: it shows how to change *r* eigenvalues of an *n* × *n* matrix *A*, via a rank-*r* perturbation, without changing any of the remaining *n* − *r* eigenvalues (see [15] for how Rado's result is applied to the *NIEP*). The proof of Brauer's result given here is due to R. Reams [16].

**Theorem 2** *Brauer* [13] *Let A be an arbitrary n* × *n matrix with eigenvalues λ*<sub>1</sub>, …, *λ*<sub>*n*</sub>. *Let* **v** = (*v*<sub>1</sub>, …, *v*<sub>*n*</sub>)<sup>*T*</sup> *be an eigenvector of A associated with the eigenvalue λ*<sub>*k*</sub>, *and let* **q** = (*q*<sub>1</sub>, …, *q*<sub>*n*</sub>)<sup>*T*</sup> *be any n-dimensional vector. Then the matrix A* + **vq**<sup>*T*</sup> *has eigenvalues λ*<sub>1</sub>, …, *λ*<sub>*k*−1</sub>, *λ*<sub>*k*</sub> + **v**<sup>*T*</sup>**q**, *λ*<sub>*k*+1</sub>, …, *λ*<sub>*n*</sub>.

**Proof**. [16] By Schur's triangularization theorem, there exists an *n* × *n* nonsingular matrix *U* such that

$$U^{-1}AU = \begin{bmatrix} \lambda\_1 & \* & \dots & \* \\ & \lambda\_2 & \ddots & \vdots \\ & & \ddots & \* \\ & & & \lambda\_n \end{bmatrix}$$

is an upper triangular matrix, with **v** being the first column of *U*. Then,

$$U^{-1}\left(A + \mathbf{v}\mathbf{q}^{T}\right)U = U^{-1}AU + U^{-1}\mathbf{v}\mathbf{q}^{T}U = U^{-1}AU + \mathbf{e}\_1\mathbf{q}^{T}U = \begin{bmatrix} \lambda\_1 + \mathbf{q}^{T}\mathbf{v} & \* & \cdots & \* \\ & \lambda\_2 & \ddots & \vdots \\ & & \ddots & \* \\ & & & \lambda\_n \end{bmatrix},$$

and the result follows.■
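As a sanity check of Theorem 2, the following sketch (ours, in Python/NumPy, with an arbitrarily chosen *A* and **q**) perturbs the Perron eigenvalue of a symmetric nonnegative matrix:

```python
import numpy as np

# A = 3I + ee^T has eigenvalues 6, 3, 3, and v = e is an
# eigenvector for the eigenvalue 6.
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 4.0, 1.0],
              [1.0, 1.0, 4.0]])
v = np.ones(3)
q = np.array([0.5, -0.25, 0.75])  # arbitrary vector, with v^T q = 1

# Brauer: A + v q^T replaces 6 by 6 + v^T q = 7 and keeps 3, 3.
B = A + np.outer(v, q)
print(np.sort(np.linalg.eigvals(B).real))  # approximately [3, 3, 7]
```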

**Theorem 3** *Rado* [14] *Let A be an arbitrary n* × *n matrix with eigenvalues λ*<sub>1</sub>, …, *λ*<sub>*n*</sub> *and let Ω* = diag{*λ*<sub>1</sub>, …, *λ*<sub>*r*</sub>} *for some r* ≤ *n. Let X be an n* × *r matrix of rank r whose columns* **x**<sub>1</sub>, **x**<sub>2</sub>, …, **x**<sub>*r*</sub> *satisfy A***x**<sub>*i*</sub> = *λ*<sub>*i*</sub>**x**<sub>*i*</sub>, *i* = 1, …, *r. Let C be an arbitrary r* × *n matrix. Then the matrix A* + *XC has eigenvalues μ*<sub>1</sub>, …, *μ*<sub>*r*</sub>, *λ*<sub>*r*+1</sub>, …, *λ*<sub>*n*</sub>, *where μ*<sub>1</sub>, …, *μ*<sub>*r*</sub> *are the eigenvalues of the matrix Ω* + *CX*.

**Proof**. Let *S* = [*X* | *Y*] be a nonsingular matrix, with *U* denoting the first *r* rows and *V* the last *n* − *r* rows of *S*<sup>−1</sup>. Then *UX* = *I*<sub>*r*</sub>, *VY* = *I*<sub>*n*−*r*</sub>, *VX* = 0, and *UY* = 0. Partition *C* = [*C*<sub>1</sub> | *C*<sub>2</sub>], and partition *X* and *Y* conformally into top blocks *X*<sub>1</sub>, *Y*<sub>1</sub> (the first *r* rows) and bottom blocks *X*<sub>2</sub>, *Y*<sub>2</sub>. Then, since *AX* = *XΩ*,

$$S^{-1}AS = \begin{bmatrix} U \\ V \end{bmatrix} \begin{bmatrix} X\Omega \mid AY \end{bmatrix} = \begin{bmatrix} \Omega & UAY \\ 0 & VAY \end{bmatrix}$$

and

$$S^{-1}XCS = \begin{bmatrix} I\_r \\ 0 \end{bmatrix} \begin{bmatrix} C\_1 \mid C\_2 \end{bmatrix} S = \begin{bmatrix} C\_1 & C\_2 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} X\_1 & Y\_1 \\ X\_2 & Y\_2 \end{bmatrix} = \begin{bmatrix} CX & CY \\ 0 & 0 \end{bmatrix}.$$

Thus,

$$S^{-1} \left( A + XC \right) S = S^{-1}AS + S^{-1}XCS = \begin{bmatrix} \Omega + CX & UAY + CY \\ 0 & VAY \end{bmatrix},$$

and, counting multiplicities, we have *σ*(*A* + *XC*) = *σ*(*Ω* + *CX*) ∪ (*σ*(*A*) ∖ *σ*(*Ω*)).■
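Theorem 3 can likewise be checked numerically. In the sketch below (ours, in Python/NumPy, with arbitrary data), *A* is diagonal, *X* collects eigenvectors for *λ*<sub>1</sub>, *λ*<sub>2</sub>, and *C* is arbitrary:

```python
import numpy as np

A = np.diag([1.0, 2.0, 3.0, 4.0])   # eigenvalues 1, 2, 3, 4
X = np.eye(4)[:, :2]                # eigenvector columns for 1 and 2
Omega = np.diag([1.0, 2.0])
C = np.array([[0.0, 3.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 2.0]])  # arbitrary 2 x 4 matrix

# Rado: sigma(A + XC) = sigma(Omega + CX) together with 3 and 4
B = A + X @ C
mu = np.linalg.eigvals(Omega + C @ X)
print(np.sort(np.linalg.eigvals(B).real))
print(np.sort(mu.real))  # the two changed eigenvalues
```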

The following result in [6], which will be used frequently later, describes the *JCF* of the Brauer perturbation *A* + **eq**<sup>*T*</sup>.

**Lemma 1** [6] *Let A* ∈ CS<sub>*λ*<sub>1</sub></sub> *with JCF*

$$J\left(A\right) = S^{-1}AS = \operatorname{diag}\left(J\_{n\_1}\left(\lambda\_1\right), J\_{n\_2}\left(\lambda\_2\right), \dots, J\_{n\_k}\left(\lambda\_k\right)\right).$$

*Let* **q**<sup>*T*</sup> = (*q*<sub>1</sub>, …, *q*<sub>*n*</sub>) *with λ*<sub>1</sub> + ∑<sub>*i*=1</sub><sup>*n*</sup> *q*<sub>*i*</sub> ≠ *λ*<sub>*i*</sub>, *i* = 2, …, *n. Then the JCF of A* + **eq**<sup>*T*</sup> *is J*(*A*) + (∑<sub>*i*=1</sub><sup>*n*</sup> *q*<sub>*i*</sub>)*E*<sub>11</sub>. *In particular, if* ∑<sub>*i*=1</sub><sup>*n*</sup> *q*<sub>*i*</sub> = 0, *then A and A* + **eq**<sup>*T*</sup> *are similar.*
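The similarity statement of Lemma 1 can be illustrated as follows (our Python/NumPy sketch; the matrix is an arbitrary nonnegative member of CS<sub>6</sub>, and **q** sums to zero):

```python
import numpy as np

# An arbitrary nonnegative matrix with row sums 6, i.e. A in CS_6;
# its eigenvalues are 6, -1, -2.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 0.0, 2.0],
              [2.0, 2.0, 2.0]])
e = np.ones(3)
q = np.array([1.0, -3.0, 2.0])  # sum(q) = 0

# By Lemma 1, A + e q^T is similar to A: same spectrum, same JCF.
B = A + np.outer(e, q)
print(np.sort(np.linalg.eigvals(A).real))  # approximately [-2, -1, 6]
print(np.sort(np.linalg.eigvals(B).real))  # the same values
```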
