**3.1. Basic Theory**

In order to solve Problem A, a discussion of the Cayley–Hamilton theorem is required. The Cayley–Hamilton theorem states that every square matrix satisfies its own characteristic equation: if *p*(*λ*) = det(*λI* − *A*) is the characteristic polynomial of a square matrix *A*, then substituting *A* for *λ* in the polynomial gives the zero matrix, *p*(*A*) = **0** [13].
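This statement is easy to check numerically. The sketch below is illustrative only (the matrix *A* and the use of NumPy are assumptions, not part of the original text): it builds the characteristic polynomial of an arbitrary 2 × 2 matrix and evaluates that polynomial at the matrix itself.

```python
import numpy as np

# Arbitrary illustrative matrix (not from the text).
A = np.array([[2.0, 1.0],
              [4.0, 3.0]])

# np.poly returns the characteristic polynomial coefficients, highest
# degree first: [1, -tr(A), det(A)] for a 2x2 matrix.
coeffs = np.poly(A)

# Evaluate the matrix polynomial p(A) by Horner's rule.
n = A.shape[0]
p_of_A = np.zeros_like(A)
for c in coeffs:
    p_of_A = p_of_A @ A + c * np.eye(n)

# By the Cayley-Hamilton theorem, p(A) is the zero matrix.
print(np.allclose(p_of_A, np.zeros((n, n))))  # True
```

The same loop works unchanged for any square matrix size, since Horner's rule only relies on the coefficient list that `np.poly` produces.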

The Cayley–Hamilton theorem can be useful in inverse eigenvalue problems beyond the typical statement that a square matrix satisfies its own characteristic equation. Once the characteristic polynomial of a system is found from the specified spectral data, the Cayley–Hamilton theorem can be used to find an unknown matrix *A*, which represents the system. A set of unknown entries of the matrix *A* can be found from the set of equations that arise from the Cayley–Hamilton theorem. For example, suppose that a 2 × 2 matrix *A* is given by the form:

$$A = \begin{bmatrix} a\_{11} & a\_{12} \\ a\_{21} & a\_{22} \end{bmatrix} \tag{1}$$

In order to solve the inverse eigenvalue problem, the entries of matrix *A* can be populated using the Cayley–Hamilton theorem in conjunction with the set of specified eigenvalues. In order to demonstrate the process, suppose that the specified eigenvalues are given by −2 and −3. The characteristic polynomial can then be constructed as

$$p\left(\lambda\right) = \left(\lambda + 2\right)\left(\lambda + 3\right) = \lambda^2 + 5\lambda + 6 \tag{2}$$

Using the Cayley–Hamilton theorem, *λ* in equation (2) is replaced by *A* from equation (1) so that

$$p\left(A\right) = A^2 + 5A + 6I = \mathbf{0} \tag{3}$$

where **I** is the identity matrix and **0** is the zero matrix. Expressing equation (3) term-by-term gives four equations with four unknowns, specifically

$$\begin{bmatrix} a\_{11}^2 + a\_{12}a\_{21} + 5a\_{11} + 6 & a\_{11}a\_{12} + a\_{12}a\_{22} + 5a\_{12} \\ a\_{11}a\_{21} + a\_{21}a\_{22} + 5a\_{21} & a\_{22}^2 + a\_{12}a\_{21} + 5a\_{22} + 6 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} \tag{4}$$

However, the four equations are not all independent; a discussion of this point is found in section 4.1. Solving the set of equations in equation (4) yields expressions for two of the entries in terms of the other two:

$$\left[a\_{11} = -a\_{22} - 5, \ a\_{12} = -\frac{a\_{22}^2 + 5a\_{22} + 6}{a\_{21}}\right] \tag{5}$$

Any values can be assigned to *a*21 and *a*22 in equation (5), and the matrix *A* will have the desired eigenvalues given in equation (2). Consequently, Problem A has been solved, although it is clear that many solutions exist. This solution is particularly useful in solving inverse eigenvalue problems because it gives a range of *A* matrix values for which a system produces the same eigenvalues. The factors limiting the solutions as given in equation (5) are then based on the physical limits and fixed parameters of the system. Physical limits and fixed parameters serve to produce a set of solutions that are mathematically as well as physically possible.
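The parametric solution in equation (5) can be exercised directly. In the sketch below, the free entries *a*21 and *a*22 are chosen arbitrarily for illustration (any nonzero *a*21 works); the resulting matrix should always have the specified eigenvalues −2 and −3.

```python
import numpy as np

# Arbitrary choices for the free entries (a21 must be nonzero).
a21, a22 = 1.0, 4.0

# Dependent entries from equation (5).
a11 = -a22 - 5.0
a12 = -(a22**2 + 5.0 * a22 + 6.0) / a21

A = np.array([[a11, a12],
              [a21, a22]])

# Whatever the free choices, the eigenvalues of A are -3 and -2.
print(np.sort(np.linalg.eigvals(A)))
```

For these choices the dependent entries come out as *a*11 = −9 and *a*12 = −42, so tr(*A*) = −5 and det(*A*) = 6, matching the coefficients of equation (2).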

#### **3.2. Generalized Cayley–Hamilton theorem**

Problem B is related to Problem A but takes on a more general form, which is conducive to real physical systems since many physical problems can be written as generalized eigenvalue problems. Although the characteristic polynomial for the generalized eigenvalue problem is similar to the single matrix case, it now involves two matrices rather than one. Therefore, the lesser-known generalized Cayley–Hamilton theorem of Chang and Chen [14] must be used.

The generalized Cayley–Hamilton theorem is modified to include two square matrices, *K* and *M*. The nomenclature is used as a reminder that in many engineering applications the matrices represent mass (*M*) and stiffness (*K*) matrices. For the generalized eigenvalue problem, the characteristic polynomial takes the form *p*(*λ*) = det(*K* − *λM*). Substituting *K* and *M* for *λ* in the characteristic polynomial yields the following relationship, which must be satisfied.

$$p\left(K,M\right) = c\_n \left(M^{-1}K\right)^n + c\_{n-1} \left(M^{-1}K\right)^{n-1} + \dots + c\_1 \left(M^{-1}K\right) + c\_0 I = \mathbf{0} \tag{6}$$

where *c<sub>n</sub>* is the coefficient of *λ<sup>n</sup>* in *p*(*λ*). Equation (6) is valid as long as *M* is non-singular.

If the matrices *K* and *M* commute (i.e. *KM* = *MK*), then the generalized Cayley–Hamilton theorem can be written as

$$p\left(K,M\right) = c\_n K^n + c\_{n-1} K^{n-1}M + \dots + c\_1 KM^{n-1} + c\_0 M^n = \mathbf{0} \tag{7}$$

where, apart from commutativity, no restrictions (such as non-singularity) are placed on matrix *M*. Solving these equations leads to the solution of Problem B.
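The relationship in equation (6) can be checked numerically for a non-singular *M*. The matrices below are arbitrary illustrations (assumptions, not from the text); the coefficients of *p*(*λ*) = det(*K* − *λM*) are obtained from the characteristic polynomial of *M*<sup>−1</sup>*K*, scaled by det(*M*).

```python
import numpy as np

# Arbitrary illustrative matrices; M is non-singular (det M = 2).
K = np.array([[4.0, 1.0],
              [2.0, 3.0]])
M = np.array([[2.0, 0.0],
              [1.0, 1.0]])

MinvK = np.linalg.solve(M, K)  # M^{-1} K without forming the inverse

# Coefficients of p(lambda) = det(K - lambda*M), using
# det(K - lambda*M) = det(M) * (-1)^n * det(lambda*I - M^{-1}K);
# np.poly gives the monic characteristic polynomial of M^{-1}K.
n = K.shape[0]
coeffs = np.linalg.det(M) * (-1.0) ** n * np.poly(MinvK)

# Evaluate c_n (M^{-1}K)^n + ... + c_1 (M^{-1}K) + c_0 I by Horner's rule.
p_KM = np.zeros_like(K)
for c in coeffs:
    p_KM = p_KM @ MinvK + c * np.eye(n)

print(np.allclose(p_KM, np.zeros((n, n))))  # True
```

Since the scalar factor det(*M*) does not affect whether the result is the zero matrix, this is exactly the condition stated in equation (6).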

#### **3.3. Numerical example**

Once again, a simple numerical example is used to demonstrate concepts. Using the same desired eigenvalues as for the previous example gives the characteristic polynomial given in equation (2). This time, the modified generalized Cayley–Hamilton theorem is used to form an equation for unknown matrices *K* and *M* so that

$$p\left(K,M\right) = \left(M^{-1}K\right)^2 + 5\left(M^{-1}K\right) + 6I = \mathbf{0} \tag{8}$$

Assuming that the *K* and *M* matrices have a similar form to that of *A* in equation (1), then equation (8) can be expanded to give four equations (from the four matrix entries) with eight unknowns (four unknown entries in each of the two matrices). Solving these equations produces several results, one of which can be written as

$$\begin{bmatrix} k\_{11} = \frac{k\_{12}k\_{21}k\_{22} + 5k\_{12}k\_{21}m\_{22} + 6k\_{12}m\_{21}m\_{22} + 6k\_{21}m\_{12}m\_{22} - 6k\_{22}m\_{12}m\_{21}}{\left(k\_{22} + 2m\_{22}\right)\left(k\_{22} + 3m\_{22}\right)}\\ m\_{11} = \frac{k\_{21}k\_{22}m\_{12} + k\_{12}k\_{22}m\_{21} - k\_{12}k\_{21}m\_{22} + 5k\_{22}m\_{12}m\_{21} + 6m\_{12}m\_{21}m\_{22}}{\left(k\_{22} + 2m\_{22}\right)\left(k\_{22} + 3m\_{22}\right)}\end{bmatrix} \tag{9}$$

Once again it becomes clear that only two of the unknowns are dependent, and selecting any values for *k*12, *m*12, *k*21, *m*21, *k*22, *m*22 will result in *K* and *M* matrices that produce a system with the desired eigenvalues.
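The construction in equation (9) can be verified numerically. In the sketch below, the six free entries are arbitrary illustrative choices (made so that *M* comes out non-singular); the dependent entries *k*11 and *m*11 follow from equation (9), and the generalized eigenvalues of the resulting pair should be −2 and −3.

```python
import numpy as np

# Arbitrary choices for the six free entries.
k12, k21, k22 = 1.0, 2.0, 1.0
m12, m21, m22 = 1.0, 2.0, 3.0

# Dependent entries k11 and m11 from equation (9).
denom = (k22 + 2.0 * m22) * (k22 + 3.0 * m22)
k11 = (k12 * k21 * k22 + 5.0 * k12 * k21 * m22 + 6.0 * k12 * m21 * m22
       + 6.0 * k21 * m12 * m22 - 6.0 * k22 * m12 * m21) / denom
m11 = (k21 * k22 * m12 + k12 * k22 * m21 - k12 * k21 * m22
       + 5.0 * k22 * m12 * m21 + 6.0 * m12 * m21 * m22) / denom

K = np.array([[k11, k12], [k21, k22]])
M = np.array([[m11, m12], [m21, m22]])

# Generalized eigenvalues of K x = lambda M x, i.e. eigenvalues of
# M^{-1} K, come out as -3 and -2 regardless of the free choices.
print(np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real))
```

For these choices one finds tr(*M*<sup>−1</sup>*K*) = −5 and det(*M*<sup>−1</sup>*K*) = 6, matching the coefficients of the characteristic polynomial in equation (2).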

Increasing the size of the square matrices increases the number of independent (free) variables faster than the number of dependent variables. In other words, for an *n*th-order system with *n* specified eigenvalues, 2*n*<sup>2</sup> unknown variables are required to find *K* and *M*, but, as will be shown later in this chapter, the generalized Cayley–Hamilton theorem only produces *n* independent equations.
