**3. Multilinear iterative Wiener filter**

It can be easily checked that

$$\begin{aligned} \mathbf{g} &= \mathbf{h}\_N \otimes \mathbf{h}\_{N-1} \otimes \cdots \otimes \mathbf{h}\_1 \\ &= (\mathbf{h}\_N \otimes \mathbf{h}\_{N-1} \otimes \cdots \otimes \mathbf{h}\_2 \otimes \mathbf{I}\_{L\_1}) \mathbf{h}\_1 \\ &= (\mathbf{h}\_N \otimes \mathbf{h}\_{N-1} \otimes \cdots \otimes \mathbf{h}\_3 \otimes \mathbf{I}\_{L\_2} \otimes \mathbf{h}\_1) \mathbf{h}\_2 \\ &\vdots \\ &= (\mathbf{h}\_N \otimes \mathbf{h}\_{N-1} \otimes \cdots \otimes \mathbf{h}\_{i+1} \otimes \mathbf{I}\_{L\_i} \otimes \mathbf{h}\_{i-1} \otimes \cdots \otimes \mathbf{h}\_1) \mathbf{h}\_i \\ &\vdots \\ &= (\mathbf{I}\_{L\_N} \otimes \mathbf{h}\_{N-1} \otimes \cdots \otimes \mathbf{h}\_1) \mathbf{h}\_N, \end{aligned} \tag{19}$$

where $\mathbf{I}\_{L\_i}$ denotes the identity matrix of size $L\_i \times L\_i$.
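As a quick numerical check of Eq. (19), consider a minimal NumPy sketch for $N = 3$ (the factor lengths $L\_1 = 2$, $L\_2 = 3$, $L\_3 = 4$ are arbitrary illustrative choices; the `[:, None]` reshapes are needed because `np.kron` treats 1-D arrays as rows):

```python
import numpy as np

# Illustrative setup: N = 3 factors with arbitrary lengths L1, L2, L3.
rng = np.random.default_rng(0)
L1, L2, L3 = 2, 3, 4
h1, h2, h3 = rng.standard_normal(L1), rng.standard_normal(L2), rng.standard_normal(L3)

# Global impulse response: g = h3 (x) h2 (x) h1.
g = np.kron(h3, np.kron(h2, h1))

# Eq. (19): each factor can be isolated as the right-hand multiplicand.
# Vectors are reshaped to columns so np.kron produces proper matrices.
g1 = np.kron(np.kron(h3, h2)[:, None], np.eye(L1)) @ h1
g2 = np.kron(np.kron(h3[:, None], np.eye(L2)), h1[:, None]) @ h2
g3 = np.kron(np.eye(L3), np.kron(h2, h1)[:, None]) @ h3

assert np.allclose(g, g1) and np.allclose(g, g2) and np.allclose(g, g3)
```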

Hence, the cost function given by Eq. (16) may be expressed in *N* equivalent forms:

$$\begin{split} J\left(\hat{\mathbf{h}}\_{1},\hat{\mathbf{h}}\_{2},\ldots,\hat{\mathbf{h}}\_{N}\right) &= \sigma\_{d}^{2} - 2\hat{\mathbf{g}}^{T}\mathbf{p} + \hat{\mathbf{g}}^{T}\mathbf{R}\hat{\mathbf{g}} \\ &= \sigma\_{d}^{2} - 2\hat{\mathbf{h}}\_{i}^{T}\left(\hat{\mathbf{h}}\_{N}\otimes\cdots\otimes\hat{\mathbf{h}}\_{i+1}\otimes\mathbf{I}\_{L\_{i}}\otimes\hat{\mathbf{h}}\_{i-1}\otimes\cdots\otimes\hat{\mathbf{h}}\_{1}\right)^{T}\mathbf{p} \\ &\quad+ \hat{\mathbf{h}}\_{i}^{T}\left(\hat{\mathbf{h}}\_{N}\otimes\cdots\otimes\hat{\mathbf{h}}\_{i+1}\otimes\mathbf{I}\_{L\_{i}}\otimes\hat{\mathbf{h}}\_{i-1}\otimes\cdots\otimes\hat{\mathbf{h}}\_{1}\right)^{T} \\ &\quad\times \mathbf{R}\left(\hat{\mathbf{h}}\_{N}\otimes\cdots\otimes\hat{\mathbf{h}}\_{i+1}\otimes\mathbf{I}\_{L\_{i}}\otimes\hat{\mathbf{h}}\_{i-1}\otimes\cdots\otimes\hat{\mathbf{h}}\_{1}\right)\hat{\mathbf{h}}\_{i} \\ &= \sigma\_{d}^{2} - 2\hat{\mathbf{h}}\_{i}^{T}\mathbf{p}\_{i} + \hat{\mathbf{h}}\_{i}^{T}\mathbf{R}\_{i}\hat{\mathbf{h}}\_{i}, \quad i = 1,2,\ldots,N,\end{split} \tag{20}$$

*Identification of Multilinear Systems: A Brief Overview DOI: http://dx.doi.org/10.5772/intechopen.102765*

where

$$\mathbf{p}\_{i} = \left(\hat{\mathbf{h}}\_{N} \otimes \dots \otimes \hat{\mathbf{h}}\_{i+1} \otimes \mathbf{I}\_{L\_{i}} \otimes \hat{\mathbf{h}}\_{i-1} \otimes \dots \otimes \hat{\mathbf{h}}\_{1}\right)^{T} \mathbf{p},\tag{21}$$

$$\begin{split} \mathbf{R}\_{i} &= \left( \hat{\mathbf{h}}\_{N} \otimes \dots \otimes \hat{\mathbf{h}}\_{i+1} \otimes \mathbf{I}\_{L\_{i}} \otimes \hat{\mathbf{h}}\_{i-1} \otimes \dots \otimes \hat{\mathbf{h}}\_{1} \right)^{T} \mathbf{R} \\ &\times \left( \hat{\mathbf{h}}\_{N} \otimes \dots \otimes \hat{\mathbf{h}}\_{i+1} \otimes \mathbf{I}\_{L\_{i}} \otimes \hat{\mathbf{h}}\_{i-1} \otimes \dots \otimes \hat{\mathbf{h}}\_{1} \right). \end{split} \tag{22}$$

If all the coefficients except $\hat{\mathbf{h}}\_i$ are kept fixed, we may define

$$\begin{aligned} J\_{\hat{\mathbf{h}}\_{1},\ldots,\hat{\mathbf{h}}\_{i-1},\hat{\mathbf{h}}\_{i+1},\ldots,\hat{\mathbf{h}}\_{N}} \left(\hat{\mathbf{h}}\_{i}\right) &= \sigma\_{d}^{2} - 2\hat{\mathbf{h}}\_{i}^{T}\mathbf{p}\_{i} + \hat{\mathbf{h}}\_{i}^{T}\mathbf{R}\_{i}\hat{\mathbf{h}}\_{i}, \\ &\quad i = 1,2,\ldots,N. \end{aligned} \tag{23}$$
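Since $\mathbf{R}\_i$ is symmetric and, in practice, positive definite, this is a quadratic form in $\hat{\mathbf{h}}\_i$ alone; the minimizer follows from setting its gradient to zero:

$$\nabla\_{\hat{\mathbf{h}}\_i} J = -2\mathbf{p}\_i + 2\mathbf{R}\_i \hat{\mathbf{h}}\_i = \mathbf{0}.$$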

The minimization of this convex cost function with respect to $\hat{\mathbf{h}}\_i$ yields

$$
\hat{\mathbf{h}}\_i = \mathbf{R}\_i^{-1} \mathbf{p}\_i, \ i = 1, 2, \dots, N. \tag{24}
$$

Using this result, an iterative approach can be derived. A set of initial values $\hat{\mathbf{h}}\_i^{(0)}$, $i = 1, 2, \ldots, N$, is chosen to start the algorithm, and then we can compute

$$\mathbf{p}\_1^{(0)} = \left(\hat{\mathbf{h}}\_N^{(0)} \otimes \hat{\mathbf{h}}\_{N-1}^{(0)} \otimes \dots \otimes \hat{\mathbf{h}}\_2^{(0)} \otimes \mathbf{I}\_{L\_1}\right)^T \mathbf{p},\tag{25}$$

$$\mathbf{R}\_1^{(0)} = \left(\hat{\mathbf{h}}\_N^{(0)} \otimes \hat{\mathbf{h}}\_{N-1}^{(0)} \otimes \dots \otimes \hat{\mathbf{h}}\_2^{(0)} \otimes \mathbf{I}\_{L\_1}\right)^T \mathbf{R} \left(\hat{\mathbf{h}}\_N^{(0)} \otimes \hat{\mathbf{h}}\_{N-1}^{(0)} \otimes \dots \otimes \hat{\mathbf{h}}\_2^{(0)} \otimes \mathbf{I}\_{L\_1}\right),\tag{26}$$

$$J\_{\hat{\mathbf{h}}\_{2},\hat{\mathbf{h}}\_{3},\ldots,\hat{\mathbf{h}}\_{N}}\left(\hat{\mathbf{h}}\_{1}^{(1)}\right) = \sigma\_{d}^{2} - 2\left(\hat{\mathbf{h}}\_{1}^{(1)}\right)^{T}\mathbf{p}\_{1}^{(0)} + \left(\hat{\mathbf{h}}\_{1}^{(1)}\right)^{T}\mathbf{R}\_{1}^{(0)}\left(\hat{\mathbf{h}}\_{1}^{(1)}\right). \tag{27}$$

The minimization of the cost function yields

$$
\hat{\mathbf{h}}\_1^{(1)} = \left(\mathbf{R}\_1^{(0)}\right)^{-1} \mathbf{p}\_1^{(0)}.\tag{28}
$$

Using $\hat{\mathbf{h}}\_1^{(1)}$ and $\hat{\mathbf{h}}\_i^{(0)}$, $i = 3, \ldots, N$, we can now compute $\hat{\mathbf{h}}\_2^{(1)}$. Then, the cost function becomes

$$J\_{\hat{\mathbf{h}}\_1, \hat{\mathbf{h}}\_3, \dots, \hat{\mathbf{h}}\_N} \left(\hat{\mathbf{h}}\_2^{(1)}\right) = \sigma\_d^2 - 2 \left(\hat{\mathbf{h}}\_2^{(1)}\right)^T \mathbf{p}\_2^{(1)} + \left(\hat{\mathbf{h}}\_2^{(1)}\right)^T \mathbf{R}\_2^{(1)} \left(\hat{\mathbf{h}}\_2^{(1)}\right),\tag{29}$$

where

$$\mathbf{p}\_2^{(1)} = \left(\hat{\mathbf{h}}\_N^{(0)} \otimes \hat{\mathbf{h}}\_{N-1}^{(0)} \otimes \dots \otimes \hat{\mathbf{h}}\_3^{(0)} \otimes \mathbf{I}\_{L\_2} \otimes \hat{\mathbf{h}}\_1^{(1)}\right)^T \mathbf{p},\tag{30}$$

$$\mathbf{R}\_2^{(1)} = \left(\hat{\mathbf{h}}\_N^{(0)} \otimes \hat{\mathbf{h}}\_{N-1}^{(0)} \otimes \dots \otimes \hat{\mathbf{h}}\_3^{(0)} \otimes \mathbf{I}\_{L\_2} \otimes \hat{\mathbf{h}}\_1^{(1)}\right)^T \mathbf{R} \left(\hat{\mathbf{h}}\_N^{(0)} \otimes \hat{\mathbf{h}}\_{N-1}^{(0)} \otimes \dots \otimes \hat{\mathbf{h}}\_3^{(0)} \otimes \mathbf{I}\_{L\_2} \otimes \hat{\mathbf{h}}\_1^{(1)}\right). \tag{31}$$

The minimization of the cost function yields

$$
\hat{\mathbf{h}}\_2^{(1)} = \left(\mathbf{R}\_2^{(1)}\right)^{-1} \mathbf{p}\_2^{(1)}.\tag{32}
$$

All the other estimates $\hat{\mathbf{h}}\_i^{(1)}$, $i = 3, 4, \ldots, N$, can be computed in a similar manner. By further iterating up to iteration $n$, the estimates of the $N$ vectors are obtained. This minimization technique is called "block coordinate descent" [41].
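The whole procedure can be sketched in a few lines of NumPy. This is a toy setup, not a definitive implementation: for $N = 3$, the statistics are fabricated from a known separable system with a white, noiseless input (so $\mathbf{R} = \mathbf{I}$ and $\mathbf{p} = \mathbf{g}$), and the factor lengths and number of sweeps are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
L = [2, 3, 4]                                   # illustrative lengths L1, L2, L3
h_true = [rng.standard_normal(Li) for Li in L]
g_true = np.kron(h_true[2], np.kron(h_true[1], h_true[0]))
M = g_true.size

# Toy Wiener statistics: white, noiseless input => R = I and p = R g = g.
R = np.eye(M)
p = R @ g_true

def lift(h_hat, i):
    """Kronecker matrix of Eqs. (21)-(22): identity replaces factor i."""
    T = np.ones((1, 1))
    for j in range(len(h_hat) - 1, -1, -1):     # order h_N (x) ... (x) h_1
        T = np.kron(T, np.eye(L[i]) if j == i else h_hat[j][:, None])
    return T                                    # shape: M x L_i

h_hat = [rng.standard_normal(Li) for Li in L]   # initial guesses h_i^(0)
for n in range(20):                             # block coordinate descent sweeps
    for i in range(len(L)):
        T = lift(h_hat, i)
        p_i = T.T @ p                           # Eq. (21)
        R_i = T.T @ R @ T                       # Eq. (22)
        h_hat[i] = np.linalg.solve(R_i, p_i)    # Eq. (24)

g_hat = np.kron(h_hat[2], np.kron(h_hat[1], h_hat[0]))
assert np.allclose(g_hat, g_true)
```

The individual factors are only identifiable up to scaling (e.g., $\alpha \hat{\mathbf{h}}\_1$ and $\hat{\mathbf{h}}\_2 / \alpha$ give the same $\hat{\mathbf{g}}$), which is why convergence is checked on the global filter $\hat{\mathbf{g}}$ rather than on the factors themselves.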
