**2. System model in the multilinear framework**

Let us consider a multiple-input/single-output (MISO) system, whose output signal at the time index $t$ can be written as

$$y(t) = \sum\_{l\_1=1}^{L\_1} \sum\_{l\_2=1}^{L\_2} \cdots \sum\_{l\_N=1}^{L\_N} x\_{l\_1 l\_2 \ldots l\_N}(t)\, h\_{1, l\_1} h\_{2, l\_2} \cdots h\_{N, l\_N},\tag{1}$$

where the individual channels are modeled by the vectors:

$$\mathbf{h}\_{i} = \begin{bmatrix} h\_{i,1} & h\_{i,2} & \cdots & h\_{i,L\_i} \end{bmatrix}^T, \quad i = 1, 2, \ldots, N,\tag{2}$$

the superscript $^T$ denotes the transpose operator, and the input signals may be expressed in tensorial form as $\mathcal{X}(t) \in \mathbb{R}^{L\_1 \times L\_2 \times L\_3 \times \cdots \times L\_N}$, with the elements $[\mathcal{X}(t)]\_{l\_1 l\_2 \ldots l\_N} = x\_{l\_1 l\_2 \ldots l\_N}(t)$. Consequently, the output signal becomes

$$y(t) = \mathcal{X}(t) \times\_1 \mathbf{h}\_1^T \times\_2 \mathbf{h}\_2^T \times\_3 \mathbf{h}\_3^T \cdots \times\_N \mathbf{h}\_N^T,\tag{3}$$

where $\times\_i$ (for $i = 1, 2, \ldots, N$) denotes the mode-$i$ product [7]. It can be said that $y(t)$ is a multilinear form because it is a linear function of each of the vectors $\mathbf{h}\_i$, $i = 1, 2, \ldots, N$, when the other $N-1$ vectors are fixed. In this context, $y(t)$ may be regarded as an extension of the bilinear form [19]. Next, let us define

$$\mathcal{H} = \mathbf{h}\_1 \circ \mathbf{h}\_2 \circ \cdots \circ \mathbf{h}\_N,\tag{4}$$

*Identification of Multilinear Systems: A Brief Overview DOI: http://dx.doi.org/10.5772/intechopen.102765*

where $\circ$ denotes the vector outer product, i.e., $\mathbf{h}\_1 \circ \mathbf{h}\_2 = \mathbf{h}\_1 \mathbf{h}\_2^T$, with $(\mathbf{h}\_1 \circ \mathbf{h}\_2)\_{i,j} = h\_{1,i} h\_{2,j}$ and $\text{vec}(\mathbf{h}\_1 \circ \mathbf{h}\_2) = \mathbf{h}\_2 \otimes \mathbf{h}\_1$, and

$$(\mathcal{H})\_{l\_1, l\_2, \dots, l\_N} = h\_{1, l\_1} h\_{2, l\_2} \cdots h\_{N, l\_N},\tag{5}$$

$$\text{vec}(\mathcal{H}) = \mathbf{h}\_N \otimes \mathbf{h}\_{N-1} \otimes \dots \otimes \mathbf{h}\_1,\tag{6}$$

where $\otimes$ denotes the Kronecker product and $\text{vec}(\cdot)$ is the vectorization operation:

$$\text{vec}(\mathcal{H}) = \begin{bmatrix} \text{vec}(\mathbf{H}\_{:\ldots:1}) \\ \vdots \\ \text{vec}(\mathbf{H}\_{:\ldots:L\_{N}}) \end{bmatrix}, \tag{7}$$

$$\text{vec}(\mathbf{H}\_{:\ldots:l\_N}) = \begin{bmatrix} \text{vec}(\mathbf{H}\_{:\ldots:1\,l\_N}) \\ \vdots \\ \text{vec}(\mathbf{H}\_{:\ldots:L\_{N-1}\,l\_N}) \end{bmatrix}, \tag{8}$$

and so on, where $\mathbf{H}\_{:\ldots:l\_N} \in \mathbb{R}^{L\_1 \times L\_2 \times L\_3 \times \cdots \times L\_{N-1}}$ represent the frontal slices of the tensor $\mathcal{H}$. Therefore, the output signal can be expressed as

$$y(t) = \text{vec}^T(\mathcal{H})\text{vec}[\mathcal{X}(t)],\tag{9}$$

where

$$\text{vec}[\mathcal{X}(t)] = \begin{bmatrix} \text{vec}[\mathbf{X}\_{:\ldots:1}(t)] \\ \vdots \\ \text{vec}[\mathbf{X}\_{:\ldots:L\_N}(t)] \end{bmatrix} = \mathbf{x}(t), \tag{10}$$

with $\mathbf{X}\_{:\ldots:l\_N}(t) \in \mathbb{R}^{L\_1 \times L\_2 \times L\_3 \times \cdots \times L\_{N-1}}$ being the frontal slices of the tensor $\mathcal{X}(t)$. Let us denote the global impulse response of length $L\_1 L\_2 \cdots L\_N$ as

$$\mathbf{g} = \text{vec}(\mathcal{H}) = \mathbf{h}\_N \otimes \mathbf{h}\_{N-1} \otimes \cdots \otimes \mathbf{h}\_1. \tag{11}$$
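The vectorization identity in Eqs. (6) and (11) can be checked numerically. The following is a minimal NumPy sketch for $N = 3$; the dimensions and variable names are illustrative choices, not taken from the chapter. Column-major (Fortran-order) flattening stacks the frontal slices in the order prescribed by Eqs. (7) and (8).

```python
import numpy as np

rng = np.random.default_rng(1)
L1, L2, L3 = 2, 3, 4
h1, h2, h3 = rng.standard_normal(L1), rng.standard_normal(L2), rng.standard_normal(L3)

# Rank-1 tensor H = h1 ∘ h2 ∘ h3 (Eq. (4)); element (i,j,k) is h1[i]*h2[j]*h3[k] (Eq. (5)).
H = np.einsum('i,j,k->ijk', h1, h2, h3)

# Column-major flattening stacks the frontal slices, matching Eqs. (7)-(8).
vec_H = H.flatten(order='F')

# Kronecker identity of Eqs. (6)/(11): vec(H) = h3 ⊗ h2 ⊗ h1.
g = np.kron(h3, np.kron(h2, h1))

print(np.allclose(vec_H, g))  # True
```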

Here, an observation can be made: the solution of the decomposition in Eq. (11) is not unique [17, 24]. Despite this, no scaling ambiguity occurs in the identification of the global impulse response, **g**.
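The observation about scaling can be illustrated with a short NumPy sketch (again with arbitrary small dimensions): rescaling one factor by $c$ and another by $1/c$ changes the individual vectors but leaves their Kronecker product, and hence $\mathbf{g}$, untouched.

```python
import numpy as np

rng = np.random.default_rng(2)
h1, h2, h3 = rng.standard_normal(2), rng.standard_normal(3), rng.standard_normal(4)

# Global impulse response (Eq. (11)) for N = 3.
g = np.kron(h3, np.kron(h2, h1))

# Rescaling the factors as c*h1 and h2/c leaves the Kronecker product unchanged:
# the individual vectors are identifiable only up to such scalings,
# while g itself carries no scaling ambiguity.
c = 5.0
g_scaled = np.kron(h3, np.kron(h2 / c, c * h1))

print(np.allclose(g, g_scaled))  # True
```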

Using Eqs. (9)–(11), we may rewrite $y(t)$ as

$$y(t) = \mathbf{g}^T \mathbf{x}(t). \tag{12}$$
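The equivalence between the multilinear form of Eqs. (1)/(3) and the linear-in-$\mathbf{g}$ form of Eq. (12) can be verified numerically; the sketch below (NumPy, $N = 3$, assumed dimensions) contracts the input tensor with the channels directly and compares against the inner product with $\mathbf{g}$.

```python
import numpy as np

rng = np.random.default_rng(0)
L1, L2, L3 = 2, 3, 4
h1, h2, h3 = rng.standard_normal(L1), rng.standard_normal(L2), rng.standard_normal(L3)

# Input tensor X(t) at one time index, and its vectorization x(t) (Eq. (10)).
X = rng.standard_normal((L1, L2, L3))
x = X.flatten(order='F')       # column-major flattening stacks the frontal slices

# Multilinear form of Eqs. (1)/(3): contract each mode with the corresponding channel.
y_multilinear = np.einsum('ijk,i,j,k->', X, h1, h2, h3)

# Linear form of Eq. (12): inner product with the global impulse response g (Eq. (11)).
g = np.kron(h3, np.kron(h2, h1))
y_linear = g @ x

print(np.isclose(y_multilinear, y_linear))  # True
```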

We aim to identify the global impulse response, **g**. We can define the reference (or desired) signal as

$$d(t) = \mathbf{g}^T \mathbf{x}(t) + w(t),\tag{13}$$

where $w(t)$ denotes the additive noise, which is uncorrelated with the input signals. The variance of the reference signal is

$$
\sigma\_d^2 = \mathbf{g}^T E\left[\mathbf{x}(t)\mathbf{x}^T(t)\right] \mathbf{g} + \sigma\_w^2 = \mathbf{g}^T \mathbf{R} \mathbf{g} + \sigma\_w^2,\tag{14}
$$

with $E[\cdot]$ denoting mathematical expectation, $\mathbf{R} = E\left[\mathbf{x}(t)\mathbf{x}^T(t)\right]$, and $\sigma\_w^2 = E\left[w^2(t)\right]$. Next, the error signal can be defined as

$$e(t) = d(t) - \hat{\mathbf{g}}^T \mathbf{x}(t),\tag{15}$$

where $\hat{\mathbf{g}}$ denotes an estimate of the global impulse response.

The optimization criterion is the minimization of the mean-squared error (MSE), which can be defined using Eq. (15):

$$J(\hat{\mathbf{g}}) = E[e^2(t)] = \sigma\_d^2 - 2\hat{\mathbf{g}}^T \mathbf{p} + \hat{\mathbf{g}}^T \mathbf{R} \hat{\mathbf{g}},\tag{16}$$

where $\mathbf{p} = E[d(t)\mathbf{x}(t)]$ denotes the cross-correlation vector between $d(t)$ and $\mathbf{x}(t)$. The solution to this minimization problem is given by the popular Wiener filter [40]:

$$
\hat{\mathbf{g}}\_W = \mathbf{R}^{-1} \mathbf{p}.\tag{17}
$$
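A small simulation illustrates Eq. (17) in practice: sample estimates of $\mathbf{R}$ and $\mathbf{p}$ computed from data generated according to Eq. (13) recover $\mathbf{g}$. This NumPy sketch uses assumed dimensions, noise level, and snapshot count; for $N = 2$, $\mathbf{g} = \mathbf{h}\_2 \otimes \mathbf{h}\_1$.

```python
import numpy as np

rng = np.random.default_rng(3)
L1, L2 = 2, 3
h1, h2 = rng.standard_normal(L1), rng.standard_normal(L2)
g = np.kron(h2, h1)            # true global impulse response (Eq. (11)), N = 2
L = L1 * L2

# Simulated input snapshots x(t) and noisy reference d(t) (Eq. (13)).
T = 20000
X = rng.standard_normal((T, L))
d = X @ g + 0.1 * rng.standard_normal(T)

# Sample estimates of R = E[x x^T] and p = E[d x].
R = X.T @ X / T
p = X.T @ d / T

# Wiener solution (Eq. (17)); solving the linear system is preferred
# to forming an explicit inverse.
g_W = np.linalg.solve(R, p)

print(np.allclose(g_W, g, atol=1e-2))
```

With enough snapshots the residual estimation error is on the order of the noise level divided by $\sqrt{T}$, so the estimate matches $\mathbf{g}$ closely.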

Relation (17) provides the global impulse response. In order to obtain the $N$ coefficient vectors $\mathbf{h}\_i$, $i = 1, 2, \ldots, N$, a nonlinear system of $L\_1 L\_2 \cdots L\_N$ equations with $L\_1 + L\_2 + \cdots + L\_N$ scalar unknowns needs to be solved:

$$
\hat{\mathbf{g}}\_W = \hat{\mathbf{h}}\_{W,N} \otimes \hat{\mathbf{h}}\_{W,N-1} \otimes \cdots \otimes \hat{\mathbf{h}}\_{W,1}.\tag{18}
$$
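For $N = 2$, one standard way to solve a system of this form (not necessarily the method used in this chapter) is a rank-1 decomposition: reshaping $\hat{\mathbf{g}}\_W$ column-major into an $L\_1 \times L\_2$ matrix yields $\mathbf{h}\_1 \mathbf{h}\_2^T$, whose leading singular pair recovers the factors up to scaling. A NumPy sketch, with assumed dimensions:

```python
import numpy as np

rng = np.random.default_rng(4)
L1, L2 = 3, 4
h1, h2 = rng.standard_normal(L1), rng.standard_normal(L2)
g = np.kron(h2, h1)            # global impulse response for N = 2 (Eq. (18))

# Column-major reshaping of g gives the rank-1 matrix h1 h2^T,
# so the factors come from the leading singular pair.
G = g.reshape(L1, L2, order='F')
U, s, Vt = np.linalg.svd(G)
h1_hat = np.sqrt(s[0]) * U[:, 0]
h2_hat = np.sqrt(s[0]) * Vt[0, :]

# The factors are recovered only up to scaling (and sign),
# but their Kronecker product reproduces g.
print(np.allclose(np.kron(h2_hat, h1_hat), g))  # True
```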
