**A.1 Auto-associative kernel regression**

The framework of the AAKR technique comprises three steps, briefly presented below [61, 62].

#### **1) Distance calculation**

The distance between the query vector **x**<sub>*q*</sub> and each of the memory data vectors is computed. Several distance metrics can be used; the most common is the Euclidean distance (L2-norm):

$$d\_i(\mathbf{X}\_i, \mathbf{x}\_q) = \left\| \mathbf{X}\_i - \mathbf{x}\_q \right\|\_2 = \sqrt{\sum\_{j=1}^p \left( x\_{ij} - x\_{qj} \right)^2} \tag{A1}$$

For a single query vector, this calculation is repeated for each of the *M* memory vectors, resulting in the distance vector **d** ∈ ℝ<sup>*M*×1</sup>.
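As an illustration, the distance step of Eq. (A1) can be sketched in NumPy; the function name and array shapes below are ours, not from the text:

```python
import numpy as np

def distances(X, x_q):
    """Euclidean (L2) distances between the query vector x_q and each
    of the M memory vectors (the rows of X), as in Eq. (A1).

    X   : (M, p) memory matrix
    x_q : (p,)   query vector
    Returns d : (M,) distance vector.
    """
    # Broadcasting subtracts x_q from every row of X; the norm is
    # then taken row-wise (axis=1).
    return np.linalg.norm(X - x_q, axis=1)
```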

#### **2) Similarity weight quantification**

Each distance *d<sub>i</sub>* in the vector **d** is used to determine the weights for the AAKR, for example by evaluating the Gaussian kernel:

$$k\_i(\mathbf{X}\_i, \mathbf{x}\_q) = \exp\left(\frac{-d\_i^2}{2h^2}\right) \tag{A2}$$

where *h* is the bandwidth.
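A minimal sketch of the kernel evaluation in Eq. (A2), with an illustrative function name of our choosing:

```python
import numpy as np

def gaussian_kernel(d, h):
    """Gaussian kernel weights from the distance vector d, Eq. (A2).

    d : (M,) distances from Eq. (A1)
    h : bandwidth (larger h -> flatter weights)
    Returns k : (M,) similarity weights in (0, 1].
    """
    return np.exp(-d**2 / (2.0 * h**2))
```

Note that a zero distance yields the maximum weight of 1, and the weights decay smoothly as the distance grows relative to the bandwidth *h*.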

#### **3) Output estimation**

Finally, the quantified weights (Eq. (A2)) are combined with the memory data vectors to estimate each signal *j* as a weighted average:

$$
\hat{x}\_{qj} = \frac{\sum\_{i=1}^{M} k\_i x\_{ij}}{\sum\_{i=1}^{M} k\_i} \tag{A3}
$$
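Putting the three steps together, a minimal NumPy sketch of the full AAKR estimator might look as follows; the function signature and bandwidth value are illustrative assumptions, not from the text:

```python
import numpy as np

def aakr_estimate(X, x_q, h):
    """AAKR estimate of the query vector x_q from memory matrix X.

    X   : (M, p) memory matrix of fault-free data vectors
    x_q : (p,)   query vector
    h   : kernel bandwidth
    Returns x_hat : (p,) estimated (reconstructed) vector.
    """
    # Step 1: Euclidean distances to every memory vector (Eq. A1)
    d = np.linalg.norm(X - x_q, axis=1)
    # Step 2: Gaussian kernel similarity weights (Eq. A2)
    k = np.exp(-d**2 / (2.0 * h**2))
    # Step 3: weighted average of the memory vectors (Eq. A3)
    return k @ X / k.sum()

# Usage: a query halfway between two memory vectors receives equal
# weights, so the estimate is their midpoint.
X = np.array([[0.0], [2.0]])
x_hat = aakr_estimate(X, np.array([1.0]), h=1.0)
```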
