#### **2.10 Tent chaff points**

Each feature *f<sub>q</sub>* is projected onto the polynomial *p* by evaluating:

$$p(f_q) = c_{d+1} \cdot f_q^{\,d+1} + c_d \cdot f_q^{\,d} + \dots + c_1 \cdot f_q. \tag{1}$$

The genuine point list is then:

$$\text{Genuine Points} = \begin{bmatrix} f_{q_1} & p(f_{q_1}) \\ \vdots & \vdots \\ f_{q_m} & p(f_{q_m}) \end{bmatrix} \tag{2}$$

**Figure 5.** *Feature projection onto the polynomial.*
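The genuine-point construction in Eqs. (1)–(2) can be sketched as follows; the coefficients, the small prime field, and the feature values are illustrative stand-ins, not the values used in this work.

```python
# Sketch of genuine-point generation: each quantized feature f_q is
# evaluated on the secret polynomial p, producing the pair (f_q, p(f_q)).
# GF(257) is a toy field chosen for illustration only.

P = 257  # hypothetical small prime field; a real vault would use a larger one

def eval_poly(coeffs, x, p=P):
    """Evaluate c_{d+1}*x^{d+1} + ... + c_1*x (mod p) via Horner's rule."""
    acc = 0
    for c in coeffs:          # coeffs ordered from highest degree down
        acc = (acc * x + c) % p
    return (acc * x) % p      # Eq. (1) has no constant term, so factor out x

coeffs = [3, 7, 11]           # illustrative c_{d+1}, ..., c_1 (d + 1 = 3)
features = [12, 45, 78, 101]  # illustrative quantized features f_{q_1..q_m}
genuine_points = [(f, eval_poly(coeffs, f)) for f in features]
# e.g. eval_poly(coeffs, 12) = 3*12^3 + 7*12^2 + 11*12 mod 257 = 156
```

Evaluating the polynomial in the field keeps both coordinates of every genuine pair inside GF(*p*), which is what later allows chaff points drawn from the same field to be indistinguishable from them.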

To mask the identity of the genuine points, chaff points are added using the chaotic tent map. Let *μ* be the seed (control parameter) and *x*<sub>0</sub> the initial state; successive states are generated by:

$$x_{n+1} = \begin{cases} \mu x_n, & x_n \in [0, 0.5) \\ \mu (1 - x_n), & x_n \in [0.5, 1] \end{cases} \tag{3}$$
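A minimal iterator for the tent map of Eq. (3); the values of *μ* and *x*<sub>0</sub> below are illustrative stand-ins for the secret seed and initial state shared by both parties.

```python
# Tent map of Eq. (3): piecewise-linear chaotic map on [0, 1].
# mu and x0 here are made-up example values, not the agreed secrets.

def tent_map(x, mu=1.99):
    return mu * x if x < 0.5 else mu * (1.0 - x)

def tent_sequence(x0, n, mu=1.99):
    """Generate n successive states x_1, ..., x_n from x_0."""
    xs, x = [], x0
    for _ in range(n):
        x = tent_map(x, mu)
        xs.append(x)
    return xs

seq = tent_sequence(0.37, 5)   # first state: 1.99 * 0.37 = 0.7363
```

Because the map is deterministic, anyone holding the same (*μ*, *x*<sub>0</sub>) reproduces the identical sequence, which is the property the vault relies on.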

No minimum distance needs to be enforced between chaff points and genuine points. The reason is that the chaff points are known to both sides, which can regenerate them exactly from the shared seed.

$$\text{Chaff Points} = \begin{bmatrix} \text{Ch}_{x_1} & \text{Ch}_{y_1} \\ \vdots & \vdots \\ \text{Ch}_{x_n} & \text{Ch}_{y_n} \end{bmatrix} \tag{4}$$

Finally, the genuine points and chaff points are combined, and the resulting matrix is shuffled; this shuffled matrix is the final fuzzy vault.
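A sketch of the whole assembly step: chaff coordinates are derived from the tent-map stream (so both sides can regenerate them later), appended to the genuine pairs, and shuffled. The field size, seed, scaling into the field, and the sample genuine pairs are all hypothetical choices for illustration.

```python
import random

P = 257  # toy field size, as in the earlier sketch

def tent_stream(x0, n, mu=1.99):
    """n successive tent-map states from x0 (Eq. 3)."""
    xs, x = [], x0
    for _ in range(n):
        x = mu * x if x < 0.5 else mu * (1.0 - x)
        xs.append(x)
    return xs

def make_chaff(x0, n):
    """Derive n chaff pairs (Ch_x, Ch_y) from the chaotic stream (Eq. 4)."""
    stream = tent_stream(x0, 2 * n)
    # scale chaotic values in [0, 1] to field elements
    return [(int(stream[2 * i] * P) % P, int(stream[2 * i + 1] * P) % P)
            for i in range(n)]

genuine = [(12, 156), (45, 33)]      # illustrative (f_q, p(f_q)) pairs
chaff = make_chaff(x0=0.37, n=6)     # regenerable from the shared seed
vault = genuine + chaff
random.Random(42).shuffle(vault)     # shuffled final vault matrix
```

Since `make_chaff` is deterministic in the seed, the receiver can call it with the same `x0` and obtain the identical chaff list, which is exactly why no separation distance from the genuine points is required.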

**Figure 6** shows how chaff points hide the genuine points; the red circles in (a) are chaff points, and (b) shows the final point projection as an attacker sees it.

#### **2.11 Unlock the vault**

The vault message is received, and decryption is attempted using input features produced from the evaluation dataset. These features are evaluated in the trained model to identify the person behind the vault pairs. If the predicted person matches the original one, both parties regenerate the chaff point set using the agreed seed and initial state, remove the chaff points, and then recover the message from the remaining (*x<sub>i</sub>*, *y<sub>i</sub>*) pairs through polynomial reconstruction; otherwise, the vault is rejected.

```
Input:     A fuzzy vault R
Output:    A value S' ∈ F, or null
Variables: CH: chaff points, R: the vault, Q: the genuine point set

Q ← ∅
regenerate chaff points CH from the agreed seed
for i = 1 to n do
    if (x_i, y_i) ∈ R matches a point in CH then
        remove (x_i, y_i) from R
    else
        Q ← Q ∪ {(x_i, y_i)}
end for
S' ← RSDECODE(Q, k)
Output S' or null
```
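The filtering loop above can be sketched in a few lines of Python; the decode step (`RSDECODE` in the pseudocode) is left abstract here, and the sample vault and chaff values are made up for illustration.

```python
# Unlocking sketch: regenerate the chaff set from the shared seed,
# discard those points from the vault, and keep the rest as candidate
# genuine pairs Q for polynomial reconstruction.

def unlock(vault, regen_chaff):
    chaff = set(regen_chaff)                      # CH: regenerated chaff points
    q = [pt for pt in vault if pt not in chaff]   # Q: surviving (x_i, y_i) pairs
    return q                                      # would feed RSDECODE(Q, k)

vault = [(1, 5), (9, 2), (4, 7), (6, 6)]          # illustrative vault pairs
chaff = [(9, 2), (6, 6)]                          # illustrative regenerated chaff
print(unlock(vault, chaff))                       # [(1, 5), (4, 7)]
```

Exact chaff removal is possible only because both sides hold the seed; an attacker without it cannot tell which pairs to discard.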
**Figure 6.** *How chaff points hide the polynomial.*

#### **2.12 Lagrange interpolation**

Lagrange interpolation is a method for reconstructing polynomials; it computes the interpolating polynomial by solving the system:

$$A\mathbf{x} = \mathbf{b}, \text{ where } b_i = y_i, \; i = 0, \dots, n, \tag{5}$$

"The entries of *A* can be defined by *a<sub>ij</sub>* = *p<sub>j</sub>*(*x<sub>i</sub>*), where *i*, *j* = 0, …, *n*; *x*<sub>0</sub>, *x*<sub>1</sub>, …, *x<sub>n</sub>* are the points at which the data *y*<sub>0</sub>, *y*<sub>1</sub>, …, *y<sub>n</sub>* are obtained; and *p<sub>j</sub>*(*x*) = *x<sup>j</sup>*, *j* = 0, 1, …, *n*. The basis {1, *x*, …, *x<sup>n</sup>*} of the space of polynomials of degree at most *n* is called the monomial basis, and the corresponding matrix *A* is called the Vandermonde matrix for the points *x*<sub>0</sub>, *x*<sub>1</sub>, …, *x<sub>n</sub>*. In Lagrange interpolation, the matrix *A* is simply the identity matrix, by virtue of the fact that the interpolating polynomial is written in the form:"

$$p_n(x) = \sum_{j=0}^n y_j \mathcal{L}_{n,j}(x) \tag{6}$$

where the polynomials *L<sub>n,j</sub>*(*x*), *j* = 0, …, *n*, have the property that

$$\mathcal{L}_{n,j}(x_i) = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases} \tag{7}$$

The Lagrange polynomials *L<sub>n,j</sub>*(*x*), *j* = 0, …, *n*, where *x*<sub>0</sub>, *x*<sub>1</sub>, …, *x<sub>n</sub>* are the interpolation points, are defined by:

$$\mathcal{L}_{n,j}(x) = \prod_{k=0,\,k\neq j}^{n} \frac{x - x_k}{x_j - x_k} \tag{8}$$
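Eqs. (6)–(8) translate directly into code. The sketch below interpolates over the rationals for readability; the vault itself would perform the same arithmetic in a Galois field, but the structure of the formula is identical. The sample points are made up (samples of 3*x*² − *x* + 1).

```python
from fractions import Fraction

def lagrange_eval(points, x):
    """Evaluate the interpolating polynomial p_n at x (Eq. 6)."""
    total = Fraction(0)
    for j, (xj, yj) in enumerate(points):
        basis = Fraction(1)
        for k, (xk, _) in enumerate(points):
            if k != j:                            # Eq. (8): product over k != j
                basis *= Fraction(x - xk, xj - xk)
        total += yj * basis                       # y_j * L_{n,j}(x)
    return total

pts = [(0, 1), (1, 3), (2, 11)]                   # samples of 3x^2 - x + 1
print(lagrange_eval(pts, 3))                      # 25, i.e. 3*9 - 3 + 1
```

The denominator *x<sub>j</sub>* − *x<sub>k</sub>* in Eq. (8) is why every interpolation point must have a distinct *x*-coordinate: a repeated value makes it zero.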

### **3. Conclusion**

After testing, the system gives a good classification accuracy of 96%, but the run time of the fuzzy vault authentication algorithm is slow, and authentication must be fast to be practical in real life. The slowness is caused by the high number of EEG features, which results in many operations to compute Lagrange's interpolation and makes the algorithm impractical. On the other hand, using tent chaff points gives the system an advantage: it reduces the errors that occur when separating the chaff points from the genuine points (the EEG signal features), because the initial seeds are known to both sender and receiver, so the system can regenerate the chaff points and remove them with little or no effect on the genuine points. Traditional chaff point generation, by contrast, must keep each chaff point at a distance from the genuine points, which requires extra calculation that this method avoids.

We also had difficulties converting the features, which are floating-point numbers, into the integers required for working in a Galois field. Another problem is the repeated values produced by this conversion: because the features' values are close together, quantization produces repetitions. Repeated values cannot be used when reconstructing the polynomial, since they lead to division by zero; each interpolation point must be unique.
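The quantization problem described above can be seen in a small example; the feature values and scale factor are made up for illustration.

```python
# Illustration of the float-to-integer issue: quantizing close feature
# values can map two distinct features to the same integer, and duplicate
# x-coordinates break Lagrange reconstruction (zero denominator x_j - x_k).

def quantize(features, scale=100):
    """Map float features to integers usable as Galois-field elements."""
    return [int(round(f * scale)) for f in features]

feats = [0.512, 0.513, 0.778]    # two features closer than 1/scale
q = quantize(feats)              # -> [51, 51, 78]: collision on 51
unique_q = sorted(set(q))        # dedupe before using as x-coordinates
```

Deduplicating (or otherwise perturbing) the quantized values before evaluation is one way to guarantee the distinct *x*-coordinates that interpolation requires.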

The system used the EEGs of nine healthy persons from the BCI Competition dataset and extracted features from the alpha- and beta-band spectrum of the EEG signal over three channels. A support vector machine (SVM) was used to classify two tasks, left hand and right hand, achieving 96.98% validation accuracy with 10-fold cross-validation on the training set, and the model was then saved. These features are evaluated on a polynomial generated from the secret key, after which chaff points generated with a tent map are added, which reduces errors; for decoding, Lagrange interpolation is used for polynomial reconstruction and recovery of the key.

To measure performance, a confusion matrix was calculated, from which precision and recall were derived. Beyond computing precision and recall, the confusion matrix is important for analyzing the results, because it gives strong evidence of where the classifier goes wrong. For our model, the classifier goes in the right direction, meaning it can distinguish between the subjects' labels; **Figure 7** illustrates the confusion matrix for the nine subjects.

**Figure 7.** *Confusion matrix for the classification model.*

#### **Table 2.** *TPR and FNR.*

The total validation accuracy is 96.98%. From the confusion matrix, one can also calculate the true positive rate (TPR) and the false negative rate (FNR) as shown below; observing the table, TPR is high and FNR is low, which means the performance is at its best. See **Table 2**.
