2.3. Performance discussion

Figure 2 shows the receiver operating characteristic (ROC) curves for the single-user detector and for the soft and hard fusion rules over an additive white Gaussian noise (AWGN) channel. To generate this figure, we assume a cognitive radio system with K = 7 cooperative nodes operating at an SNR of γ_u = −22 dB. The local node decisions are made after observing N = 1000 energy-detection samples. For the soft fusion rules, the node SNRs γ_j are {−24.3, −21.8, −20.6, −21.6, −20.4, −22.2, −21.3} dB and the noise variances σ_j² are {1, 1, 1, 1, 1, 1, 1}. The false alarm probability P_f is varied from 0 to 1 in steps of 0.025. The simulation results show that the soft EGC and optimal MRC fusion rules outperform the other soft and hard fusion rules, even though the soft EGC rule does not require any channel state information from the nodes.

Figure 2. ROC curves for the soft and hard fusion rules under AWGN receiver noise, σ_u² = 1, γ_u = −22 dB, K = 7 users, and energy detection over N = 1000 samples.
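The soft-fusion comparison above can be sketched in simulation. The following is a minimal NumPy sketch, not the chapter's simulation code: it reuses the figure's node SNRs, models the PU signal as Gaussian, and compares EGC (equal weights) with an SNR-weighted MRC-style combiner; the trial count and the quantile-based threshold at P_f = 0.1 are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

K, N, trials = 7, 1000, 500           # nodes, samples per window, Monte Carlo trials
snr_db = np.array([-24.3, -21.8, -20.6, -21.6, -20.4, -22.2, -21.3])
snr = 10.0 ** (snr_db / 10.0)         # per-node linear SNR (unit noise variance)

def node_energies(pu_present: bool) -> np.ndarray:
    """Energy statistic of each node over N samples, shape (trials, K)."""
    samples = rng.normal(size=(trials, K, N))
    if pu_present:
        # illustrative Gaussian PU signal scaled to each node's SNR
        samples = samples + rng.normal(scale=np.sqrt(snr)[None, :, None],
                                       size=(trials, K, N))
    return (samples ** 2).sum(axis=2)

e0 = node_energies(False)             # H0: PU absent
e1 = node_energies(True)              # H1: PU present

# EGC sums the node energies with equal weight; the MRC-style rule weights
# each node's energy by its (assumed known) SNR.
for name, w in (("EGC", np.ones(K)), ("MRC", snr)):
    t0, t1 = e0 @ w, e1 @ w
    thr = np.quantile(t0, 1 - 0.1)    # empirical threshold giving Pf = 0.1
    pd = (t1 >= thr).mean()
    print(f"{name}: Pd ≈ {pd:.3f} at Pf = 0.1")
```

Sweeping the quantile from 0 to 1 instead of fixing it at 0.9 traces out ROC curves like those in Figure 2.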

Figure 3 shows the ROC curves depicting the performance of the SVM classifier in classifying 1000 new frames after training it over a set containing M = 1000 frames. The thresholds used for training the SVM classifier (i.e., the single-user threshold and the AND, OR, MRC, SLS, and EGC fusion-rule thresholds) are obtained numerically by considering the cognitive system used to generate Figure 2; however, here we set the false alarm probability to P_f = 0.1.

Machine Learning Approaches for Spectrum Management in Cognitive Radio Networks
http://dx.doi.org/10.5772/intechopen.74599

From Figure 3 and Table 1, we can notice that when the SVM classifier is trained with any one of the following thresholds: single user, OR, MRC, SLS, or EGC, it detects 100% of the positive classes. We can also notice that training with the EGC threshold provides about 90% precision in classifying the positive classes (i.e., roughly 10% of the positive decisions cause harmful interference), whereas training the SVM with the AND threshold yields 100% precision at a recall of 97.8%. Table 1 shows the classification accuracy of the SVM classifier (i.e., the proportion of all true classifications over all testing examples), the precision of classification (i.e., the proportion of true positive classes over all examples classified as positive), and the recall of classification (i.e., the effectiveness of the classifier in identifying positive classes).

Threshold    Single user (%)   AND rule (%)   OR rule (%)   MRC rule (%)   SLS rule (%)   EGC rule (%)
Accuracy     96.1              98.3           98.1          97.6           98.9           98.0
Precision    77.7              100            53.7          89.4           74.4           90.1
Recall       100               97.8           100           100            100            100

Table 1. The accuracy, precision, and recall of the SVM classifier.

Figure 3. ROC curves showing the performance of the SVM classifier in predicting the decisions for 1000 new frames after training it over a set containing 1000 frames, when the single-user, AND, OR, MRC, SLS, and EGC thresholds are used for the training process.
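The training procedure can be sketched as follows, assuming scikit-learn is available. The per-node energy model and the EGC-based labeling below are illustrative stand-ins for the chapter's numerically computed fusion-rule thresholds, and the frame generator is a hypothetical simplification.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, precision_score, recall_score

rng = np.random.default_rng(1)
K, N, M = 7, 1000, 1000                      # nodes, detection samples, training frames
snr = 10 ** (np.array([-24.3, -21.8, -20.6, -21.6, -20.4, -22.2, -21.3]) / 10)

def frames(m):
    """Simulated frames: random PU state; features are per-node energies."""
    pu = rng.integers(0, 2, size=m).astype(bool)
    energy = (rng.normal(size=(m, K, N)) ** 2).sum(axis=2)   # noise energy
    energy += pu[:, None] * N * snr[None, :]                 # mean signal energy
    return energy, pu

X_train, y_train = frames(M)
# Label the training frames with an EGC-style threshold rather than the true PU
# state; in the chapter, the labels come from the fusion-rule decisions.
egc = X_train.sum(axis=1)
labels = egc >= np.quantile(egc[~y_train], 0.9)              # Pf = 0.1 on H0 frames

clf = SVC(kernel="linear").fit(X_train, labels)
X_test, y_test = frames(1000)
pred = clf.predict(X_test)
print("accuracy :", accuracy_score(y_test, pred))
print("precision:", precision_score(y_test, pred))
print("recall   :", recall_score(y_test, pred))
```

Repeating the labeling step with each fusion rule's threshold and re-training gives one ROC point per rule, as in Figure 3.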



Machine Learning - Advanced Techniques and Emerging Applications


Figure 4 shows ROC curves comparing four machine-learning classifiers: K-nearest neighbor (KNN), support vector machine (SVM), naive Bayes, and decision tree, when used to classify 1000 frames after training them over a set containing 1000 frames with the single-user threshold (the same system used to generate the simulation of Figure 3 is considered for computing the single-user threshold). We can notice from both Figure 4 and Table 2 that the KNN and decision tree classifiers perform better than the naive Bayes and SVM classifiers in terms of the accuracy of classifying the new frames.

Figure 4. ROC curves comparing four machine-learning classifiers (KNN, SVM, naive Bayes, and decision tree) in classifying 1000 frames after training them over a set with 1000 frames using the single-user scheme threshold.
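The four-way comparison can be reproduced in outline with scikit-learn. Here `make_classification` is a stand-in for the energy-vector frames labeled with the single-user threshold, and the hyperparameters are illustrative defaults, not the chapter's settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Stand-in data: 7 energy-like features per frame, binary PU-present label.
X, y = make_classification(n_samples=2000, n_features=7, n_informative=7,
                           n_redundant=0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=1000, random_state=0)

classifiers = {
    "KNN":           KNeighborsClassifier(n_neighbors=5),
    "SVM":           SVC(kernel="linear"),
    "naive Bayes":   GaussianNB(),
    "decision tree": DecisionTreeClassifier(max_depth=5, random_state=0),
}
for name, clf in classifiers.items():
    acc = accuracy_score(y_te, clf.fit(X_tr, y_tr).predict(X_te))
    print(f"{name:13s} accuracy = {acc:.3f}")
```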

The prediction approach consists of three models: (1) the PU channel state detection model (the energy detection model described in Section 2.1 can also be considered here); (2) the model that generates a time series to capture the PU channel state based on the detection sequence; and (3) the model for predicting the generated time series used to capture the PU channel state, based on the hidden Markov model (HMM) and the Markov switching model (MSM). The block diagram in Figure 5 illustrates these three models.

Figure 5. Block diagram of the PU channel state prediction model.

3.1. Time series generation model

The PU channel state detection model can be written using Eq. (4); given a probability of false alarm P_f, the detection threshold for the single-user energy detector can be written as:

λ = σ_u² ( Q⁻¹(P_f) √(2/N) + 1 )                                  (24)

where Q⁻¹(·) is the inverse of the Q(·) function. The decision of the sensing process (i.e., the PU detection sequence) over time can then be written as:

D_t = 0 ("PU absent")   if Y_t < λ
D_t = 1 ("PU present")  if Y_t ≥ λ,        1 ≤ t ≤ T              (25)

Given the PU channel state detection sequence over time (i.e., PU absent, PU present), if we denote the period during which the PU is inactive as the "idle state" and the period during which the PU is active as the "occupied state," our goal now is to predict when the detection sequence D_t will change from one state to another (i.e., from "idle" to "occupied" or vice versa) before that happens, so that the secondary user can avoid interfering with the primary user's transmission. For this reason, we generate a time series z_t that maps each state of the detection sequence D_t (i.e., "PU present" and "PU absent") into another observation space, using a different random variable distribution for each state (i.e., z_t ∈ {v_1, v_2, …, v_L} represents the PU-absent or idle state, and z_t ∈ {v_{L+1}, …, v_M} represents the PU-present or occupied state). The time series z_t can be written as:

z_t ∈ {v_1, v_2, …, v_L}   if Y_t < λ
z_t ∈ {v_{L+1}, …, v_M}    if Y_t ≥ λ,     1 ≤ t ≤ T              (26)

Now, suppose that we are given observation values O = {O_1, O_2, …, O_T}, O_t ∈ {v_1, v_2, …, v_M}, and a PU channel state at each time step t, X_t ∈ {s_i}, i = 1, 2, …, K, with s_i ∈ {0, 1} (i.e., 0 for the idle state and 1 for the occupied state).
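The threshold in Eq. (24), the decision sequence D_t, and the observation mapping z_t can be sketched directly, assuming NumPy/SciPy; Q⁻¹ is computed as the standard normal inverse survival function, and the alphabet sizes L and M are illustrative choices, not values from the chapter.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
N, sigma_u2, pf = 1000, 1.0, 0.1

# Eq. (24): single-user energy-detection threshold for a target Pf,
# with Q^{-1}(pf) = norm.isf(pf).
lam = sigma_u2 * (norm.isf(pf) * np.sqrt(2 / N) + 1)

# Eq. (25): hard decisions D_t from the normalized test statistic Y_t.
# For illustration, only H0 (noise-only) windows are generated here.
T = 200
Y = (rng.normal(size=(T, N)) ** 2).mean(axis=1)
D = (Y >= lam).astype(int)

# Eq. (26): map each decision into one of two disjoint symbol alphabets,
# v_1..v_L for "idle" and v_{L+1}..v_M for "occupied" (L, M illustrative).
L, M = 4, 8
z = np.where(D == 0,
             rng.integers(1, L + 1, size=T),
             rng.integers(L + 1, M + 1, size=T))
print("threshold λ:", lam)
print("empirical Pf:", D.mean())   # should land near the target 0.1
```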

Table 3 shows the accuracy, precision, and recall for the decision tree classifier when used to classify 3000 frames after training it over a set containing 1000 frames, for the same cognitive system used to generate Figure 3. The single-user threshold is used for training the classifier, and the simulation was run with different numbers of samples for the energy detection process. It is clear from the table that the decision tree can classify all 3000 frames correctly, that is, achieve a 100% detection rate, using only 200 samples for the energy detection process. Since the sensing time is proportional to the number of samples taken by the energy detector, using fewer samples for energy detection leads to a shorter sensing time. Thus, when we use machine-learning-based fusion such as the decision tree or KNN, we can reduce the sensing time from 200 to 40 μs for a 5 MHz bandwidth channel, as an example, while still achieving a 100% detection rate for the spectrum hole.
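The sensing-time figures can be checked with a quick calculation: assuming samples are taken at the channel bandwidth rate (5 MHz here), the sensing time is simply N divided by the sampling rate.

```python
bandwidth_hz = 5e6                  # 5 MHz channel; sampling at the bandwidth rate
for n in (1000, 200):
    sensing_time_us = n / bandwidth_hz * 1e6
    print(f"N = {n:4d} samples -> sensing time = {sensing_time_us:.0f} us")
    # -> N = 1000: 200 us, N = 200: 40 us
```

This matches the reduction from 200 to 40 μs quoted above when the energy detector drops from 1000 to 200 samples.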


Table 2. The accuracy, precision, and recall of the KNN, SVM, NB, and DT classifiers used in classifying 1000 new frames after being trained with 1000 frames.


Table 3. The accuracy, precision, and recall for the decision tree classifier used in classifying 3000 frames for different numbers of samples.
