**4. Experiments**

#### **4.1 Datasets**

The multilabel datasets used in this paper and their associated statistics are shown in **Table 1**.

#### **4.2 Experimental setup**

In this study, we add probabilistic classifier chains [36], CSMLC [37] and RethinkNet [38] as baselines for comparison. The experimental settings are as follows. First, the features of each multilabel dataset are scaled to [0, 1]; 80% of the samples are used to train the models (both the multilabel learning baselines and the proposed method), and the remaining 20% are held out as the test set. We also add Gaussian noise amounting to 6% to 12% of each test sample to test the robustness of the models. The overall framework is shown in **Figure 2**.
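The preprocessing and split described above can be sketched in plain Python. The chapter does not publish code, so the function names and the interpretation of the noise level (standard deviation as a fraction of each feature value) are illustrative assumptions:

```python
import random

def min_max_scale(X):
    """Scale each feature of X (a list of rows) to [0, 1]."""
    cols = list(zip(*X))
    mins = [min(c) for c in cols]
    spans = [(max(c) - mn) or 1.0 for c, mn in zip(cols, mins)]
    return [[(v - mn) / sp for v, mn, sp in zip(row, mins, spans)]
            for row in X]

def split_80_20(X, Y, seed=0):
    """Shuffle the samples and split them 80% train / 20% test."""
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    cut = int(0.8 * len(idx))
    tr, te = idx[:cut], idx[cut:]
    return ([X[i] for i in tr], [Y[i] for i in tr],
            [X[i] for i in te], [Y[i] for i in te])

def add_gaussian_noise(X, level, seed=0):
    """Perturb each feature with zero-mean Gaussian noise whose standard
    deviation is `level` (e.g. 0.06 to 0.12) times that feature's value."""
    rng = random.Random(seed)
    return [[v + rng.gauss(0.0, level * abs(v)) for v in row] for row in X]
```

In a noise-robustness run, `add_gaussian_noise` would be applied only to the held-out test features, with `level` swept from 0.06 to 0.12 as in Figures 4 to 6.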


*Multilabel datasets are available at http://mulan.sourceforge.net/datasets-mlc.html. The AR Face dataset is available at http://www2.ece.ohio-state.edu/~aleix/ARdatabase.html.*

#### **Table 1.**

*Statistics of the multilabel datasets.*

For the deep learning models, we train for 200 epochs using Adam [39] with a learning rate of 0.01 and mean squared error as the loss function.
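The optimizer setup can be illustrated on a toy problem. The sketch below is not the chapter's model; it applies the standard Adam update rule (with the usual defaults β₁ = 0.9, β₂ = 0.999) to a one-parameter least-squares fit, using the same learning rate of 0.01 and an MSE loss:

```python
import math

def adam_fit(xs, ys, lr=0.01, epochs=200, b1=0.9, b2=0.999, eps=1e-8):
    """Fit y = w * x by minimizing mean squared error with Adam."""
    w, m, v = 0.0, 0.0, 0.0
    for t in range(1, epochs + 1):
        # Gradient of MSE(w) = mean((w*x - y)^2) with respect to w.
        g = sum(2.0 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        m = b1 * m + (1 - b1) * g        # first-moment (mean) estimate
        v = b2 * v + (1 - b2) * g * g    # second-moment (variance) estimate
        m_hat = m / (1 - b1 ** t)        # bias-corrected moments
        v_hat = v / (1 - b2 ** t)
        w -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return w
```

With data drawn from y = 2x, `adam_fit` steadily moves `w` toward 2; the per-step displacement is roughly the learning rate while the gradient direction is consistent, which is why a small rate like 0.01 pairs naturally with many epochs.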

#### **4.3 Evaluation metrics**

In multilabel learning, evaluation metrics must be more rigorous than in traditional single-label learning because a single sample may be associated with multiple labels. These metrics [15] fall into three groups, as shown in **Figure 3**. For the F1 score, precision, mean average precision and recall, higher values indicate better performance; for the Hamming loss, one-error, coverage and ranking loss, lower values indicate better performance. We treat the Hamming loss, one-error and mean average precision as the three major metrics.
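Two of the three major metrics have compact definitions, sketched below in plain Python under the usual conventions (`Y_true` is the binary ground-truth label matrix, `Y_pred` the binary predictions, and `scores` the real-valued label scores; the function names are ours):

```python
def hamming_loss(Y_true, Y_pred):
    """Fraction of label slots predicted incorrectly (lower is better)."""
    n, L = len(Y_true), len(Y_true[0])
    wrong = sum(yt != yp
                for row_t, row_p in zip(Y_true, Y_pred)
                for yt, yp in zip(row_t, row_p))
    return wrong / (n * L)

def one_error(Y_true, scores):
    """Fraction of samples whose top-scored label is not a relevant
    label (lower is better)."""
    errs = 0
    for y, s in zip(Y_true, scores):
        top = max(range(len(s)), key=lambda j: s[j])  # highest-scored label
        errs += (y[top] == 0)
    return errs / len(Y_true)
```

Mean average precision, the third major metric, averages per-label precision over the ranking of each label's scores and is usually taken from a library implementation rather than written by hand.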

#### **4.4 Experimental results**

Each experiment uses a different random split of the data into training and test sets; we repeat training ten times and report the averaged results. From the observations in **Figures 4**–**6**, we reach the following conclusions:

#### **Figure 3.**

*Taxonomy of evaluation metrics.*

*Multilabel Classification Based on Graph Neural Networks DOI: http://dx.doi.org/10.5772/intechopen.99681*

#### **Figure 4.**

*Results of the proposed method compared with multilabel learning algorithms on the used multilabel datasets. (a)–(c) show the results without adding Gaussian noise.*

#### **Figure 5.**

*Results of the proposed method compared with multilabel learning algorithms on the used multilabel datasets. (a)–(c) show the results of adding 6% Gaussian noise.*


#### **Figure 6.**

*Results of the proposed method compared with multilabel learning algorithms on the used multilabel datasets. (a)–(c) show the results of adding 12% Gaussian noise.*

