*Evaluation of Principal Component Analysis Variants to Assess Their Suitability… DOI: http://dx.doi.org/10.5772/intechopen.105418*

**Figure 7** shows standard (normal) PCA applied to the dataset, reducing the 471 feature dimensions to two principal components (PCs). The explained-variance plot also shows that PC1 carries more information than PC2. Normal PCA is the default form of PCA and is suitable for most kinds of data, reducing dimensionality effectively while retaining as much of the variance as possible.
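The reduction described above can be sketched with scikit-learn's `PCA`. This is a minimal illustration, not the chapter's exact pipeline: the 471-feature CICMalDroid_2020 matrix is not reproduced here, so a random stand-in matrix of the same width is used.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical stand-in for the 471-feature CICMalDroid_2020 matrix
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 471))

pca = PCA(n_components=2)       # keep two principal components: PC1 and PC2
X_2d = pca.fit_transform(X)     # shape (200, 2)

# Components are ordered by explained variance, so PC1 >= PC2
print(X_2d.shape)
print(pca.explained_variance_ratio_)
```

The `explained_variance_ratio_` attribute confirms the ordering seen in the figure: PC1 always explains at least as much variance as PC2.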

**Figure 8** shows the normal PCA results as two PCs, PC1 and PC2. All 471 features are projected onto these two components, so no input feature is discarded outright before the transformation. This helps to train a machine learning model effectively and with less memory consumption.

**Figure 9** shows sparse PCA applied to the mobile malware data. Sparse PCA is well suited to sparse data: it reduces the 471 features to two PCs, PC1 and PC2, with each component loading on only a subset of the original features.
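A minimal sketch of this variant with scikit-learn's `SparsePCA`, again on a random stand-in matrix rather than the actual dataset; the `alpha` value is an illustrative choice, not the chapter's setting.

```python
import numpy as np
from sklearn.decomposition import SparsePCA

# Hypothetical stand-in for the 471-feature matrix
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 471))

# alpha controls the L1 penalty: larger alpha -> sparser components
spca = SparsePCA(n_components=2, alpha=1.0, random_state=0)
X_2d = spca.fit_transform(X)    # shape (200, 2)

# Unlike normal PCA, many loadings are driven to exactly zero
n_zero = int(np.sum(spca.components_ == 0))
print(X_2d.shape, n_zero)
```

Inspecting `spca.components_` shows the key difference from normal PCA: zeroed loadings mean each PC is built from only some of the 471 features.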

**Figure 10** shows randomized PCA applied to the mobile malware data. Randomized PCA is suited to big data processing: it uses a randomized approximation of the singular value decomposition to derive the two PCs, PC1 and PC2, more quickly than a full decomposition.
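In scikit-learn this variant is selected through the `svd_solver` parameter of `PCA` rather than a separate class. A minimal sketch on a random stand-in matrix:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical stand-in for the 471-feature matrix
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 471))

# svd_solver='randomized' uses a randomized SVD approximation,
# which scales better to large matrices than the exact solver
rpca = PCA(n_components=2, svd_solver="randomized", random_state=0)
X_2d = rpca.fit_transform(X)    # shape (200, 2)
print(X_2d.shape)
```

The interface and output shape are identical to normal PCA; only the internal solver changes.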

**Figure 11** shows incremental PCA applied to the mobile malware data. Incremental PCA serves a similar large-data purpose as randomized PCA, but it processes the data in mini-batches, updating the decomposition batch by batch to reduce the features to two PCs, PC1 and PC2.
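The batch-by-batch update can be sketched with scikit-learn's `IncrementalPCA` and `partial_fit`; the batch split below is an illustrative choice on a random stand-in matrix.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

# Hypothetical stand-in for the 471-feature matrix
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 471))

ipca = IncrementalPCA(n_components=2)

# Feed the data in mini-batches, as if it did not fit in memory at once
for batch in np.array_split(X, 4):   # four batches of 50 rows each
    ipca.partial_fit(batch)

X_2d = ipca.transform(X)             # shape (200, 2)
print(X_2d.shape)
```

Because only one batch is held in memory at a time, this approach works even when the full feature matrix is too large to decompose in one pass.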

**Figure 12** shows kernel PCA applied to the mobile malware data. Kernel PCA is widely applicable to non-linear data modelling: it reduces the dimensions of the data through a kernel function (for example, an RBF kernel with a gamma parameter) to derive two or more PCs, such as PC1 and PC2.
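A minimal sketch with scikit-learn's `KernelPCA`; the RBF kernel and the `gamma` value are illustrative assumptions, not the chapter's tuned settings, and the data is again a random stand-in matrix.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

# Hypothetical stand-in for the 471-feature matrix
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 471))

# RBF kernel maps the data into a non-linear feature space;
# gamma controls the width of the kernel
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.01)
X_2d = kpca.fit_transform(X)    # shape (200, 2)
print(X_2d.shape)
```

Swapping `kernel` (e.g., `"poly"`, `"sigmoid"`) changes the non-linear mapping while keeping the same two-component output.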

In this case study, the 471 data features of the CICMalDroid\_2020 dataset are transformed into two PCs, PC1 and PC2, by each PCA variant, depending on the suitability of the data values.


**Figure 7.** *Normal PCA.*


**Figure 8.** *Normal PCA with PC1 and PC2.*

