**2. Concept of PCA**

PCA attempts to project high-dimensional data onto the smallest feasible dimensional space. PCA takes into account the variance of each attribute, because a high variance indicates good class separation, and uses this to reduce dimensionality. Image processing, movie recommendation systems, and optimizing power allocation across numerous communication channels are some of PCA's real-world use cases. Because it is a feature extraction approach, it preserves the essential variables while rejecting the less important ones [2].

The mathematical concepts employed in PCA are described below.


#### **2.1 Common terms used in PCA**

The standard terms widely used in PCA are discussed below:



Variance: Variance measures how far the values of an attribute are spread around their mean. The following is the formula for calculating Var(*x*):

$$\text{Var}(\mathbf{x}) = \frac{\sum \left( \mathbf{X}_i - \overline{\mathbf{X}} \right)^2}{N} \tag{1}$$

Covariance: Covariance determines the degree to which corresponding components from two sets of grouped data move in the same direction. In simpler terms, it is used to uncover relationships and correlations between dataset attributes. The following is the formula for calculating Cov (*x*, *y*):

$$\text{Cov}\left(\mathbf{x}, \mathbf{y}\right) = \frac{\sum \left(\mathbf{X}_i - \overline{\mathbf{X}}\right)\left(\mathbf{Y}_i - \overline{\mathbf{Y}}\right)}{N} \tag{2}$$
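As a minimal illustration of Eqs. (1) and (2), the following Python sketch computes the population variance and covariance of two small attribute vectors; NumPy and the sample values are assumptions introduced here for illustration, not part of the original text.

```python
import numpy as np

# Hypothetical attribute vectors (illustrative values only)
x = np.array([2.0, 4.0, 6.0, 8.0])
y = np.array([1.0, 3.0, 2.0, 5.0])

N = len(x)
x_mean, y_mean = x.mean(), y.mean()

# Eq. (1): variance of x around its mean
var_x = np.sum((x - x_mean) ** 2) / N

# Eq. (2): covariance of x and y
cov_xy = np.sum((x - x_mean) * (y - y_mean)) / N

print(f"Var(x)    = {var_x:.3f}")   # 5.000
print(f"Cov(x, y) = {cov_xy:.3f}")  # 2.750
```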

Covariance matrix: The covariance matrix is a square matrix that holds the covariance between every pair of attributes in the dataset, and therefore summarizes how the variables are related to one another.

Principal components: The new set of data variables created from the original dataset is referred to as PCs. The newly created variables are uncorrelated with one another, and they retain the essential information carried by the original variables. A brief sketch tying these terms together is given below.
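To show how the covariance matrix and the principal components fit together, the sketch below builds the covariance matrix of a small dataset and derives the PCs as its eigenvectors. The dataset values and the use of NumPy are assumptions made for illustration; this is not the chapter's own implementation.

```python
import numpy as np

# Hypothetical dataset: rows are observations, columns are attributes
X = np.array([[2.5, 2.4],
              [0.5, 0.7],
              [2.2, 2.9],
              [1.9, 2.2],
              [3.1, 3.0]])

# Center each attribute around its mean
X_centered = X - X.mean(axis=0)

# Covariance matrix: pairwise covariances of all attributes
# (bias=True divides by N, matching Eq. (2))
cov_matrix = np.cov(X_centered, rowvar=False, bias=True)

# Principal components are the eigenvectors of the covariance matrix;
# the eigenvalues give the variance captured along each component
eigenvalues, eigenvectors = np.linalg.eigh(cov_matrix)

# Sort components from highest to lowest captured variance
order = np.argsort(eigenvalues)[::-1]
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

# Project the centered data onto the new, uncorrelated variables (the PCs)
principal_components = X_centered @ eigenvectors

print("Covariance matrix:\n", cov_matrix)
print("Variance captured by each PC:", eigenvalues)
```

The first column of `principal_components` captures the largest share of the data's variance, which is the property PCA exploits when discarding the less important dimensions.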
