#### **5.4 Incremental PCA**

The PCA variants described above require the entire training set to fit in memory. Incremental PCA can be used when the dataset is too large for that: it splits the training set into mini-batches, each small enough to fit in memory, and feeds them to the algorithm one at a time.
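
A minimal sketch of this workflow using scikit-learn's `IncrementalPCA`; the digits dataset, the batch count of 10, and `n_components=32` are illustrative choices, not prescribed by the text:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import IncrementalPCA

X, _ = load_digits(return_X_y=True)  # 1797 samples, 64 features

# Split the data into 10 mini-batches and feed each one to partial_fit,
# so the full dataset never has to be processed in memory at once.
inc_pca = IncrementalPCA(n_components=32)
for batch in np.array_split(X, 10):
    inc_pca.partial_fit(batch)

X_reduced = inc_pca.transform(X)
print(X_reduced.shape)  # (1797, 32)
```

In practice the mini-batches would be streamed from disk (e.g. with `numpy.memmap`) rather than split from an in-memory array as done here for brevity.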

#### **5.5 Kernel PCA**

PCA is a linear technique: it works well when the data is linearly separable, but gives poor results when the dataset contains non-linear relationships. Kernel PCA uses the "kernel trick" to implicitly project linearly inseparable data into a higher-dimensional space where it may become linearly separable, and then performs linear PCA in that space. Commonly used kernels include linear, polynomial, RBF, and sigmoid.
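
A brief sketch with scikit-learn's `KernelPCA` and an RBF kernel; the two-moons dataset and `gamma=15` are illustrative assumptions chosen to show a non-linearly-separable case:

```python
from sklearn.datasets import make_moons
from sklearn.decomposition import KernelPCA

# Two interleaving half-moons: not linearly separable in the original space.
X, y = make_moons(n_samples=200, noise=0.05, random_state=42)

# RBF kernel PCA implicitly maps the data into a high-dimensional feature
# space and performs linear PCA there; gamma controls the kernel width.
rbf_pca = KernelPCA(n_components=2, kernel="rbf", gamma=15)
X_reduced = rbf_pca.fit_transform(X)
print(X_reduced.shape)  # (200, 2)
```

Swapping `kernel="rbf"` for `"poly"` or `"sigmoid"` selects the other kernels mentioned above; since kernel PCA is unsupervised, kernel and `gamma` are typically tuned via downstream task performance or reconstruction error.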
