**3.2 Perfection prediction system**

After training the autoencoder, it serves as the feature extractor, as shown in **Figure 2**. In this step, we apply transfer learning to reuse the well-learned filters for facial-part feature extraction. We try two lengths for the embedding vector, 32 and 64, as shown in **Table 1**, and we compare freezing the extractor against retraining it.
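The freeze-versus-retrain comparison can be sketched as follows. This is a minimal, hypothetical illustration using a dense encoder and plain NumPy rather than the paper's convolutional architecture; all names (`W_enc`, `embed_dim`, `freeze_extractor`) are assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
input_dim, embed_dim = 256, 32      # embedding length 32 or 64 (Table 1)

# "Pretrained" encoder weights, standing in for the autoencoder's filters.
W_enc = rng.standard_normal((input_dim, embed_dim)) * 0.01

def encode(x, W):
    return np.maximum(0.0, x @ W)   # ReLU embedding

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Classification head stacked on top of the transferred encoder.
w_head = np.zeros(embed_dim)
b_head = 0.0

freeze_extractor = True  # Table 1 also compares retraining the extractor

x = rng.standard_normal(input_dim)
y = 1.0                  # post-surgery face -> label 1 (assumed perfect)

# One gradient step of binary cross-entropy on the head only.
h = encode(x, W_enc)
p = sigmoid(h @ w_head + b_head)
grad = p - y             # dL/dlogit for BCE with a sigmoid output
w_head -= 0.1 * grad * h
b_head -= 0.1 * grad

if not freeze_extractor:
    # When retraining, the same gradient would also flow back into
    # W_enc through the ReLU mask (omitted here for brevity).
    pass

p_new = float(sigmoid(encode(x, W_enc) @ w_head + b_head))
```

Freezing keeps the autoencoder's filters fixed and trains only the head, while retraining lets the gradient update the extractor as well; the single step above moves the predicted probability toward the label.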

This model predicts the probability of perfection: its output is a single scalar in the interval [0, 1]. The probability of perfection is defined as follows:


However, in a real situation, we cannot obtain a large dataset of perfect/non-perfect faces. We therefore assume that the outcome of surgery is perfect (value 1) and the original face is not perfect (value 0).
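Under that assumption, labels can be derived directly from before/after image pairs. The sketch below is hypothetical (the pairing function and file names are illustrative, not from the paper):

```python
def build_labels(pairs):
    """pairs: list of (original_image, postop_image) tuples.

    Each pre-surgery original is labeled 0 (not perfect) and each
    post-surgery outcome is labeled 1 (assumed perfect).
    """
    samples, labels = [], []
    for original, postop in pairs:
        samples.append(original)
        labels.append(0)          # original face -> not perfect
        samples.append(postop)
        labels.append(1)          # surgical outcome -> perfect
    return samples, labels

imgs, ys = build_labels([("orig_01.png", "post_01.png"),
                         ("orig_02.png", "post_02.png")])
print(ys)  # -> [0, 1, 0, 1]
```

This yields a balanced binary dataset by construction: every subject contributes exactly one positive and one negative sample.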

#### **Table 1.**

*Autoencoder model and classification model.*

*(2.2) Upsampling with transposed convolution.*
