The error gradient for the output layer is computed as follows:

$$e_k(p) = y_{d,k}(p) - y_k(p) \qquad (29)$$

$$\delta_k(p) = y_k(p)\,[1 - y_k(p)]\,e_k(p) \qquad (30)$$

where $y_{d,k}(p)$ and $y_k(p)$ are the desired and predicted outputs, respectively.

The weights between the hidden layer and the output layer are adjusted by

$$\Delta w_{jk}(p) = \alpha\,y_j(p)\,\delta_k(p) = \alpha\,y_j(p)\,e_k(p)\,y_k(p)\,[1 - y_k(p)] \qquad (31)$$

Thus, the error for the output of the output layer can be propagated back to the hidden layer, and the error for the output of the hidden layer is computed as follows:

$$e_j(p) = \sum_{k=1}^{l} \delta_k(p)\,w_{jk}(p) \qquad (32)$$

The error gradient for the hidden layer is formulated below:

$$\delta_j(p) = y_j(p)\,[1 - y_j(p)]\,e_j(p) \qquad (33)$$

The weights between the input layer and the hidden layer are computed by

$$\Delta w_{ij}(p) = \alpha\,y_i(p)\,\delta_j(p) = \alpha\,y_i(p)\,e_j(p)\,y_j(p)\,[1 - y_j(p)] \qquad (34)$$

Thus, the iteration $p$ is increased by 1, and the procedure is repeated until the sum of squared errors is sufficiently small or the number of iterations reaches a given maximum. The following defines the sum of squared errors:

$$E = \frac{1}{2} \sum_{p=1}^{N_T} \sum_{k=1}^{l} \left( y_{d,k}(p) - y_k(p) \right)^2 \qquad (35)$$

where $N_T$ is the number of training samples and $l$ is the number of neurons in the output layer. In our proposed method, such a three-layer perceptron is applied to classifying the type of image composition for an input photo.
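The backpropagation procedure in Eqs. (29)–(35) can be sketched in a few lines; the layer sizes, initialization, and toy training data below are illustrative assumptions rather than the configuration used in the chapter.

```python
import numpy as np

# Minimal sketch of training a three-layer perceptron with Eqs. (29)-(35).
# Layer sizes, toy data, and initialization are illustrative assumptions.
rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_in, n_hid, n_out = 2, 4, 1
alpha = 0.3                                    # learning rate, as in Section 8
w_ij = rng.normal(0.0, 0.5, (n_in, n_hid))     # input -> hidden weights
w_jk = rng.normal(0.0, 0.5, (n_hid, n_out))    # hidden -> output weights

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Yd = np.array([[0], [1], [1], [0]], dtype=float)   # desired outputs y_d,k

def sum_squared_error():
    # Eq. (35): E = 1/2 * sum_p sum_k (y_d,k(p) - y_k(p))^2
    y_k = sigmoid(sigmoid(X @ w_ij) @ w_jk)
    return 0.5 * np.sum((Yd - y_k) ** 2)

E_before = sum_squared_error()
for _ in range(2000):                          # stop at a given maximum
    for y_i, yd in zip(X, Yd):
        y_j = sigmoid(y_i @ w_ij)              # hidden-layer outputs
        y_k = sigmoid(y_j @ w_jk)              # output-layer outputs
        e_k = yd - y_k                         # Eq. (29)
        d_k = y_k * (1.0 - y_k) * e_k          # Eq. (30)
        e_j = w_jk @ d_k                       # Eq. (32)
        d_j = y_j * (1.0 - y_j) * e_j          # Eq. (33)
        w_jk += alpha * np.outer(y_j, d_k)     # Eq. (31)
        w_ij += alpha * np.outer(y_i, d_j)     # Eq. (34)
E_after = sum_squared_error()
```

Weights are updated per sample (stochastic updates), and Eq. (35) is evaluated before and after training to confirm that the sum of squared errors shrinks.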

7. Personal preference

Photo aesthetics is subjective to different groups of people. To deal with this problem, we adopt social networks to collect people's preferences, for instance, the attributes of personal information and the features of a person's favorite pictures. The correlation between the attributes of people and photo features is calculated, and a bias derived from it is used to influence one of the feature values obtained from different people.

8. Experimental results

In this chapter, the performance of two computational aesthetics approaches to the perception of beauty is evaluated; these approaches rely on image composition analysis and low-level features to determine, via different classifiers, whether a photo meets the criterion of professional photography. The parameters for the classifiers are set as follows. For the support vector machine (SVM), the radial basis function (RBF) is chosen as the kernel function, and the cost is set to 1. For the MLP, the number of neurons in the hidden layer is set to half the sum of the numbers of features and classes, which equals 8 for the first experiment and 22 for the second. The learning rate is 0.3, and the number of iterations is 500 during training. For the radial basis function network, the minimum standard deviation is set to 0.1, the clustering seed to 1, the number of clusters to 2, and the ridge to 10<sup>−8</sup>. For the AdaBoost algorithm, the weak classifiers are decision stumps, the number of iterations is set to 10, the seed to 1, and the weight threshold to 100. For the decision tree J48, the confidence factor is set to 0.25. For the random forest, the number of trees is set to 100.
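For readers who want to reproduce a comparable setup, the listed parameters map approximately onto scikit-learn estimators. This is only a sketch: the chapter's experiments use Weka-style options (e.g., the ridge and weight-threshold values) that have no exact scikit-learn counterpart, and the J48 confidence factor has no direct knob here.

```python
# Approximate scikit-learn counterparts of the classifier settings listed
# above; the mapping is illustrative, not the chapter's exact (Weka) setup.
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

n_features, n_classes = 13, 2                  # first experiment (Section 8.1)
hidden = round((n_features + n_classes) / 2)   # half of features + classes = 8

classifiers = {
    "SVM": SVC(kernel="rbf", C=1.0),
    "MLP": MLPClassifier(hidden_layer_sizes=(hidden,),
                         learning_rate_init=0.3, max_iter=500),
    "AdaBoost": AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                                   n_estimators=10),   # decision stumps
    "J48-like": DecisionTreeClassifier(),  # C4.5 analogue, no confidence factor
    "RandomForest": RandomForestClassifier(n_estimators=100),
}
```

Each estimator can then be evaluated with the same 10-fold protocol used in the experiments.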

#### 8.1. Test on low-level features used for perceiving the beauty of photos

In this experiment, we choose multiple low-level features to automatically classify whether a photo is favorable. In total, 15,000 photos are collected, and 13 features are extracted from each of them: color components (red, green, blue, cyan, magenta, and yellow), sharpness, brightness, contrast, saturation, color balance, colorfulness, and simplicity. Each photo is labeled as favorable or unfavorable for training, and testing is performed with 10-fold cross-validation.
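A few of these low-level features can be sketched with common definitions (mean luminance for brightness, luminance standard deviation for contrast, mean HSV saturation, and the Hasler–Süsstrunk colorfulness metric); the chapter's exact formulas may differ.

```python
import numpy as np

# Sketches of a few of the 13 low-level features using common definitions;
# the chapter's exact formulas may differ. Input: rgb in [0, 1], shape (H, W, 3).
def brightness(rgb):
    # mean luminance (ITU-R BT.601 weights)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return float(np.mean(0.299 * r + 0.587 * g + 0.114 * b))

def contrast(rgb):
    # standard deviation of the luminance channel
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return float(np.std(0.299 * r + 0.587 * g + 0.114 * b))

def saturation(rgb):
    # mean HSV saturation: S = 1 - min/max (0 where max == 0)
    mx, mn = rgb.max(axis=-1), rgb.min(axis=-1)
    s = np.where(mx > 0, 1.0 - mn / np.maximum(mx, 1e-12), 0.0)
    return float(np.mean(s))

def colorfulness(rgb):
    # Hasler-Suesstrunk colorfulness metric on opponent channels
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    rg, yb = r - g, 0.5 * (r + g) - b
    return float(np.hypot(np.std(rg), np.std(yb))
                 + 0.3 * np.hypot(np.mean(rg), np.mean(yb)))
```

A mid-gray image, for example, yields brightness 0.5 and zero contrast, saturation, and colorfulness.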

Many actual photos are examined in the photo beauty measurement system, including true positive samples (favorable photos classified as favorable; correct result), false negative samples (favorable photos classified as unfavorable; incorrect result), false positive samples (unfavorable photos classified as favorable; incorrect result), and true negative samples (unfavorable photos classified as unfavorable; correct result). The classification results of some sample photos are shown in Figure 24, where Figures 24(a) and 24(d) are correct results. Figure 24(b) shows photos whose features are salient enough, yet they are classified as amateur and unfavorable. Figure 24(c) shows that human knowledge about photo contents should also be applied; these photos are recognized as favorable in spite of the evident problems in their contents.

Figure 24. Four classification results of some sample photos: (a) true positive; (b) false negative; (c) false positive; (d) true negative.
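From the four outcome counts, the evaluation metrics follow directly; the counts used in the example call are hypothetical.

```python
# The four outcome counts above form a confusion matrix; accuracy (reported
# in Table 1) follows directly. The example counts are hypothetical.
def accuracy(tp, fn, fp, tn):
    # fraction of photos whose favorable/unfavorable label is predicted correctly
    return (tp + tn) / (tp + fn + fp + tn)

def true_positive_rate(tp, fn):
    # fraction of favorable photos classified as favorable (ROC y-axis)
    return tp / (tp + fn)

def false_positive_rate(fp, tn):
    # fraction of unfavorable photos classified as favorable (ROC x-axis)
    return fp / (fp + tn)

acc = accuracy(40, 10, 5, 45)
```

The two rates are exactly the coordinates swept out by a ROC curve, which is what the AUC in Table 1 summarizes.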

Table 1 shows both the accuracy and the area under the ROC curve (AUC) for classifying whether a photo meets the condition of a beautiful photo. Compared to the other classifiers, the MLP and the tree-based ones achieve better performance. The MLP can be used to show the aesthetic score of a photo, while the decision tree J48 is able to generate readable classification rules.

Table 1. The AUC and accuracy of different classifiers.

#### 8.2. Test on image composition analysis for perceiving the beauty of photos

In this experiment, the image composition analysis is tested. Thirty-five features are used for training an MLP: a stacked vector of the 25 salient region values depicted in Section 5.1, plus the angles of the prominent lines described in Section 5.2, which range from −90° to 90° and are quantized so that every 18° forms a bin. The numbers of prominent lines counted in these 10 angle bins act as the remaining features. In consequence, some photo samples are provided to illustrate the performance of the image composition analysis: Figure 25 shows correctly classified samples, whereas Figure 26 shows incorrectly classified ones.
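The 10-bin angle feature described above amounts to a histogram over [−90°, 90°] with 18° bins; the line angles in the example are hypothetical.

```python
import numpy as np

# Count prominent-line angles in 10 bins of 18 degrees spanning [-90, 90],
# as described above; the input angles are hypothetical examples.
def angle_histogram(angles_deg):
    edges = np.arange(-90, 91, 18)          # -90, -72, ..., 90 -> 10 bins
    counts, _ = np.histogram(angles_deg, bins=edges)
    return counts

counts = angle_histogram([-89.0, -45.0, 0.0, 0.5, 30.0, 89.0])
```

Concatenating these 10 counts with the 25 salient region values yields the 35-dimensional input vector for the MLP.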

Figure 25. Correctly classified image compositions: (a) central; (b) rule of thirds; (c) vertical; (d) horizontal; (e) diagonal; (f) perspective.

Figure 26. Incorrectly classified image compositions: (a) central misclassified as horizontal; (b) rule of thirds misclassified as horizontal; (c) vertical misclassified as horizontal; (d) horizontal misclassified as rule of thirds; (e) diagonal misclassified as horizontal; (f) perspective misclassified as rule of thirds.


Table 2. The accuracy of image composition analysis using different classifiers.


Table 3. The AUC of multiple image compositions.

From the incorrectly classified composition samples, we can see that the horizontal and rule-of-thirds compositions are the two most commonly mistaken ones, which may be caused by horizontal lines existing in most photos and by distractors often lying at one-third of a photo. A possible solution is to lower the weights of the horizontal and rule-of-thirds compositions in the output layer of the MLP.
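The suggested remedy can be sketched as a per-class weighting of the MLP's output scores before the final decision; the scores and weight values below are hypothetical.

```python
import numpy as np

# Scale down the output-layer scores of the two over-predicted compositions
# before picking the winner; the scores and weights here are hypothetical.
CLASSES = ["central", "rule of thirds", "vertical", "horizontal",
           "diagonal", "perspective"]
weights = np.ones(len(CLASSES))
weights[CLASSES.index("horizontal")] = 0.8
weights[CLASSES.index("rule of thirds")] = 0.8

def classify(scores):
    # apply the per-class weights, then take the arg-max
    return CLASSES[int(np.argmax(np.asarray(scores) * weights))]

# A diagonal photo whose raw scores slightly favor "horizontal":
label = classify([0.1, 0.1, 0.1, 0.45, 0.40, 0.1])
```

With the weighting, the slight raw-score advantage of "horizontal" (0.45 vs. 0.40) is overturned in favor of "diagonal".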

In Table 2, the accuracy of image composition classification is calculated as the percentage of correctly classified instances among all samples. Each classifier is tested using 10-fold cross-validation. The tree-based and MLP classifiers achieve higher performance than the others.

Table 3 lists the AUC of the six image compositions, where the AUC is the area under the ROC curve. The rule of thirds composition has the lowest AUC because it is often confused with the central composition, while the perspective composition also has a low AUC because it is frequently confused with the diagonal composition.
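The AUC can also be read as the probability that a randomly chosen positive sample receives a higher score than a randomly chosen negative one, which gives a compact way to compute it; the scores and labels below are hypothetical.

```python
# AUC as the fraction of positive/negative pairs ranked correctly
# (ties count half); the scores and labels are hypothetical.
def auc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

value = auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
```

Here three of the four positive/negative pairs are ranked correctly, so the AUC is 0.75; a perfect ranking yields 1.0.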
