**5. Evaluation of results**

The datasets used in the feature-specific sentiment classification and knowledge-based product recommendations were collections of electronic device reviews from Amazon. The electronic devices were the iPhone 6s Plus, Oppo F1 Plus and Samsung Galaxy J7 Prime smartphones, named P1, P2 and P3, respectively. The reviews were selected such that each review mentions the product features. **Table 1** presents the details of the datasets used for this experiment.

The review preprocessing was carried out by eliminating stop words and non-English words. Negation words appearing next to an adjective in a review sentence were handled with care: for such sentences, the sentiment orientation of the word was determined by flipping the actual sentiment. The product features and opinions extracted from the considered mobile phone reviews using the NLP-based language model and the LDA-based language model are collected. The PROO ontology is engineered and annotated with the collected product features and opinions. Only one product type is needed for the rule-based sentiment analysis, as the PROO ontology is developed for the class of mobile phones from different manufacturers.
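The negation handling described above can be sketched as follows. The word lists and polarity scores are illustrative assumptions, not the chapter's actual lexicon:

```python
# Minimal sketch of negation-aware sentiment flipping (illustrative
# lexicon and scores; the chapter's actual resources are not shown).
NEGATIONS = {"not", "never", "no", "hardly"}
LEXICON = {"good": 1, "great": 1, "bad": -1, "poor": -1}

def sentence_sentiment(tokens):
    """Sum word sentiments, flipping a word's orientation when a
    negation word appears immediately before it."""
    score = 0
    for i, word in enumerate(tokens):
        if word in LEXICON:
            polarity = LEXICON[word]
            if i > 0 and tokens[i - 1] in NEGATIONS:
                polarity = -polarity  # flip the actual sentiment
            score += polarity
    return score

print(sentence_sentiment("the camera is not good".split()))   # prints -1
print(sentence_sentiment("the battery life is great".split()))  # prints 1
```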

ILP rules are also extracted from the PROO ontology. The rule antecedent is learned by forming a conjunction of PROO ontology classes and the relevant properties that relate these classes. The class instances and the property values are reasoned over to extract the target sentiment class instance, which is the rule consequent. The generated rules cover the positive instances of the product feature. The assessment of the generated rules is carried out with the area under the receiver operating characteristic curve (AUC).
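A rule of this shape, a conjunction of class and property atoms evaluated only over named individuals, can be sketched as plain data. The class, property, and instance names below are invented for illustration; the actual vocabulary lives in the PROO ontology:

```python
# Hypothetical mini knowledge base standing in for PROO assertions.
# Rule sketch: Feature(f) ∧ hasOpinion(f, o) ∧ Positive(o) → GoodSentiment(f)
# DL-safety: the rule is evaluated only over explicitly named instances.
classes = {
    "Feature": {"battery", "camera"},
    "Positive": {"op1"},
}
has_opinion = {("battery", "op1"), ("camera", "op2")}

def good_sentiment_features():
    """Return features the rule consequent GoodSentiment applies to."""
    return {
        f for f in classes["Feature"]
        for (subj, op) in has_opinion
        if subj == f and op in classes["Positive"]
    }

print(good_sentiment_features())  # {'battery'}
```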

The AUC is a measure of how well the reviews are separated into the two sentiment groups (good/bad) available in the dataset. The parameters of the receiver operating characteristic (ROC) curve are the target class label and the ranking attribute. The target instance considered is 'good' for the sentiment class, and the ranking attribute is the opinion strength. A ROC area coverage of 86.7% is obtained. The k-common features identified after the customer searched for the iPhone 6s Plus are tabulated in **Table 2**. The value of k found is 17. The similar products are the Oppo F1 Plus and the Samsung Galaxy J7 Prime.
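AUC with a ranking attribute can be computed as the probability that a randomly chosen 'good' instance is ranked above a randomly chosen 'bad' one by opinion strength. The toy labels and strengths below are illustrative, not the chapter's data:

```python
def auc(labels, scores, positive="good"):
    """ROC AUC: fraction of (positive, negative) pairs in which the
    positive instance has the higher ranking score (ties count 0.5)."""
    pos = [s for l, s in zip(labels, scores) if l == positive]
    neg = [s for l, s in zip(labels, scores) if l != positive]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative data: sentiment class labels and opinion strengths.
labels = ["good", "good", "bad", "good", "bad"]
strengths = [0.9, 0.7, 0.6, 0.4, 0.2]
print(auc(labels, strengths))
```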


**Table 2.** List of k-common features.

The PROO ontology is at the description logic expressivity level *ALCIN(D)*, i.e., attribute logic with complement, role inverse, unqualified number restriction and datatypes. This ontology is robustly scalable, and the rules learned from it are computationally solvable in polynomial running time (PTIME). The target sentiment, learned as the rule consequent over the object properties of the PROO ontology, is decidable as the rules are deducible in PTIME. The learned rules are also DL-safe, as these rules are restricted to known instances of the ontology.

Some issues were encountered during the development of the PROO ontology. The development was based on design decisions taken at two stages: the decisions made before the ontology development, and the decisions made at the time of ontology development.

The first design decision, made before the development of the ontology, concerned the scope of the ontology so that it represents the appropriate knowledge for conceptualization. In the product reviews domain, the PROO ontology was intended to support new customers in retrieving object information from a large number of reviews by reasoning over the object property ontology path. The second design decision was to adhere to the development of a formal ontology, so that the ontology can be reasoned over to draw meaningful conclusions. The PROO ontology was developed using formal Web Ontology Language (OWL) constructs. The third design decision was whether or not to annotate the product features and opinions extracted from the reviews as instances of the concepts of the ontology.

The design decision taken during the development of the ontology was to choose the required superclass–subclass taxonomies. The taxonomies created in the development of the PROO ontology were the hierarchy of the product features and the PoS word class tags. For some queries on the PROO ontology, it was observed that the retrieved information was incorrect: using the same instance in the analysis of different product reviews led to this problem.



194 Machine Learning - Advanced Techniques and Emerging Applications





The algorithm calculates sentiments for all three cellular products on the 17 features. The algorithm then obtains all the taxonomical and non-taxonomical constraints for learning feature sentiments from the ontology in the form of rules. In this work, the height of the PROO ontology is 3, and the depth of a child feature node in the ontology tree for taxonomical sentiments is 2. In order to evaluate the sentiments of the k-common features for recommending products, two similarity metrics, namely cosine similarity [18] and Better [19], are considered.
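Cosine similarity between two products' sentiment vectors over the k-common features is the standard dot-product form, cos(u, v) = (u · v) / (|u| |v|). A minimal sketch, with sentiment values invented for illustration:

```python
import math

def cosine(u, v):
    """cos(u, v) = (u . v) / (|u| |v|) over k-common feature sentiments."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Illustrative sentiment vectors over the same k features.
searched = [0.8, 0.6, 0.9]
candidate = [0.7, 0.5, 0.95]
print(cosine(searched, candidate))
```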

A small number of k-common features restricts the ability to compare the products during retrieval. This leads to the so-called 'sparsity problem', which is common in collaborative filtering systems.

An empirical analysis is carried out to understand the impact of small k values on product recommendations. The scatter plot of the percentage of products sharing different k values with the searched product is presented in **Figure 3**.

Sentiment-Based Semantic Rule Learning for Improved Product Recommendations

http://dx.doi.org/10.5772/intechopen.72514

| k | Cosine(P1,P2) without ontology | Cosine(P1,P3) without ontology | Cosine(P1,P2) with ontology | Cosine(P1,P3) with ontology |
|---|---|---|---|---|
| 1 | 1 | 1 | 1 | 1 |
| 2 | 0.86 | 0.89 | 0.86 | 0.89 |
| 3 | 0.69 | 0.94 | 0.69 | 0.94 |

**Table 3.** Cosine similarity values for small k.

| k | Better(P1,P2) without ontology | Cosine(P1,P2) without ontology | Better(P1,P3) without ontology | Cosine(P1,P3) without ontology | Better(P1,P2) with ontology | Cosine(P1,P2) with ontology | Better(P1,P3) with ontology | Cosine(P1,P3) with ontology |
|---|---|---|---|---|---|---|---|---|
| 4 | −0.0275 | 0.87 | −0.0075 | 0.79 | −0.0275 | 0.75 | −0.0075 | 0.95 |
| 8 | −0.0006 | 0.61 | −0.08938 | 0.45 | −0.00063 | 0.33 | −0.08938 | 0.52 |
| 12 | 0.0370 | 0.54 | 0.044583 | 0.51 | 0.037083 | 0.54 | 0.025874 | 0.51 |
| 17 | 0.0997 | 0.29 | 0.058235 | 0.48 | 0.099705 | 0.29 | 0.035866 | 0.49 |

**Table 4.** Better and cosine similarity measures statistics for analyzing similarities between products.

It is observed from **Figure 3** that at a k value of 1, product recommendations are not possible, as all the products have the same similarity value. It is also observed that no single product can be recommended from the available k features alone, as the products compete with respect to the sentiments on these k features. From k = 2 through 17, product recommendations become possible.
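The k = 1 behaviour follows directly from the cosine formula: for one-dimensional vectors of like sign the angle is always zero, so every product ties with the searched product. A quick check (sentiment values are arbitrary):

```python
import math

def cosine(u, v):
    """Standard cosine similarity of two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

# With a single common feature, any two positive sentiment scores
# yield cosine similarity 1, so products cannot be distinguished.
for s in (0.2, 0.5, 0.9):
    print(cosine([s], [0.7]))  # always 1.0
```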

In order to know the product recommendations for small k values, the cosine similarity values for k values 1, 2, and 3 without and with ontology are tabulated in **Table 3**.

An important observation is made from **Table 3**: the cosine similarity values without and with ontology are the same for small values of k. The influence of the taxonomical and non-taxonomical constraints on the product recommendations is reflected only from a k value of 4 onward.

**Figure 3.** Scatter plot for the percentage of products with different k values.



Different values of 'k' provide useful insight into the comparison of products for eventual recommendations. The variations in the number of k-common features on the similar products, using sentiments without ontology and with ontology, are tabulated in **Table 4**.

A higher Better value relative to the searched product indicates that the product is at the top of the recommendation list; likewise, a higher cosine value relative to the searched product places the product at the top of the recommendation list. The sentiments of the k-common features on the three products in the absence of ontology are displayed in **Figure 4**.

The product similarity with the sentiment data on the similar products without the support of ontology is displayed in **Figure 5**.

The sentiments of k-common features on the three products in the presence of ontology are displayed in **Figure 6**.

The product similarity with the sentiment data on the similar products with the support of ontology is displayed in **Figure 7**.

The product recommendations based on the cosine similarity measure, with and without ontology support, for different 'k' values are specified in **Table 5**.

From the results in **Table 5**, it is observed that without the support of the ontology, for the different values of 'k' (4, 8, 12), the cosine similarity returned the similar products as recommendations in the same order (product P2 first in the list, then product P3) using the sentiments on the k-features. The product with the higher cosine value of the two similar products is shown first in the recommendations list.
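The ranking step reduces to sorting the similar products by their cosine value against the searched product. Using the without-ontology cosine values reported in Table 4, the sketch below reproduces the recommendation orders discussed here:

```python
# Cosine values of P2 (Oppo F1 Plus) and P3 (Samsung Galaxy J7 Prime)
# against the searched product P1, without ontology (from Table 4).
cosine_without = {
    4:  {"P2": 0.87, "P3": 0.79},
    8:  {"P2": 0.61, "P3": 0.45},
    12: {"P2": 0.54, "P3": 0.51},
    17: {"P2": 0.29, "P3": 0.48},
}

def recommend(sims):
    """Products ordered by descending cosine similarity to P1."""
    return sorted(sims, key=sims.get, reverse=True)

for k, sims in cosine_without.items():
    print(k, recommend(sims))
# k = 4, 8, 12 give [P2, P3]; at k = 17 the order flips to [P3, P2].
```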



**Figure 4.** Sentiments of k-common features of similar products in the absence of ontology.

For a k value of 17, the order of the product recommendations changes: product P3 has the higher cosine value and P2 the lower cosine value when compared with the searched product.

**Figure 6.** Sentiments of k-common features of similar products in the presence of ontology.

**Figure 7.** Products comparison with the searched product in the presence of ontology.

**Searched product in the E-Commerce site: iPhone 6s Plus**

| k (No. of common product features) | Product recommendations order – without ontology (product1, product2) | Product recommendations order – with ontology (product1, product2) |
|---|---|---|
| 4 | Oppo F1 Plus, Samsung Galaxy J7 | Samsung Galaxy J7, Oppo F1 Plus |
| 8 | Oppo F1 Plus, Samsung Galaxy J7 | Samsung Galaxy J7, Oppo F1 Plus |
| 12 | Oppo F1 Plus, Samsung Galaxy J7 | Oppo F1 Plus, Samsung Galaxy J7 |
| 17 | Samsung Galaxy J7, Oppo F1 Plus | Samsung Galaxy J7, Oppo F1 Plus |

**Table 5.** Product recommendations.

When ontological knowledge is utilized in the product recommendation analysis, the sentiments of the taxonomical features [(battery, battery life) and (camera, camera quality)] are not changed, as the sentiments of the parent features are greater than the sentiments of the child features in the taxonomy. The sentiments of the non-taxonomical features [in this work, the related features are (RAM, mobile performance), (brand, price) and (screen, display)] are improved on the k-common features of the similar products by the recommendation algorithm. It is observed that the order of the product recommendations changes for two k values (4 and 8) after improving the sentiments of the related features. This is because the related sentiments of product P3 improve, so it shows a higher cosine value than product P2. This demonstrates the improvement in the product recommendations.
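One way to sketch this ontology-guided adjustment is to let each feature in a non-taxonomically related pair inherit the stronger sentiment of the pair. The propagation rule and the sentiment values below are illustrative simplifications, not the chapter's exact algorithm:

```python
# Non-taxonomical (related-feature) pairs named in the chapter.
related = [("RAM", "mobile performance"), ("brand", "price"),
           ("screen", "display")]

def improve(sentiments):
    """Raise each related feature's sentiment to the pair maximum
    (illustrative propagation rule; sentiment values are invented)."""
    out = dict(sentiments)
    for a, b in related:
        if a in out and b in out:
            best = max(out[a], out[b])
            out[a] = out[b] = best
    return out

scores = {"RAM": 0.4, "mobile performance": 0.8, "screen": 0.6,
          "display": 0.5, "battery": 0.7}
print(improve(scores))
```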

**Figure 5.** Products comparison with the searched product in the absence of ontology.




