**5. Experiments**

The learning behavior components shown in **Tables 1**–**4** differ in scale and sparsity. Their densities are summarized in **Table 5**. To compare and test the algorithms, the traditional Eclat algorithm and the Eclat algorithm based on descending support (DES-Eclat) are selected as baselines for the experiments.

*Improved Probabilistic Frequent Itemset Analysis Strategy of Learning Behaviors Based on… DOI: http://dx.doi.org/10.5772/intechopen.97219*


**Table 5.** *Density of data sets.*
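The chapter does not spell out how the densities in **Table 5** are computed; a common definition for transaction data is the average transaction length divided by the number of distinct items. A minimal sketch under that assumption (the toy data below is illustrative, not the chapter's datasets):

```python
# Sketch: one common definition of transaction-dataset density,
# density = average transaction length / number of distinct items.
# A density near 1 means most items appear in most transactions (dense);
# a density near 0 means transactions touch few items (sparse).

def dataset_density(transactions):
    """Return density in [0, 1] for a list of transactions (sets of items)."""
    items = set().union(*transactions)          # all distinct items
    total = sum(len(t) for t in transactions)   # sum of transaction lengths
    return total / (len(transactions) * len(items))

toy = [{"a", "b"}, {"a", "c"}, {"a", "b", "c"}]
print(round(dataset_density(toy), 3))  # 7 item occurrences / (3 rows * 3 items) -> 0.778
```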

#### **5.1 Performance Indicators**

Based on the Eclat framework, the traditional Eclat algorithm, the DES-Eclat algorithm, and the LB-Eclat algorithm are implemented in Python 3.7 and run under the same experimental configuration. Throughout the experiments, we set different *min_RST* values to mine frequent itemsets and record the indicators generated over the whole process, mainly the running time of the algorithm, the memory occupied, and the number of probabilistic frequent itemsets.
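The measurement loop described above can be sketched as follows. The function name `mine` and its signature are assumptions standing in for any of the three algorithms; the chapter does not give its harness code.

```python
import time
import tracemalloc

# Sketch of the measurement harness: for each candidate min_RST threshold,
# record wall-clock time, peak memory, and the number of itemsets returned.
# `mine(transactions, min_rst)` is a placeholder for Eclat / DES-Eclat /
# LB-Eclat; the real implementations are not shown in the chapter.

def benchmark(mine, transactions, thresholds):
    results = []
    for min_rst in thresholds:
        tracemalloc.start()
        start = time.perf_counter()
        itemsets = mine(transactions, min_rst)
        elapsed = time.perf_counter() - start
        _, peak = tracemalloc.get_traced_memory()  # peak bytes since start()
        tracemalloc.stop()
        results.append({"min_RST": min_rst,
                        "time_s": elapsed,
                        "peak_bytes": peak,
                        "n_itemsets": len(itemsets)})
    return results
```

Plotting `time_s` and `peak_bytes` against `min_RST` for each algorithm yields curves of the kind compared in the figures below.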

The tests of each indicator are divided into three series according to the sparsity of the data sets. The running-time comparisons are shown in **Figures 2**–**4**. The larger the *min_RST*, the lower the time curves in each subgraph. **Figure 2** shows that, on the sparse density datasets at the same *min_RST* values, the traditional Eclat algorithm has the advantage: the special sorting of the data in DES-Eclat and LB-Eclat increases the time complexity, and the extra analysis adds to the running time. In **Figures 3** and **4**, the execution time of the LB-Eclat algorithm is the lowest, which

**Figure 2.** *Comparison of running time of three algorithms on sparse density datasets.*

**Figure 3.** *Comparison of running time of three algorithms on moderate density datasets.*

**Figure 4.** *Comparison of running time of three algorithms on dense density datasets.*

indicates that the improved algorithm is better suited to analyzing data sets with higher density and is more effective for mining and processing frequent itemsets of learning behaviors. It can also be seen from the time results that the DES-Eclat algorithm, based on the descending-support strategy, has a long running time.
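The timing behavior discussed above hinges on how Eclat intersects vertical tidsets and on the extra item-reordering step that DES-Eclat and LB-Eclat add. As a rough illustration (not the chapter's implementation), a minimal Eclat with a descending-support ordering can be sketched as:

```python
# Minimal Eclat sketch on deterministic data, with a DES-Eclat-style
# descending-support reordering. The sort is the preprocessing overhead
# noted in the text; on dense data it tends to shrink the search tree,
# on sparse data it can cost more than it saves.

def eclat(transactions, min_support):
    # Vertical layout: item -> set of transaction ids (tidset).
    tidsets = {}
    for tid, t in enumerate(transactions):
        for item in t:
            tidsets.setdefault(item, set()).add(tid)
    # Reorder items by descending support (the extra sorting step).
    items = sorted(tidsets, key=lambda i: len(tidsets[i]), reverse=True)
    frequent = {}

    def extend(prefix, prefix_tids, candidates):
        for i, item in enumerate(candidates):
            tids = prefix_tids & tidsets[item]   # tidset intersection
            if len(tids) >= min_support:
                itemset = prefix + (item,)
                frequent[itemset] = len(tids)
                extend(itemset, tids, candidates[i + 1:])

    extend((), set(range(len(transactions))), items)
    return frequent

toy = [{"a", "b"}, {"a", "c"}, {"a", "b", "c"}]
print(eclat(toy, 2))  # five frequent itemsets; e.g. support of ("a",) is 3
```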


**Figure 5.** *Comparison of memory space of three algorithms on sparse density datasets.*

The memory-space comparisons of the three algorithms are shown in **Figures 5**–**7**. Regardless of the dataset density, the three algorithms show similar memory-space distributions with the same trend of change; the LB-Eclat algorithm occupies slightly less memory than the others, and the denser the data set,

**Figure 6.** *Comparison of memory space of three algorithms on moderate density datasets.*

the smaller its space complexity compared with the traditional Eclat and DES-Eclat algorithms, which improves memory utilization.

The comparison results for the probabilistic frequent itemsets mined by the algorithms are shown in **Figures 8**–**10**. Under different *min_RST* values, the number of

**Figure 7.** *Comparison of memory space of three algorithms on dense density datasets.*

**Figure 8.** *Comparison of probabilistic frequent Itemsets of three algorithms on sparse density datasets.*


probabilistic frequent itemsets depends on the learning behavior items and the density of the transactions. Although the running time and memory usage of the three algorithms differ on the same dataset, the number of probabilistic frequent

**Figure 9.** *Comparison of probabilistic frequent Itemsets of three algorithms on moderate density datasets.*

**Figure 10.** *Comparison of probabilistic frequent Itemsets of three algorithms on dense density datasets.*

itemsets obtained is basically the same. As *min_RST* increases, the number of probabilistic frequent itemsets decreases; conversely, a smaller *min_RST* yields more itemsets, so more transactions and items must be analyzed and calculated, which inevitably increases the time complexity and space complexity.
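The chapter does not restate the probabilistic frequency criterion here, but a common formulation for uncertain data uses expected support: each transaction assigns items existential probabilities, and an itemset's expected support is the sum over transactions of the product of its items' probabilities. A sketch under that assumption (how *min_RST* thresholds expected support here is our reading, not necessarily the chapter's exact definition):

```python
# Sketch: expected-support test for uncertain transactions. Each
# transaction is a dict mapping item -> existential probability.
# Items are assumed independent within a transaction (an assumption).

def expected_support(itemset, utransactions):
    total = 0.0
    for t in utransactions:
        p = 1.0
        for item in itemset:
            p *= t.get(item, 0.0)   # missing item contributes probability 0
        total += p
    return total

def is_prob_frequent(itemset, utransactions, min_rst):
    return expected_support(itemset, utransactions) >= min_rst

udata = [{"a": 0.9, "b": 0.5}, {"a": 0.8}, {"a": 0.6, "b": 0.7}]
print(expected_support(("a", "b"), udata))  # 0.9*0.5 + 0 + 0.6*0.7, about 0.87
```

This also shows why a smaller threshold is more expensive: more candidate itemsets pass the test, so more tidset intersections and probability products must be evaluated.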

The experimental results show that the LB-Eclat algorithm is effective for mining probabilistic frequent itemsets from uncertain learning behavior data. In terms of running time and memory space, LB-Eclat outperforms the other two approximate algorithms in mining and analyzing probabilistic frequent itemsets on sparse, moderate, and dense density data sets. Since the 11 learning behavior data sets all come from real learning processes and the comparative tests are complete, the indicators show that the LB-Eclat algorithm is robust and realistic.
