**5. Experiment and validation**

In this section, we present the validation tests of the Bayesian network derived from our model of the learner.

The learners involved in the experiment presented herein are students of the module "Database", in the first year of DUT (Technical university diploma) at the Ecole Normale Superieure of Tétouan at Abdelmalek Essaâdi University.

#### **5.1. UnBBayes software**

UnBBayes [10] is a probabilistic network framework written in Java. It offers both a GUI and an API with inference, sampling, learning, and evaluation, and it supports BN, ID, MSBN, OOBN, HBN, MEBN/PR-OWL, PRM, as well as structure, parameter, and incremental learning.

UnBBayes uses a technique for reasoning with uncertainty in intelligent systems. Through a probabilistic network (a graph whose nodes are random variables representing domain knowledge and whose arcs represent the relationships between them), we can estimate probabilities conditioned on evidence, which assists us in decision making. This calculation is called probabilistic inference. With the addition of tree techniques, inference in probabilistic networks can be performed with high efficiency.

To make this technique easy to use, UnBBayes was created as a visual system that is interactive and platform independent, making it possible to edit and build networks, enter evidence, and perform probabilistic reasoning.

**a.** The Conditional Probability Table of the Node "Learner"

Table 2 represents the CPT of each child node of the parent node "Learner". Each entry is computed as a weighted sum over the states of the parent nodes:

*P*(*X* | *Y*1, *Y*2, ..., *Yn*) = *w*1·*h*1 + *w*2·*h*2 + ... + *wn*·*hn*, where *hi* = 1 if *Yi* = *X* and 0 otherwise,

with given random binary variables *X* and *Yi*, and weights *wi* that sum to 1. Obviously,

*P*(*not X* | *Y*1, *Y*2, ..., *Yn*) = 1 − *P*(*X* | *Y*1, *Y*2, ..., *Yn*).

**Table 2.** The conditional probability table of the "Learner" node (parent concepts A, T, and E, with weights 0.1, 0.4, and 0.5)

| A | T | E | P(J=1) | P(J=0) |
|---|---|---|-----------------------------|--------|
| 1 | 1 | 1 | 1.0 (0.1·1 + 0.4·1 + 0.5·1) | 0.0 |
| 1 | 0 | 1 | 0.6 (0.1·1 + 0.4·0 + 0.5·1) | 0.4 |
| 1 | 1 | 0 | 0.5 (0.1·1 + 0.4·1 + 0.5·0) | 0.5 |
| 1 | 0 | 0 | 0.1 (0.1·1 + 0.4·0 + 0.5·0) | 0.9 |
| 0 | 1 | 1 | 0.9 (0.1·0 + 0.4·1 + 0.5·1) | 0.1 |
| 0 | 0 | 1 | 0.5 (0.1·0 + 0.4·0 + 0.5·1) | 0.5 |
| 0 | 1 | 0 | 0.4 (0.1·0 + 0.4·1 + 0.5·0) | 0.6 |
| 0 | 0 | 0 | 0.0 (0.1·0 + 0.4·0 + 0.5·0) | 1.0 |

Because concepts A, E, and T have no prerequisite knowledge for understanding, their CPTs are specified as prior probabilities obeying a uniform distribution, as stated in Table 3 (assigned the medium value of 0.5 in most cases).

**Table 3.** The conditional probability table of the "Learner" parents

| P(A=1) | P(A=0) | P(T=1) | P(T=0) | P(E=1) | P(E=0) |
|--------|--------|--------|--------|--------|--------|
| 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 |

**b.** The Conditional Probability Table of the Node "Pretest"

Table 4 represents the CPT of each child node of the parent node "Pretest"; the node K (Knowledge) carries the weight 0.8 and the node Sk (Skills) the weight 0.2.

**Table 4.** The conditional probability table of the "Pretest" node

| K | P(J=1) | P(J=0) |
|---|-------------|--------|
| 1 | 0.8 (0.8·1) | 0.2 |
| 0 | 0.0 (0.8·0) | 1.0 |

| Sk | P(J=1) | P(J=0) |
|----|-------------|--------|
| 1 | 0.2 (0.2·1) | 0.8 |
| 0 | 0.0 (0.2·0) | 1.0 |

**c.** The Conditional Probability Table of the Node "Learning Activity"
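To illustrate how the entries of Table 2 are computed, here is a minimal sketch of the weighted-sum rule (our own illustrative Java, not code from UnBBayes), assuming the weights 0.1, 0.4, and 0.5 for the parent concepts A, T, and E read off the table entries:

```java
// Illustrative sketch of the weighted-sum CPT rule used in Table 2.
// Assumed weights (taken from the table entries): A -> 0.1, T -> 0.4, E -> 0.5.
public class WeightedSumCpt {

    // P(J=1 | parents) = sum_i w_i * y_i, where y_i is the binary state of parent i
    static double pJ(double[] w, int[] y) {
        double p = 0.0;
        for (int i = 0; i < w.length; i++) {
            p += w[i] * y[i];
        }
        return p;
    }

    public static void main(String[] args) {
        double[] w = {0.1, 0.4, 0.5}; // weights of A, T, E
        int[][] rows = {
            {1, 1, 1}, {1, 0, 1}, {1, 1, 0}, {1, 0, 0},
            {0, 1, 1}, {0, 0, 1}, {0, 1, 0}, {0, 0, 0}
        };
        // Reproduces the eight rows of Table 2
        for (int[] r : rows) {
            double p1 = pJ(w, r);
            System.out.printf("A=%d T=%d E=%d  P(J=1)=%.1f  P(J=0)=%.1f%n",
                    r[0], r[1], r[2], p1, 1.0 - p1);
        }
    }
}
```

The complement P(J=0) = 1 − P(J=1) follows directly, as in the formula above.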

#### **5.2. Metrics**

In this section, before presenting the results of our tests, we introduce the metrics through which we measure the performance of a learner model built with Bayesian networks. The UnBBayes software allows us to evaluate the performance of each node in our network dynamically and in real time. Here are the metrics we used to evaluate our Bayesian network:


#### **5.3. The combined Bayesian network**

Before presenting the evaluation results for each node of the Bayesian network that models the learner in an adaptive system, we begin by presenting the combined Bayesian network through the UnBBayes software.

Figure 8 provides a map of the combined network, in which the marginal probabilities of each node of our network are displayed. We can observe the change in the marginal probabilities of any node in our network simply by changing one or more marginal values of one or more of its parent nodes.

**Figure 8.** The combined Bayesian network of the learner model

**Figure 9.** The combined Bayesian network of the learner model after entering evidence

If we change the marginal probability of the state "Succeed" of the node "Knowledge" from 40 % to 100 %, and that of the state "Succeed" of the node "Skills" from 10 % to 100 %, we notice in Fig. 9 that the marginal probability of the state "Succeed" of the node "Pretest" changes from its initial value of 50 % to 100 %. We also notice that the parent node of the node "Pretest" (the node "Learner") changes from 50 % to 72.5 %.
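This 100 % value is consistent with the weighted-sum CPT of Table 4: the marginal of "Pretest" is 0.8·P(Knowledge = Succeed) + 0.2·P(Skills = Succeed), which reaches 1.0 once both parents are fixed to "Succeed". A minimal check (our own illustrative Java, not UnBBayes code; the method name is ours):

```java
// Illustrative check: marginal of the node "Pretest" under the
// weighted-sum CPT of Table 4, with weights 0.8 for "Knowledge" (K)
// and 0.2 for "Skills" (Sk).
public class PretestMarginal {

    // P(Pretest = Succeed) as a weighted sum of the parents' marginals
    static double pretestSucceed(double pKnowledge, double pSkills) {
        return 0.8 * pKnowledge + 0.2 * pSkills;
    }

    public static void main(String[] args) {
        // Evidence entered in Fig. 9: both parents set to "Succeed"
        System.out.println(pretestSucceed(1.0, 1.0)); // prints 1.0
    }
}
```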

By changing the information of any node and recompiling our network, all marginal probabilities are updated automatically, allowing us to track the learner's path dynamically and to detect the causes of change during all stages of the learning situation.
