**1. Introduction**

122 E-Learning – Organizational Infrastructure and Tools for Specific Areas


In self-directed learning such as e-learning, it is important for learners to recognize their own understanding level. A wrong judgment leads to inefficient learning and a lack of motivation. Testing is widely used as an effective method for evaluating a learner's understanding level, and the evaluation is often based only on the score (accuracy rate). For ease of system construction, many e-learning systems use multiple-choice tests, in which a learner can give a right answer without real comprehension.

Studies on estimating a learner's understanding level from learning history and tests have been reported. One approach constructs a learner model based on the accuracy rate and answer time in a test (Sobue et al., 2004). However, it requires an enormous amount of data and consideration of the learner's uncertain elements.

Another solution is to use a heuristic model. For effective instruction, it is not appropriate to classify learners into only two groups (comprehend or not comprehend). We therefore add answer time as an objective criterion. The combination of accuracy rate and answer time can express various understanding levels and provide more precise estimation. Estimating a learner's understanding level from objective information can be regarded as an ambiguous classification problem, for which a heuristic model is effective.

Many intelligent tutoring systems (ITS) (Gunel, 2010; Nkambou & Bourdeau, 2008) have been developed in recent years. Constructing an accurate learner model is an important element of an ITS. A method that generates fuzzy rules and uses them for classification has been proposed (Takashi et al., 2006), but it also requires enormous data. Methods using a Bayesian Network, which relies on statistical approximation, and a Support Vector Machine (SVM), which has high generalization ability, have also been reported (Okamoto & Kayama, 2008; Sumada et al., 2007). These models have significant problems with computational cost and learning time.

This study focuses on the Cellular Neural Network (CNN) (Chua & Yang, 1988), one of the neural network models, because of its remarkable characteristics.



Intelligent Tutoring System with Associative Cellular Neural Network 125

Fig. 1. 7 × 7 and *r* = 2 CNN


In fact, CNN has been widely used in image processing, texture classification (Szianyi & Csapodi, 1998; Yang et al., 2001), time series processing, and so on.

In addition, CNN has been shown to be useful for associative memory (Liu & Michel, 1993). The method relates stored patterns to asymptotically stable equilibrium points of the dynamics. Since the information of the stored patterns is aggregated in the template, which represents the connection between each cell and its neighbors, verifying incomplete recall and improving efficiency in pattern classification become easy. So far, we have presented associative CNN systems for diagnosing liver diseases, recognizing Chinese characters, detecting abnormal automobile sounds, and so on (Namba, 2005; 2006; Zhang et al., 2005).

This study aims to construct an ITS with associative CNN which diagnoses a learner's understanding level and gives appropriate feedback. As described above, since associative CNN is useful for classifying ambiguous data, we apply it to the diagnosis system.

This chapter describes a new diagnosis method with Binary Output CNN (BCNN) (Namba, 2008; Namba2, 2008). Accuracy rate and answer time are used as classification information. The usefulness is verified with experimental results from a Java programming test (15 learners). The classification results and their validity are discussed.

Furthermore, in order to improve the versatility of the system, CNN is extended to a Tri-valued Output CNN (TCNN) (Namba, 2010; Namba2, 2010; Namba, 2011). By extending the CNN output function, more kinds of diagnosis information can be used and more understanding levels can be represented. Hence, more exact instruction can be realized. A method with associative TCNN is then described. To evaluate the usefulness of the system, associative TCNN estimates the understanding levels of 20 learners. The classification results and their validity are also discussed. In addition, to evaluate the diagnosis ability of associative TCNN, a comparison experiment is performed against a linear function, a conventional estimation method.

#### **2. Associative Cellular Neural Network**

#### **2.1 Cellular Neural Network**

CNN is composed of simple analog circuits called cells. Each cell is connected only to its *r*−neighbors, i.e., the connectivity is local. Fig.1 shows a 7 × 7, *r* = 2 CNN. In this figure, the black cell is C(4,5), and the gray cells are its neighborhood. In an *r*−neighborhood *m* × *n* CNN, the cell C(*i*, *j*) in the *i*-th row and *j*-th column (*i* = 1, 2, ··· , *m*; *j* = 1, 2, ··· , *n*) is represented by

$$\dot{x}\_{ij} = -x\_{ij} + T\_{ij} \ast y\_{ij} + S\_{ij} \ast u\_{ij} + I\_{ij} \tag{1}$$

where *xij*, *yij*, *uij* and *Iij* denote a state, an output, an input, and a threshold, respectively. *Tij* and *Sij* are connection coefficients which represent the influence from the neighbor cells. ∗ means the following operation:

$$T\_{ij} \ast y\_{ij} = \sum\_{u=-r}^{r} \sum\_{v=-r}^{r} t\_{ij(u,v)} y\_{i+u,j+v} \tag{2}$$

For example, the dynamics of cell C(4,5) in the *r* = 2, 7 × 7 CNN is represented as follows:

$$\dot{\mathbf{x}}\_{45} = -\mathbf{x}\_{45} + \sum\_{u=-2}^{2} \sum\_{v=-2}^{2} t\_{45(u,v)} y\_{4+u,5+v} + I\_{45} \tag{3}$$

and the matrix *t*<sub>45</sub> is shown in Table 1.

$$t\_{45} = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & t\_{45(-2,-2)} & t\_{45(-2,-1)} & t\_{45(-2,0)} & t\_{45(-2,1)} & t\_{45(-2,2)} & 0 \\ 0 & t\_{45(-1,-2)} & t\_{45(-1,-1)} & t\_{45(-1,0)} & t\_{45(-1,1)} & t\_{45(-1,2)} & 0 \\ 0 & t\_{45(0,-2)} & t\_{45(0,-1)} & t\_{45(0,0)} & t\_{45(0,1)} & t\_{45(0,2)} & 0 \\ 0 & t\_{45(1,-2)} & t\_{45(1,-1)} & t\_{45(1,0)} & t\_{45(1,1)} & t\_{45(1,2)} & 0 \\ 0 & t\_{45(2,-2)} & t\_{45(2,-1)} & t\_{45(2,0)} & t\_{45(2,1)} & t\_{45(2,2)} & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}$$

Table 1. The matrix *t*<sub>45</sub>
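The operation ∗ in Eq.(2) is a local weighted sum over the *r*−neighborhood. As a rough sketch (not code from the chapter), the following Python function evaluates such a template sum for one cell, treating out-of-range neighbors as contributing zero; the uniform template and the array sizes are illustrative assumptions only.

```python
import numpy as np

def template_op(t, y, i, j, r):
    """Compute (T_ij * y)_ij of Eq.(2): a weighted sum over the r-neighborhood.

    t: (2r+1, 2r+1) template with t[u + r, v + r] = t_ij(u, v)
    y: (m, n) output array; out-of-range neighbors contribute 0.
    """
    m, n = y.shape
    acc = 0.0
    for u in range(-r, r + 1):
        for v in range(-r, r + 1):
            if 0 <= i + u < m and 0 <= j + v < n:
                acc += t[u + r, v + r] * y[i + u, j + v]
    return acc

# Example: 7x7 CNN, r = 2, uniform template -> the sum of the 5x5 patch.
y = np.ones((7, 7))
t45 = np.ones((5, 5))
print(template_op(t45, y, 3, 4, 2))  # 25.0 (cell C(4,5) in 1-based indexing)
```

At a corner cell most neighbors fall outside the grid and are skipped, which mirrors the zero border entries of Table 1.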


In the CNN used in this study, *uij* = 0 for simplicity. Hence, the dynamics is

$$
\dot{x}\_{ij} = -x\_{ij} + T\_{ij} \ast y\_{ij} + I\_{ij}. \tag{4}
$$

Moreover, *yij* is a function of *xij* as the following:

$$y\_{ij} = \frac{1}{2}(|x\_{ij} + 1| - |x\_{ij} - 1|). \tag{5}$$

Fig.2 shows the binary output function. Eq.(5) guarantees a high computation accuracy. By using vector notation, the differential equation of *m* × *n* CNN can be represented as

$$
\dot{x} = -x + Ty + I \tag{6}
$$

where x, y and I denote a state, an output, and a threshold vector, respectively, and T is the template matrix:

$$\begin{array}{l} \mathbf{x} = (x\_{11}, x\_{12}, \cdots, x\_{1n}, \cdots, x\_{m1}, \cdots, x\_{mn})^T \\ \mathbf{y} = (y\_{11}, y\_{12}, \cdots, y\_{1n}, \cdots, y\_{m1}, \cdots, y\_{mn})^T \\ \mathbf{I} = (I\_{11}, I\_{12}, \cdots, I\_{1n}, \cdots, I\_{m1}, \cdots, I\_{mn})^T \end{array}$$
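The dynamics of Eqs.(4)-(6) can be illustrated with a minimal numerical sketch. The forward-Euler step size, the diagonal self-feedback template and the network size below are hypothetical choices for illustration, not the templates designed later in the chapter.

```python
import numpy as np

def output(x):
    # Piecewise-linear output of Eq.(5): y = (|x + 1| - |x - 1|) / 2
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

def cnn_step(x, T, I, dt=0.05):
    # One forward-Euler step of Eq.(6): x' = -x + T y + I
    y = output(x)
    return x + dt * (-x + T @ y + I)

# Hypothetical 2x2 CNN flattened to a 4-vector.
rng = np.random.default_rng(0)
T = np.eye(4) * 2.0          # self-feedback > 1 drives outputs into saturation
I = np.zeros(4)
x = rng.uniform(-0.5, 0.5, size=4)
for _ in range(500):
    x = cnn_step(x, T, I)
y = output(x)
print(y)                     # each output saturates to +1 or -1
```

Each component grows away from zero in the linear region and then settles at a stable equilibrium with |*x*| > 1, so the output reads exactly ±1.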


Fig. 2. Binary output function

#### **2.2 Design of associative BCNN**

Associative BCNN can store many patterns because they are related to asymptotically stable equilibrium points of the dynamics. We prepare *p* state vectors β1,β2, ··· ,β*<sup>p</sup>* obtained by multiplying the *p* stored vectors α*i*(*i* = 1, 2, ··· , *p*), whose elements are ±1, by a constant *c*(*c* > 1). In other words, we can obtain Eq.(7).

$$
\beta\_i = c\alpha\_i \tag{7}
$$

It is evident that α*i*,β*i*, T and I concurrently satisfy the following equations:

$$\begin{array}{c} -\beta\_1 + T\alpha\_1 + I = 0 \\ -\beta\_2 + T\alpha\_2 + I = 0 \\ \vdots \\ -\beta\_p + T\alpha\_p + I = 0 \end{array} \tag{8}$$

Let matrices A and B be

$$\begin{array}{l} \mathbf{A} = (\alpha\_1 - \alpha\_p, \alpha\_2 - \alpha\_p, \cdots, \alpha\_{p-1} - \alpha\_p) \\ \mathbf{B} = (\beta\_1 - \beta\_p, \beta\_2 - \beta\_p, \cdots, \beta\_{p-1} - \beta\_p) \end{array} \tag{9}$$

We are able to obtain the following equations:

$$\begin{aligned} \mathbf{B} &= \mathbf{T} \mathbf{A} \\ \mathbf{I} &= \beta\_p - \mathbf{T} \alpha\_p \end{aligned} \tag{10}$$

For the CNN to have α*<sup>i</sup>* as stored vectors, T and I satisfying these expressions must exist. They can easily be found by the following method.

If we focus on the computation at the *k*−th cell in the CNN (*k* = *n*(*i* − 1) + *j*), its conditional equation is given by

$$\mathbf{b}\_{k} = \mathbf{t}\_{k}\mathbf{A} \tag{11}$$

where b*<sup>k</sup>* and t*<sup>k</sup>* are the *k*−th row vectors of matrices B and T, respectively. A large number of null elements are included in the vector t*k*. Using the property of the *r*−neighborhood, we obtain Eq.(12) by removing these elements from b*k*, t*<sup>k</sup>* and A:

$$\mathbf{b}\_k^r = \mathbf{t}\_k^r \mathbf{A}^r \tag{12}$$

where A<sup>r</sup> is the matrix obtained by removing from A the elements that do not belong to the *r*−neighborhood of the *k*−th cell; b<sup>r</sup><sub>k</sub> and t<sup>r</sup><sub>k</sub> are defined analogously. These procedures decrease the amount of computation. In general, the matrix A<sup>r</sup> is not square. Hence, we solve for t<sup>r</sup><sub>k</sub> by using a singular value decomposition:

$$\mathbf{A}^r = \mathbf{U}\_k[\lambda]^{1/2} \mathbf{V}\_k^T. \tag{13}$$

Hence we have


$$\mathbf{t}\_k^r = \mathbf{b}\_k^r \mathbf{V}\_k[\lambda]^{-1/2} \mathbf{U}\_k^T. \tag{14}$$

This solution is the minimal-norm solution of Eq.(12). [λ]<sup>1/2</sup> is a diagonal matrix consisting of the square roots of the eigenvalues of [A<sup>r</sup>]<sup>T</sup>A<sup>r</sup>, and U*<sup>k</sup>* and V*<sup>k</sup>* are orthogonal matrices.

In the CNN designed by the method described above, the stored patterns theoretically correspond to equilibrium points of the dynamics. CNN can recall a pattern by solving Eq.(6) when an initial state is given. In other words, the state converges to a stable equilibrium point as the solution of the differential equations.
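As a sketch of the design procedure of Eqs.(7)-(14), the fragment below solves B = TA in the minimal-norm sense with a pseudoinverse and then recalls a stored pattern from a distorted initial state. The per-cell *r*−neighborhood restriction of Eqs.(11)-(14) is omitted here for brevity (`numpy.linalg.pinv` performs the same SVD-based computation on the full system), and the two stored ±1 patterns are hypothetical.

```python
import numpy as np

def design(alphas, c=2.0):
    # Eq.(7): beta_i = c * alpha_i; Eqs.(9)-(10): B = T A, I = beta_p - T alpha_p
    alphas = np.asarray(alphas, dtype=float)       # p x N, entries +-1
    betas = c * alphas
    A = (alphas[:-1] - alphas[-1]).T               # N x (p-1), columns alpha_i - alpha_p
    B = (betas[:-1] - betas[-1]).T
    T = B @ np.linalg.pinv(A)                      # minimal-norm solution (SVD inside)
    I = betas[-1] - T @ alphas[-1]
    return T, I

def recall(x0, T, I, dt=0.05, steps=2000):
    # Integrate Eq.(6) until the state settles, then read the sign pattern.
    x = x0.copy()
    for _ in range(steps):
        y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))  # output function of Eq.(5)
        x = x + dt * (-x + T @ y + I)
    return np.sign(x)

patterns = [np.array([1, -1, 1, -1]), np.array([1, 1, -1, -1])]
T, I = design(patterns)
noisy = patterns[0] * np.array([1, 1, 1, -1]) * 0.6   # one flipped, all entries shrunk
print(recall(noisy, T, I))                             # recovers [1, -1, 1, -1]
```

The distorted state falls inside the basin of attraction of the first stored pattern, so self-recall converges to its equilibrium point.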

#### **3. Design of associative CNN diagnosis system**

#### **3.1 Overview**

Fig.3 shows the overview of the associative CNN diagnosis system. In advance, we define understanding levels and map them to stored patterns in associative CNN. A learner pattern is generated from the learner's test data. After that, a stored pattern can be obtained by the self-recall process of associative CNN. The understanding level which corresponds to the recalled pattern is the diagnosis result. Due to the characteristics of associative CNN, a stored pattern is not always recalled; in that case the system cannot diagnose the learner's understanding level.

#### **3.2 Expressing cell and answer pattern**

We here relate understanding levels to stored patterns of associative CNN. In order to design the diagnosis system, we use the following prior knowledge:

• A higher understanding level corresponds to high accuracy and a short answer time.

• A high understanding level corresponds to high accuracy.

• Low accuracy with a short answer time may mean that the learner made a careless mistake.


They can be expressed by a combination of cells. We allocate the information of a question to two cells, which represent answer and time. Each cell denotes the following:

1. Answer cell: whether the answer is right or wrong (discrete).
2. Time cell: whether the answer time is short or long (continuous).


Table 2 shows the understanding levels defined per question. Moreover, by aggregating them over all questions, an answer pattern can be generated. Fig.4 shows the expression method.
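Under the expression above, one question maps to an answer cell and a time cell. A minimal sketch of this encoding follows, assuming the time cell carries the normalized answer time of Eq.(15) clipped to [−1, +1]; the helper name and the clipping rule are illustrative assumptions, not the chapter's implementation.

```python
def encode_question(correct, q_time):
    """Map one question to (answer cell, time cell) in the spirit of Table 2.

    correct: True if the answer is right.
    q_time:  normalized answer time (negative = shorter than average).
    """
    answer_cell = 1.0 if correct else -1.0
    time_cell = max(-1.0, min(1.0, q_time))  # clip continuous time into [-1, +1]
    return answer_cell, time_cell

# Label A (right, short) and label D (wrong, long):
print(encode_question(True, -1.3))   # (1.0, -1.0)
print(encode_question(False, 0.4))   # (-1.0, 0.4)
```

Concatenating these pairs over all questions yields the learner pattern fed to the associative CNN.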

| Label | Answer | Time | Understanding level |
|---|---|---|---|
| A | Right | Short | Higher |
| B | Right | Long | High |
| C | Wrong | Short | Low |
| D | Wrong | Long | Low |

Table 2. Understanding levels defined per question

Fig. 3. Overview of our system

Fig. 4. Expression of answer pattern by associative CNN

#### **3.3 Stored patterns**

This study defines three understanding levels and designs four stored patterns corresponding to them. Fig.5 shows the patterns; Labels A-D correspond to those of Table 2.

Fig. 5. Stored patterns

#### **3.4 Generating input pattern**

The answer time is scaled into the range −1 to +1 for input to associative CNN. Hence, in order to transform an actual answer time *dk* into a normalized one *qk* for the *k*−th question, we use

$$q\_k = \frac{1}{\sigma\_k} (d\_k - \bar{t}\_k) \tag{15}$$

where *σk* and ¯*tk* are the standard deviation and average of all learners' answer times for the *k*−th question.

#### **4. Experimental results by associative BCNN**

The experiment involved 15 learners who had studied Java programming. They were randomly divided into two groups of ten and five students, and the experiment was conducted with each group. The former is Group X, and the latter is Group Y.

#### **4.1 Experiment 1 – Group X**

First, the learners in Group X were tested. All 50 questions were divided into three levels according to the accuracy rate of Group X. As shown in Table 3, question levels were defined and all questions were classified. Moreover, the average and standard deviation of answer time in Group X were obtained, and all data were transformed by Eq.(15). The diagnosis results are shown in Table 4.

| Level | Accuracy Rate (%) | Category | Number |
|---|---|---|---|
| I | 80–100 | Easy | 19 |
| II | 40–80 | Standard | 20 |
| III | 0–40 | Difficult | 11 |

Table 3. Question level defined in the experiment

"Others" in Table 4 indicates that the output did not match any stored pattern; in other words, CNN could not diagnose. In the experiment, the diagnosis rate, i.e., the rate at which CNN recalled a stored pattern, was about 93.3 %. Learner patterns were classified into A and B for easy questions (Level I) and into C and D for difficult ones (Level III). Hence, this result can be considered appropriate.
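Eq.(15) is a per-question z-score of the answer time. A small sketch with hypothetical answer times (the data below are invented for illustration):

```python
import math

def normalize_times(times, d):
    """Eq.(15): q_k = (d_k - mean) / std over all learners' times for one question."""
    mean = sum(times) / len(times)
    var = sum((t - mean) ** 2 for t in times) / len(times)  # population variance
    return (d - mean) / math.sqrt(var)

# Hypothetical answer times (seconds) of five learners on question k:
times = [30.0, 40.0, 50.0, 60.0, 70.0]
print(round(normalize_times(times, 30.0), 3))  # -1.414: fastest learner, negative q_k
```

A negative *qk* thus marks a shorter-than-average answer, which drives the corresponding time cell toward the "short" side of the pattern.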
