70 Fuzzy Inference System – Theory and Applications

*F1* and *F2* are the membership functions for input 1 and 2, respectively.

| Rule | S | T | A | H | P |
|------|---|---|---|---|---|
| 1 | High | Fast | A few | Little | Has Mastered |
| 2 | High | Fast | A few | Average | Moderately Mastered |
| 3 | High | Fast | A few | Needed | Moderately Mastered |
| 4 | High | Fast | Average | Little | Moderately Mastered |
| 5 | High | Fast | Average | Average | Moderately Mastered |
| 6 | High | Fast | Average | Needed | Moderately Mastered |
| 7 | High | Fast | Many | Little | Not Mastered |
| 8 | High | Fast | Many | Average | Not Mastered |
| 9 | High | Fast | Many | Needed | Not Mastered |
| 10 | High | Average | A few | Little | Has Mastered |
| 11 | High | Average | A few | Average | Moderately Mastered |
| 12 | High | Average | Many | Needed | Not Mastered |
| 13 | High | Slow | A few | Little | Has Mastered |
| 14 | High | Slow | Many | Needed | Not Mastered |
| 15 | Moderate | Fast | A few | Little | Moderately Mastered |
| 16 | Moderate | Average | Average | Average | Moderately Mastered |
| 17 | Moderate | Average | Many | Needed | Not Mastered |
| 18 | Low | x | x | x | Not Mastered |

Table 4. Initial fuzzy rules determined by human experts

**3.4 Defuzzification of output distribution**

The input for the defuzzification process is a fuzzy set, and the output is a single number: crispness recovered from fuzziness. Given a fuzzy set that encompasses a range of output values, we need to return one number, thereby moving from a fuzzy set to a crisp output. The final output of the system is the weighted average of all rule outputs, computed as in equation (10).

$$\text{crisp output} = \frac{\sum_{l} \mu_l z_l}{\sum_{l} \mu_l} \tag{10}$$

Where: $z_l$ is the output of rule $l$ and $\mu_l$ is its firing strength.

Finally, all the outputs of the datasets for reasoning about the student's performance in the human expert FIS have been recorded.

The next section describes the ANFIS approach to form a complete fuzzy rule base that solves the problem of incomplete and vague decisions made by humans.

**4. Development of Adaptive Neuro-Fuzzy Inference System (ANFIS)**

Basically, fuzzy rules and fuzzy reasoning are the backbone of fuzzy inference systems, which are the most important modeling tools based on fuzzy sets (Jang et al., 1997). Fuzzy reasoning is an inference procedure that derives conclusions from a set of fuzzy *If-Then* rules and known facts. The ANFIS model is proposed to form a complete fuzzy rule base, so that all possible input conditions of the fuzzy rules are generated.

It is necessary to take into consideration the scarcity of the data and the style of the input-space partitioning. For example, for a single-input problem, usually 10 data points are necessary to come up with a good model (Jang et al., 1997). Details of the ANFIS model structure are described in Section 4.1.

#### **4.1 ANFIS model structure**

The ANFIS model structure consists of four input-layer nodes, a number of hidden-layer nodes, and one output-layer node, as presented in Fig. 6. The input layer represents the antecedent part of the fuzzy rule, which is the student's learning behaviour: the score (*S*) earned, the time (*T*) spent, the attempts (*A*), and the help (*H*); the output layer represents the consequent part of the rule, i.e. the student's performance (*P*). The size of the hidden layer is determined experimentally.

In this work, the ANFIS model is trained with the 18 fuzzy rules obtained from the human expert. These rules are considered certain. After that, 81 potential fuzzy rules, representing the 3 × 3 × 3 × 3 = 81 combinations of rule antecedents, are used for testing the network.
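As a quick illustration, the full antecedent space can be enumerated mechanically. This is a minimal Python sketch, assuming the three linguistic terms per input listed in Table 4:

```python
from itertools import product

# Three linguistic terms for each of the four inputs (taken from Table 4);
# every combination is one candidate rule antecedent: 3^4 = 81 in total.
terms = {
    "S": ["High", "Moderate", "Low"],
    "T": ["Fast", "Average", "Slow"],
    "A": ["A few", "Average", "Many"],
    "H": ["Little", "Average", "Needed"],
}

antecedents = list(product(*terms.values()))
print(len(antecedents))  # 81
```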

Fig. 6. ANFIS model structure

As Fig. 6 shows, all nodes in the same layer have similar functions. Layer 1 is the input layer; its neurons simply pass the external crisp signals on to Layer 2.

$$\begin{aligned} y_l^{(1)} &= \mu_{Sl}(S) \\ y_l^{(1)} &= \mu_{Tl}(T) \\ y_l^{(1)} &= \mu_{Al}(A) \\ y_l^{(1)} &= \mu_{Hl}(H) \end{aligned} \tag{11}$$

$$y_l^{(2)} = e^{-\left(\frac{x_l^{(2)} - c_l}{2\sigma}\right)^2} \tag{12}$$

$$y_l^{(3)} = w_l = \mu_{Sl}(S) \times \mu_{Tl}(T) \times \mu_{Al}(A) \times \mu_{Hl}(H), \quad l = 1, 2 \tag{13}$$

$$y_l^{(4)} = \overline{w}_l = \frac{w_l}{w_1 + w_2 + w_3 + w_4} \tag{14}$$

$$y_l^{(5)} = \overline{w}_l f_l = \overline{w}_l \left( a_l S + b_l T + c_l A + d_l H + e_l \right) \tag{15}$$

$$y^{(6)} = \text{overall output} = \sum_l \overline{w}_l f_l = \frac{\sum_l w_l f_l}{\sum_l w_l} \tag{16}$$
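A minimal numerical sketch of this forward pass, assuming Gaussian membership functions as in Eq. (12) and first-order consequents as in Eq. (15); all parameter values are illustrative, not the trained ones:

```python
import numpy as np

def gaussian_mf(x, c, sigma):
    """Eq. (12): Gaussian membership degree of a crisp input x."""
    return np.exp(-(((x - c) / (2.0 * sigma)) ** 2))

def anfis_forward(inputs, centers, sigmas, coeffs):
    """One forward pass through the six layers, Eqs. (11)-(16).

    inputs  : (4,)   crisp values S, T, A, H
    centers : (L, 4) Gaussian centres c, per rule and input
    sigmas  : (L, 4) Gaussian widths sigma, per rule and input
    coeffs  : (L, 5) consequent parameters a_l, b_l, c_l, d_l, e_l
    """
    # Layers 1-2: membership degrees mu_{Sl}(S), ..., mu_{Hl}(H)
    mu = gaussian_mf(inputs, centers, sigmas)          # (L, 4)
    # Layer 3, Eq. (13): firing strength w_l = product of the degrees
    w = mu.prod(axis=1)                                # (L,)
    # Layer 4, Eq. (14): normalised firing strengths
    w_bar = w / w.sum()
    # Layer 5, Eq. (15): rule outputs f_l = a_l*S + b_l*T + c_l*A + d_l*H + e_l
    f = coeffs[:, :4] @ inputs + coeffs[:, 4]          # (L,)
    # Layer 6, Eq. (16): overall output = sum_l w_bar_l * f_l
    return float(w_bar @ f)
```

With a single rule the normalised weight is 1, so the overall output reduces to that rule's consequent value, which makes the sketch easy to sanity-check.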

A Concise Fuzzy Rule Base to Reason Student Performance Based on Rough-Fuzzy Approach 75

Fig. 7. Comparison between ANFIS outputs based on 44 training datasets and testing data

Fig. 8. Comparison between ANFIS outputs based on 54 training datasets and testing data

After increasing the training data from 54 to 64 datasets, the results become better. Fig. 9 shows the comparison between the outputs of the ANFIS model based on 64 training datasets and the outputs of the checking data. The trained ANFIS achieves up to 96.3% successful classification; however, some outputs are still illogical: 3.7% of the decisions remain illogical.

Fig. 9. Comparison between ANFIS outputs based on 64 training datasets and testing data

Thus, another experiment was carried out using 69 training datasets, and the resulting ANFIS classifies all 81 input patterns successfully, as can be seen clearly in Fig. 10. In the graph, both outputs are the same, and the ANFIS model can classify the student performance correctly under all possible conditions.

Fig. 10. Comparison between ANFIS outputs based on 69 training datasets and testing data

Moreover, the percentage of successful classifications for each input data pattern has been calculated and is shown in Table 5 and Fig. 10. The table indicates that the human experts' fuzzy rule base, consisting of only 18 rules, cannot classify every input: of the 81 input datasets tested, only 62% were classified successfully, and of 1500 random input datasets, 66% gave the desired result. Meanwhile, the ANFIS based on 69 training datasets yields more encouraging results than the human experts' fuzzy rule base: it successfully classified all the given inputs.

By analyzing and comparing the experimental results for the five fuzzy rule bases, it can be concluded that the human experts' fuzzy rule base is consistent but incomplete.


**5.1 Rough fuzzy phases**

The three main phases in the rough-fuzzy approach are data pre-processing, reduct computation and data post-processing, as shown in Fig. 11 and described as follows. The fuzzy rules are mapped into a decision system format, followed by discretisation of the data, computation of reducts from the data and derivation of rules from the reducts.

**Phase 1.** Data pre-processing.

In this phase, the complete fuzzy rules are converted from linguistic terms into numeric values that correspond to the rough set format.

a. In this problem, the fuzzy rules are mapped as rows, while the antecedents and the consequents of the rules are mapped into columns. In the rough set decision table, the antecedents and consequents of the fuzzy rules are labelled as condition and decision attributes, respectively.

b. Discretisation refers to the process of arranging the attribute values into groups of similar values. It involves the transformation of the fuzzy linguistic descriptions of the condition and decision attributes into numerical values. In this study, a conversion scheme is formulated to transform the conditions and decisions from fuzzy linguistic values into numerical representations.

**Phase 2.** Reduct computation.

c. Computation of reducts. The reduct computation stage determines the selection of an important attribute that can be used to represent the decision system (Carlin et al., 1998). It is used to reduce the decision system, thus generating more concise rules. The rough set approach employs two important concepts related to reduction: one is related to the reduction of rows, and the other to the reduction of columns (Chen, 1999). With the notion of an indiscernibility class, rows with certain properties are grouped together, while with the notion of dispensable attributes, columns with less important attributes are removed. Another essential concept in reduct computation is the lower and upper approximations: the computation involving the lower approximation produces rules that are certain, while the computation involving the upper approximation produces possible rules (Øhrn, 2001).

d. Rule generation. A reduct is converted into a rule by binding the condition attribute values of the object class from which the reduct originated to the corresponding decision attribute.

**Phase 3.** Data post-processing.

The rules in rough set format are converted into the linguistic terms of the concise fuzzy rule base.

**5.2 Rough fuzzy experiment**

In Section 4, there are 81 datasets that represent every possible value of the fuzzy rules with full certainty. This dataset is used for the development of the ANFIS model. Using Rosetta as the rough set tool, the genetic algorithm with object reduct is the method used for computing reducts (Øhrn, 2001). This method implements a genetic algorithm for computing minimal hitting sets as described by Vinterbo and Øhrn (2000). Using rough set, we trained the fuzzy
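The decision-preserving reduct idea in Phase 2 can be illustrated in a few lines of Python. This is a brute-force sketch on a hypothetical, hand-coded mini decision table (the numeric value coding below is assumed for illustration, not the chapter's actual conversion scheme); Rosetta's genetic algorithm scales the same idea to larger tables:

```python
from itertools import combinations

# Hypothetical mini decision table: each row is one fuzzy rule, with the
# condition attributes (S, T, A, H) discretised to assumed numeric codes
# and the decision attribute P as the last element.
rules = [
    # (S, T, A, H) -> P
    ((2, 2, 0, 0), 2),   # High, Fast, A few, Little  -> Has Mastered
    ((2, 2, 0, 1), 1),   # High, Fast, A few, Average -> Moderately Mastered
    ((2, 2, 2, 2), 0),   # High, Fast, Many, Needed   -> Not Mastered
    ((0, 2, 0, 0), 0),   # Low, ...                   -> Not Mastered
]

def is_reduct(attr_idx, table):
    """A subset of condition attributes preserves the decisions if no two
    rows agree on all chosen attributes yet disagree on the decision."""
    seen = {}
    for cond, dec in table:
        key = tuple(cond[i] for i in attr_idx)
        if key in seen and seen[key] != dec:
            return False
        seen[key] = dec
    return True

def minimal_reducts(table, n_attrs=4):
    """Return all smallest decision-preserving attribute subsets."""
    for size in range(1, n_attrs + 1):
        found = [c for c in combinations(range(n_attrs), size)
                 if is_reduct(c, table)]
        if found:
            return found
    return [tuple(range(n_attrs))]

print(minimal_reducts(rules))  # [(0, 3)]: S and H alone preserve the decisions
```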

This is because the 18 rules in this rule base were carefully selected to give full certainty for decisions. However, we found that not all situations are covered by these 18 fuzzy rules; some rules are still missing. On the contrary, the complete fuzzy rule base used in ANFIS covers all situations, but some of its rules are inconsistent and their decision outputs are not logical. Although all situations for the four attributes are covered by the set of 81 rules, some of the rules have been found to contain unnecessary conditions. Thus, the training data needed to be increased, so that the ANFIS based on 69 training datasets could eliminate the unnecessary conditions and the illogical decisions. Finally, the ANFIS model is consistent and complete: all situations for the four attributes are covered by the set of 69 training datasets, and there are no missing rules.


Table 5. Percentage of successful classifications
