*PCodigo II* is an online system that automatically maps students' profiles onto software metrics to analyze programming learning [3]. In addition to profile mapping on 348 software metrics, *PCodigo II* has massive execution, similar-profile graphing, information visualization, and plagiarism analysis capabilities.

The first applications of *PCodigo II* [3] to real programming exercises demonstrated the effectiveness of this system for the diagnostic assessment of programming learning. Applying *PCodigo II* to real programming exercises showed that teachers, taking into account what the metrics say, can recognize the learning difficulties, good programming practices and classes of learning profiles of a whole class in a fast, detailed and holistic way.

The chapter of [16] presents information visualization instruments in a multidimensional perspective to help teachers analyze programming learning with profiles mapped onto software metrics. Through the generated visualizations, we can analyze and compare profiles under different variables in order to recognize learning difficulties and classes of solutions with similar characteristics.

The strategy of profile recognition by metrics-based static code analysis of [2] aims to infer the profiles of programmers from the analysis of their Java code, classify them according to skills and continually evaluate their progress in the practice of programming in a course. The detected profiles are novice, advanced beginner, proficient and expert.

Some metrics used in this strategy are the number of statements, conditional and repetition control structures, data types, classes, operators and lines of code, among others. The advantage of this strategy over our system is that it classifies and qualifies students. Our system, however, automatically selects the most appropriate metrics to evaluate each type of programming solution.

For the automatic selection of evaluation variables, we highlight the feature selection model of [17], which combines clustering techniques and an algorithm that creates a feature map by selecting relevant terms in the texts of a teacher's groups of evaluation notes. In our proposal, the relevant characteristics, that is, the most important metrics for each programming solution, can be visualized through heat maps comparing different solutions on five or more software metrics.

Regarding the composition of rubrics, a strategy to highlight is the proposal of [11], which is based on clustering and Principal Component Analysis techniques to recognize, among the solutions developed by students, examples that represent, in a rubric scheme, the scores attributed by a teacher. This work complements these proposals by generating a ranking of samples of programming solutions for a teacher to score until the best set of rubric representations is found, with a diversity of marks awarded.

According to [18], to understand how learning unfolds over time, it is necessary to move to a new learning perspective in which the units of analysis are separate but interrelated learning events.

Following this idea, the study of [18] investigates and validates longitudinal patterns of online participation as a measure to differentiate student performances. The system proposed in this work, based on the study of [18], seeks to understand how programming learning unfolds and to analyze longitudinal patterns. In this way, in relation to the other works presented, we advance in the 3D representation of programming students' profiles, in the view of characteristics represented by software metrics over time, and in the composition of rubrics from a ranking of solutions automatically selected for a teacher to score.

### **3. 3D representation system of programming students' profiles**

The profile representation system presented in this chapter is an evolution of *PCodigo II*, a software which, through software metrics that quantify programming effort and quality, recognizes possible learning difficulties, good programming practices and even strong evidence of plagiarism among programs [3].

Our system extends the students' profile representation of *PCodigo II* with a temporal dimension, selects the most relevant metrics and allows the automatic selection of representative examples from a set of source codes for the composition of rubric representations.

**Figure 1** shows the proposed system's architecture as a scheme of inputs, processing and outputs, covering the integration of our system with versions 1.9 and 3.x of the *Moodle* virtual learning environment.

According to **Figure 1**, for version 1.9 of *Moodle*, the system receives as input a backup of Moodle's *Compacted Classroom* (in .zip, .rar, .gz or .tgz format). For version 3.x of *Moodle*, the system accesses a distance programming course directly through the *Teacher's Credentials*.

The course data imported from *Moodle* are the following: the student listing, the activity listing, the activity grades and the *Submissions*, which are the files of programming exercises. These data are extracted by the *Extracting and Preprocessing* module, and *Submissions* containing source code written in the C, C++, Java or Python languages are mapped to vectors whose dimensions are software metrics that quantify programming effort and quality [3]. Submitted C programs are mapped onto 348 software metrics, and Python programs onto 42 metrics.

We call each vector representation of a student's programming solution on software metrics a *Learning State*. After generating the *Learning States* of a programming class, the system gathers these representations into a *Cognitive Matrix* for the analysis and comparison of the programs written by students [3].
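To make this mapping concrete, the sketch below reduces a submission to a small metric vector; the three metrics shown are simplified, hypothetical stand-ins for the 348 metrics actually computed by *PCodigo II*:

```python
# Hypothetical, simplified metrics (not PCodigo II's actual set): map one
# submission's source code to a Learning State vector.
def learning_state(source: str) -> list[float]:
    lines = [line for line in source.splitlines() if line.strip()]
    loc = len(lines)                                      # lines of code
    indented = sum(1 for line in lines if line.startswith((" ", "\t")))
    indentation = indented / loc if loc else 0.0          # share of indented lines
    # Crude count of branching keywords as a stand-in for complexity metrics.
    branches = sum(line.count("if") + line.count("while") + line.count("for")
                   for line in lines)
    return [float(loc), indentation, float(branches)]

print(learning_state("int main() {\n    if (x > 0) return 1;\n    return 0;\n}"))
```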

In order to analyze solutions in a generic way, we have reduced each *Learning State* to five metrics: *Maintainability*, *Cyclomatic Complexity*, *Indentation*, *Laconism* and *Modularization*.


Then, bringing together the cognitive matrices of each programming solution of a course, we obtain a *3D Representation of Learning Profiles* of a programming class. The same procedure is performed for a *Reduced Matrix*. The timeline formed by the set of *Learning States* of a student over a course is called a *Learning Profile*.
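As an illustration, the following sketch assembles such a 3D representation with *NumPy*, using random toy matrices in place of the Cognitive Matrices produced by *PCodigo II*:

```python
# Stacking each activity's Cognitive Matrix (students x metrics) over time
# yields an activities x students x metrics tensor; one student's slice along
# the time axis is a Learning Profile.
import numpy as np

n_activities, n_students, n_metrics = 10, 25, 348
rng = np.random.default_rng(7)

# One Cognitive Matrix per activity (toy data standing in for PCodigo II output).
cognitive_matrices = [rng.random((n_students, n_metrics)) for _ in range(n_activities)]
profiles_3d = np.stack(cognitive_matrices)   # shape: (10, 25, 348)

learning_profile = profiles_3d[:, 0, :]      # student 0 across the course
print(learning_profile.shape)                # (10, 348): a timeline of Learning States
```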


#### **Figure 1.**

*Architecture of the 3D representation system of students' profiles.*

A *Learning Profile* shows how a set of a student's assessment variables evolves over a course. Thus, through the analysis of learning profiles it is possible to understand the main learning difficulties of students and to reorient teaching with formative assessment actions in order to anticipate the predictable future of poor performances.

#### **3.1 Selection of metrics**

The *Reduced Matrix* generation process is performed by the *Selection of Metrics* module (see **Figure 1**) using the *Recursive Feature Elimination* (RFE) method of the *Scikit-Learn* library [19] and a linear regression algorithm. The inputs of *Selection of Metrics* are the grades of some programming solutions and the *Cognitive Matrix* mapped on 348 software metrics generated by *PCodigo II* [3]. *Selection of Metrics* returns the metrics most related to the grading pattern through a metric ranking.
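A minimal sketch of this step with the *Scikit-Learn* API, using toy data in place of the *Cognitive Matrix* and the teacher's grades, could look as follows:

```python
# A sketch of the Selection of Metrics step: RFE with a linear regressor ranks
# the metrics by how strongly they relate to the teacher's grading pattern.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
cognitive_matrix = rng.random((25, 348))  # 25 students x 348 software metrics (toy data)
grades = rng.uniform(0, 10, size=25)      # marks given by a teacher to these solutions

# step=10 removes 10 metrics per iteration to keep the toy example fast.
selector = RFE(LinearRegression(), n_features_to_select=3, step=10)
selector.fit(cognitive_matrix, grades)

reduced_matrix = cognitive_matrix[:, selector.support_]  # the Reduced Matrix
metric_ranking = selector.ranking_                       # 1 = most related to the grades
print(np.where(selector.support_)[0])                    # indices of the selected metrics
```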

#### **3.2 Timeline of programming solutions**

The timeline consists of a vector representation, for each programming exercise of a course, of the five fundamental metrics or of the selected metrics most closely related to the teacher's grades. This representation contributes to analyzing how the evaluation metrics evolve for each student during a course and to generating a training set to predict performance in future exercises from the history of exercises and the performances associated with them.
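The sketch below illustrates, on toy data, how a student's timeline could be turned into supervised training pairs; summarizing the history by its mean is a simplifying assumption for illustration:

```python
# A toy sketch: turn one student's timeline into supervised training pairs,
# where the history of past exercises predicts the next performance.
import numpy as np

rng = np.random.default_rng(5)
timeline = rng.random((10, 5))          # 10 exercises x 5 selected metrics (one student)
grades = rng.uniform(0, 10, size=10)    # the teacher's grade for each exercise

X, y = [], []
for t in range(1, 10):
    X.append(timeline[:t].mean(axis=0))  # simple summary of the history up to t
    y.append(grades[t])                  # the performance to predict
X, y = np.array(X), np.array(y)
print(X.shape, y.shape)                  # (9, 5) histories -> (9,) next grades
```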

#### **3.3 Cluster analysis and composition of rubrics**

The hierarchical approach we have used is to form clusters of similar solutions. In this way, a representative is selected from each of these clusters to receive a teacher's grade, and that grade is reproduced for the other solutions in the same cluster.

Unlike *PCodigo II* [3], in which clustering is performed with a previously defined number of clusters, we generated a centroid-based dendrogram from which the required number of clusters can be extracted; in this work, it was set to half of the samples of the BFS (*Breadth-first search*) tree, where depth is given by the distance noted on each edge, that is, the *Euclidean Distance*.
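A sketch of this clustering step with *SciPy*'s hierarchical clustering on toy data, cutting the tree at half of the samples as described above:

```python
# A toy sketch of the centroid-based dendrogram: hierarchical clustering of
# solution vectors with Euclidean distance, cut at half of the samples.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster, dendrogram

rng = np.random.default_rng(3)
solutions = rng.random((20, 5))          # 20 solutions x 5 normalized metrics

Z = linkage(solutions, method="centroid", metric="euclidean")
labels = fcluster(Z, t=solutions.shape[0] // 2, criterion="maxclust")
print(labels)                            # cluster of each solution
tree = dendrogram(Z, no_plot=True)       # tree structure with the edge distances
```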

According to **Figure 1**, for the *Composition of Rubrics*, with the purpose of assisting the teacher's correction of programming exercises, we have developed an automatic selection of representative code samples and of the metrics most related to the marks assigned by a teacher to this small set of representative codes.

In order to select this small set of representative codes, we have used a hierarchical representation of clusters, the *Selection Dendrogram*, with the Euclidean Distance as similarity measure. In the *Selection Dendrogram*, the first samples, marked in yellow, are those selected by the *Correction Ranking*, a list generated automatically to indicate the best correction sequence of programming exercises, so that the teacher can score a smaller set of programs that still represents the diversity of marks.

Through this representation, a depth-first search that is not aware of plagiarism is performed, starting with the most atypical samples and accumulating the distances (from root to node) expressed at each node of the dendrogram. Then, after the selected samples are scored by a teacher, the metrics that most impact the assigned grades are verified in order to analyze possible correction inconsistencies.
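Since the exact tree traversal is specific to our implementation, the sketch below approximates a *Correction Ranking* with a simpler criterion, the Euclidean distance of each solution to the global centroid, to convey the idea of scoring the most atypical samples first:

```python
# A toy sketch of a Correction Ranking: order solutions so the most atypical
# come first. Distance to the global centroid is a simple stand-in for the
# tree traversal with accumulated distances described above.
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(2)
solutions = rng.random((20, 5))                     # 20 solutions x 5 normalized metrics

centroid = solutions.mean(axis=0, keepdims=True)
atypicality = cdist(solutions, centroid).ravel()    # Euclidean distance to the centroid
correction_ranking = np.argsort(atypicality)[::-1]  # most dissimilar samples first
print(correction_ranking[:5])                       # first solutions the teacher scores
```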

#### **3.4 Prediction of performance**

In order to begin the performance prediction experiments, we have chosen two prediction methods: one based on cluster analysis and one based on histories of previous performances.

In the prediction based on cluster analysis, we use the selection ranking, which picks representative samples from the *Selection Dendrogram* subgraphs, to form the training set of a linear regression prediction model with 50% of the examples of a set of programming solutions scored by a teacher. The other 50% are predicted automatically by the model, guided by the diversity of the scores assigned by the teacher to the training set examples.

In the prediction based on a history of previous performances, a time series is generated from the 3D representation (students × activities × metrics); a regressor model is trained for each metric, together with a regressor from metrics to grades, and the metrics of the next exercise are then predicted along with its grade. In this case, the training set consists of the solutions submitted by the same student along the course, and the grade to be predicted is that of the solution following the history samples of that set.
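Both methods can be sketched on toy data as follows; the per-metric trend regressor over exercise indices is a simplifying assumption of ours:

```python
# A toy sketch of both prediction methods based on linear regression.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(13)

# Method 1 (cluster analysis): the ranking-selected half of the solutions,
# scored by the teacher, trains a model that grades the other half.
X = rng.random((30, 5))                                  # 30 solutions x 5 metrics
grades = X @ np.array([2.0, -1.0, 1.0, 3.0, 0.5])        # synthetic teacher marks
scored = np.arange(0, 30, 2)                             # teacher-scored 50%
unscored = np.setdiff1d(np.arange(30), scored)
model = LinearRegression().fit(X[scored], grades[scored])
print(model.predict(X[unscored])[:3])                    # automatic grades

# Method 2 (history): predict each metric of the next exercise from the
# student's own history, then map the predicted metrics to a grade.
history = rng.random((9, 5))                             # 9 past exercises x 5 metrics
past_grades = rng.uniform(0, 10, size=9)                 # grades of those exercises
steps = np.arange(9).reshape(-1, 1)                      # exercise index as time axis
next_metrics = np.array([
    LinearRegression().fit(steps, history[:, m]).predict([[9]])[0]
    for m in range(5)
])
grade_model = LinearRegression().fit(history, past_grades)
print(grade_model.predict(next_metrics.reshape(1, -1)))  # predicted next grade
```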

### **4. Experiments and results**

The first experiments with the system functionalities proposed in this chapter were performed on a *Moodle* classroom of a distance C Programming Language course in Brazil. Through the access credentials of a programming teacher, we obtained a zipped copy of this course's classroom for the processing of learning analysis from the students' codes. Next, all the C programming code files submitted by about 25 students along the distance programming course were extracted.

After the generation of the 3D representations of learning profiles (*Activities* × *Students* × *Metrics*), gathering 10 activities, 25 students and 348 software metrics, we used this information to generate the following results and views:

• For each activity, the list of metrics that were considered the most relevant to assign marks.

• For a class as a whole, a list of metrics selected from the 348 metrics that were considered the most relevant to assign marks.

• A dendrogram automatically generated on all the metrics that make up the student profile.

• A dendrogram automatically generated on all the metrics that make up the student profile, after normalization to values between 0 and 1.

• A heat map for each activity with the selected metrics that best represent each activity.

• A heat map for each activity with the metrics that best represent the marking criterion for the class as a whole.

• A heat map for each student (historical in time) with the metrics that best represent the correction criterion for the class as a whole.

• A heat map for each activity with five metrics representing programming skills and difficulties.

• A heat map for each student (historical in time) with five metrics representing the students' programming skills and difficulties.

• Prediction of student grades, where grades are assigned to submissions that are similar to each other.



One of the activities we used for this experiment was applied in a C programming distance course and contains the following statement:

*Write a program to obtain the number P of points of three teams in a football championship, according to the following mathematical expression:*

$$P = 5 \cdot GP - GN + 3 \cdot VF + 2 \cdot VC + E$$

*In this formula, GP is the number of positive goals, GN is the number of goals taken, VF is the number of wins away from home, VC is the number of victories at home and E is the number of draws. The output of this program must show, according to the number of points obtained by a team, the champion and the runner-up of a championship.*
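As a worked example, assuming the coefficients of the formula above:

```python
# A worked example of the points formula (coefficients as reconstructed above):
# P = 5*GP - GN + 3*VF + 2*VC + E.
def points(gp: int, gn: int, vf: int, vc: int, e: int) -> int:
    return 5 * gp - gn + 3 * vf + 2 * vc + e

print(points(gp=10, gn=4, vf=3, vc=2, e=1))   # 50 - 4 + 9 + 4 + 1 = 60
```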

We chose this activity for learning analysis because the use of logical expressions and of conditional and repetitive control structures allows us to differentiate the solutions in order to recognize which ones show difficulties in constructing logical expressions in control structures. In this way, a good solution to this activity will present few comparisons and few lines of programming code. On the other hand, a solution with several comparisons, instructions and control structures evidences programming effort and difficulty in constructing logical sentences.

In **Figure 2**, using this activity as an example of results 1 and 2 and views 5 and 6, we highlight two modes of analysis of the programming solutions of an activity in our system: first, from the software metrics *Maintainability*, *Cyclomatic Complexity*, *Indentation*, *Laconism* and *Modularization*, and second, from the metrics that were considered the most relevant for the attribution of marks, that is, the *Reduced Matrix* metrics.


#### **Figure 2.**

*Analysis of solutions by software metrics.*


In the graphs of **Figure 2**, the rows indicate students' solutions and the columns, metrics. In each column, white (scale 1) indicates the highest value of a metric and black (scale 0) the lowest. Whether a higher value is better depends on the information level of each code. However, the teacher can make this interpretation by comparing the values of the best and worst solutions. In this way, he would have an instrument to evaluate which indicators characterize good programming solutions and which express the most difficulties.
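The scaling described here corresponds to a per-column min-max normalization, sketched below on toy data:

```python
# Min-max normalize each metric (column) so that 1 (white) is the highest
# value and 0 (black) the lowest, as in the heat maps of Figure 2.
import numpy as np

rng = np.random.default_rng(9)
matrix = rng.random((25, 5)) * np.array([100, 10, 1, 50, 5])   # raw metric scales (toy)

col_min, col_max = matrix.min(axis=0), matrix.max(axis=0)
heat = (matrix - col_min) / (col_max - col_min)   # 0..1 per column
print(heat.min(axis=0), heat.max(axis=0))         # each column spans [0, 1]
```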



According to **Figure 2**, in the first graph, a high value of *Complexity*, a low value of *Indentation* and a high value of *Laconism* differentiate solution *al\_00017*, indicated by the red arrow in **Figure 2**, from the others and make it stand out as a poor solution. In the second graph, however, according to the assessment criteria based on the three metrics most related to the teacher's marks, this solution follows the pattern of the others and is therefore not indicated as a bad solution.

In **Figure 3**, which gives an example of view 9, we highlight how the five major metrics evolve from exercise to exercise for the same student over a course. It can be observed that this student, indicated in the first row of the graph by a green arrow, has a predominance of black in his programming solutions, indicating low values and meaning good performances in the easiest exercises. On the other hand, in the last exercise he submitted, indicated by a red arrow, the colors appear lighter, indicating a more complex activity and more difficulties. This becomes even more evident when, from this exercise of higher *Complexity* onwards, the student stopped submitting the programming activities, as noticed in the black color indicating a lack of performance in the following activities. This visualization shows the potential of the tool to enable a teacher to recognize where a student began to demonstrate difficulties.

The graph of **Figure 4** is the ranking view for a teacher to assign marks to activities with the least correction effort. This graph is a dendrogram presenting the hierarchy of the solutions developed for a programming activity, represented by software metrics normalized to values between 0 and 1. Distances are marked in gray and pink.

The graph of **Figure 5** is a ranking view for a teacher to assign marks to activities with the least correction effort. This graph is a dendrogram presenting the hierarchy of the solutions developed for a programming activity. Distances are marked in gray and pink, and the selected samples are marked in yellow.

According to **Figure 5**, selecting first the samples of greater dissimilarity, the teacher scores the most different ones and then some of the more similar ones.

As the teacher follows the ranking of samples suggested by the system, he can identify how far he needs to correct in order to obtain a minimum set representing the diversity of the developed solutions, for the composition of rubrics and, in the future, for training the automatic assessment of programming exercises with a set of examples of teachers' marks. In this case, we consider 50% of the samples for training and 50% for testing of the prediction model.

**Figure 6** presents our first prediction results, performed on a distance learning C programming course. In this graph, we present the performance results of all the programming solutions developed by one student (*al\_00009*) throughout a programming course. For each submitted programming solution, we compare the grade given by a teacher with the grades predicted by our system from the history of activities previously solved by that same student and from the solutions of other students of the class in that same activity, based on nearest neighbor methods. This performance analysis process is performed for all students of the distance learning course through our system.
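A sketch of this class-based prediction with a nearest neighbor regressor, which we assume as a stand-in for the method mentioned above:

```python
# A toy sketch: predict a student's grade in an activity from the most
# similar solutions submitted by other students in the same activity.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(11)
others = rng.random((24, 5))                 # other students' solutions (metrics)
other_grades = rng.uniform(0, 10, size=24)   # their teacher-assigned grades

knn = KNeighborsRegressor(n_neighbors=3).fit(others, other_grades)
target_solution = rng.random((1, 5))         # the solution whose grade we predict
print(knn.predict(target_solution))
```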

According to the graph of **Figure 6**, the predictions of a student's performance in an activity, based either on the history of exercises solved by that student or on the solutions of that exercise developed by other students, still diverge from the marks assigned by a teacher, although for higher performances these approaches approximate the teacher's evaluation.


#### **Figure 3.**

*Evolution of metrics for each activity.*


In addition, the predictive results of these approaches get closer to the teacher's marks as the history of solved exercises grows. Thus, we have good expectations for advancing the study of these methods to predict the performance of programming students.

In conclusion, with some examples of the results generated by the system of this chapter, we have shown the potential of this tool for programming teachers to follow the learning process of their students from the beginning to the end of a course, from a broad or a reduced set of metrics, and with less evaluation effort.


#### **Figure 4.**

*Dendrogram of solutions of a programming activity represented on normalized software metrics (without grades).*

#### **Figure 5.**

*Dendrogram of solutions of a programming activity selected from a correction ranking (with grades).*

#### **Figure 6.**

*Timeline with prediction of performance in programming.*

### **5. Conclusion**

The system proposed in this chapter was presented as a relevant tool to assist teachers in their evaluation decisions, enabling them to assist the learning process of their students in each programming exercise.

To this end, our system can recognize where learning difficulties begin, monitor how students evolve along a course, generate rubric representations and, soon, predict future performances of programming students.

These possibilities of learning analysis contribute greatly to reducing teachers' efforts in the onerous task of evaluating programming exercises, so that they can better track their students' learning process and reorient their formative actions. Some future works from this research are using the samples indicated for manual correction as training references of a semi-automatic programming evaluation system, and improving our strategy of predicting performance in activities from the timeline of solved programming exercises or from the solutions of students who solved exercises similar to the one whose grade we intend to predict.

Through this work we therefore offer a multidimensional and clinical analysis tool that helps teachers in their formative assessment actions and allows students to be better assisted in their difficulties and skills in the practice of programming.

**Author details**

Márcia Gonçalves de Oliveira\*, Ádler Oliveira Silva Neves and Mônica Ferreira Silva Lopes

Federal Institute of Espírito Santo, Vitória, Espírito Santo, Brazil

\*Address all correspondence to: marcia.oliveira@ifes.edu.br

© 2018 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
