Regarding competence "measurement" models, there are no generic standards or procedures for evaluation or valuation; each proposed model is tailored to a specific context and can be extrapolated to others with appropriate modifications. As emphasized in [24], partial least squares (PLS) path models are used for the evaluation of leadership competences under a hypothetical hierarchical scheme, where information is collected through questionnaires based mainly on Likert scales and weightings. Similarly, in [25], the procedures and tools used for the clinical evaluation of competences in Erasmus nursing students (ENS) consist of questionnaires, where each competence is valued on different scale metrics such as Likert. Schelfhout et al. [26] rely on a model of levels that contemplates domains, subcompetences and scaled behavioral indicators as the basis for giving concrete feedback to students, rather than using Likert-scale surveys; therefore, a mixed study method that combines qualitative and quantitative research techniques (self-assessment/evaluation questionnaires) was used, and the validity of this model was evaluated through a confirmatory factor analysis (CFA).

According to what has been analyzed, formulating an ontological system of the coworking UPS ecosystem and, based on it, applying metrics to assess the competences of the agents that are actively involved in it are both possible and applicable.

226 Management of Information Systems

**3. StartUPS: an entrepreneurship background**

The culture of innovation at Universidad Politécnica Salesiana (UPS) seeks to develop a new, more complex concept formulated in [27], which explains that "the university just like a jungle (ecosystem) takes inert and inorganic elements such as knowledge and science to create thriving ecosystems of living organisms whose interactions make up society." This innovative concept seeks to change the educational linearity that governs classrooms toward the productivity of innovation and creativity in spaces or associative groups that share common and multidisciplinary interests (coworks), that break with what is conventional, and that keep the center of interest in people, the basis of UPS's culture and the primary agent in the interaction and collaboration with diverse talents that seek to transcend social barriers in favor of connectivism [28–30], learning to learn [31–33] and the common good [34].

The ecosystem of innovation at UPS is intended to be something like a free zone, where the flow of ideas, talents and capital can be maximized in a network of collaborative work. Creating places within the institution to encourage this new university culture has been hard and fundamental work in order to "generate" a new educational model based on an individual's life project; therefore, one of the aims of the StartUPS project is that students/professors from the university integrate all the knowledge they have acquired into real-life projects and develop behavioral, contextual and technical competences [6] within spaces like the "coworks." The coworking UPS project is part of UPS's strategy to become a university of research and innovation, and the culture of entrepreneurship represents a fundamental factor in achieving these objectives in the short and long term. In 2015, a series of agreements to integrate the culture of "project work" were adopted in order to develop measures to promote innovation at UPS. This process of change has been accompanied by training for UPS agents (teachers and students).

**4. Ecosystem approach**

The computing model being suggested is part of a more complex system called CREAMINKA, a tool designed to support strategic decision-making regarding R + D + i (research + development + innovation) in the university. This component carries out a specific task: the analysis of the competences/skills of the agents that make up this ecosystem, applying the corresponding metrics for these skills through indicators that are valued by a mixed evaluation mechanism.

As shown in **Figure 1**, the structure of the ecosystem is organized in four clearly defined layers: (i) the transactional system for StartUPS, (ii) the microservices component, (iii) the triplet repository and (iv) the mobile/web application. The microservices component is the main layer that supports the entire subsystem; its function is to provide the services needed so that the flows of information can be matched to the different components. The "StartUPS" transactional system stores information about the agents in the ecosystem, such as data on their competences, projects, evaluation/valuation questionnaires, etc. The triplet repository stores the knowledge model of the innovation ecosystem and previously treated data from the transactional system. The mobile/web application is in charge of the interaction with the different agents and of the mechanisms of information input and output.

**Figure 1.** General structure of the StartUPS innovation subsystem.

The microservices component provides five specific services. The "parser service" microservice is responsible for the translation/transformation of data from transactional/nontransactional data sources into data for the ontological triplet model; the "auth service" microservice has the logic needed to support the processes of authorization and authentication; the "CRUD service" microservice has the task of creating, reading, updating and deleting information; the "report service" microservice is responsible for creating the different reports using the data provided by the "data service" microservice, which in turn provides all the information processed through different inference mechanisms, data mining and artificial intelligence.

CREAMINKA: An Intelligent Ecosystem for Supporting Management and Information Discovery… http://dx.doi.org/10.5772/intechopen.73212 229

#### **4.1. Competence evaluation model**

As mentioned in the related work section, there are several models for the analysis or "measurement" of competences. The suggested model is based on four "hierarchical" levels and their weighting relations. The levels are: (i) the general (generic) competences, (ii) the specific competences [35–38], (iii) the indicators and (iv) the trifocal evaluation (auto-hetero-co).

The competence evaluation diagram illustrated in **Figure 2** starts by carrying out the "trifocal" evaluation of the competences of an agent in the ecosystem after that agent has developed a project or completed a set of activities in an event, training course or workshop within the different innovation spaces created by the university. The evaluation model has two instances: it starts from a qualitative valuation, which is subjective, and moves toward an attempted quantitative valuation, which is objective, through the use of weights in the relations that exist between the different levels of the competence diagram.

**Figure 2.** Modular schema of competence evaluation.

The trifocal evaluation/valuation comprises three concepts: (i) heteroevaluation, (ii) coevaluation and (iii) self-evaluation. To begin, there is a questionnaire containing the battery of indicators to evaluate/value, either for a project or for a set of activities; each of these indicators already has a defined weighting with respect to its specific competence, in addition to its own measurement scale (a value scale, a Likert scale or another). The heteroevaluation is given by one or more valuators, each of whom also carries a weight within the set of questionnaires that are generated or completed; a similar case occurs with the coevaluation questionnaires. The self-evaluation is filled in by the valued individual and likewise has its respective weight. It is important to highlight that each type of evaluation has its own weighting in the trifocal model; that is, the heteroevaluation, coevaluation and self-evaluation each have a weight. The partial results of this trifocal measurement of the indicators depend on the sum of their scaled values multiplied by their weights, together with the weights given both to the three types of questionnaires and to the valuators or evaluators.
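As a rough illustration of this weighting scheme (the weight values and scores below are hypothetical, not taken from CREAMINKA), the trifocal score of a single indicator could be computed as:

```python
# Hypothetical sketch of the trifocal weighting described above.
# Each evaluation type (hetero/co/self) has a weight, and each
# valuator within a type has a weight as well.

def weighted_mean(scores_weights):
    """Normalized weighted mean of (score, weight) pairs on a common scale."""
    total_w = sum(w for _, w in scores_weights)
    return sum(s * w for s, w in scores_weights) / total_w

def trifocal_score(hetero, co, self_, type_weights):
    """Combine the three questionnaire types for one indicator."""
    per_type = {
        "hetero": weighted_mean(hetero),  # one or more valuators
        "co": weighted_mean(co),          # one or more peers
        "self": weighted_mean(self_),     # the valued individual
    }
    return weighted_mean([(per_type[t], w) for t, w in type_weights.items()])

# Likert-style scores (1-5) with per-valuator weights:
score = trifocal_score(
    hetero=[(4, 0.6), (5, 0.4)],
    co=[(3, 0.5), (4, 0.5)],
    self_=[(5, 1.0)],
    type_weights={"hetero": 0.5, "co": 0.3, "self": 0.2},
)
```

Here the heteroevaluation averages to 4.4, the coevaluation to 3.5 and the self-evaluation to 5.0, so the indicator's trifocal score is 4.25 on the same scale.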

Therefore, the weighted values of the indicators maintain weighted relations or connections with the different specific competences of the model; in other words, an indicator can be related to one or more specific competences, and these specific competences in turn have one or more connections with the general competences. The final result obtained in each branch of the suggested competence evaluation/valuation model depends on the sum of the results evaluated with the different mathematical operations.

With the information mentioned above, it is suggested that "the sum of subjectivities (qualitative measurements) enables the attainment of objectivity (quantitative measurements)."

Within the process of evaluation of competences performed by the CREAMINKA subsystem, the skills that a person has can be qualified on a scale. **Figure 3** shows how a user of the system obtains a score for their general skills on a scale represented by the measure scale (MS); the right side of the figure presents how the weighting for a general competence is calculated. Starting from the right side, the assessment scores (fs) are related to the indicators, with the scale of each fs lying within the MS elements. Each fs score has a weight v used to calculate the weighting of the specific competence scores (SCS), which can also take a value within the MS scale. Finally, each specific competence score has a weight used to calculate the general competence scores (GCS).

**Figure 3.** Evaluation process schema of general competences.
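Following Figure 3, a minimal sketch of the roll-up from indicator scores (fs) through specific competence scores (SCS) to a general competence score (GCS) might look as follows; all weights and values here are illustrative assumptions:

```python
# Hypothetical roll-up of indicator scores to a general competence score.
# fs: indicator scores on the measure scale MS; v: weights toward a
# specific competence; each specific competence then carries a weight
# toward the general competence.

def rollup(pairs):
    """Normalized weighted sum of (value, weight) pairs."""
    return sum(x * w for x, w in pairs) / sum(w for _, w in pairs)

# Indicator scores (fs) and their weights (v) per specific competence:
sc1 = rollup([(4.0, 0.7), (3.0, 0.3)])   # SCS for specific competence 1
sc2 = rollup([(5.0, 0.5), (4.0, 0.5)])   # SCS for specific competence 2

# Specific competence scores weighted into the general competence (GCS):
gcs = rollup([(sc1, 0.4), (sc2, 0.6)])
```

With these sample values, sc1 = 3.7, sc2 = 4.5 and the general competence score is 4.18, still within the MS scale because each level is a normalized weighted average.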

#### **4.2. Ontology**


CREAMINKA's ontology, with its CO prefix, models ecosystems immersed in scientific research and coworking: an ecosystem where students, teachers and external collaborators interact within different internal and external processes and events, generating different types of scientific products. This section analyzes all the concepts related to the coworking ecosystem, a module that extends the functionality raised in the preliminary phases of CREAMINKA's ontology, where only scientific research within the research groups was considered.

Within the framework of the ontology development, we reused existing ontologies: FOAF [39], which describes concepts related to individuals and groups; BIBO [40], which describes bibliographic information of the documents that will be generated; VIVO [41], which describes the research community model and extends some of the ontologies named above; and BFO [42], a high-level ontology for the categorization of concepts that is used very frequently in the ontology-reuse phase when combining ontologies. In the case of the CREAMINKA ontology, concepts such as processes and generic independent entities were used to provide a grouping frame of reference.

#### *4.2.1. Definition of the ontology*

The discourse universe *D*, as seen in Eq. (1), contains all the elements of the coworking ecosystem, covering evaluation processes, events, classification of knowledge, scalar measurement units, projects and participation roles.

*D* = {*Process*, *Concept*, *Keyword*, *ResearchLine*, *Role*, *EvaluatorRole*, *EntrepreneurshipProject*, *Prototyping*, *MarketEvaluation*, *AssessmentProcess*, *Grant*, *Person*, *Group*, *Team*, *Organization*, *Competence*, *GeneralCompetence*, *SpecificCompetence*, *Event*} (1)

The main unary relations defined in the ontology are:

• Assessment process: process in which the assessment of different indicators is carried out, which has as output the scores of the indicators in relation to a scale.

• Measurement weight: represents the weight relationship that exists between two concepts.

• Competence: represents the abilities that a person has to develop something.

The main binary relationships that were modeled are described below:

• Has weight: indicates the weight relationship that exists between a class and its weight-class quantifier.

• Evaluated: indicates the evaluation process that was carried out on another process.

• Apply evaluation format: specifies the evaluation format on which the evaluation process is based.

• Score for: specifies the score that an indicator or test has.

• Has measurement unit: indicates the unit of measurement used as a reference in a score.

• Has indicator: specifies the indicator to which a concept is linked.

• Has subprocess: indicates the belonging of a process to a higher-level process.

• obo: participates in: defines the relationship between continuant objects and occurrent processes.

• obo: bearer of: specifies the relationship between an independent entity and a dependent entity.

The set of relations *R* is defined as seen in Eq. (2):

*R* = {*hasWeight*, *evaluated*, *applyEvaluationFormat*, *scoreFor*, *hasMeasurementUnit*, *hasIndicator*, *hasSubProcess*, *participatesIn*, *bearerOf*} (2)

Specification of the subconcepts of unary relationships in the ontology as seen in Eq. (3):

*O*₀ = *D* ∪ {*Process*(*x*) → *Project*(*x*), *Project*(*x*) → *EntrepreneurshipProject*(*x*), *Project*(*x*) → *ResearchProject*(*x*), *Process*(*x*) → *Prototyping*(*x*), *Process*(*x*) → *MarketEvaluation*(*x*), *Process*(*x*) → *AssessmentProcess*(*x*), *AssessmentProcess*(*x*) → *CoEvaluation*(*x*), *AssessmentProcess*(*x*) → *HeteroEvaluation*(*x*), *AssessmentProcess*(*x*) → *AutoEvaluation*(*x*), *Competence*(*x*) → *SpecificCompetence*(*x*), *Competence*(*x*) → *GeneralCompetence*(*x*)} (3)

Specification of domains and ranges of binary relations as seen in Eq. (4):

*O*₀ = *O*₀ ∪ {*bearerOf*(*x*, *y*) → *Person*(*x*) ∧ *Role*(*y*), *participatesIn*(*x*, *y*) → *Role*(*x*) ∧ *Process*(*y*), *hasWeight*(*x*, *y*) → *Thing*(*x*) ∧ *WeightMeasurement*(*y*), *evaluated*(*x*, *y*) → *AssessmentProcess*(*x*) ∧ *Process*(*y*), *applyEvaluationFormat*(*x*, *y*) → *AssessmentProcess*(*x*) ∧ *Test*(*y*), *scoreFor*(*x*, *y*) → *Score*(*x*) ∧ *Test*(*y*), *scoreFor*(*x*, *y*) → *Score*(*x*) ∧ *Indicator*(*y*), *hasMeasurementUnit*(*x*, *y*) → *Thing*(*x*) ∧ *MeasurementUnit*(*y*), *hasSubProcess*(*x*, *y*) → *Process*(*x*) ∧ *Process*(*y*)} (4)

#### *4.2.2. Conceptualization of competence assessment*

In order to analyze how the different concepts of the ontology developed for the CREAMINKA subsystem interact, we separate the concepts associated with the different levels, starting with the conceptualization of the weights, which work as a complex relationship between concepts at the different levels of the competence evaluation model. Then, we address how such levels are related within the evaluation model, within an evaluation process, and the actors involved. Finally, we analyze how assessments take place within the different processes that normally occur within the ecosystem of a StartUPS.

Within the competence assessment model, we intend to move from a qualitative assessment to an attempt at a quantitative assessment, as mentioned above. The concept that links the different components between levels of the model, which are represented as classes, is referred to as weight measurement. This is a complex concept, since it works as a link entity that qualifies the relationship between two classes, giving weight to the different associated concepts, as can be observed in **Figure 4**. When analyzing the domain of the relation has weight, we discovered concepts that were implicit in the scheme of the competence evaluation model: the ontology has to consider the evaluator role within the assessment process and link it to a weight.

**Figure 4.** Conceptualization of weights at the levels of the competence assessment model.

The "assessment process", as seen in **Figure 5**, includes both the "person" or "persons" who have been evaluated and the evaluator, distinguishing these persons by the role they have within the process. In this way, the CO ontology extends the roles raised in the VIVO ontology, adding the "Assessed Entity Role" and "Evaluator Role". The evaluator role is not directly related to the assessment process since, as we saw in the previous section, the relationship between these two concepts is complex and has to be quantified through "Weight Measurement". This evaluation process has to evaluate a process that, within the StartUPS ecosystem, is usually an entrepreneurial project or a subprocess of it, considering the members of the project. The evaluation process must "have outputs", which in this case are the "scores" of the indicators or "tests" evaluated with reference to a "measurement unit". To classify directly whether a score is a partial score or a total score, equivalence rules were defined in the ontology: if the entity reached through "score for" is an indicator, the score is known to be a partial score; if the range entity is a test, it is known to be the total score of the test. The outputs of the evaluation process that are scalar measures have to be referenced with a scale, as mentioned above; this role is fulfilled by the concept of measurement unit, of which there can exist instances such as the Likert scale. When performing an evaluation process, a test that links indicators through the relationship "has indicator" is always taken as reference.

**Figure 5.** Schematic diagram of the competences evaluation process in the ontology.

Previously, we discussed the different types of evaluation that form the trifocal evaluation/assessment. Within the CO ontology, this knowledge is inferred through the definition of axioms within the equivalences, to distinguish between three types of processes that are subclasses of assessment process. These equivalence rules are:

• Coevaluation: the person who is the bearer of an evaluator role participates in a process by means of a role, and the process is evaluated by an assessment process that is linked to the evaluator role, and that person does not have a role that participates in the assessment process.

• Self-evaluation: the person who is the bearer of an evaluator role participates in a process through a role, and the process is evaluated by an assessment process that is linked to the evaluator role, and that person has a role that participates in the assessment process.

• Heteroevaluation: the person who is the bearer of an evaluator role does not participate in a process through a role, and the process is evaluated by an assessment process that is linked to the evaluator role, and that person does not have a role that participates in the assessment process.
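A rough procedural reading of these three rules can be sketched as follows; the ontology expresses them as OWL equivalence axioms, so the boolean helpers below are only a hypothetical paraphrase:

```python
# Hypothetical paraphrase of the three equivalence rules above.
# evaluator_participates: the bearer of the evaluator role participates
#   (via some role) in the evaluated process.
# evaluator_in_assessment: that same person also holds a role that
#   participates in the assessment process itself.

def classify_assessment(evaluator_participates: bool,
                        evaluator_in_assessment: bool) -> str:
    if evaluator_participates and evaluator_in_assessment:
        return "AutoEvaluation"    # self-evaluation
    if evaluator_participates and not evaluator_in_assessment:
        return "CoEvaluation"
    if not evaluator_participates and not evaluator_in_assessment:
        return "HeteroEvaluation"
    return "Unclassified"          # combination not covered by the rules

kind = classify_assessment(True, True)  # the self-evaluation case
```

The two boolean conditions are exactly what distinguishes the three subclasses: self-evaluation is the only case where the evaluator also participates in the assessment process itself.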

The relationships that the evaluations have within the coworking ecosystem were modeled in the ontology and can be observed in **Figure 6**, which shows how people fulfill different roles within the ecosystem through a participation relationship in events such as workshops, training courses, boot camps or other instances, in which the skills acquired are evaluated through an assessment process. In addition, within the processes we find the entrepreneurship projects in which people fulfill a role; from these projects, subprocesses such as prototyping can be broken down, and both the entrepreneurship project and the prototyping process can be evaluated.

As discussed in this section, the approaches presented (from the relationship of weights at the different levels of the competence assessment model, to the actors within the evaluation process, to the relationship of the evaluation process with the different occurrences in which the actors of the coworking ecosystem take part) allow us to approximate the competence assessment of an actor who participates in the different events and entrepreneurship projects modeled in the ontology.

**Figure 6.** Schematic diagram of the evaluations of the different process performed in the coworking ecosystem in the ontology.

**5. Experimentation and preliminary results**

In order to check the traceability of people within the different processes performed in the coworking ecosystem modeled as part of the CREAMINKA ontology, a SPARQL query, as shown in **Figure 7**, is run against the database; the result, shown in **Table 1**, lists each person next to the role with which they participate in a process such as an entrepreneurship project, boot camp or training workshop.

SPARQL query on the actors' participation in the coworking ecosystem processes:

**Figure 7.** SPARQL query of the traceability of a person in the coworking ecosystem.

Obtained results:

**Table 1.** People traceability results obtained with the execution of the SPARQL queries (coworking ecosystem).

In order to provide a tool to analyze the development of both the general and the specific competences of students/participants involved in entrepreneurship and/or research processes, we have designed two metrics. The first metric determines the level of development that a student/participant achieves for a general competence, as seen in Eq. (5):

$$GC_s(St_i, GC_j) = \frac{1}{\sum_{\forall w \in \vec{W}_j} w} \sum_{k=1}^{N} w_k \cdot S(St_i, SC_j^k) \tag{5}$$

where:

• $GC_s(St_i, GC_j)$ represents the score achieved by the *i*th-student $St_i$ for the *j*th-general competence $GC_j$.

• $\vec{W}_j$ is a vector of weights related with the *j*th-general competence $GC_j$.

The number of general competences is defined by the experts in higher education, entrepreneurship and research.

On the other hand, the second metric allows us to know the level of development that students/participants achieve for each of the specific competences that make up a general competence. For this, the following equation is used, as seen in Eq. (6):

$$S(St_i, SC_j^k) = \frac{1}{\sum_{\forall v \in \vec{V}_j} v} \sum_{\forall (f, v) \in (\vec{F}_j, \vec{V}_j)} f \cdot v \tag{6}$$

where:

• $\vec{V}_j$ is a vector of weights related with the *j*th-specific competence $SC_j^k$.

• $\vec{F}_j$ is a vector that contains all the indicators related with the *j*th-specific competence $SC_j^k$.

On this basis, we have used the metrics described above to create a module that performs clustering analysis. This module allows system users to test different values of the weights, as well as to generate dendrograms and cluster graphics. This information is useful for decision-making by managers and research/entrepreneurship group directors.

In **Figure 8**, we can see an example of a dendrogram generated by the system from the specific competences and indicators retrieved from 20 participants in entrepreneurship projects, boot camps and training workshops. The information fed to the clustering analysis module is described below:

• The participants are enrolled in different careers such as systems engineering, electrical engineering, business administration, etc.

**Figure 8.** Dendrogram that is generated from the analysis of indicators of the participants and students of research and entrepreneurship groups.

As shown in **Figure 8**, if we cut the dendrogram at a distance of 1.33, four groups are formed. With this information we can observe, for example, that participants 8 and 10 have an overall similar profile in their specific competences, although they come from different careers (social communication and mechanical engineering).

**Figure 9.** Dendrogram that is generated from the analysis of indicators of the participants and students of research and entrepreneurship groups.

On the other hand, in **Figure 9**, we can observe how new groups are formed when the specific competences are considered. As we can see, there are three clearly defined groups in which characteristics such as leadership, vision and entrepreneurship can be established.

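The dendrogram analysis described above can be reproduced in outline with SciPy's hierarchical clustering; the competence-score matrix below is synthetic stand-in data, not the study's actual scores:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Synthetic stand-in for 20 participants x 5 specific-competence scores
# on a 1-5 scale (the real scores would come from Eqs. (5)-(6)).
scores = rng.uniform(1.0, 5.0, size=(20, 5))

# Ward linkage builds the dendrogram structure;
# scipy.cluster.hierarchy.dendrogram(Z) would plot it.
Z = linkage(scores, method="ward")

# Cutting the tree into a fixed number of clusters yields the
# participant groups, analogous to the cut at distance 1.33 in Figure 8.
groups = fcluster(Z, t=4, criterion="maxclust")
```

Changing the indicator weights before building `scores` reshapes the tree, which is exactly the what-if analysis the clustering module offers to managers and group directors.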