*Human 4.0 - From Biology to Cybernetic*

4. The time constraints: (a) the need to adjust the granularity (level of detail) of the project according to the research question; (b) the need to introduce pre-tests to provide last-minute adjustments on site; (c) the need to carefully evaluate the time allotted for the project; and, thus, (d) the need for scheduling.

These strategies are focused on meeting the clients' expectations in terms of time, budget, and UX issues.

| Details of the experiment | | All projects | Mean and median per project |
|---|---|---|---|
| Execution | Experimental design | –4 conditions; 1–15 tasks; –4 sub-tasks; –4 interviews; –3 questionnaires | 2 conditions; 4 tasks; 2 sub-tasks; 1 interview; 2 questionnaires |
| | Tools used | 3–4 neurophysiological tools; 2–4 psychometric tools; observation (performance indicators) | 2 neurophysiological tools; 3 psychometric tools; observation (performance indicators) |
| | Testing time | 30 min to 1 h | 1 h |
| | Participants | 144 typical users | 12 participants |
| | Time | 7–14 days of preparation; 2–3 days of data collection; 2–3 days of analysis | 7 days of preparation; 3 days of data collection; 2 days of analysis |
| | Result | Final report, 799 pages | 66 pages |

**Table 2.**
*Statistics for the totality of the Sprint projects and mean per project.*

## **4.1 Based on the nature of the research**

### *4.1.1 Research question type*

Every UX research project begins with a question. The nature of this question has a direct impact not only on the completion of the UX tests but also on their complexity. This complexity depends on the nature of the stimuli studied and on the level of authenticity of the desired context of use. Indeed, the research question determines the nature of the stimuli, that is, whether they are static or dynamic. For example, studying the navigation of a Website on a computer screen entails the deployment of static stimuli, which, *a priori*, yield data that are easy to analyze. Static stimuli require shorter coding and analysis time than dynamic stimuli, such as a game application on a mobile device. The same applies to the choice of data collection tools deployed: coding and analyzing data from an eye tracker does not represent the same workload as coding and analyzing data from an electroencephalography (EEG) headset.

Moreover, the research question directly influences the choice of the context of use in which the experiment takes place and the level of authenticity to be respected. Inevitably, undertaking an experiment in a real-life context does not call for the same resources (material and human) and the same time frame as its realization in a laboratory context. Dynamic stimuli and authentic contexts of use are the most important limitations.

### *4.2.2 Technical: hybrid data collection method*

The anticipation of measurements and results is also at the center of the agile/UX process developed by the research team. In parallel with the definition of the mandate and the division into use scenarios, the research team continually tries to foresee the structure of the presentation of the results while remaining flexible. Empirical data, both implicit (lived experience assessed with psychophysiological measures) and explicit (perceived experience assessed with self-reported questionnaires and interviews), are considered. This anticipation is carried out through a systematic methodology of planned codification of the psychophysiological (emotional and cognitive) measures during the clarification of the mandate and the experimental design. The triangulation of measures also makes it possible to anticipate the potentially interesting results that will answer the client's questions. This triangulation is achieved through a mosaic of proven collection methods [5, 30, 31]. The use of several data collection technologies of variable nature (physiological, psychological, and behavioral) ensures an enriched data collection. Consequently, this anticipatory effort keeps the UX team one development cycle ahead of the others and accelerates the whole process of analyzing the collected data. A comparative empirical data methodology is also deployed: by comparing different conditions of use, design elements, or even groups of users, decision-making becomes more objective, concrete, and easier for the team of designers.

*"For example, in the 7th case study, the project involved the collection of implicit data from eye tracking (Tobii), recognition of facial expressions (Facereader, Noldus), electrodermal activity (Biopac) as well as electrocardiogram (Biopac) and explicit data from usability scale questionnaires, performance indicators and interviews. This arsenal of tools was deployed to understand the "what and when" of interaction by triangulating valence (positive or negative), activation (weak or strong) and cognitive (easy or difficult) reactions, as well as the why of interaction through the verbalization of perceived experiences. The hybridization of all the data on the cognitive and emotional load thus created a global portrait of the interactive experience between the users and the product." (UX Lead)*
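The triangulation described above boils down to aligning every implicit and explicit signal on a common task timeline so that reactions can be compared side by side. The sketch below illustrates the idea in Python with hand-made samples; the variable names, sample values, and windowing logic are illustrative assumptions, not the lab's actual pipeline or the export formats of the Tobii, FaceReader, or Biopac tools.

```python
from statistics import mean

# Hypothetical time-stamped (seconds, value) samples for two implicit signals.
eda = [(0.0, 0.1), (0.5, 0.4), (1.0, 0.9), (1.5, 0.7)]        # electrodermal activity
valence = [(0.0, 0.2), (0.5, -0.1), (1.0, -0.4), (1.5, 0.3)]  # facial-expression valence

def window_mean(samples, start, end):
    """Average the samples whose timestamps fall in [start, end)."""
    values = [v for t, v in samples if start <= t < end]
    return mean(values) if values else None

def triangulate(task_windows, signals):
    """Align every signal on the same task windows, one row per task,
    so that arousal, valence, etc. can be read side by side."""
    rows = []
    for task, start, end in task_windows:
        row = {"task": task}
        for name, samples in signals.items():
            row[name] = window_mean(samples, start, end)
        rows.append(row)
    return rows

tasks = [("task 1", 0.0, 1.0), ("task 2", 1.0, 2.0)]
report = triangulate(tasks, {"arousal": eda, "valence": valence})
```

Each row of `report` then summarizes one task across all signals, which is the shape in which the explicit measures (questionnaires, performance indicators) can be joined in for comparison across conditions.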

*Towards Agility and Speed in Enriched UX Evaluation Projects*

*DOI: http://dx.doi.org/10.5772/intechopen.89762*

## **4.3 Based on results**

### *4.3.1 Data visualization*

Finally, the lab team has developed a unique and innovative way of presenting its results to facilitate the transmission of knowledge to clients and development teams. By aggregating and triangulating the arsenal of empirical data collected, the laboratory's researchers have created a methodology that simplifies the data and makes them more accessible. The outcome of this methodology is the visualization of interactions through UX heatmaps [5, 30, 31]. These heatmaps offer an "easy to interpret UX evaluation tool which contextualizes users' signals while interacting with a system. Using these signals to infer the users' emotional and cognitive states and mapping these states on the interface provide researchers and practitioners with a useful tool to contextualize users' reactions" [10].

*"For example, in the 8th case study, the presentation of the final report including the results of the UX research was carried out with the client's design team and several decision-makers. Using empirical and perceptual data visualization tools, managers from different departments who do not encounter this type of research on a daily basis quickly realized which of the products studied best met the usability objectives, thus having clear facts with which to make their decision." (UX Lead)*
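Conceptually, a UX heatmap of this kind projects the states inferred from users' signals onto regions of the interface. The sketch below is a minimal illustration of that mapping, assuming hypothetical fixation records (pixel coordinates plus an inferred valence score); the published method [5, 30, 31] works from actual gaze and psychophysiological recordings and far richer inference.

```python
from collections import defaultdict

# Hypothetical fixation records: (x, y) in pixels and an inferred
# valence in [-1, 1]. These values are made up for illustration.
fixations = [
    (120, 80, +0.6),   # pleasant reactions near a menu area
    (130, 90, +0.4),
    (640, 400, -0.7),  # negative reactions around a form field
    (650, 410, -0.5),
]

def ux_heatmap(fixations, cell=100):
    """Bucket fixations into a coarse grid over the interface and average
    the inferred valence per cell, showing where users reacted well or badly."""
    sums, counts = defaultdict(float), defaultdict(int)
    for x, y, valence in fixations:
        key = (x // cell, y // cell)
        sums[key] += valence
        counts[key] += 1
    return {key: sums[key] / counts[key] for key in sums}

heatmap = ux_heatmap(fixations)
```

Rendering the per-cell averages as colors over a screenshot gives the decision-makers described in the quote an at-a-glance view of which interface regions met the usability objectives.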

## **4.4 Based on time constraints**

### *4.4.1 Granularity*

The granularity of the project follows the definition of the mandate. Generally, the research team uses a list of questions from the clients as a baseline and translates them into defined actions. In other words, the UX team restructures the project by dividing it into different evaluation conditions. These conditions typically result in distinct usage scenarios that are not necessarily related to product functionality. Such condition divisions allow the UX team to define the evaluation markers, as well as the performance indicators, more easily, in order to facilitate the assessment of the overall and specific user experience.

*"For example, in the 2nd case study, the customer wished to evaluate the efficiency and efficacy of three functionalities of its new product in development. After numerous exchanges, our research team translated this mandate into an operational experimental design that included the testing of both their old and new products with two different comparable evaluation conditions. The first one consisted in testing the 3 functionalities on the old product with existing users in order to establish a comparison baseline. Then, by deploying the theory of learning, the three functionalities were tested randomly three times on the new product. The third repetition was the one that was compared between products." (UX Lead)*
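The division of a mandate into conditions, scenarios, and tasks can be pictured as a simple data structure. The sketch below mirrors the 2nd case study's design under stated assumptions: the class layout and the functionality names are hypothetical, not the lab's internal vocabulary.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    markers: list = field(default_factory=list)     # evaluation markers
    indicators: list = field(default_factory=list)  # performance indicators

@dataclass
class Condition:
    scenario: str
    tasks: list

# The 2nd case study's design as described above: a baseline on the old
# product, then three randomized repetitions of each functionality on the
# new product, with the third repetition used for the comparison.
functionalities = ["functionality A", "functionality B", "functionality C"]
design = [
    Condition("old product (baseline, existing users)",
              [Task(f) for f in functionalities]),
    Condition("new product (3 randomized repetitions, 3rd one compared)",
              [Task(f) for f in functionalities * 3]),
]
total_tasks = sum(len(c.tasks) for c in design)
```

Making the conditions explicit in this way is what lets the team attach markers and performance indicators to each task before collection starts, as the paragraph above describes.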

### *4.4.2 Pre-test*


As each project has its own specificities and distinguishes itself from the others, pre-tests are always necessary. Undertaken in a short time span, these pre-tests allow the UX team to make final adjustments before starting the data collection with the participants. Three pre-tests are usually performed. The first is a technical test to ensure that all collection and analysis instruments are functional and set up properly to facilitate collection. The second is done with a member of the team to evaluate the timing and fluidity of the experimental task. The third is done with an external participant to ensure that each step of the experimental task is understood, and to avoid any misunderstanding during the data collection.

*"For example, in the 6th case study, while performing the pre-tests, our research team realized that one of the tasks could not be done in the sequence that was proposed initially, and this caused a major change in the experimental design and protocol. The pre-tests prevented loss of data from one of the recruited participants." (Lab Manager)*

### *4.4.3 Standardization of the planning and methodology*

With each successive Sprint project, the research team gradually standardized its process and practice to improve execution in terms of speed, efficiency of human resources, and costs. Indeed, the team put together a concise timetable detailing every step of a Sprint project, in which responsibilities are assigned to the research team and the design client, and deadlines are specified. On one hand, this timeline presents the elements of macro-planning in terms of weeks. Depending on the maturity and knowledge of the design client about their context of intervention, as well as the product or service they wish to test, this preparation phase is variable and flexible. Furthermore, in the academic research context, the submission for ethics certification requires many weeks of anticipation, to ensure that all approvals have been obtained before starting the user experience testing. However, if the Sprint project is a sequel to a previous one, or if a design client has already carried out a Sprint project and wishes to conduct a second one, this preparatory phase gradually decreases in time as it increases in efficiency. On the other hand, the elements of micro-planning in terms of days and hours, such as details of the execution, are specified and are the main interest of this standardized timeline (**Table 3**).

This normalized timeline presents the critical path of a Sprint project: (1) project kick-off; (2) mandate definition; (3) experimental design fine-tuning; (4) pre-test and validation; (5) data collection; (6) codification; (7) analysis; and (8) final presentation. Aiming for complete transparency, this normalized timeline is intended to help all project stakeholders understand the critical steps that could delay the project and identify the persons in charge of the various steps, so as to avoid any misunderstanding and repetition of effort. Moreover, it can be taken as a list of actions to be considered when starting a UX research project.
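Because the eight steps form a strict sequence, the critical path can be sketched as a chain in which each step starts as soon as the previous one ends, so any delay propagates to the final presentation. The durations below are illustrative placeholders, not the actual values of Table 3.

```python
# Steps of the critical path, with illustrative durations in days.
steps = [
    ("project kick-off", 1),
    ("mandate definition", 2),
    ("experimental design fine-tuning", 2),
    ("pre-test and validation", 1),
    ("data collection", 2),
    ("codification", 1),
    ("analysis", 2),
    ("final presentation", 1),
]

def schedule(steps, start=0):
    """Roll the steps out sequentially, returning (step, start_day, end_day)."""
    out, day = [], start
    for name, duration in steps:
        out.append((name, day, day + duration))
        day += duration
    return out

plan = schedule(steps)
total_days = plan[-1][2]  # end day of the final presentation
```

A one-day slip in any single step pushes `total_days` out by exactly one day, which is why the timeline flags the steps most likely to delay the project.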

### *4.4.4 Time allotted*

Another important aspect to consider during Sprint projects is the time allotted for carrying out the tests. This aspect turns into one of the limitations of agile/UX research. Indeed, for a Sprint project to be realized in 1 week, the experience of using the evaluated product or service can hardly exceed 1 h without a direct consequence on the realization and costs. The time allotted for data


| Phase | Task | Responsibility |
|---|---|---|
| | Planning of the markers | UX team |
| **Pre-tests and validation** | Final prototype delivery | Client |
| | Prototype validation | UX team |
| | Technical pre-test | UX team |
| | Participants' list | Client |
| | Compensation | UX team or client |
| | Last-minute adjustments on the prototype | Client |
| | Internal pre-test | UX team |
| | Internal validation of the experimental design | UX team |
| | External pre-test | UX team |
| | External validation of the experimental design | UX team |
| | Protocol adjustment | UX team |
| **Data collection** | Day 1 of data collection | UX team |
| | Day 2 of data collection | UX team |
| **Codification** | Extraction | UX team |
| | Codification | UX team |
| **Analysis** | Analysis | UX team |

**Table 3.**
*Standardized timeline of a Sprint project (detail). In the original layout, each task is scheduled on a column running from M-1 (one month before testing) through D-14, D-7, D-5, D-3, D-2, and D-1 to the days of the Sprint week itself (D1–D5 and D7).*
