**5. Evaluation**

This section presents the evaluation strategy and explains how it will be conducted. Subsequently, the research results are presented and discussed.

## **5.1 Evaluation strategy**

Typically, usability in human-technology interaction is defined in terms of ISO 9241-11 [10]. Because the criteria of that standard are difficult to measure directly, the System Usability Scale questionnaire (*SUS*) is used instead to answer the question of whether the DAO principle has an impact on usability [11, 12]. This questionnaire was given to each participant after each game.

*A Usability Analysis of the DAO Concept Based on the Case Study of a Blockchain Game DOI: http://dx.doi.org/10.5772/intechopen.105347*

Each participant was asked whether the DAO principle had an impact on usability and, if so, what that impact was. To this end, the questionnaire combined quantitative questions (the Likert-scale items of the SUS, from which the SUS score is computed) with qualitative questions (free-text fields).
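The SUS score mentioned above is computed from the ten Likert items in a standard way: positively worded (odd-numbered) items contribute their rating minus one, negatively worded (even-numbered) items contribute five minus their rating, and the sum is scaled to 0–100. A minimal sketch in Python; the function name is illustrative and the scoring rule comes from the SUS literature, not from this chapter:

```python
def sus_score(responses):
    """Compute the SUS score for one participant.

    `responses` is a list of ten Likert ratings (1-5) in the standard
    SUS item order: odd-numbered items are positively worded,
    even-numbered items negatively worded.
    """
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("expected ten Likert ratings between 1 and 5")
    contributions = [
        r - 1 if i % 2 == 0 else 5 - r  # 0-based index: even index = odd-numbered item
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5  # scale the 0-40 raw sum onto 0-100
```

For example, a participant who fully agrees with every positive item and fully disagrees with every negative one (`[5, 1, 5, 1, 5, 1, 5, 1, 5, 1]`) receives a score of 100, while neutral answers throughout (`[3] * 10`) yield 50.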

After all participants had completed the questionnaire, the responses were evaluated descriptively in order to measure this influence. The qualitative statements were evaluated and presented using a quantitative content analysis, and the SUS score was used to classify the usability.

The results of this survey are discussed in the following sections, after completion of the statistical analysis.

## **5.2 Results**

The following section describes, evaluates, and then discusses the results of the evaluation. The questionnaire was open from January 31 to February 13, 2022. Within this period, 42 persons answered it, of whom 34 were male and 8 were female. Twenty-seven of the 42 respondents were under the age of 30; the remainder were evenly distributed up to the age of 64. Approximately 83% of the respondents had at least a general qualification for university entrance (e.g., A-levels).

The SUS score is determined from all the answers given in the questionnaire and is intended to provide a quick overview of the usability of the project. Its reliability, measured by Cronbach's alpha, is 0.92 (out of a maximum of 1). The questionnaire therefore has very high reliability and thus fulfills one of the most important quality criteria for a questionnaire [11, 13]. The items ask about ease of use, the feeling of security, the complexity of the application, and the need for support during use.

The SUS score is a number between 0 and 100; in general, the higher the value, the better the usability. As shown in **Figure 3**, all values below 50 are considered *awful*; from 68 onwards, usability is considered *okay*; higher ranges are rated *good* or *excellent*. In this work, a score of 70 was achieved, so the usability of the *Connect Four* game can be rated as *good*.

In addition to the SUS items, the free-text fields were also evaluated. For example, 17 of the 42 respondents stated that they liked playing as a team. Likewise, the fun of the game (12 out of 42), the design (four out of 42), the handling (four out of 42), and the low requirements for participating in the game (one out of 42) were explicitly praised. Four people did not provide any further information.
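The reliability figure reported above can be reproduced for any response matrix with the standard Cronbach's-alpha formula, which compares the sum of the per-item variances to the variance of the participants' total scores. A minimal sketch in Python; the function name and the toy data are illustrative, not taken from the chapter's data:

```python
from statistics import variance  # sample variance (ddof = 1)

def cronbach_alpha(scores):
    """Cronbach's alpha for a list of per-participant item-score rows.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    where k is the number of items.
    """
    k = len(scores[0])
    item_vars = [variance(col) for col in zip(*scores)]      # variance of each item
    total_var = variance([sum(row) for row in scores])       # variance of row totals
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```

As a sanity check, perfectly consistent items (every participant giving the same rating on all items) yield an alpha of 1, the theoretical maximum against which the chapter's 0.92 is judged.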

**Figure 3.** *SUS score.*

Thirty-four people used the free-text fields to indicate what could be improved. Strikingly, 16 of the 42 respondents found the long delay after placing a vote annoying. Communication within the teams (two out of 42), the design (seven out of 42), and the cryptic user names (six out of 42) were also rated as poor. Three people disliked playing in teams and would have preferred to play on their own.
