*3.2.4 User testing*

Informally, members of two potential user bases were queried about their reactions to the differences between the two dashboards: members of the academic public health community, and members of the MA public.

#### **Figure 5.**

*Alternative dashboard solution. Note: In our new version, two tabs are created (see "A"). The figure shows the first tab titled "ICU Rate Explorer". The second tab, titled "data collection", has information about the design of the dashboard and links to the original code. Each of the hospitals is indicated on the map by a color-coded icon that can be clicked on to display a bubble. The legend by "B" displays our color-coding scheme. When clicking on a hospital icon, hospital-level metrics are shown in a bubble, and there is a link that leads to the display of intensive care unit (ICU)-level metrics (see "C").*
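The interaction pattern described in the caption (color-coded hospital icons that, when clicked, open a bubble of hospital-level metrics with a link to ICU-level detail) can be sketched in R with the *leaflet* package. The code below is an illustrative sketch with made-up hospital data; it is not our actual implementation, which is linked from the dashboard's second tab.

```r
# Illustrative sketch (not the actual implementation): a clickable,
# color-coded hospital map in the spirit of Figure 5, using leaflet.
library(leaflet)

# Hypothetical example data; real hospital names, coordinates, and metrics
# would come from the NHSN extract.
hospitals <- data.frame(
  name = c("Hospital A", "Hospital B"),
  lat  = c(42.36, 42.34),
  lon  = c(-71.06, -71.10),
  sir  = c(0.8, 1.4)  # example hospital-level metric shown in the popup
)

# Color-code icons by whether the metric is better or worse than baseline,
# mirroring the legend at "B" in Figure 5.
hospitals$color <- ifelse(hospitals$sir < 1, "green", "red")

leaflet(hospitals) |>
  addTiles() |>
  addCircleMarkers(
    lng = ~lon, lat = ~lat, color = ~color, radius = 8,
    # Clicking a marker opens a bubble with hospital-level metrics (see "C"),
    # from which a link could lead to ICU-level metrics.
    popup = ~paste0("<b>", name, "</b><br>SIR: ", sir,
                    "<br><a href='#'>View ICU-level metrics</a>")
  )
```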


When the dashboard redesign was pitched as a project to public health academics, it was dismissed as an unimportant escapade for several reasons, including the lack of agreement on terminology and patient safety priorities, the challenge of undercounting of HAIs in NHSN data, and differential reporting accuracy between teaching and non-teaching hospitals. The academics also acknowledged that the system for tracking, addressing, and preventing HAIs in the US is hopelessly broken, and that it therefore seems a waste of time to prop up such a system when it produces inaccurate data.

A few members of the MA public who are familiar with technology also provided informal feedback about the utility of the dashboard from a patient standpoint. They reported that the alternative solution was more intuitive than the original, and did a better job of representing the highly limited data from the NHSN.

These differences in reactions underscore the challenge of OGD and of ensuring that public dashboards are developed and deployed in good faith. Those from the public health field argued that since the system is broken and the data are inaccurate, the data should be dismissed, while members of the public felt that since the data exist, they should be accessible, even if they are not completely accurate. This not only highlights the differing perspectives of those on either side of the information asymmetry; it also glaringly illustrates how those who are held accountable through use of the data view dashboarding differently than those who use the data for oversight and accountability.

## **4. Application**

We wanted to compare the original HAI dashboard with the one we developed based on the good faith principles described earlier. We started by creating the framework presented in **Table 2**, which outlines the good faith and bad faith characteristics of public dashboards.

Using this framework, we applied a rating system. We chose zero to represent "neither bad faith nor good faith", −5 to represent "mostly bad faith", and +5 to represent "mostly good faith". Then, based on our experience and available information, we rated the original MA HAI dashboard and our alternative dashboard solution and compared the ratings. To experiment with applying our framework to another public dashboard, we used the information published in the article described earlier to rate the traffic dashboard in Rio de Janeiro [2]. Our ratings appear in **Table 3**.
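To make the roll-up concrete, the short R sketch below encodes the −5 to +5 scale and sums per-characteristic ratings into a total per dashboard, as done in **Table 3**. The characteristic names and scores in the sketch are placeholders for illustration only, not our actual ratings.

```r
# Illustrative sketch of the rating roll-up: each characteristic from the
# rubric (Table 2) is scored from -5 ("mostly bad faith") through 0
# ("neither bad faith nor good faith") to +5 ("mostly good faith"),
# and the scores are summed per dashboard.
library(tidyverse)

# Hypothetical ratings -- placeholders, not the actual Table 3 values.
ratings <- tribble(
  ~characteristic,     ~dashboard,     ~score,
  "Data completeness", "MA Hosp",          -4,
  "Data completeness", "MA Hosp Alt.",      4,
  "Usability",         "MA Hosp",          -3,
  "Usability",         "MA Hosp Alt.",      5
)

# Total good-faith score per dashboard, as summed in Table 3.
ratings |>
  group_by(dashboard) |>
  summarise(total = sum(score))
```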

As shown in **Table 3**, using **Table 2** as a rubric and our rating scale, we were able to rate each dashboard and assign a score. We also documented, in the comments column of the table, the evidence on which we based each score. **Table 3** demonstrates that this framework can be used to compare two alternative versions of a public dashboard displaying the same data, as well as two completely different public dashboards. The total scores show that while our redesigned prototype of the HAI dashboard had a level of good faith implementation similar to that of the Rio traffic dashboard (scores of 26 vs. 23, respectively), the original HAI dashboard had a very low level of good faith implementation compared to the other two (score of −20).

## **5. Discussion**

As is consistent with the global trend, the state of MA implemented an OGD requirement to share HAI data with the public by posting a public dashboard on a web page.




#### **Table 3.**

*Application of rating system. Note: MA Hosp = original hospital-acquired infection (HAI) dashboard from the Commonwealth of Massachusetts (MA), MA Hosp Alt. = alternative solution, Rio = Rio traffic dashboard [2], and NHSN = National Healthcare Safety Network.*

#### **Figure 6.**

*Example of visualization of framework score comparison. Note: MA Hosp = original hospital-acquired infection (HAI) dashboard from the Commonwealth of Massachusetts (MA), MA Hosp Alt. = alternative solution, and Rio = Rio traffic dashboard [2].*

However, as residents of MA, we found that this dashboard did not serve our information needs, and essentially obscured the data it was supposed to present. To address this challenge, we not only redesigned the dashboard as a new prototype, but also tested our proposed framework for evaluating the level of good faith in public dashboards by applying it. Using our proposed framework and rubric, we evaluated the original HAI dashboard, our redesigned prototype, and a public dashboard on another topic from the scientific literature for their level of good faith implementation. Through this exercise, we demonstrated that the proposed framework is reasonable to use when evaluating the level of good faith in a public dashboard.

The next step in the pursuit of holding governments accountable for meeting OGD standards in public dashboards is to improve upon this framework and rubric through rigorous research. As part of this research, groups of individuals could be asked to score dashboards on each of these characteristics, and the results could be summarized to allow an evidence-based comparison between dashboards. Results can be visualized in a dumbbell plot (using the R packages *ggplot2*, *ggalt*, and *tidyverse* [42–44]), which we have done with our individual scores but which could also be done with summary scores (**Figure 6**); a sketch of this approach follows.
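For illustration, a **Figure 6**-style dumbbell plot can be produced along the following lines with *ggplot2* and *ggalt*. The characteristics and scores below are placeholders, not our actual **Table 3** ratings, and the sketch compares only two of the three dashboards to keep it minimal.

```r
# Illustrative sketch of a Figure 6-style dumbbell plot using ggplot2 + ggalt.
# Scores are placeholders, not the actual Table 3 ratings.
library(tidyverse)
library(ggalt)

scores <- tribble(
  ~characteristic,     ~ma_hosp, ~ma_hosp_alt,
  "Data completeness",       -4,            4,
  "Usability",               -3,            5,
  "Transparency",            -2,            4
)

# Each row draws one dumbbell: the original dashboard's score at one end
# and the alternative dashboard's score at the other.
ggplot(scores, aes(y = characteristic, x = ma_hosp, xend = ma_hosp_alt)) +
  geom_dumbbell(size = 1.2, colour_x = "firebrick", colour_xend = "steelblue") +
  labs(
    x = "Good-faith rating (-5 = mostly bad faith, +5 = mostly good faith)",
    y = NULL,
    title = "Original MA HAI dashboard vs. alternative (illustrative scores)"
  )
```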

As visualized in **Figure 6** and summed in **Table 3**, our scoring system suggested that the alternative HAI dashboard we developed was implemented with a level of good faith (score = 26) similar to that of the Rio traffic dashboard (score = 23), while the original HAI dashboard appears not to have been implemented in good faith (score = −20) and may represent a governmental attempt to hide or obscure uncomfortable data. This exercise shows that the framework and rubric we developed can be used to compare the level of good faith in public dashboards, and to provide evidence-based recommendations on how governments can improve them so that they meet both the spirit and the letter of OGD requirements.

## **6. Conclusion**

In conclusion, in this chapter we describe the challenge of holding governments accountable for developing public dashboards that meet OGD requirements in a way that also serves the public's information needs. To address this challenge, we propose a framework of six principles of good faith OGD by which public dashboards can be evaluated, to ensure that data shared by the government under OGD policies and laws are shared in good faith.


We also demonstrate applying this framework to the use-case of a public dashboard intended to help residents of MA in the US compare and select hospitals based on their HAI rates. As a demonstration, we present our redesign of the dashboard, then use a rubric based on the framework to score and compare the original dashboard and our alternative in terms of their levels of good faith OGD. We also demonstrate using the rubric on a published use-case from the literature. Because our framework and rubric provide a reasonable starting point for evaluating and comparing the level of good faith in public dashboards, we strongly recommend that future research on this topic consider them and build upon them by gathering evidence in the field.
