**2. Guidance for the design of public dashboards**

The COVID-19 pandemic brought to attention a longstanding need for well-designed dashboards in public health and medicine [4]. It also brought to light that there are no uniform guiding principles behind developing publicly-facing dashboards intended to serve public interests. As a prime example, a recent review of United States (US) government public dashboards for COVID-19 found that "states engaged in dashboard practices that generally aligned with many of the goals set forth by the Centers for Disease Control and Prevention, Essential Public Health Services" (quoted from the abstract) [4]. However, the results of this review do not address whether the public was adequately served by any of these dashboards, which were funded with the public's money. Important questions left unanswered were: Did these dashboards meet the public's information needs? Did they meet the information needs of public health practitioners? Or, more importantly, whose information needs were these dashboards supposed to meet, and what were those needs?

#### **2.1 Philosophies behind public dashboard design**

At present, there is no overarching philosophy behind public dashboard design, whether for public health or other topics [2]. Although individual projects publish use-cases discussing their design philosophy [2, 5, 6], there has been no overall effort by professional informatics societies or other academic groups to assemble principles behind the design of dashboards intended to serve the public. This may be because such an effort would be daunting and would need to be kept to a relatively narrow scope to remain tractable. That scope should be aimed at high-level requirements focused on ensuring that the public's needs are met by whatever dashboard solution is developed, regardless of the topic.

This chapter will attempt to summarize the literature into a framework that provides a generic rubric for evaluating how well a dashboard design for the public ensures that the public's needs are met, measured as adherence to high-level requirements. The framework will also put forth a method for comparing alternative dashboard solutions aimed at meeting similar public needs, based on how consistent each solution is with the public's dashboard requirements.

The framework and rubric are intended to evaluate outcomes. Logically, a design process that adequately includes the public the dashboard is intended to serve will inevitably produce a dashboard solution that meets these outcomes. Hence, there is no need to invest public funding in bloated efforts such as the Rapid Cycle Quality Improvement (RCQI) model, which is promoted by many health departments and organizations and is extremely paperwork-intensive [7, 8]. Part of what makes the RCQI model so effort-intensive is that it measures process outcomes. By contrast, the evaluation framework for public dashboards recommended in this chapter is streamlined and focused on achieving a design solution, not a process solution.

Nevertheless, an optimal design solution will not be achieved without an adequate design process. Therefore, it is important to consider how the public should be involved in the process of designing public dashboards – especially those that are publicly-funded, and therefore have obligations to respond to the public's needs.

#### **2.2 Dashboard design process**

As stated previously, there is currently no agreed-upon best-practice design process for dashboards in general, nor for public dashboards specifically [2]. Each time a dashboard is developed, a different design process is used, but a generic, logical process can be summarized as in **Figure 1**.

As shown in **Figure 1**, typically some sort of design process is chosen before the dashboard is designed, and this process is followed to develop an "alpha prototype". The alpha prototype is a working mock-up that exists for the purpose of gathering feedback and working out an initial design. Next, the alpha prototype undergoes a testing process to inform developers of the modifications needed before widespread testing. Once those modifications are made, a beta prototype exists and can be launched for field testing.

As described in **Figure 1**, depending upon the project, different components can be included in the design process for the alpha prototype. There will be iterative design cycles as part of designing the alpha prototype, along with the development of design documentation and the actual creation of the prototype. The details behind each of these components will vary by project. Once the alpha prototype exists, the process of converting it to a beta prototype involves some form of user testing and some form of evaluation for adherence to standards. Granted, an alpha prototype may be released into the field without having undergone the beta prototype process, but that means it has not been user-tested or evaluated for adherence to standards.

#### **Figure 1.**

*Generic logical dashboard design process. This design process produces an alpha prototype for initial testing, and a beta prototype for widespread field testing.*

This logical process can apply to any dashboard development effort. As one example, researchers aimed to design a dashboard for clinicians [9]. They wrote requirements and developed an alpha prototype, then worked with clinicians to gain feedback to guide the development of a beta prototype (which would presumably be developed and field-tested in the future) [9]. That article focused mainly on the feedback process for improving the alpha prototype, but articles can focus on any part of the dashboard design process. Another article focused on the development of a beta prototype aimed at both the public and leaders for real-time decision-making related to traffic flow [2]. While the beta prototype was developed and appeared ready for testing, the article did not report any results, so the project's current stage was not evident [2].

Although this logical design process should theoretically involve the intended users of the dashboard, and prototypes should undergo iterative testing, this is not always the case with public dashboards. Because public dashboards often involve government agencies and leaders at some level, whether as data sources or as intended audiences, these actors can have unintended impacts on dashboard design and quality.

#### **2.3 Governmental data suppression and misrepresentation**

As a general trend, consumers are demanding more data transparency, and calls are being made for governments to make data available for public oversight [1]. Likewise, there is an increasing trend toward using dashboards to empower the public [2, 3]. Not only do dashboards of public data provide a mechanism for public oversight of leaders, but they also reduce *information asymmetry*, the circumstance in which one party (the government) has more information than another party (the public), thus disempowering the latter [2, 10, 11].

However, governments are not always keen to share data, for various reasons. It has been argued that government agencies will be more likely to comply with open government data (OGD) practices if they see them as an opportunity to showcase the agency's success [1]. Conversely, if an agency believes the data will cast it in a negative light, it may be less inclined toward OGD practices. Ruijer and colleagues recommend creating institutional incentives and pressure for OGD, because governments have a natural interest in suppressing data they think may be harmful to them in some way if analyzed [1].

Data suppression is not the only method governments employ to prevent data use and interpretation, however. One limitation of legal requirements for OGD is that an agency may comply with the requirements in bad faith. During the COVID-19 outbreak in early 2020, a state epidemiologist in Florida said she was fired for refusing to manually falsify data behind a state dashboard [12]. Simply reviewing the limitations of big data reveals ways a dashboard can share big data in bad faith, such as visualizing too much data, visualizing incomprehensible or inappropriate data, and not visualizing needed data [13].

For this reason, in addition to holding governments to OGD standards, government efforts need to be evaluated on whether they meet OGD standards *in good faith*. The framework presented here provides guidance on how to evaluate good-faith versus bad-faith implementations of a public dashboard.

#### **2.4 Dashboard requirements**

The evaluation framework presented here has six principles on which to judge the level of good or bad faith in a public dashboard: 1) ease of access to the underlying data, 2) transparency of the underlying data, 3) approach to data classification, 4) utility of comparison functions, 5) utility of navigation functions, and 6) utility of the metrics presented. These principles are described below.
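
To make the rubric concrete, the following is a minimal scoring sketch in Python, assuming a hypothetical 0–2 rating per principle (0 = bad faith, 1 = partial, 2 = good faith) and equal weighting; neither the numeric scale, the weights, nor the identifier names come from the literature, and they are used here purely for illustration.

```python
# Hypothetical scoring sketch for the six-principle rubric described above.
# The 0-2 scale and equal weighting are illustrative assumptions; the
# framework itself does not prescribe a numeric scheme.

PRINCIPLES = [
    "access_to_underlying_data",
    "transparency_of_underlying_data",
    "data_classification",
    "comparison_functions",
    "navigation_functions",
    "utility_of_metrics",
]

def score_dashboard(ratings: dict[str, int]) -> float:
    """Average the per-principle ratings into a single good-faith score."""
    missing = [p for p in PRINCIPLES if p not in ratings]
    if missing:
        raise ValueError(f"Unrated principles: {missing}")
    return sum(ratings[p] for p in PRINCIPLES) / len(PRINCIPLES)

# Example: comparing two candidate dashboards aimed at the same public need.
dashboard_a = {p: 2 for p in PRINCIPLES}         # good faith across the board
dashboard_b = {**{p: 2 for p in PRINCIPLES},
               "access_to_underlying_data": 0,   # data locked behind the front-end
               "comparison_functions": 1}
print(score_dashboard(dashboard_a))  # 2.0
print(score_dashboard(dashboard_b))  # 1.5
```

A structure of this kind also supports the comparison of alternative dashboard solutions described earlier, since two candidate dashboards can be scored against the same set of principles.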


#### *2.4.1 Access to underlying data*

A dashboard is essentially a front-end to the underlying data being visualized [14]. Hence, once a dashboard is published, members of the public may want to access the underlying data for various reasons, including oversight of the dashboard itself. But governments resistant to data-sharing may, in bad faith, use the dashboard as a firewall between the public and the underlying data to prevent data access [1]. Good-faith OGD principles therefore hold that public dashboards should not serve as barriers to the underlying data being visualized, but as facilitators of access to it.
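
As a minimal sketch of the facilitator principle, assuming the open source Flask library and a hypothetical `data.csv` and `dashboard.html` (neither comes from the chapter), a dashboard site can expose the exact file its charts are built from for direct download:

```python
# Minimal sketch: a dashboard route paired with a raw-data download route,
# so the front-end does not act as a firewall in front of the data.
# The app structure, data.csv, and dashboard.html are assumptions.
from flask import Flask, render_template, send_file

app = Flask(__name__)

@app.route("/dashboard")
def dashboard():
    # Front-end visualization of the data (template assumed to exist).
    return render_template("dashboard.html")

@app.route("/data/raw.csv")
def raw_data():
    # Good-faith OGD: the same file the charts are built from,
    # downloadable by any member of the public.
    return send_file("data.csv", mimetype="text/csv", as_attachment=True)

if __name__ == "__main__":
    app.run()
```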

#### *2.4.2 Transparency of underlying data*

Although raw data feed the dashboard, in the dashboarding process they undergo many transformations before being properly visually displayed [9, 14]. This processing can produce derived calculations that are then displayed in the dashboard. Therefore, to be transparent, the dashboard must facilitate access not only to the underlying raw data, but also to the transformations the data underwent before being displayed. A simple way to accomplish this kind of transparency is to use open source tools and publish the code along with documentation [14]. This gives citizen data scientists an opportunity to review and evaluate the decisions made in the dashboard display.
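
For illustration, a published transformation script might look like the following sketch, assuming pandas and hypothetical file and column names; the point is that every step from raw data to displayed metric is open to review.

```python
# Hypothetical transformation script published alongside a dashboard.
# Input file, column names, and the 7-day window are illustrative
# assumptions; what matters is that every derivation step is visible.
import pandas as pd

raw = pd.read_csv("raw_cases.csv", parse_dates=["date"])  # assumed raw extract

# Step 1: aggregate raw case rows into daily counts per county.
daily = raw.groupby(["county", "date"]).size().rename("cases").reset_index()
daily = daily.sort_values(["county", "date"])

# Step 2: smooth with a 7-day rolling mean -- a choice reviewers can contest.
daily["cases_7day_avg"] = (
    daily.groupby("county")["cases"]
         .transform(lambda s: s.rolling(7, min_periods=1).mean())
)

# Step 3: write out the exact table the dashboard visualizes.
daily.to_csv("dashboard_input.csv", index=False)
```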

#### *2.4.3 Data classification*

How data are classified in a dashboard can greatly impact its utility. As an example, developers of an emergency department (ED) dashboard that had been in beta testing for five years found that, after the ED experienced an outbreak of Middle East Respiratory Syndrome (MERS), major structural changes to the dashboard were needed [15]. Another paper, about developing a visualization of patient histories for clinicians, described in detail how each entity displayed on the dashboard would be classified [16]. Hence, inappropriate classifications, or ones deliberately made in bad faith, can degrade data interpretation to the point that the dashboard becomes incomprehensible to its users.
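
A small sketch, with invented numbers, of how classification choices alter interpretation: both schemes below bin the same hypothetical per-area rates, but the skewed scheme flattens the pattern that the reasonable scheme reveals.

```python
# Illustration (with made-up numbers) of how a classification scheme chosen
# in bad faith can hide structure that a reasonable scheme would reveal.
import pandas as pd

rates = pd.Series([3, 5, 9, 12, 18, 24, 31, 47])  # hypothetical per-area rates

# Reasonable scheme: cut points separating low / moderate / high / severe.
fair = pd.cut(rates, bins=[0, 10, 20, 35, 50],
              labels=["low", "moderate", "high", "severe"])

# Bad-faith scheme: one huge bottom bin so nearly everything looks "low".
skewed = pd.cut(rates, bins=[0, 45, 50], labels=["low", "high"])

print(fair.value_counts().sort_index())    # low 3, moderate 2, high 2, severe 1
print(skewed.value_counts().sort_index())  # low 7, high 1
```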

#### *2.4.4 Comparison functions*

Dashboards are used to inform decision-making; therefore, being able to make needed comparisons is an important factor in a dashboard's usability [14, 17]. As an example, the public traffic dashboard described earlier presented visualizations of the ten most congested areas of the city, as well as textual feedback on the two most suitable routes between downtown and outlying areas, providing optimal comparators to allow the public to make the most informed route decisions [2]. While the optimal design choices could ultimately be debated, it is easy to see how agencies looking to maintain opacity could, in bad faith, obscure data interpretation by deliberately limiting the ability to make useful comparisons.
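
The following sketch, with invented area names and delay figures, illustrates the kind of ranked comparison support described above; removing or limiting such views is one way the ability to compare can be deliberately hobbled.

```python
# Sketch of a top-N comparison view (area names and delays are invented).
congestion = {
    "Downtown": 42, "Harbor": 35, "Airport": 31, "Northside": 28,
    "Eastgate": 22, "Midtown": 19, "Westend": 15, "Oldtown": 11,
}

def top_congested(readings: dict[str, int], n: int = 5) -> list[tuple[str, int]]:
    """Return the n most congested areas, worst first, for side-by-side display."""
    return sorted(readings.items(), key=lambda kv: kv[1], reverse=True)[:n]

for area, delay in top_congested(congestion):
    print(f"{area}: {delay} min average delay")
```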

#### *2.4.5 Navigation functions*

Dashboards are typically at least somewhat interactive, providing the user the ability to navigate through a data display that responds to the user's actions [14, 18, 19]. When operating in good faith, developers often conduct extensive usability testing to ensure that the dashboard is intuitive to navigate and that any interactivity is useful [15]. But when implemented in bad faith, a dashboard could be designed to deliberately confuse the user as to how to navigate and interpret the data in the dashboard.
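
As a minimal sketch of good-faith navigation, assuming the open source Dash library (version 2.x) and invented data, a dropdown predictably drives the chart so the user always knows what is being displayed:

```python
# Minimal sketch of transparent, predictable navigation: the chart follows
# the user's selection. The data values and county names are invented.
from dash import Dash, dcc, html, Input, Output
import plotly.express as px
import pandas as pd

df = pd.DataFrame({
    "county": ["A", "A", "B", "B"],
    "date": pd.to_datetime(["2021-01-01", "2021-01-02"] * 2),
    "cases": [5, 7, 3, 9],
})

app = Dash(__name__)
app.layout = html.Div([
    dcc.Dropdown(sorted(df["county"].unique()), value="A", id="county"),
    dcc.Graph(id="trend"),
])

@app.callback(Output("trend", "figure"), Input("county", "value"))
def update_trend(county):
    # Navigation responds directly and visibly to the user's choice.
    return px.line(df[df["county"] == county], x="date", y="cases")

if __name__ == "__main__":
    app.run(debug=True)
```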
