
*On the Value of Conducting and Communicating Counterfactual Exercise… DOI: http://dx.doi.org/10.5772/intechopen.93639*

*Environmental Issues and Sustainable Development*

the emissions of greenhouse gases and decarbonizing the macroeconomy and thereby reduce the likelihoods of significant harm. Shrinking the tails of the most extreme consequences is another—invest in new adaptations and response actions (therapeutics and vaccines) that can eradicate the explosive nature of potential outbreaks. That is, invest in the development and distribution of new ways to minimize the ravages of the virus or prevent it from invading human beings. Or, invest in forward-looking or responsive adaptations that reduce the consequences of climate change.

These are abstract issues, of course, but confronting them is critical for efforts to manage the controversies that surround action decisions—controversies that can be born of misinterpretations of modeling results and applications, deliberate distortions designed by unscrupulous agents to promulgate false perceptions, exaggerated foci that obscure social, economic, and political complexities, as well as unfounded assertions that attack the integrity of sound scientific practices [9]. These controversies make it clear that modelers need to work continually to improve the models that they employ to answer comparative policy-relevant questions and to communicate their results effectively. They therefore lead to the conclusion that efforts to manage climate and health risks need to include exercising novel and traditional methods to improve modeling practices, the understanding of modeling structures, and the communication of modeling results. These efforts are just as important as carefully taking account of more widely expressed modeling concerns: assumptions, bias, framing, and immodesty.

Here, similarities and synergies between epidemiologic models of pandemics like COVID-19 and integrated models of longer-term risks from climate change provide a context for productive suggestions about how to structure these efforts. Strategies like policy-relevant counterfactual exercises, structural model comparison experiments, value of information calculations, out-of-scale reality checks, and model updating are all highlighted here. The goal is to offer some thoughts about how these research activities can support sound communication for sustainable development. This is especially important because systemic social and economic inadequacies have been laid bare by the COVID-19 pandemic and will be exacerbated by the growing global climate crisis [10].

Section 2 provides some context by reviewing briefly the early history of modeling the COVID-19 coronavirus with reference to the needs and challenges of that enterprise—representing the virus, the consequences of exposure, the implications of responding or not, the need for intervening in the workings of the economy, and so on. Section 3 frames the issue of improving the production and communication of modeling results in a skeptical, frightened, and uncertain world. Tools like methods to identify thresholds of tolerable risk, counterfactual modeling exercises, structured model comparisons, and value of information calculations are introduced and discussed briefly with regard to practicality, context, and experience. Concluding and synthetic remarks occupy the last section.

**2. The early history of modeling COVID-19 in support of decision-makers**

Even as the COVID-19 pandemic evolved through the beginning of its course in early 2020, discussions were underway around the world about preparing for the longer term. In the US, they were based on painful lessons learned from a response often characterized by delays, inefficiencies, a lack of federal coordination, and a pervasive skepticism about the science. Elsewhere, lessons were sometimes more timely and less painful, but the number of cases and deaths continued to climb daily nearly everywhere. Some of these lessons were, of course, obvious. Containment and mitigation can have a positive effect. Creating effective diagnostic tests is difficult. It is even harder to produce and distribute high-quality tests and personal protection equipment in the quantities required. Fast-tracking new therapeutics might become productive, but the real hope probably lies in creating a new and effective vaccine as quickly as possible amidst uncertainties about the character of immunity from the virus and the distribution of the vaccine itself. Other lessons were more obscure, but one seemed to touch nearly every point where action decisions were required or anticipated: informative modeling results are difficult to communicate and *they are easy to criticize because coping with apparently incomprehensible uncertainty is not a widely distributed skill*.

In the US and many other countries, virus impact projection models played a prominent role in political and public discussions about what it would mean to "flatten the curve" of new COVID-19 infections. Such models were essential to provide insight into the enormous scope of the problem. They became critical tools for planning the timing of efforts to return societies and economies to pre-COVID-19 activities without doing more damage [11]. Many, however, did not provide necessary information on projected uncertainties in the course and severity of the virus, the key determinants of these uncertainties, the information required to reduce them, and/or best practices in conveying all of this to decision-makers, their constituents, and their bosses [12].

As a result, it was challenging for the primary "clients" of modelers' products (decision-makers across governments of all scales, businesses large and small, religious organizations, public and private foundations, individuals, etc.) to be comfortable with the idea of assigning likelihoods or even degrees of confidence to their various outputs—that is, to the varieties of possible futures born of processing results from multiple modeling efforts and/or accommodating deliberately created probability distributions from a single model.
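The pooling step itself is simple arithmetic; the difficulty is interpretive. As a purely illustrative sketch (the five peak-demand values below are invented, not taken from any model discussed here), an ensemble of point projections can at least be summarized as a median and a range rather than a single "answer":

```python
import statistics

# Hypothetical peak hospital-demand projections (beds) from five models.
# All numbers are invented for illustration only.
projections = [55_000, 72_000, 90_000, 118_000, 140_000]

# Pool the ensemble and report a summary instead of one point estimate.
pooled = sorted(projections)
median = statistics.median(pooled)
low, high = pooled[0], pooled[-1]

print(f"median projection: {median:,.0f} beds")   # 90,000
print(f"ensemble range:    {low:,.0f} - {high:,.0f} beds")
```

Even this crude summary communicates more than picking one model's curve, because it makes the disagreement among models part of the message.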

For example, when faced in February with five model results and a consulting firm that produced a "composite" estimate of questionable value, Governor Cuomo of New York ultimately picked the model that produced the projections of hospital demand that matched the maximum number of hospital beds and intensive care beds that his state could make available. The state had determined that it could essentially double its total capacity across its 12 geographic regions (53,000 beds including 26,000 intensive care beds) by manipulating equipment on hand, converting non-treatment rooms into patient rooms, and organizing hundreds of hospitals into a single administrative entity—just barely adequate against the middle scenario that had estimated a maximum need of roughly 120,000 beds including 60,000 ICU beds [13]. Why did he ignore the extreme possibilities? Not because they were totally implausible. "Why waste time", he had thought, "worrying about the two extreme scenarios that would surely overwhelm the entire state hospital system regardless of what we did?"

Governor Cuomo's predicament was a reflection of at least three phenomena that define the communication context of modelers' best efforts. First of all, modeling is an essential tool for understanding the likely outcomes of different strategies for responding to a fast-moving global pandemic like COVID-19 [11] or, as consistently noted by the Intergovernmental Panel on Climate Change (IPCC), a slow but accelerating stressor like climate change [14–18]. Indeed, any phenomenon that produces large, growing, and widespread risk over time can threaten the planet's ability to develop sustainably. However, developing and refining models for any of these threats is very difficult. In most cases, the most useful modeling necessarily involves multidisciplinary collaboration between epidemiologists (climate scientists, natural scientists, etc.), public health experts, mathematicians, statisticians, and economists—the sort of collaboration that cannot be built in a few days and is not possible at all without personal buy-in by willing participants.

Secondly, appropriately displaying uncertainty bands around "best-practice" projections increases the public communication challenges in engaging decision-makers. Such relative likelihood information must be communicated in a responsible, accurate, and understandable way, but also one that minimizes the risk that those who are uncomfortable with probabilistic information will simply throw up their hands and conclude that "Scientists do not know what they are talking about." Care needs to be taken, as well, in communicating the value of looking at the tails of the distributions of results. Speaking of low-likelihood extreme events with very bad outcomes cannot fairly be labeled "fear mongering" if those events have very large consequences. Risk is, after all, the product of likelihood and consequence; and it can be comparatively large, and therefore worthy of careful consideration, if either factor is large.
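The arithmetic behind that last sentence is worth making explicit. Treating risk as likelihood times consequence, a rare scenario can carry as much expected harm as a likely one; the likelihoods and death tolls below are invented purely for illustration:

```python
# Risk as likelihood x consequence: a sketch with invented numbers.
scenarios = {
    "central": {"likelihood": 0.60, "deaths": 10_000},
    "tail":    {"likelihood": 0.02, "deaths": 500_000},
}

for name, s in scenarios.items():
    expected = s["likelihood"] * s["deaths"]
    print(f"{name}: expected deaths = {expected:,.0f}")

# central: 6,000; tail: 10,000 -- the low-likelihood scenario dominates,
# so dismissing it out of hand ignores the arithmetic of risk.
```

In these made-up numbers the 2%-likelihood scenario contributes more expected harm than the 60%-likelihood one, which is exactly why the tails deserve airtime.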

Finally, model results and their underlying science are vulnerable to attack by skeptics and partisans who are generally suspicious or, more problematically, possess political agendas [9]. This is particularly concerning when projections honestly change markedly from week to week as new information from around the world becomes available and when results from individual models diverge markedly. It is frequently difficult to explain to decision-makers why they should accept projections of any single, well-described policy scenario when its projected outcomes can differ so widely from model to model. These differences do not mean that any given model or ensemble of models is completely untrustworthy; they mean that the modelers are trying to describe the full range of possible futures as well as they can from different perspectives of natural and/or human processes. Of course, it was the former impression that undermined trust in published models of COVID-19 course projections, particularly after the "no policy" projections of the Imperial College London model [19] received such widespread public and political attention as a baseline description of the reality and seriousness of the health risks.

Some of the multiple efforts to understand the intricacies of the behavior of this virus that blossomed well into the summer are covered briefly in [20]. Pei et al. [21] is notable in this collection as perhaps the first rigorous counterfactual exercise; it was designed at Columbia University to answer the important question at the time: What would have happened if non-therapeutic interventions in the US had started earlier than March 15? According to their calculations, starting only a week earlier, on March 8, would have saved approximately 35,000 U.S. lives [a 55% reduction (95% CI: 46–62%)] and avoided more than 700,000 COVID-19 cases [a 62% reduction (95% CI: 55–68%)] through May 3. Starting interventions another week earlier could have reduced deaths by more than 50,000 (around 83%) with cases falling proportionately. There were no do-overs, of course. The US was well on its chosen pathway by May, but there would be chances to change course if (not really when) the virus came back. It follows that these published answers to an important "What if we had done X?" question should have become strong reasons to express urgency for renewed action if conditions began to deteriorate sometime downstream. They did, with little prompt response, but that is another story.
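These figures permit a simple consistency check: if an averted count corresponds to a stated fractional reduction, dividing one by the other recovers the implied toll along the path actually taken. The sketch below uses only the central estimates quoted above; the helper function is a hypothetical convenience, not part of the study:

```python
def implied_baseline(averted, reduction):
    """Toll on the path actually taken, implied by an averted count
    and the fractional reduction it represents."""
    return averted / reduction

# Pei et al.'s central estimates for a March 8 start (through May 3).
deaths_actual = implied_baseline(35_000, 0.55)    # ~63,600 deaths
cases_actual = implied_baseline(700_000, 0.62)    # ~1.13 million cases

print(f"implied deaths on the actual path: {deaths_actual:,.0f}")
print(f"implied cases on the actual path:  {cases_actual:,.0f}")
```

The implied totals are broadly in line with the US counts reported by early May, which is one quick way a reader can sanity-check a counterfactual claim.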

Before then, on June 8th, *Nature* published two different counterfactual studies that considered the opposite question while including other countries. Hsiang et al. [22] focused on six countries (China, France, Iran, Italy, the UK, and the US) where travel restrictions, social distancing, canceled events, and lockdown orders had been imposed. Their calculations, supported by an estimate that COVID-19 cases had doubled roughly every 2 days starting in mid-January, suggested that as many as 62 million confirmed cases (385,000 in the US) had been prevented or delayed through the first week in April by the actions that had been implemented.
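The plausibility of numbers that large follows from doubling arithmetic alone. The sketch below shows unchecked exponential growth at the cited 2-day doubling time, plus a textbook cross-check relating a doubling time and a serial interval to a reproduction number; the cross-check is an illustration added here, not a calculation from the studies themselves:

```python
doubling_days = 2     # doubling time cited for early COVID-19 growth
serial_interval = 4   # days between successive infections (Du et al. [24])

# Unchecked exponential growth: seed * 2**(days / doubling_time).
seed = 1_000
days = 32
cases = seed * 2 ** (days / doubling_days)
print(f"{seed:,} cases doubling every {doubling_days} days "
      f"reach {cases:,.0f} after {days} days")   # ~65.5 million

# Textbook cross-check: R ~= 2**(serial_interval / doubling_time).
r_implied = 2 ** (serial_interval / doubling_days)
print(f"implied reproduction number: {r_implied:.1f}")   # 4.0, inside 3-5
```

A month of unchecked 2-day doubling turns a thousand cases into tens of millions, so "62 million prevented or delayed" is not physically outlandish; and the implied reproduction number of about 4 sits comfortably inside the 3-to-5 range used in the European analysis discussed next.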

Meanwhile, Flaxman et al. [23] focused on 11 European countries on the same question. They worked with estimated viral reproduction rates between 3 and 5; that is, every infected person was expected to infect between 3 and 5 other people per unit of time (the so-called "serial interval"—estimated for COVID-19 in Du et al. [24] to be roughly 4 days). They estimated that a total of 3.1 million deaths (plus or minus 350,000) were avoided through the end of April, and they found that only lockdowns produced statistically significant effects on the number of estimated cases.

Were these high numbers really physically plausible? Yes, but they must be interpreted in their complete and proper contexts. The reported scenarios of all of the virus studies only described trajectories for cases and deaths that could be attributed to COVID-19 given alternative assumptions about the form and timing of any policy or behavioral response. As a result, each imagined path also involved a course of policy intervention that had other economic and social effects that were not captured in the analysis [25]. Ultimately, it is up to decision-makers to ponder the implicit trade-offs between these intertwined impacts and to ferret out joint levels of tolerable risk—a judgment that cannot be made honestly without acknowledging what the science says. Unfortunately, the president of the US called [21] a "political hit job" [26]. Even more troubling, conservatives more generally greeted coronavirus models with the same "detest" that they have voiced about climate models [27].

It is important to note that modeling of the COVID-19 coronavirus was not the first time in recent history that widespread modeling played a significant role in framing global and national responses and communicating their social value. Shortly after the discovery of the Ebola viral disease (EVD) in West Africa, modelers around the world began to work to inform decision-makers about the regional and global risks. Chretien et al. [28] chronicled 125 models from 66 publications of trends in EVD transmission (in 41 publications), the effectiveness of various responses (in 29), forecasts and projections (in 29), spreading patterns across regions and countries (15), the phylogenetics of the disease (9), and the feasibility of vaccine trials (2).

Their takeaway messages include some points that are salient here. Taken in their order, they began by highlighting the need to understand the influence of increasing awareness of severe infections across various levels of community, to improve the ability to sustain that awareness, and to include its manifestations in the models. They also argued strongly for model coordination and systematic comparison of modeling results to better understand the major sources of uncertainty and how models accommodate their inclusion. Indeed, they encouraged the adoption of ensemble approaches with transparent architectures for easier communication. Finally, drawing on Yozwiak et al. [29], they stressed the importance of making data and results available more quickly and effectively to all interested parties. These efforts were part of an enormously successful global response organized by the World Health Organization (WHO) and the US Centers for Disease Control and Prevention (CDC), among others. When EVD subsided in November of 2015, 28,000 cases and 11,000 deaths had been reported in Guinea, Liberia, and Sierra Leone. In the US, the final tally was 4 cases diagnosed among 11 cases recorded and 2 deaths [30].

**3. Dealing more effectively with the challenge of communicating new information**

The three phenomena noted above are daunting, but the experiences of the virus modelers whose work was criticized unjustly is evidence of the importance of skilled communication that anticipates the dangers of inserting quality science into
