Many models of the Covid-19 pandemic have now been developed, and some commentators have been quite critical of them. The main source of criticism is a two-part view: (1) the only purpose of scientific models is to make forecasts, and (2) as a method for forecasting, epidemic models have failed (according to some, typically unspecified, standard of accuracy).
I have already said that models are not oracles. Although forecasting is one possible task for models, there are others. One such use, I previously argued, was to view models as instruments that, when fit to data, provide measurements of the world. Such measurements are especially helpful when there are things we would like to measure but cannot observe directly, e.g. the number of SARS-CoV-2 infections in a population.
Here, I want to think about a different use for models, that of the thought experiment. A thought experiment is an idea for an experiment that has not actually occurred. Famously, Einstein routinely used thought experiments in his reasoning about physical phenomena, such as imagining himself chasing after a beam of light to measure its properties while he himself was traveling at the speed of light. In such a case, he suggested, one would see the spatial oscillation of the electromagnetic field, but no temporal oscillation.
Scientists engage in thought experiments for a wide range of reasons. Sometimes thought experiments are performed to think through the logistics of experiments that will actually be performed at some point in the future; these thought experiments are for planning. Other thought experiments concern counterfactuals, particularly events that did not happen in the past but, in some sense, could have. Still other thought experiments are conducted because the experiment they envision cannot be performed at all, perhaps because of the limitations of current technology, or perhaps because no conceivable technology could carry it out, as with Einstein’s light-beam experiment.
I suggest epidemiologists engage in thought experiments for yet another reason: to comprehend the behavior of epidemics as complex systems. As with other complex systems, epidemics are subject to many factors and feedbacks. Relevant factors in the Covid-19 pandemic include the amount of close contact between people, the environments where that contact occurred, the contagiousness of the different variants, and even the weather. Feedbacks include depletion of the susceptible population due to infection or vaccination, behavioral changes due to fear or fatigue, and the interaction between popular opinion and public policy.
We are quite confident that as the number of people vaccinated goes up, the speed of transmission will slow. Similarly, there is good evidence that some of the more recent genetic variants of SARS-CoV-2 are more transmissible than earlier strains. How important are these two processes to future epidemic states? To answer this question we might conduct some different thought experiments and compare their outcomes. What kind of epidemic would we have if we didn’t vaccinate, but rather let the epidemic run its course? What kind of epidemic would we have if we vaccinated, but the novel variants had never emerged? Of course, neither of those options is open to us. Our future pandemic is one with both vaccination and variants, leading to one of the most important questions of the day: How fast must vaccination be to prevent variant-induced resurgence?
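The race between vaccination and a more transmissible variant can itself be run as a computational thought experiment. The sketch below uses a deliberately simple SIR-type model, and every parameter value is invented for illustration (a hypothetical variant 50% more transmissible than a strain with R0 = 2.5, a population with 30% prior immunity, two hypothetical vaccination speeds); nothing here is calibrated to real data.

```python
def epidemic_size(beta, gamma, vax_per_day, s0, i0, days=365, dt=0.1):
    """Cumulative infections in a simple SIR model in which susceptibles
    are also vaccinated (moved straight to the removed class) at a fixed
    rate, expressed as a fraction of the total population per day."""
    s, i, cum = s0, i0, i0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt          # new infections this step
        new_vax = min(vax_per_day * dt, s)   # doses given, capped by S
        s -= new_inf + new_vax
        i += new_inf - gamma * i * dt
        cum += new_inf
    return cum

# Hypothetical variant: beta raised 50% over a base of 0.25/day (gamma =
# 0.1/day), in a population with 30% immunity left over from a first wave.
slow = epidemic_size(beta=0.375, gamma=0.1, vax_per_day=0.001, s0=0.7, i0=0.001)
fast = epidemic_size(beta=0.375, gamma=0.1, vax_per_day=0.01,  s0=0.7, i0=0.001)
# Faster vaccination yields a smaller variant-driven resurgence.
```

Comparing `slow` and `fast` answers the question the thought experiment poses, under these assumptions: how much of the resurgence a tenfold difference in vaccination speed averts.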
The conditions envisioned here — a pandemic with and without vaccination, a pandemic with and without variants, and a pandemic with and without a host of other variables — are all thought experiments. Of course, asking the question does not provide its own answer. We also need information about how fast the variants are likely to increase in the population, how much more infectious they are, how fast vaccines are likely to be distributed, and other such quantitative pieces of information.
Epidemiologists can provide plausible (if not perfect) estimates for these quantities, and we understand reasonably well (but not perfectly) how these processes interact. But even the most brilliant epidemiologist cannot keep track of all these factors and the associated calculations in their head. An extension of the thought experiment presents itself naturally: write down mathematical expressions to represent the processes, use the plausible estimates as coefficients in those equations, and use a computer to solve the equations for the future time of interest. Hence, a model is born.
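Concretely, the recipe can be as short as this: equations for the processes, plausible estimates as coefficients, a computer to step the equations forward. The sketch uses the classic SIR equations with illustrative rates, not values fitted to Covid-19.

```python
def simulate_sir(beta, gamma, s0, i0, days, dt=0.1):
    """Euler-integrate the SIR equations dS/dt = -beta*S*I,
    dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I,
    with S, I, R expressed as fractions of the population."""
    s, i, r = s0, i0, 1.0 - s0 - i0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt   # S -> I
        new_rec = gamma * i * dt      # I -> R
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return s, i, r

# Illustrative coefficients: beta = 0.3/day, gamma = 0.1/day (so R0 = 3),
# seeded with one infection per 100,000 people.
s, i, r = simulate_sir(beta=0.3, gamma=0.1, s0=0.99999, i0=0.00001, days=365)
```

Solving for a year of epidemic time takes milliseconds; the computer does the bookkeeping that no head can.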
This view of the epidemic model takes the model to be simply a precisification of the epidemiologist’s ideas, i.e. the model is just a sophisticated thought experiment. The formulation of the model itself forces the epidemiologist to face all the relevant questions: Is there a relevant rate that has not been quantified? How does the force of infection depend on the number of infected people in the population? And many others.
The view of epidemic-model-as-thought-experiment has the added virtue that while the epidemiologist often cannot intuit what the combination of considered factors jointly entails, the mathematical solution will get it right. It is a sanity check on the conjecture that all the things the epidemiologist put into the model actually have the anticipated effects. Models, in this usage, are a tool for anticipating unintended consequences.
Of course, the thought experiment is reliable only insofar as its assumptions are approximately correct. A model can’t give back anything you didn’t bake into it in the first place. But even this limited property — to tell you what cake you will get from a given list of ingredients — is no mean feat.
This view, in which the model is a sanity check on processes too complex to reason about otherwise, gives the lie to a somewhat famous saying among scientists, which is that a model is only as good as the data it’s based on. But a thought experiment doesn’t need data at all! Can such a model then be any good? I say the answer is “yes” because the model has provided something of value, namely a statement of the logical consequence of a set of plausible beliefs. This consequence itself may or may not be plausible.
For instance, early last spring, when it was unclear how countries outside of China would respond to the spread of SARS-CoV-2, I used a simple model to calculate how many Americans might die from Covid-19 if no actions were taken to curtail the spread of the virus, a condition I considered plausible, although undesirable. The number I arrived at was approximately 2.4 million. To me, even though the premises of the model seemed plausible (e.g. that the US might not impose movement restrictions), the conclusion was not: I could not believe that American society would allow such an epidemic to unfold. This led me to the conclusion that America would act. The question, then, was how and when. If taking significant action was inevitable, it seemed sensible to act sooner rather than later, when it could have the biggest impact and save the largest number of lives. This conclusion is the outcome of using a model as a thought experiment.
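A back-of-the-envelope version of this thought experiment shows how a number of that magnitude can arise: multiply the population by the attack rate implied by the classic final-size relation, then by an infection fatality ratio. The R0 and IFR values below are illustrative round numbers, not the estimates behind the 2.4 million figure.

```python
import math

def final_size(r0):
    """Attack rate z solving the final-size relation z = 1 - exp(-r0 * z)
    by fixed-point iteration (converges for r0 > 1 from a starting
    guess above the fixed point)."""
    z = 0.9
    for _ in range(1000):
        z = 1.0 - math.exp(-r0 * z)
    return z

population = 330_000_000  # rough US population
r0 = 2.5                  # illustrative basic reproduction number
ifr = 0.009               # illustrative infection fatality ratio (0.9%)

# Expected deaths in an unmitigated epidemic: on the order of millions.
deaths = population * final_size(r0) * ifr
```

The point is not the exact number, which is sensitive to both inputs, but that any plausible combination of R0 and IFR for an unmitigated epidemic lands in the millions.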