In the following guest post*, Richard Yetter Chappell, a philosopher at the University of Miami, surveys some of the major philosophically interesting lessons from the pandemic and the U.S. response to it. (A version of this post first appeared at his blog, Philosophy, et cetera.)
Lessons from the Pandemic
by Richard Yetter Chappell
It’s generally recognized that our (American) response to the Covid-19 pandemic was disastrous. But I think far fewer appreciate the full scale of the disaster, or the most significant causal levers by which the worst effects could have been avoided. (Yes, Trump was bad. But his public health disinformation and politicization of masking—while obviously bad—may prove relatively trivial compared to the mammoth failings of our public health institutions and medical establishment.) Much of the pandemic’s harm could have been mitigated had our institutions been properly guided by the most basic norms of cost-benefit analysis. Consider:
(1) The dangers of blocking innovation by default
In ordinary circumstances, the status quo is relatively safe and so untested medical innovations present asymmetric risks. That is, until they are proven safe and effective, it may be reasonable to assume that the potential risks of an untested product outweigh its potential benefits, and so block public access to such products until they pass stringent testing requirements. (There are arguments to be made that FDA regulations are excessively onerous even in ordinary circumstances, but I remain neutral on that question here. I take it that there is at least a reasonable case to be made in the FDA’s defense ordinarily. No such case for the FDA’s stringency seems possible in a pandemic.)
A pandemic reverses the asymmetry of risk. Now it is the status quo that is immensely dangerous, and a typical sort of medical intervention (an experimental drug or vaccine, say) is comparatively less so. The potential benefits of innovation likely outweigh the potential risks for many individuals, and vastly so on a societal scale, where the value of information is immense. So the FDA’s usual regulations should have been streamlined or suspended for potential pandemic solutions (in the same way that any ethics barriers beyond the minimum baseline of informed consent should have been suspended for pandemic research). This should be the first thing the government does in the face of a new pandemic. By blocking access to experimental vaccines at the start of the pandemic, the FDA should be regarded as causally responsible for every Covid death that is occurring now (and many that occurred previously).
Just think: if any willing member of the public could have purchased themselves a shot of the experimental Moderna vaccine back in the first half of 2020, its effectiveness would have been proven much sooner, and production and distribution ramped up accordingly, ending the pandemic many months sooner than we actually will. The sheer scale of the avoidable harms suffered here is almost impossible to overstate. (If the FDA managed to prevent a Thalidomide-scale disaster every year for several decades, it still would not be sufficient to outweigh the harm of extending this pandemic by many months. But of course the real choice facing us is not so “all or nothing”. There’s no reason we can’t reap the benefits of FDA protection—if it is a benefit—in ordinary circumstances, while sensibly suspending policies that are very obviously inapt in the face of a pandemic.)
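The asymmetry can be made vivid with a back-of-the-envelope expected-value calculation. The sketch below uses entirely hypothetical numbers (the probability the vaccine works, the monthly death toll, the months saved, the worst-case side-effect harm); the point is the structure of the comparison, not the particular figures.

```python
# A toy expected-value comparison of "block by default" vs "permit by
# default" for an experimental vaccine in a pandemic. All figures are
# hypothetical, chosen only to illustrate the structure of the argument,
# not to estimate real Covid-19 quantities.

p_works = 0.3               # hypothetical chance the vaccine proves effective
deaths_per_month = 30_000   # hypothetical status-quo pandemic deaths per month
months_saved = 6            # hypothetical months sooner the pandemic ends if it works
side_effect_deaths = 1_000  # hypothetical worst-case harm from early access if it fails

# Permitting early access gambles a small, bounded downside against a
# large probability-weighted upside:
expected_lives_saved = p_works * deaths_per_month * months_saved
net_expected_benefit = expected_lives_saved - side_effect_deaths

print(f"Expected net lives saved by permitting early access: {net_expected_benefit:,.0f}")
```

On these made-up numbers the probability-weighted upside dwarfs the downside; the argument here is that any remotely realistic numbers share this shape.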
Of course, we couldn’t know in advance which (if any) experimental vaccines would work. Even so, the expected value of my recommended policy (encouraging experimental vaccination followed by low-dose viral inoculation to confirm immunity) strikes me as clearly positive, even just given what we knew back in March. (If you disagree, please comment there — and show your working.) If nothing else, consider how many lives would have been saved simply by requiring immunity certification for anyone working in elder-care. Providing targeted immunity to high-risk transmission vectors in a pandemic should be an obvious policy priority. I’m appalled that it proved to be beyond our medical policy establishment.
I’ve focused here on the error of blocking opportunities for early immunity (whether through experimental vaccines or viral inoculation — and of course the failure to run vaccine challenge trials also belongs on this list), but the underlying lesson applies to many other errors in pandemic policy, including banning early Covid tests (in Feb 2020), banning quick tests throughout the summer, etc. In future pandemics, the FDA should only be allowed to ban a pandemic-alleviating product after first producing a cost-benefit analysis to justify its intervention. In a pandemic, innovation must be permitted by default.
(2) Misguided perfectionism
Closely related to the above mistake is the implicit assumption that it’s somehow better to do (or allow) nothing than to do (or allow) something imperfect. Letting the perfect be the enemy of the good in a pandemic is disastrous. Blocking quick Covid tests for having lower accuracy than slow ones is an obvious example of this form of stupidity. Deciding in advance that a vaccine must prove at least 50% effective in trials to receive FDA approval is another. (Obviously a 40% effective vaccine would be better than nothing! Fortunately it didn’t come to that in the end, but this policy introduced extra risk of disastrous outcomes for no gain whatsoever.)
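The arithmetic behind that parenthetical is worth spelling out. A minimal sketch, using hypothetical infection numbers and a hypothetical fatality rate, showing that a vaccine falling short of the 50% threshold still averts a large share of expected deaths relative to no vaccine at all:

```python
# Hypothetical figures: 100,000 people who would otherwise be infected,
# with a 1% infection fatality rate. A vaccine that is only 40% effective
# still averts 40% of those expected deaths.
infections = 100_000
ifr = 0.01        # hypothetical infection fatality rate
efficacy = 0.40   # below the pre-announced 50% approval threshold

deaths_without_vaccine = infections * ifr
deaths_with_vaccine = infections * (1 - efficacy) * ifr
deaths_averted = deaths_without_vaccine - deaths_with_vaccine

print(f"Deaths averted by the 'failed' vaccine: {deaths_averted:,.0f}")
```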
Compare Dr. Ladapo’s argument in the WSJ that “Doctors should follow the evidence for promising therapies. Instead they demand certainty.” (Steve Kirsch expands on the complaint.) Again, this is a very basic form of irrationality that we’re seeing from the medical establishment.
Misguided perfectionism has also damaged the vaccine rollout due to prioritizing complex allocation schemes over ensuring that as many people are vaccinated as quickly as possible. (Some are letting doses spoil rather than “risk” vaccinating anyone “out of turn”!)
More examples are discussed here.
(3) Agency bias
Sometimes the pressure to do nothing seems to stem from inflating fears of potential downside, while disregarding missed potential gains. Relatedly, we tend to blame people for harms that stem from action (whether performed or allowed), and ignore or downplay harms that stem from inaction (& so are seen as built into the status quo). While this bias leads to avoiding policies perceived as “risky”, I don’t call it “risk aversion” because while some risks are inflated, others are irrationally neglected. It seems closely related to the familiar omission/commission asymmetry. Whatever its roots, it strikes me as a very deep-rooted psychological bias, and one that is plausibly behind much bad thinking about the pandemic.
Many of the world’s problems (e.g. California’s wildfires) may ultimately be traced back to a kind of asymmetrically-biased blame-aversion incentivizing a bad status quo over even mildly “risky” solutions that would obviously be worth trying according to a neutral cost-benefit analysis. Foolish inaction is frustrating enough in normal circumstances. It is outright disastrous in a pandemic.
(See also my companion post on the epistemic analogue of this asymmetric bias.)
(4) Status-quo bias
This is really just a summary of the previous points. But I cannot possibly emphasize enough what a mistake it is to privilege the status quo in a pandemic. It’s just nuts. Quietly maintaining the status quo in a pandemic kills thousands upon thousands of people, and indirectly harms millions more. Yet everyone behaves as though it’s somehow intolerably “reckless” to even consider unconventional policies that have any potential downside (no matter how disproportionately greater their potential upside). Meanwhile, the only people I see outraged about the FDA’s obstructionism are libertarians who are always outraged by the FDA. How is it not obvious to all that obstructing medical progress is the single greatest threat in a pandemic? (If only this could inspire a fraction of the outrage that was directed at ordinary people for going to the beach…)
(5) Other failures of cost-benefit analysis
This all seems to come down to a failure to even attempt a proper cost-benefit analysis. This failure also took other forms. One of the most striking involved the blind prioritization of physical health over social, economic, and mental welfare. One saw this in the commonly voiced idea that it was somehow “indecent” to question whether lockdowns might do more harm than good all things considered, for example. (Not to mention the Covid “security theatre” of closing parks!) N.B. I’m not here claiming that lockdowns were all bad. I’m claiming that cost-benefit analysis was needed to answer the question, and it’s bad of people to deny this.
Aside: I’m aware of studies showing that people are biased against experimentation, and more risk-averse for others than for themselves (both via Marginal Revolution). I’m sure there must be similar studies on what we might call “health bias”—prioritizing quantity over quality of life, and direct physical health threats over indirect effects on welfare—any pointers would be most welcome.
(6) Epistemic obtuseness
The medical establishment’s demand for certainty (mentioned under #2 above) is one kind of epistemic obtuseness. There are others worth mentioning. A big one is the assumption that we can have “no evidence” for P until a trial specifically testing for P has been conducted. (As Robert Wiblin jokes, “We have no data on whether the Pfizer vaccine turns people into elves 12 months after they take it.”) The WHO trumpeted that there was “no evidence” that Covid antibodies conferred any immunity, when in fact we had perfectly good (albeit uncertain) evidence of this based on (i) what we know of similar viruses, and (ii) the absence of large numbers of confirmed reinfections that we would expect to see after X months of a raging global pandemic if recovery did not confer any immunity whatsoever. (The latter evidence obviously got stronger the larger the value of X became.)
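The evidential point can be given a simple Bayesian shape. A sketch with entirely hypothetical numbers: if recovery conferred no immunity at all, confirmed reinfections should accumulate at some rate, so observing essentially none becomes rapidly less likely as the months pass, and the likelihood ratio favouring immunity grows accordingly.

```python
import math

# Hypothetical parameters: N recovered people; h, the per-month reinfection
# hazard each would face if recovery conferred no immunity; a, the chance
# that any given reinfection is confirmed and reported.
N = 10_000
h = 0.001
a = 0.1  # together these give an expected 1 confirmed reinfection per month

def p_zero_reinfections_given_no_immunity(months):
    # Poisson probability of observing zero confirmed reinfections
    # after `months`, if recovery conferred no immunity.
    return math.exp(-N * h * a * months)

# Under the "immunity" hypothesis, zero reinfections is roughly what we'd
# expect (probability ~1), so the likelihood ratio in favour of immunity
# is about 1 / p_zero_reinfections_given_no_immunity(months) -- and it
# keeps growing as the pandemic drags on:
for months in (1, 3, 6):
    p = p_zero_reinfections_given_no_immunity(months)
    print(f"after {months} month(s): P(no reinfections | no immunity) = {p:.4f}")
```

This mirrors the observation above that the evidence "obviously got stronger the larger the value of X became."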
(7) Mundane practical failures
This post has focused on what I think are philosophically interesting lessons from the pandemic—mistakes that stem from systematic biases in our thinking, for example. There are more mundane errors too: failures to plan for the logistics of vaccine distribution, and other “maddeningly obvious” stuff. As a philosopher, I don’t have any special expertise to add there, but readers are very welcome to share in the comments what they take the biggest mistakes (and associated lessons) of the pandemic to be.