Lessons from the Pandemic (guest post)


“A pandemic reverses the asymmetry of risk.”

In the following guest post*, Richard Yetter Chappell, a philosopher at the University of Miami, surveys some of the major philosophically interesting lessons from the pandemic and the U.S. response to it. (A version of this post first appeared at his blog, Philosophy, et cetera.)

[Anthony Moman, “A Thousand Cuts”]

Lessons from the Pandemic
by Richard Yetter Chappell

It’s generally recognized that our (American) response to the Covid-19 pandemic was disastrous. But I think far fewer appreciate the full scale of the disaster, or the most significant causal levers by which the worst effects could have been avoided. (Yes, Trump was bad.  But his public health disinformation and politicization of masking—while obviously bad—may prove relatively trivial compared to the mammoth failings of our public health institutions and medical establishment.) Much of the pandemic’s harm could have been mitigated had our institutions been properly guided by the most basic norms of cost-benefit analysis. Consider:

(1) The dangers of blocking innovation by default

In ordinary circumstances, the status quo is relatively safe and so untested medical innovations present asymmetric risks. That is, until they are proven safe and effective, it may be reasonable to assume that the potential risks of an untested product outweigh its potential benefits, and so block public access to such products until they pass stringent testing requirements. (There are arguments to be made that FDA regulations are excessively onerous even in ordinary circumstances, but I remain neutral on that question here. I take it that there is at least a reasonable case to be made in the FDA’s defense ordinarily. No such case for the FDA’s stringency seems possible in a pandemic.)

A pandemic reverses the asymmetry of risk. Now it is the status quo that is immensely dangerous, and a typical sort of medical intervention (an experimental drug or vaccine, say) is comparatively less so. The potential benefits of innovation likely outweigh the potential risks for many individuals, and vastly so on a societal scale, where the value of information is immense. So the FDA's usual regulations should have been streamlined or suspended for potential pandemic solutions (in the same way that any ethics barriers beyond the minimum baseline of informed consent should have been suspended for pandemic research). This should be the first thing the government does in the face of a new pandemic. By blocking access to experimental vaccines at the start of the pandemic, the FDA should be regarded as causally responsible for every Covid death occurring now (and many that occurred previously).
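To make the asymmetry concrete, here is a minimal expected-harm sketch. Every number in it is a hypothetical placeholder, chosen purely to display the structure of the comparison, not to estimate any real quantity:

```python
# Toy model of the risk asymmetry. Every probability and harm value
# below is a hypothetical placeholder, chosen only to show how the
# comparison runs; none of these are real estimates.

def expected_harm(p_bad_outcome: float, severity: float) -> float:
    """Expected harm = probability of a bad outcome times its severity."""
    return p_bad_outcome * severity

# Ordinary times: doing nothing is roughly safe, so the untested
# product's downside dominates and blocking it looks reasonable.
harm_of_status_quo_normal = expected_harm(0.001, 1.0)
harm_of_untested_product = expected_harm(0.05, 1.0)
print(harm_of_untested_product > harm_of_status_quo_normal)   # True

# Pandemic: the status quo itself carries a large expected harm
# (infection risk times severity), which can swamp the product's risk.
harm_of_status_quo_pandemic = expected_harm(0.10, 1.0)
print(harm_of_untested_product < harm_of_status_quo_pandemic)  # True
```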

Just think: if any willing member of the public could have purchased themselves a shot of the experimental Moderna vaccine back in the first half of 2020, its effectiveness would have been proven much sooner, and production and distribution ramped up accordingly, bringing about an end to the pandemic many months sooner than we will actually achieve. The sheer scale of the avoidable harms suffered here is almost impossible to overstate. (If the FDA managed to prevent a Thalidomide-scale disaster every year for several decades, it still would not be sufficient to outweigh the harm of extending this pandemic by many months. But of course the real choice facing us is not so "all or nothing". There's no reason we can't reap the benefits of FDA protection—if it is a benefit—in ordinary circumstances, while sensibly suspending policies that are very obviously inapt in the face of a pandemic.)

Of course, we couldn’t know in advance which (if any) experimental vaccines would work.  Even so, the expected value of my recommended policy (encouraging experimental vaccination followed by low-dose viral inoculation to confirm immunity) strikes me as clearly positive, even just given what we knew back in March. (If you disagree, please comment there — and show your working.)  If nothing else, consider how many lives would have been saved simply by requiring immunity certification for anyone working in elder-care.  Providing targeted immunity to high-risk transmission vectors in a pandemic should be an obvious policy priority. I’m appalled that it proved to be beyond our medical policy establishment.

I’ve focused here on the error of blocking opportunities for early immunity (whether through experimental vaccines or viral inoculation — and of course the failure to run vaccine challenge trials also belongs on this list), but the underlying lesson applies to many other errors in pandemic policy, including banning early Covid tests (in Feb 2020), banning quick tests throughout the summer, etc.  In future pandemics, the FDA should only be allowed to ban a pandemic-alleviating product after first producing a cost-benefit analysis to justify their intervention. In a pandemic, innovation must be permitted by default.

(2) Misguided perfectionism

Closely related to the above mistake is the implicit assumption that it’s somehow better to do (or allow) nothing than to do (or allow) something imperfect. Letting the perfect be the enemy of the good in a pandemic is disastrous. Blocking quick Covid tests for having lower accuracy than slow ones is an obvious example of this form of stupidity. Deciding in advance that a vaccine must prove at least 50% effective in trials to receive FDA approval is another. (Obviously a 40% effective vaccine would be better than nothing!  Fortunately it didn’t come to that in the end, but this policy introduced extra risk of disastrous outcomes for no gain whatsoever.)
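The arithmetic behind that parenthetical is worth spelling out. A toy calculation (with a made-up population size and attack rate) shows what rejecting a 40% effective vaccine throws away:

```python
# Toy calculation: infections averted by an imperfect vaccine.
# Population size and attack rate are hypothetical placeholders.

population = 1_000_000
attack_rate = 0.20   # fraction infected if nothing is done (made up)
efficacy = 0.40      # a vaccine that misses the 50% approval threshold

baseline_infections = population * attack_rate
infections_with_vaccine = baseline_infections * (1 - efficacy)

print(baseline_infections - infections_with_vaccine)  # 80,000 averted
```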

Compare Dr. Ladapo’s argument in the WSJ that “Doctors should follow the evidence for promising therapies. Instead they demand certainty.” (Steve Kirsch expands on the complaint.) Again, this is a very basic form of irrationality that we’re seeing from the medical establishment.

Misguided perfectionism has also damaged the vaccine rollout, with complex allocation schemes prioritized over getting as many people vaccinated as quickly as possible. (Some are letting doses spoil rather than "risk" vaccinating anyone "out of turn"!)

More examples are discussed here.

(3) Agency bias

Sometimes the pressure to do nothing seems to stem from inflating fears of potential downsides while disregarding missed potential gains. Relatedly, we tend to blame people for harms that stem from action (whether performed or allowed), and to ignore or downplay harms that stem from inaction (and so are seen as built into the status quo). This bias leads to avoiding policies perceived as "risky", but I don't call it "risk aversion", because while some risks are inflated, others are irrationally neglected. It seems closely related to the omission/commission asymmetry. Whatever its roots, it strikes me as a very deep-rooted psychological bias, and one that is plausibly behind much bad thinking about the pandemic.

One recent example of this mistake: holding second doses of vaccine in reserve instead of giving out first doses first (and trusting that stockpiles would be replenished in time to provide booster shots before initial immunity waned). It's easy to imagine how "first doses first" could go wrong. But it's harder to see how that slight risk could mathematically outweigh the likely benefits of quickly vaccinating twice as many people, in "expected value" terms. As I previously noted (in relation to inoculation), it's not enough to just flag a potential downside of a policy proposal. Every option in a pandemic has downsides. We need to assess which option is the least bad, or the most promising, in expectation.
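To put the comparison in rough numbers, here is a sketch with illustrative efficacy figures (assumptions made for the sake of the example, not trial results):

```python
# Sketch of the "first doses first" comparison in expected-value terms.
# Both efficacy figures are illustrative assumptions, not trial results.

doses_available = 2_000_000
eff_one_dose = 0.80   # assumed protection from a single dose
eff_two_doses = 0.95  # assumed protection from the full two-dose course

# Policy A: hold half the doses in reserve for second shots.
protected_reserve = (doses_available // 2) * eff_two_doses

# Policy B: first doses first -- twice as many people, partly protected.
protected_fdf = doses_available * eff_one_dose

print(f"{protected_reserve:,.0f}")  # 950,000 expected people protected
print(f"{protected_fdf:,.0f}")      # 1,600,000, before boosters even arrive
```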

Many of the world’s problems (e.g. California’s wildfires) may ultimately be traced back to a kind of asymmetrically-biased blame-aversion incentivizing a bad status quo over even mildly “risky” solutions that would obviously be worth trying according to a neutral cost-benefit analysis.  Foolish inaction is frustrating enough in normal circumstances. It is outright disastrous in a pandemic.

(See also my companion post on the epistemic analogue of this asymmetric bias.)

(4) Status-quo bias

This is really just a summary of the previous points. But I cannot possibly emphasize enough what a mistake it is to privilege the status quo in a pandemic. It's just nuts. Quietly maintaining the status quo in a pandemic kills thousands upon thousands of people, and indirectly harms millions more. Yet everyone behaves as though it's somehow intolerably "reckless" to even consider unconventional policies that have any potential downside (no matter how disproportionately greater their potential upside). Meanwhile, the only people I see outraged about the FDA's obstructionism are libertarians who are always outraged by the FDA. How is it not obvious to all that obstructing medical progress is the single greatest threat in a pandemic? (If only this could inspire a fraction of the outrage that was directed at ordinary people for going to the beach…)

(5) Other failures of cost-benefit analysis

This all seems to come down to a failure to even attempt a proper cost-benefit analysis. This failure also took other forms. One of the most striking involved the blind prioritization of physical health over social, economic, and mental welfare. One saw this, for example, in the commonly voiced idea that it was somehow "indecent" to question whether lockdowns might do more harm than good, all things considered. (Not to mention the Covid "security theatre" of closing parks!) N.B. I'm not here claiming that lockdowns were all bad. I'm claiming that cost-benefit analysis was needed to answer the question, and it's bad of people to deny this.
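For concreteness, here is what the bare skeleton of such an analysis looks like. Every input is a placeholder; the point is that the question has a determinate structure, not that any particular verdict follows:

```python
# Bare-bones cost-benefit template. Every input below is a placeholder;
# the point is the structure of the question, not any particular verdict.

deaths_averted_by_lockdown = 50_000   # hypothetical
qalys_per_death_averted = 8           # hypothetical quality-adjusted life years

benefits = deaths_averted_by_lockdown * qalys_per_death_averted

# Costs measured in the same currency: mental-health, economic, and
# educational harms converted (however roughly) into QALYs. Hypothetical.
costs = 300_000

net_benefit = benefits - costs
print(net_benefit)  # the sign depends entirely on the inputs, hence the
                    # need to actually estimate them rather than refuse to ask
```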

Aside: I’m aware of studies showing that people are biased against experimentation, and more risk-averse for others than for themselves (both via Marginal Revolution). I’m sure there must be similar studies on what we might call “health bias”—prioritizing quantity over quality of life, and direct physical health threats over indirect effects on welfare—any pointers would be most welcome.

(6) Epistemic obtuseness

The medical establishment’s demand for certainty (mentioned under #2 above) is one kind of epistemic obtuseness. There are others worth mentioning. A big one is the assumption that we can have “no evidence” for P until a trial specifically testing for P has been conducted. (As Robert Wiblin jokes, “We have no data on whether the Pfizer vaccine turns people into elves 12 months after they take it.”) The WHO trumpeted that there was “no evidence” that Covid antibodies conferred any immunity, when in fact we had perfectly good (albeit uncertain) evidence of this based on (i) what we know of similar viruses, and (ii) the absence of large numbers of confirmed reinfections that we would expect to see after X months of a raging global pandemic if recovery did not confer any immunity whatsoever. (The latter evidence obviously got stronger the larger the value of X became.)
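This is, at bottom, a likelihood argument: the observed scarcity of reinfections is far more probable if recovery confers immunity than if it doesn't. A minimal Bayesian sketch, with made-up probabilities, makes the point:

```python
# Minimal Bayesian sketch of the reinfection argument. All probabilities
# are made up purely to illustrate the shape of the update.

prior_immunity = 0.5  # prior that recovery confers substantial immunity

# Observed data: very few confirmed reinfections after months of a
# raging pandemic. How likely is that under each hypothesis?
p_data_given_immunity = 0.95     # expected if recovery protects
p_data_given_no_immunity = 0.10  # surprising if it does not

posterior = (p_data_given_immunity * prior_immunity) / (
    p_data_given_immunity * prior_immunity
    + p_data_given_no_immunity * (1 - prior_immunity)
)
print(round(posterior, 2))  # ~0.9: the absence of reinfections is evidence
```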

(7) Mundane practical failures

This post has focused on what I think are philosophically interesting lessons from the pandemic—mistakes that stem from systematic biases in our thinking, for example. There are more mundane errors too: failures to plan for the logistics of vaccine distribution, and other "maddeningly obvious" stuff. As a philosopher, I don't have any special expertise to add there, but readers are very welcome to share in the comments whatever they take to be the biggest mistakes (and associated lessons) of the pandemic.

Comments
Avalonian
3 years ago

Richard, while I appreciate the thought and critical energy that has gone into this post, and while many of the arguments seem promising to me, the first question that immediately comes to my mind concerns your authority to pronounce so confidently on complex causal matters such as these. The essay is shot through with predictive claims that "strike" you as true and claims about the expected value of enormously complex policies; we hear, for example, that there is only a "slight risk" to a first-doses-first policy, and that to deny that "a 40% effective vaccine would be better than nothing" amounts to "stupidity". The thing is, I'm quite sure there are *reasons* for such policies as setting a 50% threshold for vaccine approval, yet you don't actually give any of them here, preferring to insist that a 40% vaccine should be approved because that "obviously" would be better than nothing. But would it? Might they have their reasons? Could there be constraints that the CDC/FDA are operating under that aren't salient to you or to us? Maybe, but only someone with direct experience in those matters would know what all the constraints are, and I can't help but notice that no discussion of their constraints or reasons even appears here. This cannot be the way to settle such issues.

So, to the point: if I wanted to know about the expected value of many of the things you propose here I would *first* ask a medical expert with lots of experience in social policy. And when evaluating the CDC’s conservatism and risk-aversion I would also, for example, want to hear exactly what their experts will say to justify this policy (you don’t give any of their reasons in this post, preferring to assume that since you’ve got the empirical questions right, their risk aversion amounts to “bias”.) So, sorry for what might come off as an ad hominem, but take it as a neutral question: is there some expertise you have on these questions that should lead us to trust what “strikes” you as true when it comes to incredibly complex matters of social policy?

As a related aside re (6): the CDC and the WHO have to worry enormously about the perlocutionary effects of what they say, given the way people actually are. Saying that there is "no evidence" for something might be a way of encouraging caution in the populace, and not a literal report of what is strictly believed by the relevant scientists. This, again, is the sort of real-world constraint these people are under which is ignored by armchair analysis: if much of the populace is strongly inclined to think "ah, I'm sure this isn't that bad", anything you can do to move *them* towards true belief is a good thing by the lights of social policy.

Richard Y Chappell
3 years ago

Hi Avalonian, fair question. I should clarify that I am not claiming any special “authority” or expecting anyone to “trust” me. In sharing my thoughts, I instead invite you to judge them for yourself. If various of my claims strike you as unlikely to be true, the mere fact that I’m asserting them shouldn’t change your mind.

On “pronouncing so confidently”, see my companion post. I disagree with the presumption that there is epistemic pressure to avoid confident views, or to suspend judgment by default. We should use our best judgment, such as it is, and be ready to update when presented with new evidence (e.g. from those with more relevant expertise). So I very much welcome critical responses from anyone better-informed on the empirical details who can explain where I’m going wrong.

I agree that we should especially want to hear expected value estimates from “medical expert[s] with lots of experience in social policy.” I don’t know of any such who have offered expected value estimates. Most seem to form their policy views on non-utilitarian bases. (Compare mainstream opinion on kidney markets.) So I don’t think we should be at all inclined to assume that the establishment policies reflect expert expected value estimates.

But in any case, I of course agree with your observation that I’m not infallible and there are likely opposing reasons that I’m not aware of. Again, I welcome anyone with more awareness of those reasons to (assess their strength and) share them. I don’t think we should just be silent by default if things seem to us to be deeply messed up, just because we might be mistaken.

JDRox
3 years ago

I don't understand Avalonian's worry about (6): a perlocutionary effect of saying "there's no evidence for X" is most certainly to cast doubt upon X, and the medical establishment has done this in cases where they should have been doing the opposite. (E.g., there's no evidence masks protect the wearer, etc.) Also, regarding Avalonian's first point, while I agree that the piece isn't written in the most authoritative style, it really is true that there just aren't any good, worked-out arguments against, e.g., first doses first (in our current situation). Those of us defending first doses first etc. have been asking, and asking, and they're just not there.

Nicolas Delon
3 years ago

@Avalonian: Most of the prominent medical and public health experts, as far as I'm aware, have refused to perform any cost-benefit analysis. Like Richard, I am unaware of any such serious efforts. When a few dissenters have tried, they have been pilloried. Experts are welcome to join the conversation, but we need not wait until they do. In fact, I suspect their refusal to even entertain the thought that we should do CBA to justify large-scale policies that affect hundreds of millions of people and livelihoods has done a lot of harm.

Richard Y Chappell
3 years ago

For readers like Avalonian, it might be helpful to bracket any particular empirical details or examples and focus instead on the most general overarching claim of my post: that excessive conservatism risks immense harm in a pandemic.

One doesn’t need a medical degree to see that this more modest (yet still important) claim is true. For it does not require us to establish that some unconventional pandemic policy truly would be much better (though I happen to believe that this is true); it suffices to note that an unconventional pandemic policy easily could be much better — i.e., there’s a non-trivial probability of this — and since excessive conservatism would dismiss such unconventional proposals out of hand, such conservatism significantly risks immense harm. Since it is worth guarding against significant risks of immense harm, it is worth guarding against excessive conservatism in a pandemic.

To turn this into a more pointed critique of the medical/policy establishment (and elite public opinion), we can simply observe that there is no evidence that said establishment (or elite opinion) is suitably aware of this risk, or that they have taken suitable steps to guard against excessive conservatism. Quite the opposite, I think: the publicly available evidence (including establishment pronouncements, presented justifications, and policy decisions) gives off the strong appearance of extreme conservatism. (And of course we have background knowledge that institutions tend to be conservative, and that conventional medical ethics in particular is extremely conservative — again, see kidney markets. So we should all go into a pandemic with a moderately high prior for the view that mainstream institutions and opinion are likely to be excessively conservative.)

Is it conceivable that their conservatism is all for the best in the end? Sure, it’s possible. Is that any kind of problem for my critique? No, of course not. We should press for clear cost-benefit analysis from pandemic policy-makers even if it turns out that such improved epistemic procedures would — remarkably — yield the same substantive policies at the end of the day. For there is at least a non-trivial chance that we could do much, much better.