A 13-year-old African-American girl goes to the hospital for a tonsillectomy. What happens next is heartbreaking, infuriating, surprising—and, in part, a result of the work of philosophers and bioethicists.
The story is recounted in “What Does It Mean To Die?”, an outstanding article in The New Yorker by Rachel Aviv. I’d recommend reading it before continuing here.
The article raises questions regarding hospital procedures, racism in medical contexts, and other issues, but its main focus is on the definition of death and its real-world implications.
The story is set in California. As Aviv points out, “California follows a version of the 1981 Uniform Determination of Death Act, which says that someone who has sustained the ‘irreversible cessation of all functions of the entire brain, including the brain stem, is dead.'”
But that hasn’t always been the policy. Aviv recounts some of the history leading up to that:
Until the nineteen-sixties, cardio-respiratory failure was the only way to die. The notion that death could be diagnosed in the brain didn’t emerge until after the advent of the modern ventilator, allowing what was known at the time as “oxygen treatment”: as long as blood carrying oxygen reached the heart, it could continue to beat. In 1967, Henry Beecher, a renowned bioethicist at Harvard Medical School, wrote to a colleague, “It would be most desirable for a group at Harvard University to come to some subtle conclusion as to a new definition of death.” Permanently comatose patients, maintained by mechanical ventilators, were “increasing in numbers over the land and there are a number of problems which should be faced up to.”
Beecher created a committee comprising men who already knew one another: ten doctors, one lawyer, one historian, and one theologian. In less than six months, they completed a report, which they published in the Journal of the American Medical Association. The only citation in the article was from a speech by the Pope. They proposed that the irreversible destruction of the brain should be defined as death, giving two reasons: to relieve the burden on families and hospitals, which were providing futile care to patients who would never recover, and to address the fact that “obsolete criteria for the definition of death can lead to controversy in obtaining organs for transplantation”…
In the next twelve years, twenty-seven states rewrote their definitions of death to conform to the Harvard committee’s conclusions. Thousands of lives were prolonged or saved every year because patients declared brain-dead—a form of death eventually adopted by the United Kingdom, Canada, Australia, and most of Europe—were now eligible to donate their organs. The philosopher Peter Singer described it as “a concept so desirable in its consequences that it is unthinkable to give up, and so shaky on its foundations that it can scarcely be supported.” The new death was “an ethical choice masquerading as a medical fact,” he wrote.
That only some states adopted the Harvard committee’s definition of death meant that “people considered alive in one region of the country could be declared dead in another.” There was a push for uniformity:
In 1981, the President’s Commission for the Study of Ethical Problems proposed a uniform definition and theory of death. Its report, which was endorsed by the American Medical Association, stated that death is the moment when the body stops operating as an “integrated whole.” Even if life continues in individual organs and cells, the person is no longer alive, because the functioning organs are merely a collection of artificially maintained subsystems that will inevitably disintegrate. “The heart usually stops beating within two to ten days,” the report said.
Among the members of the President’s Commission was the philosopher Daniel Wikler (Harvard), whom Aviv interviewed for her article:
He didn’t think the commission’s theory of death was supported by the scientific facts it cited. “I thought it was demonstrably untrue, but so what?” he said. “I didn’t see a downside at the time.” Wikler told the commission that it would be more logical to say that death occurred when the cerebrum—the center for consciousness, thoughts, and feelings, the properties essential to having a personal identity—was destroyed. His formulation would have rendered a much broader population of patients, including those who could breathe on their own, dead.
Despite Wikler’s reservations, he drafted the third chapter of the report, “Understanding the ‘Meaning’ of Death.” “I was put in a tight spot, and I fudged,” he told me. “I knew that there was an air of bad faith about it. I made it seem like there are a lot of profound unknowns and went in the direction of fuzziness, so that no one could say, ‘Hey, your philosopher says this is nonsense.’ That’s what I thought, but you’d never know from what I wrote.”
While Wikler appears to have thought the bar they set for death was too high, Alan Weisbard (Wisconsin), who also served on the commission as its assistant legal director, seems to have had reservations in the opposite direction:
He said, “I think that the people who have done the deep and conceptual thinking about brain death are people with high I.Q.s, who tremendously value their cognitive abilities—people who believe that the ability to think, to plan, and to act in the world are what make for meaningful lives. But there is a different tradition that looks much more to the body.” The notion of brain death has been rejected by some Native Americans, Muslims, and evangelical Protestants, in addition to Orthodox Jews. The concept is also treated with skepticism in Japan, owing in part to distrust of medical authority. Japan’s first heart transplant, in 1968, became a national scandal—it was unclear that the donor was beyond recovery, or that the recipient (who died shortly after the transplant) needed a new heart—and, afterward, the country never adopted a comprehensive law equating brain death with the death of a human being. Weisbard, a religious Jew, said that he didn’t think “minority communities should be forced into a definition of death that violates their belief structures and practices and their primary senses.”
Aviv also discusses the research of neurologist Alan Shewmon (UCLA). Shewmon had defended the concept of brain death, but came to feel it lacked justification, and began researching cases in which people “lived for months or years after they were legally dead.” He found 175 such cases. Aviv writes:
Shewmon’s research on what he calls “chronic survival” after brain death helped prompt a new President’s council on bioethics, in 2008, to revisit the definition of death. The council’s report referred to Shewmon’s research thirty-eight times. Although it ultimately reaffirmed the validity of brain death, it abandoned the biological and philosophical justification presented by the 1981 President’s Commission—that a functioning brain was necessary for the body to operate as an “integrated whole.” Instead, the report said that the destruction of the brain was equivalent to death because it meant that a human being was no longer able to “engage in commerce with the surrounding world,” which is “what an organism ‘does’ and what distinguishes every organism from nonliving things.”
In a personal note appended to the end of the report, the chairman of the council, Edmund Pellegrino, expressed regret regarding the lack of empirical precision. He wrote that attempts to articulate the boundaries of death “end in some form of circular reasoning—defining death in terms of life and life in terms of death without a true ‘definition’ of one or the other.”
Others working in bioethics are also interviewed for the article. Bioethicist Robert Truog (Harvard) comments on the racial aspect to the story:
African-Americans are twice as likely as whites to ask that their lives be prolonged as much as possible, even in cases of irreversible coma—a preference that likely stems from fears of neglect. A large body of research has shown that black patients are less likely to get appropriate medications and surgeries than white ones are, regardless of their insurance or education level, and more likely to receive undesirable medical interventions, like amputations. Truog said, “When a doctor is saying your loved one is dead, and your loved one doesn’t look dead, I understand that it might feel that, once again, you are not getting the right care because of the color of your skin.”
Thaddeus Pope (Hamline) has concerns about controversies over the definition of death:
Pope told me that “every extra hour of nursing time that goes into one of these dead patients is an hour of nursing time that didn’t go to somebody else.” He also worries that these disputes, which often get media attention, will cause fewer people to register as organ donors, a practice whose social acceptability depends on the idea that patients are dead before their vital organs are removed.
I thought Aviv’s article was worth drawing attention to because it discusses how philosophers and bioethicists have engaged in work that has had identifiable “real-world” effects—serious, life-and-death effects. It also lays bare some of the ways in which philosophical work is limited, or perhaps even compromised, by pressures of time, politics, and the need for an answer. In turn, this raises questions about how we should understand the role of philosophy in such contexts.
(Thanks to Benjamin Mitchell-Yellin for bringing this article to my attention.)