Trolley Problems: You’re Doing It All Wrong
As philosophy comes to occupy more and more of the public’s attention—which is good news—it is not surprising that a lot of that attention is directed at ideas and examples that are dramatic and easy to describe. Chief among these, it seems, is the trolley problem (it has even shown up on a network sitcom). The trolley problem is so popular, though, that discussion of it, like the trolley itself, seems out of control. So I thought that a very brief primer, directed mainly at journalists and other non-specialists, would be of use.
The particular prompt for this post is yet another popular press article about the so-called shortcomings of trolley-style thought experiments, this time by Daniel Engber at Slate. He discusses a lab experiment which purported to show that “people’s thoughts about imaginary trolleys and other sacrificial hypotheticals did not predict their actions.” He calls this a “discomfiting” result. But it’s only discomfiting if you are making some mistakes about what these types of examples are. I hope that the following is of use in helping to keep writing about the trolley problem on the right track.
Trolley Problems: A Brief Caution
Please note that the value of trolley problems and the like does not depend on:
- Whether the trolley problem is realistic, that is, whether trolley systems are engineered to avoid these kinds of problems, whether one is very unlikely to encounter any trolleys in one’s life, whether one is unlikely to be confronted with a situation in which one must quickly choose between options in which different numbers of people die
- Whether what people in simulations or experiments actually choose to do in trolley-like situations matches with what they say they would do in such situations
- Which parts of people’s brains show increased activity when they choose one option or another
- The idea that what one thinks one should do in such problems tells us straightforwardly what one should do in various real life situations
But, you might ask, if trolley problems are not realistic, or predictive, or a means by which science can solve moral problems, or action-guiding, then what value do they have? Their value is in raising questions and problems for our moral thinking, such as:
- whether the numbers of people being harmed or benefited matter to what we should do, and if so, why?
- whether there is a distinction worth making between acting and not acting, and if so, whether that distinction makes a difference to the value of resultant benefits and harms, and if so, why?
- whether there is a distinction worth making between what an agent intends and what an agent merely foresees, and if so, whether that distinction makes a difference to the value of resultant benefits and harms, and if so, why?
Each of these produces more questions: what is a harm or benefit? What is it to harm or benefit? How do good and bad add up across persons? What is an action? What is an intention? What is the relationship between metaphysics and ethics? And so on.
Here are some mistakes people make using the trolley problem:
- Using “trolley problem” to refer to the question of what one would do in one example. Among philosophers, today, the name “trolley problem” refers to the apparently conflicting judgments that arise from two different trolley examples.
- Thinking that the central question these examples are about is: “What would you do?” That is a psychology question. More closely related to what philosophers are up to are the questions: “What should you do?” and “why?” But even those are not the main question. The main question is: what problems does this thought experiment and our answers to these questions raise for our understanding of morality?
- Thinking that we can easily extract from our responses to the trolley problem more general principles or justified responses to real-world choices. This is false because, first, the trolley problems are just one type of possible obstacle to systematic, coherent moral thinking—there are many others. Second, the real world is not artificially constrained in its options nor epistemically transparent, as the examples are.
None of this is to say that trolley problems cannot be legitimately critiqued—they have been and will continue to be. It’s just to caution against some simplistic and wrong-headed types of criticism.
Thanks for the post.
Perhaps this might be filed under the “other objections” category, but I’m left to wonder whether the alleged value of trolley problems might be obviated by concerns for parsimony. That is, if the value of trolley problems is to raise questions about comparative harms and benefits of action versus omission, intention versus foresight, etc., but in each instance using trolley problems as a heuristic to address these concerns is accompanied by a list of qualifiers and caveats, why not instead avail oneself of any number of real-world scenarios which also raise these questions? Real-world scenarios can be messy, and can entail complex (and contested) contextual analysis. This “messiness” may offer a poor yield on identifying moral generalizations. But, as the OP (convincingly) argues, this difficulty with yielding generalizations is also reliably true of trolley problems, if for different reasons.
Put another way, what is it about trolley problems that makes them *uniquely* useful in illuminating moral reflection, in a manner not readily available by reasoning through real-world scenarios? Are there facets of real-world scenarios which pose some sort of a priori impediment to more ambitious moral thinking? If trolley problems are not uniquely valuable in this regard (i.e. overcoming built-in limitations in reasoning about available real-world scenarios), perhaps they should fall into disuse because they are insufficiently useful.
To lay out my cards: My area of interest (and some competence) is bioethics. There is no shortage of real-world problems in bioethics which illuminate questions of act/omission, comparative harms/benefits under conditions of scarcity, etc. I have often found trolley-problem applications to pressing questions in bioethics obtuse, in part because the real-world concerns seem sufficiently rich and complex all on their own.
Trolley problems limit the hypothetical.
One of the main issues with any of these ‘thought experiment’ problems is that they lead to one of two modes of thought:
1) Recognize and discuss the moral issues, implicitly accepting the idea that there’s no way to avoid at least one bad outcome;
2) Add immense factual complexity to the argument in an effort to “solve” the problem by eliminating all bad outcomes, and without considering the high-level moral issues
Most listeners (and the vast majority of lay listeners) will choose option #2: “I’d use my belt buckle to take apart a light post to drop on the tracks and save everyone!” The more complex you make the problem, the worse this response gets.
Much the same can be said about almost any philosophical problem. People (journalists) often just don’t get it. It makes me wonder why. There is a persistent tendency to confuse philosophy with psychology.
I am not surprised. Philosophy is one of the hardest subjects, but it is hard in a deceptive way, since much of the technical vocabulary consists of ordinary words used in a technical way and many of the examples involve everyday situations or situations that resemble everyday situations. On the other hand, journalists tend to summarise and simplify — understandably so, given the audience they are targeting and the media in which they are writing. This is no excuse for making mistakes and misrepresenting philosophical problems — one can write good philosophy in an accessible way. But it does make it likely that journalists will over-simplify and make basic mistakes when writing about philosophy. It is also not surprising, therefore, that most examples of good philosophy written in an accessible way have been produced by philosophers themselves.
I (mostly) like this response to the critique of Trolley Problems that appeared in Current Affairs: https://jacobitemag.com/2017/11/13/trolleyer-than-thou/
Also this bit from https://aeon.co/essays/how-philosophy-helped-one-soldier-on-the-battlefield (which I think I found because you shared it a while back):
The trolley problem is criticised for lacking the elements faced by soldiers on the ground – personal biases, physical discomforts, psychological pressures – the sweaty, confused, dirty reality. But while understanding the local context was essential, including the context of my relationships and associated duties, I found it useful to take some dilemmas out of the context in which I was operating and look at them again through a different lens, focusing on the principles behind them. It helped me to understand that the military…was essentially consequentialist…By understanding the criticisms of consequentialism, I could try to conduct myself as ethically as possible and to some extent step back from that utilitarian model of thinking.
Thanks for sharing that reply to the Current Affairs piece. I do like some stuff from Current Affairs, but their take on the trolley problem is probably the worst I’ve ever encountered.
Thank you for this necessary post.
In Thomson’s treatment of the trolley problem in Realm of Rights, she finds examples in which what is morally correct depends on the time of day that the trolley comes down the track. (Read the book to see what I mean.) This just goes to show that the trolley problem is really about comparison of judgments in different cases and investigating reasons for differences in judgment, just as you say.
If philosophy were easy, then it wouldn’t be philosophy. It would be journalism.
What does this analysis mean for experimental philosophy? That it is really just psychology, not philosophy? This puzzles me a bit, because philosophy seems (to this social science outsider) to rely very heavily on intuitions that are tested and refined by simple examples. If experiments show that intuitions are highly sensitive to features that are not being proposed as philosophically important (such as what psychologists might call “mundane realism”, and experimental economists might call “baggage”), that suggests that we should be very cautious in relying on philosophers’ intuitions, and particularly on their claims about what aspects of the setting are driving those intuitions. But maybe such experiments would be psychology that is helpful to philosophy, not philosophy per se.
”If experiments show that intuitions are highly sensitive to features that are not being proposed as philosophically important” and you also say
”that suggests that we should be very cautious in relying on philosophers’ intuitions”
It would be quite important to see if the intuitions of philosophers can be manipulated by features of seemingly no philosophical importance. Only then would your inference follow. Otherwise, if intuitions of people who are not philosophers are manipulated easily, that doesn’t immediately tell us that the intuitions of philosophers are somehow problematic or unreliable, because it just might be the case that the intuitions of philosophers are not so flexible as those of non-philosophers.
Justin, everything you say seems to be on point, especially regarding many critiques of trolley problems in the media (and by our friends at Very Bad Wizards!). But you are leaving out what I take to be the main relevance of the empirical work on trolley problems (neuroscience, psychology, and x-phi). If the puzzles that the cases raise–namely, as you point out, the differing intuitions most people have about different cases–are shown to be best explained by morally irrelevant differences between the cases, then the puzzling features of the cases do not helpfully illuminate moral questions and problems–or at least not the ones most often raised. For instance, IF Greene is right that the main feature driving NO judgments to the footbridge cases is an aversion to physically pushing someone, then IF we all agree that the pushing aversion is not the morally relevant difference, then the interesting moral differences regarding side-effects are not usefully illuminated by the cases. And they may be misleading rationalizations or at least confusing the issues.
I was once sure that the main difference between the cases is believability and hence our intuitions might be picking up on a really plausible moral principle–i.e., an agent is not morally justified in doing harm that he is not epistemically justified in believing will lead to a (significant) reduction in harm/bad outcomes. It is highly plausible that the agent in footbridge is not justified in believing that pushing one big person in front of a trolley will stop it in time to save five people (so he should not do what risks killing six). But with my students, Dylan Murray and Bradley Thomas, we ran some experiments to test this hypothesis. The results suggested that epistemic intuitions do differ between the cases and are coming into play in moral judgments (others have done similar work recently), but we found that the anti-pushing aversion was driving the moral judgments more than the other way around. The moral intuitions were in fact influencing the epistemic intuitions (thinking it was wrong led people to think the agent was less justified in believing it would work).
In the end, it seems to me that we can and should do the moral psychology by doing the moral theory and the psychology in dialogue. Of course, all of this is consistent with the claim that there is lots to love about trolley problems. (I love teaching them.) And lots to hate…
Thanks, Eddy. I think work that illuminates how people are actually thinking about morality is important, and your findings about how moral intuitions are influencing epistemic judgments are really interesting.
The footbridge case is designed to generate intuitions against consequence-based forced sacrifice for us to then think about. But I don’t see the case as designed to itself offer an explanation of those intuitions. It is more an invitation to reflect on those intuitions, and likely design and consider further cases, each slightly closer to one which generates intuitions in favor of consequence-based forced sacrifice, perhaps in different ways, to do a bit of ethical forensics to see what is at the heart of people’s judgments about it.
So if we discover, as you and your team appear to, that the judgment that it would be wrong to push the big guy off of the bridge onto the tracks below is best explained by an anti-pushing aversion (say, because you get more sacrifice-endorsements in a trap door version of footbridge), that doesn’t show that footbridge is less valuable as an example worth considering. To the contrary, the example fulfilled the very role it was supposed to.
In other words, I agree that “we can and should do the moral psychology by doing the moral theory and the psychology in dialogue,” but perhaps we disagree on what that means for the value of what you might think of as inaccurate or misleading examples.
“To the contrary, the example fulfilled the very role it was supposed to.”
Maybe I’m missing something, but it seems to me that this is a revisionist understanding of what role such examples were typically intended to play.
Intuitive cases are often offered as evidence for or against moral principles and theories.
Philosophers propose that the intuition tracks, and is evidence for, something of moral significance. For example, we don’t feel it’s okay to push the fat man because it would violate the DDE. This case is evidence for a moral theory that gives more weight to the DDE than consequences.
That’s one story. And, I at least, thought this was the basic role of the cases in moral theory.
But then experimentalists test out cases that put pressure on the idea that the examples can do the specific moral work we thought. Greene asks people how they feel about the trap-door case. The DDE would also be violated here, but in this case many more people feel it is okay to open the trapdoor. So maybe these intuitions do not track the DDE and rather track, and are better explained by, an aversion to up close and personal violence.
Greene’s psychological work (maybe not the neuro part) challenges specific proposals about just what such cases are evidence for in moral theory (and in psychology since some liked a DDE-style explanation there as well.)
This stands to undermine the evidential role the intuitive case played for many in moral theory.
I thought the cases were important to moral theorists because of their evidential role. Take that away and it seems they are less ‘worthy of consideration’, contrary to Justin’s suggestion.
Like Eddy, I think this sounds right but also a tad overstated. Many take the value of trolley problems to exceed merely “raising questions”, and to extend to lending some kind of support to various answers to them. To me, it seems reasonable to consider whether some of the things listed here are relevant to evaluating those answers. For example, science might give us more insight into the processes that generate them, which could end up being really informative for evaluating theories or engaging in “ethical forensics”. For another, it is reasonable to be suspicious of unrealistic imaginative exercises, given that this can interfere with judgment in all kinds of ways.
(On this latter score, though, philosophers may be interested to know that the same basic intuitions have been replicated by a research team at the University of Waterloo in much more realistic cover stories of the same type https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2515145, a fact that is often glossed over in these kinds of criticisms.)
A basic confusion seems to be that trolley problems are equated with trolley cases.
The trolley problem arises from pairs of trolley cases contrasted with each other, and from the question of what distinction makes a moral difference that could ground the difference in intuitive judgments between the cases.
Please note that the value of trolley problems and the like does not depend on:
Whether the trolley problem is realistic, that is, whether trolley systems are engineered to avoid these kinds of problems, whether one is very unlikely to encounter any trolleys in one’s life, whether one is unlikely to be confronted with a situation in which they must quickly choose between options in which different numbers of people die
That needs an argument. I have often thought, and I have heard others, including students, report, that the irrealism of the trolley problems may make it harder for us to imagine them in the right way, which would thereby muck up the data we supposedly get from the vignettes. In practice, we are getting a bit of spoken prose in response to a bit of read prose, but we aren’t looking for relationships between prose. What we want is to get some sort of intuition about a certain sort of case. But the irrealism means to me (and others) that often we are not justified in thinking that the prose at issue does a good job describing the case the intuition regards — because the irrealism means that the uptake of the prose is cluttered or marred.
”that the irrealism of the trolley problems may make it harder for us to imagine them in the right way”
Is the problem irrealism, as in the very low probability of encountering a trolley situation, or the lack of detail with which trolley cases are described?
Because it seems to me that what might be making it hard for us to imagine trolley cases in the right way is the lack of detail in their description. But that seems something different from the probability of a trolley case happening in real life.
Hmm, I don’t know. This line faces a simple dilemma: either (1) the TP’s value lies *solely* in its raising certain questions, or its value further lies in (2) providing a helpful or perspicuous example which can aid us in *answering* those questions. If it’s only the former, then the attention paid to the Trolley Problem is ludicrously out of proportion to its actual value. Many famous cases, real-world and otherwise, have long been available to help us ponder the significance of numbers and of the doing/allowing distinction.
This suggests that only (2) can possibly be a correct description of what philosophers have generally had in mind. So the idea is that trolley problems, unlike other sorts of cases, might be particularly helpful in answering questions like “whether the numbers of people being harmed or benefited matter to what we should do, and if so, why?” Well, how would they help us to answer this straightforwardly normative question about what “matters” to us in our actual decision-making? Most obviously, they could help us to get clear on what we actually believe, as part of an exercise in reflective equilibrium. But since sincere moral belief is deeply tied to motivation (on almost any meta-ethical view), the fact that real actions and professed responses to the TP come apart is evidence that our responses to the problem aren’t tracking our actual evaluative beliefs. So perhaps this is why Engber is “discomfited”.
I applaud you for highlighting the questions trolley problems are not meant to answer, but I’d like to add a caveat to your statement, “The main question is: what problems does this thought experiment and our answers to these questions raise for our understanding of morality?” If you’re looking for a strictly philosophical answer to that question, then you’ll be greatly misled (and will mislead others). There is a gigantic body of scientific evidence that our moral motivations are grounded in biology and social structures, and not simply in ratiocination from premises like J. S. Mill’s harm principle. The three questions you cite as right ones for the trolley problem all end with “why?” Again, if you’re looking for purely philosophical answers to “why?”, you’re looking in the wrong place.
The reason I might give (even to myself) for doing something doesn’t necessarily reveal my motive (or more accurately, the source of my motivation) for doing something. Indeed, the reasons I give could be confabulatory. Scientists are acutely aware that our motivations are quite separate and distinct from our verbally expressed reasons, and the relationship between the two is tenuous and often opaque. If you were to focus strictly on ratiocination upon verbally expressed reasons (as philosophers typically do), you are likely to find yourself misunderstanding, and compounding that misunderstanding, of what really motivates us.
Doesn’t the value of the trolley problem derive from the challenge it poses to the consequentialist theory of right action? An adequate theory of right action must correspond sufficiently well to our most tenable moral judgements, but consequentialism’s theory of right action doesn’t. This is because it tells us that the right thing to do in both the Trolley Driver and the Transplant cases is to save a net total of 4 lives. In the former case, saving a net total of 4 lives is for the driver to turn the trolley. This corresponds with a tenable moral judgment that most of us have. In the latter case, saving a net total of 4 lives is for the surgeon to kill the healthy patient who doesn’t consent to transplantation, but one of our most tenable moral judgments tells us that doing so is wrong. So consequentialism’s theory of right action is inadequate, until the problem (of explaining the moral difference between the relevant cases) is resolved.
If this is misguided, I’d be happy to be corrected…
Tram problems, and they are trams, not ‘trolleys’, which bring the tea, are frequently used in an attempt to force people into making utilitarian choices.
The artificiality is relevant, because it suggests an utterly unrealistic ability to know outcomes.
If we did know outcomes with certainty and precision, then utilitarianism would make sense, but we don’t, so it doesn’t.
If it was possible, Jeremy Bentham would have published an effective felicific calculus, and we’d all be using it as a matter of course.
By the same reasoning, then, personal prudence must also not make sense. After all, I can’t know with certainty and precision whether I’ll be better off saving this money in my pocket or spending it.
and they are trams, not ‘trolleys’, which bring the tea
Oh barf. Please, don’t try to impose local, parochial linguistic preferences on others. People use different words in different areas. If you can’t accept this, you are the problem.
“If we did know outcomes with certainty and precision, then utilitarianism would make sense, but we don’t, so it doesn’t.”
The Fallacy of Denying the Antecedent.
1. a streetcar powered electrically through a trolley — called also trolley car
2. a device that carries electric current from an overhead wire to an electrically driven vehicle.
3. a wheeled carriage running on an overhead rail or track
4. chiefly British : a cart or wheeled stand used for conveying something (such as food or books)
5. dialectal, England : a cart of any of various kinds
= = =
Surely, you are not denying the existence of American English? Only about 300+ million or so people speak it.
I’ve no doubt that it’s a valuable exercise to lead a student to reflect on any dissonances between what he thinks about a case, what he thinks in principle, and what he thinks about the available principles. But here are three /potential/ problems I see. First, (everything we collect under the epithet) the trolley problem gives the sense that the only, or the overwhelmingly, important aspect of the moral life is appealing to principles to decide what to do, or what the right thing to do is. Second, it turns the principles, and the great philosophers who elaborated them, into toys. Third, and related to the complaint that the cases are artificial, it’s not that truths we discover about the artificial cases won’t transfer to natural cases, it’s that the thick concepts we learned and grew up deploying in natural cases have little traction in the artificial cases, and the fact that they’re the concepts we’ve always used hides the fact that we’re not even sure what we mean when we talk about the artificial cases.
Nozick once suggested “utilitarianism for animals; Kantianism for people”. Perhaps the subjects were thinking along roughly these lines in the mouse experiment. I’d be interested in seeing something like a hypothetical trolley problem study, but with mice on the tracks.Report
Sorry, I didn’t mean that to show up as a reply.
If one thinks of morality in fundamentally particularistic terms, it seems to me that trolley cases are pretty much useless. And that wouldn’t just be true for actual moral particularists but for someone like Aristotle, who thinks that ultimately, doing the right thing is a matter of perception in particular circumstances. (See his “baked bread” analogy) It would also be true for someone like Ross, for whom actual duty is always a function of prima facie duty, plus consideration of the particular circumstances.
I must admit to being quite sympathetic with this view.
Not accurate about Ross, for whom the principles of prima facie duty were perfectly general and needed to have their content specified (see e.g. the long discussion of promise-keeping in Foundations of Ethics). The trolley cases bear on the specification of the duty of non-maleficence, in particular on how it differs from failures to fulfil beneficence. If you push the one in front of the trolley to save the five, you clearly violate non-maleficence. But what if you divert the trolley from the five to the one? Is that too a violation of non-maleficence or not? (If it is, you mustn’t divert.) To know whether any particular act is wrong we need to know which prima facie duties apply to it, and to know that we need to know more precisely what those duties say. The trolley problem raises just this issue about non-maleficence, since an account of it that looks plausible in the pushing case doesn’t look so plausible in the diversion one.
And if everything for Aristotle was a matter of perception, why did he give a mathematical formula for distributive justice (the ratio of what A gets to what B gets must equal the ratio of A’s merit to B’s merit)? Nothing very particularist there.
Again, one’s *actual duty* according to Ross is a function of considering one’s prima facie duties and determining which is most pressing given the specific circumstances in which one finds oneself.
Also again, Aristotle’s bread-baking example makes the relevant point.
I did not claim that either was a particularist. I claimed that there are particularist elements that render trolley cases somewhat useless for those who embrace their conceptions of ethics.
My account of the particularist elements in Aristotle’s ethics and especially the point about moral perception and the “baked bread” metaphor are at the heart of Sharon Eve Rabinoff’s doctoral dissertation at Boston College.
And, quoted directly from Ross, The Right and the Good:
“When I am in a situation, as perhaps I always am, in which more than one of these prima facie duties is incumbent on me, what I have to do is to study the situation as fully as I can until I form the considered opinion (it is never more) that in the circumstances one of them is more incumbent than any other.”
That quote actually makes my point. What Ross says we can’t know with certainty only arises once we’ve determined which prima facie duties are “incumbent on” us. But we can’t know which those are unless we know what the various prima facie duties say, and that’s what the trolley problem bears on. The question isn’t whether, assuming turning the trolley would violate non-maleficence, that duty does or doesn’t outweigh beneficence in this case. It’s whether turning the trolley *does* violate non-maleficence, i.e. whether non-maleficence properly understood applies to this case. Ross’s brief remarks about non-maleficence suggest that for him the difference between it and failures of beneficence rests on the doing/allowing distinction alone. If it does, then, given that there’s no doing/allowing difference between the pushing and turning cases, turning the trolley should be wrong if pushing someone in front of it is. But Ross would surely find that counterintuitive, and, if he did, he would have to revise his understanding of non-maleficence. His Foundations of Ethics contains a long discussion of the prima facie duty of promise-keeping, discussing various cases to determine which fall under it and which don’t. (He didn’t say it’s just a matter of “perception” what violates the duty of promise-keeping and what doesn’t.) He could and should have had a similar discussion of non-maleficence, and the trolley problem would have been relevant to it.
The trolley problem is cast in abstraction from the sorts of relevant contextual features that would allow a person to form the “considered opinion” as to which prima facie duty is more pressing *in these circumstances*.
That was my only point.
An interesting feature of Ross’s position is that while we can know with certainty what our prima facie duties are, we cannot know (with certainty anyway) what our all-things-considered duty is–and that is the only duty that ultimately makes an action right or wrong. This is mainly because of the difficulty of knowing the full consequences of our actions.
Agreed. And if you look at the excerpt I quoted from The Right and the Good, Ross says that this irreparable uncertainty is due to the fact that it can never be more than a “considered opinion” which prima facie duty is more pressing in a specific set of circumstances, when two or more conflict. Hence the uselessness of trolley problems in determining what our actual duty is (which is ultimately all that matters).
It’s one of the reasons I like Ross so much. Such uncertainty strikes me as being obviously true of real, lived moral life and belies the formulaic, top-down treatment offered by traditional consequentialism and deontology.
The Slate article link leads to an interview with Prof. Nussbaum on Medium. The correct link should be https://slate.com/technology/2018/06/psychologys-trolley-problem-might-have-a-problem.html.
Oh no! Thanks for pointing that out. I just fixed it.
I think trolley problems are generally terrible as pedagogical tools and not much better as aids to philosophical discussion. I used them in the past, but when I did, a good many of my students openly scoffed or challenged me on them. And a lot more seemed to shut down in more subtle ways. It was pretty clear that, for a lot of them, examples of that sort confirmed their worst stereotypes of philosophy: that it is completely disconnected from their real-life concerns and needlessly abstruse. These days I tend to go with real-world case studies that raise the same issues. Students are much more open to them, and if you do a little digging they are easy to find. The Buried Bodies Case and the Memorial Hospital Case, for instance, do a great job of raising the same sorts of issues trolley problems do.
I’m also highly dubious about them as tools of argument at higher levels. They’re supposed to pump intuitions, but I must confess that in their more bizarre variations I usually don’t have a firm intuition myself. I bet other people feel similarly. I also suspect that a lot of these hypothetical cases are set up in ways that sneak in assumptions that stand in need of argument or are false. I know it’s different, but it’s striking how easily most versions of the ticking time bomb hypothetical sneak in the assumption that torture works and works quickly (both of which are almost certainly false). To echo Ben’s thought (albeit in a more pointed way), I think anyone who does bioethics or applied ethics can easily think of real-world cases that raise the same issues trolley cases or similar outlandish hypotheticals do, and without their problems. I think that this is one of many ways higher-order ethics might have something to learn from applied ethics.
Sorry, I didn’t mean to post the same comment twice! I’m not that in love with my own opinions. I just thought the other one didn’t pass muster due to my use of, well, “bull,” but I guess it just had to be approved. I do feel strongly about the trolley problem, though, and I really kind of hate it (yeah, I know there may have been some doubt on that one).
In my Intro courses we do the trolley cases and it is invariably one of the best discussions that we have in the class, if not the very best. It is fun and also manages to address serious issues. But I like trolley cases and do not object to them at all as tools for doing normative ethics. Do you think it’s possible that you do a poor job when presenting the trolley problem to students because you personally hate it? I for one think that I am much better at presenting and discussing material when I do not think it is stupid or boring or whatever.
Last year, I asked Judy Thomson why she liked trolley cases. She said: ethics is complicated, so we need clean, simple cases to make sure we’ve singled out the issue we’re trying to study — otherwise we just talk past each other. I thought that was a great answer.
Of course, when you simplify, there’s always the risk of throwing out the wrong details. You never know in advance what’s going to end up mattering. But that’s also true when you’re making scientific models, and no one thinks we should stop doing that: the risk of oversimplifying is well worth it, because how else are you going to isolate the thing you want to study? You just have to go ahead while staying open to the idea that maybe the thing you’re studying is more complicated than you’re giving it credit for.
But are the people working on the trolley problem open-minded enough? Do they seem concerned that, maybe, their cases are too abstract, that they’ve left out key details? Yes — definitely! Just read Foot’s original paper, which is all about finding new texture in old cases, or Thomson’s chapter in The Realm of Rights, which is wonderfully sensitive to all kinds of possible factors: consent, liability, culpability, social contract, degree of harm, manner of harm, group membership — the kinds of meaty moral issues from which “trolley problems” are supposedly distracting us.
In response to Dan Kaufman: I second Tom, but let me add that Ross himself relies on intuitions about wrongness at key moments of The Right and the Good. For example, in an argument against consequentialism, he says that we can’t break promises for the (only slightly) greater good; elsewhere, in distinguishing beneficence from non-maleficence, he says that it’s wrong to kill one to save one. If these *extremely* abstract cases are respectable, shouldn’t Thomson’s cases be, too?
But I would argue that Thomson’s own most famous work shows how bizarre hypotheticals are slippery and misleading. Take the abortion article. At one point Thomson gives us the bizarre case of Henry Fonda suddenly being able to miraculously heal with his touch, and relies on the intuition that an ailing person has no right to the healing touch of Fonda to argue, basically, that there’s no duty to perform easy rescues. But of course most of us have no such intuition, as evidenced by the reaction of any sane person to Singer’s much more realistic child-in-the-water case. The Fonda hypothetical is so bizarre that we just don’t have an intuition about it, but in fact we have a pretty clear intuition about the sorts of cases it’s supposed to represent. Besides getting exactly the conclusion Thomson wants, what’s gained by using this case?
Or take the violinist case itself. It assumes, with no argument whatsoever, that abortion is better described as a letting die than as a killing. However, that itself is a huge point that stands in need of argument.
Finally, the whole argument mostly ignores one of the biggest arguments for a right to abortion, which is the right to bodily integrity. Yes, it touches on it, and it’s in the background, but it never really focuses on it. When I have to teach that article, I’ve usually had to fall back on more realistic cases, like organ donation, that highlight the same issue, to keep students from basically laughing me out of class. Why not focus on the fact that most of us think we have no duty to give parts of our liver or our kidneys or even bone marrow to anyone, and that forcing us to “donate” organs or even marrow would be a gross violation of our rights? This seems to get at Thomson’s more defensible points much more directly.
(1) Whether “we” just don’t have an intuition about the Fonda case is an empirical issue, isn’t it? Also, how can the case get Thomson the conclusion she wants unless it pumps the intuition?
(2) Thomson explicitly takes on the killing v. letting die issue, using the violinist case. She says, for example, that whatever else is true, it can’t be true that you would be murdering the violinist by unplugging him. Whether she argues for this claim depends on the extent to which thought experiments can serve as premises in arguments in the first place, I think.
(3) Thomson explicitly says that she’s objecting to the pro-life argument that begins with the premise that the fetus is a person. (But she’s not objecting by denying that premise.) She’s not trying to give a thorough defense of abortion, despite the name of the article. So the bodily integrity stuff can be left for another day, as can the question about whether the fetus really is a person.
1. My issue isn’t with pumping intuitions. It’s with using artificial cases to try to do so, and with focusing on a highly artificial case when there are more realistic ones. And my objection is precisely to using carefully designed artificial cases to get exactly the conclusion one wants. This all smells a bit like arguing from conclusions to premises to me. I suspect that if one’s argument absolutely depends on such intuitions, it’s not a good one. The issue isn’t whether I can get the conclusion I want, but whether I have a good argument for it.
2. This completely misses the point. It’s pretty clear that the violinist case as Thomson sets it up is best described as letting die. The huge question, which Thomson doesn’t even bother to address, is whether abortion is likewise best described as letting die rather than killing.
3. To the extent she has an argument in the article, it depends on two unargued (and I think likely wrong) premises: that there’s no duty to rescue, and that abortion is letting die rather than killing. To the extent the argument can be salvaged at all, I think it has to be something like: duties to rescue are relatively weak and can easily be trumped by a right to bodily integrity. Anyway, my whole point is that the two main premises the argument needs are snuck in by the choice of examples. In that the article obscures those issues, I’d honestly argue that it’s a huge step back in the whole debate, whatever side you come down on. If you’re pro-life, you ought to push the duty-to-rescue stuff and especially the claim that abortion is killing rather than letting die. On the other hand, if you’re pro-choice, precisely what you need to defend is the claim that duties to rescue are weak and trumped by a right to bodily integrity, and that abortion is best described as letting die rather than killing. Or I guess we could step back and go back to the arguments about personhood. But whatever angle one takes, I have a hard time seeing why this article represents progress. It muddles the issue and obscures the questions that need answers. That seems to be exactly the opposite of what good philosophy ought to do, and it’s especially at odds with the ideals of analytic philosophy. And it does this precisely because of its reliance on bizarre hypotheticals. The use of such hypotheticals does not bring anything like clarity here, and it certainly doesn’t allow us to focus on the important issues while putting the extraneous ones aside.
Fair enough, but with respect to what the violinist case does and does not accomplish, despite or because of its far-fetchedness:
There are at least two claims that one might try to undermine with the violinist case:
(1) If x is a person, then everyone must always do everything in their power to preserve x’s life.
(2) If x is a person, then it is wrong to “directly kill” x.
You seem to think that it fails miserably with respect to (2) because it’s not a case of direct killing, but rather a case of letting die. Fair enough. Let the arguments begin. But I don’t see how its outlandishness is at fault; maybe it’s not analogous enough to an abortion case, but if so that’s not because it’s contrived. And doesn’t the case do a pretty good job of undermining (1) at any rate? Since her main strategy is to challenge the “easy” argument that abortion is wrong because a fetus is a person, undermining just (1) gets her *some* of the way there, doesn’t it? Whether her case against the standard pro-life argument is conclusive is not dependent on whether she’s using artificial cases, is it?
Sam: notice that you didn’t engage with any of the ideas in my comment–just the name “Thomson!”
(Where did I say that we should prefer *unrealistic* cases?)
Brian, 1 and 2 are ridiculous. I can’t think of anyone in the history of moral philosophy who’s seriously entertained, much less defended, either of them. If the article is just undermining them, then it’s an egregious example of knocking down straw men. The really relevant claims here are something like: 1. All things being equal, it’s wrong to let someone die or to kill them. 2. However, it’s much easier to justify letting die than killing, so less weighty reasons are needed for the former than the latter.
Daniel, yes, I suppose I did go off on a bit of a tangent. However, Thomson’s work irritates me because it shows a decided preference for bizarre artificial examples where realistic ones will do. I’d go on a similar tear with Nozick, Parfit, or Kamm. I don’t think simplified cases are as innocent as you let on. My worry is that the details people tend to leave out are precisely the ones that don’t fit with their conclusion. The facts that are put aside are much more often inconvenient than they are irrelevant. I suspect that many philosophers like trolley problems and similar sorts of cases because they let them debate an issue only in terms that they choose. Trolley problems and other artificial examples have a lot more in common with sleight-of-hand tricks than they do with scientific models.
OK, then please explain why/how the *fantastical nature* of the violinist case makes it so that the case cannot undermine your (1). Again, maybe the violinist case doesn’t work, but I can’t see how its supposed failure is due to it being super-fictional. I thought that its sci-fi-like quality was one of your main problems with it. So far, your complaint seems to be that it’s a bad case, but your reasons for thinking it bad don’t seem to have anything to do with it being far-fetched. But maybe I’ve just misunderstood your point.
I always thought the Fonda stuff was odd. Insofar as I have an intuition about a case where someone can save a life with the touch of a hand, it is very much the opposite of the one Thomson seemed to want to elicit. If it were really true that A’s hand on B’s fevered brow would save B’s life, I’d have no compunction whatsoever about grabbing A by the wrist and dragging them over–and I’d be morally scandalized by the suggestion otherwise (oh no, we must let B go, sadly A can’t be bothered and we must respect that… nope nope nope not having it)
Did other people feel otherwise? I’m curious. That article is so celebrated–and I don’t begrudge it that, to be clear–but that was far from the only example where my reaction both as an undergrad and now was, well, if it really worked like that, then yeah…Report
For what it’s worth, I’m also skeptical of the Henry Fonda example, as well as her later objections to rights to aid in The Realm of Rights. (Kamm has a great response in Morality, Mortality II.)
But I don’t share Sam’s hostility. For me, it was a healthy shock just entertaining the idea that we don’t owe aid to others, a bit like reading Taurek or Singer for the first time. Mind-expanding, even though I wasn’t persuaded in the end.
Also, to her credit, Thomson does argue that there are costs to saying Henry has to help. If he owes it when he’s in the room, shouldn’t he owe it when he’s across the country? If so, can’t we kidnap him? This is like Singer’s pond argument in reverse–and I don’t think it’d be hard to dream up more detailed cases. Once your imagination gets going it starts to sound like an episode of Black Mirror…
I guess it is an interesting case. But to me it seems clear that the difference between marching him across the room and flying him across the country, say under escort, is just how much forcing him to aid interferes with him living his life. To me it seems that we do have a duty to perform easy rescues–that is, ones that cost us little–and I don’t see a problem with the state enforcing that duty. The difficulty, of course, is specifying what costs us little and what costs us a lot. One of the reasons Thomson’s article irritates me is that it hampers us having that discussion.
As much as I like Black Mirror, I think that comment might be more revealing than it seems. Fictions (and hypothetical examples are just that) can be helpful in philosophy, but they’re also dangerous. I’m aghast at how many of my students reference, say, “The Purge” or “Zero Dark Thirty” in making their arguments. And lest one think this is limited to 19-year-old community college students, I remember one PhD student in UVA’s political science program lecturing Jeremy Waldron in a post-talk Q and A about the silliness of his views on torture. And her reasons were nothing more than a lot of references to “24.”
1. Dan, yes it brings out what I think of as the tragedy of morality, namely that it is supremely important that we act rightly but very difficult to know that we have.
2. Pedagogically, I would teach the trolley problem to get the abstract issue across, but follow it up with a real-life problem (say, Hiroshima) to bring out the complexity of lived morality and the limits of philosophical toy examples.
3. If I may self-cite, I recently posted a short essay, “Absolute Deontology,” on my blog about why the notion of prima facie duty is not the best way to handle the kind of problem Ross wants it to handle, mainly dealing with lying.
“Whether the trolley problem is realistic, that is, whether trolley systems are engineered to avoid these kinds of problems, whether one is very unlikely to encounter any trolleys in one’s life, whether one is unlikely to be confronted with a situation in which they must quickly choose between options in which different numbers of people die”
Like others, I think this is much too quick. It’s a substantive and highly contestable claim that morality is unified enough for questions about what one ought to do in recherché emergency cases to (a) have determinate answers that (b) reflect plausible, mid-level principles (e.g. the DDE or whatever) which in turn (c) reflect the correct moral theory.
I’m inclined to think that, to a significant extent, the annoyance or bemusement lots of non-philosophers feel at the irrealism of trolley problems tracks the serious theoretical grounds for doubting at least some of (a) through (c).
This is important for public philosophy, which is a lot more fruitful and respectful if we’re careful to keep in mind that what looks like laypeople being simplistic might actually be evidence of us being dogmatic.
Sorry, late to this party but saw it in the New Year’s round-up. Some additional thoughts against the criticism that trolley problems are so fake and therefore useless. Excerpt from my article:
“…Thought experiments are designed to be “intuition pumps,” probing the limits of what we believe to be true. They isolate and stress-test particular beliefs in artificial scenarios, because real-world scenarios are often too messy with uncontrolled, entangled variables.
But guess what? That’s exactly what science experiments do, too, and no one has a problem with their method. Science experiments often are made-up scenarios designed to isolate, control, and test certain variables, because the real world is too messy.
As one of countless examples, think about experiments in which different drugs were given to spiders to see how they affect web-spinning. In nature, this would be difficult to study, because spiders aren’t drinking coffee, lighting up marijuana joints, or taking sleeping pills out in the wild. But with a contrived set-up or experiment, we can isolate and control the variables of interest, and the results are fascinating.
But should we dismiss the experiment as useless, disingenuous, or maybe “pure empirical masturbation dressed up as science,” just because spiders don’t naturally drop acid? Obviously not. Yet otherwise smart people make that mistake with thought experiments in philosophy and ethics. They criticize the lack of realism, and the most charitable explanation is that they’re not familiar enough with the subject and its methods…”
I think there are some important disanalogies between ethical thought experiments and experiments in science. For one, experiments in science are bounded by empirical realities in a way that thought experiments are not. Science experiments are accountable to the way the world actually is, while thought experiments are accountable only to the imagination and intuitions of the engineer of the thought experiment/the person being tested.
To use your example, while it’s true that spiders don’t take LSD in the wild, the effects of LSD on their neurological and behavioral patterns can tell us useful things, both about LSD and neurology and behavior (of spiders in general at least, but possibly also in a more general sense).
But what, exactly, do the results of X-Phi trolley problems tell us? Best-case scenario, that a large number of people share a certain set of intuitions about cases that have no clear application to real life; bad-case scenario, that a clever philosopher can engineer an intuition pump to elicit a desired response from the folk; worst-case scenario, they mislead a generation of philosophers, policy makers, and the public at large regarding how ethical deliberation actually works.
The article I had linked to above answers your question about what the trolley problem is supposed to show. The basic point is that it forces technologists to confront the hidden nuances in a design/crash decision (which they are /not/ doing today), whether for a dramatic no-win situation (hopefully rare) or an everyday scenario of where to position an autonomous vehicle in its own lane when there are pedestrians on both sides.
To be clear, I’m not suggesting that ethicists can simply deliver the “right solution” to industry; all we can reasonably hope for is that industry gives serious forethought to how AVs handle/create risk, which is demonstrated with a proactive defense of the design principle in question (again, which is not done today; the Obama administration’s NHTSA did ask for voluntary ethics explanations from manufacturers in late 2016, but the current administration quickly reversed all that guidance). As you probably know, ethics isn’t just about finding an answer to an ethics problem; it’s also about the process of arriving at an answer. It’s like showing your work in math.
The trolley problem with AVs has been invoked in x-phi work, but I didn’t address that at all in the article; that’s a different issue/problem. I wasn’t suggesting that survey results should drive a design or policy decision, though it’s also not irrelevant, as I touched upon in this article: https://www.forbes.com/sites/patricklin/2018/10/29/does-ai-ethics-need-to-be-more-inclusive/
Excerpt: “Nonetheless, public attitudes are a necessary ingredient in practical policymaking, which should aim at the ethical but doesn’t always hit that mark. If expert judgments in ethics diverge too much from public attitudes—asking more from a population than what they’re willing to agree to—that’s a problem for implementing the policy, and a resolution is needed.”
The Trolley Problem isn’t one: https://samirchopra.com/2018/09/17/hms-ulysses-and-the-trolley-problem/
Nice one, Samir.
Thanks, Patrick! Thanks for reading the post.
Just an alert that, yesterday, the “Thought Experiments” entry in the Stanford Encyclopedia of Philosophy was significantly updated (though it does not mention the trolley problem):
I think 99% of people would be unable to pull the lever due to the psychological impact of having to decide to take an action that will result in even a single death.
Morally, we are generally taught that every life has an infinite value; therefore there are grounds for a rational justification of inaction. One multiplied by infinity is no less than five times infinity. And I am sure that person will be grateful.
It is, after all, essentially a setup for how the decision will be evaluated after the fact, and we, being human, will investigate the character and circumstances of each victim vs. potential victim. That is inevitable. We would actually take a microsecond, with our hand hovering over the switch, to think about who is tied to each track and why, an impossible task. Who could doom a stranger by choice, without knowing them, even to avert some other certain, greater tragedy? Morals tend to seek certainty, and the only pure certainty here requires inaction. Fate and circumstances doomed five people and spared one. Ethics says throw the switch and do the greater good. I hope you can live with your decision.