Where Should Moral Philosophy Begin? (guest post) (updated)


“In thinking about trolley problems, to what extent have you put yourself in the shoes of the person at the switch… and to what extent have you put yourself into the shoes of those tied to the tracks?”

In the following guest post, Avram Hiller, associate professor of philosophy at Portland State University, notes the tendency of moral philosophers to begin from the perspective of the agent (what the agent should do or how the agent should be), suggests that this methodological assumption could have substantive implications, and encourages us to begin instead with the perspective of those acted upon—the moral patients.

A version of this post was the inaugural entry at Professor Hiller’s new blog, Philosophy Happens. It’s on Substack. Check it out and subscribe.

This is the fourth in a series of guest posts by different authors at Daily Nous this summer.

(Posts in this series will remain pinned to the top of the homepage for several days following initial publication.)

Two people holding hands while walking on a rocky promontory at Turnagain Arm, Alaska.

Photo by Avram Hiller, Creative Commons License CC BY-NC 4.0.

Where Should Moral Philosophy Begin?
by Avram Hiller 

How ought one live? That is a question posed by Socrates (or, at least, suggested by Socrates, at Republic 352d and 618b-c and Gorgias 500c). Bernard Williams writes (Ethics and the Limits of Philosophy, 4): “Socrates’ question is the best place for moral philosophy to start.” Is he correct?

Of course, as Williams recognizes, there are some alternatives: How should one act? How should I live? How should I act?

One might reasonably have some concerns with the individualism implicit in all these questions, in their use of “one” or “I”. That’s because so many actions are collective ones, and thus it might be at least as important to ask: What should we do? Or, how should we live (together)? Hopefully, one has friends, and one can chat with them about this stuff. Taking a non-individualist starting point may make a difference in where we end up—something I hope to explore on another occasion.

There are three relevant dimensions here: singular I vs. plural we; I specifically vs. agents in general; and how to act vs. how to live. There are thus eight combinations, and all of them make for really good questions. However, I don’t think that any of them is the best place to start. They all ask about how to live or act, but I think that moral philosophy should start from something broader.

Take Williams’s famous example of “Jim and the Indians” (from A Critique of Utilitarianism §§3-5). If Jim has an option to kill one person to prevent someone else from killing twenty, Williams writes that it is at least not obvious that Jim should do it (even if, in the end, that’s the right choice). However, it is obvious, for the utilitarian, that Jim should do it. There is thus something problematic, says Williams, about utilitarianism. The example is intended to show that there are relevant reasons other than utilitarian ones that should be factored into the decision.

Now, I’m not sure how strong an argument this is. Some mathematical theories deliver answers which are obvious under the theory but not obvious when looking at the problem intuitively, and that isn’t a problem for the mathematical theory. I’ll set that concern aside, though.

Here’s my main worry. I was chatting with a fellow philosopher who is non-white. He said that his immediate reaction to the Jim and the Indians thought experiment was not to view the situation from the perspective of Jim—which, all along, was the perspective that I myself had taken. (Perhaps it’s relevant that I have family members named Jim, but no family members from an indigenous South American tribe.) Instead, my friend thought of it from the perspective of the 20 people of color who were hoping and praying that they would not end up being killed. From that perspective, it is indeed quite obvious what Jim should do (even if it is not obvious what Jim will do). One hesitates only when looking at the question from the agent’s perspective.

So: Why should we begin moral philosophy from the perspective of asking what an agent should do (or how a person/group should live)? Perhaps we do so because we, reasonably enough, come to philosophy seeking answers to questions that we ourselves have about what to do or how to be. But I’d like to suggest that we begin moral thinking by including the perspective of those acted upon—the moral patients. (Note: Tom Regan used this phrase to refer to those who are only moral patients and are not also moral agents, but I am using it to refer to anyone who deserves moral consideration.) In life, to what extent is each of us Jim, deciding what to do, and to what extent is each of us one of Williams’s Indians, whose life is affected by the actions of others? And because there are many more Indians than there are Jims (in Williams’s example, and, taken proverbially, in many of life’s decisions), most of our moral consideration should come from their perspective. Call this the patient-centered perspective in moral theorizing.

This is not the same as claiming that we should take the “point of view of the universe”—whatever that might be—in our ethical theorizing.

Rather, I’m saying that in generating moral theories, especially when using thought experiments, we should see ourselves not just as moral agents but as moral patients—even if doing so isn’t (directly) action-guiding.

I’m not suggesting that central traditions in Western moral philosophy fail to consider the perspective of moral patients—of course they do. And in principle, the end point of moral inquiry could be the same regardless of one’s point of departure. So in what way will adopting a patient-centered starting point make a non-trivial difference in where moral theory will end up?

There are, I think, important consequences. One of the main ones, as suggested above, is in how we should approach thought experiments. Typically, moral agents are more salient to us in thought experiments than moral patients. Ask yourself: in thinking about trolley problems, to what extent have you put yourself in the shoes of the person at the switch (or on the footbridge) deciding whether to flip it (or to push the other person off), and to what extent have you put yourself into the shoes of those tied to the tracks? There is a danger that the results of a thought experiment will be biased if some aspects of it have more salience than others. Taking a patient-centered perspective, we mostly set our own position as the deliberative agent to the side and consider actions in a more neutral way, making the patient(s) just as salient as the agent.

Here’s another way in which patient-centered moral theorizing is tangibly different from an agent-centric approach. The excellent Stanford Encyclopedia article on Williams quotes him as dreaming of “a philosophy that would be thoroughly truthful and honestly helpful” (Philosophy as a Humanistic Discipline, 212). As is typical, Williams means something rather complicated in the original context (in this case, concerning the direction of philosophical inquiry). But it prompts us to ask: How can moral philosophy be helpful? It seems to me that Williams intended his academic work on ethics to be helpful to the moral agent in deliberation. But if we moral philosophers care about being helpful, I can’t see why helpfulness should extend primarily to people-as-agents rather than to people in general. Everyone.

I haven’t said what I think the question at the beginning of moral philosophy should be. Alas, there is a lot of context-sensitivity in proper question-asking: if one is a student asking someone they think is a wise person for advice, then perhaps “How should I live?” is indeed the way to go. But for those engaged in discussions of moral theory that are typical of academic philosophy in the contemporary English-speaking tradition, the question moral philosophy should start with is something closer to: How should things be for us?

That’s not quite it, however. Who is us? Who are the moral patients? We shouldn’t make assumptions at the outset as to the answer to that question—maybe it’s all persons, maybe it’s all sentient beings, maybe it’s all living things, maybe it even includes relationships or biotic systems as a whole. In light of that, the best question from which moral philosophy should begin is something like: How should the world be?

And this, it seems, points in the direction of consequentialism. If you approach the Jim and the Indians thought experiment by asking, “Is the world a better place with twenty people dead or one?”, while barely considering which agent is responsible, you’re going to be a lot more likely to end up a consequentialist. This approach admittedly, and I suppose unabashedly, rejects the Kantian idea that it is my agency that matters in my practical deliberations. But it might be worth asking yourself: insofar as you agree with that Kantian idea, is it because you have assumed that moral theory should be concerned, from the very start, with how one should act? As a Bizarro-Kant might wonder, what makes you—qua agent—so special, so as to treat yourself in an exceptional way?

Something similar can be said about virtue ethics: if you think that what is central in moral philosophy is character, perhaps you are operating under the starting assumption that moral theorizing is about giving advice to individuals about how to live their lives. It is that assumption that the patient-centered approach calls into question.

There’s of course a lot more to say about all this! My brief ruminations in this post certainly don’t clinch the case for consequentialism against Kantianism or virtue ethics. Even when taking a patient-centered approach, one might still end up with the view that the world should be such that I always respect others’ dignity, or that the world should be such that I have certain virtuous character traits.

Last, I’ve discussed moral philosophy here, but something similar can be said about contemporary epistemology. A great deal of epistemology in the last 60 years has asked: “What does it take for S to know that P?” My own sense is that instead, we should begin inquiry into epistemology by asking something like: “What does it take for P to be known by S?” This approach, which can be called patient-centered epistemology, can shed a lot of light on important epistemological questions. (Spoiler: it lends some support to a knowledge-first view.) But that too is a topic for another day.


UPDATE (7/26/24): Professor Hiller follows up on this post and some commenters’ critiques here.

46 Comments
Travis Figg
20 days ago

If you think of morality according to those rules or principles that everyone could reasonably agree to in principle (maybe with some additional conditions and caveats thrown in), then it makes sense to approach Trolley or Jim by considering what action you could reasonably endorse if you did not know what role you might play in the scenario, so that you should conceive yourself as possibly occupying any of the positions — you could be one of the 5 in Trolley, or one of the 20 in Jim.

Daniel Muñoz
20 days ago

Some nice points in this post, but I don’t think it’s “typical” for moral philosophers to downplay the patient’s perspective.

For example, here’s Korsgaard:

This is why it is important to establish, if you can, what the Indians themselves think should happen. Suppose the oldest Indian steps forward and says, “Please go ahead, shoot me, and I forgive you in advance.” This does not make things wonderful but it does help. Very roughly speaking, you are not treating him as a mere means if he consents to what you are doing. Of course, the Indian does not in general consent to be shot, and his gesture does not mean that after all he has not been wronged. In the larger world he has. But if you and the Indians are forced to regard Pedro and the captain as mere forces of nature, as you are in this case, then there is a smaller moral world within which the issue is between you and them, and in that world this Indian consents. On the other hand, suppose the Indians are pacifists and they say, “We would rather die than ask you, an innocent man, to commit an act of violence. Don’t do what the captain asks, but go back up north, and tell our story; make sure people know what is happening down here.” Now the decision not to shoot looks much more tempting, doesn’t it? Now you can at least imagine refusing. But you may still take the rifle from Pedro’s hands and say, “You cannot ask me to kill to save you, and yet I will,” and pick an Indian to shoot. This is a different kind of decision to kill than the earlier one, for it involves a refusal to share the Indians’ moral universe; from the perspective of the Indians who live, it has a slight taint of paternalism.

Christine Korsgaard, “The Reasons We Can Share,” in her Creating the Kingdom of Ends, p. 296.

Avram Hiller
Reply to  Daniel Muñoz
19 days ago

Thanks for this! I was actually planning to do a follow-up on my blog where I discuss Korsgaard and Nagel – so I appreciate your raising this here.

I definitely think that Kantian and other views don’t downplay the perspective of patients, per se, but I think that there’s a difference in where that consideration enters the picture. (A difference that deserves more elaboration than what I say in the blog post.)

What Korsgaard says in CKE tweaks the example from how Williams presents it originally. (For Williams, none of the Indians want to be killed, and you just have the binary choice.) Korsgaard wants to show that the Kantian principle is something like “don’t harm someone without their consent” (which requires finding out their own perspective), and so consideration of their perspective comes into play. And K nicely gives these modifications to the thought experiment, I think, to give a more humanistic sense of the situation, and thereby make the Kantian/deontological view more palatable.

But in Williams’s case, if you aren’t in fact able to obtain consent from any of the Indians, then it still is the case for the Kantian that you shouldn’t kill the one to stop another person from killing the 20. And that (a) seems very much like a horrific answer, and (b) comes from privileging Jim’s own position as the deliberative agent over the perspective of the 20, who don’t really care who is at the other end of the gun that kills them.

I know that this is kinda quick, so I’ll try to say more in a later post on my blog. Still, I’d be happy to hear your response to what I say here.

Daniel Muñoz
Reply to  Avram Hiller
19 days ago

Thanks for the reply, Avram! This helps me see what you mean by “privileging” the agent’s position.

That said, I don’t think we’re necessarily privileging Jim when we say he should refuse to kill the one Indian to save the whole group. We might instead be privileging the one, by insisting on their inviolability.

Avram Hiller
Reply to  Daniel Muñoz
19 days ago

Daniel – that’s helpful. My thought (or maybe I should say, “my contentious diagnosis”) is that the underlying reason why some moral theories privilege the inviolability of the one patient in such cases is that there is too much focus on the agent from the outset.

This can, perhaps, be seen especially in Williams’s example, where the one Indian is actually one of the twenty, and thus will be killed either way. So Jim can’t really be privileging the inviolability of that one patient in deciding the one vs. the twenty. It seems that the main difference in Jim’s two options in regard to that one individual is in which agent directly does the violating. So my sense is that the Kantian response to the example arises from having too great a focus on the agent, not the patient.

Daniel Muñoz
Reply to  Avram Hiller
18 days ago

I think this gets to the heart of the issue, Avram.

In my view (which I got to from reading Kamm), Jim can be privileging the inviolability of the one. It’s just that he’s doing this in a deontological rather than a consequentialist way. He’s not trying to prevent the outcome that rights are violated. He’s trying not to violate rights.

Philippa Foot has a nice analogy. Suppose you’re at a dinner party where someone is being rude. The only way to shut them down is to be a bit rude yourself. Would a concern for etiquette require you to minimize rudeness? Not necessarily. Etiquette might not be a matter of minimizing rudeness or any other outcome. It might instead be (partly) a matter of following some nonconsequentialist rules, like “don’t rudely shut someone down.”

Michael Kates
Reply to  Daniel Muñoz
17 days ago

But isn’t saying that, “He’s trying not to violate rights,” just another way of demonstrating the point that this view is privileging the agent’s position?

Daniel Muñoz
Reply to  Michael Kates
17 days ago

I don’t think so! Even on patient-centric theories, shouldn’t people try to avoid doing wrong things?

On a deontological theory like Kamm’s, killing is wrong (even to prevent a killing) not because of who the agent is, but because of what it does to the patient. That’s what makes the theory patient-centric.

By contrast, on an agent-relative consequentialist view, you might refuse to kill (even to prevent two killings) so as to avoid getting blood on your own hands. That’s agent-centric.

Maybe it’s a matter of emphasis. Instead of “*he’s* trying not to violate rights,” I’d say “he’s trying not to *violate rights*.”

Avram Hiller
Reply to  Daniel Muñoz
18 days ago

[I’m putting this response here because comments in a thread with multiple replies are put into narrow columns, and so this might be easier to read.]

Thanks Daniel – your last comment does help clarify things. But I think I probably wasn’t clear enough about what I meant by ‘patient-centered’. (I should probably pick another name. And if you or anyone else has a suggestion, I’d be happy to hear it!)

There is certainly an important sense in which the Kantian edict, “don’t violate anyone’s rights”, is other-directed, and not consequentialist. But that’s a bit different from what I have in mind with the phrase ‘patient-centered’, which is a claim about methodology in moral theorizing.

Here’s Korsgaard in the first chapter of Sources (p. 25):
“Kantians believe that the source of the normativity of moral claims must be found in the agent’s own will, in particular in the fact that the laws of morality are the laws of the agent’s own will and that its claims are ones she is prepared to make on herself.”

And Korsgaard is explicit there that her concern is to respond to the question “Why should I be moral?”, which is agent-centric. (And, perhaps not coincidentally, that question looms large in Socrates’ conversations in the Republic and Gorgias.)

And my thought was that if one starts moral philosophy with that kind of question, then there’s a good chance that one will end up with the wrong answer.

I won’t say a whole lot more here, but I will note that I probably have a different reaction to Foot’s dinner party case than you do. It reminds me of that old trope in some teen comedies. The main character is near the back of the classroom, but the annoying guy behind him is talking and bugging him and distracting the other kids in the class. However, the doofus teacher isn’t noticing what is going on. Then our main character finally breaks down and says “Shut up!” to the annoying guy. But the teacher does notice this, and says to our protagonist, in a snarling tone, something like “Mister [last name], there is no talking in this class. Go to the principal’s office!”

How should we feel about this case? I don’t know what your reaction typically is, but mine is always, poor kid! Here is this stupid absolutist rule being enforced by an inattentive and mean teacher. Seemingly, ideally, the rule really would be more like “no talking in class, except to limit others’ talking”.

There are probably decent practical reasons why the teacher has adopted the exceptionless rule: the teacher wants to maintain control of class, and a rule with that exception clause might be too easily abused or too hard to enforce.

But that would go to show that what appears to be an exceptionless deontic rule really arises out of an interest in promoting the most overall good (and not from some kind of independent deontic foundation).

I would want to say the same thing for norms like the one Foot describes: they are either inapt norms, or they work in the service of something other than a purely deontic principle. And I also think that’s the best way to think of Kantian norms governing what Jim should do.

I’m guessing you’re still not with me in these judgments – there’s certainly a lot more to say, on both sides! But I hope it gives more of a glimpse into one of the main points in the post, which is that if (A) one starts moral philosophy from an agent-centered place, then there’s a good chance that (B) one will end up with flawed results. Maybe this helps show why I think (A) and (B) are the case, even if it doesn’t say much about the conditional linking the methodology in (A) to the result in (B).

Daniel Muñoz
Reply to  Avram Hiller
18 days ago

Super helpful. (And I like the doofus teacher case.) Thank you, Avram!

Avram Hiller
Reply to  Daniel Muñoz
17 days ago

Thanks very much for helping me think more clearly about these things!

Neil Sinhababu
Reply to  Daniel Muñoz
3 days ago

As Avram notes, Korsgaard’s response involves changing the case so that the Indians are able to engage in complicated agential performances. It would be like utilitarians coming up with infrastructure solutions to trolley problems – just build another side track to turn the trolley where it won’t run over anyone, and the problem is solved! The original case that Avram brings forward from Williams is more applicable to important problems today.

In this big interconnected world, we often affect people whose individual actions we can’t know about. The individuals who will lose their harvests and homes to climate change, or whom we might kill with weapons of war, rarely are in position to address us. We already know that they need food, shelter, and survival, as they are human beings. The significance of their needs should guide our actions, regardless of whether they can direct complicated agential performances at us.

BCB
19 days ago

So, you think an appropriately patient-centered approach to moral philosophy should start from questions like How should things be for us? or How should the world be?, which seem to favor consequentialism. But why these questions, rather than (e.g.) the arguably more patient-centered What can we reasonably demand of each other?, which doesn’t favor it (or, at least, doesn’t favor it so directly)?

Kenny Easwaran
Reply to  BCB
19 days ago

Do you think “reasonably demand” is more patient-centered than “hope for”? It seems to presuppose a particular form of interpersonal agency on the part of the supposed patient.

contractualism-curious
19 days ago

I wonder if Avram would agree that contractualism does a pretty good job respecting the dictum that we should take a (more) patient-centered approach in normative ethical theorizing.

Contractualist reasoning asks us to consider the perspective of all who stand to be affected by a particular action, and take seriously the reasons they each might have to object to (principles that permit) the action in question. This seems pretty patient-centered to me. Of course, it also takes into account the potential objections of the agent. But to completely ignore this perspective seems problematic in its own way!

This is not meant to be a gotcha question–just an earnest attempt to understand what might or might not count as appropriately patient-centered.

Avram Hiller
Reply to  contractualism-curious
19 days ago

Travis Figg, BCB, and contractualism-curious all raise good questions about the relationship between this approach and contractualism.

I don’t favor contractualism, for reasons independent of this post, but I’m sympathetic with what all of you are suggesting. Contractualism seems to be a response to something like the question “How are we to live together?” My approach, I think, is similar, but is open, at the outset, about who comprises the we – is it just rational agents/those who can form contracts with us, or is it open to including other (potential) moral patients? If so, then that kind of broad-minded contractualism might indeed satisfy the methodological constraint I am pushing for in the post. As such, the question “How are we to live together?”, given a suitable expansion of the “we”, becomes more or less equivalent to “How should the world be?”

Kyryll
19 days ago

I think there are two, probably old problems with this suggestion.

First, there is an epistemic problem with this heuristic: if we start thinking about “how should the world be”, we, if humble enough, should end up conceding that we have no clue (apart from the cases where the answer would be clear without using this heuristic). So, although I very much agree that good practical deliberation should include considerations about the patients, I don’t see how the big question can contribute to deliberations. But it might be a shortcoming of my imagination.

Second, imo the answer to the question of what makes you/one so special is simple: it is the ability to get motivated by the things you encounter / the results of your reasoning and actions. If we use the “overall state of the world” as the benchmark unit for our practical deliberation and, more broadly, our evaluations, then almost the entirety of the actions made by a regular human being throughout their life would have no direct repercussions. Whereas the state of one’s mind, of one’s significant others, and more generally of the people around one is directly observable, and so can regulate one’s behaviour.

This paves the way to an alternative answer to the question of where moral philosophy should start: not normative ethics, but moral psychology.

Avram Hiller
Reply to  Kyryll
19 days ago

I’m sympathetic to some of what you say here. My sense is that you are right – we can’t grasp the whole world, and any moral theory needs to be realistic about the limits of human psychology. (In fact, I myself have a paper that tries to incorporate this consideration.) But I don’t think that causes a problem for the idea that we should start by asking how the world should be. We should do our best, and not turn our back on that project just because we won’t be able to accomplish it perfectly.

And a clarification – and I probably was not clear enough about this in my post – is that my suggestion is that moral theories should start by asking how the world should be. But this doesn’t entail that agents have to do this in their practical deliberations.

Fritz Warfield
19 days ago

I don’t really see what’s at stake here.

Can’t a moral theorist begin with, or ask, whatever question she wants and then argue however she wants towards a conclusion?

Different theorists surely will, and do, make different choices.

Vindication, if any, for the various choices will come through the interest and insight of the work.

Avram Hiller
Reply to  Fritz Warfield
19 days ago

Research programs (in science and in other disciplines) can begin from different questions despite being on similar subject matter, and end up with different results. By studying the questions themselves, we might discover that the wrong question was being asked by one of the sides: questions have presuppositions, and also push people to look in different directions, some potentially better than others.

In the context of how moral philosophy has developed, nowadays we have quite different, competing theories. So it may be helpful to track back from the resulting theories to see how they might have been motivated by very different presuppositions. Perhaps it will turn out that they were talking past each other the whole time, because they were trying to answer different questions.

However, if one is an advocate of one of the positions, it may be worth the effort to try to diagnose problems in the competing theory, not just by arguing that it produces bad results (especially given that both Kantians and utilitarians have to bite some big bullets), but by trying to undercut its (perhaps hidden) presuppositions.

V. Alan White
19 days ago

I have long argued, even here I believe, that the most obvious place to begin moral theorizing qua theory is to answer the basic question of what moral good is. And I believe that Singer’s basic approach is the best answer: some expansive version of hedonism interpreted by utilitarianism (which automatically includes both the Indians and Jim). I do not agree that simple pleasure/pain dualities of good/evil are enough – actually I think that only some sort of virtue utilitarianism that includes sentience goods along with intellectual goods across species can be sufficient. But a moral theory absent an account of what constitutes moral good might be misled by perspectival accounts that stray from the key question of what any adequate moral theory should accomplish.

Grad student
Reply to  V. Alan White
19 days ago

In doing so, you assume that there is ‘moral good’. But why assume that? What’s the evidence?

Martin Peterson
19 days ago

If I wish to draw a map of a city it should not matter where I *begin*. Every good procedure for drawing a map should yield the same result: an accurate representation of the main streets and buildings. Is morality any different? My view (or hope) is that we will reach the same conclusion no matter where we *begin* thinking about moral issues.

Avram Hiller
Reply to  Martin Peterson
18 days ago

Thanks Martin! This gets me thinking about how I might in fact go about drawing a map. I’m with you in hoping that we’d get the same, accurate results regardless of the starting point.

However, if I in fact start in downtown, I might notice details of the buildings and get caught up in them, and then be a little tired and sketchy when I get to drawing up the outskirts. And vice versa if I were to start in the country.

It’s probably best to recognize at the start that this might occur, and thus come up with some kind of general strategy that ensures that we cover all the areas with equal attention.

In moral theory, we seem to have some different “maps”. My thought is that what might explain this is that some views start from a certain narrow place (my agency), and my suggestion is to start from what I take to be a broader place (the world).

Derek Bowman
Reply to  Avram Hiller
17 days ago

Sticking with the map analogy, I think the worry is that we’re trying to draw a street map of a city, and you’re suggesting that, to avoid privileging a particular neighborhood, we start with a regional satellite photo, where we can’t see that some streets are one way, some bridges are out, and where many side streets and key intersections don’t appear at all.

Avram Hiller
Reply to  Derek Bowman
17 days ago

I actually agree that there is some danger of that happening. I probably think the danger is less severe than you think it is. My sense is that we are quite capable of keeping both agents’ and patients’ perspectives in mind, although we can’t fully guard against missing out on some details.

Jeff
19 days ago

Standpoint?

Derek Bowman
19 days ago

So is the thesis of this proposal, as it seems, a thesis about how a certain group of agents (philosophical ethicists) should proceed? In that case, it seems we’ve only hidden our agent-centered starting point, from which we’ve said those agents should direct their attention to patient-focused questions.

Or is this merely a thesis about how the world should be? e.g. ‘The world ought to be such that (some/many/most/all/a-specific-percentage-of) philosophical ethicists theorize with patient-focused beginnings.’

If it’s the latter, then why think that’s the way the world should be? Why not think, instead, that the world ought to be such that all agents are born with an infallible moral intuition, or paired with a divine spirit to guide and protect them, or that they be arranged such that their actions interlock together into a pre-established harmony without the need for ethical deliberation?

A necessary element for distinguishing ethical answers to questions about how ‘the world ought to be’ from mere fantasy is a concern with how those states of the world are affected by the capacities, characters, and interrelations of agents.

So whatever role patient-centered considerations should play in our ethical theorizing and deliberations, I can’t understand what it would mean to ‘start’ with them, except insofar as we’ve already implicitly built a concern for moral agents and moral agency into the framing of the very questions we’re asking.

Avram Hiller
Reply to  Derek Bowman
18 days ago

Thanks for this!

You are right that whatever prescriptive claims I make about how people ought to go about making prescriptive claims ought to apply to my own prescriptive claims. But I’m not arguing against making agent-specific prescriptive claims – only that one should not take that as the starting point in one’s theorizing.

Your main point, though, is something broader and more interesting. If I understand it properly, it is that all theories about how the world should be have to incorporate considerations about norms of agency. As you say, we have to have a “concern for moral agents and moral agency” at the very start of theorizing.

What I want to say in response is by clarifying the notion of “concern” here. I think that moral theories should take into consideration facts about agency. I shouldn’t hope for universal peace and love: that just won’t happen, because of how human agents are constituted. So there is a sense in which agential facts don’t come posterior to an analysis of how the world should be. But you are saying something stronger, that I want to resist – that concern for norms of, and not mere facts about, agency must come first. And I’m having a hard time seeing why that is the case.

Derek Bowman
Reply to  Avram Hiller
18 days ago

Thanks for the reply. I don’t know if this is the right venue to tease these points out, but I’ll at least try a few preliminary thoughts/questions that may be helpful for further thought even if they don’t coalesce into a clear counterargument or alternative to your position.

First, I’d want to point out that it’s not clear to me that in his discussion of Jim the Tourist and George the Chemist Williams isn’t focusing primarily on ‘facts’ about agency. I guess it depends on where exactly you want to draw the ‘fact’ / ‘norm’ distinction here, but at least part of Williams’s argument is that to be an agent requires having a character, and having a character involves having concerns and commitments that play a central role in giving meaning and purpose to your life, and that it is constitutive to having such concerns and commitments that they sometimes take priority in your practical deliberations over impersonal norms and values or over the concerns and commitments of others. That doesn’t mean that there is no place for the kind of patient-centered concerns you emphasize. But it does put a limit (though what limit, precisely, is harder to say) on the scope and priority of such concerns. Insofar as moral claims are coherently addressed to agents, they must not preclude the necessary preconditions of agency.

Second, if the primary question is ‘how ought the world to be,’ I don’t yet understand why we must limit our wishes by the facts of agency. You say it’s because “it just won’t happen,” but of course we often make moral judgments about what ought to happen that just won’t happen. Take the case of Jim the Tourist. “How ought the world to be” from the perspective of the moral patients in this case? Well the indigenous protestors ought to be free from government persecution, and the Captain ought to refuse to engage in deadly reprisals against protestors. Those things just won’t happen either. But surely that’s no barrier to our saying that they ought to, or to morally condemning the actions of the government and of the Captain. The reason those desirable states of the world are irrelevant in this case is not because they won’t happen, but rather because their happening or not is, at least in this story, outside of the scope of Jim’s agency. If, on the other hand, we were asking what the Captain or the government ought to do, those possibilities would be salient insofar as they are in the scope of the relevant party’s agency.

But now haven’t we already defined our question as the question of the norms for a particular agent or set of agents or for a particular exercise of agency? Does it count as ‘starting’ with patient-centered considerations if our answers to those questions are determined (solely? primarily? initially?) by patient-centered reasons?

Avram Hiller
Reply to  Derek Bowman
17 days ago

Thanks for this follow-up!

I think your first point incorporates the very kind of agent-centric consideration from Williams that I’m trying to push back against. Sure, it is bad when an agent has to compromise their integrity. But it’s also bad when lots of people are killed. Williams wants us to take the standpoint of the agent in the thought experiment to guide us to what we should do. But I don’t have a sense of why the badness of the agent losing their integrity puts some deep constraint on what a moral theory should ask of agents. I think the badness to the agent should barely register in what the theory says the agent should do.

Your second point is something that I’ll have to think more about. I’m inclined to go with a lot of what you say. I said that I shouldn’t hope for world peace, given that it is implausible, but in thinking about your response, I think you are right that it’s OK to still say that that’s how the world should be (even if facts about agency won’t get us there). But after incorporating that, I don’t think that I need an account of what agents ought to do that comes prior to what patients ask for.

There is a final, very related point that I hope to address in more detail in a later blog post. It’s that perhaps the patients themselves will already be in the grip of a moral theory that guides their reactions to what the agent should do. How much attention should the moral theorist pay to these moral theories that are already in place? I’ll try to say more about that another time.

Derek Bowman
Reply to  Avram Hiller
17 days ago

Thanks for the reply. I won’t try to follow up on everything here, but I do want to clarify the interpretation of Williams I’m offering here.

Yes, I understand that you’re trying to push back against this element of Williams’s thinking. My point was meant to suggest that you may have a hard time doing so once you’ve conceded that we must take ‘the facts of agency’ into account in constraining the relevant option set for our judgment about ‘how the world ought to be.’

On the interpretation I’ve offered, Williams isn’t arguing that performing certain actions would be bad for agents. Rather, the argument is that honoring certain putative moral requirements would be incompatible with the necessary moral psychology of even being an agent of the relevant sort (i.e. the sort to whom moral claims are addressed) at all.

For a comparison, suppose that to be a ‘parent’ in a particular socially and morally relevant sense requires having a special relationship of concern toward and responsibility for one or more particular children. In that case, a moral rule demanding that parents care about and take responsibility for all children in the world equally would be incoherent, because following the rule would be incompatible with belonging to the group to which the rule was meant to apply. The problem isn’t that following the rule would be bad or unpleasant for parents – it’s that it can’t be a rule for parents because asking you to follow the rule would mean asking you never to have been a parent to begin with.

Avram Hiller
Reply to  Derek Bowman
16 days ago

“We must take ‘the facts of agency’ into account in constraining the relevant option set for our judgment about ‘how the world ought to be’” – that’s actually what you persuaded me not to accept in your previous comment. I think that we can ask how the world ought to be, and then, when we give a normative theory about how to act in light of it, we can take those psychological facts to work within an ought-implies-can principle.

About the point that it is incoherent to demand of an agent that they compromise their own integrity as an agent – I actually think that’s false. 

I’ll grant that severing an agent from their own life projects does indeed undermine who they are as agents. 

Here’s an analogy. My left kidney is, by its nature, such that it is mine, and its function is to filter my blood. If it were severed from the rest of my body, it wouldn’t be my kidney anymore. And the analogy is that if I sever my agency from my own projects, it wouldn’t be my agency any more.

But we do in fact sometimes ask of kidneys that they go and filter someone else’s blood. If I give someone my kidney, it wouldn’t be my kidney any more – it would be theirs. But there’s no contradiction in making that demand of my kidney.

One might then reply that I’m not treating it as “my kidney”. A moral theory that asks my kidney to treat others’ blood as equal to mine wouldn’t be treating it as what it is. And thus the request of it is incoherent.

And maybe I am being thick-headed, but I just am not able to see why it’s incoherent. It might be bad for the kidney – its identity would have to change. And that would indeed be a worse outcome for a person – to have their identity change. It is like asking someone to martyr themself for a cause. Such a request might be overly demanding, but I don’t see how it would be incoherent. But here again I think such an outcome wouldn’t be as bad as a patient dying, and a moral theory shouldn’t privilege the agent over the patient.

Ry Griffiths
19 days ago

Love this. Please consider Adam Smith on this, not joking. I’m immediately put in mind of Smith because he defines injustice in terms of warranted resentment. Darwall reads Smith (many places, e.g. the second-personal standpoint, but the specific statement I’m thinking of is in ‘Sympathetic Liberalism’ in PPA; he’s written a lot on Smith) as crucially making the move of ‘patient-relative’ judgements. Smith virtually demands that victims resent cruelty because of what he understands resentment to aim at: ‘not inflicting pain, but at bringing the agent back to a sense of what is due other people’. I’m also immediately put in mind of Judith Shklar on philosophers theorizing justice by beginning with recognition of instances of injustice, not just with a theory (putting cruelty first, I think she says in The Faces of Injustice).

Avram Hiller
Reply to  Ry Griffiths
18 days ago

Thanks for these thoughts! I’ll have to look into Smith and Shklar more carefully.

Ian
19 days ago

So, I’m not sure if moral dilemmas like this do in fact underplay the motivations of the moral patients at hand.

I think the moral dilemma is only a moral dilemma because the agent knows that the people do not want to be killed, or at least presumes that. If the agent didn’t ask themselves what the people who would be killed would want, there’s no real moral dilemma, or at least I don’t think there would be.

I do think it’s interesting to ask about our perspectives as moral patients, though. How do I (we) want to be treated? That does in fact seem like an underexplored question that might also shed some light on the more typical “How should I (we) act?”, given that their answers should be related. But because it changes the perspective, it also could bring up an alternative and interesting set of questions, or maybe even dilemmas, that we hadn’t thought of previously. I think here of questions about whether it’s wrong to enjoy or desire certain things, or dislike other things. But I’m sure there could be others too.

Mark Raabe
18 days ago

I like where you are going with this, although I sympathize with those who don’t think you’ve quite hit the nail on the head with “How should the world be?” — which can be variously criticized as vague, intractable, beyond reasonable scope, potentially circular, and so on.

But your suggestion is fruitful. For me, if I’m one of the patients in the trolley problem, my first thought is likely to be not “What should the agent do?” (especially since I doubt I’d grasp the full situation immediately) but “Who the hell tied me to these tracks?”

In other words, by taking the patient’s perspective, I may be illuminating that there are other relevant parties besides just the patient(s) and apparent agent, as well as other contributing acts. Or I may be questioning the entire premise (essentially by identifying the person who tied me to the tracks as the person who framed the thought experiment). Or both.

Any of which leads to consideration of not just how one should act in a particular situation, but (among other things) how we might prevent the situation from arising in the first place. In many instances the latter can be the far more effective moral approach, but it’s one that is typically shortchanged in current societal discourse.

Note that it isn’t impossible to arrive at such a mindset from the agent’s perspective; it just seems a lot less likely. We usually require from the agent some sort of exigent action: there’s a trolley bearing down. In other words, we structure the problem to dissuade the agent from taking the time to ask things like “Why am I even in this situation?” or “Why is this up to me — by whose negligence are there no working brakes?” etc. There is an imperative to do something now.

In short, it seems that adopting the patient’s perspective positions us better to pull ourselves up a level and ask whether we are posing the right question, or have identified the right parties, or should adopt a different frame of reference entirely.

Tim
18 days ago

I am confused.

The trolley scenarios force us to choose between the deaths of some and the deaths of others. Their point has always been to render sharp and help adjudicate potential moral principles guiding our harming and aiding of others.

I never considered them from any perspective within the thought experiment, agent or patient. I always understood the questions as: what should the agent do? By putting ourselves in the shoes of those on the tracks or bridges we can guess what they *want* the agent to do. (The working assumption is that they would hope the agent causes the deaths of other people, not them, but if they don’t hope this that is just a relevant consideration which would need to be added to the formulation of the thought experiment.) Alternatively, by putting ourselves in the shoes of the agent, we can guess what we *would* do. But how does any of this information help tell us what the agent should do? Morality is impartial in that it does not depend on anyone’s perspective.

If we want to know what the people on the track think the agent *should* do, the best we can do is give them a trolley problem which doesn’t place them anywhere in the situation.

Avram Hiller
Reply to  Tim
17 days ago

Thanks for this! I think we might have differing views about the work that thought experiments are supposed to do. My take is that we want to arrive at a kind of neutral theory like you are describing, but to get there we have these thought experiments where we immerse ourselves in the vignette for the purpose of generating what we take to be data to help build that theory. If we started the thought experiment from a neutral perspective, then we wouldn’t really need the thought experiment because we’d have already arrived at where we need to be. So I take it to be a feature, and not a bug, of thought experiments that they trigger these pre-theoretic responses. But I’m suggesting that that feature does come with a danger, which is that our pre-theoretic responses may be biased.

In my post, I tried to be careful in saying that we ourselves are moral patients. It would be easy to say that when viewing a thought experiment, we should take everyone’s interests equally. But the moral judgment that we take people’s interests equally is something that might end up being an output of a set of thought experiments rather than an assumption at the start.

Tim
Reply to  Avram Hiller
16 days ago

Thanks for the thoughtful reply.

Could you tell me why anyone would think this?

to get there we have these thought experiments where we immerse ourselves in the vignette for the purpose of generating what we take to be data to help build that theory. If we started the thought experiment from a neutral perspective, then we wouldn’t really need the thought experiment because we’d have already arrived at where we need to be.

No one thinks this about thought experiments in general. In the Gettier case, Smith doesn’t know that Jones owns a Ford. In the Chinese room, the person in the room doesn’t understand Chinese. We are perfectly comfortable with intuitions about third person situations outside of ethics.

You said that if we didn’t imagine ourselves as one of the people within the thought experiment, then thought experiments would be redundant. But thought experiments function to elicit intuitions. If they happen to elicit intuitions when considered third-personally, it doesn’t follow that we didn’t need the thought experiment.

In ethics and in life, I can very easily have intuitions about what other people should or shouldn’t do in the situation they are in, without having to pretend that I’m them. My impression is that other ethicists can and regularly do this. So I think your worry is more a mistake about the actual methodology, than a problem with the methodology itself.

Avram Hiller
Reply to  Tim
16 days ago

There are these empirical studies of thought experiments, like Joshua Greene’s account comparing the moral psychology of respondents in the footbridge case to that in the switch case. Greene argues that the footbridge case is one where respondents are reacting to the personal-violation aspect of the case, and that’s what drives them to say that the agent shouldn’t push, whereas in the switch case, there’s no personal violation and that’s why they say to switch. And then Greene argues that whether something is a personal violation is not morally significant, even if it is part of our moral psychology due to our evolution. And thus Greene rejects the typical “no” response to footbridge.

I’m not sure I agree with Greene’s own particular psychological explanation, but I see myself engaging in a similar kind of project: let’s try to explain reactions to thought experiments by appealing to (what is here admittedly an armchair) psychological explanation – that people focus too much on the agent and not the patients, and that’s why we might hesitate in the Jim and Indians case when we really shouldn’t be hesitating (given that the agent shouldn’t be given more consideration than the patient).

Perhaps my description in my response to you of what goes on in thought experiments was wrong or unclear, or, at least, it wasn’t persuasive. But maybe you can help me out then by recommending something else I could say. Insofar as you still aren’t convinced by the overall picture I am presenting, is it:

(A) Yes, Greene-style explanations in response to thought experiments are wrongheaded

(B) No, Greene-style explanations are fine, but my own project isn’t like that (despite how I myself see it)

If it’s (A), maybe you can help me see a bit more clearly how thought experiments function.
If it’s (B), then how do you see what I’m doing as different?

Avram Hiller
Reply to  Tim
16 days ago

Just one other quick point. Maybe I shouldn’t be giving armchair psychological explanations, but in this particular case, I am responding to what Williams says about Jim and the Indians. He himself isn’t using that thought experiment to say whether the action is right or wrong; he’s saying something about how it is not obvious, which is a psychological/epistemological claim. (And it is one that some people regard as false.) And so I’m using my explanation not strictly to say what’s right in the case, but to say what’s obvious or not – so a kind of psychological explanation is what is most relevant.

Chambers
Reply to  Avram Hiller
16 days ago

For what it’s worth, it’s probably not the best test case. I am a pretty stringent nonconsequentialist, as are many of the people I know, yet virtually everyone does already think the Jim and the Indians case is obvious; Williams was just mistaken on this. It is obvious that you should shoot the person who is going to be shot anyway if the others won’t otherwise go free. His George and the chemical weapons case might be better for you.

Mike on the internet
17 days ago

While thinking about the 5 poor souls on the trolley tracks, instead of the terribly burdened decider at the switch, seems like a broadening of perspective (and a welcome identification with patients, who outnumber agents and have a hell of a time), one might argue that the agent-centered approach could fairly be considered to be broader.

When it is more than just clean-hands vanity, worrying about the deontological rules endorsed (and reinforced) by an individual choice is in fact worrying about the future patterns of outcomes that will be created by an agent for an unknown number of future patients. In this roundabout way, the choice of deontological principles may affect far more patients than ad hoc consequentialist decisions concerning these 5 trolley victims or those 20 tribe members. It is an open question whether the cumulative effects of following a deontological rule over a lifetime will ever be as important as a single instance of saving 5-minus-1 or 20-minus-1 lives, especially when the knock-on effects of social expectations and coordination are considered.

I do wonder how much agent-centered hair splitting is just clean-hands vanity, though.

N. L. Engel-Hawbecker
16 days ago

This is sorta building off what Tim said.

It’s well-known that “should” is ambiguous (or close enough) between something like “hopefully would” and “is morally required to.” (Consider: “David Lynch should make another film before he dies.”) Thus, when you write,

“my friend thought of it from the perspective of the 20 people of color who were hoping and praying that they would not end up being killed. From that perspective, it is indeed quite obvious what Jim should do…”

I wanted to flag, first, that your second sentence rang false to me, in the intended moral sense of “should.” (As Williams said, it’s not obvious.) Second, if you are relying on some simple inference from what would hopefully be brought about to what is morally required, then that inference is doing all the consequentialist-y work for you—not anything to do with perspective taking. And third, we don’t even need to adopt the perspective of the patients specifically to figure out what Jim would hopefully do (or how the world should/would hopefully be): we could just as well adopt the perspective of an impartial spectator. So I suspect the whole “patient-centered” labelling is a red-herring. (Relatedly, how would you approach cases where the patient has no perspective on what’s being done to them—e.g., because they’re in a coma, or already dead?)

For whatever it’s worth, I also found the framing of this piece in terms of questions a little confusing. As the quote above also demonstrates, it sounds like the overarching question under discussion is still just of the “What should this agent do?” variety. The patient-centered perspective, as you describe it, is just answering the same sort of question by drawing on a certain set of inputs. Similarly, you seem to be treating the “How should the world be?” question as merely a helpful means of answering the “What should this agent do?” question, which still positions the latter as the ultimate, overarching question of moral theorizing. So I suspect your whole framing in terms of opening questions is a red-herring too.

Avram Hiller
Reply to  N. L. Engel-Hawbecker
15 days ago

Thanks so much for this – it is very helpful. A few things in response.

(1) It sounds like we have different intuitive responses to the cases. What should moral theorists do when that occurs? That’s a really difficult question, and I have something to say about it, but I will do so at a later point. (I think post-Gettier epistemology never really recovered from the fact that different people report different intuitions regarding fake barn cases. Though maybe that lack of recovery is a good thing!)

(2) That’s a really interesting consideration about the move from one “should” to another doing the consequentialist work. I’ll have to think about that some more.

(3) You write: “we could just as well adopt the perspective of an impartial spectator.” I was using two assumptions that perhaps I should have been more clear about. First – and this came up in my first response to Tim – it’s not clear to me that we can adopt the perspective of an impartial spectator. It’s not clear that we can rid ourselves of biases, and it’s not clear that we can gather all the relevant information to make a judgment from that perspective. So I’m trying to make something of a practical suggestion about how to give/approach thought experiments given our human flaws and biases. Second, I don’t want to assume that moral theories should consider things from the perspective of an impartial spectator. I’m mostly saying that we do in fact think about these experiments more from the perspective of agents, and we should do a better job considering the patients (given that we ourselves are patients). Third, that’s a nice question about comatose people. I will say that I think we can mostly get into that perspective, since presumably most of what is owed to comatose people arises out of who they were beforehand. Still, I think that it’s not off the table to think that non-sentient things like trees have moral standing. My suggestion there is that we still do our best to emulate what the interests of such beings are – and hopefully, whatever arguments are used to show that such non-sentient things have standing will also give us a clue as to how to do that. There is no guarantee that we’ll get it right, for that or even for other humans. But that isn’t a fatal flaw for my account – it will just go to show that finding out what’s right is hard or even impossible sometimes.

Your final point is helpful. Let me say a few things. You write: “you seem to be treating the “How should the world be?” question as merely a helpful means of answering the “What should this agent do?” question, which still positions the latter as the ultimate, overarching question of moral theorizing.” That I’d like to push back on. I do think that it’s still a relevant question what someone should do in a circumstance – I don’t want to deny that. One reason for taking that non-agential question as primary (that I only briefly allude to in the post) is that we might not know at the outset who the relevant agents are in a situation – is it an individual, a group, or something else? We might not even understand what agency is. And we also might not have a sense of what an individual’s options are in a situation, which is a precondition for figuring out what the agent should do. So I do actually think that the “world” question does come prior to the question about how an agent ought to act, even if the latter question is a really important one.

I’ll say one more thing about the overall orientation of my project – perhaps you can help me think better about how to put things, though I know it will be a bit blasphemous for some and I doubt you’ll agree with what I’m about to say.

I’ve discussed the Jim case but let me say something about the George the chemist case. One thing that people tend to like about Williams is that he gives rich detail in his thought experiments, rather than just e.g., stipulating an unnamed, nondescript agent at a switch and 6 unnamed, nondescript people tied to tracks. But look at the details Williams gives in the George case: they are pretty much all about George and the goings-on at the chemical company. There is no detailed discussion of the suffering by potential victims that is more likely to happen if George doesn’t take the job. No mention of the names and family relationships of people dying horrific deaths, their families ripped apart, that sort of thing. So I happen to think that the George thought experiment as presented by Williams is a worse and more biased thought experiment than a naïve switch case. But not just that: the fact that no one seems to be up in arms about this methodological issue makes me think that most philosophers take an agent-centric bias for granted at the very start of their own moral theorizing. Further, I take it that this initial bias has a distorting effect on the resulting moral theories, which not coincidentally have a component which specially limits what agents are responsible for doing. And so I’m trying to draw moral theorists’ attention to this.

(Also, FWIW, insofar as Williams doesn’t say much to quantify the increased risk, it may then be unclear, even on naïve utilitarian terms, that he should take the job – so I’m not sure the thought experiment does the work that W needs it to do.)