FTX, Moral Philosophy, Public Philosophy


Does the FTX debacle hold lessons for moral philosophers? For those interested in public philosophy?

1. What’s the connection?

The head of FTX, billionaire Samuel Bankman-Fried, has been an active part of the “effective altruism” movement, which was inspired by Peter Singer’s application of utilitarianism to problems of world hunger (especially the classic 1972 essay “Famine, Affluence, and Morality”) and, more recently, shaped by philosophers Toby Ord and William MacAskill (both at Oxford). Further, it seems MacAskill was an influence on the general direction of Bankman-Fried’s career. As Sigal Samuel puts it in an article for Vox:

When Bankman-Fried was in college, he had a meal that changed the course of his life. His lunch companion was Will MacAskill, the Scottish moral philosopher who’s the closest thing EA has to a leader. Bankman-Fried told MacAskill that he was interested in devoting his career to animal welfare. But MacAskill convinced him he could make a greater impact by pursuing a high-earning career and then donating huge gobs of money: “earning to give,” as EA calls it… So the young acolyte pursued a career in finance and, later, crypto. To all appearances, he remained a committed effective altruist, pouring funding into neglected causes like pandemic prevention. 

Since FTX tanked last week (see the Bloomberg columns here and here; also this blog post), philosophers have weighed in.

2. What are the lessons for Effective Altruism?

Among those who commented was MacAskill, who wrote in a series of tweets:

If there was deception and misuse of funds, I am outraged, and I don’t know which emotion is stronger: my utter rage at Sam (and others?) for causing such harm to so many people, or my sadness and self-hatred for falling for this deception. I want to make it utterly clear: if those involved deceived others and engaged in fraud (whether illegal or not) that may cost many thousands of people their savings, they entirely abandoned the principles of the effective altruism community.

If this is what happened, then I cannot in words convey how strongly I condemn what they did. I had put my trust in Sam, and if he lied and misused customer funds he betrayed me, just as he betrayed his customers, his employees, his investors, & the communities he was a part of.

Some might argue that effective altruism provided moral cover for poor business practices in an environmentally unfriendly sector, a possibility MacAskill entertains here:

If FTX misused customer funds, then I personally will have much to reflect on. Sam and FTX had a lot of goodwill – and some of that goodwill was the result of association with ideas I have spent my career promoting. If that goodwill laundered fraud, I am ashamed.

I initially thought charges like this are easy to make but hard to substantiate because they depend so much on mind-reading, but Bankman-Fried helpfully admitted that EA was “mostly a front” in an interview this week:

Kelsey Piper: So the ethics stuff- mostly a front? People will like you if you win and hate you if you lose and that’s how it all really works?
Sam Bankman-Fried: Yeah. I mean that’s not *all* of it but it’s a lot…
Kelsey Piper: You were really good at talking about ethics, for someone who kind of saw it all as a game with winners and losers
Sam Bankman-Fried: Ya. Hehe. I had to be. It’s what reputations are made of, to some extent…

Additionally, to the extent that not “all” of the ethics was a front, some might worry that the connection to EA contributed to FTX making inadvisable or unethical moves via “moral licensing.” [Update: In a comment, Eden Lin makes a plausible case that what Bankman-Fried was referring to as “mostly a front” is the idea that “one shouldn’t violate common sense moral prohibitions for the greater good.”]

So, one of the lessons for philosophy’s advocates of EA, who appear very sincere in their commitment to doing good, might be to be more cynical about people. Other lessons?

3. What are the lessons for utilitarianism? For moral philosophy? For public philosophy?

One of the things MacAskill says is that the kind of behavior FTX executives engaged in goes against the tenets of EA, presumably even when such behavior would help achieve EA’s end goals:

For years, the EA community has emphasised the importance of integrity, honesty, and the respect of common-sense moral constraints. If customer funds were misused, then Sam did not listen; he must have thought he was above such considerations. A clear-thinking EA should strongly oppose “ends justify the means” reasoning. 

One might be curious how a movement whose origins are in utilitarianism can accurately be said to emphasize “common-sense moral constraints” and “strongly oppose ‘ends justify the means’ reasoning.” But since Bentham, utilitarian moral thinking has often been preoccupied with minimizing the apparent gap between its prescriptions and those of common-sense morality, and since Mill we’ve seen increasingly sophisticated versions of utilitarian (and more broadly, consequentialist) moral theories that try to do just that. Furthermore, even if effective altruism was inspired by Singer’s utilitarianism, there’s nothing incoherent about a non-consequentialist approach to it that places constraints on how one may permissibly benefit others.

MacAskill provides some passages from his recent book in which he appears to take seriously such constraints. However, I can’t tell if he or other leaders of EA are on board with them. For example, he says “plausibly, it’s wrong to do harm even when doing so will bring about the best outcome.” Yet anyone as smart as MacAskill (and as familiar as he is with contemporary moral philosophy) could come up with a variety of compelling counterexamples to “it’s wrong to do harm even when doing so will bring about the best outcome” in a heartbeat. So I doubt he believes it—at least not in that simple form. And so, despite EA’s “in principle” compatibility with deontological constraints, when we are talking about EA in practice, I think we are talking about something utilitarian-like, and we shouldn’t be surprised when that practice ends up deviating from common-sense morality or endorsing “ends justify the means” reasoning. (For a defense of the idea that at the level of “practice”, utilitarianism, on instrumental grounds, coherently endorses familiar deontological constraints, see this recent post by Richard Chappell.)

That said, “fraud, illiquidity, and sloppy bookkeeping in the name of maximizing utility” does not appear to be the explanation for FTX’s demise. So what could it have to do with moral philosophy? Why judge a moral theory by the behavior of people who fail to live up to it?

But what if that moral theory is too often associated with immorality in practice? That’s a question Eric Schliesser (Amsterdam) raises about utilitarianism:

Within utilitarianism there is a curious, organic forgetting built into the way it’s practiced, especially by the leading lights who shape it as an intellectual movement within philosophy (and economics, of course), and as a social movement. And this is remarkable because utilitarianism for all its nobility and good effects has been involved in significant moral and political disasters involving not just, say, coercive negative eugenics and—while Bentham rejected this—imperialism (based on civilizational superiority commitments in Mill and others), but a whole range of bread and butter social debacles that are the effect of once popular economics or well-meaning government policy gone awry. But in so far as autopsies are done by insiders they never question that it is something about the character of utilitarian thought, when applied outside the study, that may be the cause of the trouble (it’s always misguided practitioners, the circulation of false beliefs, the wrong sort of utilitarianism, etc.). 

In my view there is no serious study within the utilitarian mainstream that takes the inductive risk of itself seriously and—and this is the key part—has figured out how to make it endogenous to the practice. This is actually peculiar because tracking inductive risk just is tracking consequences and (if you wish) utils. 

Schliesser says that there’s “something about the character of utilitarian thought, when applied outside the study” that’s problematic. What is that something?

Despite it being over two centuries since Bentham espoused utilitarianism (not to mention over two millennia since Mozi), and—especially—despite the storm of developments in consequentialist thinking over the past 50 years, the version of consequentialism that tends to get carried “outside the study” is really, well, simplistic. It is not, say, the sophisticated consequentialism of Railton, nor the rights-respecting consequentialism of Pettit, nor the rule-consequentialism of Hooker, nor the responsibility-constrained consequentialism of Mason, nor the moderate aggregationism of Voorhoeve, nor the commonsense consequentialism of Portmore, etc., etc., etc. Those are too complicated or too subtle or too informed by engagement with the problems of moral philosophy or moral life, or too challenging to readily explain and apply. Instead the version of utilitarianism that tends to get taken on board “outside the study” is over-the-counter basic utilitarianism, the common side effects of which are bullet-biting, shoulder-shrugging, hand-waving, wishful thinking, and an increased temptation to push people off of footbridges. Perhaps there are concerns about “the character of utilitarian thought” because it’s this basic version that people usually have in mind.

Yet, the utilitarians might reply, isn’t it normally the case that it’s the unsophisticated versions of moral philosophy that tend to get popular attention and application? And wouldn’t it be reasonable to ask how that has turned out for other moral theories? It’s not as if, say, divine command theory fares less disastrously “outside the study.” And isn’t the closest thing to a popular Kantian movement… Ayn Rand? (Oh the irony.)  Should we expect the Kantians to give up Kant because some people think taxation for public schools violates autonomy? Should we expect divine command theorists to give up God because some people thought they were sent by him on a crusade to murder non-believers? So maybe the fact that only a crude version of utilitarianism gets taken up outside the study isn’t a reason to be especially concerned about utilitarianism.

But maybe it’s a reason to be concerned more generally about moral philosophy? Or about how it’s taught? Or about how it is packaged for the public?

Discussion welcome.

Eden Lin
1 year ago

I don’t think that Bankman-Fried admitted in that Vox interview that his commitment to EA was “mostly a front.” The interviewer had just reminded him of an earlier conversation of theirs “about whether you should do unethical shit for the greater good,” in which he had answered that question in the negative. She then asks, about that answer of his, whether it was a “PR off the cuff answer” (as opposed to a truthful answer, presumably). He replies, “man all the dumb shit I said” and “it’s not true, not really.” So, when she then asks him, “so the ethics stuff – mostly a front?” and he says “yeah,” I take them to be talking about whether his earlier claim that you shouldn’t “do unethical shit for the greater good” is mostly a front. I take “unethical shit” to mean “shit that is unethical from a common sense point of view” in this context. So, I take him to have admitted only that when he said that one shouldn’t violate common sense moral prohibitions for the greater good, that was mostly a front — an insincere answer intended to avoid bad PR. This is compatible with his being a true believer in EA who accepts utilitarianism and who thinks that doing “unethical shit” for the greater good is not genuinely unethical.

Émile P. Torres
Reply to  Justin Weinberg
1 year ago

I think Lin’s interpretation is clearly corroborated by SBF in his most recent NYT interview. Search for “Yeah, absolutely. It was frustrating because that was not meant to be a public interview,” and read the next few paragraphs. https://www.nytimes.com/2022/12/01/business/dealbook/sam-bankman-fried-dealbook-interview-transcript.html

Émile P. Torres
Reply to  Émile P. Torres
1 year ago

Further confirmation from a New Yorker article:

The spokesperson said, “Mr. Bankman-Fried does in fact deeply believe in Effective Altruism, and always has, but he thinks that there are many things that companies do—specifically highly regulated ones—around the edges to attempt to appear as ‘good actors.’ ”

https://www.newyorker.com/news/annals-of-inquiry/sam-bankman-fried-effective-altruism-and-the-question-of-complicity

Nicolas Delon
Reply to  Eden Lin
1 year ago

I think that’s clearly the right interpretation and I’m surprised so many people keep misreading the quotes. The context makes it obvious it’s about “ethics” in the deontological/professional ethics/common sense morality sense.

Maybe the EA part was also a front but that’s not something you can infer from the interview.

Jen
Reply to  Eden Lin
1 year ago

His view is not so sophisticated as to involve ideas of common sense moral prohibitions. He was concerned less about those, more about avoiding the perception that he was a sketchy loser. I’ll explain.

He said something like “don’t do unethical shit, for if you’re running Philip Morris, nobody will work with you on philanthropy.” He meant that others’ perceptions of a person as being sketchy are likely to impede that person’s philanthropic goals. But this is one of the things he was talking about when he said “all the dumb shit I said, it’s not true, not really.” This is an admission that what he said is not completely accurate.

He later goes on to explain more accurately. He said the bad quadrants are “sketchy + lose” and “clean + lose,” and the worst is the former. He also said the best is “win + ???” Apparently he prefers being perceived as a winner, and if he is forced to be perceived as a loser, he prefers being perceived as clean over being perceived as sketchy.

The best explanation of his response (“yeah”) to the question of whether the ethics stuff was mostly a front: he was affirming that what he said previously about not doing unethical shit was inaccurate. He then explained more accurately, that his goal is to avoid the so-called “worst quadrant,” being perceived as a *sketchy* loser. If he lost, he would at least be perceived as a clean loser.

Joseph Rachiele
Reply to  Eden Lin
1 year ago

It’s not true that their earlier conversation was about whether you should do unethical shit for the greater good. Or at least that’s an inaccurate way to characterize his response for present purposes:

“There is some line,” he told me then. “The answer can’t be there is no line. Or else, you know, you could end up doing massively more damage than good. And I think more generally, you could say, okay, fine, but just, like, subtract that out. But I don’t think it’s that simple, either. Because there are a lot of complicated but important second-order harms that come if your core business is bad for the world, in terms of your ability to work with partners and your ability to work with partners in your philanthropic efforts.”

The thing that the screenshots reveal he now thinks isn’t true is the second idea in this passage, starting with “But I don’t think it’s that simple, either. There are a lot of complicated second-order harms that come if…”

Nothing he says in the screenshots suggests he no longer thinks there is such a line. Let me explain.

Their earlier conversation was also about whether you should do something that Will at least presumably thought was not against common sense morality: run a crypto exchange. That sounds right too at the abstract level. Running a casino empire and donating the proceeds to effective charities may be wrong, but it’s not violating some strong constraint of common sense morality.

Thus, when she returns to the early convo (which he presumably is mostly recollecting?) he is lamenting the dumb shit he said that seems to presume people’s roughly accurate perception of the morals of societal actors.

Everything later must also be read in light of his public feud with CZ, who is his competitor and “won.” He notes that he thinks CZ is doing unethical shit.

Finally, I am in complete agreement with Jen, especially the last part. He was answering the second question. You should interpret “yeah” and “I mean that’s not *all* of it” so that they are continuous with the other thoughts that follow under these two questions. Given the brevity of DMs they are typically the same thought.

Those quadrants simply combine ethical reality and success, which wouldn’t make sense if talking about a single person. From public comments he seems to regard himself as morally ambiguous now, so I would put him in the “lose + ???” quadrant. And he thinks he just lost to “win + sketchy.”

It’s a bizarre section. Piper framed the screenshots with the following, when all the textual evidence before and after points away from this reading and he seems to dodge all the gotchas: “Those well-considered ideas about balancing ethical imperatives? ‘It’s not true, not really,’ he said now.”

Avram Hiller
1 year ago

Thanks Justin for this nice post.

My sense is that the disaster has several origins, some endogenous to Benthamic utilitarianism, some exogenous.

Tyler Cowen’s interview with SBF in March is especially illuminating. This section I think foretells disaster:

COWEN: Should a Benthamite be risk-neutral with regard to social welfare?
BANKMAN-FRIED: Yes, that I feel very strongly about.
COWEN: Okay, but let’s say there’s a game: 51 percent, you double the Earth out somewhere else; 49 percent, it all disappears. Would you play that game? And would you keep on playing that, double or nothing?
BANKMAN-FRIED: [Yes, with a small caveat.]
COWEN: Then you keep on playing the game. So, what’s the chance we’re left with anything? Don’t I just St. Petersburg paradox you into nonexistence?
BANKMAN-FRIED: Well, not necessarily. Maybe you St. Petersburg paradox into an enormously valuable existence. That’s the other option.

Classical (and seemingly SBF’s) utilitarianism (1) is risk neutral; (2) gives equal weight for equal goods; (3) does not discount very tiny probabilities, either of success or failure; and (4) does not give extra weight for local/relational value.

In making the decisions he did, we see these things in action. He didn’t seem to value his own customers’ money over whatever he might do with it. SBF’s choices were tremendously risky; in one interview (sorry I can’t put my finger on the source), he said that the chances of success for FTX were about 20%, but he still thought it was worth trying; and then his decision to use customer funds to prop up Alameda was also risky. There was talk of him becoming the world’s first trillionaire; he said (sorry again I don’t have the source) that he didn’t think that there was much decreasing marginal per-dollar benefit to his donations up through $1 trillion. It was as if he was actually playing the St. Petersburg game with his financial decisions, and the FTX bankruptcy was the inevitable result, given enough time.
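To make the risk-neutrality worry concrete, here is a minimal simulation sketch of the double-or-nothing game Cowen describes (illustrative only: the 51/49 payoffs come from his hypothetical; the round and trial counts are arbitrary assumptions). Each round is positive in expectation, so a risk-neutral Benthamite keeps playing, yet ruin becomes nearly certain:

```python
import random

# Cowen's game: bet everything each round; 51% chance of doubling,
# 49% chance of losing it all. Per-round EV is 0.51 * 2 = 1.02x (positive),
# but the probability of surviving n rounds is only 0.51**n.

def play(rounds: int, wealth: float = 1.0) -> float:
    """Play the game for a fixed number of rounds, or until ruin."""
    for _ in range(rounds):
        if random.random() < 0.51:
            wealth *= 2
        else:
            return 0.0  # bust: everything is gone
    return wealth

TRIALS, ROUNDS = 100_000, 10
results = [play(ROUNDS) for _ in range(TRIALS)]
ruin_rate = sum(1 for w in results if w == 0.0) / TRIALS

print(f"analytic survival probability: {0.51**ROUNDS:.4%}")  # ~0.12%
print(f"simulated ruin rate:           {ruin_rate:.4%}")     # ~99.88%
print(f"mean final wealth:             {sum(results) / TRIALS:.3f}")
# Mean wealth still tracks the EV (1.02**10 ≈ 1.22), but only because the
# rare survivors hold 2**10 = 1024x their stake; almost everyone ends at zero.
```

Expected value keeps rising with each additional round while the survival probability decays geometrically, which is exactly the sense in which Cowen “St. Petersburg paradoxes” a risk-neutral agent into nonexistence.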

These four features are all intrinsic to the proper employment of classical utilitarianism, and they are all disputable on their own. So I do think that it is worth it for classical utilitarians to figure out which of at least (1)-(3) to reject, given that powerful utilitarians nowadays are more likely to face real-world St. Petersburg-type choices, as SBF did. (Or, bite the bullet, and say that what happened with FTX was indeed worth the risk and was mostly just bad luck.) Of course, how to respond to St. Petersburg is a much-discussed issue, and there are various kinds of risk-adjusted, discounted, and agent-relative forms of consequentialism/decision theory, and Justin points to some of them. These seem all the more important to consider at the present moment.

There were also causes of the disaster that are extrinsic. SBF’s decision to risk his customers’ money was remarkably short-sighted – a more stable way for him to have built up FTX (and maintain his good graces in the finance world) would not have involved that risk. A better way to manage the company would have had a wider and wiser board of directors. I see what happened as SBF being a very smart number cruncher but an imperfect judge of market players and a young and foolish CEO.

One might say that if utilitarianism so easily harms others, then it, like guns, should be restricted. I’m not sure. One thing is that utilitarianism for various reasons seems to appeal to young, foolish, and quantitatively-minded people. My sense is that Led Zeppelin also appeals to young and foolish (and perhaps quantitatively-minded) people, and Led Zeppelin really is awesome, so the fact that something appeals to that subset of people does not necessarily mean that there is something wrong with it. And young smart foolish people can cause lots of damage even without utilitarianism, so the damage from the FTX disaster may have a confounding cause. Setting that aside, it is also not clear to me that in total, utilitarianism in our current context is a worse ideology to have popularized, at least when compared to many other worldly popular ideologies. But I think the right thing to say is what Richard Yetter Chappell and other defenders of utilitarianism have said through the years – that on the level of practice, utilitarianism doesn’t in fact recommend treating every decision as an occasion to try to maximize.

Kian Mintz-Woo
Reply to  Avram Hiller
1 year ago

I think Av is right on here, so I’ll just add two cents:

[1]

in one interview (sorry I can’t put my finger on the source), he said that the chances of success for FTX was about 20%

That’s his interview with Rob Wiblin for 80K [source link here].

[2] In response to Eric Schliesser’s question about inductive risk quoted in the OP, I think it’s feasible that (a) the direct/indirect utilitarianism debates (and maybe the act-/rule-utilitarianism debates) [e.g. as discussed here] as well as (b) the decision-procedure/criterion-of-rightness distinction are both ways of getting at the inductive risk of accepting utilitarianism, in an internecine fashion. The idea behind most of these discussions is that deciding based on directly utilitarian procedures is not utility-maximizing and can fail via various mechanisms. We require alternative structures that maximize utility but do not lead individuals to simplistic maximizing actions.

Matt L
Reply to  Avram Hiller
1 year ago

“…and Led Zeppelin really is awesome”

Counterpoint: Led Zeppelin is awful:
https://www.youtube.com/watch?v=QcMNayOuq50

Avram Hiller
Reply to  Matt L
1 year ago

LOL, I must admit that song actually makes its point quite well!

Paul Taborsky
Reply to  Avram Hiller
1 year ago

Tyler Cowen’s mention of the St. Petersburg paradox is exactly right. Effective Altruism (or at least some of its more outrageous versions, such as longtermism) really does seem like a St. Petersburg version of utilitarianism. So it is no wonder it makes an easy intellectual fit with the crypto economy, based on Ponzi schemes which (much like multi-level marketing) depend on the St. Petersburg paradox for their appeal.

Scott
1 year ago

What are the lessons for public philosophy? I think one lesson is that if you are going to be out there in public, taking a stand on the issues of the day, telling people how to live their lives, you have to be accountable for what you do. You’ve moved from the world of the academic journals. If there are actual dollars and lives on the line, you need to hold yourself accountable, and be held accountable by others.

So far I haven’t seen the academics associated with SBF do that. Has anyone said how many dollars they have received from him? Has anyone noted what actions they took on his behalf? Has anyone documented what reasons they had to be concerned about his behavior, and what they did with that information?

Richard Y Chappell
1 year ago

It’s a depressing thought that only the most crude and distorted versions of a moral theory are likely to “get taken up outside the study.”

But, even supposing it’s true, I wonder what can or should be done about it. Does anyone really want to embrace esotericism, and try to keep moral philosophy out of public view? Maybe I’m too much the optimist, but I’m much more inclined to think that the answer is to do more public philosophy, to try to help the public come to a less distorted understanding. (I argue for this further in my post on ‘Ethical Theory and Practice‘.)

I do think it could help for people to think about how they teach this material. It seems pretty common to present a badly caricatured understanding of utilitarianism (possibly because it’s common for philosophers to sincerely believe the caricatures? I’m not sure.)

I hope that utilitarianism.net can prove a helpful resource here. We’ve put a lot of work into trying to combine clear accessibility with philosophical sophistication, pushing back against the bullet-biting caricature. We especially emphasize the utilitarian case for commonsense constraints and virtues at many points throughout the textbook.

More generally, I think we (moral philosophers) should try to be a bit more sophisticated in how we use thought experiments, with their artificial stipulations. They’re great for pinning down ultimate values. But it’s a mistake to treat these “in principle” judgments as advice for how to act in practice, given the realities of unintended consequences, motivated reasoning, and other sources of agential unreliability that in real life can’t just be stipulated away.

I worry that this distinction between theory and practice may not tend to come through clearly enough in standard ways of teaching these topics. (I’d be happy if it turns out that I’m mistaken in this assumption, though!)

Chris Stephens
Reply to  Richard Y Chappell
1 year ago

Richard – I’ve not read all your materials carefully, but those I have look really good. Still, I don’t see (did I miss it?) that you address what many (?) see as the most significant objection to utilitarianism: the problem of interpersonal utility comparisons (e.g., Dan Hausman, “The Impossibility of Interpersonal Utility Comparisons,” Mind 1995). I’m an outsider to this debate, but for some of us this has always seemed one of the biggest problems, because it doesn’t seem to be an objection that relies on “moral intuitions” about what to do in some particular case or cases.

Richard Y Chappell
Reply to  Chris Stephens
1 year ago

Thanks for the suggestion! I’m always looking for more objections to discuss. (The “cluelessness” objection is another big one that’s on my to-do list.)

fwiw, I’m partial to Parfit’s observation that we all know perfectly well that one person is harmed less by a papercut than another is by being beheaded, so there can’t really be an insuperable problem here. But I haven’t read Hausman yet, and will obviously need to do so in order to really do justice to the issue. So, thanks again for that!

Gabriel
Reply to  Richard Y Chappell
1 year ago

Richard,

Just writing to second Chris’s point about the importance of defending utilitarian understandings of aggregation as grounded in a freestanding notion of “utility.” On at least one version of the problem [as I understand it – possibly wrongly], the issue is not that (i) interpersonal comparisons of harms are impossible (e.g., it is definitely better to save person X’s life than Y’s finger, all things equal), but rather that (ii) there is not a substance called “utility” from which we derive how to resolve such comparisons in determining right action.

On a classic economic understanding of individual utility functions, the utility function is merely a convenient way of representing an individual’s preferences. The utility measure is derived from the preferences (it “represents” the preferences), though, and not vice versa. Carrying this over to the construction of the moral utility function, one might worry that assigning utility values to different states of the world can only make sense as a means of representing which states of the world are more choiceworthy: in other words, the moral utility function would simply be a means of representing which states of the world we ought to bring about over others. Thus, assigning utility of 1000 to the state of the world in which person X’s life is saved and a utility of 1 to the state of the world in which person Y’s finger is saved would merely be a convenient way of representing the circumstances–given uncertainty–under which one ought to save person X’s life over Y’s finger.
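One compact way to put that representation worry (my gloss in the standard von Neumann–Morgenstern framework, not necessarily Hausman’s exact formulation): a utility function that represents a person’s preferences does so only up to positive affine transformation, so sums across persons are not fixed by the preferences themselves.

```latex
% If u_i represents person i's preferences, so does any positive affine
% transform of it; nothing in the preferences privileges one scale.
\[
u_i \ \text{represents} \ \succsim_i
\quad\Longleftrightarrow\quad
a\,u_i + b \ \text{represents} \ \succsim_i
\qquad (a > 0,\ b \in \mathbb{R}).
\]
% So a ranking like \sum_i u_i(x) > \sum_i u_i(y) can be reversed by
% rescaling a single u_i: "maximize total utility" requires an extra,
% non-preference-based choice of a common scale across persons.
```

On this way of putting it, the problem is not that comparisons are unintelligible, but that the aggregation step needs an independent justification.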

I have always felt more troubled about the relationship between utilitarianism and “realist” conceptions of utility in ranking states of affairs than about certain toy examples often trotted out as counterexamples to utilitarian conclusions. So I think this would be an excellent subject to write about!

manny
1 year ago

Is it really surprising that a view that doesn’t have an in-principle objection to, e.g., slavery will occasionally lead its adherents to do bad stuff?

Utilitarians can dance around as much as they like, but the fastest footwork in the world will only get them so far—and never far enough away from the core ideas for an in-principle objection to, e.g., slavery to just magically appear. Dear Jesus, of course it will occasionally result in bad stuff.

As for the tu quoque…common-sense moral theory surely isn’t susceptible in the way that utilitarianism is (or divine command theory is). After all, its starting points are various fixed points in common sense, and it builds from them. And, if what is built leads to something prima facie abhorrent, then it starts again. The problem arises when your starting points aren’t common-sense judgements, but are instead deep theory, which will be followed wheresoever its implications lead.

Nicolas Delon
Reply to  manny
1 year ago

Even if naive versions of utilitarianism don’t have an ‘in-principle’ objection to slavery, it is notable that, historically, utilitarians have been significantly less favorably disposed towards slavery than the main proponent of a theory that does seem to have an ‘in-principle’ objection to it, namely Kant. But beyond the ad hominem, non-naive versions of utilitarianism do have very strong objections to slavery. They are, given the world in which we live, as ‘in-principle’ as we need them to be. The principle is simple: everyone’s interests count equally, and slavery causes unjustifiable amounts of suffering in all the plausible versions of our world we can imagine. If you find hypothetical cases in which direct utilitarianism would condone slavery, then such cases are so far removed from our current conception of slavery that it’s not clear the reductio has much bite. And for those reasons, indirect utilitarianism will probably almost always have decisive reasons to condemn slavery ‘in principle’, because doing so is what’s best.

See e.g. this famous paper by R.M. Hare.

manny
Reply to  Nicolas Delon
1 year ago

The move from “in-principle objection” to “as in-principle as we need it to be” would classify as dancing and fast footwork. What matters here is that there isn’t an in-principle objection; that contingent facts about the world might nonetheless prohibit slavery, by the utilitarian’s lights, is by-the-by.

It’s by-the-by because, as I say, it’s wholly unsurprising that a view that doesn’t have an in-principle objection to, e.g., slavery will occasionally result in its adherents doing bad stuff (whether it be slavery or something else, less bad, but still bad).

Nicolas Delon
Reply to  manny
1 year ago

would classify as dancing and fast footwork

It’s a pretty phrase, but why? Why should we care about truly in-principle objections so much? After all, alternative views also have in-principle implications that strike us as implausible—refusing to harm someone to save millions, or letting the murderer in, come to mind.

it’s wholly unsurprising that a view that doesn’t have an in-principle objection to, e.g., slavery will occasionally result in its adherents doing bad stuff (whether it be slavery or something else, less bad, but still bad).

Why, given that utilitarians all argue that we should internalize rules according to which slavery is in principle wrong?

manny
Reply to  Nicolas Delon
1 year ago

I agree: Kantians are in much the same boat.

I suppose it depends on what it takes to “internalise” those rules (and thus do as you say utilitarians say). If one internalises them iff they act always and every time as common-sense morality decrees, then, sure, I take back my claim. But then utilitarianism is not a normative theory as I understand the term.

If one internalises them iff they almost always act as common-sense morality decrees, then shift focus to the remainder. There will be situations where someone has time enough to think long and hard about the consequences of some action—the downstream effects, the uncertainties, etc.—and to (truly, rationally, justifiedly) convince themselves that doing such-and-such will maximise (expected?) value. And it would be wholly unsurprising if sometimes that such-and-such was bad* stuff.

(*Bad according to common-sense morality.)

Nicolas Delon
Reply to  manny
1 year ago

The possibility that agents could misapply moral principles and do bad stuff is hardly unique to utilitarianism. It’s a feature of moral agents being finite, biased, and imperfectly motivated. Is there any moral theory of which we couldn’t say ‘It’s wholly unsurprising that a theory that… would lead people to do bad stuff’? It’s wholly unsurprising that a theory that unconditionally disapproves of lying would lead principled people to let atrocities happen. It’s wholly unsurprising that a theory that equates flourishing and virtue would, in some societies, lead people to act in ways we consider misguided. The tu quoque seems appropriate here. It’s only if your concern uniquely applies to utilitarianism that your initial claim is interesting. Otherwise it’s just a banal observation that people will misapply moral theories—in good or bad faith—to act wrongly.

manny
Reply to  Nicolas Delon
1 year ago

But…….it won’t always be a misapplication of the rules for the utilitarian?

I think we might be talking past each other. All I was saying was that it shouldn’t be surprising that a view that is famous for endorsing what are, by the lights of common-sense morality, bad things, will occasionally lead its adherents to do things that are, by the lights of common-sense morality, bad things. (Unless I’m missing something, to deny that you have to deny that utilitarians are ever, by the lights of their own view, required to do something that goes against common-sense morality. But, if that’s right—because of, e.g., the internalising, you mentioned above—then utilitarianism isn’t a normative theory as I understand the term. Unclear what it even could be.)

Kenny Easwaran
Reply to  manny
1 year ago

Given that all actual people actually do bad stuff (though there is no in-principle reason that this must be the case), pointing out that a view that [has some feature] “will occasionally result in its adherents doing bad stuff” isn’t exactly an objection. The objection would be if it has its adherents doing bad stuff as opposed to good stuff, but it would be a count in favor of the view if the view has its adherents doing (merely) bad stuff as opposed to worse stuff.

JTD
Reply to  manny
1 year ago

Is it really surprising that a view that doesn’t have an in-principle objection to, e.g., slavery will occasionally lead its adherents to do bad stuff?

I don’t know what “in-principle objection” is supposed to mean here, but I take it that this is what you are trying to say:

Is it really surprising that a view that morally permits slavery in certain circumstances will occasionally lead its adherents to do bad stuff?

In this case your thought is confused. Every plausible moral theory permits slavery in certain circumstances. For example, moderate deontology permits slavery in a circumstance where unless you enslave three people a million people will be enslaved. Only absolutism (an implausible moral theory) absolutely prohibits things like slavery. But I don’t see how this is a reason for thinking that moderate deontologists are more likely to do bad stuff than absolute deontologists (if anything, the opposite seems to be historically true). Some clearer thinking is needed here.

manny
Reply to  JTD
1 year ago

No, that wasn’t what I was trying to say. With little hope of progress… e.g., some principle that goes beyond “will it make me happier than it makes the enslaved sad.”

JTD
Reply to  manny
1 year ago

I’m still confused about what you are trying to say here. Is this what you meant?

Is it really surprising that a view that morally permits enslaving others, if the happiness this bring to you outweighs the suffering it brings to others, will occasionally lead its adherents to do bad stuff?

Tom Hurka
1 year ago

I don’t think it’s accurate to say Singer’s 1972 article “applied utilitarianism” to problems of world hunger. It said we should prevent suffering if we can do so without, among other things, “doing something that is wrong in itself,” which allows that there may be things wrong in themselves, i.e. that there may be deontological constraints. Singer may himself have rejected any such constraints, but his article didn’t. Nor did the article recommend killing or harming some people to save other people from starving; it just recommended giving aid.

In a series of articles Guy Kahane and others have distinguished two sides of utilitarianism: the no-constraints, consequentialist side that can endorse killing one to save five, and the side that makes strong demands of beneficence, requiring impartial concern for all people everywhere, including in distant places. I don’t see why an EA movement can’t accept the second of these without the first. Why can’t a view have both constraints and a very demanding principle of beneficence?

Hayden Wilkinson
Reply to  Tom Hurka
1 year ago

For what it’s worth, from my own vantage point of being fairly heavily involved in the EA community (and the longtermist subset of it), this is what I take most of the EA community to believe. It’s unfortunate that EA is so often conflated with consequentialist views that deny constraints.

MacAskill advocates for roughly this in his Twitter thread on recent events with FTX.

On the Market
1 year ago

It seems reasonable to assume that a philosophy that intrinsically IS a movement that seeks to gather large sums of money from the obscenely rich is, if it is to be effective, a philosophy that is intrinsically appealing to the obscenely rich and their interests. Ipso facto, if EA is effective, it is corrupt. [Nobody will sincerely argue that there is no moral fraughtness in the interests of the obscenely rich, I hope.]

That matches my perception of EA proponents in public. They espouse a philosophy that has made itself appealing to capital by re-focusing moral discourse from the problems of accumulated capital to “sexy”, but far-flung, theoretical exercises (like the “AI control problem”). At the very least, the present case reveals that a VERY blind eye has been turned to the means by which capital is accumulated, in favor of the alleged good that can be done by the means provided by capital itself.

If someone doesn’t see a moral wrong in this, I can’t help them. I can’t help MacAskill. He seems contrite, but has offered nothing beyond moral outrage and accusations of misunderstandings — in particular nothing about EA itself and how it became a movement that permitted this. If he cannot see this blind eye turning as his own moral mistake, he is inadequate to his own moral ambitions.

But of course he cannot. His moral failing isn’t a problem of communication or public vs academic philosophy. It is intrinsic to the EA philosophy itself.

I can see no downside to this being made obvious now.

FTY
1 year ago

I think we’ve probably exhausted the usefulness of examining this one guy’s mental contents, but his teenage take on Parfit did crack me up.

When Bankman-Fried was about 14, his mother says, she noticed that—completely on his own—he had been reading up in this area intensively.

“He emerged from his bedroom one night and said to me, ‘Mom, what kind of person labels an argument he disagrees with ‘the repugnant conclusion?’” Bankman-Fried had stumbled upon the writings of philosopher Derek Parfit, who had used that phrase in criticizing a certain strain of utilitarian thought.

“Sam was mad at Parfit for being wrong,” Barbara Fried recounts, “but madder at Parfit for the cheapness of his argument. ‘If you’re gonna take this on, you damn well need to grapple with the argument’” and not merely label it “repugnant.”

(…)

He is a billionaire, at least in part, because he has more risk tolerance than most of us, and has not been cowed by ferocious condemnations of the sector by powerful and highly credentialed people.

Which is totally in character. As Barbara Fried says of her son: “His position has always been: ‘I will go where the premises of utilitarianism take me. I’m not going to flinch. And I’m not going to name the outcomes ‘repugnant.’ I’m going to think about them, and my judgment is going to be rational.’”

https://finance.yahoo.com/news/ftx-ceo-sam-bankman-fried-profile-085444366.html

Matt L
Reply to  FTY
1 year ago

“…because he has more risk tolerance than most of us,”

Well, that’s certainly one way of putting it. It’s easy to think of people who have “more risk tolerance than most of us”, and why many of them end up in jail. Probably in this case, too. (And of course it’s easier to have “more risk tolerance” with other people’s money!)

Derek Baker
1 year ago

Have philosophers considered not advising young college grads to alter their career plans so that they can maximize their earnings for the greater good?

Yao Lin
Reply to  Derek Baker
1 year ago

I was about to comment on that too. The “you can do more good by earning more money” line has been a trope used by recruiters from, say, consulting firms during campus job events for decades, luring numerous college grads into high-paid bullshit jobs that produce nothing valuable at all (and, worse, actively do harm to society). Philosophers really don’t have to use their purported “moral authority” to help further legitimize such rationalization.

Kenny Easwaran
Reply to  Derek Baker
1 year ago

For what it’s worth, this is the recent advice that 80,000 Hours (the EA career guide) has on “earning to give” – in recent years they have determined both that their earlier advice was too bullish on the idea at the time, and that conditions in the world have changed to make it even less effective than it might have been a decade ago.

https://80000hours.org/2015/07/80000-hours-thinks-that-only-a-small-proportion-of-people-should-earn-to-give-long-term/

Derek Baker
Reply to  Kenny Easwaran
1 year ago

That’s good. But I’ll admit that I remain baffled at how people can be this confident in their ability to offer life advice to others.

Richard Y Chappell
Reply to  Derek Baker
1 year ago

Hi Derek, I’m pretty confused by this. How confident do you think one needs to be in order to offer advice?

At one extreme, there’s a kind of paralysis norm:

(Paralysis): Don’t offer advice unless you’re sure (or very close to sure) that it’s for the best.

But that doesn’t seem like a good norm. For one thing, it seems self-undermining, since you presumably can’t be sure that this is a good norm. So you can’t advise others to follow it.

It also doesn’t seem like a norm that you follow yourself, insofar as your first comment advised philosophers against advising earning-to-give, despite the possibility that your advice here could (presumably quite easily!) turn out to be harmful.

At the other end, there’s standard norms of expected value:

(EV): Offer advice when doing so is positive in expectation.

EV seems generally reasonable to me (at least when tempered with commonsense heuristic limitations on what we should be willing to overturn on the basis of a rough calculation — so, no advising criminality, etc.).

Note that EV doesn’t require high confidence in any ordinary sense. Even in the face of significant cluelessness about actual consequences, we may still reasonably judge that it’s more positive in expectation to at least *try* to help people (in a strategic, scope-sensitive kind of way) than to not even try to take these considerations into account.

For related discussion on why I think reticence-enforcing epistemic norms are bad, see ‘Agency and Epistemic Cheems Mindset‘.

Jared
1 year ago

Well, so maybe there’s a substantive link between the disaster and EA (see, e.g., Avram Hiller on this thread for one plausible case).

But it’s easy to overstate, especially at a moment when there’s been a lot of shoddy thinking about EA and its alleged guilt by association. In particular, there’s been a lot of inferring from the fact that EA is compatible with certain kinds of crap to the conclusion that EA endorses, “lends cover” to, or is somehow too congenial to that crap.

In some prominent philosophy venues, for example, professionals who should know better have expressed the embarrassing non-sequitur that because EA does not target the root causes of the inegalitarian status quo, it is somehow supportive of, or friendly to, that status quo, or is somehow against addressing the root causes.

Which is obviously ridiculous. It’s like criticizing a campaign to donate pain meds as “endorsing” pain because it fails to address pain’s root causes, or even as lending “cover” to those who want to keep inflicting pain. Again, it should be obvious that just because an idea can be coherently embraced by a jerk doesn’t make it a jerky idea.

On the Market
Reply to  Jared
1 year ago

I believe the accusation isn’t so much “lending cover” as “actively running interference for and giving succor to”.

Joseph Rachiele
1 year ago

In terms of other lessons for EA as a social movement/ethics in practice rather than utilitarianism as a moral foundation: there is a question that arises from the business reporting (or maybe the bankruptcy filing?) that shows how terribly the company was managed. This would be distinct from whether SBF was sincere in saying there was a line of doing great harm that he would not cross; I don’t know the answer to that.

How much could the EA ppl involved have known about that? They knew, I suppose, that SBF was young, inexperienced, and amassing a fortune in an unregulated market. If they underestimated the risk of serious harm given knowledge of these facts, what kind of failing would that be? I think some ppl are suggesting there was a kind of negligence there.

Suppose that’s right. In terms of how to place any new information in the context of prior opinion, my prior would have to reflect the fact that those involved have worked so successfully with two other billionaires in the past on GiveWell and Open Philanthropy to do tremendous good.

Billy
1 year ago

In line with what Justin’s post says and with what Tom Hurka’s post about Singer’s 1972 article says, I believe that EA can accept deontological constraints while also embracing a super-demanding principle of beneficence. So, what I have to say below about the distinction between theory and practice is not necessarily about EA.
 
Relying on the distinction between theory and practice has long struck me as questionable. It is at least paradoxical, and it might even be outright inconsistent. Here are some examples:
 
(1)  A utilitarian can claim that deontological constraints are mere heuristics, mere rules of thumb, or only of instrumental importance in a theoretical sense, while claiming that in practice deontological constraints need to be deeply respected, internalized, and complied with. In short, the claim is that, in practice, we had better treat deontological constraints as having an intrinsic (i.e., non-instrumental) kind of importance whenever we are making decisions about what we should do, even though we know in theory that deontological constraints do not have this kind of importance.
 
(2)  Sidgwick is a hedonist. But he also says the following:
 
“And thus we may conclude that the pursuit of the ideal objects before mentioned, Virtue, Truth, Freedom, Beauty, etc., for their own sakes, is indirectly and secondarily, though not primarily and absolutely, rational; on account not only of the happiness that will result from their attainment, but also that which springs from their disinterested pursuit (Sidgwick’s emphasis).”
 
Sidgwick is urging us to treat “ideal objects” such as virtue and knowledge in practice as ends worth pursuing, even though he thinks that, theoretically, we should hold that virtue and knowledge are merely instrumental goods.
  
(3) Mackie writes the first chapter of his famous book on ethics, claiming that there are no objective values or objective moral truths. But then he writes the rest of the book, making all kinds of claims about normative and applied ethics. His view is that in theory these claims are all objectively false, while in practice he is deeply committed to these claims and cares a great deal about them, so much so that he wants us all to agree with him on them.
 
In (1) and (2) above, the distinction is that, theoretically, X is of merely instrumental importance, while in practice X should be treated as having intrinsic (i.e., non-instrumental) importance. In (3) above, the distinction is that, although X is objectively false, it is nonetheless worthwhile in practice to spend lots of time and effort arguing for X, publicly expressing the value of X, and trying to get others to agree with one about X. 
 
I am sure that there is real sincerity on the part of people like Sidgwick and Mackie in relying on the distinction between theory and practice. And nobody is going to doubt how intelligent these people are. Sidgwick and Mackie, for instance, are both far more intelligent than I am. But I have trouble buying this sort of move. If we know theoretically that X is only instrumentally important or that X is objectively false, then how are we supposed to convince ourselves in practice to treat X as being intrinsically important or as being true in some important sense? And why should we do this, exactly? Maybe I am being dense. I don’t know. But I have trouble doing this kind of thing. I would rather just say, “Let’s try to get our theory and our practice to align, cohere, square together, etc.”
 
Two more examples: (1) Pascal takes his wager to suggest that, even if one’s considered theoretical view is that one does not know whether God exists, one nevertheless should go to mass, pray, and so on – that is, one should in practice act as though God exists – with the hope being that, eventually, this will lead one really to believe in God. (2) I can’t remember where, exactly, I read this one, but for those of us who are anxious-attachment people, I read that one practical strategy is as follows: when you feel rejected by someone you care about, you can repeatedly tell yourself dismissive things about this person as a practical strategy. The idea is that, if you feel rejected by X, you can repeatedly say to yourself (as a practical strategy) things such as, “Well, I don’t really need X in my life. I am better off without X.” I have tried this one. It slightly worked for a minute. But then some other part of my mind (the theoretical part, I guess) kept telling me, “I am lying to myself. I don’t really believe that I am better off without X. This practical strategy is deceitful.” And at that point the practical strategy didn’t work anymore.
 
Just my two cents…

Richard Y Chappell
Reply to  Billy
1 year ago

fwiw, in my post on Ethical Theory and Practice, I do not suggest that “in practice [constraints] X should be treated as having intrinsic (i.e., non-instrumental) importance.”

Rather, my suggestion is that there are “extremely weighty reasons to avoid violating generally-beneficial rules or rights (even when it seems to the unreliable agent that the violation would be worth it).” These are instrumental reasons, but no less weighty for that. So it’s all perfectly coherent.

Indeed, what’s incoherent (I think) is to simultaneously (i) recognize that agents subjectively similar to oneself are overwhelmingly likely to be mistaken in judging that it’s positive EV to break the rule, and yet (ii) go ahead and break a rule whenever you naively judge it to be positive-EV to do so.

By (i), you ought in such a situation to conclude that your judgment in (ii) was likely erroneous, and so in light of this higher-order evidence the rule-breaking act is negative-EV after all.
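A toy calculation (numbers invented purely for illustration) of how (i) undercuts (ii): suppose agents who naively judge some rule-violation to be positive-EV have historically been right only 10% of the time, gaining G when right and losing L when wrong. Conditioning on that track record,

```latex
\[
\mathrm{EV}_{\text{all-things-considered}} \;=\; 0.1\,G \;-\; 0.9\,L,
\]
% negative whenever L > G/9: the naive first-order judgment is swamped
% by the reliability evidence, so expected-value reasoning itself tells
% the agent to keep the rule.
```

So the expectational theory, applied consistently to all the evidence, recommends against the violation.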

So, on my account it’s not that there’s an insuperable gulf between theory and practice. It’s more that the thought experiments we invoke when doing theory (with their stipulated certainties) aren’t relevantly similar to real-life practical decisions. The latter are more complex, and require taking higher-order evidence into account, which should lead to our trusting reliable heuristics over unreliable calculations. No incoherence or self-deception required.

Billy
Reply to  Richard Y Chappell
1 year ago

Thanks, Richard. That is helpful. I’ll think about your view.

Charles Pigden
Reply to  Billy
1 year ago

Seconding Richard Chappell, but this time with respect to Mackie: I would say that he too does not rely on a theory/practice distinction. His theoretical claim is that morality is a fiction but that some moral fictions are useful to believe. Hence in pushing certain fictions his practice was consistent with his theory.

Billy
Reply to  Charles Pigden
1 year ago

Thanks, Charles. I’ll reconsider my wording for Mackie.

Kenny Easwaran
Reply to  Billy
1 year ago

This is a topic that I think is dealt with interestingly in, of all places, Thi Nguyen’s book “Games: Agency as Art”. He notes that while playing a game, we take on as a final end the accumulation of points (or whatever it is that you do to win the game), and that somehow we manage to do this even though on some level we know that this end is merely instrumental to some further end we have, which is to have fun with friends (or whatever it is). I have been intending to re-read the relevant chapter of the book (I think the second?) because I think this point is so important to these other questions about agency, ethics, and epistemology.

Billy
Reply to  Kenny Easwaran
1 year ago

Interesting, Kenny. I’ll check out Thi’s book. Thanks for pointing me to this.

FTY
1 year ago

For me, framing it in terms of individual agents that receive a body of knowledge via teaching or “public philosophy” and then run amok with it is not very helpful.

This is less like “what if undergrads read Kant and direct terrorists to kindergartens?” and more “how should we assess Marxism given historically existing Marxist movements?” Much like in the case of Marxism, I doubt we will reach consensus about what lessons one should draw about the theory from nominally affiliated practice.* But I find it helpful to think about effective altruism now as a political movement, which should be under the kind of scrutiny we reserve for political movements.

I used to think that effective altruism was political only in a broad sense: reinforcing the status quo, not trying “to understand how power works, except to better align itself with it” etc.** The details of the FTX scandal made me think that it is political in the narrower sense of trying to get power to advance its goals. It showed me to what extent EA is happy to use its social and institutional network to let global capital slip its regulatory leash (which is the story of SBF’s ascent, fraud aside). It also showed me that EAs will leverage financial power into social and political power, in a coordinated but not necessarily transparent way (the detail that’s unsettled me the most in this affair keeps being MacAskill texting Elon Musk to get SBF in on the Twitter acquisition).

If anything in philosophy deserves scrutiny, maybe it is teaching ethics without political philosophy, thereby allowing ethical traditions for which this relationship is contentious (all of them?) to project their implicit models of politics unchallenged.

* I’m not using the comparison to make an evaluative point. Someone who *is* using it as a warning to longtermist EA is… Peter Singer, see https://www.project-syndicate.org/commentary/ethical-implications-of-focusing-on-extinction-risk-by-peter-singer-2021-10
** See Srinivasan on MacAskill’s previous book in 2015: https://www.lrb.co.uk/the-paper/v37/n18/amia-srinivasan/stop-the-robot-apocalypse

Enrico Matassa
Reply to  FTY
1 year ago

I would think that looking at the actions of actual agents who have sincerely tried to follow a moral or political philosophy is much more informative than looking at hypotheticals. The fact that Mill was a colonialist out of sincere utilitarian conviction, that the planners of the Vietnam and Iraq wars justified their actions largely in utilitarian terms, that the doctors who murdered their patients at Memorial Hospital in New Orleans reached that conclusion on utilitarian grounds: to my mind, every one of those is a far more damning indictment of utilitarianism than any clever philosopher’s hypotheticals about stranded hikers who want to change their wills, doctors harvesting organs, experience machines, or the like. And it’s not as if we ever reach consensus from thinking about the hypotheticals either.

Nicolas Delon
Reply to  Enrico Matassa
1 year ago

But we have plenty of examples of utilitarians advocating, on utilitarian grounds, for social reforms that we rightly consider laudable. And there were also Kantian Nazis, colonialists, etc.

JTD
Reply to  Enrico Matassa
1 year ago

In discussions like these, where we discuss historical events involving agents acting on the basis of certain motives, we must be careful not to equivocate between:

  1. A particular decision being made using consequentialist reasoning.
  2. A consequentialist moral theory being accepted by an agent, who then uses that theory to guide all of their decisions.

All plausible moral theories will regularly require agents to make decisions using consequentialist thinking. For example, any plausible version of deontology will require agents to use consequentialist thinking when they are deciding between options that do not involve violating a constraint or special duty, or incurring a cost that they are permitted not to incur. This is why a lot of public health policy, endorsed by non-consequentialists, utilizes consequentialist reasoning.

It follows that pointing to examples of people engaging in consequentialist thinking and ultimately making bad decisions is not, as it is often wrongly portrayed, a special problem for consequentialism. If it is a problem at all, it is a problem for all plausible moral theories because all of them require consequentialist thinking in many circumstances.

The thing that could be a special problem for consequentialism is instances of bad decisions arising from (2) in situations where non-consequentialist theory would not have endorsed consequentialist reasoning. Yet, people citing various historical examples (such as Eric Schliesser) have failed to provide details to show that this was actually the case in these examples.

Of course, people making these kinds of arguments have also omitted several other steps from their argument: for example, showing that (2) will overall lead to worse outcomes than other decision procedures for moral thinking, or explaining why any of this is a problem for consequentialism when it has been standard since the 1950s for consequentialists to distinguish between the formal criterion of right action and the decision procedures we follow in our moral thinking, and to argue that there are probably good consequentialist reasons for not following a purely consequentialist decision procedure. But my point is that even before we get to these issues, these critics are making the basic equivocation between (1) and (2) that I describe above.

Extremely vindicated
Reply to  FTY
1 year ago

It also showed me that EAs will leverage financial power into social and political power, in a coordinated but not necessarily transparent way (the detail that’s unsettled me the most in this affair keeps being MacAskill texting Elon Musk to get SBF in on the Twitter acquisition).

Can I uh get some more details on this.

FTY
Reply to  Extremely vindicated
1 year ago

You can read the Musk texts here: https://twitter.com/MattBinder/status/1591091481309491200

MacAskill: Hey- I saw your poll on twitter about Twitter and free speech. I’m not sure if this is what’s on your mind, but my collaborator Sam Bankman-Fried has for a while been potentially interested in purchasing it and then making it better for the world. […]

Musk: You vouch for him?

MacAskill: Very much so! Very dedicated to making the long-term future of humanity go well.

To the general point: effective altruism seems more aware of the importance of policymaking and politics recently (see this lecture from EA Global 2018); it ran a candidate for Congress, unsuccessfully (bankrolled by SBF but with organizing from US EAs, and with some people complaining of social pressure from community leaders to support the campaign); and it got the UN on board with some measure of longtermism (this is a critique with some details).

All these are fine individually, but they point to a movement that is starting to be politically active in a way I had not realized EA was from previous coverage. I dislike that this was partly made possible by having a billionaire put large amounts of money into politics, but whatever, the system is broken. But given how social media is used, acquiring Twitter is buying some political influence (plus potentially a lot of social influence) in a way that made me uncomfortable.

Eric Steinhart
1 year ago

Here’s an issue I haven’t really seen much discussed here: Isn’t cryptocurrency itself just unethical?

It’s ruined the lives of countless people who “invested” in it. Its primary use seems to be to fund global organized crime (e.g. human trafficking, terrorist groups, illegal arms dealers, hackers, etc.). It contributes massively to environmental degradation. It certainly looks like a pyramid scheme or even outright fraud. It’s closely tied to conspiracy theories and delusional thinking.

Why would any *ethicist* want to be involved with this? For my part, I think philosophers should condemn this.

Mark Alfano
1 year ago

My take on this so far, as someone who has studied both the ethical side and the tech side of blockchains, and has followed to the extent possible what actually went down with FTX, is:

(1) blockchains as a basis for common knowledge might be pretty useful (David Lewis’s ghost would likely agree)

(2) but most actual use cases for blockchains (cryptocurrencies) are ponzi schemes (there are other use cases that don’t burn down forests to make money)

(3) fraud is inevitable in ponzi schemes

(4) EA advocates such as MacC got themselves entangled with cryptocurrencies (ponzi schemes) in pursuit of other goals, perhaps naively, perhaps through culpable ignorance

(5) ponzi schemes and other types of fraud *massively* undermine trust, which is one of the most important elements in human cooperation, and which is very hard to earn back after betrayal

(6) cooperation is how we both aim for and achieve moral ends

(7) so it’s a real pity that a slightly nifty tech innovation (blockchain) has led a bunch of people to be bamboozled, in turn leading to a lot of direct damage (loss of investments) and probably to a larger indirect damage (distrust in cooperative infrastructures)

Joseph Rachiele
Reply to  Mark Alfano
1 year ago

Can you elaborate on 2) and 3)? When I had finance friends tell me to buy bitcoin in December 2012, I told them: a) I was broke; b) it was gambling on a coin whose value wasn’t tied to any productive asset. They agreed with b), thought bitcoin might increase in value, and had money to lose, but I don’t think they were early investors in a ponzi scheme. So I take it your view is about the newer cryptocurrencies I’m unfamiliar with.

So you’re saying that all of these deceive people into thinking they are tied to some productive asset when they aren’t? I am aware of concerns about deceptive marketing and pump-and-dump strategies, but that’s different.

What happened at FTX with customer funds may indeed be a ponzi scheme (those more knowledgeable about finance and crypto will have to judge), but I can’t figure out how the kind of fraud that may have occurred at FTX was inevitable from its being a crypto exchange.

I am still slowly reading things and formulating my views on what happened, so I am curious whether I may be missing info on how these newer cryptocurrencies work that is more detailed than what you find in some popular newspapers.

Mark Alfano
Reply to  Joseph Rachiele
1 year ago

I think 3) is pretty well established, no? On 2), I’m basically just agreeing with Paul Krugman (e.g., https://www.nytimes.com/2022/06/17/opinion/crypto-bitcoin-inflation-gold.html). Cryptocurrencies like bitcoin, FTT, and FTX only rise in value when enough additional purchases are made. So there’s a huge incentive for people who already have skin in the game to encourage others to invest. Since these coins aren’t tied to any productive asset, there’s no way to make money off of them other than leaving somebody else holding the bag.

That’s consistent with your finance friends thinking that you might be able to dip in and back out of the scheme with lucky enough timing to make a lot of money. Surely many people have. But especially recently, it’s been pretty clear (e.g., in the Super Bowl ads) that crypto is being hyped to draw in naive investors — either just to shore up the finances of the firm or to actively fleece them.
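
To illustrate the “holding the bag” arithmetic, here is a toy sketch with made-up traders and prices (my own illustration, not anyone’s actual numbers). Because the asset produces nothing, cash only changes hands, so realized profits across everyone involved sum to exactly zero:

# Hypothetical trades in an asset with no underlying productive value.
# Each tuple: (seller, buyer, price). Creating the coin costs the issuer nothing.
trades = [
    ("issuer", "alice", 100),
    ("alice", "bob", 400),
    ("bob", "carol", 900),  # no further buyer appears: carol holds the bag
]

profits = {}
for seller, buyer, price in trades:
    profits[seller] = profits.get(seller, 0) + price  # cash received
    profits[buyer] = profits.get(buyer, 0) - price    # cash paid

print(profits)                # {'issuer': 100, 'alice': 300, 'bob': 500, 'carol': -900}
print(sum(profits.values()))  # 0: every realized gain is someone else's loss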

Joseph Rachiele
Reply to  Mark Alfano
1 year ago

I agree with everything you said after the first two sentences, so I want to try to state the issue again, because my comment was too unclear.

The sense in which most crypto is like a ponzi scheme is the sense in which variations on pump-and-dump are like ponzi schemes and so involve deception: developers know their coin won’t be the next bitcoin, for example, yet they nevertheless promote it as such, along with dreams of far-fetched use cases. That was my understanding. And this is a different sense of “ponzi scheme” than the one I am used to.

So the deception may be fraud in a colloquial sense, but it doesn’t sound like the kind of misrepresentation of facts and of the nature of the business in which *criminal* fraud consists in paradigmatic ponzi schemes. I am of course not trying to argue that such behavior is ethically unproblematic. And far worse seems to go on in the crypto world. I just think a distinction is necessary here, for the following reason.

In 7) you suggest that the nifty tech of blockchain may have led EA leaders to be bamboozled into working with someone who many people think used customer funds in a traditional ponzi scheme and was engaged in criminal fraud. That’s why FTX did such massive moral harm. But if all that was foreseeable not from inside info about SBF but from more general facts about crypto, I would be very much interested. Is Coinbase next?

The strongest argument I could come up with regarding this possibility, using easily accessible public info, was the one about some risk of this happening that I presented above, in my comment that wasn’t a reply to you or others. But before this meltdown, any discussion of SBF and FTX went in one ear and out the other, so I’m still coming to an understanding.

JTD
Reply to  Mark Alfano
1 year ago

I think that there is much truth to the ponzi scheme analysis; however, I would tell a slightly more complicated story here:

Cryptocurrencies like bitcoin, FTT, and FTX only rise in value when enough additional purchases are made. So there’s a huge incentive for people who already have skin in the game to encourage others to invest. Since these coins aren’t tied to any productive asset, there’s no way to make money off of them other than leaving somebody else holding the bag.

Standard cryptocurrencies are, in themselves, a kind of productive asset, just not a kind that we should have any respect for. The “productive” thing they provide is a neat tool for money laundering.

Indeed, I imagine that a grossly simplified story of their rise is something like this: (1) The early days of bitcoin; the only users of bitcoin are anarchists and other fringe types, the market value remains very low, (2) criminal organizations start to see the value in bitcoin for money laundering and begin pouring money into it, the market value goes up, cryptocurrencies start to proliferate, (3) investors (many naive but some smarter ones who hope to get out at the right time) are impressed by the rising market value and start to pour money into cryptocurrencies, the market value rises even more steeply, (4) conmen of various sorts get involved in spruiking cryptocurrencies to even more investors–the classic Ponzi scheme stage, (5) the inevitable collapse.

Given this story, it seems possible that, unlike a traditional Ponzi scheme, after all the naive investors have fled cryptocurrency and made huge losses, the market value of some of the more useful cryptocurrencies might stabilize at a fraction of their peak, and they might have a future as a money laundering vehicle. But, then again, maybe regulators will be able to step in and undermine this in some way. I really don’t understand enough to make any firm predictions here.

Joseph Rachiele
Reply to  JTD
1 year ago

This more complicated story is not correct. You should not expect standard cryptocurrencies to increase in value through use in money laundering. There are far too many of them. They are far too easy to create. Most of them *are* worth nothing now.

A minor point before the substantive ones regarding 3): smart investors like my friends working in quantitative finance didn’t hope to time anything. They knew they couldn’t time markets (without superior information). That’s why they offloaded most of their coins sometime when bitcoin was below $1,000, and didn’t make nearly as much as people are probably assuming. That was not a ponzi scheme in either the weak or strong sense I distinguished above.

Re: 2):

I thought it sounded plausible, from the rhetoric of the reporting, that money laundering significantly contributed to the rising market value of crypto, but then I talked to someone in EA in the spring of 2021 and became agnostic about it after they made some excellent points about how blockchain works. So what’s the evidence that it is true?

For example, I recall this person thought tokens tend to pass through public wallets, which makes money laundering through crypto easier to trace than some alternatives. Is that right?

The main mechanism for money laundering in the world still appears to be banks, which locate themselves in favorable regulatory regimes, as the company FTX appears to have done.

As for 4) these aren’t classic ponzi schemes but ponzi schemes in the weak sense.

Re: 5): collapses of market value are certainly predictable for speculative bubbles, but this was not the claim Mark suggested in his original post.

I believe the original post suggested that leading EAers were bamboozled into failing to see that the kind of massive moral harm SBF did is almost certain for any cryptocurrency or exchange, although I welcome correction on this point. And that they could have predicted this from the kinds of facts you are discussing.

I now think there is more evidence than I initially did that this assertion is false.

Scott
1 year ago

Given the opportunity to spend money to do as much good as possible, he granted the money to himself.

“Over a quarter of grants paid out by the Future Fund as of June 2022–$36.5 million–went to charities controlled by Effective Ventures, the U.K.-based charity organization chaired by MacAskill: $14 million went to his main group, the Centre for Effective Altruism; $15 million went to Longview Philanthropy, which helps design “bespoke giving strategies” for major donors; $5 million went to fund the Atlas Fellowship, a scholarship program for high schoolers run out of MacAskill’s Centre; $1 million went to Non-trivial Pursuits, a spinoff project of MacAskill’s 80,000 Hours nonprofit; and another $900,000 towards the Effective Ideas Blog Prize, a sort of E.A. writing competition run by MacAskill’s Centre.

Another $7.8 million was pledged to Effective Ventures groups–including $3.9 million for a donation fund run by MacAskill’s Centre, and another $2.9 million for Longview Philanthropy–though it’s not clear if those grants were paid out. Money was also pledged to smaller, more obscure E.A. groups, such as a YouTuber who makes videos about E.A.”

Scott
1 year ago

It’s stolen money. Will they give it back?

Richard Y Chappell
Reply to  Scott
1 year ago

From what I’ve read, it sounds like the largest grants were made back when FTX was profitable (e.g. in Jan-Feb), so it probably wasn’t actually stolen money? But I guess that’s for the bankruptcy courts to sort out.

Others have discussed some of the moral issues in more detail over on the EA forum: https://forum.effectivealtruism.org/posts/FKJ8yiF3KjFhAuivt/impco-don-t-injure-yourself-by-returning-ftxff-money-for

Scott
Reply to  Richard Y Chappell
1 year ago

Thanks for replying, to what was admittedly a pretty flippant comment above.

If you were a legit organization that applied for a grant and got one (pandemic preparedness or whatever), I definitely think you should keep the money. You could not have been aware of what was going on.

I also think, regardless of when the house of cards started to crumble, a bunch of people have deposits with FTX that they cannot access. And they can reasonably ask where that money went. And part of the answer is ‘property in the Bahamas’ and possibly ‘secret accounts SBF set up’ and definitely ‘FTX Future Fund’.

And since Will is a moral philosopher, and he gave himself $36 million under the guise of doing the most good possible, he should give the $36 million back. Because there is a clear line between the people who had their money stolen and the organizations he leads.

Certainly part of my intuition here is that pandemic preparedness is a real philanthropy that can do good in the world and EA orgs are not.

David Wallace
Reply to  Richard Y Chappell
1 year ago

The bankruptcy courts won’t touch the issue of whether the money is stolen: that’s a criminal issue. Their job is just to sort out paying the creditors, to the degree that that’s possible. I think US courts have some ability to recover payments made in the 90 days before a Chapter 11 filing, but even then there has to be some reason to think the payment was unusual or irregular, and I doubt that applies here.

I don’t know how the criminal law works if actually-stolen money gets donated to charity. Presumably the charity has to return it if it is unspent. But (at least from what I can tell) while it’s obvious that FTX was being run in a horrendously irresponsible way, it’s less obvious that the bar to establish literal theft will have been met. (The rather weird late transfers out of FTX do plausibly look like actual theft, but they postdate the FTX Foundation paying for anything.) The financial situation (again, as far as I can tell) isn’t “Bankman-Fried transfers customers’ funds to himself, sells them, gives the money to charity”; it’s “Bankman-Fried fiddles his company’s accounts in a way that’s somewhere between incompetent and fraudulent, pushes up the value of his own funds by doing so, sells them, gives the money to charity.” I think it would be difficult to establish a criminal recovery path (and if you could, it would apply to everything the FTX Foundation has funded, not just organizations that MacAskill is involved with).

Enrico Matassa
1 year ago

You know, I’ve never liked utilitarianism, but there’s always been an especially bad smell about Peter Singer’s “Famine, Affluence, and Morality” and I’ve only just been able to spot what it is. The whole thing reeks of a (white) savior complex.

Singer likens Bengali refugees to a child in the water, but of course that overlooks the history of the crisis in a number of ways. Most tellingly, it overlooks the role that the British empire played in the partition that set the stage for the Pakistani civil war that created the crisis, and the way the Empire spent years magnifying inter-community conflicts. Sometimes it did this unintentionally, yes, but often quite deliberately, under a divide-and-rule strategy. This was something Singer should have known when he wrote the essay. But that’s not the impression his essay gives. Instead, it’s as if these problems just happen. There’s no sense that an Australian educated at Oxford might have a different relationship to this crisis than a random person reading the essay in, say, Mexico or Switzerland, or a community college in the U.S. for that matter. It also denies Pakistanis, Indians, and Bengalis agency in ways that hearken back to the worst colonial thinking: they’re children who need us wealthy, smart westerners to save them.

And this brings me to a final point. Neither Singer nor most philosophers acknowledge how the humanitarian crisis that occasioned the essay was actually resolved. (I suspect many are too ignorant of actual history to even know.) It wasn’t that a lot of Oxbridge sorts and their readers gave to Oxfam til it hurt. Rather, Indira Gandhi and the rest of the Indian leadership intervened decisively in the war, out of a mix of legitimate moral concern about the humanitarian crisis (which, by the way, India itself was largely left to deal with) and very shrewd realpolitik.

Utilitarianism seems particularly prone to this sort of smugly paternalistic thinking. The moral agent is a benevolent bureaucrat, except imbued with godlike power and wisdom, who must use that power and wisdom to aid passive victims in trouble.

JTD
Reply to  Enrico Matassa
1 year ago

So, let me see if I understand your reasoning. Let’s suppose that you are an Italian walking through a town in colonial Libya in 1930. You see a Libyan child fall into a pond and start to drown. Your first inclination is to save them. But then you realize that this would be your “white savior” complex kicking into action. Much better, you think, if someone from the Libyan independence movement were to save this child, as this would prove that the Libyans don’t need white saviors. So, you leave the child drowning on the assumption that they will probably be saved by one of their countrymen (although you can’t actually see any nearby at this moment). You then head back to your hotel, where you proceed to make several sharply critical posts on your social media about the evil of colonialism and how it has caused Libyan children to drown like this. There is, of course, much truth to what you are saying. However, in the polarized world of social media your posts are merely preaching to the converted and make no substantial difference to the current political situation in Libya. Nonetheless, on a dopamine high from the hundreds of likes and positive comments your post received, you congratulate yourself on how morally superior you are to those who disagree with you and rest contented with the purity of your moral soul.

Jen
Reply to  JTD
1 year ago

Part of the point was that as a group, Bengalis are neither helpless nor children. So there’s a disanalogy, and Singer’s example is inapt. As a result, his argument is not highly persuasive. For similar reasons, the argument your example supports is unpersuasive.

JTD
Reply to  Jen
1 year ago

My post was satirical, so if you are looking for serious arguments in it you have misunderstood it.

Regarding Singer’s famous argument, the claim that making the drowning person a child invalidates the analogy, or that it infantilizes, and therefore insults, adult victims of extreme poverty, is an odd one. As Singer himself has explained, he made the person needing aid a child to blunt the callous response that any adult drowning in a pond, or suffering from extreme poverty, must be in some way blameworthy for getting into that situation, and hence that we are not obliged to make sacrifices to help them. The point is that anyone who thinks like that should at least accept an obligation to children, who cannot be blamed for their predicament, and hence should contribute to aid that will help children. But, of course, we should reject the idea that adult victims of extreme poverty tend to be blameworthy for their predicament, and also reject the idea that anyone who does bear some blame for their predicament therefore does not deserve to get life-preserving aid from those who can provide it at little or no cost to themselves. Given that, you can make the person drowning in the pond an adult and you should still have the intuition that you should wade in and save them, even though doing so would be a small inconvenience to you.

Jen
Reply to  JTD
1 year ago

It’s obvious that you weren’t offering an argument in the post. But I take it that your point was to suggest something like the following: Enrico Matassa’s comment supports reasoning sufficiently similar to your satirical reasoning, and if so, Enrico Matassa’s comment is foolish, misguided, or some such thing. Because the comment doesn’t support reasoning sufficiently similar, the argument is unpersuasive. If you wish to deny that you were suggesting this, fine.

Pointing out that Singer made the person in need a child to avoid objections regarding blameworthiness misses Enrico Matassa’s point. The point wasn’t about blameworthiness or insult. Rather, it was about the difference in our expectations of agency. Children’s capacities to do things independently, autonomously, rationally, etc. tend to be less well-developed than adults’. And we expect children in dire situations to have their underdeveloped capacities significantly diminished. Our expectation of a drowning child, then, is that he will have little if any ability to act so as to save himself. Our expectations of Bengalis in somewhat less dire a situation are and should be different; we shouldn’t expect that they will have little if any ability to act so as to save themselves. This is why the two situations are disanalogous, and why Singer’s example is inapt.

The intuition that we have an obligation to rescue the child if it can be done at little cost gets its force in large part from our low expectations of the agency children have in dire situations. Since we don’t and shouldn’t have the same low expectations of the agency Bengalis have in their situation, one can reject the generalization from the intuition mentioned to the principle that we have an obligation to rescue if it can be done at little cost. For this reason, Singer’s argument is not highly persuasive.

JTD
Reply to  Jen
1 year ago

Again this misses the point in fairly obvious ways. The obligation in question is an obligation to help people who will suffer serious harms if you do not help them. There are obviously situations where people who are faced with serious harms will not in fact be harmed if you do not help them, either because they will help themselves, or someone else will help them. Singer’s argument concedes that in such situations you have no obligation to help them.

The Bangladesh famine of 1974 was one of the deadliest of the 20th century. It is estimated that 1.5 million people died as a result of it. Do you doubt that most or all of those who died lacked the ability to prevent their suffering and death? Do you doubt that if more emergency aid had been sent to the worst-affected regions in 1974, there would have been less suffering in those regions?

In any case, this is all beside the point. The Bangladesh famine of 1974 is merely a case study. The general point would still be valid even if this case study failed (although I see no evidence that it does fail as a case study). The general point is about extreme poverty, and there is solid evidence that, in at least some cases of suffering caused by extreme poverty, at a small cost to ourselves we affluent people are able to prevent that suffering, and that if we do not take steps to prevent that suffering then it will in fact occur.

Jen
Reply to  JTD
1 year ago

We might be talking past each other, but I’m not yet convinced that we are. Assuming we’re not:

Enrico Matassa’s objection to Singer had multiple parts. The part relevant to our disagreement concerns the “agency objection” to Singer’s argument.

The agency objection is this: Singer’s argument relies on a general principle (that we have an obligation to rescue if it can be done at little cost) which Singer supports with the intuition that we have an obligation to rescue a drowning child if it can be done at little cost. But since the force of the intuition is in large part due to our low expectations of the agency children have in dire situations, and many rescues that can be carried out at little cost are rescues of people we expect to have somewhat greater agency, the general principle is not well-supported by the intuition. For this reason, Singer’s argument is not highly persuasive.

While I believe the objection has merit, you believe it misses the point because the obligation in question is to help people who will otherwise suffer serious harms. So you seem to disagree that the force of the intuition is due in large part to low expectations of agency, and instead believe that its force is due in large part to the importance of minimizing harm/suffering. (Obviously, I agree that other things being equal, the net harm in the outcome in which you rescue the child and ruin your shoes is less than that in the outcome in which you preserve your shoes and fail to rescue the child.)

Which understanding of the force of the intuition is better? For Singer’s purposes, if he wishes to do more than preach to the choir, he would do better if his argument were not to depend on utilitarian intuition. For anyone hoping to make a case for utilitarianism on the basis of examples such as that of the drowning child, they’d do well to avoid the quasi-circular reasoning that utilitarianism is true because minimizing harm/suffering is important.

SCH
Reply to  Enrico Matassa
1 year ago

This strikes me as a really nasty thing to say about Singer & the people who found his 1972 article compelling. There’s no immediate or obvious contradiction between his wanting affluent and informed westerners to make sacrifices to their standards of living in order to help alleviate the consequences of some crisis, and his being an affluent informed westerner. Even if the crisis at hand can be traced back to the wrongdoings of other affluent westerners, there’s no tension or betrayal of principles. This is not to mention that Singer can hardly be held responsible for the actions of the British empire abroad, given that he comes from a family displaced to Australia by the (partially anti-semitically motivated) Anschluss. I might add that the taking in of displaced Jewish refugees fleeing Hitler’s rise to power was good, whether or not it can be described as the exercise of a “white savior complex”.

Isn’t the appropriate racial point to make here, anyways, that predominantly white western societies react far more swiftly and decisively to crises involving white people than to, say, Bengadeshis? You can read Singer as saying we should treat disasters over there, where the people don’t look like us or talk like us, just as seriously as we treat disasters here. The proposal is color-blind, and you seem to want to re-racialize it: White westerners should actually second-guess ourselves about whether our moral attitude is appropriate in cases where it’s being directed towards a desire to help people of other races. That’s strange to me.

Enrico Matassa
Reply to  SCH
1 year ago

And I find it incredibly revealing that you can’t even be bothered to get the name of the people in question right. Bengadeshi? With that one you give some pretty nice evidence for my charge that utilitarians don’t respect the people they want to impose their schemes on as human beings with complex histories, identities, and agency but instead just see them as generic victims they can help with the power they unfortunately often possess and the superior knowledge they fancy themselves as possessing.

JTD
Reply to  Enrico Matassa
1 year ago

Well spotted Enrico! There is indeed an especially bad smell about this. Indubitably:

  1. The best explanation of SCH omitting the “l” from “Bangladeshi” is that they harbour contempt for people from Bangladesh.
  2. Although Singer’s argument does not contain any utilitarian premises, and has been endorsed by non-utilitarians, SCH’s sympathy with the argument clearly outs them as a utilitarian.
  3. This single instance of a utilitarian who has contempt for people from Bangladesh is solid evidence that utilitarians in general “do not respect the people they want to impose their schemes on”.

Chris
Reply to  JTD
1 year ago

You’ve missed Enrico’s point – which involved confusing Bangladeshi with Bengali. So well spotted, JTD!

JTD
Reply to  Chris
1 year ago

Bangladeshis are citizens of Bangladesh. Bengalis are an Indo-Aryan ethnolinguistic group from the Bengal region of South Asia, which covers territory in both India and Bangladesh. Although the vast majority of Bangladeshis are Bengalis, a small number belong to other ethnic groups. So, it appears that the people who suffered from the 1974 Bangladesh famine are best described as “Bangladeshis” and not “Bengalis”.

SCH
Reply to  Chris
1 year ago

And you’ve activated my trap card. I carefully googled “what are people from bengali called” before writing my response, and then intentionally misspelled the term so that people would call me out on the typo and try and correct me. People seeing your foolish error will be forced to admit that utilitarianism is the objectively correct moral theory, which was my ulterior motive all along in defending Peter Singer from nasty character attacks.

David Wallace
Reply to  Enrico Matassa
1 year ago

It is well known that typos on internet comment threads are a window onto the soul.

Jen
Reply to  David Wallace
1 year ago

A window *onto* the soul? This reveals that you foolishly believe souls are physical objects onto which things can be placed. 😀

Eric Steinhart
1 year ago

Here’s some documentation (which has been cited but not referenced in some of these discussions):

The Sequoia Capital article on SBF with discussion of MacAskill:
https://web.archive.org/web/20221109025610/https:/www.sequoiacap.com/article/sam-bankman-fried-spotlight/

The Forbes article on the FTX Foundation and Effective Ventures:
https://www.forbes.com/sites/johnhyatt/2022/11/17/disgraced-crypto-trader-sam-bankman-fried-was-a-big-backer-of-effective-altruism-now-that-movement-has-a-big-black-eye/?sh=bae6d264ce78

Patrick Lin
1 year ago

Here are some thoughts by Émile P. Torres, a philosopher who is perhaps the highest-profile critic of EA and longtermism:

https://www.salon.com/2022/11/20/what-the-sam-bankman-fried-debacle-can-teach-us-about-longtermism


Animal Symbolicum
1 year ago

From Émile P. Torres’s article: “In the longtermist view, the more ‘happy’ people who exist in the future, the greater the amount of ‘value,’ and the more value, the better the universe will become.”

Maybe this is a moribund topic — no commenter here has alighted upon it — but the question I have is: is the conception of things according to which a greater number of happy people, a greater amount of value, and a better universe are all connected the right one?

From what I can tell, Taurek dug into this conception of things and raised a fairly serious objection to it. (Maybe Nozick too? This isn’t my area of expertise, so forgive my ignorance here.)

The objection — again, from what I can tell — is that it doesn’t so much as make sense as a conception. This is because my experience of “happiness” and your experience of “happiness” are not the sort of things that can be added together to form a third experience of “greater happiness.” Thus there cannot be a third and greater “value” than that inherent in my “happiness” and that inherent in yours. And so there cannot be a universe with “more” value in it — at least, not according to the longtermist conception of happiness and value.

Thoughts?

Jen
Reply to  Animal Symbolicum
1 year ago

It appears that the objection doesn’t make sense. If for any happy experience, there is value that comes with it, then the value that comes with a collection of happy experiences is greater than the value that comes with any individual experience in the collection. Accordingly, the collection formed by the happy experiences of both person A and person B comes with a greater value than that which comes with the collection formed by the happy experiences of person A alone (assuming person B had at least one happy experience). Similarly, a universe with many happy people–with all their collections of happy experiences–has more value in it than a universe that has all the same happy people except one.

Because to “add” one happy experience to another is nothing more than to form a collection of those two, the objection makes sense only if collections of happy experiences cannot be formed. But since any two things can form a collection…
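
For what it’s worth, the additivity step can be put in a line of notation (a minimal formalization; the symbols are mine, not anything from the thread): let S be a collection of happy experiences and v(e) > 0 the value that comes with each experience e. Then

V(S) = \sum_{e \in S} v(e),

so whenever S is a proper subcollection of T, V(T) = V(S) + \sum_{e \in T \setminus S} v(e) > V(S): adding person B’s happy experiences to person A’s strictly increases the total.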

Animal Symbolicum
Reply to  Jen
1 year ago

It appears your conception of value is different from the conception of value Taurek assumes. Doesn’t surprise me. So many disagreements in moral theory derive from disagreements about the nature of value.

Speaking from a Taurekian point of view — again, this is not my area, so after this, I’ll just shut my dilettante mouth — it looks like knowing there is value in the world means nothing compared to experiencing the value. So increasing value in the world means nothing if no one experiences that increase in value. Now, what must value be in order to be *experienced*? I don’t know. But it seems different from the conception of value you suggest.

Jen
Reply to  Animal Symbolicum
1 year ago

From the utilitarian point of view, what matters is that your action (say, rescuing a drowning child) brings about an outcome which has greater net value than the outcome that would have been brought about had you taken any other course of action available to you at that time. If A1 (say, rescuing) and A2 (failing to rescue/preserving the quality of your shoes) are the only actions available to you now, and the outcome that A1 would bring about has more happy people than the outcome that A2 would bring about, then (knowing nothing more than this) to take A1 is to “increase value in the universe.” Of course, no one experiences the increase, but that’s because no one could experience it: it is a matter of the difference in value between the outcome of A1 and the outcome that never came to be.

The disagreement between Taurekians and utilitarians appears to be due to a manner of speaking, not to a difference between conceptions of value. To speak of an increase in value is often nothing more than to speak of a difference in value between an outcome that comes to be and outcomes that don’t come to be.
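
In symbols (again, a minimal sketch in my own notation): if O(A) is the outcome that action A would bring about, then the “increase in value” from taking A1 rather than A2 is just the difference

\Delta V = V(O(A_1)) - V(O(A_2)),

a comparison between one outcome that comes to be and one that never does, which is why no one could experience the increase itself.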

Animal Symbolicum
Reply to  Jen
1 year ago

I know I said I’d shut up, but I’m obviously not getting something, so I suppose I’ll ask you, someone who seems to be better versed than I:

What’s the point of increasing value in the universe?

Jen
Reply to  Animal Symbolicum
1 year ago

From the utilitarian perspective, the point is to make the universe better than it otherwise would be: a universe with more value in it is better than one with less, and to bring about a better universe is to do more to promote the good.