Mini-Heap


Recent links of interest to those interested in philosophy…

  1. Philosophy is taught in the final year of high school, and is “the dominant subject in literary majors and a fairly important minor [among] scientific majors” — Hady Ba (Cheikh Anta Diop University’s Teachers College) on philosophy in Senegal
  2. “If the academic humanities too often address only siloed experts, then pop philosophy too often addresses an audience of imagined idiots” — Becca Rothfeld on public philosophy
  3. “I would love to be talked out of it” — Scientific American covers a recent workshop on panpsychism
  4. New technology ethics podcast — from John Danaher (Galway) and Sven Nyholm (LMU), the series is based on their book “This is Technology Ethics,” which covers a wide range of fascinating philosophical questions
  5. “The desire to admire EA despite its flaws indulges a quixotic longing to admire an ineffective altruist” — an enviously well-written essay on the power and problems of effective altruism by Mary Townsend (St. John’s)
  6. “Contemporary scientists appear to be divided between those who think Neanderthal dignity calls for a recognition of their similarity to us, and those who think it calls for a recognition of their difference” — Nikhil Krishnan (Cambridge) on the science, ethics, and meaning of disputes over how we think of Neanderthals
  7. Last month, NDPR published an especially critical and widely circulated review by Louise Antony (UMass) of a new book in moral philosophy, and now the book’s authors, Victor Kumar (Boston) and Richmond Campbell (Dalhousie), have published a response

Discussion welcome.

Mini-Heap posts usually appear when 7 or so new items accumulate in the Heap of Links, a collection of items from around the web that may be of interest to philosophers.

The Heap of Links consists partly of suggestions from readers; if you find something online that you think would be of interest to the philosophical community, please send it in for consideration for the Heap. Thank you.

 

7 Comments

Richard Y Chappell
7 months ago

I found the thread of the Townsend piece frustratingly difficult to follow. It seems to use nice literary writing to hint at the idea that trying to save lives is a bad idea, without coming out with anything recognizable as an actual argument for this (extraordinary) conclusion.

The evidence she cites as “worst of all” for EA is stories of anti-malarial bednets occasionally misused for fishing. She doesn’t seem aware that GiveWell looked into this and concluded that it is a non-issue: https://blog.givewell.org/2015/02/05/putting-the-problem-of-bed-nets-used-for-fishing-in-perspective/

MrMr
Reply to  Richard Y Chappell
7 months ago

I strongly agree. Even if the use of bednets for fishing were a much more significant fiasco than it actually was, the fact that actions can have unintended consequences is a fact of life, not an argument against effective altruism. Presumably, thoughts about what would happen if she wrote her essay were part of Townsend’s motivation for writing it: e.g., the expectation that she would enjoy writing it and that it would promote ideas she thinks are worthwhile. But if it turned out that she unexpectedly did not enjoy writing it and that it met with little uptake, that need not indicate any foolishness on her part. Sometimes things don’t work out. What would be necessary here is some argument that EA *needs* perfect certainty about the future in order for its practical and normative recommendations to make sense, and that this differs from the fallible future-thinking required by ordinary prudence. But no such argument is forthcoming.

I also found the consistent hostility to the use of numbers unfortunate. It is possible to be misled by numbers, yes. But it is not possible to decide among a large number of highly complex and heterogeneous competing priorities without using numbers.

The piece notes that QALYs/DALYs originate in health economics and health-system priority-setting. It would benefit from considering why those fields require quantification, and why, despite their extremely well-known conceptual and methodological problems, QALYs and DALYs are still universally used. Answer: because there are over 85,000 ICD codes (i.e., medical diagnostic codes), which can be further discriminated by severity (not to mention social context!), and similarly many interventions, each with not only a range of costs and effects but also a range of evidence qualities supporting their use. And health systems must decide, using their limited budgets, what they will pay for. The idea that priority-setting at this scale of complexity could be done, without numbers, by some morally sensitive sage who eschews abstractions and grapples with REAL “muddy, hairy, smelly, messy people” instead of “figures in a ledger” is completely absurd. Any scheme designed in that way would almost certainly be dominated by a competing, quantitatively informed approach. And of course, what’s true of health priority-setting is just as true of philanthropic priority-setting more generally.
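To make the underlying arithmetic concrete, here is a minimal sketch of cost-per-QALY ranking under a budget constraint. The intervention names and every figure in it are invented for illustration, not drawn from any real health-system data; actual health-economic evaluation is far more involved (discounting, evidence-quality weighting, equity considerations), but even a toy version shows why the exercise cannot proceed without quantification.

```python
# Toy illustration of QALY-based priority-setting: rank interventions
# by cost per QALY and fund greedily until the budget runs out.
# All names and numbers below are invented for illustration.

interventions = [
    # (name, total cost in $, QALYs gained)
    ("bednet distribution",      100_000, 2_000),
    ("hypertension screening",   250_000, 1_500),
    ("new oncology drug",        900_000,   300),
    ("cataract surgery program", 150_000, 1_200),
]

budget = 500_000

funded, spent = [], 0
for name, cost, qalys in sorted(interventions, key=lambda x: x[1] / x[2]):
    if spent + cost <= budget:
        funded.append(name)
        spent += cost

print(f"Funded under ${budget:,}: {funded} (spent ${spent:,})")
# Funds bednets ($50/QALY), cataract surgery ($125/QALY), and
# hypertension screening (~$167/QALY); the $3,000/QALY drug misses out.
```

Even this four-item example requires dividing costs by effects; with tens of thousands of diagnosis-intervention pairs, there is no numberless substitute.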

It is also worth noting, for those who (wrongly) do not care much about efficiency, that numbers aren’t just indispensable to efficient allocation. They are also necessary for just allocation. It is quantification that enables us to characterize inequalities such as the historically low funding for sickle cell research relative to its disease burden, compared with diseases that are not concentrated in Black populations. Without being able to abstract from specific individuals and their symptoms to a measure of generic health burden, that comparison would not be possible.

Richard Y Chappell
Reply to  Richard Y Chappell
7 months ago

For anyone interested, philosophy student Matthew Adelstein offers a thorough response to Townsend’s article, here:
https://benthams.substack.com/p/the-bulwarks-article-on-effective

As he explains: “[T]here is not a single argument made in the entire article that should cause anyone to be less sympathetic to effective altruism at all! Every claim in the article is either true but irrelevant to the content of effective altruism or false. The article is crafted in many ways to mislead, confuse and induce negative affect in the reader but is light on anything of substance.”

Derek Bowman
Reply to  Richard Y Chappell
6 months ago

For what it’s worth, though I would quibble with some details, I found Prof. Townsend’s piece somewhat insightful. I certainly think trying to saddle her with the thesis that it’s bad to try to save lives is more uncharitable than any of her depictions of effective altruism. The difficulty I have with the Adelstein response is that it seems to be defending a kind of bare-logical, in-principle reading of effective altruism, whereas Townsend’s piece struck me as more focused on the attractions of the movement as a social phenomenon.

I’m surprised that her brief dig at malaria nets loomed larger for you than the association with the Sam Bankman-Fried grift and the point about longtermism allowing the substitution of concerns with far-future AI conjectures to justify ignoring the concerns of people here and now. Neither of these is a refutation of effective altruism as a set of ethical or philosophical theses, but both are clearly part of how the movement has manifested itself in the world.

Richard Y Chappell
Reply to  Derek Bowman
6 months ago

> “I certainly think trying to saddle her with the thesis that it’s bad to try to save lives is more uncharitable…”

But the main thing EAs do, in real life, is (i) donate money to save lives, and (ii) work on related causes.

If you write a piece that argues that people shouldn’t be EAs, or that we should think poorly of the movement, etc., then the real-life effect of that is to discourage saving lives. That’s what the “social phenomenon” here is.

> “I’m surprised that her brief dig at malaria nets loomed larger for you than the association with the Sam Bankman-Fried grift and the point about longtermism allowing the substitution of concerns with far-future AI conjectures to justify ignoring the concerns of people here and now.”

I just flagged a factual inaccuracy in what she claimed to be “worst of all”, since that’s pretty low-hanging fruit for establishing a quality control problem.

The anti-longtermism stuff just struck me as empty rhetoric, not anything substantial enough to be worth engaging with. Association with SBF is obviously unfortunate in retrospect, but I don’t believe in guilt by association.

Conjectures about AI tend to concern its development over the next couple of decades — hardly the “far future”.

But the point that protecting future generations (against near-term extinction risks) can sometimes take priority over current people’s interests is just obviously correct, so I don’t see any serious objection here. See Parfit’s depletion vs conservation cases.

Of course, if you’re absolutely certain that AI poses no such risk, then you’ll disagree about the cause prioritization on empirical grounds (in the same way that those who think animals aren’t conscious will disagree with the animal welfare wing of EA). But I can’t imagine how you could be justified in such certainty. See my post on “x-risk agnosticism”:
https://rychappell.substack.com/p/x-risk-agnosticism

Derek Bowman
Reply to  Richard Y Chappell
6 months ago

Thanks for the reply, Richard.

The connection to SBF is more than just ‘unfortunate,’ and it’s not about guilt by association. It’s a cautionary tale about one of the failure modes of effective altruism as a way of approaching one’s ethical obligations. Vulnerability to scams, rationalized self-dealing, and other errors is, of course, a feature of any moral theory and any social movement. But I think the particular mode of those failures can offer insight into the limits, potential blindspots, and predictable errors that result. (Compare, e.g., the susceptibility of evangelical Christians to various direct-mail, televangelist, and in-person scams that trade on their religious convictions and identity.)

The desire to do good and to be a good person, if taken seriously, can demand serious self-sacrifice and persistent, high-stakes uncertainty. Effective altruism offers a tempting escape from this: you can continue to pursue conventional success and just make sure you’re giving a chunk of your money in the right way. Singer even tells us that you’ll get more out of the experience of giving than what you’re giving up, so no real sacrifice is needed. Longtermism adds to this by allowing the moral imperative of whatever speculative future disaster you’re averting to silence any concerns you might otherwise have about the system that allows you to acquire your wealth. If you don’t understand how this can be a tempting line of thinking, then I’m not surprised you missed the point of Townsend’s article.

As for AI: there are more than enough concerns about the actual uses of current machine-learning systems, which exacerbate existing problems facing people right now, without our needing to concern ourselves with speculative future risks based on science-fiction scenarios.

Rather than a logical defense of how ‘no true effective altruist’ would commit such errors, I’d be much more impressed with an account of how effective altruists avoid, and help others to avoid, these mistakes and temptations, and of how they’ve effectively reckoned with the movement’s misuse by the FTX scammers.

---

In any case, I’m not sure how many people are still reading this. Feel free to email (found on my website) if you’d like to continue the discussion, though I’m not sure I have much more to offer than what I’ve said here.
