The team at the FTX Future Fund, a philanthropy aimed at funding projects “to improve humanity’s long-term prospects,” resigned yesterday. The team, which included philosophers William MacAskill and Nick Beckstead, stepped down following the sudden collapse earlier this week of FTX, the large, influential, and previously well-regarded cryptocurrency exchange whose proceeds bankrolled the fund.
Why FTX collapsed is a complicated story that is still being told (I found this account helpful). But the fallout extends beyond the financial loss to its CEO, Sam Bankman-Fried (who reportedly lost around 95% of his wealth), to the projects supported by the Future Fund. These include some philosophy projects, mainly on topics related to artificial intelligence, effective altruism, and humanity’s long-term prospects.
In a letter announcing their resignation, MacAskill, Beckstead, Leopold Aschenbrenner, Avital Balwit, and Ketan Ramakrishnan write that they “are now unable to perform our work or process grants… We are devastated to say that it looks like there are many committed grants that the Future Fund will be unable to honor… We are no longer employed by the Future Fund, but in our personal capacities, we are exploring ways to help with this awful situation.”
Here’s the full letter:
FTX has now filed for bankruptcy.
UPDATE (11/11/22): William MacAskill reflects on these recent events here.
UPDATE (11/13/22): Eric Schliesser (Amsterdam) comments. An excerpt:
Within utilitarianism there is a curious, organic forgetting built into the way it’s practiced, especially by the leading lights who shape it as an intellectual movement within philosophy (and economics, of course), and as a social movement. And this is remarkable because utilitarianism for all its nobility and good effects has been involved in significant moral and political disasters involving not just, say, coercive negative eugenics and—while Bentham rejected this—imperialism (based on civilizational superiority commitments in Mill and others), but a whole range of bread and butter social debacles that are the effect of once popular economics or well-meaning government policy gone awry. But in so far as autopsies are done by insiders they never question that it is something about the character of utilitarian thought, when applied outside the study, that may be the cause of the trouble (it’s always misguided practitioners, the circulation of false beliefs, the wrong sort of utilitarianism, etc.).
In my view there is no serious study within the utilitarian mainstream that takes the inductive risk of itself seriously and—and this is the key part—has figured out how to make it endogenous to the practice. This is actually peculiar because tracking inductive risk just is tracking consequences and (if you wish) utils.
UPDATE (11/14/22): Tyler Cowen (George Mason) notes: “Hardly anyone associated with Future Fund saw the existential risk to…Future Fund, even though they were as close to it as one could possibly be. I am thus skeptical about their ability to predict existential risk more generally, and for systems that are far more complex and also far more distant.” More here.
UPDATE (11/16/22): Alex Guerrero (Rutgers) reflects on FTX, effective altruism, and Peter Unger’s Living High and Letting Die here.
Disclosure: The Center for AI Safety, a project funded by FTX Future Fund, advertised earlier this year on Daily Nous.