Cryptocurrency Chaos Affects Academic Grants (updated)


The team at the FTX Future Fund, a philanthropy aimed at funding projects “to improve humanity’s long-term prospects,” resigned yesterday. The team, which included philosophers William MacAskill and Nick Beckstead, stepped down following the sudden collapse earlier this week of FTX, the large, influential, and previously well-regarded cryptocurrency exchange whose proceeds bankrolled the fund.

Why FTX collapsed is a complicated story that is still being told (I found this account helpful). But among the fallout from its collapse and the financial losses of its CEO, Sam Bankman-Fried (who reportedly lost around 95% of his wealth), is the threat to projects supported by the Future Fund. These include some philosophy projects, mainly on topics related to artificial intelligence, effective altruism, and humanity’s long-term prospects.

In a letter announcing their resignation, MacAskill, Beckstead, Leopold Aschenbrenner, Avital Balwit, and Ketan Ramakrishnan write that they “are now unable to perform our work or process grants… We are devastated to say that it looks like there are many committed grants that the Future Fund will be unable to honor… We are no longer employed by the Future Fund, but in our personal capacities, we are exploring ways to help with this awful situation.”

Here’s the full letter:

FTX has now filed for bankruptcy.

UPDATE (11/11/22): William MacAskill reflects on these recent events here.

UPDATE (11/13/22): Eric Schliesser (Amsterdam) comments. An excerpt:

Within utilitarianism there is a curious, organic forgetting built into the way it’s practiced, especially by the leading lights who shape it as an intellectual movement within philosophy (and economics, of course), and as a social movement. And this is remarkable because utilitarianism for all its nobility and good effects has been involved in significant moral and political disasters involving not just, say, coercive negative eugenics and—while Bentham rejected this—imperialism (based on civilizational superiority commitments in Mill and others), but a whole range of bread and butter social debacles that are the effect of once popular economics or well-meaning government policy gone awry. But in so far as autopsies are done by insiders they never question that it is something about the character of utilitarian thought, when applied outside the study, that may be the cause of the trouble (it’s always misguided practitioners, the circulation of false beliefs, the wrong sort of utilitarianism, etc.). 

In my view there is no serious study within the utilitarian mainstream that takes the inductive risk of itself seriously and—and this is the key part—has figured out how to make it endogenous to the practice. This is actually peculiar because tracking inductive risk just is tracking consequences and (if you wish) utils. 

UPDATE (11/14/22): Tyler Cowen (George Mason) notes: “Hardly anyone associated with Future Fund saw the existential risk to…Future Fund, even though they were as close to it as one could possibly be. I am thus skeptical about their ability to predict existential risk more generally, and for systems that are far more complex and also far more distant.” More here.

UPDATE (11/16/22): Alex Guerrero (Rutgers) reflects on FTX, effective altruism, and Peter Unger’s Living High and Letting Die here.


Disclosure: The Center for AI Safety, a project funded by FTX Future Fund, advertised earlier this year on Daily Nous.

Related: “The Philosopher Advising Billionaires on Philanthropy” and “Change Their Minds, Win Money”

  

19 Comments
Steve
1 year ago

How could longtermists be expected to have predicted the collapse of a Ponzi scheme?

Extremely vindicated
Reply to  Steve
1 year ago

Lmao

JDRox
Reply to  Steve
1 year ago

Isn’t it sorta plausible that they did predict it, and did it anyways? If we interpret them charitably, the Ponzi scheme funneled a lot of money into charity. If we interpret them uncharitably, it funneled a lot of money to them. Either way it’s not obvious to me that, to someone with their values, the negatives of the collapse outweigh the positives of everything before it.

Extremely vindicated
Reply to  JDRox
1 year ago

It would be darkly hilarious if every crypto rug pull was run by effective altruists, robbing from the precariously middle class to buy mosquito netting.

Billy
1 year ago

I don’t believe in utilitarianism, effective altruism, or cryptocurrency. But it’s still a sad day for philosophy. That I do believe.

FTY
1 year ago

People might be interested in this (now-deleted) portrait of the founder that mentions how instrumental the effective altruism network was in getting him the initial capital: https://webcache.googleusercontent.com/search?q=cache:pizI33lYOGAJ:https://www.sequoiacap.com/article/sam-bankman-fried-spotlight/&cd=1&hl=en&ct=clnk&gl=us

Shocked
Reply to  FTY
1 year ago

Yeah, this aged rather poorly, didn’t it? https://dailynous.com/2022/10/08/the-philosopher-advising-billionaires-on-philanthropy/

Mr. MacAskill and Mr. Bankman-Fried’s relationship is an important piece in understanding the community’s evolution in recent years. The two men first met in 2012, when Mr. Bankman-Fried was a student at M.I.T. with an interest in utilitarian philosophy. Over lunch, Mr. Bankman-Fried said that he was interested in working on issues related to animal welfare. Mr. MacAskill suggested that he might do more good by entering a high-earning field and donating money to the cause than by working for it directly.

Mr. Bankman-Fried contacted the Humane League and other charities, asking if they would prefer his time or donations based on his expected earnings if he went to work in tech or finance. They opted for the money, and he embarked on a remunerative career, eventually founding the cryptocurrency exchange FTX in 2019.

The experiment with the young man’s career was, by any measure, a success. Bloomberg recently estimated that Mr. Bankman-Fried was worth $10.5 billion, even after the recent crash in crypto prices. That puts Mr. Bankman-Fried in the unusual position of having earned his enormous fortune on behalf of the effective altruism cause, rather than making the money and then searching for a sense of purpose in donating it. Mr. Bankman-Fried said he expected to give away the bulk of his fortune in the next 10 to 20 years.

Sinope Lighting
Reply to  Shocked
1 year ago

It did age poorly. Like most of the coverage, it also obscures the material/structural side of things as if EA’s only contributions to SBF’s trajectory were 1. one career advice session and 2. some philosophical ideas he misinterpreted (and 3. boosting his reputation after he got rich).

According to the Sequoia piece, after SBF left Jane Street, MacAskill offered him a job in Berkeley “as director of business development at the Centre for Effective Altruism.” This is where he made his first money, speculating in crypto by evading national regulations in South Korea and Japan:

The first job was just getting the money into the system. The operational challenges were huge. Not just anyone can walk into a foreign bank and start wiring money out of the country every day. There are know-your-customer rules, caps on withdrawals, citizenship requirements. Even worse, to any normal bank, the constant zeroing out, then maxing out, of a cash account—with the money coming and going overseas, to and from fly-by-night Bitcoin exchanges—raised every red flag in the book. (…)

Fortunately, SBF had a secret weapon: the EA community. There’s a loose worldwide network of like-minded people who do each other favors and sleep on each other’s couches simply because they all belong to the same tribe. Perhaps the most important of them was a Japanese grad student, who volunteered to do the legwork in Japan. As a Japanese citizen, he was able to open an account with the one (obscure, rural) Japanese bank that was willing, for a fee, to process the transactions that SBF—newly incorporated as Alameda Research—wanted to make. The spread between Bitcoin in Japan and Bitcoin in the U.S. was “only” 10 percent—but it was a trade Alameda found it could make every day. With SBF’s initial $50,000 compounding at 10 percent each day, the next step was to increase the amount of capital. At the time, the total daily volume of crypto trading was on the order of a billion dollars. Figuring he wanted to capture 5 percent of that, SBF went looking for a $50 million loan. Again, he reached out to the EA community. Jaan Tallinn, the cofounder of Skype, put up a good chunk of that initial $50 million. (…)

“This thing couldn’t have taken off without EA,” reminisces Singh, running his hand through a shock of thick black hair. (…) “All the employees, all the funding—everything was EA to start with.”

When people say SBF was EA’s first “homegrown billionaire,” this is what they should have in mind, not a mythical lunch in Boston. And whatever reflection is due should go beyond whether EA attracts or licenses naive utilitarians.

L. A. Paul
Reply to  Shocked
1 year ago

In a paper written with Jeff Sebo, “Effective Altruism and Transformative Experience”, we argued: “Suppose that you are an effective altruist deciding what to do for a living, and that you have three main options to consider: You can (a) go to grad school (so that you can work in research and education), (b) go to law school (so that you can work in law and politics), or (c) work in finance (so that you can earn to give). Suppose also, since grad school and law school would be more continuous with your college experience than finance would be, you have a better sense of what your life would be like in the first two scenarios than in the third. In particular, the choice whether to work in finance strikes you as high risk/ high reward. If it works out, you could earn millions of dollars per year and then donate that money to effective causes. But you wonder if you can expect it to work out. Here you may ask: Would I fail at investment banking? Would I succeed but lose my commitment to effective altruism? Would I retain my commitment to effective altruism but start to think that I need to spend more money on myself than I currently think I do? If I did change my mind in one or more of these ways, would I be rationally updating in light of new information and arguments? Would I simply be rationalizing the kind of self-interested behavior that I would have, at that point, been socialized into? Or might I change in other ways that I cannot imaginatively anticipate, and which might raise other possibilities for ex ante/ex post conflict?” …

Enrico Matassa
1 year ago

If you can’t rely on hardcore utilitarians to keep their promises, then who can you rely on?
Sarcasm aside, contrary to what you say, the situation actually isn’t that complicated. Bankman-Fried took some insanely risky (I would say stupid) gambles with other people’s money without their knowledge or consent. Those gambles didn’t pan out. It’s financial fraud, plain and simple.

Extremely vindicated
1 year ago

Admittedly this is speculation, but I’d like to note that longtermists have gotten criticism for not being more alarmed by climate change as an existential risk, so I can’t help but wonder how much of that is due to their backing by cryptocurrency, which has an enormous carbon footprint.

I only point this out because the last year has seen publicity around MacAskill’s book sucking up an enormous amount of attention, even though his philanthropic organizations seem to have suffered serious elite capture by the world’s shittiest people.

Ineffective Egoist
1 year ago

While on the subject of the ethics of those involved in Effective Altruism:

“if you’re a reasonably attractive woman entering an EA community, you get a ton of sexual requests to join polycules, often from poly and partnered men. Some of these men control funding for projects and enjoy high status in EA communities and that means there are real downsides to refusing their sexual advances and pressure to say yes, especially if your career is in an EA cause area or is funded by them.”

https://forum.effectivealtruism.org/posts/NacFjEJGoFFWRqsc8/women-and-effective-altruism

Jen
Reply to  Ineffective Egoist
1 year ago

I wonder whether this involved unlawful behavior. If it did, JDRox’s comment above might lead us to an interesting take on it:
Isn’t it sorta plausible that they did predict [unlawful behavior], and did it anyways? If we interpret them charitably, the [unlawful] scheme funneled a lot of [pleasure/good] into [humanity]. If we interpret them uncharitably, it funneled a lot of [pleasure/good] to them. Either way it’s not obvious to me that, to someone with their values, the negatives of the [unlawful behavior] outweigh the positives of [its antecedents].

Scott
1 year ago

There needs to be a more serious reckoning here. SBF was not just a funder of EA. He was given moral cover by philosophy professors. His image as a do-gooder enabled the enormous harm he caused. If you’ve gone on the world’s most exhaustive book tour promoting your ideas, had your New Yorker profile and all that, you have a lot more to answer for than ‘I’ve got a lot to reflect on, but I swear we were into integrity all along.’

Patrick Lin
1 year ago

New interview with SBF, in which he admits the “ethics stuff” (e.g., caring about the greater good or long-term anything) was “mostly a front.”

https://futurism.com/the-byte/sam-bankman-fried-ethics-stuff-front

Full interview: https://www.vox.com/future-perfect/23462333/sam-bankman-fried-ftx-cryptocurrency-effective-altruism-crypto-bahamas-philanthropy

<Insert shocked-not-shocked face>

Enrico Matassa
Reply to  Patrick Lin
1 year ago

So the defense here, I take it, is that MacAskill, Beckstead, and the other effective altruists aren’t scoundrels, they’re just dupes who are really poor at predicting even the very near-term consequences of their actions in the real world? Forgive me for thinking that might in fact be far more damning than attacking them for having poor morals as measured by normal standards. We’ve always known that utilitarian morality departs wildly from ordinary morality on questions of honesty and manipulating others. On that charge the theory is counter-intuitive, and any intellectual movement built on utilitarianism will have counter-intuitive things to say on matters like this. But if one can’t actually predict what will happen even a few days out, then the theory becomes utterly useless. That the leaders of longtermist utilitarianism have no idea what the future looks like is fatal to the movement in a way that admitting their moral judgments might seem odd to those outside their church is not.

Jen
Reply to  Patrick Lin
1 year ago

His response was ambiguous. But given the rest of the exchange between SBF and Piper (the interviewer), the best explanation of his response seems not to be that, for him, the ethics stuff was mostly a front, but that he views others’ perceptions of a person’s goodness as a function of two things: (a) their perceptions of the person’s location on the continuum between “sketchy” and “clean,” and (b) their perceptions of the person’s location on the continuum between winner and loser. If this is correct, then we shouldn’t believe he admitted the ethics stuff was mostly a front.

In fact, something he said suggests that he believes doing unethical stuff will impede one in achieving one’s philanthropic objectives. He said something like “don’t do unethical stuff because if you’re running Philip Morris, no one will work with you on philanthropy.” It’s difficult to square this with the idea that for him, the ethics stuff was mostly a front.

On the other hand, whether he actually accepts EA or not, he’s probably a liar, and this makes it difficult to believe what he says.

Scott
1 year ago

Here is a super reasonable question to ask any academic who has any affiliation with SBF: did you receive direct or indirect financial support from him?