Change Their Minds, Win Money


The Future Fund, a philanthropic collective funded primarily by the creator of a cryptocurrency exchange and aimed at supporting “ambitious projects to improve humanity’s long-term prospects,” has launched a contest offering substantial prizes for arguments that change their minds about the development and effects of artificial intelligence.

The use of prizes to incentivize philosophical work on specific topics is not new: such prizes are regularly offered by philosophical organizations, academic journals (for example), and foundations (for example). Prizes for philosophical work aimed at changing minds and behavior have been offered before, too (for example).

The Future Fund’s “AI Worldview” contest is a bit different, though. One difference is that its prizes are bigger: up to $1,500,000.

[image created with DALL-E]

Another difference is the condition for winning several of the prizes: moving the judges’ credences regarding a few predictions about artificial general intelligence (AGI)—and the more you move them, the bigger the prize you could win.

For example, the judges currently put their confidence in the claim that AGI will be developed by January 1, 2043 at 20%. But if you can devise an argument that convinces them that their credence in this should be between 3% and 10%, or between 45% and 75%, then they will award you $500,000. If you convince them it should be below 3% or above 75%, they will award you $1,500,000. A similar prize structure is offered in regard to other propositions, such as, “Conditional on AGI being developed by 2070, humanity will go extinct or drastically curtail its future potential due to loss of control of AGI,” their confidence in which they currently place at 15%. There are other prizes, too, which you can read about on the prize page.
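To make the tier structure concrete, here is a minimal sketch, in Python, of the payout rule as described above, for the AGI-by-2043 question only. The function name, the treatment of exact boundary values, and the assumption that prizes turn solely on the judges’ updated credence are illustrative; the official prize page is the authoritative source.

    # Illustrative only: the tiers below follow this post's description of the
    # AGI-by-2043 question; other questions have their own thresholds, and how
    # exact boundary values are scored here is an assumption.
    def prize_for_new_credence(credence: float) -> int:
        """Prize (in USD) for moving the judges' credence that AGI is
        developed by January 1, 2043, from their stated 20%."""
        if credence < 0.03 or credence > 0.75:
            return 1_500_000  # large shift in either direction
        if 0.03 <= credence <= 0.10 or 0.45 <= credence <= 0.75:
            return 500_000    # moderate shift in either direction
        return 0              # credence stays near the current 20%

    # For instance, an argument that moved the judges to 7% would earn $500,000.
    print(prize_for_new_credence(0.07))  # 500000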

Why are they running this contest? They write:

We hope to expose our assumptions about the future of AI to intense external scrutiny and improve them. We think artificial intelligence (AI) is the development most likely to dramatically alter the trajectory of humanity this century, and it is consequently one of our top funding priorities. Yet our philanthropic interest in AI is fundamentally dependent on a number of very difficult judgment calls, which we think have been inadequately scrutinized by others. 

As a result, we think it’s really possible that:

    • all of this AI stuff is a misguided sideshow,
    • we should be even more focused on AI, or
    • a bunch of this AI stuff is basically right, but we should be focusing on entirely different aspects of the problem.

If any of those three options is right—and we strongly suspect at least one of them is—we want to learn about it as quickly as possible because it would change how we allocate hundreds of millions of dollars (or more) and help us better serve our mission of improving humanity’s longterm prospects…

AI is already posing serious challenges: transparency, interpretability, algorithmic bias, and robustness, to name just a few. Before too long, advanced AI could automate the process of scientific and technological discovery, leading to economic growth rates well over 10% per year. As a result, our world could soon look radically different. With the help of advanced AI, we could make enormous progress toward ending global poverty, animal suffering, early death and debilitating disease. But two formidable new problems for humanity could also arise:

      1. Loss of control to AI systems
        Advanced AI systems might acquire undesirable objectives and pursue power in unintended ways, causing humans to lose all or most of their influence over the future.
      2. Concentration of power
        Actors with an edge in advanced AI technology could acquire massive power and influence; if they misuse this technology, they could inflict lasting damage on humanity’s long-term future…

We really want to get closer to the truth on these issues quickly. Better answers to these questions could prevent us from wasting hundreds of millions of dollars (or more) and years of effort on our part. We could start with smaller prizes, but we’re interested in running bold and decisive tests of prizes as a philanthropic mechanism. A further consideration is that sometimes people argue that all of this futurist speculation about AI is really dumb, and that its errors could be readily explained by experts who can’t be bothered to seriously engage with these questions. These prizes will hopefully test whether this theory is true.

I was told via email, “We think philosophers would be particularly well-equipped to provide in-depth analyses and critiques about these assumptions concerning the future of AI, so we wanted to disseminate this opportunity to the broader philosophy community.”

The model of this contest could be applied to other topics, of course. Which would you suggest?

Comments
Daniel Weltman
1 year ago

Please nobody steal my strategy (I am going to promise the judges that I will split the prize money with them if they adjust their credences).

Kenny Easwaran
Reply to  Daniel Weltman
1 year ago

I think you also have to convince them that pragmatic reasons are good reasons to adjust credences! (Maybe collaborate with Susanna Rinard? https://philpapers.org/rec/RINNEF )

Daniel Weltman
Reply to  Kenny Easwaran
1 year ago

In addition to the various arguments others have adduced for that conclusion, I have a bootstrapping argument for that conclusion:

  1. I will hit the judges with bootstraps if they do not accept that pragmatic reasons are good reasons to adjust credences.
  2. They do not want to be hit by bootstraps.
  3. Pragmatic reasons are good reasons to adjust credences (from 1 and 2 with some additional steps skipped – to be filled in as an exercise by the reader).
EFB
1 year ago

I’m not exactly sure what it is, but this doesn’t sit right with me. It gives the impression of university students (I know they’re not) playing a game except with lots and lots of money.

Nate Sheff
Reply to  EFB
1 year ago

I’m sorry, but are you suggesting that a venture funded by crypto billionaires might be ethically compromised?

Sam Duncan
1 year ago

There are a couple of serious issues here that I’m kind of surprised no one’s commented on. Even critical pieces on AI and its supposed threat shape the academic and political discussion. This becomes the thing people are talking about and the idea you should devote your efforts to dealing with. I take it that most of the public (quite rightly) thinks that the threat of evil AI is a bit silly. It’s a fun sci fi plot, but things like climate change or the disaster that is the American health care system are the problems that clever people ought to spend their time trying to fix. But if suddenly everyone is talking about the threat of the killer roh-bits– even if many of them are saying to calm down about the roh-bits– then people will start to take that seriously, and it will draw the attention of both the public and clever folks away from what I take to be real problems.

The deeper issue here is that this is once again a case of a very rich person, Samuel Bankman-Fried in this case, trying to capture public discussion and mold it to fit his own values and personal obsessions. But there is no reason to think that Bankman-Fried is any more knowledgeable about the issues our society faces and how to deal with them than the Kochs are. Mind you, I don’t think that Bankman-Fried’s political goals are quite as destructive as the Kochs’ (a darn low bar that is, though), but the fundamental issue of having people whose only qualification is being very rich take charge of our discourse in underhanded ways is very much present here. I’m more than a little disappointed that the same people who make a stink every time Koch funds something to make sure libertarian ideas take up a huge part of the academic discussion aren’t crying foul here.

Heather Douglas
1 year ago

I argue here that the Future Fund is asking the wrong questions. Of course, that is a typical thing for a philosopher to do!