APA Creates New Prizes for Philosophical Research on AI


The American Philosophical Association (APA) has announced the creation of new prizes for philosophical work on artificial intelligence.

Specifically, the prizes are for philosophical or philosophically informed interdisciplinary work on the AI2050 Hard Problems. AI2050 is a project from Schmidt Sciences, a science- and technology-oriented philanthropy, focused on how to realize the potential of AI.

The project puts it this way:

It’s 2050. AI has turned out to be hugely beneficial to society. What happened? What are the most important problems we solved and the opportunities and possibilities we realized to ensure this outcome?

The working list of AI2050 Hard Problems is organized around four goals:

  • Develop more capable and more general AI, that is useful, safe and earns public trust

  • Leverage AI to address humanity’s greatest challenges and deliver positive benefits for all

  • Develop, deploy, use and compete for AI responsibly

  • Co-evolve societal systems and what it means to be human in the age of AI

Each of these goals is broken down into sets of problems. For example, the first, regarding the development of more capable and more general AI, is elaborated on as follows:

1. Solved the science and technological limitations and hard problems in current AI that are critical to enabling further breakthrough progress in AI leading to more powerful and useful AI capable of realizing the beneficial and exciting possibilities, including artificial general intelligence (AGI).

Examples include generalizability, causal reasoning, higher/meta-level cognition, multi-agent systems, agent cognition, the ability to generate new knowledge, novel scientific conjectures/theories, novel beneficial capabilities, and novel compute architectures, breakthroughs in AI’s use of resources.

2. Solved AI’s continually evolving safety and security, robustness, performance, output challenges and other shortcomings that may cause harm or erode public trust of AI systems, especially in safety-critical applications and uses where societal stakes and potential for societal harm are high.

Examples include bias and fairness, toxicity of outputs, factuality/accuracy, information hazards including misinformation, reliability, security, privacy and data integrity, misapplication, intelligibility, and explainability, social and psychological harms.

3. Solved challenges of safety and control, human alignment and compatibility with increasingly powerful and capable AI and eventually AGI.

Examples include risks associated with tool-use/connections to physical systems, multi-agent systems, goal misspecification/drift/corruption, risks of self-improving/self-rewriting systems, gain of function risks and catastrophic risks, alignment, provably beneficial systems, human-machine cooperation, challenges of normativity and plasticity.

You can see the full list of working problems, about which the project “makes no claim to being comprehensive, final, or fixed,” here.

There are two prizes, one for an early-career researcher and one for an established researcher. Each prize winner will receive $10,000. If there are co-winners, they will split the prize amount equally. The prizes, funded by Schmidt Sciences, will be awarded at the APA divisional meetings.

The APA now invites submissions for the inaugural APA AI2050 Prizes. The submission deadline is June 23, 2024.

For more information on these prizes, including details on criteria and eligibility, visit the APA AI2050 Prizes page.

(via Erin Shepherd)

2 Comments
Anco
1 month ago

As someone who studied AI and has been working on it from a philosophical angle for the last 14 years, I’m a bit overwhelmed by all the recent buzz around the topic(s), including now from the larger philosophical community. On the one hand, I’m quite happy more people seem to share my interest in issues I get excited about. On the other, there is such a hype train going on that people continuously mistake the bunch of _somewhat_ smart algorithms that we have (which is really all most of AI currently is anyway) for solutions to the world’s problems. To invoke the late Dennett: maybe we need to break the AI spell. You know, just a bit.

John Alspector Finney
27 days ago

I don’t understand what all the hoopla is about AI – it’s just a tool, no different from a saddle. Aristotle was right to focus on the saddle maker, not the saddle itself.