“Hey Sophi”, or How Much Philosophy Will Computers Do?


While we have seen increased use of computing in philosophy over the past two decades, the continued development of computational sophistication and power, artificial intelligence, machine learning, and associated technologies suggests that philosophers in the near future could do more philosophy through computers, or outsource various philosophical tasks to them. Should they? Would they? And if so, what should we be doing now to prepare?

[Obvious, “Duc de Belamy” and “Edmond de Belamy”]

This post was prompted by a recent article in Nature by Chris Reed about the work of Noam Slonim (IBM), Yonatan Bilu (KI Institute), and Ranit Aharonov (IBM) to develop an autonomous computer system, Project Debater, that can argue with and debate humans (shared with me by Tushar Irani of Wesleyan), as well as the progress made with the language and communication skills of artificial intelligence, as demonstrated by GPT-3. (Also see the entry, “Computational Philosophy,” at the Stanford Encyclopedia of Philosophy.)

Here’s a little about Project Debater from the Nature piece:

It brings together new approaches for harvesting and interpreting argumentatively relevant material from text with methods for repairing sentence syntax (which enable the system to redeploy extracted sentence fragments when presenting its arguments…). These components of the debater system are combined with information that was pre-prepared by humans, grouped around key themes, to provide knowledge, arguments and counterarguments about a wide range of topics. This knowledge base is supplemented with ‘canned’ text — fragments of sentences, pre-authored by humans — that can be used to introduce and structure a presentation during a debate…

In a series of outings in 2018 and 2019, Project Debater took on a range of talented, high-profile human debaters, and its performance was informally evaluated by the audiences. Backed by its argumentation techniques and fuelled by its processed data sets, the system creates a 4-minute speech that opens a debate about a topic from its repertoire, to which a human opponent responds. It then reacts to its opponent’s points by producing a second 4-minute speech. The opponent replies with their own 4-minute rebuttal, and the debate concludes with both participants giving a 2-minute closing statement.

Perhaps the weakest aspect of the system is that it struggles to emulate the coherence and flow of human debaters — a problem associated with the highest level at which its processing can select, abstract and choreograph arguments. Yet this limitation is hardly unique to Project Debater. The structure of argument is still poorly understood, despite two millennia of research. Depending on whether the focus of argumentation research is language use, epistemology (the philosophical theory of knowledge), cognitive processes or logical validity, the features that have been proposed as crucial for a coherent model of argumentation and reasoning differ wildly.

Models of what constitutes good argument are therefore extremely diverse, whereas models of what constitutes good debate amount to little more than formalized intuitions (although disciplines in which the goodness of debate is codified, such as law and, to a lesser extent, political science, are ahead of the game on this front). It is therefore no wonder that Project Debater’s performance was evaluated simply by asking a human audience whether they thought it was “exemplifying a decent performance”. For almost two thirds of the debated topics, the humans thought that it did.

As I tell my students, philosophy isn’t debate (the former is oriented towards understanding, the latter towards winning). But some of the work that goes into debate is similar to the work that goes into philosophy. What’s provocative to me about Project Debater, GPT-3, and related developments is that they suggest the near-term possibility of computing technology and language models semi-autonomously mapping out, in natural language, the assumptions and implications of arguments and their component parts.

One way to understand the body of knowledge philosophy generates is as a map of the unknown, or set of maps. Philosophical questions are points on the maps. So are premises, assumptions, principles, and theories. The “roads” on the maps are the arguments, implications, and inferences between these points, covering the ground of necessity and possibility.

Individual philosophical works that pose questions, develop arguments, justify premises, and explore the implications of positions make small maps of small bits of the vast terrain of the unknown, and often provide “directions” to others about how to navigate it.
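To make the metaphor a little more concrete, here is a minimal sketch of what such a “map” might look like as a data structure. All of the names, positions, and inferential links below are invented placeholders for illustration, not claims about actual entailments:

```python
# Toy sketch of a philosophical "map": questions, premises, and theories as
# nodes; arguments as directed "roads" from sets of premises to conclusions.
# The content is purely illustrative.
philosophical_map = {
    "nodes": {
        "determinism": "thesis",
        "incompatibilism": "principle",
        "no free will": "conclusion",
    },
    "edges": [
        # (premises, conclusion): one argument "road" on the map
        (("determinism", "incompatibilism"), "no free will"),
    ],
}

def reachable_conclusions(mapping, accepted):
    """Return everything the accepted positions license, per the map's edges."""
    accepted = set(accepted)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in mapping["edges"]:
            if set(premises) <= accepted and conclusion not in accepted:
                accepted.add(conclusion)
                changed = True
    return accepted

print(reachable_conclusions(philosophical_map, ["determinism", "incompatibilism"]))
```

Individual works of philosophy, on this picture, contribute nodes and edges to the shared map; the interesting (and hard) part is representing arguments faithfully enough that traversal means something.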

When computers get a bit better at understanding language, or adequately simulating an understanding of language, and better at understanding the structure of argument, they will be able to do a lot of this map-making work. They will also be able to provide directions for philosophers. How far into the future is an exchange like the following?

“Hey Sophi”
“Yes, Justin?”
“If I’m a consequentialist about ethics, how can I argue for eternalism about time?”
“There are a number of routes. Would you care to narrow them down?”
“Yes. Eliminate routes with supernatural and non-naturalist metaethics.”
“Current mapping is using over 100 variants of consequentialist ethics. Would you care to specify this factor?”
“Not at this time.”
“Some routes are blocked by your logic settings.”
“That’s fine.”
“Here are the top 10 routes on screen, ranked by estimated profession-wide average support for premises.”
“Re-rank according to compatibility with my philosophical and empirical presets.”
“Here you go, Justin.”
“Annotate routes 1, 2, 3, 6, and 8 with objection alerts, up to objection-level 3.”
“Done.”
“Thanks, Sophi.”
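The core of the exchange above is a route query over an argument map. Here is a minimal sketch of that idea, with a tiny invented graph (the positions and links are hypothetical placeholders, not real philosophical results):

```python
from collections import deque

# Hypothetical argument map: a directed edge from A to B means "A, together
# with auxiliary premises, can be used to argue for B". Purely illustrative.
argument_map = {
    "consequentialism": ["four-dimensionalism", "naturalist metaethics"],
    "four-dimensionalism": ["eternalism"],
    "naturalist metaethics": ["B-theory of time"],
    "B-theory of time": ["eternalism"],
}

def routes(graph, start, goal):
    """Enumerate all acyclic argumentative 'routes' from start to goal."""
    found, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            found.append(path)
            continue
        for nxt in graph.get(path[-1], []):
            if nxt not in path:  # avoid circular arguments
                queue.append(path + [nxt])
    return found

for r in routes(argument_map, "consequentialism", "eternalism"):
    print(" -> ".join(r))
```

A real “Sophi” would of course need far richer representations (premise sets, objection annotations, logic settings), but the basic operation of enumerating and ranking routes between positions is just graph search.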

This kind of technology may not work flawlessly, it may need substantial contributions from human philosophers to work well doing what it does, and it certainly won’t do everything that everyone thinks philosophy should do, but it will nonetheless be a very useful tool for philosophers, and may open new philosophical territory to explore.

Is there any reason to think such a tool wouldn’t come into existence?

I think the most likely reason it may not come into existence is that philosophers themselves won’t cooperate with its development. As Reed notes in his summary, “the structure of argument is still poorly understood,” and philosophers might be integral to making the varieties of argument intelligible to and operationalizable by the technology (or its makers). Perhaps they won’t choose to do this kind of work. Or the philosophical profession may not recognize work done to create or assist with the creation of this technology as philosophical work, thereby institutionally discouraging it. Further, it would seem that some work on making philosophical content more machine-intelligible would be necessary, either directly or perhaps through feedback on beta-testing, and philosophers might be reluctant to do this work or provide guidance.

Many technologies face the paradoxstacle, “It needs to be used in order to become good, but needs to be good in order to be used,” and overcome it. But philosophers’ reluctance to cooperate and limited demand for more “efficient” philosophy could be a formidable barrier. That would be a pity. (Think of how the integration of computing into mathematics has made research on more kinds of mathematics possible, and how computing has brought about advances in many other disciplines.)

A question all of this raises is: What should we be doing now in regard to the development of such technology, or in regard to other prospects for the integration of computing into philosophy?

One thing to do would be for us all to become more aware of existing projects that involve computers, in some form or another, taking on some of the tasks involved in philosophizing, or projects that are relevant to this. I’m not just talking about computer-based philosophy-information aggregators of various kinds, such as PhilPapers, Stanford Encyclopedia of Philosophy, Internet Encyclopedia of Philosophy, and InPhO, but also the use by philosophers of various computing tools in their research, as with corpus analysis, topic modeling, computer simulations, and network modeling, as well as relevant work in the philosophy of computer science.

I’m sure there is a lot more here and I encourage those more knowledgeable to share examples in the comments. (Maybe the philosophers involved with the International Association for Computing and Philosophy could help out?)

Another thing to do would be to start thinking about the kinds of training philosophers of the near future might need in order to help create, improve, and work effectively with these technologies. In the recent past, people have argued that some philosophy graduate students may have good reason to learn statistics or other formal research methods. Many philosophers of science think that training in the relevant science is extraordinarily useful for doing philosophy of science well. Perhaps we can add computer programming to the list of skills one may opt for as part of their philosophical training (I believe some PhD programs already do this, allowing some students to satisfy their language requirement by gaining competence in programming in a computer language).

Your thoughts and suggestions welcome.


Related: The Distant Future of Philosophy, Will Computers Do Philosophy?

4 Comments
Kenny Easwaran
8 months ago

This sounds like it could have many similarities to the use of proof assistants in mathematics. (https://people.inf.ethz.ch/fukudak/lect/mssemi/reports/09_rep_PatrickSchnider.pdf) These are still niche tools, used mainly by people working on problems for which they have been very helpful (notably the four color theorem and the Kepler conjecture, among a few others), as well as by a few people who have a philosophical argument that all mathematics would benefit from being formalized.

Importantly, the existence of these niche parts of both the disciplines of math and philosophy means that if these tools are at all usable, there will be some people who use them, which will hopefully enable them to be improved to the point where more people start using them. But it will likely be slow at first. Unlike a social network, this doesn’t seem like something that needs a critical mass of users before it is usable at all.

Graham Clay
8 months ago

Thanks for this post, Justin. This is really exciting stuff and I’m glad you’re shining a light on it.

Caleb Ontiveros and I have been working on a paper (here is a draft) arguing that philosophers have an obligation to spend significant effort not only using AI but also helping develop it and theorize about it. This is not to say that AI will not be philosophically relevant otherwise. But, as you discuss in your post, it will have a significantly greater effect on philosophy if philosophers get involved in this way. (We argue that philosophers have an obligation to work to increase progress in a specific sense, hence the downstream obligation to interact with AI in this way, given its likelihood to significantly accelerate progress.)

There are several reasons to be optimistic about the impact of AI on philosophy. First, the current rate of technological progress in this domain is extremely rapid. You note many examples of such progress, of course. Project Debater, GPT-3, etc. are quite promising signs of what’s to come, and these two in particular are already quite capable. (And they are capable not just for philosophical research purposes: GPT-3 can, with certain constraints, produce undergraduate philosophy essays of sufficient quality to fool professors. The challenges and opportunities this will bring to teaching are numerous.) Second, in the largest study to date, when asked the chance that AI will be “able to accomplish every task better and more cheaply than human workers,” the average AI expert estimated a 50% chance by 2061 and a 10% chance by 2025 (Katja Grace et al., “When Will AI Exceed Human Performance? Evidence from AI Experts,” Journal of Artificial Intelligence Research, 62 (2018): 729-754). You read that right: every task! It is no surprise people are worried about, or excited about, the possibility of artificial general intelligences (AGIs) in the near future: they are not all that far-fetched anymore, at least by the lights of the experts who work on this.

As for the tasks that AI will be able to help philosophers with or complete on its own, we have been thinking of them in terms of several categories:

  • Recommendation tools: locating and suggesting useful content.
  • Synthesis tools: summarizing existing work and ensuring that current encyclopedias are up to date.
  • Systematizing tools: relating philosophical propositions and positions.
  • Simulation tools: providing germane contributions from the standpoint of a simulated philosopher or a simulated believer of a given position.
  • Formalizing tools: transforming common language statements into formal logic.
  • Reasoning tools: reasoning through philosophical propositions in a way that is philosophically useful.

And we are sure there are categories we haven’t thought of. As you mention, there are existing tools that do some of these tasks without the use of AI, like the Recommendation tools of PhilPapers, but AI will make them more useful or bring the more sophisticated tools into the marketplace of ideas for the first time.

The mapping tool you describe, Justin, would fall under the Systematizing category, I think. And there are philosophers working on Systematizing tools of precisely this sort already, actually. Last we heard, PhilSurvey’s endgame was to implement an expert system in this territory, but perhaps David Bourget will chime in here and say more…

David Bourget
8 months ago

Nice post. I don’t have the time right now to say a lot about this, but the mapping tool Justin describes is pretty much what I hope the PhilPapers Surveys will evolve into in the long term. Obviously we’re very far from this now (and the starting steps have been greatly delayed by covid and other obstacles). We have first-hand experience with many of the challenges you mentioned, but we keep going!

Landon Elkind
8 months ago

Here is a bit of shamelessly self-serving promotion about one such project in this area: my Principia Rewrite project is using the Coq proof assistant. The 206 theorems in the propositional logic portions have already been formalized, and the portions covering quantifiers, descriptions, classes, and relations are in progress. I mention it because, in my view, this sort of formalization tool could be used in rational reconstruction of arguments more broadly in history of philosophy. And it fits with the general theme of using computational tools in creative philosophical research.
