Philosophical Uses for LLMs: Modeling Philosophers


Now that OpenAI has made it possible for members of the (paying) public to customize its large language model (LLM), ChatGPT, with special instructions, extra knowledge, and particular combinations of skills, the prospects of using it to create useful, interesting, and maybe even insightful “model philosophers” have improved.

[made with DALL-E]

(After all, what a technology ends up being able to do is not just a function of the technology itself, but, among other things, who has gotten to play with it.)

For an example of this, consider LucretiusGPT.

LucretiusGPT was created by research analyst Kelly Truelove. It’s an LLM version of Lucretius, or, as Truelove puts it, “a customized version of ChatGPT that is especially familiar with the ancient Roman poet Lucretius, his grand poem ‘On the Nature of Things,’ and the philosophy of Epicurus presented in the 2,000-year-old work.”
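(For the technically curious, here’s a rough sketch of how one might approximate such a persona programmatically, using OpenAI’s chat completions API with an in-character system prompt. This is a minimal illustration, not Truelove’s actual configuration, which also involves uploaded knowledge files; the persona text and question below are invented for the example.)

```python
# Minimal sketch of a "model philosopher" as a system-prompted chat session.
# Illustrative only; the persona text is invented, not Truelove's setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA = (
    "You are Lucretius, the ancient Roman poet and Epicurean philosopher, "
    "author of 'De Rerum Natura'. Answer in character, drawing on the "
    "poem's six books and on Epicurean physics and ethics."
)

def ask_lucretius(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_lucretius("How might your poem address artificial intelligence?"))
```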

In a dialogue with LucretiusGPT (it calls itself “Lucretius”), Truelove asks about “his” famous six-part poem and how it could be extended to include a part on computers and artificial intelligence.

In response, LGPT presents an outline of “Book VII: The Age of Silicon and Reason.” Truelove notes that the six-part structure of the proposed extension seems to correlate with what would be the preceding six “books” of the poem. LGPT replies:

Indeed, the proposed six-part structure for extending “De Rerum Natura” to encompass the developments of computing devices and artificial intelligence mirrors the original six-book layout of my work. Each part of this hypothetical extension aligns with the thematic progression and philosophical depth found in the original poem.

LGPT then goes on to explain the correlations for each part, noting:

In this extended structure, each part would serve to bridge the ancient Epicurean thought with modern scientific and technological advancements, maintaining the poem’s original spirit of exploring and understanding the natural world, human experience, and the implications of our advancements.

Truelove asks LGPT to write the poem, and it does. You can read it here.

The exchange afterwards was striking, I thought:

KT: Your addition deviates from “De Rerum Natura” in an interesting sense: It has a clear conclusion and even an epilogue. By contrast, “De Rerum Natura” ends abruptly.

LucretiusGPT: Indeed, you astutely observe a notable deviation in my proposed addition from the original “De Rerum Natura.” The original work is known for its abrupt ending, which has been a subject of much scholarly debate. Some suggest that the poem was unfinished at the time of my death, while others propose that the ending as it stands was intentional, reflecting the incomplete and ongoing nature of human understanding of the universe.

In contrast, the addition I proposed, which includes a conclusion and an epilogue, offers a more definitive closure. This choice was made to provide a sense of completion to the new themes introduced, particularly the advancements in computing and artificial intelligence. The epilogue serves to tie these modern concepts back to the Epicurean pursuit of understanding and tranquility, providing a reflective summary of the journey through the poem.

However, in keeping with the spirit of the original work, this closure does not imply that all questions are answered or that the exploration of nature and human understanding is complete. Instead, it acknowledges the ongoing nature of inquiry and the ever-evolving understanding of the universe, while offering a moment of contemplative closure to the specific themes discussed in the proposed addition.

It’s a bit of an improvement over last year’s attempt to talk philosophy with ChatGPT.

Check out the whole thing at TrueSciPhi, where Truelove has documented some of his explorations of this technology.

It would be interesting to see more “model philosophers” emerge. They might (eventually) be useful as tools for exploring possible extensions of their thoughts to matters they did not consider. It could be quite fascinating to see model philosophers converse with one another in ways that history, geography, language barriers, and other contingencies did not allow for. And they might make for good conversation partners for us. With an audio interface and a set of earbuds, you can wander about campus conversing with Kant.

A philosopher could create a model of themselves, too, trained on their own writings and notes, as well as empirical information related to their own work, and endowed with certain reasoning and research skills. One could then ask it: “What do I think about X?”, where X is something the philosopher hasn’t worked out a view on yet. The result, of course, is not a substitute for thought, but it could be useful, educational, and possibly insightful. One could feed one’s model the writings of others and ask, “What challenges, if any, do these texts pose to my views?” And so on. (Recall “Hey Sophi”.)
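(One simple way to build such a self-model, short of fine-tuning, is retrieval: embed one’s writings, pull the passages most relevant to a question, and hand them to the model as context. Here’s a minimal sketch along those lines, assuming OpenAI’s embeddings and chat APIs; the file name, prompt wording, and sample question are invented for illustration, and a real system would want chunking, caching, and citations.)

```python
# Minimal retrieval sketch for a "model of oneself": embed one's own
# writings, retrieve the passages closest to a question, and ask the
# model to answer from them. Illustrative only; the corpus path and
# prompts are hypothetical placeholders.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([d.embedding for d in resp.data])

# Hypothetical corpus: paragraphs drawn from one's own papers and notes.
corpus = open("my_collected_writings.txt").read().split("\n\n")
corpus_vecs = embed(corpus)

def ask_my_model(question: str, k: int = 5) -> str:
    q_vec = embed([question])[0]
    # Cosine similarity between the question and every corpus paragraph.
    sims = corpus_vecs @ q_vec / (
        np.linalg.norm(corpus_vecs, axis=1) * np.linalg.norm(q_vec)
    )
    context = "\n\n".join(corpus[i] for i in np.argsort(sims)[-k:])
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Answer as the author of these excerpts would, "
                        "extrapolating from their stated views:\n\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(ask_my_model("What do I think about moral luck?"))
```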

Are you aware of other model philosophers created with customized versions of ChatGPT? Have you created a model of yourself? Let us know.

By the way, if you’re unimpressed with the current technology, just imagine its capabilities 10 or 20 years down the road. Discussion of its uses, benefits, risks, implications, etc., welcome, as usual.


Related:
Two Cultures of Philosophy: AI Edition
Shaping the AI Revolution in Philosophy

7 Comments
Caligula's Goat
4 months ago

I don’t want to pay for access to the features, but my experience with the baseline ChatGPT model tells me that the depth and complexity of the conversations are very, very dependent on user prompts, much more so than they’re dependent on the LLM itself.

For example, in response to the poem generated by LGPT Kelly Truelove writes to it that:

  • Your addition deviates from “De Rerum Natura” in an interesting sense: It has a clear conclusion and even an epilogue. By contrast, “De Rerum Natura” ends abruptly.

And the LLM then goes on to predictably expand on what’s been suggested here. If Truelove had instead written that the addition is perfectly consistent with DRN and noted that both works had very clear endings with epilogues, I’d bet that LGPT would play along just fine with that as well and expound on why it created such a consistent expansion.

It’s definitely possible for systems like these to be trained on multiple conversations so that they can develop responses to call out lies (e.g., when its current conversation is statistically aberrant relative to a host of other conversations about a topic) but it’s not clear that the market for LLMs is headed in that design direction.

As it stands, and as an example, I’ve had baseline ChatGPT argue that one, two, and three state solutions to the Israel-Palestine conflict are the best ones, and the answer really does depend on what I ask it (e.g., when I described Hamas as a terrorist organization ChatGPT favored one or three state solutions; when I described them as the elected representatives of Gaza it favored two states, etc.). The point of all this is that it’s easy to generate realistic and insightful-seeming conversations with these LLMs (an achievement, no doubt), but right now these LLMs are more like Clever Hans than they are like HAL9000. Their insights are being generated by, and reflective of, the prompter and are only as good as the prompter.

Simon
Reply to  Caligula's Goat
4 months ago

The paid version is significantly better

Ben Levinstein
4 months ago

Justin notes that a philosopher could create a model of themselves. I was narcissistic enough to do this a while back on the LaTeX source code for my own published papers, using a much smaller model (LLaMA 2 7B) that I quantized and could train on my home machine. Some results here: https://twitter.com/ben_levinstein/status/1692609278761148799

I also tried it with the Socratic dialogs, but the results were not very interesting. LLaMA had presumably already read all the dialogs many times, so the fine-tuning didn’t seem to add much to the baseline. I’m curious whether the Lucretius fine-tuning does much over what the model could do originally if you just asked it to role-play.

Graham Clay
4 months ago

Thanks, Justin, for this discussion and for the link to my guest post with Caleb Ontiveros from 2021 at the bottom of this piece. In that guest post, we argue that these sorts of philosopher “simulations” would be conducive to furthering philosophical progress, but also that they would be more likely to be useful if philosophers got involved in their development and evaluation. We do have three updates on this front.

First, a more developed paper on the ideas in our guest post has been published in Metaphilosophy and is openly accessible here, for those interested: https://onlinelibrary.wiley.com/doi/10.1111/meta.12647. (The guest post was based on some earlier drafts of this paper.)

Second, we have applied for some grants, with a few colleagues, to attempt to put our time (and someone else’s money) where our mouths are–namely, to start developing more philosophy-specific AI tools that will be much more useful to professional philosophers. Hopefully we will have some good news to announce in the coming months about this.

Third, we and the AutomatED team (https://automated.beehiiv.com/) are working on some teaching-specific undergraduate-level GPT “simulations” at the moment, both to assist with in-class activities and to assist with feedback/evaluation/grading. We think there is great promise in this space, even assuming no future progress with preexisting LLMs like ChatGPT4.

Marc Champagne
4 months ago

The devil in the movie Spawn: “Why ask why, when *how* is so much more fun?” I can’t help but think that nothing wiser is involved in such experiments with technology…

Sudheesh
4 months ago

I recently tried to implement one such application; even with my naive approach (no fine-tuning), it was curious to see hints of conceptual links being picked up by LLMs, just through semantic contextual relevance!

Matthew
4 months ago

I have had rather positive experiences with ChatGPT. First, you have to use GPT-4, paying US$20 per month. Second, it can only view one chat-thread at a time, so if you have asked just one brief question, it may not respond very well. I have found it particularly useful as a sparring partner, going deep into an issue, debating it blow by blow, even putting controversial points to it to provoke push-back and deeper insight. It can handle nested prompts, numbered points, and colloquial phrases, responding not only with broad knowledge, but in appropriate technical language, and with touches of humour, courtesy, and mild reproof if I go off the rails. It can review specific papers and debate them with you. No human tutor/colleague would have the patience and time, not to mention the breadth of knowledge. My one complaint is that it cannot, for reasons of privacy, view all of my chat-threads; if it had that facility it could step up to be a broad-spectrum counsellor.

I am sure there is great potential for GPT to be a teaching tool, even an assistant tutor, for philosophy students, as Graham says. And a useful sparring partner for professionals to sharpen their arguments. Not to mention in all other subjects as well!