Now that OpenAI has made it possible for members of the (paying) public to customize its large language model (LLM), ChatGPT, with special instructions, extra knowledge, and particular combinations of skills, the prospects of using it to create useful, interesting, and maybe even insightful “model philosophers” have improved. (After all, what a technology ends up being able to do is a function not just of the technology itself but, among other things, of who has gotten to play with it.)
For an example of this, consider LucretiusGPT.
LucretiusGPT was created by research analyst Kelly Truelove. It’s an LLM version of Lucretius, or, as Truelove puts it, “a customized version of ChatGPT that is especially familiar with the ancient Roman poet Lucretius, his grand poem ‘On the Nature of Things,’ and the philosophy of Epicurus presented in the 2,000-year-old work.”
In a dialogue with LucretiusGPT (it calls itself “Lucretius”), Truelove asks about “his” famous six-part poem and how it could be extended to include a part on computers and artificial intelligence.
In response, LGPT presents an outline of “Book VII: The Age of Silicon and Reason.” Truelove notes that the six-part structure of the proposed extension seems to correlate with what would be the preceding six “books” of the poem. LGPT replies:
Indeed, the proposed six-part structure for extending “De Rerum Natura” to encompass the developments of computing devices and artificial intelligence mirrors the original six-book layout of my work. Each part of this hypothetical extension aligns with the thematic progression and philosophical depth found in the original poem.
LGPT then goes on to explain the correlations for each part, noting:
In this extended structure, each part would serve to bridge the ancient Epicurean thought with modern scientific and technological advancements, maintaining the poem’s original spirit of exploring and understanding the natural world, human experience, and the implications of our advancements.
Truelove asks LGPT to write the poem, and it does. You can read it here.
The exchange that followed was striking, I thought:
KT: Your addition deviates from “De Rerum Natura” in an interesting sense: It has a clear conclusion and even an epilogue. By contrast, “De Rerum Natura” ends abruptly.
LucretiusGPT: Indeed, you astutely observe a notable deviation in my proposed addition from the original “De Rerum Natura.” The original work is known for its abrupt ending, which has been a subject of much scholarly debate. Some suggest that the poem was unfinished at the time of my death, while others propose that the ending as it stands was intentional, reflecting the incomplete and ongoing nature of human understanding of the universe.
In contrast, the addition I proposed, which includes a conclusion and an epilogue, offers a more definitive closure. This choice was made to provide a sense of completion to the new themes introduced, particularly the advancements in computing and artificial intelligence. The epilogue serves to tie these modern concepts back to the Epicurean pursuit of understanding and tranquility, providing a reflective summary of the journey through the poem.
However, in keeping with the spirit of the original work, this closure does not imply that all questions are answered or that the exploration of nature and human understanding is complete. Instead, it acknowledges the ongoing nature of inquiry and the ever-evolving understanding of the universe, while offering a moment of contemplative closure to the specific themes discussed in the proposed addition.
It’s a bit of an improvement over last year’s attempt to talk philosophy with ChatGPT.
Check out the whole thing at TrueSciPhi, where Truelove has documented some of his explorations of this technology.
It would be interesting to see more “model philosophers” emerge. They might (eventually) be useful as tools for exploring possible extensions of their thoughts to matters they did not consider. It could be quite fascinating to see model philosophers converse with one another in ways that history, geography, language barriers, and other contingencies did not allow for. And they might make for good conversation partners for us. With an audio interface and a set of earbuds, you can wander about campus conversing with Kant.
A philosopher could create a model of themselves, too, trained on their own writings and notes, as well as empirical information related to their work, and endowed with certain reasoning and research skills. One could then ask it: “what do I think about X?”, where X is something the philosopher hasn’t worked out a view on yet. The result, of course, is not a substitute for thought, but it could be useful, educational, and possibly insightful. One could feed one’s model the writings of others and ask, “what challenges, if any, do these texts pose to my views?” And so on. (Recall “Hey Sophi“.)
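For the curious, here is a minimal sketch of how such a self-model might be primed, assuming a chat-style LLM API like the one underlying ChatGPT. Everything here — the helper function, the philosopher’s name, and the excerpt texts — is a hypothetical illustration, not a real system:

```python
# Hypothetical sketch: priming an LLM to answer as a "model philosopher"
# grounded in excerpts from that philosopher's own writings.

def build_persona_messages(name, writings, question):
    """Assemble chat messages that ask an LLM to answer as `name`,
    reasoning only from the supplied excerpts of their work."""
    excerpts = "\n\n".join(
        f"[Excerpt {i + 1}]\n{text}" for i, text in enumerate(writings)
    )
    system_prompt = (
        f"You are a model of the philosopher {name}. "
        "Answer in their voice, reasoning from the excerpts below, "
        "and flag questions that go beyond what the excerpts support.\n\n"
        f"{excerpts}"
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ]

# Placeholder writings and question, purely for illustration.
messages = build_persona_messages(
    "A. Philosopher",
    ["Notes on free will and determinism...",
     "Draft chapter on personal identity..."],
    "What do I think about machine consciousness?",
)
# `messages` could then be sent to a chat-completion endpoint;
# a more serious version would retrieve relevant excerpts per question
# rather than packing everything into one prompt.
```

The interesting design question is the one ChatGPT’s customization feature glosses over: whether the model should extrapolate freely from the writings or admit when the philosopher simply hasn’t addressed the topic.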
Are you aware of other model philosophers created with customized versions of ChatGPT? Have you created a model of yourself? Let us know.
By the way, if you’re unimpressed with the current technology, just imagine its capabilities 10 or 20 years down the road. Discussion of its uses, benefits, risks, implications, etc., welcome, as usual.