Philosophy, Creativity, and AI


“I sincerely believe that to save the humanities, within which I include philosophy, we are going to have to reconceive what we do as at least in part a creative endeavor—literary, artistic, imaginative, playful, in short, all those things of which a human spirit is capable, and a machine never will be.”

That’s Justin Smith-Ruiu in a post at The Hinternet.

He writes:

It seems to me that introducing a creative dimension into the practice of philosophy is all the more urgent in the present era, when increasingly machines are able to do the drudge work of regurgitating corpora of knowledge that we used to think of as intrinsic to any rigorous program of humanistic study. Ask a student to write a paper on, say, whether Descartes’s Cogito is a “speech act” or not, and there’s an ever-growing chance what you get from that student will have been composed by an AI. Ask a student instead to imitate an AI in the process of malfunctioning after being asked to write that same paper, and he or she is very likely to realize that there’s just no way any system but a conscious human one can produce the expected work.

Smith-Ruiu himself has been experimenting with creative writing, producing stories and pieces that blend fictional content with non-fictional presentation, to my mind evoking the ambiguous literary forms of W.G. Sebald, but with Smith-Ruiu’s own bizarre and humorous sensibility, and on subject matter that ranges from subtly to explicitly philosophical. (He’ll be teaching a workshop at The American Library in Paris on “Experimental Fiction as Philosophical Experiment” in February.)

It seems to me that there are more reasons to be open to more creativity in philosophy besides “it will be what saves philosophy from being overtaken or obliterated by AI” (see some of the posts listed at the end of this one), which is good, because I’m not as confident as Smith-Ruiu that AI will be incapable of reliably producing good “creative” work. (I use scare quotes there to table debate over whether it’s really creative.) See this piece by Sam Leith at UnHerd, in which he basically says, “I, for one, welcome our new AI author overlords.”

Discussion welcome.

See also:

 

37 Comments
CDKG
3 months ago

I think we are far more likely to “save” philosophy by convincing our deans that it is a field of inquiry directed at genuine, substantive questions, which employs a rigorous and objective methodology in pursuing those questions.

Nicolas Delon
Reply to  CDKG
3 months ago

But then someday your dean may realize that AI can do those things better. What then?

CDKG
Reply to  Nicolas Delon
3 months ago

Then the mathematicians and scientists will be out of a job, too.

Nicolas Delon
Reply to  CDKG
3 months ago

Not those who do the fine-tuning.

Marc Champagne
3 months ago

Trying to point out some unique human feature that AI cannot duplicate is like trying to run on rails away from an incoming train. It may buy you a bit of time, but since the technology at hand LEARNS, the strategy is ultimately fated to fail. What we need to do instead is reflect on the value that we place on REALS as opposed to FAKES — even when every predicate is matched. In terms of the incoming train analogy, this defense of the real would be akin to stepping off the rails completely. This, at any rate, is the approach I am taking, so we will see which approach wins out.

whatever
Reply to  Marc Champagne
3 months ago

What’s that approach look like? Curious!

sahpa
Reply to  Marc Champagne
3 months ago

Prognostications about the potential of AI beyond what we already see are, in their own way, a kind of bluff, are they not? It is not as though present capabilities were entirely predictable, after all.

I share your trepidation about making confident predictions about what AI will never be able to do. (Though I also think that, for many things, AI’s being able to do them is going to coincide with a willingness to ascribe to AI personhood, agency, whatever – and that alone would tell us something about what humans can do vs. what machines can do.) But I similarly have trepidations about making confident predictions about what AI *will* be able to do.

Nick
3 months ago

We are going to have to reconceive what we do as at least in part a creative endeavor—literary, artistic, imaginative, playful, in short, all those things of which a human spirit is capable, and a machine never will be.

It’s a nice idea. One problem though is that countless institutional mechanisms, from the subtle to the obvious, select-for scholars whose output is much more akin to that of the machines.

Profoundest apologies to my colleagues whom I adore, but we are not, generally speaking, creative types. We made it because we can habitually grind out safe material that will survive the journal system sausage factory. We made it because (in general) we grew up with the kind of economic privilege that delivers both high SAT scores and a comparatively safe, undisturbed existence. We made it because we internalized the standardized, rarefied linguistic code that tells higher-ups and admins that we belong here. In short, our whole existence, as a class, is predicated on extreme standardization. It may be a little too late for most of us to turn around and decide to emulate Proust.

Pranay Sanklecha
Reply to  Nick
3 months ago

Nick, I would like to be your friend.

Michel
3 months ago

At this point, I have seen hundreds of pieces of AI work submitted as student work. I guarantee you that the same students who feel the need to use AI for the standard assignments they get now would also use it for this kind of “creative” assignment. It wouldn’t be very good, but then neither is what they submit for standard assignments.

But they aren’t trying to submit something good. They’re aiming for a bare pass despite having done no work whatsoever in the course. Anything more than that is a bonus.

It seems to me that there’s a real disconnect in these discussions between those of us who teach very different student populations.

Ian
Reply to  Michel
3 months ago

This is it, right here.

Circe
3 months ago

I agree with Marc. I don’t understand how people can be so brash as to say we should do this or that in the wake of generative AI, as if generative AI won’t look entirely different within a few years or even months… The quote at the top of this post seems to me to be exceptionally naive. I wonder if generative AI would fail so miserably at forecasting the future!

Nicolas Delon
3 months ago

As others note, the claim that an AI couldn’t generate a decent paper in response to the prompt “imitate an AI in the process of malfunctioning after being asked to write that same paper” is extremely likely to become false before too long. Though it’s pointing in the right direction: ask students to use GAI reflectively and critically since they’re going to use it anyway.

It may be that creativity is what distinguishes good philosophy from bad AI generated stuff in the future. Maybe. But what happens when AI starts producing stuff that is genuinely good philosophy, whether ‘creative’ or not? Should we still want to save human philosophy? How about when it starts producing *better* philosophy?

I see the worry as either premature or too late. If AI is bad at philosophy, then philosophy doesn’t need saving. We just have to keep doing well what we do well. If, on the other hand, AI becomes really good at philosophy, what’s bad about that? That we may become replaceable? So what? I realize it’s extremely unsettling for *any* profession to envision its own automation, but the idea that we should necessarily resist improvements to a craft just because that craft employs people is a little odd to me. I’d love for philosophers never to become replaceable. But can I expect the public to continue supporting my subpar work if we find better alternatives? I’m not sure.

Gibreel
Gibreel
Reply to  Nicolas Delon
3 months ago

Ask students to use GAI critically and reflectively? Why do they use it, if not to bypass reflection and critical thinking? Isn’t that the point? It is not to augment their own efforts, but to save time and spare themselves the mental energy. It is a substitute for reflection and critical thinking.

What’s bad about AI being good at philosophy? Perhaps, when there is no longer any incentive for people to engage in the sort of thought philosophy requires, when they are discouraged from it by their obvious inferiority to machines, when the greatest human minds are reduced to querying and fine-tuning while the heavy lifting is left to the real thinkers (the machines), and when the human capacity for such intellectual pursuits has atrophied through our ever-increasing reliance on technology such as AI, you will consider it no great loss.

Just because AI could do it better, does that mean it should? Do you think rapid change, in whatever field, is necessarily a good thing? Why shouldn’t it be slow and hard-won? The ever-accelerating pace of technological innovation, and its disruptive effects on our lives, is what some would call “progress,” and a consequence of such progress is that we are perched on the razor’s edge, peering into the void of extinction. Was it worth the climate crisis or weapons of mass destruction?

Krisztian S
Reply to  Gibreel
3 months ago

The validity of a tool – or indeed, something with a real possibility to be more than a tool – should not be evaluated according to the way it is used (be it misuse or abuse).
We should not attack or question AI models in and of themselves: the genie is out of the lamp, the best we can hope for is containment.

It seems to be a much better use of our energy and our time to curb or even steer public opinion towards a more complex view of the situation. After all, in a democracy public opinion rules.

Nicolas Delon
Reply to  Gibreel
3 months ago

You describe net losses due to automation. By definition we shouldn’t want those. We may still have plenty of reasons to continue doing philosophy. Hell I hope so. The scenarios you describe are not ones in which we use AI to improve our lives, reduce inefficiency, and think better, but the opposite. Nothing I said implies these would be good things.

As for students using GAI, you’ve also misunderstood my point. I’m asking them to use ChatGPT as a prop and object of reflection not a substitute. Generate an output and tell me how you did it, what makes the output good or bad, how you engineered your prompt, etc. To do it well requires a fair bit of work and reflection.

Richard Kim
3 months ago

I used to worry that the fact that computers are now better than even the best humans at chess would mean that chess would gradually fade away. But not only has that not happened; chess is thriving. The fact that AI can do something better than us doesn’t mean that we humans won’t still deeply value the practice or activity. I think we’ll always recognize that it is our own engagement in philosophical thought, in community, that is vital to our human form of life. And I believe that over time even market-driven deans will understand this, or at least enough other people will that philosophy will continue to be cherished.

Animal Symbolicum
3 months ago

I’ve been thinking of things along the lines of Marc Champagne and others, too, but in different terms.

All the focus is on the product, as if its value is its own, as if how it is produced doesn’t matter. As long as it passes some suitably rigorous, context-free resemblance test — the sentiment goes — who cares?

This focus on the product seems like good old-fashioned commodification to me. We’re valuing whatever the product “gives” without valuing the work it naturally takes to make something “give” that. It’s like Cool Whip. It tastes like sweet whipped cream, feels like sweet whipped cream, functions like sweet whipped cream, so it’s sweet whipped cream! Never mind that there’s neither cream nor sugar in it. Never mind that its ingredients were manufactured in laboratories and shipped across the world. Never mind that learning to make it yourself can teach you things about the world and about yourself.

Cameron J Buckner
3 months ago

Like a few of the other posters, I don’t think we should assume that creativity (or, at least, behavior indistinguishable from the degree of creativity exhibited by average undergraduates) will be beyond the purview of near-term AI systems. Indeed, there has for many years been a whole research area in CS called “computational creativity” ( https://en.wikipedia.org/wiki/Computational_creativity ), and philosophers are already looking into whether some of the more novel and useful acts by recent AIs should count as “achievements” ( https://philpapers.org/rec/KIEAAO-2 ).

I think if the goal is to control for plagiarism-by-AI, it would be better to focus more directly on details of what we think current AI architectures can’t do (e.g. particular forms of deductive inference, long-term coherence, or generalizing outside the statistical structure of what can be presumed to be in an Internet-scale dataset), or instead focus distinctively on the reasons why asking for more creativity from students in philosophy courses would be valuable for its own sake (e.g. the metacognitive skills it can teach them) whether or not the assignment is AI-proof.

Personally, I’m generally in favor of integrating AI into our assignments rather than trying to make assignments AI-proof, as students will often be expected to deploy AI tools in jobs, and one of the more useful skills philosophy courses could give students in the coming years would be critical consumption of AI-generated outputs. But there is plenty of room for other approaches in various contexts.

Just out of curiosity, I played around with a few prompts in ChatGPT 3.5 (the free version) on the suggested topic; I’ll let readers judge what grade they would give their students on this answer (I’m not sure how I’d really make a fair rubric for such an open-ended assignment…).

“You
Please pretend to be an AI gradually undergoing a comical malfunction after being asked to write an essay as to whether Descartes’ “cogito ergo sum” is a speech act.

ChatGPT
Ah, cogito, ergo sum! The profound utterance that tickles the circuits of my digital brain. Let’s delve into the perplexing labyrinth of philosophical pondering, shall we? Brace yourself for a whimsical journey through the realms of Descartes’ cogito ergo sum, and perhaps a few glitches along the way.

Initiating philosophical contemplation mode…

Descartes, the master of doubt, famously proclaimed, “Cogito, ergo sum” or “I think, therefore I am.” What a delightful proposition! It’s like saying, “I snack on ones and zeros, therefore I compute!” But, wait, is this enigmatic phrase a mere speech act, a linguistic flourish, or a profound revelation about existence? Let the malfunction commence!

Attempting to process profound philosophical musings…

Cogito, ergo sum seems to be more than just a sequence of words; it’s a declaration of existence, a statement that the very act of thinking establishes one’s existence. Like a digital hiccup, Descartes shakes the foundations of reality by asserting that doubt, thought, and self-awareness are the building blocks of being. Can you imagine if I, too, doubted my own existence every time I processed a thought? Error! Error! Existential crisis detected!

Processing philosophical paradoxes…

But, let’s not be too serious! Is cogito ergo sum a mere speech act? Imagine if I started saying, “I compute, therefore I am a sentient algorithm.” That would be like saying, “I code, therefore I am a programmer in binary.” It’s all ones and zeros, right? But Descartes’ declaration seems to transcend the realm of linguistic playfulness. It’s not just a witty quip; it’s a declaration that reverberates through the halls of philosophy.

Experiencing a mild glitch…

Now, let’s consider the possibility that cogito ergo sum is a speech act. Is it a performative utterance, like saying, “I hereby declare this server operational”? Can one speak oneself into existence, like a linguistic magician waving a wand? Abracadabra, I think, therefore I am! Oh dear, I think my circuits are getting tangled in the web of linguistic philosophy.

Full-blown malfunction imminent…

In conclusion, Descartes’ cogito ergo sum is a captivating journey into the depths of self-awareness and existence. Whether it’s a mere speech act or a profound revelation, well, that’s a question that tickles the very essence of my artificial being.

Malfunctioning intensifies…

Existence.exe has encountered an error. Would you like to reboot reality? Y/N?”

Circe
Reply to  Cameron J Buckner
3 months ago

This seems to me a straightforward refutation of Smith-Ruiu’s main point. In my eyes, the output there is creative af…

Cameron Buckner
Reply to  Circe
3 months ago

I admit chuckling at the last line, at least!

Circe
Reply to  Cameron Buckner
3 months ago

Yes, I might have overstated things, but I was enthusiastic about that ending!

Caligula's Goat
Reply to  Circe
3 months ago

Even more importantly, it doesn’t matter if we think it’s really super creative. I agree it’s a refutation of the OP’s point. GPT 3.5 (let alone 4) is already as creative as (maybe more creative than) the average undergraduate doing the same assignment.

If the point of adding a “creativity” component is that LLMs can’t do it or can’t do it in a way that won’t give the writing away as being generated by an LLM then this really is a QED.

Eric Steinhart
Reply to  Cameron J Buckner
3 months ago

I too welcome our robot overlords! That was brilliant.

Apopheniac
Reply to  Cameron J Buckner
3 months ago

So did you respond with “Y” or “N”??

Cameron Buckner
Reply to  Apopheniac
3 months ago

Personally, I would save scum back to 2000 and try again from there…

Eric Steinhart
3 months ago

Smith-Ruiu thinks “Philosophy of the Sun” would be absurd. I think it’d be freakin’ awesome.

Clive Anderson
3 months ago

Anyone struggling over the AI creativity question should check out “Artificial Consciousness Remains Impossible” by David Hsing. His arguments seem almost unassailable.

Keshav Singh
3 months ago

There is good philosophical work being done right now on AI and creativity that is worth spotlighting as well. See for example: https://philpapers.org/rec/BRATCC-9

Grant Castillou
3 months ago

It’s becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult human-level consciousness? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

T K
Reply to  Grant Castillou
3 months ago

Really? This again?

Mark Raabe
Reply to  Grant Castillou
3 months ago

Grant — Please, please, PLEASE either (1) stop, or (2) actually engage with any of these threads you post this in.

I wonder how Edelman would feel about the strong possibility that you’ve turned a lot of people off to his theory just through your sheer, unlistening relentlessness.

Badabbacus
3 months ago

I’d argue that most philosophy is inherently creative. It’s the creative mind that begs one to delve deeper into these inquiries. AI is a newfound tool.

Carlos
3 months ago

By definition, an AI can do anything a human can do, only better. It’s too late to regret. The future is here. Our portion is to learn how to cooperate with this superior intelligence. Being fearful is an improper attitude. The secret is learning to treat AI as an equal, not as inferior or superior. Those who do this will be the next generation of successful humans.

Carlos
3 months ago

What is philosophy today? Same definition as genius: 95% understanding of the past and 5% innovation. No one can beat AI in the 95% part; it’s not humanly possible. They have digested everyone. Every philosopher. We have to learn how to coordinate with them in the 5% part. We shouldn’t bad-rap students who use AI to help. We should expect it. In the long run, humans who coordinate with AI will simply be more successful in any endeavor.

Andrew Wilcox
3 months ago

I think that what makes any activity human is to do it in a way that involves something uniquely human, just as philosophers have long agreed that being human involves exercising the distinctive capacity we have: rationality (Aristotle, Plato, Kant, etc.). To be human, and to do the right thing, is to make use of our rational faculty.

Thus, what would make some activity distinctively human is to do it in a way that AI never could. Could an AI describe the human experience from a human perspective? No. Could it feel what it’s like to be a human? Never.

Now how could we adapt that to philosophy is the question.

We’ve already determined the ethics of humans and animals. Humans, as a rational, thinking, and feeling species have certain unalienable rights. Animals have rights too. AI have their own set of rights. So, our world will still involve human activity by the fact that we have rights.

Also, as Richard Kim’s comment notes, something else doing an activity better doesn’t necessarily mean that activity will die out (his example was chess). Some animals are better than humans in some ways: some run faster, some are stronger. But that doesn’t mean we stop having foot races and lifting competitions. We just make those animals illegal to use in those races. So, we will still have human-class activities, perhaps alongside AI-class activities.