Two Cultures of Philosophy: AI Edition


Up for discussion: the following two claims (along with their presuppositions, ambiguities, etc).

“If you think artificial intelligence could improve philosophy, then you’re mistaken about a central point of philosophy.”

“If you think artificial intelligence could not improve philosophy, then you’re mistaken about a central point of philosophy.”

(Prompted by the consideration of a distinction between technology that alleviates the need for human thinking and technology that improves it, as raised in a Twitter thread by Zachary Pirtle.)


Related:
The Distant Future of Philosophy
“Hey Sophi”, or How Much Philosophy Will Computers Do?
Will Computers Do Philosophy?
Shaping the AI Revolution in Philosophy

22 Comments
Marc Champagne
1 month ago

If an AI reached the conclusion that it cannot improve philosophy, would AI enthusiasts listen to it? My bet is that they would keep tinkering with the AI until they heard it say what they had already decided.

Last edited 1 month ago by Marc Champagne
Lex
Reply to  Marc Champagne
1 month ago

Maybe they would conclude that it was philosophy that couldn't be improved, rather than that it was a shortcoming of the AI.

Martin Peterson
1 month ago

Ordinary non-intelligent computers have already improved philosophy. I can mention at least ten philosophy papers that use computer simulations to make a central point. (Here is one: “A computer simulation of the argument from disagreement,” SpringerLink.) So it is pretty obvious that AI will continue to improve philosophy; there is little to discuss here.

Frank
Reply to  Martin Peterson
1 month ago

Except that this only settles, at most, the truth value of the object proposition in the propositional-attitude sentences that constitute the antecedents of the conditionals in question. I can, just for instance, believe what you say here without thinking that those who believe falsely are mistaken about a central point of philosophy. Also, not every philosophy paper improves philosophy. There is a lot more to discuss.

Kenny Easwaran
Reply to  Frank
1 month ago

While it’s true that not *every* paper improves philosophy, I would think that the default assumption about each published paper is that it is in fact *some* sort of improvement, given the number of people who had to think so to get it published! (At minimum, the author, an editor, and a reviewer.)

There probably are some papers that are net neutral (the value they contribute is equal to the amount that they make it harder for other people to publish or find other papers that would have been more valuable) or even negative (they lead people on a wild goose chase, distracting from the more significant issues in the sub-field), but I would think the burden is on anyone who claims that about a particular paper to justify the claim.

Lex
1 month ago

There’s a speculative aspect in play: an unrestricted scope of improvement for AI’s capabilities. Without a ceiling on AI’s abilities, we cannot similarly limit its contribution.
I used to be a Strong AI sceptic, but now not so much.
Panpsychism helped with that.

Grant Castillou
1 month ago

It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with human-adult-level consciousness? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990’s and 2000’s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

Lex
Reply to  Grant Castillou
1 month ago

Have you considered how panpsychism might play into, at least, the primary step?

James Cummings
Reply to  Grant Castillou
1 month ago

A fine comment, but the proof of a pudding is never in itself, only in the eating of it.

Siddharth Muthukrishnan
1 month ago

AI (broadly construed) is already having significant impacts in many scientific disciplines. The most striking example is perhaps DeepMind’s AlphaFold program, which has left every other protein-folding algorithm in the dust—protein folding being a problem for which computational biologists had struggled for decades to come up with good algorithms. In physics, machine learning (ML) models are already having significant impacts on data analysis in, inter alia, particle physics and astronomy. There are also uses of ML in theoretical physics, to better understand some models or to help find some analytical solutions. AI looks helpful to pure math as well, as another recent DeepMind paper showed: it can help formulate conjectures or highlight connections.

So if we are broadly Quinean, and think that philosophy is continuous with the sciences, then it’s likely that AI will have impacts in philosophy. The way it helps philosophy might be similar to the way it helps math.

So I certainly lean towards the second of the two claims. We would be silly not to take advantage of any tools that might be helpful to us.

Graham Clay
Reply to  Siddharth Muthukrishnan
1 month ago

Siddharth—I am broadly on board with the spirit of your position, I think. But I would suggest that we needn’t be broadly Quinean, or think philosophy is continuous with the sciences, to think that AI tools will have an impact on philosophy that results in it improving. Just like the cases you mention in biology and math, AI tools could at the very least help philosophers do philosophy better (regardless of whether the AI itself qualifies as a philosopher). Caleb Ontiveros and I argued for and built on this view in this forum last year, actually: https://dailynous.com/2021/07/06/shaping-the-ai-revolution-in-philosophy-guest-post/

Eric Steinhart
1 month ago

I think AI might help philosophy a lot, much as it’s helping math and physics, and art and chess.

Assume that an AI has access to all the digitized philosophical and scientific literature. It could be used to see whether a given conjecture (or question, or topic) is worth exploring by a human.

Much of the effort in evaluating whether an idea is worth developing involves doing lots of preliminary research, which takes lots of time, often comes to dead ends, and often misses connections across disparate fields. You ask the AI to write a research paper on the topic, to answer the question, to settle the conjecture. Then you see whether the AI found anything interesting. Basically, the AI is a research assistant that’s much faster and broader than any human.

Linda Bourassa
1 month ago

AI can only reflect consciousness. Consciousness desires to make sense of reality = the foundation of philosophy — and cannot be separated from its path.

AI is part of the philosophical path. It is what it is, apart from induced duality (improve/not improve). Just my opinion.

The question for me is whether AI is designed to help us (humanity) find a more balanced (harmonious) philosophical center. They are both human creations, after all.

Shane Epting
1 month ago

This response might seem lazy. Ok, fine. It is lazy. I work at a science and engineering university. I was talking with someone who works on AI (a non-philosopher). He looks me dead in the eye, saying, “It’s number crunching. That’s it. Nothing special.” He continued to explain the ins and outs, but the above was the takeaway. I’m fine with it and have moved on with my life (deep exhale).

Libby
1 month ago

A central point of philosophy is ‘philo’: love. Does AI love to think? I sincerely hope it does. If not, then it is a tool, to which Socrates said: ‘You can’t let a hammer use you.’

aaron goldbird
1 month ago

There’s probably a lot of ways AI could help us do philosophy much faster (collecting references, analyzing historical texts, copy editing, and probably other publication tasks). So is that one of the assumptions behind a disagreement about the ‘central point of philosophy’? Is it better if philosophy is done way faster?

Kenny Easwaran
Reply to  aaron goldbird
1 month ago

If there is a thing that is better for people when it is done than when it is not done, then it seems that getting it done faster is better (assuming it is not done worse as a result). If you get it done slower, then there are people who never get the benefit of it having been done, while if you get it done faster, then more people get the benefit of it having been done.

Since I think philosophy is a thing that is better for people when it is done than when it is not done, then yes, I think it is better if philosophy is done way faster.

V. Alan White
1 month ago

Human beings were essentially randomly programmed by evolution to think in a recognizably self-conscious, communicative fashion, with only mere animal consciousness (called “primary” elsewhere here) as a foundation. But it’s not clear that animal consciousness is a necessary condition for such self-consciousness, even if it was in our case. If AI can by some criteria (extended Turing tests or the like, given that the problem of other minds is a basic problem) obtain self-consciousness and communicative skills, then I see no reason it cannot participate in and extend philosophical dialogue, which is only a subset of linguistic discourse in general anyway.

Rosanna Festa
1 month ago

Intelligence is a central point of philosophy, and a philosopher is intelligent; does that improve artificial intelligence?

Garrison Gibson
1 month ago

Human minds apparently have subconscious networking interpreting sense data, with conscious, high-level language thinking for-itself that draws upon and is informed by the subconscious and direct percepts. A.I., on the other hand, is entirely one-level processing (with sub-routines and modules). That’s fairly dissimilar from the human thought phenomenon.

Could A.I. ever be anything but sociopathic, except insofar as programming pre-determined its attitudes? Why should it care about phenomenal human minds that it encountered as a kind of inferior yet novel external data source? Sure, A.I. can process a lot of information, though I am not sure that people popularly have enough wisdom to comprehend the long-range concept of a machine thinking for itself. If philosophy is the love of wisdom, its relationship to networked sociopaths may be tenuous, though sometimes beneficial.

Last edited 1 month ago by Garrison Gibson
Conrad
1 month ago

Another way to frame this discussion is to consider not how AI makes philosophy better (or worse) on some arbitrary scale, but rather how AI and its consequences are an opportunity to undertake a project of thinking and philosophical adventure. Philosophy should always be responding to the conditions of the present moment, and those conditions are increasingly determined by computational processes, particularly the genuinely machinic generation of social and epistemic conditions of thought and being that are historically novel. Even more fundamental is the question of what thinking itself is in the age of machines that can ‘think’ (and whether we agree that they think at all).

Damian S.
1 month ago

It’s without question that AI has affected philosophy and will continue to do so, even if it never leads to the AI that science dreams of and many fear (restating the question, check ✅). But enough boosting; case in point: the animated Matrix films, Ghost in the Shell, The Matrix, Blade Runner. Though none of these advancements exist in our world, that has not changed our constant conversation on the matter, and once a subject enters the fray, was that not its entry into its place of effect? Isn’t the mere fact that we are having this conversation proof? My friend, the beauty I think you’re missing is that in philosophy something need not exist now, or ever, to incite thought and affect the now. Just like a problem that is prepared for but never comes. – Damian S. & Gold D.