AI & Philosophy Degree Programs
Faculty at Arizona State University are developing a new philosophy major program with a focus on artificial intelligence, consciousness, and ethics.

[Sou Fujimoto, “Many Small Cubes”]
The new AI-focused degree is currently undergoing administrative approval, with plans to launch in Fall 2027, according to The State Press, which reports that “the degree aims at taking an interdisciplinary approach while remaining primarily focused on philosophy.”
Are other philosophy departments currently offering or developing new AI-related degree programs at the undergraduate or graduate level, or taking a significant role in the development of interdisciplinary AI-related degree programs at their institution? Let us know about it.
Related: Grad & Undergrad Philosophy Programs Focusing on Science/Technology
King’s College London has a joint Artificial Intelligence & Philosophy undergraduate degree: https://www.kcl.ac.uk/study/undergraduate/courses/artificial-intelligence-and-philosophy
These AI / philosophy programs, while probably well intended, are bad, bad news: it’s letting a snake into the henhouse. This AI stuff is ruining the discipline, and as Patrick Lin, who posts regularly here, always says, AI can’t even think, so why should we listen to what it says? Philosophical inquiry of a serious sort needs to be completely hermetically sealed off from AI, not integrated with it! Arizona State administrators probably think they can make a quick buck off this gimmick (putting AI in the title of the programme – not a bad idea, fair enough, from a business point of view), but it is disgracing the sanctity of the philosophical essay; programmes like this one are harbingers of the death of philosophy as a discipline and should be rejected in favor of more traditional, classical philosophical education, with texts and a blackboard and deep, deep thought.
Anyone who shares those closing sentiments might enjoy this: https://certifiedaifreeskillsandknowledge.org/
For centuries, people have thought that the study of mind, consciousness, knowledge, propositional representation, decisions, language, and complex behavior is central to philosophy. This has often meant that philosophers are interested in animals and gods and angels and groups, as things that approximate human versions of these things without being the same. Why would we not study artificial intelligence?
That said, we should absolutely be asking people to think and not just pretend to think in a way aided by robot-generated essays!
Would you make the same judgement for literally every other scientific tool or method? Because none of them can literally think, and so we shouldn’t use them? Of course, you’re here making the assumption that people are treating AI as if it’s conscious, which may be true to an extent for the general public, but this is absolutely false among philosophers.
The circle-jerk reaction to ANYTHING AI related really speaks poorly of a discipline that strives for rigorous, clear thinking. If you even bothered to look at the ASU press release, you’d quickly realize that your comment is a total strawman. Your alleged problem with “AI stuff” that is “ruining the discipline” is precisely the sort of problem that the program is planning to deal with.
Reviewer 3, just because they ask some fundamental questions about AI doesn’t mean they aren’t letting the snake into the henhouse in the way I’ve cautioned against. Just consider this from the press release – a philosopher at ASU says: “There’s a demand for people to work in what they call ‘alignment in tech,’” Fillmore-Patrick said. “We’ve found evidence, when we were researching (and) starting this program, that that’s a growing field.” If the AI alignment stuff (and associated job market) is part of what’s driving this thing, you can bet there will be a lot of AI use going on in these philosophy classes – and the “AI literacy” cornerstone of the programme suggests as much. I’m not saying don’t have a class that sits back and asks “what is AI?” But it’s looking to me like this program is doing a lot more than that… training the next generation of AI users to get jobs (and perhaps also committing patricide against Father Socrates in the process…)
Plato killed Socrates (not just Parmenides) a long time ago when he decided to put his words onto the page. Maybe we need more parricides in philosophy.
It sounds like you might be conflating (a) AI as an object of philosophical inquiry with (b) incorporating AI in one’s method (e.g., “listening to what [AI] says,” “disgracing the sanctity of the philosophical essay,” “programmes like this one…should be rejected in favor of more traditional, classical education, with texts and a blackboard…”).
As far as I can tell, the philosophy concentration in AI that ASU is developing regards (a), not (b).
Claims you make (like “AI can’t even think”) are conclusions one might draw based upon (a). Without such inquiry, such conclusions would seemingly be mere prejudice or drawn in ignorance.
I appreciate the shout-out, Paul, but perhaps surprisingly, I’m ok with a Philosophy + AI major, at least in principle and as long as it’s appropriately skeptical. But I can also appreciate why folks might be against this plan, and I’ll explain both in a moment.
(I’m not anti-AI per se but believe that LLMs are seriously overestimated by users and industry. We don’t need to rehash why here, but here’s the basic idea.)
First, if I may reply to reviewer #3 on your behalf: “AI can’t even think so why should we listen to what it says” is only a very rough paraphrase of my view, which rev #3 is taking too literally:
Unlike other scientific tools, say a microscope, LLMs aren’t merely illuminating new domains that we can’t see with our naked eyes, but they’re giving us propositions, claims, arguments, ideas, etc. wholesale—presenting them as the truth, as if they understand what they’re talking about. No other tech does this. But given their design, there are good reasons to not trust LLMs to the extent that most people seem to. (See the last Daily Nous thread on AI.)
So, some people might react against this Philosophy + AI major because they think AI (or LLMs, more specifically) is mostly snake-oil, and why should we bless or legitimize snake-oil by creating an academic major around it?
Perhaps they would have a similar reaction to a major that pairs philosophy with, say, blockchain, or metaverse (VR/AR), or some other overhyped tech du jour. Beyond tech, imagine a major on Philosophy + MAGA: what is even the point of that? Maybe that’s appropriate for a Rhetoric program, but not philosophy insofar as it would try to make sense of the senseless and legitimize it along the way.
Another kind of possible objection is this. What might be ok for a specific philosophy course doesn’t mean it should be expanded into an entire major program. E.g., there are/were classes on Philosophy + Buffy the Vampire Slayer, which I have no doubt engage with substantive philosophy, but (1) can they be expanded into a major program and (2) should they be?
I don’t know about (1); maybe it can be expanded to connect to every key area of philosophy. But (2) is probably a no; the show just doesn’t have that kind of longevity to deem it worthy of a major program and the investments needed. (Sorry, Buffy fans.)
Philosophy + AI is already being taught in many different kinds of philosophy courses: ethics/aesthetics, metaphysics, epistemology, and maybe even logic. So, I think the answer to (1) is yes here. On (2), it’s very reasonable to place a bet on AI’s longevity, even if LLMs are ultimately a dead-end, but it’s also understandable why some folks might not see it this way.
I’d think there’s room for 1,000 flowers to bloom here. The new major plan isn’t obviously dumb, but we’re in the middle of an intense cultural moment re: AI, so naturally there would be strong reactions both ways.
So, if you want to fight someone, maybe don’t pick on the philosophy departments that are just trying to be creative and more relevant to the modern world, and/or just trying to survive…
UMD is in the final stages of approval for a major titled “Human-Centered AI” that is slated to begin accepting students this Fall (2026). The program is broader than “Philosophy and AI”, though it is hosted in the Philosophy department and includes several philosophy specialization tracks. I was interviewed about the vision for it for the college’s newsletter here: https://arhu.umd.edu/news/human-centered-approach-ai
Umeå University has offered a Bachelor’s degree in Philosophy and AI since 2024 (with some courses in Swedish and some in English): https://www.umu.se/utbildning/program/kandidatprogram-i-filosofi-och-artificiell-intelligens/
My university (Ashoka University) has a Philosophy & Computer Science degree.
The university is also launching an entire department of AI, separate from the CompSci department. This seems to be a bit of an overreaction to me, but I suppose if a donor is willing to give you tons of money to start an AI department, it’s silly to say no. If the department hires people who teach decent classes, I imagine we would be open to a Philosophy + AI degree, although I haven’t asked anyone else in my department.
I loathe LLMs and what they are doing to the world, and I’m definitely sick of most AI ethics stuff after having taught it a few times, but it seems to me there’s plenty of room for undergraduate degrees focusing on the intersection of AI and Philosophy. Foundational issues in AI, questions about AI consciousness, and the massive field of AI ethics are all squarely within philosophy’s bailiwick, and I’m sure there are other interesting topics too. Even if there aren’t, those three are rich enough on their own to make the case for a degree like this.
It seems to me that, at the VERY least, it’s an open question whether or not AI development should continue. It also seems to me that that’s the biggest value philosophers could bring to discussions around AI–contributing to the existential questions. Insofar as philosophy and AI programs seem to be built on the presupposition that these questions have already been answered, I’m very skeptical that these programs are a good thing.
Thank you, John Oliver
We are starting a new undergraduate minor, AI, Mind and Society, at the University of Toronto Scarborough next fall (2026–2027). The new minor is housed in Philosophy, and the majority of the courses are in philosophy, but there are electives in the humanities as well as the sciences.
Purdue has a BA in AI that’s housed in the philosophy department.
https://www.cla.purdue.edu/academic/philosophy/students/undergraduate/majors/artificial-intelligence/index.html
UIUC has a CS+Philosophy major.
The University at Buffalo now has two ‘AI+Philosophy’ degrees: a BS in AI + Logic and Ontology and a BS in AI + PPE. Both degrees are substantial (65–75 credits) and involve students taking around 20 credits of technical classes (linear algebra, programming, etc.), a cluster of four or five classes on the social impact of AI, and then 30 or so credits in philosophy.
Great seeing these initiatives. It might be worth sharing a plea to those building and directing such programs: consider reaching out to folks with PhDs in philosophy who work with the AI industry, broadly construed. As someone in that sort of position myself, my hunch is that our experience is relevant to students pursuing their degrees in such programs.
Northeastern University London has been running a Philosophy & AI MA degree for several years now: https://www.nulondon.ac.uk/degrees/postgraduate/philosophy-and-ai/
Also home to the (AI-orientated) Computational Philosophy Lab: https://cpl.sites.northeastern.edu/
This is fine as long as it truly is directed towards the study of AI-related issues rather than AI-normalizing propaganda.