Grieving What AI Has Taken from Learning
“I wonder if these people have ever seen a student’s face when they finally understand something for the first time.”
Jane Sloan Peters, a professor of religious studies and historical theologian at the University of Mount Saint Vincent, was talking with her students about changes she has made to her teaching so as to safeguard student learning from artificial intelligence when “a wave of sadness washed over me, and I actually got choked up in front of the class.”
“Before AI,” I told them, “students used to work hard to come up with their own ideas. I’d help, and they’d struggle, but they’d come to something that was their own. That doesn’t happen anymore, and I grieve that.” Then I felt embarrassed and went on teaching as though nothing had happened.
Her reflections on this experience will resonate with many Daily Nous readers. She identifies one of the many feelings she has been having about how AI is altering education as grief.

[Trenton Doyle Hancock, detail of untitled etching from “Bye and Bye”]
AI promises great gains, but many educators sense that with its advent, we have lost so much. In this particular instance, students have lost the freedom to sit comfortably in a space of silence and uncertainty, a space as dark and rich as the spring soil in which seedlings are born. And I have lost the joy of sitting with them, encouraging them, watching as their thoughts take root and grow…
The [American Psychological Association] defines grief as “the anguish experienced after significant loss, usually the death of a beloved person.” We are witnessing the death not of a beloved person, but of love as the grounds of education. Love is the heart of a liberal education—a love of the truth, as well as the kind of friendship-love (philia) between teachers and students that makes it possible to pursue the truth together.
AI sycophants would have you believe that teachers like me are simply scared of a new system that will expose their personal deficiencies and outdated pedagogical methods—intractability about AI is ultimately self-interested self-preservation. Dear teacher, you are not fooling anyone but yourself—those lecture notes belong in the bin.
I wonder if these people have ever seen a student’s face when they finally understand something for the first time. What’s more, I wonder if they’ve ever seen a student experience the unique delight in not knowing. Certainly, the unknown can produce confusion and frustration. But sometimes I see in students’ faces a flash of something like the relish of a traveler who knows the journey ahead will be just as delightful as the destination.
Professor Peters says that this “delight in not knowing” is something that’s especially valuable to cultivate in students studying theology, and I imagine many Daily Nous readers think the same is true of philosophy.
At the end of her piece, which you can read in its entirety here, she says: “despite being aware of the real losses in education, I still believe the love and wonder at the heart of education can be salvaged, somehow. The question is, how?”
(via Zena Hitz)
I have to say, while I feel sorry for these grievers, I have a hard time getting in their headspace. From the educational point of view, I find the prospects of learning via AI exhilarating! I have all these opportunities now to level up across all sorts of areas for free using LLMs; last week I was using it to practice formal methods in philosophy, this week I got sidetracked learning about the history of classical music, and now I’m using it to better understand the origins of the Aeneid. I’m also constantly curious, more so than ever, I think, given the ‘affordance’ of knowing I’ve got answers at my fingertips to the most specific questions I could ask. From a learning point of view, I’ve never had a more exciting time understanding the world (more deeply than I ever have before), and I’m constantly wanting to learn more!! (I have found Gemini’s tutor mode amazing, by the way, as well as Claude Co-Work – and Notebook LM.)
I think the near-frantic energy conjured by the language choices in this comment versus the poetics/rhetoric of grief in the original post captures the disconnect almost perfectly.
I know the diplomatic thing to do is to do the big tent, equal validity thing (different people respond differently to different things), but I think the grievers are honestly being too dramatic about this! The grievy post says grief is “the anguish experienced after significant loss, usually the death of a beloved person.” I mean this is an OTT comparison … it’s just ChatGPT we’re talking about, not the death of Grandpa! Chin up! I’m sure we also enjoyed the light in a child’s eyes (the grievers think this has gone out, apparently) when they caught a ball outside for the first time, but then Nintendo came and they stopped doing that — their eyes light up now when they get to the next level on the Nintendo game, etc. Probably a bit of an out-of-date reference, but you know what I mean. We need to be able to adapt in life without throwing a navel-gazey pity party every time a new technology comes along and catches our attention and shifts our routines/attitudes. And as I see it, at least, if we wanna get knowledge and truth, we could do a lot worse than having frontier model LLMs in our pocket. (And for the record, I still get a twinkle in my eye of the sort the grievers are talking about when I learn an interesting new fact from ChatGPT!! Such as that Istanbul is on two continents at the same time!! Learned that on the new 5.5 model.) Twinkle in eye? Check!
“Every time a new technology comes along.” You guys always do this, trying to slip this passive language through the door, as though LLM technology just spontaneously coalesced out of the ether, and then one day we all woke up to find that our students had suddenly and mysteriously become mind-benumbed AI addicts.
The introduction of this technology to the general public, and its imposition upon academia, was planned and carried out deliberately. The same people who have profited from it are the ones who have paid exorbitantly to convince people like you that their product is a positive or neutral force in the world. It is not. It is poisoning the planet. It is making our children stupid. We do not and have never needed it. And the fact that you can’t bring yourself to care about what has been lost does not make us histrionic for doing so.
I’ll move on from literary analysis to philosophical analysis, same effect:
“We need to be able to adapt in life without throwing a navey gazey pity party every time a new technology comes along and catches our attention and shifts our routines/attitudes.”
Setting aside the slippage between token (recent advances in generative AI) and type (technology advancement), this is a thesis about how to live a good life. It is substantive and by no means a given. It is, straightforwardly, exactly a negation of the substantive view expressed in the original post.
“By no means a given” is extremely charitable. The thesis under examination is simply an endorsement of nihilism, the claim that we shouldn’t allow ourselves to get attached to anything because it might change, that nothing is intrinsically more valuable than anything else and only weepy losers think otherwise.
Nah, I’m not nihilistic! It’s more like this: these people having the pity party are like, “I’m grieving that children no longer have a spark or glimmer when they learn something the old-fashioned way.” I’m actually appealing to the intrinsic value of knowledge (knowledge is valuable; we should, ceteris paribus, pursue it) to say, “Yeah, but we can gain knowledge more expediently now, something to be rightly celebrated, not wept over.” Compare: it might once have brought a sense of achievement to overcome obstacles (walking a mile) to get to school, and thanks to cars kids now just get dropped off and miss the special glow of overcoming those obstacles. No sense wallowing in poetic metaphors and grief because we’ve lost the kinds of obstacles that certain old-fashioned achievements were predicated on. Having instaknowledge via an LLM in our pocket removes obstacles to acquiring knowledge the old-fashioned way (slugging it out with pen and paper and libraries), but getting the facts fast frees us up to pursue more ambitious intellectual projects; again, something to be celebrated, given the intrinsic value of knowledge.
Why don’t you take a look at academic performance trends and get back to me.
Yes, yes, correlation and causation, etc. But one way or another, your claim that AI is or has been good for learning doesn’t match the statistics. AI usage is not correlated with macro-level academic improvement, and thus a fortiori certainly hasn’t caused it. It is, however, correlated (along with a complex of other related developments) with a massive downturn in academic performance.
That seems worthy of grief.
This is besides the point, but:
I don’t think Buddhism is generally taken to constitute an endorsement of nihilism, yet it entails (as far as I understand) the claim that we shouldn’t allow ourselves to get attached to anything because it might change (follows from doctrine of impermanence) and that nothing is intrinsically more valuable than anything else (follows from doctrine of no-self).
(And in fact, given the doctrine of suffering, maybe it even entails the “only weepy losers think otherwise” bit!)
I’m not so sure. The idea that “nothing is intrinsically more valuable than anything else” is the textbook definition of axiological nihilism. If Buddhism matches this description, then, well, it’s an axiologically nihilistic worldview. But I’m not really qualified to adjudicate that. Surely Buddhahood is more valuable than samsara-life, right?
Fair enough; this seems like a modus ponens/modus tollens issue. I was taking it as given that Buddhism surely constitutes/provides some sort of positive account in the realm of the good life and, insofar as it matches that description, there is therefore a gap between that description and nihilism. You are running the argument in the other direction, which is also fine.
That being said, the other direction seems like a much more heavy swing to me (epistemologically speaking). That is, I’d want to understand Buddhism on its own terms a lot more than I do and be much more confident in my understanding of the scope of my understanding it, before that direction becomes the direction I’d be comfortable running.
Well, if you want to understand Buddhism… a good way to do it is to wipe the tears of grief (I still don’t get all this grief talk!) and look it up on an LLM! (joking not joking)
Yes but the point is about students, not about those who already have been educated, graduated, whatever. Sure, perhaps we are all students in some sense, but there are differences. Besides, there is a difference between contexts: you presumably do these things because you’re excited to learn about things (for example). Whereas the standard context for students is not necessarily this (although it can be and it would be best if it was): they want a good grade, want their degree, want to finish their course of study. This often introduces a very different motivational mindset from yours.
I think this is the core of the problem: the spark in the eye is gone because some students are motivated only to get a good grade and a degree, and lack an intrinsic yearning for knowledge. AI can help the former kind of student, as well as the latter, to achieve their goal.
Instead of fighting AI – and the next new tool, and the one afterwards – we’d do better asking how education became a means toward an end instead of a goal in itself.
genuinely curious: given the penchant for errors especially in free llms–and i don’t mean hallucinations but simple factual errors–how is it that you can confidently feel that you are leveling up? in my experience, info about something like classical music or the aeneid (topics i have only basic knowledge of) would be highly suspect, at least in the details, coming from an llm.
additionally, i am curious to understand what you consider to be the innovation of llm-based learning over simply reading about a topic from vetted sources. let’s take the stanford encyclopedia for example, although you could use google scholar or academic search premier (presuming you have uni access), or even dare i say it strong wikipedia articles. presumably if one follows the links the llm provides, this is what one is already doing. if one doesn’t follow the links, one is subject to the llm’s confidently stated errors and misreadings of the vetted sources, something that very much continues to happen.
Over the decades I’ve leveled up a lot of knowledge and understanding through the use of Wikipedia, though I’ve often later discovered that the perspectives it gives are skewed and missing important facts, and occasionally even wrong. Neither Wikipedia nor LLMs alone can get someone to actual mastery of a subject, but they really can provide a lot of knowledge and understanding, and usually more than is possible from a single undergraduate class on a topic.
Also, over the decades, I’ve developed an understanding of which types of questions I can answer better through a Google search and which I can answer better through a Wikipedia article and which through a Reddit search. There are now some types of questions that are better answered through an LLM (often the sort of thing that involves a slightly “diagonal” angle through topics, that might be addressed by someone reading a couple Reddit posts and open-access academic articles).
No kind of knowledge that a non-specialist can build will be as good as that of a specialist – and it’s difficult for the non-specialist to get that even if a specialist is present to walk them through the literature. But non-specialists can use an LLM now to get a kind of knowledge that they couldn’t get before (just as Reddit and Wikipedia and Google each introduced kinds of knowledge to non-specialists that weren’t possible before them).
And I should add, I absolutely do grieve for what we have been losing in education, particularly for the inability some students demonstrate to hold uncertainty as a tool for achieving deeper understanding, and their unwillingness to do hard work. Some of this has been disappearing for years, between the rise of short-format video and social media, and students taking advantage of things intended as accommodations for some in order to avoid work. (I despair at the number of times I’ve seen students skip class because the slides are available on Canvas, and they think that reading slides is nearly as good as being in class, thinking at the pace of speech.)
But the rise of LLMs has created a new kind of shortcut that is making some students skip any kind of thought. The fact that they enable so many amazing new things does not cancel out the deep destruction they have caused to a certain model of education. We will have to find new ways to get students to think – will we find some way that forces them to do it, or some new method of presentation that gets them to an authentic interest in it, or some weird muddle that mixes these things? There will be new opportunities for sure, but we will have lost something important, and I worry that we won’t replace that thing.
i don’t deny your central claim or that you *can* use llms to learn stuff. but i don’t see this post quite addressing either question i asked. it seems to me i get more demonstrably false answers from llms than i do with a reasonably thoughtful google search. further, it seems to me that the risk of these wrong answers negatively impacts the efficiency of using llms. to be clear, i’m not talking about coding or logic, etc.
it’s worth adding perhaps that often when i express something similar to this, i am told that i am using the wrong llm or that i ought to make a better prompt. but in either case (of checking another llm or iterating prompts until i get what i need), i am adding labor to the process of getting the correct information, whereas if i, say, check SEP and follow a link to my library’s ebook from the references, i have a clear pathway to information that has a level of reliability built in.
I’m talking about frontier LLMs; I’m leveling up like crazy on GPT 5.5 and Opus 4.7 – the attitude that these things are full of hallucinations and unreliable is getting outdated quickly; these frontier models are about like Wikipedia in terms of reliability (some errors, but good enough!) – basically like having a permanent MOOC source (check out Notebook LM’s instructional videos) for highly specific things you want to know, and you get answers immediately. My preference is to ask Notebook LM to make videos of papers I want to read and watch them, and then talk to Gemini about it, asking more questions and using the study cards it makes me, quizzes, etc., all free and immediate and all on your phone.
how is it that we get access to frontier models “for free” per your initial comment?
you assert a low error rate (note, i specifically excluded hallucinations in my prior question. not because they don’t matter or don’t happen) but i’m curious how one would know the error rate level if one is getting new information? wouldn’t that mean that one had verified that information by consulting the provided sources? which in turn means that one would *not* be increasing learning efficiency but simply using the llm as a search engine? on the other hand, if one doesn’t do this, how does one establish that the llm output is relatively error free? it seems like we’re missing a piece.
measurements of things like error and hallucination rates seem to vary dramatically based on study and benchmark designs. we can get any answer we wish (kind of like the llms). and frontier models haven’t been around long enough to have robust studies of real world error issues.
i don’t think anyone doubts that those who already know material can get llms to produce strong accounts these days. but that’s not what we’re talking about here with students or with lifelong learners.
This talk of ‘levelling up’ is very Dragonball Z.
In most games where you level up, there’s a built-in metric: you know what the maximum achievable level is and you can check your exp to see your progress to the next level.
None of those metrics are available here.
Peters has made the case beautifully. Devastatingly, even. When I think on what the billionaires have stolen from us with their environment-destroying, water-guzzling toys, it’s all too easy to plunge into a state of grieving.
Something I would venture to add: not only are the joys and generative agonies of learning beautiful to witness, they are also good for the brain. Being forced to think strengthens the mind in the same way exercise strengthens the body. To the AI sycophants: Here are the fruits of your beloved LLMs: flabby, stultified minds unwilling to stretch themselves around a difficult idea that requires more effort than formulating a 10-word prompt. I see this every day.
Going to work now gives me the same feeling I get when I look at a bleached coral reef. Incidentally the neoliberals are responsible for both.
Of course, at least to an extent and at least passively, we’ve empowered these people to do what they do, to cause the damage they are causing.
Who’s “we”? By the time I reached adulthood the groundwork for all this had already been laid. By the time I had attained even the tiniest modicum of clout in the discipline, the onslaught was in full swing. If by “we” you mean donation-hungry members of the academic establishment, then of course you’re correct, but I’d prefer not to be lumped in with them.
Well, change ‘have empowered’ to ‘are empowering’. Aren’t we, all of us? Or are we, all of us, most of us, many of us, just victims? Sure, ‘we’ might overgeneralize, but that is just a quick proxy, a placeholder. I cannot judge your situation, but surely it is not merely a generational matter, with all mistakes and guilt in the past and nothing in the present?
Yes, I suppose we could all be doing more. To the extent that we’re not doing our utmost, we are all complicit. But I assure you, I’m trying my best.
I would say that by passively accepting the computer’s hegemony over postsecondary education we’ve been at least complicit. We all just normalized the use of the very devices that are hosting the demolition of education. None of us in philosophy had to do this.
All for minor gains in convenience. Premised on a bet, even at the time a pretty shaky one, that nothing like AI would ever be invented.
We should have reacted to this danger earlier; I recall those who did by sticking to traditional methods, and in my PhD department we smirked and giggled at them. Who’s smirking now, I wonder…
Yes, my considered opinion is that allowing high-capacity personal computers to be used by the rank-and-file public has been utterly catastrophic in its long-term consequences. AI is just one example. The devices have also facilitated political polarization, new avenues of sexual exploitation, hyper-materialism… the list goes on.
We should have regulated the technology immediately, as we do guns. Left unchecked, it has proven to be at least as dangerous and harmful.
But again, I was still in elementary school when this particular point of intervention would have been possible, so I’d say the onus for that one does indeed rest on the shoulders of the generations immediately preceding my own.
I’m pretty good at not letting awful stuff get to me emotionally, so I wouldn’t characterize myself as having engaged in much grieving, but otherwise I very much share the sentiment. The mind boggles at the amount of damage LLMs have done to people’s intellects in such a short time.
Perhaps if the LLMs had wiped out the brains of everyone over 40, it would be cause for great concern but relatively remediable, since the young could take their place. But the opposite has happened: the LLMs have started eating us at the bottom. Sometimes I think it will be a miracle if civilizations don’t collapse within a generation or two.
One ray of hope is that I’ve noticed a lot of young people turn against the LLMs. But at least in my personal experience this has happened mostly among the most excellent students. As someone with strongly egalitarian tendencies, I am not thrilled to see that the way this shakes out may result in a substantial widening of existing divides in education.
Dear Jane,
I know exactly what you mean. I had my moment eight months ago. But, in a spirit of antifragility, I started this:
https://certifiedaifreeskillsandknowledge.org/
Now, I feel awesome — and so do my students.
Join us!
Best,
Marc
My confidence in the face of name-calling, btw, stems from this not-yet-finished book: https://philpapers.org/archive/CHAEES-2.pdf