The AI Threat, the Humanities, and Self-Cultivation


“The humanities are… a gateway to and instigator of a lifelong activity of free self-cultivation. The changes they provoke in us are not always for the happier, or the more remunerative, or the more civically engaged, but when things go passably well, these changes are for the deeper, the more reflective, and the more thoughtful.”

Those words are from “The Coup that Failed,” by Talbot Brewer (Virginia), published in The Hedgehog Review in 2014.


When I’ve spoken with colleagues about the worry that many students will cheat on assignments by using ChatGPT or other large language models (LLMs), we invariably bounce between three approaches to the problem: “defense”, “offense”, and what I’ll call “no-fence”.

“Defense” involves devising assignments and assessments we hope might be, to varying degrees, LLM-resistant. “Offense” involves incorporating the use of LLMs into assignments in specific and guided ways, in order to teach students how to use them well. Meanwhile, the “no-fence” approach puts up neither measures to guard against illicit student use of LLMs nor markers to guide students in their effective use. It says, rather, that if students want to use LLMs to cheat or avoid work, that’s something they’re doing to themselves, and it’s not the job of professors to paternalistically prevent their students from sabotaging their own educations.

I see the appeal of the no-fence approach. But I also fear it could be disastrous.

For one thing, it seems too flip a response to potential damage to intellectual culture. It’s not as if each student who uses an LLM affects only their own education. Rather, one’s use of it can affect other students’ sense of fairness, their sense of what’s acceptable, and perhaps their sense of whether cheating with an LLM is “rational,” given their aims. Cheating may be contagious along multiple vectors, and, if sufficiently widespread, it turns higher education into a charade.

I expect some readers to respond, “What intellectual culture?” or “Isn’t higher education already a charade?” They might be thinking: most college campuses, from the perspective of most students, are already not intellectual environments, and much higher education isn’t about learning, but about credentialing and signaling.

I don’t know to what extent that cynical description is accurate. But I do know it’s not an accurate description of all students on all campuses. There remains enough interest among students in knowledge and learning for us to be concerned about threats to the culture that would extinguish it.

When I hear someone say they’re going with something like the no-fence approach, and I raise these kinds of worries, the response is typically something like the following: the thing I worry is threatened by “CheatGPT” is the very thing whose value we can discuss with our students, so as to get them to see why they ought not to cheat. That is, we should talk with our students about the value of education in the humanities.

What do we say here? If we say something like, “you’ll learn how to write well, and communication is important in the workplace,” we are inviting them to reply with something like, “but in the workplace, we’ll use better and better LLMs or AIs for that, so I don’t need to learn how to write well.” Besides, the instrumental value a humanities education may have in making us better workers does not capture what many of us think is important about what we teach, and so such “practical” responses miss something.

Rather, we need to talk with them about how their education is good for them—not (just) good for them in their roles as employees, or consumers, or citizens—but as persons. This means engaging with students in a discussion about what’s important, and what we think they should see as important, for a life well-lived.

And that is why I began this post with the quote from Brewer’s essay about self-cultivation, about an education that can make a person more reflective and more thoughtful.

This vision of what a humanities education is about informs how we should think of our teaching. He writes:

The ideal teacher of philosophy is not someone whose opinions are to be accepted, but someone whose form of thought is worth emulating.

The Socrates we know is a dramatic persona Plato puts forward as worthy of emulation. I believe that such emulation consists in serious-minded lifelong engagement (engagement “unto death,” as Plato wishes to make clear in the Phaedo and Crito) in the activity of self-articulation, which is to say, the activity of bringing oneself into a more determinate form by bringing oneself into words. Here “articulation” is meant in both of its common senses: We give a more fine-toothed and determinate shape to our views about important matters (i.e., give them greater articulation) by bringing them into the space of words (i.e., by articulating them). This activity requires fidelity to our actual outlook, but it also alters that outlook by finding words for it that we are prepared to live by, hence setting the stage for another, more adequate articulation.

If this is philosophy, then philosophy is continuous with the sort of self-formative activity all human beings go through again and again in the course of their lives, provided they live with even a modicum of deliberateness.

On the first day of class, I tell my students about reasons to study philosophy. Among other things, I sometimes say something like the following:

We have a word to describe people who go about their lives doing and believing what they’re told, or what everyone else does, without thinking about why they’re doing or believing these things. That word is “thoughtless”. Most of us don’t think it would be good for us to be thoughtless. Rather, we want to think of ourselves as thoughtful. And we don’t merely want to think of ourselves that way; we want to be that way. We want it to be true that we’re thoughtful. Studying philosophy is one way of becoming more thoughtful.

I imagine most philosophy professors have something like this in their repertoire, and professors in other disciplines have their versions, too. Having something like that as a teaching mission statement is a necessary condition for a responsible version of the no-fence approach.

But is it enough? I say those words to a lecture hall full of students. How many of them were listening? Of those that listened, how many followed what I said? Of those that followed what I said, how many were at all moved by it? And of those who were moved by it, how many will remember it an hour from now?

Certainly it is not enough to say something like this once in a while, nor even to say it on a regular basis. It has to consistently inform and animate one’s teaching; it has to be a lesson that comes through again and again. How does one do that? And how does one do that in the large introductory lecture courses that dominate student schedules at the beginning of their college educations, and are thus so influential in setting students’ attitudes towards their education? (These are not rhetorical questions. Please share whatever answers you have.)

As teachers, we want to believe that what we’re doing is valuable. And we don’t merely want to believe it; we want it to be true. The threat that LLMs and other forms of AI pose to a culture of humanistic learning forces us to think about and articulate what, if anything, makes it true, or could make it true.

We will never convince all students that what we’re teaching is valuable for them in the ways we think it is, and in my view, we shouldn’t hope to, as there are many different goods that can be combined into many different kinds of good lives. So universal assent is not the standard we should hold ourselves to. Rather, we just need to reach enough students—enough to protect from threatening technology what we believe is valuable, and enough for us to honestly believe that what we’re doing with our teaching is actually good.

If we can figure out a way to do that, then I think I’d find myself more squarely in the no-fence camp. Can we?

Comments
Justin Kalef
8 months ago

Great post, Justin W. I’m most of the way through that piece I’m writing in defense of keeping strong anti-cheating penalties firmly in place: I’ll have it ready to submit for your consideration once I get through a different project I need to finish this summer. For reasons I give in the piece, what you call the ‘no-fence camp’ seems to me to be a terrible mistake, and it’s often hard to see what sort of consistent picture it’s meant to be a part of. But I’ll save most of that explanation for later.

For now: I think the bit you say to your students about the importance of being thoughtful is great, and I suspect that your little speech helps a number of them see the light and approach their studies with a sincere hope to better themselves. But, as you say, many other students at most colleges — maybe a majority — have been through so many years of inflated grades, lackluster teaching, and dumbed-down courses that their only remaining motivation is to get some career with the credential they’re waiting to be given for jumping through a few big, low hoops (if they think that far: many of them speak and act as though the credential itself is as far as they’ve bothered to plan).

One way or another, they see themselves as here to get the credential itself, not what that credential is meant to signify, so it just makes economic sense (subjectively) for them to spend the least amount of time doing the work they need for it. The risk of their being caught cheating with Chat-GPT and then seriously punished is very low, and the rewards are very high: the technology is free and it takes almost no time to use, and the resulting grade is at least as good as what most of them are apt to get by putting in over a hundred hours of study on the same course.

My main solution to this is to evaluate them on work that Chat-GPT cannot help them with, and to let them know this from the first day of class so that they won’t paint themselves into a corner by starting off with the wrong expectations. But I think it’s also important to stress to them that their career prospects will probably be much worse if they learn to use AI to do most of their thinking for them.

In other words, if your plan is to inspire your students to take your courses seriously by reasoning with them about their value, I think it will be useful to make clear to them that what you call the ‘offense’ approach to AI is also unlikely to work out for them professionally.

The problem with what you call the ‘offense’ approach can be seen more easily from the perspective of an employer. Imagine that I run a business or division with (say) twenty employees, most of whom write reports, business letters, updates for the company website, and maybe a bit of code to help solve specific problems. Along comes some new AI, which is not only far quicker at doing all those jobs, but is highly user-friendly (just as Chat-GPT is already). What is most efficient for me? To gradually replace the twenty employees with twenty new ones who are adept at using AI? What a waste of money! If the work the AI spits out is already as good as what the twenty human employees produce, but gets created in seconds rather than hours, why go to the expense of paying for any of those employees? In the time it takes for me to write a memo to each of the twenty employees, explaining to them what I need to have done, I could give those same instructions directly to the AI and have the completed task in hand immediately.

Learning to give instructions to AI does not take four years of college, on top of twelve years of prior schooling. The whole reason why students are using Chat-GPT to cheat is that it’s so simple to use and requires so little training. Moreover, if one wanted to teach students to operate Chat-GPT as efficiently as possible, and to take advantage of all it has to offer, why would one choose the means of getting those students to take courses designed to teach the humanities to humans?

If the ‘offense’ method — shifting from teaching students to read, analyze, think, argue, and write on their own to teaching them to offload most of that work onto AI — is to be preferred, then we could probably generate enough people to handle the reduced human workload just by altering the high school curriculum and shutting down the universities. In other words, it seems like a suicide move for higher education, but that’s where the ‘offense’ approach seems to get us.

On the other hand, it could well be that many jobs will still need to be done by a human. If all twenty employees in my business can be replaced by Chat-GPT or other AI, which will now design all my websites, write all my business letters and reports, etc., then perhaps one or two people will still need to read through all that output to make sure that it doesn’t contain factual errors or passages that imply something unfavorable to customers, and to try to figure out what lies behind the trends noted in the AI-produced reports.

These human employees would be valuable precisely because their work cannot be done by just dumping the AI-generated material into another AI.

And what should the education of those human employees look like? Well, it would be pretty silly to just teach them to use AI to do their reading, thinking, and writing for them. People trained that way are exactly the sorts of humans who should not be considered for the jobs! What is needed instead is people who know how to read, think, and write for themselves. And that’s just what we’re already teaching people to do.

To summarize, in a somewhat simplified manner:
– Your ‘defense’ approach prepares students for the jobs that cannot be done by AI.
– Your ‘offense’ approach prepares students for the jobs that will be made redundant by AI.
– And your ‘no fence’ approach simply creates an environment in which students who try to prepare for either of those jobs are made to look like fools for trying to learn anything, as their classmates will be motivated to get higher grades by cheating with AI (for which, unlike the students taught in ‘offense’ classes, they won’t have or need direct instruction).

Daniel Groll
Reply to  Justin Kalef
8 months ago

Thanks for this, Justins! Justin K: I find lots of what you say persuasive and may even have my students read your response to the “offense” approach. I worry, though, that there is no way to deploy the “defense” approach without losing something utterly central to not only what we (humanities profs) do, but, to put it a little grandiosely, without losing something critical to the human project of generating knowledge or (in a more pragmatic vein) devising solutions to problems. While there are definitely many ways to encourage, and inculcate, critical thinking — presentations, in-class essays, etc. — I think there is something super valuable about the kind of thinking that is fostered by doing the kind of writing that takes time, involves banging your head against the wall, struggling to think of an objection or a response to an objection, revising a draft, letting your paper sit for a day, tossing out that first crappy paragraph you wrote, etc. That kind of writing — which I think necessarily takes place over a stretch of time longer than what can be offered in a class session — is not just a *means* of expressing thoughts, but is itself a kind of thinking, and a kind of thinking which plays a central role in the human project I mentioned above. Of course, many many (many!) students don’t engage in this kind of thinking, despite our best efforts to get them to do so. But some do. And I think we really need people to engage in this kind of thinking. So: I don’t think we should avoid assignments that constitute *training* in this kind of thinking. But I worry that in trying to make our assignments largely AI-proof, we’ll remove a — if not the — central method for developing the kind of thinking I’m talking about.

Again: I’ve no doubt that there are *many* ways to develop critical thought. But I think about how central reading (thoughtful) stuff is to the project of advancing knowledge and so I worry about what happens when students are not trained to produce that kind of writing (while accepting that only a very small portion will go on to regularly do that kind of writing). Is there a way to give students the chance to do this kind of writing/thinking that is largely AI-proof?

Justin Kalef
Reply to  Daniel Groll
8 months ago

Thanks, Daniel! And I agree with you that having students write an essay at length, going through all the stages you describe, is a valuable part of the teaching we can offer them.

Assessment on the basis of essays is a relative latecomer on the university scene, in the grand scheme of things, and I don’t think it’s indispensable: many university graduates of yesteryear did fine without it. (Then again, they had already learned many of these skills before entering university, and very few of our students today have been taught to anything near that standard by their K-12 schooling.)

I’d be keen to hear a proposal for a graded essay assignment that isn’t apt to lead to widespread cheating, now that Chat-GPT is out of the bag. If anyone comes up with one, I’d love to use it!

Failing that, I intend to keep teaching the art of essay writing to undergraduates who want to learn it — only not for credit. If students are keen to learn the process for its own sake, I’m happy to teach it.

Preston Stovall
Reply to  Justin Kalef
8 months ago

One problem with this approach is that, as Daniel indicates, essay writing appears to be something that scaffolds student learning across the board. If that’s right, then we should be actively looking for ways to keep essay writing in the curriculum.

One option, for smaller classes with longer essay assignments, would be to expect students to regularly answer questions about their submissions, in class. This could come by way of either one-off (or two- or three-off) questions in the course of a lecture, or more formal presentations where students present their ideas and receive feedback and questions from the rest of the class. This wouldn’t work for large classes, of course, but my sense is that larger classes often trade long essays for shorter in-class writing assignments, and this at least preserves some writing in a way that is cheating-resistant.

Justin Kalef
Reply to  Preston Stovall
8 months ago

I think you’re right that teaching how to write essays now (and maybe always) is best left for smaller classes, or at least small tutorial sections of a larger lecture course.

I offered one essay-writing option to my students last semester, but nobody took it. The idea was that students who wanted to write an essay would write up a general outline of what they wanted to do (including of course some new argumentative move they wanted to introduce, or some new thesis they wanted to argue for). If that was approved, they would write a draft and then come in for what I called an ‘essay exam’, which would serve as something akin to the sort of prospectus defense graduate students do in some programs, though of course not quite as demanding.

That way, dumping the instructions into Chat-GPT would only be so helpful. Even if Chat-GPT came up with a viable argumentative outline for them, they would have to be prepared to defend that argument at its weakest points against some difficult-to-predict objections I would partly devise in advance and partly come up with on the spot. I’d keep a recording of the objections I’d raised that they hadn’t been able to answer. Then, they could go home and finish the essay.

I figured that this would provide an exciting opportunity to develop as a philosopher and essay writer, providing much more in-depth feedback than one could provide to a larger number of students. It would also put me in a good position to write a letter recommending the student, if one were requested.

At the same time, leaving students with the default option of a regular oral exam on the course material meant that people who might be inclined to cheat on an essay would be able to avoid the temptation, which they all did.

Laura
Reply to  Justin Kalef
8 months ago

One can think clearly and logically only if one can communicate clearly and logically. This is one of the most valuable things we teach in philosophy – clear and logical thinking. I don’t think we can teach it without asking students to demonstrate and practice it themselves. AI assembles and organizes information but cannot evaluate things lacking a pre-determined truth value. So far it cannot replace this human task, which is essential for working professionals. There is no reason to stop teaching these things and assume they can be handed over to AI, then. Using AI makes assembly and organization tasks much easier, and it’s not clear that using it as such a tool is bad, even for students. The remaining challenge is finding ways to isolate the tasks of reasoning, deliberation, and evaluation, in order to teach them and assess whether students can do them well. It may take a little time and experimentation to accomplish this while AI generators are widely and freely available, but I think it can be done. In the meantime, nothing stops us from having students write in person with a pencil, or even typing on a keyboard in a lab during class.

Eddy Nahmias
8 months ago

Thanks for this, Justin. I wish our discipline and other liberal arts did not have to try to promote themselves. And ultimately, the best way for students to see the value of these fields is just to engage in them. But meanwhile, I’ve been trying to come up with some ‘advertising’ using the slogan “Think for Yourself!” Here’s a blurb I just wrote that is cheesy … but IMHO not bullshit either.

Think for Yourself! Philosophy is the study of what we should believe, what is real, and how we should live our own lives and our lives together. In a world with endless (mis)information, you must learn how to think for yourself to decide who to trust and to find what is real. In a world with increasing polarization, we must learn how to think for ourselves to find common ground and build a more just society. In a world with chatbots and A.I., you must learn how to think for yourself, to write and communicate your own ideas, and to work with other people to solve complex problems. Philosophy will not tell you what to think. It will help you learn to think for yourself. It may also help you learn to think about the self you want to be.

naive squirrel
Reply to  Eddy Nahmias
8 months ago

I like this reply. In so far as the mantra “think for yourself” is tied to the idea of articulating your self, I think it might help to engage in more discussions of the value of authenticity, which I think of in terms of the value of NOT outsourcing our intellect, particularly when it comes to making important decisions that aren’t clearly calculable (or to be made based on testimony or some standard blueprint), and require reflective judgement and contemplation.

Patrick Lin
Reply to  Eddy Nahmias
8 months ago

Related video, on Nietzsche and thinking for yourself:

https://www.youtube.com/watch?v=e-k7b8Zmh70

Meme
Reply to  Eddy Nahmias
8 months ago

Cheesy or not, it’s inspired and I agree with it wholeheartedly! In more pessimistic moods, however, I’m also inspired by Butler (from his “Darwin among the Machines”):

“Day by day, however, the machines are gaining ground upon us; day by day we are becoming more subservient to them; more men are daily bound down as slaves to tend them, more men are daily devoting the energies of their whole lives to the development of mechanical life. … Our opinion is that war to the death should be instantly proclaimed against them. Every machine of every sort should be destroyed by the well-wisher of his species.”

(A bit melodramatic, but that’s only… human.)

naive squirrel
8 months ago

I’ve been watching some of these discussions about how to handle ChatGPT for a little less than a year or so, and maybe I’m just in too narrow of circles, but one thing that seems to be missing or really underplayed is this question:

What *can’t* AI do or produce that we think is valuable for humans to do or produce, and how do we educate and test specifically for THAT ability? (And why is that ability so valuable in the first place?) I haven’t really answered this question myself, but I also haven’t really seen it being talked about a lot (I do see mostly practical things like whether such and such is a bad mode of policing, or how to detect AI, or whatever).

So I appreciate this opportunity to consider the question of what makes philosophy a valuable and unautomatable (for lack of a better word!) activity, and I would start by answering the question: why can’t AI do philosophy (assuming we think it can’t)?

My thinking is that we had better be raising the bar – the way AI is so indiscernibly able to fulfill many of our ordinary, seemingly “intellectual” functions (that humans would normally get paid to do) just means that we have to work on identifying what our function is, beyond whatever sorts of abilities AI might have for detecting and responding to patterns and so on. AI matches us because we’re prone to dulling ourselves in ways that make us match it (e.g., instead of uniquely stylizing ourselves as communicators, inventing metaphors and symbols, applying our imaginative, intuitive, first-personal perceptions to important and difficult questions and decisions, drawing novel connections between ideas, etc., we think of ‘learning’ as coming to know how to write a professional memo, or some other mundane task that follows a pretty strict set of commonly accepted rules and patterns that just get reinforced by continued use…).

I think this discussion can and should bring us to spend more time discussing ‘existentialist’ issues regarding what it is to be (truly) human, what it is to be one’s ‘authentic’ self (in individualized, personal terms of being human) and why it matters, and so on…

Meme
Reply to  naive squirrel
8 months ago

I wholeheartedly agree with this comment, especially the points about creativity (etc.) and raising the bar.

Regarding the question of what philosophy might contribute, here is one possibility. A common idea is that philosophy is about thinking *specifically* for oneself. But one might also think that what it is to automate an activity is for the automator to do the activity for you, and hence for you not to do it for yourself. In that case, it would follow that philosophy is about something which cannot be automated, even in principle.

(Counterpoint: given the horrific possibility of somehow merging natural minds and AI, perhaps the automator can do the thinking for you *by* doing it for itself. Counter-counterpoint: in such a scenario, *you* are not there anymore—at least, not you as you were pre-merge.)

But is this valuable? I tend to think so, and I also tend to think that the value of thinking for oneself is intrinsic and totally unrelated to its usefulness. (“Tend” is the operative word here—I have my doubts…) A related worry, however, is whether this value is unique to philosophy, or whether it instead is common to many fields. Just some food for thought.

Meme
Reply to  Meme
8 months ago

I did not see Eddy Nahmias’s comment (below) before posting this. Suffice it to say that I agree!

Laura
Reply to  naive squirrel
8 months ago

AI is not a deliberator. It can tell you what other deliberators have said and it can organize that information beautifully, but it doesn’t deliberate.

Justin Kalef
Reply to  Laura
8 months ago

Well, perhaps it doesn’t deliberate itself, but it does seem to do much more than find what deliberators have said on the topic and repeat it. You can see examples of this by asking it to deliberate on things people are unlikely to have discussed before.

For instance, I just made up this issue and asked Chat-GPT to deliberate on it:

“Should there be a law in the USA requiring all of us to carry at least ten dollars in cash on our person at all times? Please deliberate carefully on this question and explain your answer.”

Here’s the response I got:

The question of whether there should be a law in the USA requiring individuals to carry a minimum amount of cash on their person at all times is complex and involves several considerations. Let’s deliberate on both sides of the argument:

Arguments in Favor:

Emergency Situations: Having a small amount of cash on hand could be beneficial in emergencies when electronic payment methods are unavailable due to power outages, technical glitches, or natural disasters. It could allow individuals to purchase essential goods and services when traditional payment methods are inaccessible.

Privacy and Anonymity: Cash transactions provide a level of privacy and anonymity that electronic transactions do not. Requiring individuals to carry cash could help protect their financial privacy and reduce the ability of entities to track their spending habits.

Financial Inclusion: Not everyone has access to banking services or credit/debit cards. Requiring a minimum amount of cash could ensure that those who are unbanked or underbanked still have a means of participating in transactions and economic activities.

Preventing Electronic Fraud: With the increasing prevalence of electronic payment methods, there’s also a rise in electronic fraud and hacking. Requiring cash on hand could act as a safeguard against falling victim to such cybercrimes.

Arguments Against:

Personal Freedom: Requiring individuals to carry a minimum amount of cash infringes on personal freedom and choice. Citizens should have the right to decide how they manage their finances and whether they prefer electronic transactions or cash.

Unnecessary Regulation: Implementing such a law could be seen as unnecessary government intervention. People should be able to decide for themselves whether to carry cash based on their needs and preferences.

Cost and Convenience: Mandating the carrying of cash could burden individuals who prefer electronic transactions or find cash management inconvenient. Carrying cash might not align with modern lifestyles and could be seen as an unnecessary inconvenience.

Enforcement Challenges: Enforcing a law that requires individuals to carry cash would be difficult. It could lead to potential conflicts between law enforcement and citizens, especially in cases where people forget or choose not to carry cash.

Technological Progress: As society progresses, the reliance on physical currency is decreasing in favor of digital and mobile payment methods. Enforcing the carrying of cash might not align with the direction in which financial transactions are evolving.

In conclusion, while the idea of requiring individuals to carry a minimum amount of cash on their person at all times has some potential benefits in terms of emergency preparedness, privacy, and financial inclusion, it also poses significant challenges related to personal freedom, convenience, and the changing nature of financial transactions. Striking a balance between individual choice and societal needs is crucial. Instead of a mandatory law, encouraging financial preparedness and educating citizens about the advantages of having a small amount of cash on hand might be a more practical approach.

Ian
8 months ago

It seems to me that within something like what’s being called the no-fences approach, there’s a lot of middle ground between “they’re all honest and won’t cheat” and “they’ll all cheat.” The question here is what we are teaching at the end of the day. I see myself as (trying to) teach analytical skills, the value of close engagement with source material, effective communication, and the importance of critique as resistance to ideological & intellectual demagoguery.

Now it may be the case that LLMs will ultimately abrogate the effective communication part, and to be frank that’s pretty much ok with me. It may be the case too that LLMs–if and when they are trained on paywalled primary materials–will abrogate a lot of content that is taught in the humanities. If you ask an LLM to explain consequentialism and objections to it, you get (or will get soon) a good summary.

But if we teach skills, as I hope I do, in which content serves more as the material under examination than as knowledge per se, I don’t see how LLMs threaten this at all. These are life skills, sure, but they are also occupational skills and political skills. The value seems clear here.

In the no-fences approach, an instructor simply says “if you want to learn these skills, don’t use LLMs” (perhaps with the exception of polishing prose, which is great and all but on the way out). If X percentage of students cynically don’t see the value being offered, why is it this instructor’s job to convince them? In addition, I’m not at all sure that rational argument about the value of what this instructor is teaching will go particularly far (with students anyway; admin may be different). It’s my experience as a student that great material/ideas can make up for poor teaching and that great teaching (in terms of charisma) can make mountains out of poor material/ideas. What made the difference for me, and maybe this is an aberration, was ideas or skills that were compelling, and I guess I don’t think we can argue people into being compelled.

So with no fences my take is this: It’s no skin off my back if X percentage of students just want to plow through some gen. ed. class. I think we all can see that such a desire is perfectly understandable. Surely we all took classes we found stultifying as undergrads. That said, I firmly believe in the value of the skills I teach in a number of ways and it is my hope that students will be at least a smidge compelled by that value. Some will be, and won’t find any value in using LLMs for classwork, and some won’t be and may well take shortcuts. They are adults and if they judge that LLM use is more beneficial to them than the skills I hope I offer, then that’s their prerogative. If they’re right about that, I can’t fault them. And if they’re wrong, I can hope that upon reflection they will see that they were wrong.

Justin Kalef
Reply to  Ian
8 months ago

I think I understand the motivation behind what you’re saying, Ian. But…

  1. “So with no fences my take is this: It’s no skin off my back if X percentage of students just want to plow through some gen. ed. class.”

Would you be willing to try the following experiment? Walk into some gen. ed. course you’re teaching this year, and tell the students that they have a choice: if they just want to get the credential without learning anything, they can just let you know that they request an A+, and you’ll give them that grade without their having to do any work or attend any days of class. Or, if they prefer to learn something, they should just come to class as usual, and do all the work, and they’ll get whatever grade they earn.

You probably don’t need to try that experiment to see what would happen. Nearly everyone would clear out right away, so long as they trust that you’ll keep your word. As for the handful of students who want to learn, they might be willing to go to class and do the readings even if they don’t need to — but would they really be willing to risk a lower grade just because, unlike the people who simply requested an A+, they’re being graded on their actual work?

But this is close to the situation the students actually face in an openly no-fence course. Each student who chooses to take the course seriously has to invest about a hundred hours of work and study to achieve a questionable result, whereas students who do nothing but cheat with AI will more or less always do better.

2. “They are adults and if they judge that LLM use is more beneficial to them than the skills I hope I offer, then that’s their prerogative.”

Fair enough — but why should it also be their prerogative to be assigned a fraudulent grade? It’s one thing to say that what you have to offer in your capacity as a teacher is not for everyone. It’s another thing to say that you’ll be complicit in giving them a fake credential at the expense of (among other people) your honest students who are working sincerely for the same reward.

Ian
Reply to  Justin Kalef
8 months ago

I don’t accept the proposed experiment. The devil’s in the details. Here’s a revised version for X gen ed course:

You have two options:

1) You can pay your tuition, show up every day to class, do all the in-class work, cultivate a participation grade, do the assigned work, and put yourself in a position in which you’ve learned some skills and (we hope) qualified for a strong grade and been on the up and up.

2) You can do all of the above, except the last three—you gain no skills, or only a few; you put yourself in a position in which a C is the best you likely qualify for, and you know you are a cheater.

I’m fairly confident that 1 is the better option here.

In terms of a credential and its value, I tend to think that ship has already sailed. I can’t speak for others, but if you show up and try you’re probably gonna pass my class with a C grade—even if you’re not great at what I’m asking you to do.

I take it, Justin, that your implication is that the no-fence approach as I understand it incentivizes cheating and is perhaps not living up to an abstract idea about the university & its value. You can of course think that, but I don’t agree.

Justin Kalef
Reply to  Ian
8 months ago

I don’t yet see why those two options resemble the sorts of options faced by a student in a ‘no-fences’ course (i.e. one in which students who try to cheat with AI or by some other means are allowed to get away with it).

Many experienced professors have said that the work produced by Chat-GPT in response to their standard prompts would earn a high grade from them. In fact, many have admitted to having actually given high grades to such papers before discovering somehow that they were created by Chat-GPT.

This is especially true for the sorts of gen. ed. courses you’re talking about, in which we don’t tend to expect anything too philosophically advanced.

So why should we imagine that a student who decides to cheat using AI would get an inferior grade?

Ian
Reply to  Justin Kalef
8 months ago

Maybe we’re speaking at odds here. When I say no fences, I mean that I ban LLM/generative AI use in my class. What I don’t do is spend hours trying to catch students who might have used it. There are some hallmarks of the free ChatGPT that recur, enough so that, when combined with drafting and so forth—scaffolding if you like, but no more than before—it becomes a bit clearer when a student has used a chatbot. (Now, this may not apply to GPT-4—I haven’t played with it.)

These entirely LLM-generated essays are—at present—rather poor, so I think the “free A+” model is, right now, simply not at play. Certainly from what I have seen, AI essays don’t approach A- quality.

My view is simple: cheaters gonna cheat. As others have said, I’m not a cop and I’m not paid well enough to spend hours of my time trying to get a confession—essentially the only real chance of finding out if AI has been used. So don’t use LLMs if you want actual benefit. If you’re just going through the motions and cover your tracks sufficiently, you’ll probably get away with cheating. If you’d rather spend your time & effort learning how to guide the chatbot to produce a mediocre essay than use your brain, that’s on you, student.

Given that in a gen. ed. course there is a certain percentage of students who don’t want to be there or think the course is unnecessary, laziness and corner-cutting just happen. I have absolutely no problem with that kind of view. Certainly we can try to court the skeptical undergrad, and some skeptics will be converted by what they feel is valuable in the course, with or without courting.

But to those relative few who choose—and it is a choice—to try to get away with AI-generated work because they couldn’t give a rat’s ass about the course and what it offers, I unofficially think that’s their call to make. And if it seems clear to me that AI use is likely, I will certainly ask the student and maybe give the student another chance if they come clean. But I won’t spend hours of my time playing cop or redesigning my (gen. ed.) syllabuses to make AI use more difficult, which I think is—at least at this time—a waste. Nor will I “lean into” AI use, as I don’t see the pedagogical value in essentially learning how to use a single tool in a college course.

I have what I have to offer. I think it’s useful and valuable in a number of ways. Not all students agree, and I’m absolutely ok with that.

Gary Bartlett
Reply to  Ian
8 months ago

“My view is simple: cheaters gonna cheat.”

I see this being said by a lot of those who advocate the no-fences position. The idea seems to be that there are a certain number of students who are just cheaters by nature, and that’s their problem; so we should just let them do their thing, because in the end it’s only themselves they’re hurting.

But the premise is false. There are no (or at least, vanishingly few) students who go into a college class thinking, “I’m gonna cheat my way through this.” Rather, students look at what the class is like; they evaluate how much effort it would take to do well by fair means, and how likely it is they’d be caught if they cheated a bit (or a lot); they look at what their peers are doing; etc etc. In short, for the vast majority of students who cheat, cheating is a response to a large set of conditions that they encounter in a given class.

Many of those conditions are under the control (at least to some extent, and in some cases to a great extent) of the instructor. So for the instructor to say, “Some students are just inevitably going to cheat, because that’s just who they are, and so I’m not even going to worry about them”, seems to me an abrogation of one’s responsibility.

Ian
Reply to  Gary Bartlett
8 months ago

I don’t think one’s nature has anything to do with it. It is a calculation, and one that is largely based on perceived cost v. benefit.

Specifically in a gen. ed. course that may have nothing to do with the student’s interests or major, the benefit may seem minimal and not worth the cost of actually doing the work in a dedicated way. I guess I don’t see this claim as at all controversial.

I also don’t think that we’re being at all realistic if we think that we can stop all cheating. To be honest, with the amount of cheating I have seen in my years of teaching lower-div. gen. ed. humanities courses, along with the laments of other instructors with whom I am personally acquainted (I recommend we all check in with our community college comrades about what they’re seeing in regards to AI cheating), I find the supposition that cheating is somehow an aberration flatly absurd.

If anything, the cultural pendulum has swung pretty drastically toward the acceptability of varying kinds of academic dishonesty (I heard in a podcast recently that a recently published paper estimated that there are around 20k people in Nigeria whose job is to write college essays. Here’s a media link to the same type of thing: https://www.dailymail.co.uk/news/article-7290333/Inside-African-essay-factories-producing-essays-cheating-UK-students.html).

I think it’s crucial here to say a few things. First, I do not in any way think that most students cheat. Second, some students cheat b/c they are on the verge of a breakdown and simply can’t manage the work. These are not bad people and I am sympathetic to them. Third, when we are talking about students in major and upper-div. classes, cheating is–in my experience–vanishingly rare. Fourth, I think we do ourselves a tremendous disservice as educators if we don’t realistically assess what’s going on with our students in terms of economics. The out-of-pocket cost of a university education is–generally speaking–debilitating for many students and families.

This latter is largely a new phenomenon, with jumps in private and public tuitions in the past two decades or so & a rapidly changing economy. Now, from a psychological perspective (and I think if you poke around Discord or Reddit you’ll see this thinking in abundance), the cost vs. benefit analysis for many college students is very different than it was for a number of folks who now profess. Under these (and other) circumstances, the idea that a student who, let’s say, wishes to major in biochemistry and is more or less compelled to take composition or intro to philosophy or whatever gen. ed. humanities course would decide that, say, reading Plato is harder than they would like and seems of low value, and that writing essays has little or nothing to do with the career they are pursuing, and that cheating is easier than ever, and that academic cheating is more widespread than ever, and finally that AI will likely take over most everyday writing tasks within the next decade, and would therefore decide to cut corners, is not only understandable but in fact a reasonable position in its own way, if not foregone or even the rule.

I don’t think this is a cynical view, nor do I think it speaks to some character weakness in a student. I’ll say it again–college students are adults. How they allocate their time and energy is ultimately up to them. Some will be shitty, some won’t care, some will have an epiphany, some will become good people, some will–against the better judgment of everyone–major in the humanities, some will turn to the dark side, and on and on.

I also don’t think my view in any way abrogates my responsibility. My responsibility is to offer what I think are valuable skills in an (hopefully) engaging way, to help as many students as possible, and to model humanistic analysis. I do that, and my evals would suggest that I do it well. But if you’re not buying what I’m selling, it really is no skin off my back.

Gary Bartlett
Reply to  Ian
8 months ago

I think we agree on everything except your final conclusion. 🙂

“Specifically in a gen. ed. course that may have nothing to do with the student’s interests or major, the benefit may seem minimal and not worth the cost of actually doing the work in a dedicated way. I guess I don’t see this claim as at all controversial.”

Agreed. However, I would emphasize your word “may”. So then the key questions are: How often will this happen? How often are we prepared to accept it happening?

“I also don’t think that we’re being at all realistic if we think that we can stop all cheating.”

Agreed. I certainly never said we can stop all cheating.

“I find the supposition that cheating is somehow an aberration flatly absurd.”

As do I. I didn’t say cheating is an aberration.

“I heard in a podcast recently that a recently published paper estimated that there are around 20k people in Nigeria whose job is to write college essays.”

I think we’re listening to the same podcasts! 🙂 This one, right? https://www.nytimes.com/2023/08/18/podcasts/sam-bankman-fried-goes-to-jail-back-to-school-with-ai-and-a-self-driving-car-update.html

“the idea that a student who, let’s say, wishes to major in biochemistry and is more or less compelled to take composition or intro to philosophy or whatever gen. ed. humanities course would decide that, say, reading Plato is harder than they would like and seems of low value, and that writing essays has little or nothing to do with the career they are pursuing, and that cheating is easier than ever, and that academic cheating is more widespread than ever, and finally that AI will likely take over most everyday writing tasks within the next decade, and would therefore decide to cut corners, is not only understandable but in fact a reasonable position in its own way, if not foregone or even the rule.”

I completely agree. This is pretty much exactly what I was saying, in fact. I’d add that if the student sees that many of their peers are using AI, that adds another huge weight to the motivation for cheating.

“I don’t think this is a cynical view, nor do I think it speaks to some character weakness in a student. I’ll say it again–college students are adults. How they allocate their time and energy is ultimately up to them.”

I’m getting up from my seat and preparing to get off the bus. How they allocate their time and energy is indeed up to them, but we (as instructors) can influence that allocation.

“My responsibility is to offer what I think are valuable skills in an (hopefully) engaging way, to help as many students as possible, and to model humanistic analysis. I do that, and my evals would suggest that I do it well. But if you’re not buying what I’m selling, it really is no skin off my back.”

Here’s where I get off. I agree that all of these things are your responsibility. But I think it’s also your responsibility to do what you can, within reason, to dissuade students from cheating and to persuade them to do things that help them learn. Take the case of take-home essays, which you mentioned earlier. You seem to think we should continue to assign these as we always have; and if some students use ChatGPT to write their essays for them, well, that’s their loss. I can’t accept this. I think it is our responsibility to take at least some steps to motivate students to write their own essays. I certainly don’t think it’s possible to prevent all use of AI in such essays (without abandoning the “take-home” element), but I still think we can prevent some of it. And I think we need to try. And one of the main reasons I think this, oddly, is all of the stuff that you yourself have just said. Students are not cheaters by nature. They arrive at cheating because a raft of pressures move them in that direction. We can, and should, set up some countervailing pressures.

Some will see this as “cop pedagogy.” I dislike the frequency with which this term is being thrown around these days. I take it that everyone thinks we should put some effort into forestalling cheating. (If you didn’t think this, then why bother staying in the room when your students take an exam? Why not just go get coffee?) So the only question is, how much effort should we put into it? That’s a difficult question, about which reasonable people can disagree. But I still think it’s an abrogation of our responsibilities at this point to continue to assign take-home essays without any attempt to deter the use of AI.

Ian
Reply to  Gary Bartlett
8 months ago

I guess I take the deterrence to be built in here.

First, the essay is only a certain percentage of one’s class grade, which usually includes one or two in-class blue book exams already. Second, I can and do make the argument that you’re not getting anything out of using AI. Third, if in some magical way I were to find out that you used AI, I would intervene disciplinarily and it is certainly possible that you would fail the course.

When combined with even a small amount of scaffolding–e.g. thesis workshops & drafting–I think AI use becomes less of a lure. That’s about how much effort I’m willing to put in at this stage. We’ll see how it goes.

Laura
Reply to  Ian
8 months ago

I think we should always start by figuring out what we need to be teaching and how students need to practice it in order to learn. Then we come upon the question of how students can demonstrate that learning to us in the form of assessments. At that point the question of whether they can fake it becomes a concern. AI might have something to do with the teaching and the learning – one challenge is to figure out how it can help. It also obviously can impact the capacity for cheating – a different challenge is to figure out how to prevent that when we’re looking for evidence that a student has learned something. I think we can meet both of these challenges and in the meantime shouldn’t land on total pessimism about any part of the project.

Daniel Nagase
8 months ago

“I expect some readers to respond, ‘What intellectual culture?’ or ‘Isn’t higher education already a charade?’ They might be thinking: most college campuses, from the perspective of most students, are already not intellectual environments, and much higher education isn’t about learning, but about credentialing and signaling.”

So let us suppose that higher education is not about credentialing and signaling. What purpose is then served by grades? Indeed, considering that there is literature suggesting that grading actually hurts teaching, it seems to me that there is no real purpose to them if we are not worried about credentialing and signaling. If, therefore, we are not so worried, we should perhaps abolish grades altogether. And if there are no grades, then any incentive students have to cheat is destroyed (and any incentive the professor had for policing the students is also destroyed, I think).

Northeast Prof
8 months ago

I think that there’s a temptation to think of many of one’s students as cynically after good grades and a credential rather than an education. My experience is that many of them who are doing so (and there are indeed many of them) are doing it mainly out of *fear*: they would *love* to pursue an education without having to worry about their grades (or, more precisely, many of them would come to love it if they were finally given the opportunity that they’ve been denied for so long), but they are convinced, and not unreasonably so, that their economic/social class future depends primarily on how well they do grade-wise. Those who have some sense of what a genuine education is will still choose to do whatever gets them a good grade (e.g., use ChatGPT or whatever) and then genuinely feel the loss of that opportunity; others, who don’t even know what they’re giving up, will do whatever gets them a good grade because that’s all they know to pursue and they’ll think they’re being savvy, that they’ve learned ‘the game.’ But, even these latter students, in my experience, have an inchoate desire to learn, to be curious, to pursue big philosophical questions, but they are so *afraid* of doing so, of risking their grades, that they are paralyzed, unable to pursue questions that do in fact matter to them because of the risks as they see them.

So, for me, the challenge (and it is a huge challenge) is how to create the space for students to let their guard down and allow themselves to be intellectually curious within a system that is indeed mostly about credentials. I worry that it’s not possible.

Justin Kalef
Reply to  Northeast Prof
8 months ago

Yes. And we could change the dynamic entirely by removing ourselves from the credential-giving business. If people just came to us to learn, and nobody outside the gates of the university cared about our judgments of how well the students had done, there would be no system to game and nothing to gain by cheating.

On the one hand, that would greatly improve our ability to fulfill our mission as teachers, and dramatically improve and expand the kind of engagement we could have with most of our students.

On the other hand, how many of our current students would keep forking over the kind of money we need to stay afloat, and all those years of their lives, if we suddenly abandoned the bestowing of credentials? Maybe 5%? Seems optimistic to me. But let’s be generous and imagine 10%. Now 90% of the current instructors lose their careers, and most colleges and universities get boarded up. Then, larger cuts follow. The voting public won’t be as impressed by how many people are going to college under whatever politician’s administration, so there goes a major motivation for funding from the government. And how many people will volunteer to serve in the armed forces for years to pay their way through school, when degrees don’t bestow the same sort of social power? Now it’s not clear why the Department of Defense should foot any of the bill, either.

Would all these things be worth the improvement to our teaching from the abolition of grades and credentials? Maybe! But it’s not obvious to me that they would be, on balance. I wish someone could give a reason for thinking so.

Hugh
8 months ago

I know eventually it will be a huge issue, so I have only done things which involve substantial in-class components. Yes, it is to incentivize the students to think about texts and ideas and express themselves, which is the main point of teaching philosophy.

But right now, the LLMs seem to give virtually the same results for any assignment, which means that if two or more students use them, they will turn in close-to-identical assignments.

My writing prompts, when used to generate a paper of any length, usually produce bizarre errors. I assume these would go away if the students prompted and re-prompted the LLM in very specific ways.

It seems absurd to have students doing this, but one would have to substantially understand the point of the prompt and the readings to catch the errors.

At least so far, the LLMs can’t do what the students are being asked to do. The papers won’t be unique, and they would usually contain the kinds of errors I haven’t seen students make.

It’s not so much that the LLMs will write as well as or better than the students, as that they will distract the students or make it seem like there’s no point in doing the work on their own.

Justin Kalef
Reply to  Hugh
8 months ago

“But right now, the LLMs seem to give virtually the same results for any assignment, which means that if two or more students use them, they will turn in close-to-identical assignments.”

I think you might be getting that result from trying certain prompts from the same Chat-GPT account, very close together in time. I’ve experimented with submitting an essay prompt on one account, waiting a few minutes, and then copying and pasting the very same essay prompt on another account, which more closely simulates what will happen in real life. The resulting essays I’ve gotten have been very different, and have even argued for different conclusions.

Rosa
8 months ago

To the non-rhetorical questions about what to do, here are a few things I do that I hope help:

  1. I have them start out the semester with a pass/fail 500-word assignment in which I ask them to reflect on their personal, academic, and professional goals for the course. My hope is that having to think about what they actually want to get out of it will encourage them to actually try to get those things out of it.
  2. A very large portion of my assignments are formative pass/fail assignments that I think they will get a lot out of doing. I explain to them why I designed each assignment the way I did and what I expect them to get out of it, in the hope that they will recognize those things and *want* to get them. And I make these assignments pass/fail so that students who do the work won’t end up with a bad end-of-semester grade if they don’t get A’s on the graded work. And while I don’t do this yet, I’m considering letting them resubmit their written work for a new grade, so that they have an additional opportunity to both genuinely do the work and get the grade they want at the end of the semester.

Justin Kalef
Reply to  Rosa
8 months ago

These strike me as thoughtful ways to make your course inviting and effective for your non-cheating students.

But I don’t see how they will be effective against the ‘Cheat-GPT’ students, as Justin W. calls them. Those students are chiefly motivated by a desire to minimize the ratio of labor invested to grade received.

To students like that, the optimal approach here seems to be:

1. Pass/fail 500-word assignment on the goals for the course: feed it into Chat-GPT and turn in the result if it looks okay. (Much quicker than actually thinking about anything).

2. Formative pass/fail assignments: get Chat-GPT to do them.

3. Chance to resubmit graded work: wonderful — use Chat-GPT to do it the first time, and then submit the returned work together with the instructor’s comments as a prompt for improving it on the regrading.

4. Come to class a few minutes early to mock the suckers in class who actually thought through the assignments on their own (I’ve had students tell me that they’ve been subjected to this by cheats in environments where this kind of culture has been allowed to grow).

Again, your suggestions sound great for students who are there to learn something from you: I just don’t see how they deal with the Chat-GPT problem.

Rollo Burgess
8 months ago

I’m not a teacher in a university, but it seems to me that there is indeed a massive threat to the humanities and that we need to defend them. The threat, though, is the amount it costs to go to university and the increasing framing of higher education as some sort of training for office jobs.

LLMs seem to me like a piffling and trivial problem by comparison with the economics of higher education.

Just grade undergrads based on exams and oral contributions in seminars. And viva the hell out of postgrads.

Patrick Lin
8 months ago

I’d tend to agree that, as teachers, we should try to address cheating in our classrooms, and how much you can/want to do is up to you. But how did this responsibility fall entirely on our shoulders, as the framing of these discussions seems to suggest?

Shouldn’t admissions departments at schools be screening applicants to better ensure they are genuinely curious and intellectually honest, esp. now with the threat of AI writers that make cheating much easier?

I’m not saying I know how to do that screening, but it seems that this responsibility to deter/address cheating (if there is one) is distributed; we might have a share in it, but this isn’t entirely our problem to sort out. At the least, schools should be providing top-down guidance and even real resources.

Or if schools don’t care about this problem enough to nip it at the admissions stage or provide resources to us after indiscriminately admitting students regardless of their inclination to learn vs. cheat, what does that say about what our role should be in all this? Should our classrooms really be the first and last line of defense? Shouldn’t there be a more organized, systemic effort to impress the value of learning (and not taking the apparently-easy way by cheating) upon students?

This seems to be a massive cultural problem that also involves K-12 levels of education, which is not to say we can wash our hands of the problem but only that we shouldn’t beat ourselves up over it, if we can’t solve it in one fell swoop (as extra unpaid labor).

As with most everything else, just do what you can with what little you have…

Justin Kalef
Reply to  Patrick Lin
8 months ago

I agree with some of this, Patrick; but on the whole you seem to be buying into the idea that the pressure to catch, punish, and eliminate cheats is coming from the administration. That just does not seem to be the case. Many professors shovel out A and A+ grades to everyone, with no concern for whether those students have learned anything or even done their own work. The administration always seems happy to wink at this.

The upper administration of more or less any college or university today seems mostly concerned to follow a simple formula: bring in as many students as you can grab; take the tuition money (and residence fees, etc.) they come with; retain as many of them as you can (and if this means they stay longer and take longer to finish a degree, be happy with the extra tuition and fees, and hope parents don’t start asking how long the average student takes to finish); and give them a diploma at an inspiring ceremony. Ensuring any kind of quality control or even academic honesty does not fit easily into this model, because it doesn’t clearly bring in more dollars. And if the students didn’t learn anything and flounder after they leave, even that could be profitable: they might come back for another degree.

Effectively screening applicants to undergraduate programs for a sincere desire to learn, at the volume of applications they have to process to keep the machine running, would be overwhelming. Besides, many colleges can ill afford to turn away that many applicants. They lose no money if those applicants come to the college and cheat, but they do lose money if the same applicants take their tuition money away to cheat at some other college instead.

And what do these administrators want out of us teachers? From professors, they want publications and service. From grad students, they want speedy work toward the completion of the degree requirements. From anyone else, they want courses taught and graded in a timely manner, with high enrollments and good student reviews. The last thing they want to see is any of us caring enough about declining standards or rampant cheating that we spend our time putting a case together and adding to the workload of the academic integrity office.

Those who allow cheats in their courses to spread the cancer and rot of academic dishonesty, shrugging their shoulders and saying ‘not my job to do anything about this,’ need not worry about being taken to task by the administration. The administration loves to hear those words, pat you on the shoulder, and say, ‘That’s the way’. Turn a blind eye to the culture of corruption, keep the wheels turning, and we all go home with a clear conscience and money in our pockets.

It’s the honest students who suffer from the devalued and easily falsified currency of the resulting grades, and who get punished for life by having to compete against cheats while nobody does anything about it. But why should the administration care, once they’ve taken their money and waved goodbye? If we teachers don’t, nobody will.

Patrick Lin
Reply to  Justin Kalef
8 months ago

I think I’d agree with all that, Justin K.

And I wasn’t so much buying into that idea (of administrators caring enough about cheaters) as I was trying to show how far we are from that reality (which arguably should be the case, if we’re all supposed to be rowing in the same direction).

But on your closing remark, “If we teachers don’t, nobody will”, that might be true, but I’m concerned about its possible implication that we need to urgently contort ourselves to address the problem, as the only line of defense. I’m seeing many, many teachers getting overly anxious about this AI problem, which can’t be great for their mental health…

As individual, unorganized teachers, we can’t save everyone; we can only save those we can. Your mileage may vary, and that’s ok.

Justin Kalef
Reply to  Patrick Lin
8 months ago

Yeah, agreed, not every teacher is going to want to spend too much time trying to solve the problem.

As with most social systems, we can live with a few violators who get away with it, and a few non-cooperators. If we each have a couple of students who get away with cheating this time, and not all of them get caught in the end, that doesn’t mean that society or the college system collapses, any more than the entire currency system collapses if a few people get away with printing fake twenty-dollar bills. And if a few of our colleagues decide that they’re not interested in doing their fair share of safeguarding us against the ‘Cheat-GPT’ students, that does mean more pressure on those of us who pick up the slack; but I agree, we’ll live. And maybe some of those teachers have other virtues that make up for their not doing anything on this front.

But if the big tidal wave of quick and easy AI-based cheating meets with enough professors who just shrug the whole thing off, then we really are in trouble. We can cover for a few teachers who do nothing about it, but there are limits, and those limits draw dangerously closer the more teachers ignore the problem.

What is worst of all, by far, is the fact that some teachers are not only choosing not to do anything about it, but are actively encouraging others to do the same thing — in some cases, trying to turn everything upside-down and shame the very people who are doing their best to clean up the problem after everyone has gone home for the evening! But it seems that you’re not doing that, and I’m glad you’re not.