Policing Is Not Pedagogy: On the Supposed Threat of ChatGPT (guest post)


“ChatGPT has just woken many of us up to the fact that we need to be better teachers, not better cops.”

In the following guest post, Matthew Noah Smith, associate professor of philosophy at Northeastern University, argues against the idea that we should be focused on preventing students from using large language models (LLMs) to cheat in our courses.

Rather, professors should aim to teach more effectively and increase students’ interest in the subject, which in turn is likely to reduce their motivation to cheat. He includes in the post several suggestions for how to do this.


[Image made using DALL-E 2]

Policing is not Pedagogy: On the Supposed Threat of ChatGPT
by Matthew Noah Smith

People are wringing their hands over the threat that ChatGPT and other LLM chatbots pose to education. For these systems can produce essays that, for all intents and purposes, are indistinguishable from very good undergraduate (or perhaps even graduate student) essays. AI’s essay writing capacity allegedly “changes everything” from a pedagogical perspective.

This is incorrect: ChatGPT has not changed everything.

Let’s be clear about the sole reason why people think that ChatGPT’s powers will transform pedagogy: cheating will be easier. (I will focus only on ChatGPT here, although there are and will be other LLM systems that match or exceed its capabilities.) Students, rushing to complete assignments, will simply spend an hour or so refining their prompt to ChatGPT, and then ChatGPT will spit out a good enough essay. At that point, the student merely needs to massage it so that it better conforms to the instructor’s expectations. That seems a lot easier than spending scores of hours reading articles and books, participating in class discussion, paying attention to lectures, and, finally, composing the essay. The charge many people make is that the average student will give in and use ChatGPT as much as possible in place of doing the required work.

In short, students will cheat their way to an A. Or, to put it more gently, they will cheat their way to a completed assignment. It follows that because they will use ChatGPT to cheat, students will not get as much out of school as they otherwise would have. Call this the cheating threat to learning.

The only solution, it has been suggested, is that we must force our students to be free to learn. The only available tactics, many seem to think, are either aggressively policing student essays or switching to high-stakes in-class testing. On this view, we’re supposed to be high-tech plagiarism cops armed with big exams.

But how much responsibility do teachers have to invigilate their students in order to prevent cheating? Not much, I think. And so we do not have a particularly robust responsibility to police our students’ use of ChatGPT, and we should reject the idea that we should be high-tech plagiarism cops. I have two arguments for this claim.

First, it is inappropriate for us to organize our assessments entirely around viewing our students as potential wrongdoers. We should commit, throughout our classes—and especially when it comes to points at which our students are especially vulnerable—to seeing them primarily as people who want to be part of the collective project of learning. Insofar as we structure assessments, which are an important part of the class, around preventing cheating, we necessarily become suspicious of our students, viewing them as opponents who must be threatened with sanctions to ensure good behavior or, barring that, outsmarted so that they cannot successfully break the rules. This both limits and corrupts the collective project of learning.

It limits the collective project of learning because strategic attempts at outsmarting our opponents, uh I mean our students, are the opposite of a collective project (analogy: a house dweller installing a security system and a burglar trying to break through that system are not engaged in a collective project).

It corrupts the collective project of learning because even if we are engaged together in that project, we will no longer view each other as sharing values. We are instead in conflict with each other. We view each other as threats to what matters to us. For the professor, what matters is that the student doesn’t cheat. For the student, what matters is getting the good grade. The teacher-student relationship is shot through with suspicion and resentment, both of which can quickly turn to anger. These are, I believe, sentiments that corrupt shared activities.

The second reason we should not worry about the cheating threat to learning is that focusing on preventing cheating is not the same thing as focusing on good pedagogy. Good pedagogy involves seeking to engage students in ways that create a sense of wonder at—or at least interest in—the material and help motivate them to learn it. If we focus on selecting assessment methods solely with an eye towards thwarting ChatGPT’s cheating threat to learning, we are less likely to be selecting assessment methods that facilitate good pedagogy.

For example, some prominent people, like the political theorist Corey Robin, have argued that we should switch to in-class exams in order to limit the cheating threat to learning. Why? Is it because in-class exams are better for learning? No. It’s because ChatGPT cannot be employed during in-class exams, thereby neutralizing the cheating threat.

This approach is a mistake. We have some evidence that infrequent high-stakes in-class exams produce worse learning outcomes than frequent low-stakes in-class exams. And if stereotype threat is real, then high-stakes in-class exams might be a net negative, at least for women. In fact, since there are alternatives (see suggestions 1-10 below) that may alleviate worries about ChatGPT-related cheating threats to learning, high-stakes in-class testing may be, all things considered, worse from a pedagogical perspective.

So, should we simply ignore cheating? No. We should build assessments that make cheating less likely. But we should not build assessments to trigger in students the fear of getting caught due to surveillance and suspicion. Rather, we should build assessments that, when properly integrated with the course material, make cheating either irrelevant or just not worth it—even if not getting caught is a near certainty.

Before suggesting some of these alternative assessments, it’s worth noting that it can be rational for a student to use ChatGPT to cheat on an essay assignment. After all, some undergraduate students are very stressed out and overburdened. In order to manage their lives, certain corners must be cut. Why not just use this useful chatbot to write that paper?

Yet this observation does not constitute an argument for high stakes in-class testing. Rather, it is an argument for structuring our classes in ways that both accommodate stressed out students and produce desirable pedagogical outcomes. In other words, we need teaching tactics that generate engagement and even wonder, and that reduce the rational motivations to cheat.

In what follows, I will quickly list some assessment methods I’ve learned about. Some of these I also regularly use. My dedicated and brilliant teaching professor colleagues at Northeastern taught me many of these, as have colleagues outside my university whose professional focus is teaching rather than research. So, I want to emphasize first that I claim no credit for coming up with these. I am merely transmitting the wisdom of others.

From a structural perspective, this whole obsession with the cheating threat to learning reveals clearly that great pedagogy is the only solution to the myriad structural factors pushing students to cheat. And great pedagogy cannot be an afterthought. It should be a full-time effort. So, perhaps the best thing we can do to address the threat ChatGPT poses to learning is to organize to demand that our universities make a genuine commitment to education. One (still insufficient) approach that my university has taken is the establishment of well-respected, permanent teaching professorships.

*  *  *

Let us now turn to the space in which each of us, as individual professors, has the most power: our own classrooms. In particular, here are some practices that can be used to engage students, that together can determine a student’s grade, and whose focus is pedagogy, not policing.

    1. Regular reading questions

Students should be asked to submit reading questions for some portion of the assigned readings. (I require questions for approximately 33% of the readings.) The students should be told to produce structured questions that directly cite the assigned readings. For example, I tell my students that their questions should take this form: “On page XX, Author writes ‘blah blah blah’. I think that means that p but I am unsure of this because of such-and-such a reason. Please explain.” I give other examples of how they can write questions but all of them involve a specific textual citation and some attempt at interpretation.

Instructors can read these questions ahead of class and then use some of them during class. For example, the instructor might start class by putting two of the questions on the board and breaking students into groups whose task is to attempt to answer the questions. The groups can then report back to the whole class.

    2. Regular and short in-class quizzes

Students are regularly given quizzes testing whether they have read the assigned texts and whether they were engaging in the previous classes. The students could have their annotated glossaries (see 8 below) open during the quiz. Regular low-stakes quizzes are well established as effective tools for facilitating information retention.

    3. Discussion Forum Posts

Students are required to post a significant number of discussion forum posts of 100 words or more on the course website (we use Canvas, others use Blackboard, and still others use bespoke course management software). The posts should directly respond to the readings, or to classroom activities, or to other posts on the forum. I typically require students to do a mix. I also ask students to work through the semester to make their posts more polished. For example, after a few weeks, I ask students to begin to cite specific passages in their posts. Or I ask them to reconstruct an argument from a particular passage or to raise an objection to an argument a student has reconstructed.

I also use the discussion forum for the final paper. I invite students to collaborate on the forum to work through their papers, and then require that they cite each other if they use another person’s ideas from the forum. I indicate that there are limits to what they can use, and that if they have questions we can talk about it in class. This is a useful opportunity to discuss originality and attribution in scholarship.

    4. Scaffolding essay writing

Students are required to produce their long essays in steps, from formulating the question to working on an annotated bibliography, to outlining the paper, to drafting the paper, to completing the final paper. Any number of these steps can be used, or just one of them. But, the goal is to help students through each step of the process of writing the paper. This distributes the work over several weeks. It also provides opportunities for instructors to identify where students are having the greatest difficulty.

    5. Students commenting on each other’s drafts

Students read another student’s draft and write a short critique of the draft (or they annotate it with comments in a word processing application). The students might be required to meet with each other to discuss their comments. I require students to include a paragraph in their final paper on how they incorporated (or why they didn’t incorporate) the comments they got from a fellow student.

    6. Student Presentations

Students are organized into groups each of which is tasked to present on an assigned reading or a specific topic. The presentations should be formally organized, with a list of questions the students will answer in the presentation, summaries of assigned text(s), some analysis of the text(s) (I often ask my students to reconstruct arguments), an objection or two, a possible response, and then a list of further questions. I also ask presenters to lead a structured discussion in the classroom. An added bonus of this is that it can help student presenters to understand the importance of students voluntarily engaging in classroom discussion.

    7. Comment on Student Presentations

Students are required to write one short commentary on a student presentation in class. They have to summarize the presentation, indicate a question they had going into the presentation that the presenters answered, a question they had that wasn’t answered, and finally some reflections on a class theme inspired by the presentation. These comments can be made available to the presenters.

    8. Annotated Glossary

Students are required to maintain an annotated glossary of terms introduced in the class. The glossary should include definitions, citations to the sources of the definitions (including date of the lecture if it was from a lecture), examples that illustrate the terms, and updates as the new definitions of the terms are offered (which inevitably happens when it comes to contested technical terms). These can be turned in several times throughout the semester.

    9. Students must cite the discussion forum and lectures in their final paper

Students are required to use material generated within the course when writing their final papers. This, in addition to citing assigned readings, ensures that the students will be engaging the ideas we’ve all developed together throughout the class.

    10. Almost all grades are Pass/Fail

Students receive full credit for doing the work to an adequate standard. We still give the students comments, but not too many, as the goal is to identify, at most, one thing the student did wrong, one thing the student did well, and one thing the student can work on in the next assignment. (I learned this 3-comment approach in grad school when I TA’d for Geoff Sayre-McCord.) This is a better steer than a punitive grade, which can just stress students out. I usually reserve the final paper for a stricter grading scale, with the possibility of a student even getting an F on the final paper. But, if I’ve scaffolded the paper writing well enough then this is unlikely.

This is hardly an exhaustive list of assessments. But these assessments all avoid high-stakes in-class exams and one or two lengthy standalone essays, and they all can help students in different ways. Some are designed to support the collective project of learning together in the classroom. Some are designed to draw students into the material through a series of low-stakes contributions. Some are designed to make students feel like they have a stake in how the class goes. Some are tools for promoting information retention. Some help to reduce the burden of completing a big project such as writing an essay while still requiring that the student complete the big project.

These assessments probably can all be hacked by ChatGPT. But it’s hardly a simple matter to do so. Furthermore, even if some students do use ChatGPT repeatedly to hack these assignments, most of the students in the class probably won’t, as the stakes are so low and the requirements for completing the assignments are not all that taxing. In other words, these assessments make it less rational to cheat and more rational to do the work. They are additionally, as already argued, likely to be at least as pedagogically effective as high-stakes in-class testing and one-off essay writing, and in some cases demonstrably more effective.

*  *  *

But, you say, these just are big changes to the normal practice in philosophy classrooms, which usually are mostly lectures punctuated by big paper assignments. So, hasn’t ChatGPT actually changed everything?

No. The ChatGPT cheating threat to learning is not the reason to adopt these assessment changes. The reason to adopt these assessment changes is that these assessments would yield better learning outcomes even if ChatGPT never existed at all. ChatGPT has just woken many of us up to the fact that we need to be better teachers, not better cops.

In general, there is no reason to view contemporary punitive grading systems as necessary for hard work. After all, at Harvard University there is already such extreme grade inflation that students are basically guaranteed an “A-” just for showing up, and it is not much different at many other universities. And yet most students at these universities are still putting in at least some effort! In fact, I find it somewhat astonishing that people are so certain that an especially punitive grading environment is necessary for positive learning outcomes. Some kind of sanction may be required to inspire most students to make an effort. But, I think that we have some reason to believe that students will work hard if they are interested in the topic and if they care. I at least enter the classroom presuming that if my students have a problem with the class, it is not due to a lack of desire but instead due to too many competing demands on their time and emotions. Yes, there are some students in my classes who just don’t care. This is certain. But there is no way I am going to organize how I assess every other student around the vicious goal of making the slackers suffer for their slacking.

Insofar as students are making the rational choice to forgo doing actual coursework and instead, because of how demanding their lives are, rely on ChatGPT, I fail to grasp how we are in fact improving their admittedly difficult lives by making them take a few high-stakes exams in order to determine their grades. Why not adjust many of our other pedagogical practices so that we can better assist our students in joining us in an exploration of the ideas and texts we all lovingly study?

*  *  *

Postscript

I believe many of us would like to make various changes to how we teach. But, I’ve also encountered a surprising amount of dogmatic insistence that high-stakes take-home essays and in-class exams are the ne plus ultra of assessment, and furthermore that punitive grading is probably a necessary component of education, at least for the average student. My entirely speculative diagnosis of these sclerotic attitudes is twofold.

First, many of us are so used to—and have so embraced—both the disciplining and performative practices of the modern academy that we have fetishized both discipline and performative teaching as the ultimate modes of instruction. To truly teach is to profess on a stage—to be the authority who lectures. For students to truly learn is for them to recognize this authority via submission to the discipline of grading. On this view, to shape a student’s mind is not a collaborative process but is instead a form of knowledge transfer conditioned by repeated corrections when that transfer doesn’t quite take. All this must proceed along our own and not the students’ timescales. In contrast, the somewhat stochastic process of intellectual accretion, the slow sedimentation of heterogeneous understandings and haphazard bits of different academic skills, which occurs when the classroom is more collaborative and slow, can feel unpleasantly out of our control.

Second, the development of new teaching and assessment techniques is hard work. There are few institutional incentives to engage in that hard work. Furthermore, it takes constant time and effort to manage all that students submit when you use all these assessments together. I, for one, expend much effort reading, commenting on, and incorporating into each day’s class the small bits of work my students regularly submit. I also have hours and hours of one-on-one meetings with students when it comes time to discuss their paper drafts, all of which I have read and commented on prior to these meetings. While boring and a lot of work, these tasks also make the educational process more collaborative and less punitive. This in turn may lower my students’ anxiety.

But I am abusing my privileges here. This diagnosis is conjectural, not a strongly held view.


Discussion welcome. Readers are encouraged to share their own thoughts on, and strategies and tactics for, teaching in a world in which our students have access to LLMs.

Readers may be interested in also sharing their ideas for teaching in a world of LLMs on a Google doc organized by Klaas Kraay (Toronto Metropolitan).


Related Posts:

Teaching Philosophy in a World with ChatGPT
If You Can’t Beat Them, Join Them: GPT-3 Edition
AI, Teaching, and “Our Willingness to Give Bullshit a Pass”
Conversation Starter: Teaching Philosophy in an Age of Large Language Models
Teacher, Bureaucrat, Cop
How Academics Can Make Use of ChatGPT

59 Comments

Justin Kalef
8 months ago

Three comments:

1. This post begins with two reasons why “we should reject the idea that we should be high-tech plagiarism cops.” But this is a straw man of the opposing idea. Those of us who maintain that we have a duty to prevent plagiarism do not therefore think that that is our only duty or even our main duty. Our relationship with our students is not one-dimensional.

By comparison: I take it that most (if not all) of us agree that it was appropriate to prevent very ill students from taking off their masks and coughing on students sitting beside them during the pandemic, and that it would be appropriate now to stop any students from blasting music from a stereo in class, if someone were to try such a thing. Dealing with such students puts the professor in an adversarial position with them at that time and in that respect, and it does not directly pursue student learning. But it would nonetheless be a necessary response from a professor.

(To those inclined to point out that few students spread viruses in class so carelessly or openly play their music in lecture halls: the standards of conduct we observe can only be taken for granted because of a long line of people before you who didn’t share these qualms about becoming a ‘cop’ by enforcing those standards).

2. The middle of this post discusses ten assessment techniques — all perfectly good and quite effective in an AI-free world — followed by this comment:

These assessments probably can all be hacked by ChatGPT. But it’s hardly a simple matter to do so. Furthermore, even if some students do use ChatGPT repeatedly to hack these assignments, most of the students in the class probably won’t, as the stakes are so low and the requirements for completing the assignments are not all that taxing. In other words, these assessments make it less rational to cheat and more rational to do the work.

But the main problem raised by ChatGPT and AI more generally is that it has changed what it would be most rational for students to do if their sole aim is to get the greatest credential for the least amount of work.

Before ChatGPT, students who planned to get a friend to write their big essay for them at the end of the semester would be daunted by the news that they will in fact need to turn a lower-stakes assignment in each week, and would be apt to abandon the cheating strategy. It was more rational to pay attention in lectures, skim the readings, and then spend 20-30 minutes per week doing the low-stakes assignment.

But now that is far less efficient than just cheating the system with ChatGPT, which requires students to pay no attention in class and do none of the readings: just five or ten minutes of work with AI is sufficient to produce something apt to earn a desirable grade. Moreover, each time one uses ChatGPT to cheat, one is developing a general cheating strategy that will work in most other courses, making investment in learning how to cheat effectively more rational (for an egoistic student who only desires the credential) than taking the course seriously.

To make it even worse, students who do not yet know how to use ChatGPT to hack assignments can learn from others who do: there is no risk of being caught turning in the same assignment, since ChatGPT produces original, non-traceable work in mere seconds, each time.

This is why ChatGPT is a game-changer: it radically alters the best strategy for keeping the time-invested-over-grade-received ratio as low as possible, even if the professor is using assessment methods that were effective until very recently.

3. The essay closes with a further argument that there is no need to invest time ensuring that our assessment methods are difficult to hack using AI: students at Harvard and some similar universities typically receive A grades, from which the author concludes that they would receive those grades just for showing up; but, the author claims, “most students at these universities are still putting in at least some effort.” Therefore, the author concludes, some students will choose to approach their studies sincerely even in the absence of any threat of punishment for cheating.

But can we really infer from the fact that most students at Harvard receive As that those students would receive the same grades if they merely ‘showed up’ and did nothing else? And even if they would get the same grades for doing nothing, would the lessons learned from students at Harvard and similar schools apply just as well to less-selective colleges and universities, where incoming students lack the work ethic of a typical incoming Harvard student?

The author admits that “Some kind of sanction may be required to inspire most students to make an effort.” To many of us, this is already a good reason for not effectively removing that sanction.

CarlD
Reply to  Justin Kalef
8 months ago

The trick I think is to assume the AI as baseline performance. So it’s not cheating or hacking, it’s just getting in the game. The question then becomes, what’s the game, if it’s not the stuff the robots automate.

Joshua Preiss
8 months ago

Excellent piece! In support I would like to say that I use 1, 3-7, 9, and 10 (in the form of labor-based grading) in my classes, and the results are by any measure better than when I’ve taught the lecture plus 2 papers model. It is also, as you recognize, A LOT of work that (at present) runs counter to various institutional and professional incentives.

To expand this recognition by reference to my institution, with a 4/4 teaching load (with full classes of 27 students for each course) it takes significant time away from what peers ultimately judge us on (research) at a time when demographics and the political economy of higher education threaten programs in the humanities and social sciences at non-elite institutions, regardless of teaching or research quality. I can understand why, for non-dogmatic reasons, it would be a difficult sell for many professors.

LLMs, however, may leave them with little alternative. As such, it poses a collective challenge to the profession, and one that can only be effectively combatted with solidarity (and mutual recognition of value) between philosophers with quite disparate institutional challenges and resources.

Ben M-Y
8 months ago

Hear! Hear! Thank you for this great post, Matthew.

CarlD
8 months ago

Agreed. As someone who’s been teaching like this for many years, I think the underlying question is what we actually want the students to learn. Broadly speaking, that’s reliable processes of gathering, understanding, and making something of relevant information. All of the critical thinking stuff is in there, and the rest is discipline and project detail.

From this perspective the LLMs are interesting tools. One thing they do that we have to reckon with is take technical writing competence off the table as a goal or accomplishment. They do that pretty well, and with them anyone can produce generically polished prose. So just like word processing takes handwriting off the table, LLMs take bad writing off the table. It may take a minute for this to sink in.

They also gather and present information well and efficiently, especially for routine kinds of stuff. They scale most of the way up to advanced in just about any field. So gathering and presenting routine information is now also off the table as a relevant goal or accomplishment, and becomes a reasonable baseline standard.
(I thought this was true already with Google, but in practice some-assembly-required is a significant hurdle for a lot of folks.)

We can now reasonably expect our students to produce competent, well informed prose on just about any subject. To me that’s pretty spiffy, because I’ve read a whole bunch of incompetent garbage over the years and starting from fairly good will be a big relief. For those of us teaching and learning basic skills that’s a whole bunch of drudgery just gone.

The question then becomes, what are we as teachers and they as students adding to what the robot can bang out in a couple seconds? I think that’s a pretty exciting question, and may take some serious rearrangement of mental and institutional furniture.

Hieronymus
Reply to  CarlD
8 months ago

“We can now reasonably expect our students to produce competent, well informed prose on just about any subject.”

But if our students are using ChatGPT, then they aren’t producing anything of the sort.

Aeon J. Skoble
Reply to  CarlD
8 months ago

“[LLMs] also gather and present information well and efficiently, especially for routine kinds of stuff.”

But they also make stuff up. I have played around with ChatGPT a bit and have found both impressive answers to niche questions, and solid BS. One question I asked resulted in two full paragraphs, every sentence of which was false.

CarlD
Reply to  CarlD
8 months ago

Aeon, that’s true of any source, including ourselves. So we’re back to my rubric, where the gathering is in conjunction with the understanding and the making something. Sources must be checked and cross checked, as you did. No source is the arbiter of its own validity. That then is what students also need to skill up on.

Hieronymus, it’s really neither here nor there who or what produces the thing. Any labor saving device raises the same questions. My car produces motion at a rate far beyond my unassisted speed, but still we say I traveled. The calculator is a fine tool for civilians and mathematicians alike, but at different levels of application and enhancement. The point is that these parts of the job are now available for mechanical assist. Therefore we may (and will, as history repeatedly shows with tool development and propagation) shift our attention to the parts of the job that are not.

Jesus Christ
Reply to  CarlD
8 months ago

So just like word processing takes handwriting off the table, LLMs take bad writing off the table. It may take a minute for this to sink in.

The question then becomes, what are we as teachers and they as students adding to what the robot can bang out in a couple seconds? I think that’s a pretty exciting question, and may take some serious rearrangement of mental and institutional furniture.

Hieronymus, it’s really neither here nor there who or what produces the thing. Any labor saving device raises the same questions. My car produces motion at a rate far beyond my unassisted speed, but still we say I traveled. The calculator is a fine tool for civilians and mathematicians alike, but at different levels of application and enhancement. The point is that these parts of the job are now available for mechanical assist. Therefore we may (and will, as history repeatedly shows with tool development and propagation) shift our attention to the parts of the job that are not.

I appreciate the optimism. But there’s a world of difference between using a word-processor to write, a car to travel, or a calculator to learn a mathematical answer, and submitting an AI-generated essay as evidence of one’s having learned something. In the first three cases, you do indeed write, travel, and learn an answer. In the fourth, the worry is that the submission is no evidence of learning at all.

CarlD
Reply to  Jesus Christ
8 months ago

Agreed. Smoothly written, well informed essays now become the baseline. They’re easily available to all and they prove nothing yet. They’re the baseline of acceptable performance and nothing less will do. Like the ante in poker they get you in the game, but they’re not the game yet. D work, or a generous C. Level up kids, here’s how.

Gary Bartlett
Reply to  CarlD
8 months ago

The problem with this is that it will put the baseline beyond what many students can be expected to reach on their own. It would work, perhaps, if LLMs were only able to produce D or C work. But that’s not the situation. Even now, they can produce B work, and sometimes even A’s. And there’s every reason to expect that level to go higher. So it’s as if the ante for the poker game is already so high that no one except the super-rich can even play. So most students will have no motivation to even try to write essays for themselves; such an essay will be doomed to a D or an F because the “baseline” is a B or an A.

Ben M-Y
Reply to  Gary Bartlett
8 months ago

“So most students will have no motivation to write essays for themselves …”

I think this comment misunderstands the available evidence and is uncharitable to students.

For example, here’s a study that shows that certain teaching strategies reduce motivation to cheat, because they increase motivation to learn. I take it this is either the sort of thing the OP has in mind or along the same lines. And there’s a ton of research to back up the idea that when teachers engage their students in the process of learning and explain why they’re doing what they’re doing in the classroom, including why they’re assigning what they’re assigning, students not only learn better but are more motivated to do so.

As for our students, the OP makes the entirely correct point that they are not all cheaters and we and they lose out when we assume they are/will be. People are complex, including students, and I would think we shouldn’t make blanket assumptions about what whole groups of people will or won’t do. Again, there’s lots of research on this stuff. And I think the bulk of it supports something much more like the position in the OP, as opposed to the position of many in our discipline.

I should add that I think the OP overstates things with the claim that cheating with ChatGPT can be rational. Perhaps it can be understandable, but I’m not sure it’s justifiable. Or if it is, I think this would only be in very special circumstances, and not simply because one has a lot on one’s plate, etc. But this is a small point.

Gary Bartlett
Reply to  Ben M-Y
8 months ago

I was not making a general claim about student motivation. I was not responding to the OP. I was responding to CarlD’s claim that “Smoothly written, well informed essays” should be “the baseline of acceptable performance and nothing less will do.” My point was only that in that scenario, most students will not be motivated to write their own essays.

Ben M-Y
Reply to  Gary Bartlett
8 months ago

Sorry if I misunderstood your comment.

I still think it’s mistaken, even in that scenario, to think most students would lack this motivation. It depends a great deal on things like how the smoothly written, well informed essays are generated, talked about by the instructor and class, connected up to students’ academic and career goals, etc. One can be a demanding teacher all while attempting to establish a community of learners in one’s class. And I think the research suggests this would have an effect on students’ motivation to cheat that is the opposite of what you claim.

Gary Bartlett
Reply to  Ben M-Y
8 months ago

“It depends a great deal on things like how the smoothly written, well informed essays are generated”

Agreed. Again, I was responding to CarlD, who makes it clear that the essays he’s talking about are AI-generated. To quote him more in full: “Smoothly written, well informed essays now become the baseline. They’re easily available to all and they prove nothing yet.” The reason they’re easily available to all is that an LLM can generate them.

Human all too human
Reply to  Ben M-Y
8 months ago

That study, first published in 2018, is so outdated at this point. Covid-19 and online learning have changed (average) student attitudes about school so much that studies based on student motivation and behavior prior to the pandemic are not so relevant.

CarlD
Reply to  CarlD
8 months ago

CarlD: The following text is my contribution to a discussion board. Please summarize, extract the main points, and consider the implications. …

ChatGPT 4: The post discusses the impact of Language Learning Models (LLMs) on teaching and the transformation of student learning outcomes.
The main points can be summarized as:

  1. The goal of teaching is to impart a process of gathering, understanding, and synthesizing relevant information, inclusive of critical thinking and project details.
  2. LLMs essentially eliminate the need for developing technical writing skills, as they can produce polished prose consistently. This technology is likened to word processors that have made handwriting redundant.
  3. LLMs efficiently gather and present information, making the task of accumulating and presenting routine information redundant as well.
  4. The author suggests that with the advent of LLMs, a well-informed, competently written piece of prose is now a baseline expectation rather than a goal.
  5. This shifts the question to what value-add teachers and students can bring beyond what an AI model can generate in seconds, potentially requiring a rethinking of teaching approaches and pedagogical institutions.
  6. Similar to other labor-saving devices, LLMs allow us to shift our focus to areas that cannot be mechanized, raising the bar for what constitutes acceptable performance.

Implications:

  1. This shift could potentially increase the focus on critical thinking, creativity, and innovation, as the basics of information gathering and writing are covered by LLMs.
  2. Pedagogical methods may need to evolve to accommodate this new reality. Teachers might need to redesign curricula and assessments to emphasize the skills that LLMs can’t replicate.
  3. The definition of student success and accomplishment could be revised to reflect these changes.
  4. It might help level the playing field for students with varying writing abilities, as everyone would have access to the same standard of written work through LLMs, allowing them to focus more on content and critical thinking.
  5. On a broader level, this development could reshape the landscape of education, leading to increased emphasis on developing soft skills like problem-solving, creativity, and emotional intelligence.

CarlD
Reply to  CarlD
8 months ago

Here are three objections. Respond to them.

The students themselves are not producing the smoothly written, well informed text.
There’s no evidence of learning in such submissions.
Students working at a C level will be demotivated by AI working at a B or A level.

ChatGPT

  1. “The students themselves are not producing the smoothly written, well-informed text.”

Response: It is true that Language Learning Models (LLMs) may generate the polished prose, but this doesn’t negate the value of the learning process. The role of students in this new paradigm shifts from being just scribes to becoming editors, synthesizers, and critical thinkers. They would need to direct the AI, verify its outputs, integrate diverse pieces of information, and ensure that the final product aligns with the task at hand. These are high-level cognitive tasks that contribute significantly to their learning.

  2. “There’s no evidence of learning in such submissions.”

Response: The evidence of learning might not be as direct as in traditional assignments, but it is still present. The LLM-generated output will still be guided and refined by students. They will have to engage with the topic at a deep level to guide the model effectively. Evaluating the quality, relevance, and accuracy of the information produced will require understanding the topic. It’s a shift from assessing learning based on writing ability to assessing learning based on understanding, critical thinking, and the ability to synthesize and guide an AI to produce quality content.

  3. “Students working at a C level will be demotivated by AI working at a B or A level.”

Response: It’s understandable that students may feel demotivated when an AI tool can easily produce work that surpasses theirs. However, this is where it’s crucial to redefine success and progress in the classroom. Rather than judging students solely based on the final product, educators can shift towards a model that values the learning process itself. This involves reinforcing the idea that these tools are just that – tools, meant to assist in the learning journey, not replace it. Education must pivot towards fostering skills that AI cannot replicate – empathy, creativity, critical thinking, ethical reasoning, etc. Students should understand that their unique human qualities, ideas, and perspectives hold immense value, and they are not in competition with AI, but instead, learning to leverage it as a resource.

Preston Stovall
Reply to  CarlD
8 months ago

Agreed. Smoothly written, well informed essays now become the baseline. They’re easily available to all and they prove nothing yet. They’re the baseline of acceptable performance and nothing less will do. Like the ante in poker they get you in the game, but they’re not the game yet. D work, or a generous C. Level up kids, here’s how

It’s not clear this is the right way to use the technology, not least because it’s not clear this is the best way for people to learn the skill that comes with essay writing. That skill is, in part, one of organizing one’s thoughts into a coherent narrative of the sort the essay requires. I suspect most people learn how to manifest that kind of thinking by writing essays where one thinks through an issue oneself, rather than by feeding it into a machine that spits out something one finds clever.

We teach mastery of one’s faculties in education, with the aim of being able to put them to use in knowing things about and doing things in the world. It’s not at all clear that offloading the work of essay writing onto a text generator is an equivalent skill. While there’s room for algorithmically generated texts in higher education, I’m skeptical that we should treat them as “a baseline of acceptable performance”.

CarlD
Reply to  Preston Stovall
8 months ago

Fair enough. We can safely predict it will be used all kinds of ways.

Fwiw, I get a lot of students who for various reasons do not have a realistic developmental path to organized, clear narratives in functionally correct prose. For them, this basic requirement is a barrier to entry for the activities they do have realistic developmental paths to. If the technology levels the playing field for them, we should consider that a win. And we should also ponder whether essays were ever the one best way to accomplish our actual learning objectives, or whether instead they were just what was available and familiar at an earlier stage of cultural and technological development.

Preston Stovall
Reply to  CarlD
8 months ago

From what I understand, research on essay-writing as a basis of cognitive scaffolding is one of the few areas in social science research on education where there’s good (probative) evidence for an effect.

At any rate, I don’t doubt that your classes have “a lot of students who for various reasons do not have a realistic developmental path to organized, clear narratives in functionally correct prose“. Some of my students present similarly. But what evidence is there that this failure in “developmental path” isn’t caused by previous educators neglecting their duties to help children build such a scaffold? I’m serious here. Do you know of any social-scientific evidence supporting your pedagogical proposal, comparable to what’s apparently been learned about the importance of essay writing as a scaffold for cognitive development? I’m genuinely curious, as this is an area where I have an interest but no real grounding.

Timothy Sommers
8 months ago

I would really like to take the position mentioned early in the piece. I’d like to say, I’m not here to police learning. If you want to cheat your way to a good grade, you won’t learn anything, and that’s your problem. (“We pay for our sins with the lives we lead.” – James Baldwin) But isn’t the problem with this obvious? It’s not only unfair to the non-cheaters, it incentivizes them to accept cheating as part of the game – or lose out. At some point, you are just a sucker not to cheat.
BTW I would describe the tone of this piece as cavalier for cavalier’s sake. Lots of worrying things are described, then we are told not to worry about them.

Matthew Smith
Reply to  Timothy Sommers
8 months ago

First, if you don’t rely on one or two essays to determine the grade but instead on many small contributions of different modalities then current AI no longer gives advantages to the cheaters. They have to submit the reading questions, do the forum posts, do the presentation, do the glossary, etc.

Second, if you don’t use punitive grading but instead give people the opportunity to get an A through consistent contribution to the collaborative project of the classroom, then they are not risking much grade-wise by not using AI.

The only way that ChatGPT really incentivizes non-cheaters to cheat is when professors stubbornly insist that a midterm and final paper will determine all of their students’ grades. But this is not a problem with ChatGPT so much as it is a problem with simplistic approaches to college education.

Jesus Christ
Reply to  Matthew Smith
8 months ago

Second, if you don’t use punitive grading but instead give people the opportunity to get an A through consistent contribution to the collaborative project of the classroom, then they are not risking much grade-wise by not using AI.

This is an interesting take on the issue, and would work well with either a specifications-grading or an ungrading regime. It might be worth also trying to find assignments that were meant to be fulfilled by the use of text-generating programs, and aim for a similar generality as you’ve outlined in the OP. Thanks for sharing.

An adjunct
Reply to  Matthew Smith
8 months ago

You are operating on false beliefs if you think students will not cheat on even lowly forum posts. I’ve had students plagiarize for these. I think ChatGPT only makes it easier. I don’t know why you think ‘having to do the task’ is proof against either cheating or empty lack of learning. Perhaps your tendentious ‘not a cop’ framing leads you to overlook how scripting a sequence of micromanaged tasks that must be completed to receive credit and a satisfactory grade is, if not cop pedagogy, certainly something bearing a strong family resemblance to what you conceive to be cop pedagogy.

Matthew Smith
Reply to  An adjunct
8 months ago

Cop pedagogy is pedagogy organized around suspicion. I am arguing for a pedagogy organized around collaboration.

The approach I advocate is driven by an interest in bringing students into the educational process and finding multiple opportunities for them to contribute meaningfully.

In other words, I do not design my classes with the assumption that students will cheat. Rather, I design assessments and activities on the assumption that students want opportunities to participate.

That you see my approach as a response to the threat of cheating as opposed to a response to the threat of boredom (both for me and students) is probably due to the way I framed the essay and so is a fault of my composition. But I want to emphasize that when I organize my classes in the fashion above I do not much care if students cheat.

Timothy Sommers
Reply to  Matthew Smith
8 months ago

I really appreciate you responding and I think using different modalities, etc. are great, though as JC says I think your suggestions are more in line with spec or cumulative grading. But you can’t consistently talk about strategies to get students not to use ChatGPT and also claim you are not at all worried about students using ChatGPT. The problem is that if you drop the cop framing your essay is another contribution (a good one, sure) to the debate about how to prevent CGPT use, not a new carefree approach.

Gary Bartlett
Reply to  Matthew Smith
8 months ago

The only way that ChatGPT really incentivizes non-cheaters to cheat is when professors stubbornly insist that a midterm and final paper will determine all of their students’ grades.

That’s an extremely bold and optimistic claim. It would be great if it were true, but I think it just isn’t. 🙁 This is an empirical question, of course, and it shouldn’t be too long before we know the answer for sure.

But I don’t see why students wouldn’t be incentivized to use ChatGPT on an assignment worth, say, 10% of their grade. Is your reasoning here that they won’t see it as worthwhile because the assignment is worth so little?

I really would like to believe you’re right. I’ve long since moved away from the “midterm and final paper” structure, to a bunch of smaller assignments. But I’m still afraid that ChatGPT will make it so easy to do well on these assignments that many, and perhaps even most, students will use it.

EDIT: I think it’s also dangerous to divide students into “cheaters” and “non-cheaters”, as it implies that there are some students who will just cheat regardless while others would never do it.

Another Adjunct
Reply to  Matthew Smith
8 months ago

if you don’t rely on one or two essays to determine the grade but instead on many small contributions of different modalities then current AI no longer gives advantages to the cheaters. They have to submit the reading questions, do the forum posts, do the presentation, do the glossary, etc.

This is obviously false. The advantage is not having to spend any time reading, thinking about, or writing the many low-stakes assignments! Just ask the student on whom I’ve just filed an academic dishonesty report: he submitted 9 discussion posts, about 11 reading quizzes, and two short essays without once accessing an assigned reading during my online summer course. If he were better about covering his tracks, he might have gotten away with it — 3 credits for maybe a couple hours’ work in total.

CarlD
Reply to  Timothy Sommers
8 months ago

Just to clarify, we are not contrasting the cheatability of AI with the complete uncheatability of conventional pedagogy and assessment. Cheating is endemic and that’s why we’re having this conversation. So “some students will cheat” is in no way an illuminating take on AI or successful dismissal of it. “Some students will now cheat in ways we don’t yet understand or have a disciplinary apparatus for” is more like it.

Timothy Sommers
Reply to  CarlD
8 months ago

I sincerely don’t understand how anything you’re saying is responsive to anything I said. Maybe, you are replying to the wrong thread? “Some students will cheat” was not my “take” on AI or anything else. In fact, if you do a search you’ll find that the phrase “some students will cheat” occurs only in your reply and now in my reply to your reply. Nor has anyone, as far as I can see, taken the position that there’s no cheating with other forms of assessment; i.e., again, I don’t know who you are replying to and what you are trying to say to them.

Aeon J. Skoble
Reply to  CarlD
8 months ago

It’s true that some students were already cheating in the pre-AI world. But the concern is, AI makes cheating easier and more attractive. I’m sure someone could cheat on a blue-book in-class exam, but it’s obvious that it’s easier to cheat on a take-home assignment, esp in the laptop age. The fact that some number of students will always be looking to cheat doesn’t mean that some environments won’t be more conducive to and more encouraging of it.

If I think “now that everyone has laptops, I’ll switch to take-home assignments written with word processing,” I have to realize that there are downsides. If I think that writing essays is a superior assessment to blue-book exams, that upside has to be contrasted with the fact that it’s super-easy to plagiarize, and I can expect more people to succumb to the temptation. The same is true now with AI: it makes it easier to plagiarize in a way that Turnitin can’t detect, so it’s even more attractive. So it’s foolish to think the number won’t go up.

But here’s the dilemma: If it’s actually _true_ that take-home essays done with a word processor are a superior assessment to exams, then the reality is that few students will get whatever those benefits are (because they’re using AI), so I might as well go back to blue books; but if it’s false that that’s the superior method, then I definitely should go back to blue books.

That said, I also think there’s a difference between what we expect from senior majors in a 300- or 400-level class, and what we expect from the assortment of students in a 100- or 200-level class. If I have a 400-level seminar with 9 philosophy majors, I feel more justified in assuming that they won’t be that attracted to cheating. But in a 200-level class with 25 students, 4 or 6 of whom are majors, I don’t think that’s as safe an assumption.

CarlD
Reply to  Aeon J. Skoble
8 months ago

These are all good points. I’ll just repeat here that from my perspective the ‘problem’ is that AI is too good at doing some things we’re used to asking students to do. And the progressive fix for that is to just go ahead and expect AI to do those things, and from there to shift our pedagogy and assessments to the things AI can’t do as well. Critical thinking, creative synthesis, ethical reasoning, judgment. We can also use the AI to actively scaffold students toward these higher order accomplishments, in part by removing the cognitive load of basic skills acquisition. But yes, if we’re going to keep thinking and teaching as if AI can’t do what it can do, then yes, it’s a cheat engine and needs to be managed accordingly.

Aeon J. Skoble
Reply to  CarlD
8 months ago

“shift our pedagogy and assessments to the things AI can’t do as well. Critical thinking, creative synthesis, ethical reasoning, judgment.”

Agreed. That’s why I’m keeping traditional essay writing in my 300/400 level classes but going back to blue-book essay exams in my 200s.

Matt L
Reply to  Aeon J. Skoble
8 months ago

I’m very sympathetic to your claims here, Aeon, but want to note that at least in principle you don’t need to use blue-books. There is easily available software that can “lock down” a student’s laptop while taking an exam and then let that exam be uploaded to a course page. There’s a version of this available for Blackboard, so any university that uses Blackboard has easy access to it. There are many others, too. They have been around for more than 20 years now (I used them when I took exams in law school about that long ago) and have most of the problems worked out. You don’t have to deal with the increasingly bad handwriting of students who are not used to writing by hand, or worry about lost bluebooks. It’s a good thing to use.

Justin Kalef
Reply to  Matt L
8 months ago

“There is easily available software that can ‘lock down’ a student’s laptop while taking an exam and then let that exam be uploaded to a course page.”

My simple suggestion for any instructor considering an anti-cheating tactic: spend at least a few minutes looking at it from the perspective of a would-be cheat, and figure out how you would bust the system.

If I’m a cheat and someone gave me this one, I think the most obvious try would be for me to feed the prompt into ChatGPT on my phone and then type out what it says onto my laptop, where I will have no other windows open. The number of keystrokes is just the same as if I did the work myself, but of course I don’t need to learn any of the course material, let alone think about it; and I don’t need to engage in any thinking as I type the words I type.

Do you have a way to avoid this?

Aeon J. Skoble
Reply to  Justin Kalef
8 months ago

I don’t allow phones to be out during class, so if they cheated that way, I’d know in a second. But this suggestion, while attractive, is too complicated for me; I’ll just use blue books.

Matt L
Reply to  Justin Kalef
8 months ago

It’s not actually hard. You have proctors in the exam. (People did this in the blue-book era, too.)

Justin Kalef
Reply to  Matt L
8 months ago

Thanks, both. But I meant to weigh in on whether it was reasonable to switch from in-class blue-book exams to take-home ‘exams’. My position is that it isn’t.

On the Market
8 months ago

I mostly agree with the argument and with the solutions. I’ve been implementing most of these for a while now, and have found that pass/fail grading in particular is great at improving engagement and performance while reducing anxiety.

The way I explain it to my students is that they get full marks if I can tell they made an effort at original thinking (I’ve recently added that AI-generated text is the exact opposite), and that this is their chance to experiment with philosophical writing and to push their own ideas without fear of being marked down for trying something that isn’t playing it safe. A majority of students take very well to this opportunity.

The way I see it, the alleged necessity of policing comes down to the value proposition we offer to students.

If that proposition is nothing more than meeting a requirement for “a degree”, then of course the rational thing for students is to minimize effort to obtain maximum value (i.e. a letter on a transcript).

But this isn’t the value proposition we make to students, and never was. We offer, among other things, indulgence in natural curiosity, preparation for some of the hardest licensing exams, personal enrichment, and transferable professional skills.

Students who take up this value proposition won’t be motivated to cheat. Except, that is, when it is clear to them that the examination does not get them that value but is just something institutional and extraneous. Good course design makes it visible that the examination is part of the value. Short, recurring opportunities to solicit clarification and feedback are just that.

But when maintaining the integrity of examination becomes an end unto itself, and the course design thereby becomes disconnected from our value proposition, this is when the students have no reason not to cheat, save for the fear of getting caught. But focusing on maintaining that fear is like extinguishing a fire with gasoline.

Eric Steinhart
8 months ago

This is an excellent post, and I hope it will help to curb the regressive anti-technological attitudes that will only lead to further marginalization of philosophy.

However, I think it’s still far too firmly rooted in a dying tradition that fails to see the new opportunities revealed by AI. It assumes that LLMs are merely talking encyclopedias. But they are much more than that. Consider:

(1) LLMs are conversational partners, and you can interact with them via the Socratic method. Really: you can ask ChatGPT to define justice, and then hit it with objections and ask for replies. So one type of assignment involves engaging them with Socratic strategies and reporting the results as chat logs. (A minimal script for this appears below.)

(2) LLMs are philosophical partners with uncanny-valley intellects. A student can be tasked with reasoning with them. So here’s an assignment: persuade ChatGPT that it has a soul. (I’ve done this: it’s a fascinating and non-trivial exercise. ChatGPT will resist, so you have to provide it with arguments.) Another assignment: persuade ChatGPT that AIs are worthy of worship and that there can and should be AI-based religions. You might try persuading ChatGPT that it’s a god (not God, but a god). Report the results as chat logs.

(3) LLMs are argument creators. Students can be tasked with getting ChatGPT to come up with new ontological arguments, or with new arguments for abstract objects, for modal realism, etc. I’ve done this; again, it’s a fascinating and non-trivial exercise.

(4) LLMs can be trained on philosophical texts (as Schwitzgebel did with Dennett’s texts). This would be a very interesting larger-scale project for a class.

(5) LLMs are just the tip of the AI iceberg. Have the students learn to use proof assistants (Lean, Isabelle) to do philosophy. (A toy Lean example appears below.)

There are lots of new opportunities here.
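To make (1) and (5) a bit more concrete: here is a minimal sketch of a script a student could use for the Socratic chat-log assignment. It assumes the OpenAI Python client; the model name, system prompt, and output filename are illustrative placeholders, not anything canonical.

```python
# Sketch of assignment (1): the student plays Socrates, the model plays the
# interlocutor, and the whole exchange is saved as a chat log to hand in.
# Assumes the OpenAI Python client (pip install openai); model is a placeholder.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "system",
     "content": ("You are a philosophical interlocutor. Defend your "
                 "definitions until given a compelling counterexample.")},
    {"role": "user", "content": "What is justice?"},
]

print("Raise objections at each turn; type 'quit' to end the session.")
while True:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder: any chat model should work
        messages=messages,
    ).choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    print(f"\nInterlocutor: {reply}\n")

    objection = input("Socrates: ")
    if objection.strip().lower() == "quit":
        break
    messages.append({"role": "user", "content": objection})

# The artifact the student submits is the log itself, not a polished essay.
with open("socratic_log.json", "w") as f:
    json.dump(messages, f, indent=2)
```

The deliverable is the saved log, which can be graded on the quality of the student’s objections rather than on polished prose. And here is a toy version of (5) in Lean 4, just to show how low the entry bar is; the theorem and premise names are invented for illustration, and the argument itself is the student’s exercise, not an endorsement.

```lean
-- Toy of assignment (5): formalize a one-step philosophical inference and
-- let Lean check it. Premise labels and the argument are illustrative only.
theorem conceivable_implies_possible
    (Conceivable Possible : Prop)
    (premise1 : Conceivable → Possible)  -- whatever is conceivable is possible
    (premise2 : Conceivable)             -- a perfect being is conceivable
    : Possible :=                        -- so, a perfect being is possible
  premise1 premise2
```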

Marc Champagne
Reply to  Eric Steinhart
8 months ago

Opportunities are great when optional and freely adopted, not so great when not.

Preston Stovall
Reply to  Eric Steinhart
8 months ago

These are great suggestions. Thank you.

Dustin Locke
Reply to  Eric Steinhart
8 months ago

My prompt to GPT:

“Write a Socratic dialogue between a ChatGPT user and ChatGPT. Have it start with the user asking ChatGPT to define justice. After ChatGPT gives an answer, have the user offer a counterexample, to which ChatGPT revises its answer. Have them go back and forth for several rounds.”

GPT’s reply:

“User: ChatGPT, could you define justice for me?

ChatGPT: Certainly. Justice, in a broad sense, refers to fairness or equitability. It is often associated with ethical correctness based on the law, morality, fairness, or equity. It’s about giving or receiving what is due or deserved, and it is frequently linked with concepts like rights, law, and morality.

User: Interesting. However, let’s consider a counterexample. In a situation where a law is unjust, say a law that discriminates against a certain group of people, would enforcing that law still be considered justice?

ChatGPT: Your example is insightful. Indeed, justice is more complex. In such a case, we might say that justice is not merely adherence to laws, but to fair and morally upright laws. Justice, then, should not uphold laws that are discriminatory or unjust, even if they are technically laws. Rather, it would seek to challenge and rectify such laws.

User: I see where you’re coming from, but consider this: what about vigilantes who take justice into their own hands when they believe the legal system isn’t delivering it? They might be challenging what they perceive as unjust laws or enforcement, but their methods often involve breaking laws themselves.

ChatGPT: That indeed adds a layer of complexity. Perhaps we could say that justice involves not just challenging unjust laws, but also doing so in a way that respects the overarching principles of societal order and fairness. Vigilantism, while often well-intended, typically circumvents due process and can lead to harm or unfairness, thus not aligning with this refined concept of justice.

User: However, there were times in history when societal order was oppressive, and people had to resist it for progress. Take the Civil Rights Movement, for instance. Would you then say their actions were unjust because they disrupted societal order?

ChatGPT: That’s a very poignant example. In such cases, it may be more accurate to say that justice sometimes requires challenging the prevailing societal order when it’s fundamentally unjust. Therefore, justice could be seen as actions that promote equity, fairness, and the protection of human rights, even when it means opposing oppressive systems. However, these actions should strive to reduce harm and create a fairer system, not just to oppose the existing one.”

Eric Steinhart
Reply to  Dustin Locke
8 months ago

No, you missed the whole point.

The suggested assignment is not you asking ChatGPT to write a Socratic dialogue. It’s you (or the student) having a Socratic dialogue with ChatGPT, as if you were Socrates and ChatGPT were (say) Euthyphro.

Eric
Reply to  Eric Steinhart
8 months ago

I’m pretty sure you are the one that missed the point here.

Dustin Locke
Reply to  Eric
8 months ago

^yep 😄

Eric Steinhart
Reply to  Dustin Locke
8 months ago

I wrote: “LLMs are conversational partners, and you can interact with them via the Socratic method. Really: you can ask ChatGPT to define justice, and then hit it with objections and ask for replies. So one type of assignment involves engaging them with Socratic strategies and reporting the results as their chat logs.”

What you did is not what I specified. So, yes, you missed the point entirely. What you did may have some intrinsic interest, but it’s not the assignment I specified.

Eric
Reply to  Eric Steinhart
8 months ago

You can’t actually be this unable to see a point. You specified an assignment as a way of ensuring that students use ChatGPT constructively rather than as a cheat. What Dustin showed is that, *despite what you specified*, it would be trivial for a student to do what he did rather than what you want them to do.

Sam Duncan
8 months ago

These are all excellent points. I’d only add that there’s a pretty strong correlation between making students write a lot and students actually learning transferable skills. That’s one of the main takeaways of Arum and Roksa’s excellent and disturbing “Academically Adrift.” Another conclusion there is that the humanities are actually one of the few fields of study where most students do display statistically significant gains in critical thinking skills, which the authors surmise is at least partially due to the fact that humanities fields tend (or at least tended) to assign a lot of writing. So if we switch to exams, we’re almost certain to teach students less. Likely much less. Especially since high-stakes exams are useless or worse for encouraging any long-term learning, much less transfer. We really do face a choice: do we care more about policing or teaching? If we stick with writing, more cheaters will get through, yes; but if we throw it out, out of an obsession with being meritocracy’s police force, we’ll shortchange everyone else out of a decent education.

Animal Symbolicum
Reply to  Sam Duncan
8 months ago

“There’s a pretty strong correlation between making students write a lot and students actually learning transferable skills.”

I think this is an important point that often gets overlooked by academic types, who, regardless of any explicit philosophy of mind they might espouse, tend to view thinking as taking place “in the head.”

One well-supported explanation for the correlation you mention is that thinking takes place in the embodied attempt to make thoughts communicable. This is opposed to the picture in which writing, speaking, or any other relevantly similar engagements with the material realm are only accidentally related to thoughts — the picture in which writing, say, is only good for translating thoughts that have already formed backstage.

Another Philosopher
8 months ago

I agree with everything Matthew writes, and I think his suggestions are excellent.

However, even if we agree that it is not the duty of professors to prevent plagiarism but to foster a good learning environment, etc., and that professors should find different ways to motivate their students to learn, higher education is still facing a huge threat.

First of all, not every professor is going to be able to motivate their students better—maybe we are not doing the best we can now, but the requirements of a job must be such that the average person doing it can fulfill them. I’m not sure teachers of philosophy have much more in the tank, especially given all the pressures junior faculty and adjuncts are already under.

Second of all, not all students are very interested in learning; many simply want to get a degree so that they can get a job and earn money (and pay their student loans). And even when they are interested in learning, the stakes for them are high and the temptation to take a shortcut is strong.

What if we then say: “So what? That’s not our responsibility”? Well, I agree with that, as I said. But, first, that doesn’t mean that society isn’t facing a huge problem with education—it would be strange, I think, for the very people engaged in that project to reject that responsibility (even if I agree that our primary duty is to teach, not to be cops).

Second, if this problem is not solved, then universities will lose their role as accrediting institutions: Why would a degree have any value in the eyes of others if one can get it so easily?

Again, you could say: “We are philosophy professors, we might have that role now, but that’s just contingent. What we really should be doing is thinking about and teaching philosophy.” And I agree with that, too.

But as a matter of fact, we can only do what we do because we are funded by people who tolerate us—and as of now, they barely do. For most people, the accrediting function is the only argument that slightly convinces them that universities are worth keeping around. We lose that, and there will not be much academic philosophy left.

(I disagree: I think the social value of universities, like that of philosophy, is immense. But if I struggle to convince even academic philosophers, what chance do I have with the man on the Clapham omnibus?)

The argument made in the piece is simply utopian. I like utopian thinking, but I still think LLMs are a major threat to education, even if I agree with the author for the most part.

Klaas Kraay
8 months ago

The pedagogical strategies in the original post are thoughtful, well-meaning, and can be implemented in small classes.

But I worry that most of them can’t be implemented at scale. Many of my classes have over eighty students, and some have hundreds of students. (This is typical at large Canadian universities.)

I would be very grateful for suggestions in a similar spirit that can be implemented in large classes.

(Also, a plea to readers: please consider sharing your pedagogical ideas for the age of LLMs here, so that we can benefit from each other’s wisdom.)

CarlD
Reply to  Klaas Kraay
8 months ago

I loved this piece. It’s not directly about pedagogy, but it’s great to think with and really encouraging about AI possibilities going forward.

https://uxdesign.cc/leverage-the-strengths-of-llms-for-creativity-thinking-58137a8da8b9

Paul Raymont
8 months ago

Thanks, M. N. Smith, for your suggestions. I worry that if many profs move writing assignments into class, then many philosophy students won’t develop their skill at writing longer papers until upper-year undergrad courses (where smaller classes make the new out-of-class assignments more feasible). The ones who enter graduate programs will likely be less well prepared for dissertation writing.

Mich Ciurria
8 months ago

I made a similar argument on the APA Blog & I emphasized that carceral techniques (policing) entrench existing inequalities in education: https://blog.apaonline.org/2023/03/30/ableism-and-chatgpt-why-people-fear-it-versus-why-they-should-fear-it/?amp

P.s. I am presenting on LLMs & oppression at the (virtual) Philosophy for Humans conference tomorrow, if anyone is interested: https://www.philosophyforhumans.com/conference-schedule

Aeon J. Skoble
Reply to  Mich Ciurria
8 months ago

Are you implying that students who come from disadvantaged backgrounds are more likely to cheat? What’s the evidence for that? But in any case, some things are wrong, period. Laws against murder are good laws, and if it turned out that this perpetuated existing socioeconomic inequities, they’d still be good laws. Same with cheating and plagiarism. Rich or poor, I don’t want them doing it.

John Protevi
8 months ago

Here is how I handle group discussion for a small-ish (20 person) class in the LSU Honors College. This is close to suggestion 6 above. (I also do a scaffolded research paper with in-person meetings to discuss drafts, as well as an in-class midterm, but that’s only 15% of the grade so I think that meets the “low-stakes” criterion.)

CONTRIBUTION TO SEMINARS: For each class, a group will be responsible for leading discussion. Sometimes that will include putting answers on a Google Doc to the study questions for each day’s reading assignment. At other times, group members will produce a Google Doc with notes, questions, and answers. During the session, I’ll ask group members to comment on the answers to the study questions as a way of starting the discussion. As the discussion progresses, someone in the group will take live notes as we go along. The group then collates any comments or questions produced by other members of the class to produce the finished product which will then be archived on the course Google page. [15% of course grade.]

Harriet Baber
8 months ago

In other words MORE WORK FOR US. No thanks. I provide students with the resources for learning, if that’s what they want to do. If they don’t, it’s on them. Beyond providing the resources, my job is to grade them as part of the pre-employment/pre-professional school credentialing process. So I’ve switched from term papers to blue books. Not great for me and even worse for students, but that’s the least worst option if I’m going to do my credentialing job.