Resources for Teaching in the Age of ChatGPT & other LLMs
How do large language models (LLMs) affect how we understand our job as teachers, and what should we do in order to do that job well?
Zak Kopeikin, Ted Shear, and Julia Staffel (CU Boulder) have compiled some resources to help instructors teach well in an era in which college students have access to ChatGPT and other LLMs.
Please feel free to suggest others in the comments.
Resources for Teaching in the Age of ChatGPT & other LLMs
by Zak Kopeikin, Ted Shear, and Julia Staffel
Some resources for understanding how large language models work
- A jargon-free explanation of how large language models work
- A more technical, in-depth explanation of how large language models work (multiple blog posts by philosopher Ben Levinstein)
- A reporter tests if he can get a large language model to say really inappropriate things (he can)
Opinion pieces and other online resources about teaching with AI
- (Daily Nous) Policing Is Not Pedagogy: On the Supposed Threat of ChatGPT (Matthew Smith)
Argues that we should make classes more interesting and engaging to discourage cheating. Includes assignment suggestions.
- Teacher, Bureaucrat, Cop (Thi Nguyen)
A Daily Nous guest post about how we approach cheating, how this affects our relationship with our students, and how it shapes their attitudes towards learning. Not explicitly about ChatGPT, but it raises points about trading off values and cautions against sacrificing teaching value by going full-blown cop.
- Assigning AI: Seven Approaches for Students, with Prompts
An academic paper discussing seven approaches for utilizing AI in classrooms: AI-tutor, AI-coach, AI-mentor, AI-teammate, AI-tool, AI-simulator, and AI-student, each with distinct pedagogical benefits and risks.
- How students can use paraphrasers to bypass AI detection tools
- Concerns about Turnitin’s AI detection software
- More about Turnitin’s AI detection software and its flaws
- Reviewing some AI detection software (Zak)
Tips for teaching strategies and assignments that discourage AI use
- A comprehensive guide to discouraging and preventing AI misuse
This post discusses content- and format-based ways of designing assignments that discourage AI use.
- Advice about reporting AI cheating to the CU Honors Council (Julia)
- An essay assignment called “Reportatio” that discourages AI cheating (Julia)
- How to use track changes in Google Docs to discourage AI cheating (Ted)
Tips for teaching strategies and assignments that use AI
- How to Create and Use a Correcting ChatGPT Activity (Zak)
- Straight to video
- For the template
- For the model/sample
I’d also recommend readers check out “A Guide: How Professors Can Discourage and Prevent AI Misuse” by Graham Clay at AutomatED, which summarizes several months of research on the subject.
Related:
I had seen many of the concerns people had about the LLM “detectors” such as Turnitin’s black box. I decided to put some of them to the test, and asked ChatGPT 4.0 to write a short paragraph. I then directly pasted that paragraph into various detectors online (not Turnitin’s). Every one told me it was “100% human created”. I couldn’t help but laugh.
Thank you, these are very useful resources, especially the explanations of how LLMs work.
Regarding the tips and strategies for using AI in the classroom (the “if you can’t beat them, join them” strategy), I think it’s important to consider the ethical implications. My own view is that we should not be normalizing the use of e.g. ChatGPT in the classroom, as though it’s just another useful technology for academics, like word processing or search engines. A large part of what’s problematic about chatbot technology is that it relies fundamentally on the uncredited work of human beings. It has to be “trained” on huge amounts of human-authored text scraped, without permission, from the Internet, and it also relies on a vast amount of very poorly paid work by humans behind the scenes. See e.g.
https://www.theverge.com/features/23764584/ai-artificial-intelligence-data-notation-labor-scale-surge-remotasks-openai-chatbots
Another, related part of the problem is that the technology is specifically designed to deceive us. It exploits our tendency to anthropomorphize in order to get us to think or feel, even if we know better, that we’re interacting with an intelligent agent. To me, that makes it particularly inappropriate for use in an academic context, where transparency is especially important.
There are other ethical implications, e.g. the energy costs, which appear to be significant, although difficult to estimate.
But it’s the essentially deceptive character of ChatGPT, and the way that its creators draw on the results of human thinking to create the deception, that bother me most about normalizing its use in the classroom. The only way I would consider its use is in the context of a discussion of its ethical implications, or of philosophical issues about AI more generally.
Thanks for a helpful comment that provided considerations I need to examine further. I was not familiar with the energy use issue and I would like to find out more about how these companies plan to address the problem of training AI on other people’s materials.
Here is what I’ve decided to do: provide my students with the following guidelines:
https://abizadeh.wixsite.com/arash/post-1/how-not-to-use-chatgpt-in-your-undergraduate-political-theory-class
I guess we’ll see how it works…