Resources for Teaching in the Age of ChatGPT & other LLMs


How do large language models (LLMs) affect how we understand our job as teachers, and how do they affect what we should do in order to do that job well?

Zak Kopeikin, Ted Shear, and Julia Staffel (CU Boulder) have compiled some resources to help instructors teach well in an era in which college students have access to ChatGPT and other LLMs.

Please feel free to suggest others in the comments.


Resources for Teaching in the Age of ChatGPT & other LLMs
by Zak Kopeikin, Ted Shear, and Julia Staffel 

Some resources for understanding how large language models work

Opinion pieces and other online resources about teaching with AI

Tips for teaching strategies and assignments that discourage AI use

Tips for teaching strategies and assignments that use AI


I’d also recommend readers check out “A Guide: How Professors Can Discourage and Prevent AI Misuse” by Graham Clay at AutomatED, which summarizes several months of research on the subject.

Comments
Will Behun
8 months ago

I had seen many of the concerns people had about the LLM “detectors” such as Turnitin’s black box. I decided to put some of them to the test, and asked ChatGPT 4.0 to write a short paragraph. I then directly pasted that paragraph into various detectors online (not Turnitin’s). Every one told me it was “100% human created”. I couldn’t help but laugh.

Hannah Ginsborg
8 months ago

Thank you, these are very useful resources, especially the explanations of how LLMs work.

Regarding the tips and strategies for using AI in the classroom (the “if you can’t beat them, join them” strategy), I think it’s important to consider the ethical implications. My own view is that we should not be normalizing the use of e.g. ChatGPT in the classroom, as though it’s just another useful technology for academics, like word-processing or search engines. A large part of what’s problematic about chatbot technology is that it relies fundamentally on the uncredited work of human beings. It has to be “trained” on huge amounts of human-authored text scraped, without permission, from the Internet, and it also relies on a vast amount of very poorly paid work by humans behind the scenes. See e.g.

https://www.theverge.com/features/23764584/ai-artificial-intelligence-data-notation-labor-scale-surge-remotasks-openai-chatbots

Another, related, part of the problem is that the technology is specifically designed to deceive us. It exploits our tendency to anthropomorphize in order to get us to think or feel, even if we know better, that we’re interacting with an intelligent agent. To me, that makes it particularly inappropriate for use in an academic context, where transparency is especially important.

There are other ethical implications, e.g. the energy costs, which appear to be significant, although difficult to estimate.

But it’s the essentially deceptive character of ChatGPT, and the way that its creators draw on the results of human thinking to create the deception, that bother me most about normalizing its use in the classroom. The only way I would consider its use is in the context of a discussion of its ethical implications, or of philosophical issues about AI more generally.

Laura
Reply to Hannah Ginsborg
7 months ago

Thanks for a helpful comment that provided considerations I need to examine further. I was not familiar with the energy use issue and I would like to find out more about how these companies plan to address the problem of training AI on other people’s materials.

Arash Abizadeh
7 months ago

Here is what I’ve decided to do: I provide my students the following guidelines:
https://abizadeh.wixsite.com/arash/post-1/how-not-to-use-chatgpt-in-your-undergraduate-political-theory-class

I guess we’ll see how it works…