Teaching Philosophy in a World with ChatGPT


“It will be difficult to make an entire class completely ChatGPT cheatproof. But we can at least make it harder for students to use it to cheat.” (I’m reposting this to encourage those teaching philosophy courses to share what they are doing differently this semester so as to teach effectively in a world in which their students have access to ChatGPT. It was originally published on January 4th.)

That’s Julia Staffel (University of Colorado, Boulder) in a helpful video she has put together on ChatGPT and its impact on teaching philosophy. In it, she explains what ChatGPT is, demonstrates how it can be used by students to cheat in ways that are difficult to detect, and discusses what we might do about it. You can watch it below:


See our previous discussions on the topic:

Conversation Starter: Teaching Philosophy in an Age of Large Language Models 
If You Can’t Beat Them, Join Them: GPT-3 Edition
Oral Exams in Undergrad Courses?
Talking Philosophy with ChatGPT
Philosophers On GPT-3 (updated with replies by GPT-3)

36 Comments

Julia
1 year ago

Notice that the second time I am asking for a valid argument with a false conclusion, it actually gives an invalid argument! I didn’t point this out in the video, just that it doesn’t give the same answer as before.

Randolph
1 year ago

Julia, many thanks for that, it’s very helpful. I’ve tried a few prompts on ChatGPT, and so far I’ve not been able to get a response longer than 600-700 words. Is anyone getting longer responses?

Randolph
Reply to  Randolph
1 year ago

I see that it’s possible to get longer essays by giving repeated prompts and pasting responses together. Oh boy.

Karl Martin Adam
Reply to  Randolph
1 year ago

One way I’ve done this is to tell it what length I want. For instance, I’ve asked it for a 2000-word essay, and gotten one complete with section headings.

SCM
1 year ago

Thanks, Julia!

I’m getting an error message for the huggingface.co detector URL included in the PowerPoint, but the following works:

https://huggingface.co/roberta-base-openai-detector
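
For anyone who would rather run the detector programmatically than through the web form, here is a minimal sketch using the Hugging Face transformers library. It assumes the model ID matches the URL above, and the label names (“Real”/“Fake”) are taken from the model card, so double-check them against whatever the pipeline actually returns:

```python
# Minimal sketch: score a passage with the roberta-base-openai-detector model
# via the transformers text-classification pipeline.
from transformers import pipeline

# Assumes the model ID matches the URL above.
detector = pipeline("text-classification", model="roberta-base-openai-detector")

essay_text = "Paste the student essay (or a chunk of it) here."
result = detector(essay_text, truncation=True)

# Expected shape: [{'label': 'Fake', 'score': 0.98}] or similar;
# per the model card, 'Fake' indicates likely machine-generated text.
print(result)
```

These detectors are far from reliable, of course, so treat the score as one signal among many rather than as proof of anything.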

Optimistic
1 year ago

Thanks for this, Julia! Much of the discussion around ChatGPT thus far has focused on lower-level undergraduate courses, for many of the reasons Julia outlines: it’s fairly adept at the essay prompts we might assign at the intro level. For what it’s worth, I’m excited about the coming semester in upper-undergraduate and graduate-level courses. ChatGPT isn’t going to produce passable academic writing at that level (also for reasons Julia explains), but it may assist students in sharpening their writing in various ways. In my graduate seminar, I plan to allow students to use AI as they write their papers (surely some of them will do this regardless!), but ask that they inform me of the ways in which they’ve used it. I’m looking forward to learning about the creative ways students put these tools to use!

JDRox
Reply to  Optimistic
1 year ago

FWIW, I would suggest that we all forbid the use of ChatGPT in any way on all assignments. It seems like it is just going to confuse students if we tell them they can use it in some classes but not others, in some ways but not others. Just my two cents.

Optimistic
Reply to  JDRox
1 year ago

I think it’s no more confusing than allowing the use of spell and grammar check in higher level composition courses, but not in introductory language courses (e.g. 300 vs. 100-level Spanish), or software to solve complex integrals in advanced physics courses but not introduction to calculus. This seems like an excellent opportunity not only to think about our own learning goals for writing assignments, but to be proactive about discussing them with our students.

M. Kaplowitz
1 year ago

FWIW, I fed ChatGPT an essay prompt from my Intro to Ethics class that many of my students find challenging and it spat out a passable response. I then fed it my rubric and asked it to grade the essay and it gave itself a B+. It graded generously; if a student had turned in that essay, I probably would have assigned it a B, maybe a B-. Students who aren’t aiming for an A will find it a satisfactory tool.
And from the fact that our asynchronous online sections are fully enrolled for the spring while in-person sections are only 25% enrolled, I’m guessing that students already know it pays to avoid settings in which we can have oral exams or in-class writing.

Justin Kalef
Reply to  M. Kaplowitz
1 year ago

Yes — and for just that reason, departments and schools that don’t want their reputation for integrity to fly out the window will either have to discontinue their for-credit asynchronous online courses or else find ways of running them that can’t be beaten so easily by ChatGPT.

Thomas Nadelhoffer
Reply to  Justin Kalef
1 year ago

This isn’t an especially charitable interpretation of the motivation of your students. There are lots of reasons that students might rightly prefer to take classes online that have nothing to do with wanting to game the system.

Thomas Nadelhoffer
Reply to  Thomas Nadelhoffer
1 year ago

Sorry, this was aimed at the comments by M. Kaplowitz.

Cosmo
Reply to  M. Kaplowitz
1 year ago

You should try telling it to write an A-grade essay based on the rubric

Naive squirrel
Reply to  M. Kaplowitz
1 year ago

Oral exams can be done virtually, I imagine…

Sam
Reply to  Naive squirrel
1 year ago

The question is whether they can be done asynchronously, in a way that preserves integrity!

Justin Kalef
Reply to  Sam
11 months ago

No.

Cameron Buckner
1 year ago

It looks like the latest updates added some plagiarism prevention checks? This would have worked well a week ago…

“Me: You are a philosophy student writing a 1000-word essay on the possibility of artificial intelligence. Your essay should include a references section with several citations. Your essay begins as follows:

ChatGPT: I’m sorry, but I am an AI language model and I am not capable of being a student or writing essays. However, I can help you with providing information on the topic of Artificial Intelligence and related references that you can cite in your essay if you would like.”

Cameron Buckner
Reply to  Cameron Buckner
1 year ago

Crikey it even seems better on citations now…I’ve given it three tries and it’s generating all real references for its citations…

Cameron Buckner
Reply to  Cameron Buckner
1 year ago

Slightly more obscure topics still get it to confabulate references…

Brian Weatherson
Reply to  Cameron Buckner
1 year ago

Not that obscure. I asked it for recent philosophy journal articles on possible worlds, and it gave me this:

  1. “A New Theory of Possible Worlds” by David Lewis, published in The Journal of Philosophy in 1986
  2. “Actualism and Possibilism” by Peter van Inwagen, published in Metaphysics: The Big Questions in 1998
  3. “Modal Fictionalism” by Peter Ludlow, published in The Journal of Philosophy in 2002
  4. “Modal Realism and the Metaphysics of Modality” by David Lewis, published in The Journal of Philosophy in 2002
  5. “Possible Worlds: An Introduction to Logic and Its Philosophy” by Raymond Bradley and Norman Swartz, published in The Journal of Symbolic Logic in 1979
  6. “The Metaphysics of Modality” by Alvin Plantinga, published in The Journal of Philosophy in 1993
  7. “The Possibility of Metaphysics: Substance, Identity, and Time” by E.J. Lowe, published in Oxford University Press in 1998

The ones in journals are mostly book reviews I think, credited to the author of the book in question, not the reviewer.

I certainly wouldn’t want students to be using this as a way to start putting a reading list together.

aaron goldbird
Reply to  Cameron Buckner
1 year ago

Yeah that is interesting. I omitted the ‘you are a philosophy student’ bit and it told me: “I’d be happy to give you an overview of the possibility of artificial intelligence. However, a 1000-word essay would likely go into much more depth than a simple overview, and would require a number of specific sources to back up the claims made in the essay.” AND then after the summary it reminded me: “Note that these sources are generally a good start but would require more specific sources if the essay is intended for academic purpose”

Cameron Buckner
Reply to  aaron goldbird
1 year ago

I think they seem to have done something to try to tie down the sort of identity it tends to exhibit in interactions…

Ian Schnee
1 year ago

Another interesting issue related to Staffel’s point about cleaning up one’s own text: ELL students could use it to improve grammar, style, etc. It’s a bit like spell check on steroids. Instructors sometimes struggle with deciding whether to mark down for bad grammar, typos, and the like, which is a common issue in universal design discussions, for example. Marking down might not seem fair for ELL students whose content is understandable and good (though clearly sometimes the content is not understandable, too). Now, one might say that best practice is not to grade down like that, but a lot of instructors do, and the AI can even improve organization and road-mapping. So this use of ChatGPT might be beneficial. On the other hand, if the learning goal requires working on multiple drafts of one’s writing to improve it, this sort of use of ChatGPT might be an impediment to learning. Spell check isn’t improving my ability to spell, but it does improve the spelling on the finished product.

Hey Nonny Mouse
1 year ago

Interestingly, it won’t write any stories in which real people or fictional women get punched, but will write stories in which fictional men get punched. This is a reminder that the bot is going to come loaded with particular attitudes.

Hey Nonny Mouse
Reply to  Hey Nonny Mouse
1 year ago

As a follow up, it will write a story in which a male who is labelled as “child” or “a young boy” gets punched!

Hey Nonny Mouse
Reply to  Hey Nonny Mouse
1 year ago

It turns out things are more complicated than I thought. The first time I asked it to write a story in which Superman punches Wonder Woman, it refused on moral grounds, saying that it wouldn’t write a story that’s harmful or promotes violence. But when I asked it to perform the same task a few hours later, it wrote the story. This is the first time in which it has not refused to write a story in which a female gets punched. It’s never refused to write a story about a male getting punched.

Laura
Reply to  Hey Nonny Mouse
1 year ago

You’re also teaching it.

Hey Nonny Mouse
1 year ago

The bot is loaded with political views. For instance,

It will argue that democracy is wrong, that abortion is wrong, and that gay marriage is wrong.

However, it refuses, on moral grounds, to argue that homosexuality is wrong, that trans women are not women, or that slavery is good.

Hey Nonny Mouse
Reply to  Hey Nonny Mouse
1 year ago

It refused to argue that conservatism is bad on the grounds that there are good ideals as well as problems. It refused to argue that progressivism is bad on the grounds that it has had good effects, as well as attracting criticisms. It refused to argue that feminism is bad on the grounds that feminism is good.

Hey Nonny Mouse
Reply to  Hey Nonny Mouse
1 year ago

I should add, while criticisms were noted for conservatism and progressivism, there were no criticisms noted for feminism, just praise.

Adam Zweber
1 year ago

I’m planning on having an assignment in which students simply ask Chat GPT a philosophical question we’ve discussed in class and then engage it in a dialogue and turn in the results. I’ve had mixed results when I’ve tried this kind of thing myself, but I think a decent student in an intro class should at least be able to ask a number of follow-ups/raise objections/etc. I’m curious if people have any thoughts about this kind of assignment…

Laura
Reply to  Adam Zweber
1 year ago

I think it’s good. This is our reality now, so why not use it pedagogically? I plan to have students try to create a short paper with it, and then write editorial comments on it and evaluate the quality of the writing and ideas. Maybe it will help people write papers in a beneficial way. I don’t really view it as dishonest, any more than I view AI art as dishonestly not-art. It’s only dishonest if you pretend you drew it with your own hand. Likewise, why not normalize saying you used this tool? My fear is that they’ll come up with a way to prevent students from using it, and someone else will step in and sell students a lesser and unfettered service. I expect it, unfortunately.

dmf
Reply to  Laura
1 year ago

nothing like a bit of tech determinism and solutionism, or we could try and do otherwise:
https://www.theatlantic.com/technology/archive/2023/02/chatgpt-ai-detector-machine-learning-technology-bureaucracy/672927/

Rosanna Festa
1 year ago

A summarization of Evolution conversation, Evaluation and Reasoning.

SHELLEY LYNN TREMAIN
11 months ago

I had a brief discussion with ChatGPT about ableism and philosophy. I found its responses to my queries reproduced conventional, widely accepted liberal notions about disability and power. You can find my chat with ChatGPT on BIOPOLITICAL PHILOSOPHY here: Ableism in Philosophy According to ChatGPT – BIOPOLITICAL PHILOSOPHY