AI, Teaching, and “Our Willingness to Give Bullshit a Pass”


There has been a fair amount of concern over the threats that ChatGPT and AI in general pose to teaching. But perhaps there’s an upside?

Eric Schliesser (Amsterdam) entertains the possibility:

Without wanting to diss the underlying neural network, ChatGPT is a bullshitter in the (Frankfurt) sense of having no concern for the truth at all… 

I have seen many of my American colleagues remark that while ChatGPT is, as yet, unable to handle technical stuff skillfully, it can produce B- undergraduate papers. In a very sophisticated and prescient essay last summer, the philosopher John Symons had already noticed this: “It turns out that the system was also pretty good, certainly as good as a mediocre undergraduate student, at generating passable paragraphs that could be strung together to produce the kinds of essays that might ordinarily get a C+ or a B-.” Symons teaches at Kansas, but I have seen similar claims by professors who teach at the most selective universities…

But this means that many students pass through our courses and pass them in virtue of generating passable paragraphs that do not reveal any understanding… [I]t is not a good thing that one can pass our college classes while bullshitting thanks to (say) one’s expensive private, high school education that taught one how to write passable paragraphs.

This state of affairs helps partially explain, I think, the contempt in which so many in the political and corporate class (especially in Silicon Valley) hold the academy, and the Humanities in particular… And, as I reflected on the academics’ response to ChatGPT, who can blame them? The corporate and political climbers are on to the fact that producing grammatically correct bullshit is apparently often sufficient to pass too many of our introductory courses… And if introductory courses are their only exposure, I suspect they infer, falsely, from this that there is no genuine expertise or skilled judgment to be acquired in the higher reaches of our disciplines.

That there is such contempt is clear from the fact that in countless recent political controversies, the Humanities are without influential friends. Now, I don’t mean to suggest masochistically that this state of affairs is solely the product of what we do in our introductory classrooms. For all I know it is the least significant contributing factor of a much more general cultural shift. But it is to be hoped that if ChatGPT triggers us into tackling how we can remove our willingness to give bullshit a pass — even for the wrong reason (combating the threat of massive plagiarism) — then it may help us improve higher education.

The above is an excerpt from a post at Crooked Timber.

11 Comments
James Myers
1 year ago

What options exist or are being developed to test understanding, particularly understanding of cause and effect relationships?

NnES
1 year ago

I speculate that, in many US universities, C+ or B- are practically the lowest grades the instructor can give without putting the student in grave academic trouble (e.g., academic probation, losing funding). Hence, in many general education courses at least, I get the impression that the instructor needs to be very determined to give a lower grade than C+.

Thus, I speculate that the author’s suggestion would effectively turn an intro philosophy course into a weeder course, of the sort many STEM departments boast about as a mark of ‘rigor’. Whether this is desirable, or whether philosophy can afford this measure, is a different problem, though.

Caligula's Goat
Reply to  NnES
1 year ago

I think there’s something to this, NnES, but I would say that a B-/C+ (or maybe even a C at some places) is the lowest grade an instructor can give a student without getting THEMSELVES in some kind of trouble. Anyone who has had a student file a grievance against a grade knows that our administrators often don’t have our backs on these sorts of matters (the customer is often right, even on matters of their own grades).

So I think it’s hard, especially for NTT faculty, to fail a student who turns in work that is written intelligibly and addresses the prompt (even if it addresses it at a ChatGPT level of understanding).

Justin Kalef
Reply to  Caligula's Goat
1 year ago

An excellent point. I think the best solution is to create new forms of assessment that clearly rule out the possibility of a passing grade for bullshitting, by making it no longer a judgment call (even a fairly obvious one) whether a student who bullshits has satisfied the assignment.

Philosophy teacher at big u.s. state school
1 year ago

I have taught many students who come in writing below the level of what ChatGPT can produce but who, with some elbow grease and a modest amount of tutoring from me, end up writing at roughly the level of what an AI could produce. It’s sad to say, but many people come out of high school, and even a few years of college, without being able to write an essay that demonstrates even the extremely superficial acquaintance with the ideas that ChatGPT can emulate. A worry I have about the suggestion that we lower our “tolerance for bullshit” is that it would make this growth invisible to my grading scheme (that is, it would treat everything at or below the level of what an AI could produce as work that does not suffice for passing). Maybe this would incentivize students to work harder, but I think it would also make it much harder for teachers like me to educate people who failed to achieve at least AI-level writing skills in high school. This seems to me a cost that outweighs the benefits.

Ash
1 year ago

But so many students enter college unable to string a few coherent sentences together (I mean that without exaggeration), and many of them never achieve even a superficial understanding; they’re just too confused. Raising our standards to require something better than what ChatGPT could write would mean huge numbers of students would fail.

Justin Kalef
Reply to  Ash
1 year ago

And the alternative is to pass students who, apparently, should not pass! Perhaps it’s high time that we stop doing that.

obvious
Reply to  Ash
1 year ago

Good. The point is that if they can’t do that then they’re not ready for university. If you can’t do university-level work, then you should fail university classes.

Jen
1 year ago

Artificially intelligent agents like ChatGPT produce responses by applying algorithms trained on vast amounts of text. Roughly, the system simulates conversation by guessing at an appropriate response to new inputs, based on the patterns in the text to which it has been exposed. The content of its responses depends on that text, and thus tends to be plainly unoriginal: mostly stuff that can be found by a sophisticated Google search. Unfortunately for us, it can express that unoriginal content in a way that evades plagiarism-detection software relatively well.
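To make the “guessing” concrete, here is a minimal sketch, assuming the open-source Hugging Face transformers library and the public GPT-2 checkpoint (a small, older relative of the models behind ChatGPT; the prompt is purely illustrative). The model assigns a probability to every possible next word-piece, and fluent text is produced by repeatedly sampling from those probabilities; at no point is anything checked for truth, which is what makes the Frankfurtian “bullshit” label apt.

```python
# Minimal sketch of next-token "guessing" in a causal language model.
# Assumes the Hugging Face `transformers` library and the public GPT-2
# checkpoint; the prompt below is purely illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "A bullshitter, in Frankfurt's sense, is someone who"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # Scores over the whole vocabulary for the *next* token only.
    logits = model(**inputs).logits[0, -1]

probs = torch.softmax(logits, dim=-1)   # scores -> probability distribution
top = torch.topk(probs, k=5)            # the model's five likeliest guesses
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: {p.item():.3f}")
```

Nothing in that loop consults a source or verifies a claim; generation is probability all the way down, so the output can be fluent and “passable” while exhibiting no concern for truth.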

While the ability just mentioned would seem to be its virtue, it may betray a vice. Its dependence on patterns in its training text seems to explain why it has trouble with adversarial interactions (many of which have been shared on this blog), whether the adversary attempts to get it to fail to make the use-mention distinction, or to get it to explain how to carry out the quasi-nonsensical activity of removing your socks without first removing your shoes. It also seems to explain why it has trouble reasoning about novel or relatively unknown scenarios, such as the shape produced in a piece of paper after folding and cutting, or the Knights and Knaves puzzle. So, it would seem that a vice of AI is its inability to deal with novel and adversarial scenarios.

This vice points us in the direction of a potential solution to our problem. We want our students to write papers without assistance from AI. Since artificial agents struggle with novel and adversarial scenarios, we could assign papers that require students to deal with such scenarios. This would likely require us to be inventive in teaching and in writing assignment instructions. But the benefits would likely outweigh the costs for those of us who care enough about the quality of our contributions to education.

Bob
1 year ago

Assuming that AI-written essays do essentially look just like “mediocre” undergrad essays, it does not follow that “many students pass through our courses and pass them in virtue of generating passable paragraphs that do not reveal any understanding.” Just because a chatbot can generate a similar standard of essay without understanding does not mean that human students (writing without AI assistance) do not “understand” what it is they are writing, or writing about, nor that their production of these passable paragraphs reveals no understanding on the part of the human (even if they don’t understand the subject brilliantly, or don’t elucidate it to A-grade standard on the page).

Compare: I rest a fishing rod on a prop with a machine attached, with a clever adaptive mechanism so it will be able to reel the fish in responsively. I put a human fisherman next to it with their own rod. The lake is in fact nearly empty, resulting in poor catches. The mediocre haul from an 8-hour stretch of fishing activity might be basically identical. However, on the part of the fisherman I can say that some patience has been revealed, while I can say no such thing of the machine, even though the output is the same.

Preston Stovall
Reply to  Bob
1 year ago

Great analogy.