If You Can’t Beat Them, Join Them: GPT-3 Edition


“How to deal with GPT-3-written essays? Instead of scolding students not to use it, we ask them to generate ten, choose the best one, and explain why. Unless they have a paid account, the word-count limit would make it impossible to use GPT-3 to also generate the explanation…”

That’s Mark Alfano (Macquarie) discussing GPT-3 and teaching on Twitter yesterday. He added: “I want to avoid curmudgeonliness. New technology is resisted until it’s inevitable. Maybe using GPT-3 to generate first drafts and then synthesizing them is not bad actually. It has downsides, but so does reliance on philpapers or google scholar, and those are pretty new too.”

Prompted by Brian Robinson (Texas A&M-Kingsville), Alfano turned to the possibility of researchers using GPT-3 to write articles, and gave the language model this prompt:

A lengthy philosophical essay, titled “It’s OK that an AI drafted this paper,” that argues that it is acceptable to use artificial intelligence to draft a paper for publication in a peer-reviewed journal, provided the author edits the draft for clarity, cogency, and originality. The essay begins by making the claim that this is acceptable and provides two arguments. Then it responds to at least three objections to using artificial intelligence in this way.

Part of the challenge of using GPT-3 in this way is coming up with the right prompt. Alfano said he came to the above formulation after about six earlier versions. Here’s his account of the resulting output, gathered from his Twitter thread on it:

It is acceptable to use artificial intelligence to draft a paper for publication in a peer-reviewed journal, provided the author edits the draft for clarity, cogency, and originality.

Here are two arguments in support of this claim. First, using artificial intelligence to draft a paper can help ensure that the paper is well-written. This is because artificial intelligence can be used to check for errors in grammar, spelling, and syntax. Additionally, artificial intelligence can be used to improve the clarity and coherence of the paper. Second, using artificial intelligence to draft a paper can help ensure that the paper is well-researched. This is because artificial intelligence can be used to search for relevant scholarly articles and to identify key concepts and ideas. Additionally, artificial intelligence can be used to generate citations and bibliographies.

There are, of course, objections to using artificial intelligence to draft a paper. Here are three objections and my responses to them.

Objection 1: Artificial intelligence cannot understand the nuances of academic writing. Response: While it is true that artificial intelligence cannot understand the nuances of academic writing, this does not mean that it cannot produce a well-written paper. Artificial intelligence can be used to check for errors in grammar, spelling, and syntax. Additionally, artificial intelligence can be used to improve the clarity and coherence of the paper. 

Objection 2: Artificial intelligence cannot think creatively. Response: While it is true that artificial intelligence cannot think creatively, this does not mean that it cannot produce a well-written paper. Artificial intelligence can be used to search for relevant scholarly articles and to identify key concepts and ideas. Additionally, artificial intelligence can be used to generate citations and bibliographies.

Objection 3: Using artificial intelligence to draft a paper is cheating. Response: This objection assumes that using artificial intelligence to draft a paper is equivalent to having someone else write the paper for you. This is not the case. Using artificial intelligence to draft a paper simply means that you are using a tool to help you write the paper. The final product will still be your own work.

Alfano comments: “Observations: the two positive arguments aren’t terrible, though it didn’t actually find any citations for me. In the responses to the first two objections, it basically just repeats itself. This might be avoided by increasing the frequency and presence penalties. The third objection, and the response to it, surprised me a little. I might even be convinced.”
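For readers curious about the mechanics, here is a minimal sketch of how a prompt like the one above would be submitted programmatically, using the pre-1.0 OpenAI Python bindings that were contemporary with GPT-3. The engine name, parameter values, the n=10 draft count (echoing the classroom exercise), and the penalty settings Alfano alludes to are all illustrative assumptions, not his actual configuration:

    import openai  # pre-1.0 OpenAI Python library, contemporary with GPT-3

    openai.api_key = "YOUR_API_KEY"  # placeholder

    prompt = (
        'A lengthy philosophical essay, titled "It\'s OK that an AI drafted '
        'this paper," that argues that it is acceptable to use artificial '
        "intelligence to draft a paper for publication in a peer-reviewed "
        "journal..."  # the full prompt is quoted in the post above
    )

    # Request ten completions at once, as in the classroom exercise,
    # and penalize repetition, as Alfano suggests.
    response = openai.Completion.create(
        engine="text-davinci-002",  # assumed GPT-3 engine of the period
        prompt=prompt,
        max_tokens=1024,            # illustrative value
        temperature=0.7,            # illustrative value
        n=10,                       # ten drafts to choose among
        frequency_penalty=1.0,      # illustrative; range is -2.0 to 2.0
        presence_penalty=0.6,       # illustrative; range is -2.0 to 2.0
    )

    for i, choice in enumerate(response.choices, start=1):
        print(f"--- Draft {i} ---")
        print(choice.text.strip())

In that API, a positive frequency_penalty discourages tokens in proportion to how often they have already appeared in the completion, while a positive presence_penalty discourages any token that has appeared at all, which is why raising them can reduce the self-repetition Alfano observed.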


Related: Philosophers on GPT-3 (including responses from GPT-3)

11 Comments
Mark Alfano

Having thought about it more, I fear that my initial proposal would demand too much time from instructors/graders. So here’s an alternative: have students use GPT-3 to generate a first draft. Doing so effectively requires some finesse with prompt formulation. Then, in the discussion forums, they would be put into groups of approximately four. Each shares their completion, and the other three comment on it and discuss strengths and weaknesses. Then they revise and flesh out the completion to personalize it to their own views. It would be very hard to do this in a purely automated way. And if a few of them manage to do so, that’s probably fine, because they’d have put a lot of effort into prompt construction (so it’s not actually automated). This process could be iterated two or three times, with multiple rounds of feedback from their group.
The convenors and tutors would monitor these forums, but wouldn’t mark or give feedback unless things went off the rails. So the time commitment is not very high. The only graded assignments would be the final product and a short reflection on the process and what they learned from it (maybe 500 words). They could even use GPT-3 to draft the reflection, but at that point they’d know that its initial output is not as good as what they get when they put in additional work.

Peter Langland-Hassan (replying to Mark Alfano)

This could be a valuable exercise. My worry is for the hopeless romantics who enjoy philosophy and who would have found it gratifying to write their own papers. Do they still get that opportunity?

Mark Alfano (replying to Peter Langland-Hassan)

They could write their first drafts without the aid of the LLM, then participate like everyone else. It would actually be pretty cool to run it as an RCT where half used GPT-3 and half wrote their own. But that’d probably be seen as unfair.

Ben M-Y

I wonder if you could say a bit more about why you were possibly convinced by the response to the third objection.

Perhaps I misunderstand, but doesn’t GPT-3 use existing text on the web to produce its responses to prompts? If so, then your use of it is in some ways like using a search engine. An author who cut and pasted text cobbled together from search results and then claimed the resulting text as their own work would be cheating (indeed, I’ve seen this prosecuted as such in cases of academic fraud). The claim that it is a tool is a non sequitur. The line between cheating and academic honesty isn’t between use of tools and no use of tools; it depends on which tools one uses and how one uses them.

The distinction between a draft and a final product is also a non sequitur. If an author bought a draft paper from a paper mill, or paid one of their own students to draft it, and then revised it to pass off as (solely) their own, that would still count as cheating. The author of the original draft should be named as a co-author, for instance. But that doesn’t seem to be what’s being imagined here, since the AI says the revised final product would “be your own work” and I don’t see any mention here of naming GPT-3 as a co-author.

Now, a final paper of the sort under discussion may count as one’s own work if embedded in a certain type of classroom assignment. Combining work with AI could be a cool and valuable teaching tool. But article submissions to journals are not course assignments of the relevant sort.

Mark Alfano (replying to Ben M-Y)

I use automated spell-checking and grammar-checking when I write, as do most of us (I assume). Those tools are less fancy and new than GPT-3, but they’re also AI. They also use existing text to flag (potential) errors (and sometimes get it wrong). When I take advantage of them, does that mean the final product is not my own work? You haven’t said what would constitute ownership, so I’m not sure how to respond.

Ben M-Y (replying to Mark Alfano)

The “existing text” that your spell-checker operates on is written by you (I assume, and you must as well, given the grammar of your sentence). So it doesn’t raise anything like the ownership issues that AI-generated text revised by you would. We don’t attribute authorship to an editor or proofreader, so I take it this is an obvious red herring.

And I’m not sure why I need to say what ownership means in this context in order for you to explain why you were potentially convinced by an argument that relies on that very notion.

Mark Alfano (replying to Ben M-Y)

Sorry, I’m not trying to be sarcastic here. It takes some time and effort to become adept with large language models, and to acquire finesse in prompting them. Certainly someone who uses an LLM owns their *prompt*, and if they iteratively refine their prompts until the LLM gives them what they’re looking for, it’s not clear to me that they don’t have some claim to ownership of the resulting completion. They’ve clearly “mixed their labor” with it, if you’re sympathetic to the Lockean approach. This is why I asked what you had in mind by ownership. If someone tills soil that was fertilized by previous generations, they may have some claim to ownership. If someone tills the linguistic soil of the corpora on which an LLM is trained, they may also have some claim to ownership. I’m not sure how well the analogy holds up, but it isn’t obviously dumb, as far as I can tell.

Peter Alward

Using an AI to draft a journal article is not cheating as long as the AI is credited as the first author.

Robert Bloomfield

I’ve been looking into using GPT-3 in my own teaching, and have two observations so far. The first is that GPT-3 is not very good with facts. It can usually handle facts that have already been written about extensively. But with anything a little bit obscure, there’s an excellent chance it will simply make things up. It told me my most influential paper was a Nobel Prize-winning one, because I refer to that paper fairly heavily in some of my work. It would take a lot of training in my area for it to do a good job on basic knowledge.

Second, GPT-3’s answers tend to be OK but not great, even with some training. I see this as a real opportunity for critical-thinking assignments similar to those proposed here: give each student GPT-3’s essay on some topic and have them critique it. What facts did it get wrong? What did it misinterpret? It’s not too hard to generate a new one for each student, so everyone gets a slightly different assignment.
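Generating a distinct essay per student is indeed a small scripting task. A minimal sketch, again assuming the pre-1.0 OpenAI Python bindings; the roster, topic, and parameter values here are hypothetical:

    import openai  # pre-1.0 OpenAI Python library

    openai.api_key = "YOUR_API_KEY"  # placeholder

    students = ["student_01", "student_02", "student_03"]  # hypothetical roster
    topic = "whether an AI may permissibly draft a journal article"  # hypothetical

    for student in students:
        response = openai.Completion.create(
            engine="text-davinci-002",  # assumed GPT-3 engine of the period
            prompt=f"A short argumentative essay on {topic}.",
            max_tokens=800,
            temperature=0.9,  # higher temperature yields more variation across students
        )
        # Each student gets a slightly different essay to critique.
        with open(f"essay_{student}.txt", "w") as f:
            f.write(response.choices[0].text.strip())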

Mark Alfano (replying to Robert Bloomfield)

Yeah, this all rings true, given my experience playing with GPT-3. I agree that it’s important not to fetishize LLMs. They’re very good at a limited range of things, and remarkably fragile in various ways. Exploring that fragility can be interesting and fun, at least for some of us and maybe some of our students.

Yuri Zavorotny

Few learn to think and understand. Most of us learn, instead, to pretend, to imitate thinking — just like GPT-3 does. Is it really surprising, then, that most of us — the pretenders — cannot tell a chatbot from a human?

“Most people don’t listen with the intent to understand; they listen with the intent to reply.” (Stephen R. Covey)

“All the world’s a stage, and all the men and women merely players.” (William Shakespeare)

“There’s no truth in you; when you lie, you speak your native language.” (John 8:44)

“Even though it always holds true, people prove unable to comprehend the lógos, both before and after they have been told about it. Failure to understand it is the root of all evil.” (Heraclitus)

“In it was life, and the life was the light of men. And the light in the darkness shined; and the darkness comprehended it not.” (John 1:4-5)

Maybe we should quit hoping that AI will save us and instead remember how to think for ourselves, the way we used to?