COPE: AI Tools Aren’t Authors. Philosophers: Not So Fast


The Committee on Publication Ethics (COPE), whose standards inform the policies and practices of many philosophy journals and their publishers, has declared that “AI tools cannot be listed as an author of a paper.”

[Manipulation of Caravaggio’s “Saint Jerome Writing” by J. Weinberg]

COPE says:

AI tools cannot meet the requirements for authorship as they cannot take responsibility for the submitted work. As non-legal entities, they cannot assert the presence or absence of conflicts of interest nor manage copyright and license agreements.

Authors who use AI tools in the writing of a manuscript, production of images or graphical elements of the paper, or in the collection and analysis of data, must be transparent in disclosing in the Materials and Methods (or similar section) of the paper how the AI tool was used and which tool was used. Authors are fully responsible for the content of their manuscript, even those parts produced by an AI tool, and are thus liable for any breach of publication ethics.

COPE’s position matches up with that of Nature and other publications (see this previous post). (via Brian Earp)

In response to Nature’s earlier announcement, philosophers Ryan Jenkins and Patrick Lin of the Ethics + Emerging Sciences Group at California Polytechnic State University raised some concerns about this kind of “simple policy”. In their report, “AI-Assisted Authorship: How to Assign Credit in Synthetic Scholarship,” they write:

Nature argues that crediting AI writers in the acknowledgements serves the goal of transparency. While this may be true in many cases, it could also help to hide or grossly understate the role and substantial contributions of AI writers to the paper, which is counterproductive to transparency.

Nature also argues AI writers should not be credited as authors on the grounds that they cannot be accountable for what they write. This line of argument needs to be considered more carefully. For instance, authors are sometimes posthumously credited, even though they cannot presently be held accountable for what they said when alive, nor can they approve of a posthumous submission of a manuscript; yet it would clearly be hasty to forbid the submission or publication of posthumous works.

Thus, a more nuanced, middle-ground solution may be needed, as satisfying as a simple policy might be.

Jenkins and Lin suggest framing the matter around two questions.

The first concerns what they call “continuity”:

How substantially are the contributions of AI writers carried through to the final product? To what extent does the final product resemble the contributions of AI? What is the relative contribution from AI versus a human? The calculations are always difficult, even if the coauthors are human. Some journals routinely require statements of relative contribution to add clarity and nuance when multiple humans are sharing credit.

The second concerns what they call “creditworthiness”:

Is this the kind of product a human author would normally receive credit for? Consider whether the AI’s contributions would typically result in academic or professional credit for a human author. This analysis is similar to how we view student assistants: the greater the substance of their contribution to the final product, and the greater the extent to which this kind of product typically redounds to the credit of the author, the more important it is to credit the range of contributors, both human and artificial. 

You can read their report here.

10 Comments
Managing Editor
1 year ago

Can an AI writer be asked to review a submission for a journal?

Andrew Richmond
Reply to Managing Editor
1 year ago

A Modest Proposal to Shorten Review Time

Paul Wilson
1 year ago

Creditworthiness (“Is this the kind of product a human author would normally receive credit for?”) involves issues of authorship, autonomy, originality, relative contribution, etc., and, especially, copyright.

Does OpenAI claim copyright to the prompted outputs of ChatGPT?

If so, credit; if not, run the output through an online plagiarism checker. Credit the latter? I think not.

I would think most professional philosophers, understanding the nature of such tools, would use ChatGPT to overcome blank-page syndrome, at most.

Full disclosure: I used the I Ching to help compose this response.

Michel
1 year ago

The notion of ‘co-authorship’ is one that’s been explored quite a bit in aesthetics and the philosophy of art. That makes sense, of course, given the nature of collaborative artistic practices (historically, in the studio model of painting, but also more recently in film, comics, contemporary sculpture, etc.).

In particular, I’d recommend that those interested in AI’s potential role read:

  1. Bacharach, Sondra and Deborah Tollefsen (2010). We did it: From Mere Contributors to Coauthors. Journal of Aesthetics and Art Criticism 68 (1): 23–32.
  2. Anscomb, Claire (2020). Visibility, Creativity, and Collective Working Practices in Art and Science. European Journal for Philosophy of Science 11 (1): 1–23.
  3. Wylie, Caitlin Donahue (2018). Trust in Technicians in Paleontology Laboratories. Science, Technology, and Human Values 43 (2): 324–48.
  4. Livingston, Paisley (1997). “Cinematic Authorship,” in Film Theory and Philosophy, Richard Allen and Murray Smith (eds.). Oxford: Oxford University Press, 132–48.
  5. Hick, Darren Hudson (2014). Authorship, Co-Authorship, and Multiple Authorship. Journal of Aesthetics and Art Criticism 72 (2): 147–56.
  6. Mag Uidhir, Christy (2012). “Comics & Collective Authorship,” in The Art of Comics: A Philosophical Approach, Aaron Meskin and Roy T. Cook (eds.). Malden: Wiley-Blackwell, 47–67.
Donal Khosrowi
1 year ago

Thanks for sharing + thanks to Michel for the pointers to phil of art & aesthetics literature. Just another (self-)plug for related literature: Elinor Clark and I recently made related points about the role of AI systems in scientific discovery. TL;DR: AI systems can sometimes make significant contributions to scientific discovery and should be considered part of a discovering collective comprising humans and AI systems. We are currently working on transferring our arguments to the space of generative AI (mostly focusing on image generation). You can find the paper here: https://link.springer.com/article/10.1007/s11229-022-03902-9

Peter Alward
1 year ago

Given their substantial contributions to my articles, should I list my laptop – “Type-y” – and my word processing program – “Word-y” – as co-authors? Or would thanking them in an acknowledgment footnote be sufficient?

SCM
Reply to Peter Alward
1 year ago

It’s clearly time to reassess Clippy’s contributions to the literature.

Robert Bloomfield
1 year ago

I am about to start a course that demands a lot of student writing. I’d be grateful for any thoughts on my proposed integrity policy for using bots:

Outside Help
All assignments in this course are “open book, open note, open resource.” You may use any static resource (books, website content, etc.) you choose to help improve your submissions. You must, however, cite those sources if you quote or paraphrase them on matters that are not general knowledge. Yet the assignments are not “open neighbor” — you may not solicit or use help on an assignment from any individual (for team projects, any individual outside your team) unless you are interviewing them as part of your submission.

Generative Prompt Transformers
Cornell University has not yet issued any formal policies on the use of generative prompt transformers (“chatbots,” like ChatGPT and GPT-3). My policy for this course is to treat them not as static resources (as discussed in the previous paragraph) but as generative resources, more like asking a person for help than reading a book. However, because these resources are so novel, I am adopting a compromise policy: 

  • Students may use generative prompt transformers and related tools as a resource for team projects, individual projects, and discussions, but must attribute any quotes and paraphrases of their output in the submission itself and must send the instructor a separate document with the prompts and generated responses used for the submission.

Thoughts?

Marcel Müller
Reply to Robert Bloomfield
1 year ago

From my limited perspective:
These systems cannot be trusted as a valid academic source, so anything they generate needs to be backed up with another valid source. They can’t be trusted as a valid anything source, considering they pronounce people dead:

https://www.theregister.com/2023/03/02/chatgpt_considered_harmful/

I would support the compromise because I do not believe these systems are as infallible as many make them out to be, and actually checking the generated content is, in my view, a very good exercise in getting to know what it is people are talking about. But I also believe the field of Philosophy of Technology is severely understaffed and underfunded.

Full disclosure: I enjoy using ChatGPT, but generating anything academically useful is not intuitive.

Rosanna Festa
1 year ago

The right Matter.