Use of AI tools to create content for academic papers

ChatGPT is just one of a new class of tools that can generate realistic natural-language text, and also program code, from a user's prompt.

Consider this case: An author writes a paper and uses ChatGPT to generate parts of the related work section. The author does not mention this in the paper. Is this acceptable?

I would say it is plagiarism: the author plagiarizes from ChatGPT, and I would argue that this is not acceptable. ChatGPT cannot be an author, but neither may an author plagiarize from it.

Another case: An author writes a paper comparing code samples in different programming languages, and the code samples are generated by ChatGPT. In my view this would be acceptable, since it is like displaying the output of an image-processing program. The author would need to indicate alongside the code/text samples that they were generated with ChatGPT. The samples are not part of the scientific argumentation in the paper; they are merely the objects of that argumentation.

Which standpoint should CEUR-WS take?

Comments are welcome.

Update 2023-02-21: CEUR-WS has published its rules on AI tools for content creation at https://ceur-ws.org/ACADEMIC-ETHICS.html
