ChatGPT is just one of a new class of tools that can generate realistic natural-language text, and also program code, from a user's prompt.

Consider this case: an author writes a paper and uses ChatGPT to generate parts of the related-work section. The author does not mention this in the paper. Is this acceptable?

I would say it is plagiarism: the author plagiarizes from ChatGPT, and that is not acceptable. ChatGPT cannot be an author, but an author may also not plagiarize from ChatGPT.

Another case: an author writes a paper comparing code samples in different programming languages, and the code samples are generated by ChatGPT. In my view this is acceptable, since it is like displaying the output of an image-processing program. The author would need to indicate alongside the code/text samples that they were generated with ChatGPT. The code/text samples are not part of the scientific argumentation in the paper; they are merely objects of that argumentation.

Which standpoint should CEUR-WS take?

Comments are welcome.

2023-02-21: CEUR-WS published its rules on AI tools for content creation at