Special Issue - ChatGPT: A co-researcher?

Newsletter

ChatGPT has officially entered our everyday dictionary, but adoption is more challenging in research. Should we consider ChatGPT as just a tool? Or perhaps something more?

ChatGPT was released in November 2022 and has since exploded in popularity, reaching 100 million users in just two months. In the first week, I received emails and messages from colleagues and friends about how best to introduce it into their work environment. It was apparent to everyone that ChatGPT, and LLMs (Large Language Models) in general, were in the spotlight, and people were not shying away from them. ChatGPT has even received at least four authorship credits, sparking a debate among journal editors, publishers, and researchers about the use of LLMs in research articles, and specifically about whether an LLM can hold author or researcher status.

After all, why not? Why shouldn’t I use it?

Research and writing are full of tasks that an LLM can speed up. ChatGPT can help with the language and style of research articles. It can help authors improve their writing and editing and provide structures that make the content of a research article clearer. In addition, it can summarize parts of their research and speed up the writing of papers by taking over time-consuming tasks. ChatGPT can even provide fresh ideas and a new set of eyes, giving researchers new avenues to explore.

Or at least these are some of the main ideas behind its use in writing a research article.

However, the truth is that at the moment, while extremely useful in specific tasks, LLMs are still unreliable as an out-of-the-box solution. A researcher would need to spend a considerable amount of time double-checking all the outputs of ChatGPT to ensure that no false information makes it through the cracks to the published text of a research paper.

A response from journals and publishers

Publishing research papers with ChatGPT as a credited co-author has led some of the most prestigious publishers to explicitly announce that they will not accept it as a co-author in their journals. For example, one publisher has updated its guidelines, stating that ChatGPT can no longer be listed as an author in its nearly 3,000 journals. However, its use is not outright prohibited as long as authors who use LLMs document that use in the methods or acknowledgments section, as appropriate. Another publisher has adopted a similar stance and will allow LLMs if the authors declare how they have been used. Finally, Holden Thorp, editor-in-chief of Science, said: “Given the frenzy that has built up around this, it’s a good idea to make it explicit that we will not permit ChatGPT to be an author or to have its text used in papers,” [1] banning its use from the journal.

DAPA images, author writing

George Balaskas,

a personal account

George Balaskas is an MSCA PhD fellow at the National Centre of Scientific Research “Demokritos”. He is part of the Health CASCADE project, which aims to make co-creation trustworthy. He has a BSc in Computer Science and Artificial Intelligence and an MSc in Artificial Intelligence from the University of Sussex. His focus is on Deep Learning and Natural Language Processing. He is currently working on employing Transformer models to enable and speed up co-creation.

Peshkov, AI and future concept

If not an author, then what?

To be an author of a research paper means to have ownership of the research that has been conducted. Ownership comes with accountability and responsibility for the validity and integrity of the work. Magdalena Skipper, editor-in-chief of Nature, said: “An attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs.” Additionally, most journals and publishers require authors to consent to terms of use, which an LLM cannot do. If LLMs cannot consent to terms of use, cannot be held accountable for the validity of their work, and cannot consent to be an author, then the only possible avenue other than an outright ban seems to be disclosure. Researchers have to make sure that the journal they are interested in allows the use of LLMs, and if it does, they have to carefully document, disclose, and acknowledge that use.

To conclude

While ChatGPT can undoubtedly be a valuable tool in the research process, it should not be credited as a co-author. Although it can contribute to the writing process, it lacks the critical evaluation skills and personal experience necessary for true co-authorship. Additionally, the potential for bias and flawed data is a concern that should not be overlooked. Finally, an LLM cannot make legal decisions or be held accountable for what it produces.

Ultimately, it is up to individual researchers and institutions to decide how best to acknowledge the contributions of ChatGPT and other AI tools in their academic work, whether as a simple tool or as a co-researcher. Publishers have adopted a common front. However, this is a rapidly evolving field, and future improvements will likely raise the question again. If ChatGPT can one day write a whole paper on its own, will it still be only a tool?

George Balaskas
MSCA PhD fellow
NCSR Demokritos
gbalaskas@iit.demokritos.gr

Quentin Loisel
MSCA PhD fellow
Glasgow Caledonian University
quentin.loisel@gcu.ac.uk
Twitter @q5loisel

References

[1] H. H. Thorp, ChatGPT is fun, but not an author (2023). Science 379, 313.
