Over the last few months, the tech world has been buzzing with a new artificial intelligence (AI) model released to the public—ChatGPT. The project was introduced in late 2022 by OpenAI with the goal of demonstrating a language model with unique language capabilities, including understanding natural language, generating human-like responses, and learning from large amounts of data. Its emergence represents a significant step forward in the development of AI technologies that can more closely mimic human language and communication.
While these advancements are no doubt significant, they have raised questions for scholars and publishers alike. If ChatGPT can replicate writing to a state that is nearly indistinguishable from that of a human, how could it be exploited in research and publishing? Is using such a tool ethical?
ChatGPT has the potential to transform the way academic papers and other scholarly content are written. Abstracts, summaries, and other key elements could be generated automatically from a single prompt. This would save time, increase efficiency, and even help authors identify the key elements of their own work. However, the quality of the generated text remains intensely debated, so it is wise to be cautious when considering ChatGPT as a writing tool. Despite this, future AI text generation could help scientists who are not strong writers produce consistent, well-written papers. It could even help translate manuscripts between languages. In any case, it is important to remember that AI technology is still in its infancy, so using it runs the risk of poor writing, misleading content, and journal rejection.
ChatGPT and similar AI models have some very serious drawbacks. ChatGPT could be used to generate plagiarized content or to manipulate data in ways that are misleading or unethical. If you spend some time working with ChatGPT, which is free at the time of this writing, you will notice that it produces many different responses to the same question. The outputs, while largely correct and indistinguishable from human writing, are full of redundancies and vague information. Papers written with ChatGPT, then, are likely to be of lower quality and lack originality. This is a problem for academic publishing, where novelty is highly valued. The chatbot’s ability to find good source material is also poor, and it “fails miserably” at providing citations, as shown by one librarian’s analysis. Furthermore, if researchers become more reliant on AI to generate their content, this may lead to an atrophy of critical thinking and writing skills. Even worse, if ChatGPT or other models become paywalled by their companies, this could create unfair advantages between researchers in low-income countries and those with enough funding to utilize AI generation.
One of the biggest issues with using AI models is the lack of quality control. While AI language models like ChatGPT can generate a large volume of text quickly, there is a risk that the quality of the generated content may not meet the standards required for academic publishing. It is important to have appropriate quality control measures in place to ensure that the content is accurate, relevant, and reliable. How can we be sure that any given piece of writing was not generated by AI? You may believe that spotting robotic language is easy, but a recent study found that only 68% of “fake” abstracts generated by ChatGPT were caught by reviewers. This means that a large amount of AI-generated content could be published in the future, especially as these models become more sophisticated and circumvent detection by human readers.
The impressive performance of ChatGPT has highlighted the importance of creating robust guidelines for using AI in academic papers. Notably, Nature recently published an article stating that authors are already beginning to credit ChatGPT in their submissions. While the debate is ongoing and will likely be a hot topic in the coming years, some journals have already banned listing ChatGPT as a co-author on papers. Although AI cannot be listed as an author, primarily because authorship involves taking responsibility for the work, it can still be used as a tool in manuscript preparation if all aspects of its use are properly recorded.
Finally, ChatGPT is not the only AI on the block. Its recent success will no doubt spawn competitors, and models for other purposes have already been developed. You may have seen comedic AI-generated voices, images, and videos (“deepfakes”) featuring politicians and celebrities, or even entire shows created with AI, circulating on the web; the potential uses of this technology are profound. Publishing is just one area that AI is influencing, and ChatGPT is merely symptomatic of a larger shift in technology.
The problem comes when we cannot discern reality from the creations of a neural network, when our own writing is indistinguishable from that of a machine. Going forward, it is clear that both researchers and publishers need to agree on what practices are acceptable in this space and what measures are needed to enforce future guidelines.