Adapting to AI: The Future of Scientific Research
The world is abuzz with artificial intelligence (AI) tools that can be readily deployed in everyday life. Increasingly sophisticated generative AI models, which use neural networks loosely inspired by the human brain to generate content based on their training data, offer researchers tantalizing new ways to assemble and improve scientific papers. These tools also offer shortcuts through what can be a cumbersome writing process. Generative AI has great potential to improve science communication, but it is a legal and ethical minefield. Moreover, the output of such models is not always correct, and there is a risk that such tools could be used maliciously to produce misinformation. Researchers and publishers need clear guidelines to ensure that generative AI is used responsibly.
AI-generated content: a double-edged sword
Generative AI tools, which include large language models (LLMs) such as ChatGPT, can be used to summarize large volumes of information and to explain complex topics in simple language. They can quickly generate text, images, and video from a few prompts. Scientists are also beginning to use these tools in the research process itself, for example to write code.
In this rapidly evolving AI landscape, Nature Geoscience, along with all Springer Nature journals, recognizes the need to protect the transparency and integrity of the scientific research that we publish from the risks that AI presents. We have therefore set some initial ground rules for the use of AI in our papers (see Nature Portfolio AI editorial policy).
Firstly, we do not permit LLMs to be credited as authors on a research paper. This is simply because an LLM cannot satisfy our authorship criteria (see Nature Portfolio authorship criteria): a model cannot approve a manuscript or be personally accountable for its contributions to the work. The use of LLMs to assist in developing text is not banned, but we ask that authors be transparent about their use, just as for any other method used in the research. Any LLM use should be clearly documented in the paper’s Methods or Acknowledgements section.
The risks of AI-generated images
Secondly, we currently do not allow the use of AI-generated images and videos. Although we appreciate the exciting potential of AI-generated visual content, there are unresolved legal and research integrity issues. Existing generative AI tools are not transparent about the sources of their training images. As a result, the accuracy of the generated images cannot be verified, the images cannot be attributed appropriately, and both copyright and privacy may be violated. Until regulatory and legal systems catch up, we will be unable to publish visual content made using generative AI. We do permit the use of non-generative machine learning tools to manipulate existing images, but we require this to be disclosed in the figure caption.
Guidelines for peer reviewers
Finally, we have established guidelines for the use of generative AI by peer reviewers. Although it may be tempting to use an LLM to summarize a paper or to help write a peer review report, reviewers should exercise caution. Peer reviewers are accountable for the accuracy of their reports, and we rely on reviewers’ technical expertise to ensure that the research we publish is rigorous. Furthermore, because manuscripts may contain confidential information that cannot be shared outside the peer review process, we ask reviewers (and our editors) not to upload manuscripts into generative AI tools. Any other use of AI tools in evaluating a manuscript should be declared transparently in the peer review report.
A roadmap for responsible AI use in scientific publishing
Beyond these policies, a common-sense roadmap of best practice for generative AI use in scientific publishing is coming into focus. It is becoming increasingly clear that human oversight is essential to protecting the integrity and transparency of research and peer review. Authors should not blindly adopt text suggested by LLMs, but should think critically about whether it is accurate. Authors, not the tools they use, are responsible for the integrity of their contributions to a scientific paper.
LLMs may be particularly enticing for authors who face language barriers when communicating their science in English in international journals. Given the risk of introducing inaccuracies, however, authors should consider whether alternatives, such as editing assistance from a colleague, might suffice. We remind potential authors that when we consider submissions for peer review, we do not require perfect English, provided the paper is clear enough for editors (and potentially reviewers) to assess.