Think Again Before Using Generative AI During Peer Review or As You Prepare an Application
June 29, 2023
With the recent rise in the use of generative artificial intelligence (AI) technology, many questions have been raised about how this technology can be responsibly and ethically used going forward.
Multiple discussions have been taking place on social media, in the news, and at the governmental level, both in the United States and internationally. I’d like to tell you about a recent National Institutes of Health (NIH) notice on this topic that’s important for all peer reviewers, members of NIH Advisory Councils and Boards, applicants, and grantees, and suggest a few resources you may find helpful.
In the scientific community, an active topic has been whether generative AI can be used in the preparation of applications and of critiques during peer review. On June 23, 2023, NIH released an NIH Guide notice, The Use of Generative Artificial Intelligence Technologies is Prohibited for the NIH Peer Review Process (NOT-OD-23-149). The intent of this policy is to maintain security and confidentiality in the NIH peer review process. Using generative AI technologies requires sharing material from applications, which violates NIH’s policy on maintaining confidentiality in peer review. Importantly, this policy also applies to NIH National Advisory Councils and Boards.
While use of these technologies is not prohibited as you prepare an application, you do so at your own risk: generative AI could, for example, plagiarize text or fabricate information. Either would constitute research misconduct and would require NIH to take action to address the noncompliance.
You can find an overview of this topic in “Using AI in Peer Review Is a Breach of Confidentiality,” a post by Drs. Michael Lauer, Stephanie Constant, and Amy Wernimont of NIH on the Open Mike blog. I hope that this NIH policy is informative as you go through the application process and/or participate in the peer review process.