Controversy erupts in AI assistance for peer reviews


Allegations have emerged that some peer reviews of grant applications submitted to the Australian Research Council (ARC) may have been written with the assistance of artificial intelligence (AI). The revelation has sparked a heated debate within the academic community, raising concerns about confidentiality and the integrity of the rigorous evaluation process.

The ARC_Tracker Twitter account, operated by a researcher at an Australian university, recently shared reports suggesting that ChatGPT, a widely used generative AI model, had been employed to produce assessor reports for ARC Discovery Projects. These multi-year research programs compete for substantial government grants of up to US$500,000, with only a limited number of applicants receiving funding.

Researchers who had submitted grant proposals became suspicious when they received assessor reports that appeared to be mere regurgitations of their original applications, lacking any critical analysis or insightful assessment. Particularly troubling was the discovery of the phrase “regenerate response” in one report — the label of a ChatGPT interface button — which strongly indicated that the tool had been used to generate the text.

ARC_Tracker attributed the use of AI in peer reviews to several underlying factors. First, academics face overwhelming workloads that often leave them with insufficient time to review grant proposals in detail. Second, the ARC’s grant proposals tend to be excessively long, sometimes exceeding 100 pages, further intensifying the pressure on peer reviewers. These factors, coupled with the absence of explicit policies on the use of AI text engines, may have led some peer reviewers to rely on ChatGPT.

The allegations have prompted the ARC to issue a statement cautioning against the use of AI tools in peer assessments. The council emphasized the importance of maintaining confidentiality and hinted at updating their guidance on AI use soon. However, critics argue that the ARC’s statement falls short of explicitly addressing the issue of generative AI, leaving room for confusion and potential breaches of confidentiality.

While the ARC spokesperson refrained from commenting on the prevalence of ChatGPT use in peer reviews, they acknowledged the broader concerns surrounding confidentiality and security in the context of emerging technologies. The ARC assured applicants that any concerns raised would be addressed according to their existing policies.

As the controversy unfolds, questions have been raised about the need for clearer and more comprehensive policies from the ARC. Calls for proactive measures to anticipate challenges posed by emerging technologies, such as AI detection models, have gained traction. Some universities, including the University of Melbourne, have already implemented similar measures to identify AI usage by students.

As the academic community grapples with this revelation, the focus now shifts towards striking a delicate balance between leveraging AI for efficiency and maintaining the fundamental principles of confidentiality, fairness, and integrity in the peer review process. The ARC’s response and forthcoming updates to their policies will play a crucial role in shaping the future of grant application assessments in Australia and serve as an example for academic institutions worldwide.

