How Artificial Intelligence Impacts Research, Integrity, and Peer Review

On 24th September 2024, MDPI hosted a roundtable discussion featuring three speakers who examined the implications of artificial intelligence for research, integrity, and peer review. The speakers represented academia and industry, allowing for a productive discussion from different perspectives.

We summarise some of the key points of the discussion and provide a link to a recording.

How artificial intelligence supports academics

Dr Felicitas Hesselmann studied Sociology and the History of Art and is currently a researcher in Research System and Science Dynamics at the German Centre for Higher Education Research and Science Studies.

When asked about how AI tools have been implemented into her workflow, Hesselmann described how her experience is not with elaborate tools, but with tools that help resolve “mundane issues”.

She believes the main value of these tools lies in solving repetitive issues, and that they must be reliable, i.e., intuitive and not prone to breaking down.

Artificial intelligence-augmented research

Simon Porter, Vice President of Research Futures at Digital Science, has spent his career transforming how universities use data about research, approaching it from a range of perspectives. He continued the discussion by explaining that “AI is much more effective when it becomes an iterative collaboration”.

By this, he means that the user should always be in control, with a call-and-response exchange between the user and the AI tool.

He agrees with Hesselmann that the main benefit is “gradually, gently making the activities we do become slightly augmented”. For example, when writing a paper, the author may have to produce a table. An AI tool could build the table from the data the user supplies, saving the author time and preserving their focus for writing.
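
As a rough illustration of this kind of augmentation, the sketch below hands raw data to an LLM and asks for a formatted table, with the author reviewing the draft before use. The OpenAI client and model name are illustrative assumptions on our part, not tools mentioned in the discussion.

```python
# A minimal sketch of a "small augmentation": the author supplies the data,
# the model only handles formatting, and the result is reviewed by a human.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

raw_data = """sample_id, treatment, response_rate
A1, control, 0.42
A2, drug_x, 0.61
A3, drug_y, 0.55"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "Format the user's data as a table. Do not alter any values."},
        {"role": "user", "content": raw_data},
    ],
)

draft_table = response.choices[0].message.content
print(draft_table)  # the author checks and edits the draft before using it
```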

He claims that automating “tasks that are small will benefit us most”.

Artificial intelligence in peer review

Porter went on to describe how these small augmented tasks could be implemented in the peer review process. Rather than having an AI tool create the review report, users should ask “how can AI help me do my job better?”.

Christopher Leonard is Director of Product Solutions at Cactus Communications. His area of specialisation is the interface between peer review processes and AI. He believes there is a bigger role for AI to play in peer review.

He explains how his team asks reviewers to produce reports and then cross-references them, using a series of prompts on their own in-house large language model (LLM), to identify the strengths and weaknesses of the reviewed manuscript. The team looks for points that the human reviewers did not identify and asks them about these, ensuring there are no gaps in the reports.
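
The sketch below shows roughly what such a cross-referencing step could look like. The endpoint, model name, and prompt wording are hypothetical; the actual in-house system and prompts used by Cactus Communications are not public.

```python
# A rough sketch of cross-referencing reviewer reports against a manuscript.
# The base_url and model name are placeholders for a hypothetical in-house,
# OpenAI-compatible LLM endpoint.
from openai import OpenAI

client = OpenAI(base_url="https://llm.internal.example/v1")  # hypothetical

def find_gaps(manuscript_text: str, reviewer_reports: list[str]) -> str:
    """Ask the model which strengths and weaknesses the reviewers missed."""
    prompt = (
        "List the strengths and weaknesses of the manuscript below, then "
        "note any that none of the attached reviewer reports mention.\n\n"
        f"MANUSCRIPT:\n{manuscript_text}\n\n"
        + "\n\n".join(f"REPORT {i + 1}:\n{r}" for i, r in enumerate(reviewer_reports))
    )
    response = client.chat.completions.create(
        model="in-house-review-model",  # hypothetical model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Any flagged gaps would be sent back to the human reviewers, keeping
# people in the loop rather than letting the model write the reviews.
```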

Similarly, he suggests AI could be used before the peer review process to check for research integrity issues, such as citation cartels, plagiarism, and manipulated figures.
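
As a toy illustration of pre-review screening, the sketch below flags overlapping text with a naive similarity check. Real integrity tools compare submissions against large corpora with far more robust methods; this only shows the general shape of such a check.

```python
# A deliberately naive text-overlap check, for illustration only.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a ratio in [0, 1]; higher means more overlapping text."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

submission = "We propose a novel method for rapid photocatalytic degradation."
known_text = "We propose a novel method for rapid photocatalytic degradation."

if similarity(submission, known_text) > 0.9:  # threshold is an arbitrary example
    print("Flag for editorial review: possible text overlap.")
```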

Hesselmann and Porter describe AI as augmenting writers' and reviewers' tasks, increasing efficiency and saving time. Leonard explains how AI can be used before and after reviews to evaluate manuscripts and reports, with the tools performing bigger tasks more independently.

What both examples have in common is human mediation. AI tools are not trusted to operate on their own; instead, they are used to support and accelerate tasks performed by humans.

Concerns about artificial intelligence and research integrity

Hesselmann explains that she would not feel comfortable with AI acting as a reviewer. This is due to the black-box nature of AI tools: their inner workings are not disclosed, or even understood.

Furthermore, she explains that there cannot be a one-size-fits-all approach for reviewer tools, as every discipline has different requirements and challenges that must be addressed in a review.

Discussion then moved to LLMs being trained on openly available research. The issue is that research itself contains biases, which an AI can adopt if they are not addressed. This could further entrench biases that impact researchers around the world.

Alongside this, Hesselmann gave the example of how regional biases could be introduced if LLMs are trained exclusively on text in the Latin alphabet. This could result in titles or names that include non-Latin characters being neglected in searches or outputs. For example, if an AI tool selects reviewers, it may favour names that are easy for it to distinguish, to the detriment of others.
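
A small, runnable demonstration of this failure mode: a pipeline that assumes Latin-only names silently drops everything else, so any downstream reviewer search built on it never sees those names. The author list and regex are our own illustrative examples.

```python
# A naive, Latin-only filter silently discards non-Latin author names.
import re

authors = ["Maria Schmidt", "José Álvarez", "王伟", "Søren Kierkegaard"]

latin_only = re.compile(r"^[A-Za-z ]+$")  # the flawed assumption

searchable = [name for name in authors if latin_only.match(name)]
dropped = [name for name in authors if name not in searchable]

print(searchable)  # ['Maria Schmidt']
print(dropped)     # ['José Álvarez', '王伟', 'Søren Kierkegaard']
```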

Validation would be required to ensure biases are not introduced. However necessary, validating the outputs of LLMs is a time-consuming process. At a time when reviewers are already overburdened and outnumbered by the volume of reviews needed, the focus should be on lowering workloads, not increasing them.

The future of artificial intelligence and research integrity

Porter explains that we cannot view AI as just another peer reviewer, as that would be humanising it. We need to think carefully about which aspects of a peer review can be safely delegated to algorithms.

As mentioned, peer reviewers are struggling to keep up with the volume of manuscripts being produced. This question of scale needs to be tackled. Leonard believes that AI is necessary for resolving this, but not without expressing concerns about what this means for the process.

Hesselmann is apprehensive about a widespread integration of AI, with concerns about deskilling (academics losing key skills because they are automated by tools), inadvertently increasing the work of editors due to the need for validating AI outputs, and issues around privacy and security.

Maintaining research integrity

All three speakers have concerns but recognise the value of AI in peer review and for research integrity. Their discussion highlighted the value of AI in augmenting workflows, which would simplify repetitive or mundane tasks, thus allowing for more efficient and productive work.

Artificial intelligence is a rapidly advancing field that will continue to have implications for the entire scientific publishing industry. Discussions such as this one, with academics and industry coming together, are key to navigating this change.

The speakers discussed a range of other issues and expectations about AI and how it will impact research, integrity, and peer review. If you want to watch a recording of the roundtable discussion, please visit the link here.

For more information on what MDPI is doing to recognise International Peer Review Week, please visit our landing page.