
Research Integrity Team: AI Ethics and Industry News
The ethical use of artificial intelligence (AI) has become a central topic of discussion within the publishing industry. This quarter, it was the main theme of both the Committee on Publication Ethics (COPE) Forum held in July and Peer Review Week.
In this quarterly update, we share how MDPI’s Research Integrity team is tackling this important issue and how AI is being integrated within our processes.
Research Integrity Team: setting standards for AI use
Before establishing guidelines, it is important to understand how Generative AI (GenAI) tools are being used by the scholarly community and what their expectations and needs are.
Editorial Board members are key to preventing publication ethics breaches and will be among the primary users of any new AI tools introduced within the editorial process. Therefore, collecting their feedback and experiences is a critical first step towards setting future standards for AI use.
The Research Integrity team has continued its external outreach during MDPI’s Summits to connect with Chief Editors from across various disciplines. During these meetings, the team presents important policy updates, procedures, and challenges while also gaining insight into the experiences and needs of academic editors.

Renato Merki, Daisy Fenton and Diana Cristina Apodaritei presenting at the MDPI Houston, UK and Romania Summits.
The presentations focused on the various MDPI-developed and industry tools that enable automatic ethics checks. Detecting AI-generated submissions was identified as a major challenge, and Chief Editors shared their experiences and tips for identifying these manuscripts.
While tools may help flag potentially problematic manuscripts, or could be used to select suitable reviewers, human validation is still vital to ensuring a robust and ethical peer review process.
How MDPI is leveraging the use of AI tools
To maintain a rigorous peer review process while safeguarding scientific integrity, publishers need to constantly adapt and innovate, implementing new tools that can streamline processes and enhance quality.
One key area where AI tools can add value is in detecting image manipulation. Image manipulation, especially within the biomedical and life sciences, has become a growing concern as the percentage of articles retracted for this reason continues to rise.
Tim Tait-Jaimeson, Head of Publication Ethics, explains:
Unfortunately, there is an increase in unethical use of AI to manipulate or create seemingly authentic images of actual scientific data. However, this same technology could also be part of the solution to detecting fraudulent content.
Following the success of a pilot study, MDPI has partnered with Proofig AI, an AI-powered automated image proofing tool that will help screen manuscripts and flag individual instances of image duplication, manipulation or plagiarism.
Learn more about how Proofig AI works and how it was integrated within MDPI’s editorial process here.
Industry updates
Key discussions within scholarly publishing revolve around the responsible use of AI during the peer review process by publishers, academic editors, authors and reviewers.
Changes are coming that will affect all stakeholders. Staying informed and participating in these discussions is important to ensure that any new guidelines or policies fit the needs of the scholarly community.
Ethical use of AI
The July COPE Forum focused on emerging AI dilemmas in scholarly publishing. Four main questions drove the discussion:
- What levels of disclosure are ethically required from authors regarding their AI interactions?
- How can AI use standards and disclosure be enforced?
- How can publishers create dynamic policies capable of adapting quickly to emerging AI technologies and their implications?
- How do the current AI detection technologies handle concerns around false positives, biases, transparency, and accuracy?
The need for transparency, standardization and further understanding of how AI tools work and are used was highlighted. Publishers play an important role here: to encourage transparency and help educate users on the available tools, their pitfalls and responsible use.
To learn more, join the conversation with COPE Council Member Marie Soulière and COPE Advisor Hong Zhou as they discuss some of the dilemmas surrounding AI.
Classifying how authors use AI
In December 2023, The International Association of Scientific, Technical & Medical Publishers (STM) put forward ethical and practical guidelines for the use of GenAI. Since then, AI tools have evolved significantly and the ways authors use AI tools have expanded beyond the initial text refinement or generation use.
Classifying how authors are using AI tools and ensuring consistent terminology is a prerequisite for building coherent, standardized guidelines. A new STM document published in September classifies AI use into nine categories.
The purpose of the document is to help publishers develop policies on which AI uses should be allowed and how they should be declared.
Journal policies on AI use
The Directory of Open Access Journals (DOAJ) announced upcoming changes to their guides for journals applying for inclusion. These guidelines are aligned with COPE as well as STM policies. They will require journals to have clear policies on AI use to address the following points:
- Disclosure by authors of AI use beyond grammar or spellchecking;
- Confirmation by authors of output validation;
- Authorship requirements, including not listing AI tools as authors;
- Citing GenAI tools;
- Reviewers and use of AI tools during peer review;
- Disclosure by the journal regarding AI tools and human validation of results.
The announcement from DOAJ holds not only authors and reviewers accountable, but also editors and journals, promoting transparent reporting of how they are integrating AI tools within the editorial process.
MDPI’s policies on AI use
The Research Integrity team has been closely monitoring all aspects of the discussion and ensuring MDPI’s guidelines adhere to the highest standards and current requirements.
MDPI’s policies regarding artificial intelligence cover use during manuscript preparation by authors, by reviewers, and in editorial decision-making. These guidelines are a living document, and we will continue to update them as the discussion around AI use develops.
Learn more about the tools MDPI has created to support and innovate the peer review process.