
Open Science Supports Research Assessment
Research assessment is essential to maintaining the quality and integrity of scientific publications. It plays a role in every publication decision, funding application, job application, and career progression. However, there are concerns about overreliance on metrics and a lack of incentives for open science practices.
This article discusses research assessment, its various forms, concerns about it, and how the industry is responding. Finally, we will comment on how open science can support research assessment.
What is research assessment?
Assessment takes various forms and appears in many contexts. This is because scientific reputation depends on factors like the quality of research proposals, the number of publications and citations, and the amount of funding received.
Therefore, assessment is involved in determining:
- The suitability of articles and reviews for publication in scholarly journals.
- The viability of researchers’ applications to receive funding.
- The viability and success of projects.
- The success of job applications.
- The success of institutions.
Assessment can be qualitative, as in peer review; quantitative, as in using bibliometrics to gauge impact; or a mix of both.
Types of research assessment
There are various types of research assessment:
- Peer review – expert evaluation of research by others working in the same field.
- Bibliometric analysis – quantitative measures based on publications and citations, such as citation counts and journal metrics.
- Altmetrics (alternative metrics) – these consider social media mentions, downloads, and online discussions to give a broader perspective on impact.
- Research impact assessment – this looks at real-world outcomes and applications of research.
- Funding and grant evaluation – assessing proposals to decide which projects receive support.
- Institutional evaluation – assessing the performance of universities and other research bodies.
Why does research assessment matter?
Research assessment is central to ensuring the quality and integrity of research.
It determines which projects are funded and carried out, who progresses in their careers, and how institutions and research bodies are ranked. Further, it shapes the systems of recognition, rewards, and incentives.
This ultimately means it influences the behaviour and activities of everyone in the research community.
Overreliance on metrics
The main criticism of current assessment practices revolves around metrics.
Current practice places heavy emphasis on quantitative assessment, particularly on Journal Impact Factors (JIFs). A journal's JIF for a given year is the total number of citations that year to items the journal published in the previous two years, divided by the number of citable items it published in those two years.
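Written out, and using 2024 as an illustrative example year, the calculation described above is:

```latex
\mathrm{JIF}_{2024} =
  \frac{\text{citations in 2024 to items published in 2022 and 2023}}
       {\text{citable items published in 2022 and 2023}}
```

So a journal whose 2022–2023 items received 1,000 citations in 2024, and which published 250 citable items across those two years, would have a 2024 JIF of 1000 / 250 = 4.0. These numbers are invented for illustration, not drawn from any real journal.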
Critics highlight how these metrics equate the average citation frequency of a journal's articles with the quality of the journal, or of the individual articles in it, even though there is no direct correlation between quality and citation counts.
This incentivises scholars to publish exclusively in journals with higher JIFs.
Furthermore, relying too heavily on these metrics narrows the focus to journal outputs, excluding other research outputs like data and software code.
Lack of incentive for open science
UNESCO’s 2023 Open Science Outlook highlighted that, “in most cases, there is no tangible reward for time, resources and efforts associated with open science practices”.
If scholars are focused on publishing in journals with high JIFs to attain career progression or funding, then they may neglect pursuing open science practices.
In response to these criticisms, initiatives aiming at global reform of research assessment are emerging across the scholarly community.
Declaration on Research Assessment
The Declaration on Research Assessment (DORA) is a global initiative covering all scholarly disciplines and all key stakeholders.
DORA’s mission is to advance practical and robust approaches to research assessment globally and across all scholarly disciplines.
The Declaration includes recommendations for funding agencies, academic institutions, journals, organisations that supply metrics, and individual researchers. The first recommendation reflects concerns about using journal metrics like JIF:
Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions.
Common themes in the recommendations include removing the use of journal-based metrics, assessing research on its own merits, and capitalising on the opportunities provided by online publication.
MDPI signed DORA in 2018
MDPI signed the Declaration in 2018, proudly joining the list of organisations around the world that have committed to improving how the quality of research results is evaluated.
DORA’s recommendations for publishers include:
- Ceasing to promote JIFs, and promoting article-level metrics instead.
- Encouraging responsible authorship practices and provision of information about the specific contributions of each author.
- Removing all reuse limitations on reference lists.
- Removing or reducing constraints on the number of references.
Coalition for Advancing Research Assessment
The Coalition for Advancing Research Assessment (CoARA) is a collective of organisations committed to reforming the methods and processes by which research, researchers, and research organisations are evaluated.
They similarly argue that there is an overreliance on citation counts and journal metrics, which leads to failure to recognise the wide array of contributions made by researchers.
CoARA includes over 700 members, all united by a common aim:
The vision of CoARA is to recognise diverse outputs, practices, and activities that maximise the quality and impact of research through an emphasis on qualitative judgement in assessment, for which peer review is essential, supported by the responsible use of quantitative indicators.
The agreement revolves around 10 commitments, including:
- Recognising the diversity of researchers’ contributions.
- Basing research assessment on qualitative evaluation.
- Abandoning the inappropriate use of journal- and publication-based metrics.
- Committing resources to reforming assessment.
These commitments are underpinned by guiding principles of mutual learning and collaboration, a trust-based approach, and cash and in-kind contributions from members.
Open science and research assessment
Open science can support research assessment in various ways, including making it more rigorous, inclusive, and impactful.
Reproducibility and scientific integrity
Open science supports reproducibility through the sharing of raw data, code, and methodologies. This allows others to verify and replicate findings, safeguarding the integrity and quality of the science.
For assessment, transparency throughout the publishing process means evaluators can examine not only conclusions but also the processes that led to them. This is valuable for researchers, as it highlights the work they put into achieving their outcomes, rather than focusing purely on impact.
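As a minimal sketch of what this looks like in practice, the hypothetical Python script below (the file name and numbers are invented for illustration) fixes its random seed and writes out both its data and its result, so anyone can rerun it end to end and verify the reported finding rather than taking it on trust:

```python
# reproduce_analysis.py -- hypothetical example of a verifiable analysis
# shared alongside a paper. Running it twice produces identical output,
# so others can replicate the finding exactly.
import csv
import random
import statistics

random.seed(42)  # fixed seed: anyone re-running gets the same data

# Simulate collecting 100 measurements (a stand-in for real raw data).
measurements = [random.gauss(5.0, 1.2) for _ in range(100)]

# Share the raw data openly as a plain CSV file next to the code.
with open("measurements.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["measurement"])
    writer.writerows([m] for m in measurements)

# The reported result: sample mean and standard deviation.
mean = statistics.mean(measurements)
stdev = statistics.stdev(measurements)
print(f"mean={mean:.3f}, stdev={stdev:.3f}")
```

Because the data, the code, and the method are all shared, an evaluator can check the process as well as the conclusion.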
Embracing Altmetrics
Open science has enabled alternative metrics to emerge that provide a broader perspective on impact.
These include mentions in social media, patents, policy documents, news outlets, and public engagement activities.
Altmetrics can provide a more holistic view of a researcher’s impact.
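As a rough illustration of how such signals can be gathered programmatically, the sketch below queries Altmetric's public details endpoint for a DOI. The DOI is a placeholder, and the response fields read here are assumptions handled defensively; treat this as a sketch of the idea, not a production integration.

```python
# altmetric_lookup.py -- illustrative sketch: fetch online-attention data
# for a paper from Altmetric's public details endpoint.
import json
import urllib.error
import urllib.request

doi = "10.1371/journal.pone.0000000"  # placeholder DOI for illustration

url = f"https://api.altmetric.com/v1/doi/{doi}"
try:
    with urllib.request.urlopen(url) as response:
        record = json.load(response)
    # Field names are assumptions about the response and are read
    # defensively, since coverage varies from paper to paper.
    print("Attention score:", record.get("score"))
    print("News mentions:", record.get("cited_by_msm_count", 0))
    print("Wikipedia mentions:", record.get("cited_by_wikipedia_count", 0))
except urllib.error.HTTPError as err:
    # The endpoint returns 404 when it has no attention data for a DOI.
    print(f"No altmetric record found ({err.code}).")
```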
Open infrastructure
A CoARA working group explored how responsible assessment can be established based on open infrastructures.
An infrastructure is a system or service that is necessary for something to run smoothly. Open infrastructures (OIs) provide essential services that enable members of the scientific community to practise open science. These systems and services are therefore developed with specific open values in mind.
The working group describes open infrastructures as key enablers of the CoARA principles: by supporting machine-actionable and openly accessible data, OIs provide the technical foundation for reform.
They establish a conceptual architecture centred around four tiers, all enabled by open science:
- Foundation tier: establishing data integrity and interoperability.
- Publishing venues tier: ensuring the open dissemination and archiving of diverse research outputs.
- Metadata aggregation tier: collecting data to build a comprehensive research information network.
- Assessment support tier: providing advanced analytics, indicators, and functionalities to support research evaluation.
In essence, this infrastructure involves opening up research outputs, attaching the correct metadata to them, organising those metadata and making them accessible, and then providing tools for evaluators to explore this diverse body of outputs and information.
This would give evaluators everything they need to see the diverse contributions a scholar is making.
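To make the foundation and metadata-aggregation tiers concrete, here is a small sketch using the Crossref REST API, one real example of an open infrastructure serving machine-actionable metadata. The DOI is Crossref's long-standing test record, and the fields are read defensively since their presence varies by record:

```python
# open_metadata.py -- sketch: pull openly accessible, machine-actionable
# metadata for a DOI from the Crossref REST API.
import json
import urllib.request

doi = "10.5555/12345678"  # Crossref's well-known test DOI

url = f"https://api.crossref.org/works/{doi}"
# A descriptive User-Agent is polite etiquette for Crossref's open API.
request = urllib.request.Request(url, headers={"User-Agent": "oi-sketch/0.1"})
with urllib.request.urlopen(request) as response:
    work = json.load(response)["message"]

# A few of the machine-actionable fields an assessment tool could aggregate.
print("Title:", work.get("title", ["(none)"])[0])
print("Type:", work.get("type"))
print("References cited:", work.get("reference-count"))
print("Times cited (Crossref):", work.get("is-referenced-by-count"))
```

An assessment-support tool in the top tier would aggregate records like this across many sources (datasets, software, preprints) rather than relying on a single journal-level number.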
Changing the approach to assessment
cOAlition S, the group of research funders behind Plan S, which aims to make full and immediate Open Access a reality, also supports reforming research assessment.
They argue that reforming assessment in line with open science principles will enable a shift:
- From competition to collaboration.
- From reputations built on prestige to reputations built on fast and fair reviews.
- From highly selective, flashy, and newsworthy output to solidly responsible and robust output.
- From only articles mattering to all outputs and contributions mattering.
The future is open: research assessment and open science
Research assessment ensures the integrity and quality of scientific publications and the community as a whole.
However, it faces certain challenges, as there is an overreliance on metrics such as JIFs. This incentivises publishing in high-JIF journals rather than focusing on quality and open science practices.
Open science can increase transparency, enabling researchers' work to be evaluated across the entire scientific process, not just its outputs. Furthermore, it enables alternative metrics and open infrastructure that support the reform of assessment.
For more, see our article All You Need to Know About Open Access, which covers a range of topics that can help boost your understanding and keep you up to date.