Why Journal Metrics are Important and How to Use Them

Journal metrics can be the first thing scholars look at when deciding in which journal to publish. Using metrics can help when navigating the diverse range of journals available online by providing a guide for quality and impact. Additionally, publishing in a journal with strong metrics is considered helpful for furthering a scholar’s career. Let’s reflect on why journal metrics are important and how using them is beneficial.

This article will introduce Impact Factor and CiteScore, valued citation metrics, and explore how they work, their importance, and the relevant updates that were integrated in 2024. We’ll also outline some of the caveats of using metrics.

Scholarly publishing

Traditionally, the main output for researchers is an article that covers the background, methods, and outcomes of their research, among other things. These outputs are published by scholarly journals, which are discipline-oriented.

Because of the vast number of journals to choose from, journal metrics were created to help measure the impact of a journal that has been indexed in one of the credible databases. A journal’s impact is generally measured by the combined success of the articles within it using the number of citations in subsequent research articles across all journals.

Publishing in journals with strong metrics, particularly Impact Factor, can support researchers when applying for funding or promotions. But what are the main metrics? And how do they measure impact?

What is Impact Factor and how is it measured?

Impact Factor is a metric developed by Clarivate that represents the average number of times articles are cited in a particular journal. It is updated every year. The metric supports the ranking, evaluating, categorising, and comparing of journals.

The calculation is as follows:

A: The number of citations in the current year to any items published in the journal in the previous two years.

B: The number of substantive articles published in the same two years.

C: The Impact Factor is calculated by dividing A by B and is represented as a number to one decimal place.
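As a sketch, the two-year calculation above can be expressed in a few lines of Python. The citation and article counts below are invented, purely for illustration:

```python
# Hypothetical example: 2024 Impact Factor for a fictional journal.

# A: citations received in 2024 to items published in 2022 and 2023.
citations_2024_to_prior_two_years = 1240

# B: substantive articles published in 2022 and 2023.
articles_2022_2023 = 400

# Impact Factor = A / B, reported to one decimal place.
impact_factor = round(citations_2024_to_prior_two_years / articles_2022_2023, 1)
print(impact_factor)  # 3.1
```

The same arithmetic applies to any journal: only the two-year citation count and article count change.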

Journals that receive Impact Factors are ranked within different subject categories. According to the distribution of Impact Factors in a given category, a journal can then be said to be ranked in one of four quartiles (Q1‒Q4), with Q1 and Q2 being the top half.

Constanze Schelhorn, MDPI’s Indexing Manager, explains these rankings:

Journals listed in Q1 of a ranking are in the top 25% of the distribution of citation metrics within a given subject category. In other words, their rank is in the 75th percentile or above, higher than at least 75% of journals in that category. Similarly, journals ranked Q2 have an IF which is in the 50th to 75th percentile of the category distribution.
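To make the quartile boundaries concrete, here is a minimal Python sketch that maps a journal's percentile rank within its subject category to a quartile (the percentile value in the example is invented):

```python
def quartile(percentile: float) -> str:
    """Map a journal's percentile rank within its subject category
    (0 = bottom, 100 = top) to a quartile label."""
    if percentile >= 75:
        return "Q1"  # top 25% of the category
    elif percentile >= 50:
        return "Q2"  # 50th to 75th percentile
    elif percentile >= 25:
        return "Q3"  # 25th to 50th percentile
    else:
        return "Q4"  # bottom 25%

# A journal ranked higher than 80% of its category sits in Q1.
print(quartile(80))  # Q1
```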

Clarivate recommends using Impact Factors carefully, considering the context around a journal’s scores. This includes considering the amount of review or other types of material published in a journal, variations between disciplines, and article-by-article impact.

Why are Impact Factors important to scholars?

Journal metrics are important to diverse actors in scholarly publishing. For scholars, they help navigate the countless journals available online by highlighting the ones that receive more citations.

Moreover, in academia, researcher applications for tenure, grants, funding, etc., frequently involve reference to Impact Factor. Publishing in journals with higher Impact Factors can be regarded as a deciding factor.

However, there are risks to relying solely on this metric.

Disadvantages of Impact Factor

Constanze Schelhorn explains:

There are certain risks associated with misuses of Impact Factors. Firstly, unlike the equally common h-index, which is used as a citation metric for individual researchers provided by different sources such as Google Scholar, Scopus or Clarivate for example, Impact Factors are journal metrics and should be used as such. They say little to nothing about the influence of a single article in a given journal.

And the score itself, she explains, can be skewed:

Taking the average of such a distribution, which Impact Factors boil down to, can give disproportionate weight to outliers, i.e., a few very highly cited articles, oftentimes review articles, which tend to be cited more frequently compared with original research papers.

To summarise, Impact Factors can provide helpful insight into a journal’s citations. However, one must consider the context and consult other metrics when evaluating a journal.

Changes to Impact Factor in 2024

There have been changes to the Impact Factor rankings in 2024.

Clarivate will no longer provide separate rankings for the nine subject categories that are indexed in multiple editions. By editions, Clarivate is referring to the different citation indexes, such as the Science Citation Index Expanded (SCIE) and the Emerging Sources Citation Index (ESCI). This means categories like Psychiatry, which had separate rankings for the SCIE and ESCI, will have a unified ranking.

With this year’s release, there are 229 subject category rankings. Clarivate argues that combined category rankings will provide a simpler and more comprehensive view for evaluating journal performance.

What is CiteScore and how is it measured?

CiteScore metrics are developed by Elsevier as an alternative to Impact Factors. They measure the citation impact of journals and can be accessed freely on Scopus. The metric represents the yearly average number of citations to recent articles published in a journal.

The calculation is as follows:

A: The number of citations received in a given year and in the previous three years to documents published in that same four-year period.

B: The total number of published documents in the journal during the same period.

C: The CiteScore is obtained by dividing A by B and is represented as a number to one decimal place.
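The four-year calculation can be sketched in Python in the same way as Impact Factor; again, the figures below are hypothetical:

```python
# Hypothetical example: CiteScore 2024 for a fictional journal.

# A: citations in 2021-2024 to documents published in 2021-2024.
citations_2021_2024 = 5400

# B: documents published in 2021-2024 (all peer-reviewed types).
documents_2021_2024 = 1500

# CiteScore = A / B, reported to one decimal place.
cite_score = round(citations_2021_2024 / documents_2021_2024, 1)
print(cite_score)  # 3.6
```

Note the only structural difference from the Impact Factor sketch: the window is four years rather than two, and the denominator counts all peer-reviewed document types rather than only articles and reviews.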

Why is CiteScore important?

CiteScore measures a broader range of outputs by including all peer-reviewed document types (Impact Factor, by contrast, is limited to articles and reviews). This makes CiteScores particularly helpful for disciplines whose outputs are not article- or review-based.

Moreover, CiteScore provides a different angle into measuring a journal’s impact. Alongside Impact Factor, scholars can better evaluate the impact of journals when choosing which journal to publish in.

Differences between Impact Factor and CiteScore

The main differences between the two metrics are as follows:

  • CiteScore only uses publications and citations indexed in Scopus, whilst Impact Factor only uses publications and citations indexed in the Web of Science.
  • CiteScore covers 4 years whilst Impact Factor covers 2 years.
  • CiteScore uses five document types (articles, reviews, conference papers, book chapters, and data papers), whilst Impact Factor uses two (articles and reviews).

Journal metrics are important to consider, but only in the context of other results. By using these two metrics, scholars can get a general idea of the impact articles have in a journal. Ultimately, though, it is the quality and relevance of the articles themselves that earn citations in the first place.

Other important journal metrics and measures

In addition to Impact Factor and CiteScore, there are other journal metrics that are important to consider.

Here is a short list of alternative journal metrics:

  • The SCImago Journal Rank (SJR) indicator measures the average number of weighted citations received over a three-year period, with each citation weighted by the prestige of the citing journal.
  • The Source Normalised Impact per Paper (SNIP), provided by Scopus (like CiteScore), helps address inconsistencies between fields by normalising citations against the citation potential of a journal’s subject field.
  • Web of Science has four different ranking indexes: SCIE, SSCI, AHCI, and ESCI. To learn more, we have an introduction to indexing databases and their associated rankings.

The Declaration on Research Assessment (DORA) advocates for improving the way researchers and their outputs are evaluated. Because no single indicator can capture the complexities of research quality, DORA proposes five principles to prevent metrics being misused in assessment:

  1. Be clear.
  2. Be transparent.
  3. Be specific.
  4. Be contextual.
  5. Be fair.

Whilst these recommendations do not argue against using metrics, DORA suggests using them with caution and consideration. Constanze Schelhorn also commented on the topic of alternative measures:

Citations provide a signal of academic impact, a nod from one researcher (group) to another. More immediate signs of recognition, novelty, and (dis)approval exist in the digital space: sharing an article on social media platforms, mentions of the latest research by news outlets, or uploads to video platforms.

The platform Altmetric monitors a host of alternative sources to provide attention and engagement scores that reach well beyond citations. The colourful doughnuts capturing the scores are shown in the upper right corner of article pages, a useful alternative to article download and citation stats.

Journal metrics are important but use them cautiously

Journal metrics are important tools for evaluating and comparing different journals. Of them, Impact Factor and CiteScore are two valuable measures that represent the average impact of articles in a journal through citations.

However, such indicators of impact must be considered in their discipline’s context, keeping in mind that Impact Factor and CiteScore are collective scores that are not representative of individual articles.

To learn more about journal metrics and indexing, visit our full interview with Constanze Schelhorn, MDPI’s Indexing Manager.