
Academic Journal Rankings Explained

Why are academic journal rankings important? Well, one of the first steps for an author is deciding which journal they would like to publish their research in, and rankings help guide that decision.

Academia has a reputational aspect. This means that authors want to publish their work in the most well-known or highly regarded journals. However, many authors find it difficult to know which journals publish quality research, and which do not.

In response to this, various independent platforms and indexing databases have created academic journal rankings that compare journal statistics. These statistics include:

  • Citation metrics
  • Relative standing of a journal in subject areas
  • General community opinion on the quality of a journal’s publications

Many journal rankings have become highly influential within publishing. They are respected by independent researchers, funding bodies, academic institutions, and the public.

Benefits and Criticisms of Journal Ranking Lists

Journal ranking lists often provide a substantial benefit for authors. They’re trusted platforms that researchers can use to find quality journals.

They allow authors to easily assess information about the quality, impact, and reputation of journals. Authors then have more criteria to use when deciding whether or not to submit their research to a specific journal.

Rankings also encourage publishers to improve their editorial practices and their impact in the scientific community. If publishers want to increase a journal’s ranking, they’ll need to focus much more on improving the quality of its publications, and be proactive in promoting the journal and its overall reputation.

Therefore, ranking journals can often provide a win-win situation for both authors and publishers, producing mutual benefits through this system of “rewarding” journals for their high impact.

However, these ranking lists can also sometimes be contentious. One of the main criticisms of these lists is that exceptional research can be found in many journals, regardless of overall ranking.

Additionally, some lists have been found to be subjective, heavily influenced by the personal or institutional opinions of a small group of evaluators. This can lead to biases not only towards specific journals, but also towards certain publishers or entire publishing models.

Furthermore, these ranking lists often create cycles of exclusion. They strongly favour older and more established journals. When a journal is well-ranked, more authors will choose to publish with that journal. Consequently, more researchers will read and cite their publications. This then leads to an increase in the overall ranking of the journal. This cycle can prevent younger, less-established journals from becoming well-known and well-cited, even though the quality of publications may be comparable.

The Different Types of Journal Rankings

There are a range of different academic ranking lists, but they can be broadly divided into two main categories:

Governmental or institutional ranking lists

These are created by governmental bodies within specific countries. They are often produced with the goal of influencing authors from that country, and their respective funders, to strongly consider or explicitly restrict publication to certain journals of a specific rank. Some examples of governmental ranking lists are:

Indexing ranking lists

Created by indexing companies or related platforms, these lists are often produced using objective calculations to give each journal a score. This score is primarily focused on a journal’s average citation metrics.

These calculated metrics can often be highly detailed, with a wide range of data used to compare journals.

Some examples of indexing ranking lists and their scores are:

Most Important Journal Ranking Lists

Each author or institution values ranking lists differently. However, the two largest and best-known are the indexing ranking lists from Web of Science and Scopus, which produce the famous citation scores known as Impact Factors and CiteScores, respectively.

Web of Science

Clarivate, an American analytics company, runs the Web of Science indexing database, the second-largest in the world. Each year, Clarivate produces an academic journal ranking list called the Journal Citation Reports (JCR). All journals indexed in Web of Science are ranked according to numerous metrics, including:

  • Impact Factor: Journal-level citation metric that indicates the average number of citations per paper within a journal, from the past two years (given to all journals indexed in the Web of Science Core Collection, as of June 2023).
  • 5-Year Impact Factor: Average number of times that research from a specific journal published in the past five years has been cited in the JCR year.
  • Journal Citation Indicator: Average Category Normalised Citation Impact (CNCI) of citable items (articles and reviews) published by a journal over the past three years.
  • Total Citations: Total number of times that a journal has been cited by all journals included in the database that year.
  • Cited Half-life: Median age of the items in a journal that were cited that year.
  • Total Citable Items: Total number of articles and reviews published by a journal in the past two years.
  • Eigenfactor Score: Density of the citation network around the journal, using five years of cited content as cited in that year.
  • Normalised Eigenfactor: The Eigenfactor Score rescaled so that the average journal in the JCR that year has a score of 1.
  • Article Influence Score: The Eigenfactor Score normalised by the cumulative size of the cited journal across the past five years.
  • Immediacy Index: Number of citations received in the JCR year that reference content published in that same year.

* Definitions from Journal Citation Reports (Clarivate).
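
To make the headline metric concrete: the two-year Impact Factor boils down to a simple ratio, the citations received in the JCR year to items published in the previous two years, divided by the citable items published in those two years. The short Python sketch below illustrates this with made-up numbers for a hypothetical journal; the function name and counts are illustrative assumptions, not Clarivate’s actual implementation.

```python
# Minimal sketch of the two-year Impact Factor calculation.
# All numbers are hypothetical; real JCR values come from
# Clarivate's curated citation data, not from this formula alone.

def impact_factor(citations_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    """Citations received in the JCR year to items published in the
    previous two years, divided by the citable items (articles and
    reviews) published in those same two years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 1,200 citations in 2023 to papers from
# 2021-2022, which together contained 400 citable items.
print(impact_factor(1200, 400))  # -> 3.0, i.e. a 2023 Impact Factor of 3.0
```

The same ratio pattern underlies the 5-Year Impact Factor, just with a five-year publication window.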

Scopus

Scopus is the largest indexing database in the world, and is run by the publisher Elsevier. All journals indexed in Scopus are ranked according to a few metrics. These include:

  • CiteScore: Journal-level citation metric that indicates the average number of citations per paper within a journal, from the past three years.
  • Source Normalised Impact per Paper (SNIP): Actual number of citations received relative to the citations expected for the journal’s subject field.
  • SCImago Journal Rank (SJR): Measures weighted citations received by the journal, depending on the subject field and prestige of the citing journal (this metric is actually created by SCImago, but it is also displayed on Scopus’ journal ranking lists).

* Definitions from Scopus (Elsevier).
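
CiteScore follows the same citations-per-paper pattern. The sketch below mirrors the three-year description given above, again with hypothetical counts; the function name and window are illustrative assumptions, and official CiteScores are calculated by Scopus from its own indexed data.

```python
# Minimal sketch of a CiteScore-style calculation over the
# three-year window described above. Counts are hypothetical;
# official values are computed by Scopus from its indexed data.

def citescore(citations_in_window: int, documents_in_window: int) -> float:
    """Citations received by documents published in the window,
    divided by the number of documents published in that window."""
    return citations_in_window / documents_in_window

# Hypothetical journal: 2,700 citations to 900 documents
# published across the three-year window.
print(citescore(2700, 900))  # -> 3.0
```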
