How Scholarly Journals are Ranked

Introduction

Let’s face it: the world of scholarly publishing is a prestige economy. For researchers, publishing in a “high-ranking” journal is not just about sharing research. It is about career progression, securing grant funding, impressing hiring committees, and demonstrating one’s worth in a highly competitive academic landscape. But how exactly are scholarly journals ranked? What are the mysterious metrics—Impact Factor, CiteScore, SCImago Journal Rank, h-index—that determine a journal’s position in this invisible hierarchy?

Journal rankings have evolved into something of a religion in academia. They are cited in job applications, promotion reviews, grant proposals, and university rankings. Yet, few people stop to think about how these rankings are constructed, what biases are embedded in them, or whether they actually measure what they claim to.

This write-up examines how scholarly journals are ranked, why these rankings matter, which metrics are most common, how those metrics are calculated, and the criticisms surrounding them. We will also look at the regional disparities rankings reinforce, how they can be manipulated, and where things might be headed next.

Why Rank Scholarly Journals at All?

Before diving into the nitty-gritty of metrics, let’s ask the fundamental question: why rank scholarly journals in the first place?

At a practical level, journal rankings help reduce complexity. With millions of papers published each year, rankings offer a shortcut. If a journal ranks highly, many assume that the papers within it must be of higher quality. For time-strapped committees and funding agencies, this saves effort: instead of reading and evaluating each article on its own merits, they glance at the journal’s reputation and assume that quality flows downward from there.

From a career perspective, journal rankings shape everything. Want to get hired? Tenure-track hopefuls know that a few articles in “top-tier” journals can outweigh many more in mid-level publications. Want funding? Grant reviewers might be more impressed by a paper in Nature than three in field-specific but lower-ranked outlets. Even university rankings, such as QS and Times Higher Education, use journal metrics to assess research output.

This obsession with rankings has led to what many refer to as “the tyranny of metrics.” Academic merit is increasingly reduced to a single numerical score. But what do these numbers actually measure? And do they tell us anything meaningful?

The Big Names in Journal Metrics

Several organizations have developed tools and metrics to rank journals. Each has its own philosophy, methodology, and database. Some dominate in the sciences, while others are more common in the social sciences and humanities. Let’s break down the main players.

1. Journal Impact Factor (JIF)

The Journal Impact Factor is arguably the most influential, and most infamous, metric in academia. Eugene Garfield created it in the 1960s, and it is now calculated and published by Clarivate through its Journal Citation Reports (JCR).

How it is calculated:
Impact Factor = Total number of citations in the current year to items published in the previous two years, divided by the total number of citable items published in those two years.

For example, if a journal published 200 articles in 2022 and 2023, and those articles were cited 2,000 times in 2024, the journal’s Impact Factor for 2024 would be 10.
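To make the arithmetic concrete, here is a minimal Python sketch of that calculation (the function and variable names are ours, purely for illustration; this is not code from Clarivate):

    def impact_factor(citations_this_year, citable_items_prior_two_years):
        # Citations received this year by items published in the previous
        # two years, divided by the number of citable items from those years.
        return citations_this_year / citable_items_prior_two_years

    # The example above: 200 articles published in 2022-2023,
    # cited 2,000 times during 2024.
    print(impact_factor(2_000, 200))  # -> 10.0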

Why it matters:
In many fields, particularly in the life sciences, JIF is seen as the gold standard of journal prestige. It is often the first thing researchers check when choosing where to submit their work.

Problems and criticisms:

  • Short citation window: A two-year citation window may be too brief for fields such as history or philosophy, where citations build slowly.
  • Skewed by outliers: A handful of blockbuster articles can inflate the metric for the entire journal.
  • Self-citations: Journals that encourage authors to cite previous issues can artificially boost scores.
  • Opaque inclusion criteria: Getting listed in Clarivate’s index is itself a selective and somewhat mysterious process.

2. CiteScore

Developed by Elsevier, CiteScore is a newer and more inclusive metric that draws from the Scopus database. It was introduced as an alternative to the Impact Factor.

Calculation method:
CiteScore = Total citations in a year to documents published in the previous four years, divided by the total number of documents published in those four years.
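The arithmetic mirrors the Impact Factor, just with a four-year window and a broader set of document types. A minimal sketch with purely illustrative numbers (not real Scopus data):

    def citescore(citations_this_year, documents_prior_four_years):
        # Citations received this year by documents published in the
        # previous four years, divided by the number of those documents.
        return citations_this_year / documents_prior_four_years

    # Hypothetical journal: 800 documents (articles, reviews, letters,
    # conference papers) published in 2020-2023, cited 5,000 times in 2024.
    print(citescore(5_000, 800))  # -> 6.25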

Differences from JIF:

  • Uses a four-year window, allowing more time for citations to accumulate.
  • Includes more document types, such as conference proceedings, letters, and editorials.

Pros:

  • Broader coverage than JIF.
  • Freely available on the Scopus website.

Cons:

  • Still favors fields with fast citation practices.
  • Possible conflict of interest, since Elsevier publishes many of the journals it ranks.

3. SCImago Journal Rank (SJR)

The SCImago Journal Rank is another Scopus-based metric that tries to go beyond simply counting citations. It assigns different weights to citations based on the prestige of the journal that cites them.

In simple terms:
A citation from The Lancet counts more than one from a little-known journal in the same field.

How it works:

  • Uses an algorithm similar to Google’s PageRank.
  • Measures the “influence” of journals within the academic ecosystem.
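To give a feel for the prestige-weighting idea, here is a toy PageRank-style sketch over a three-journal citation network. It illustrates the general principle only, not the actual SJR formula (which adds further normalizations, such as accounting for how many articles each journal publishes), and the citation matrix is invented:

    import numpy as np

    # Toy citation matrix: C[i, j] = citations from journal i to journal j.
    # Self-citations are left at zero for simplicity.
    C = np.array([
        [0., 3., 1.],
        [8., 0., 2.],
        [6., 1., 0.],
    ])

    n = C.shape[0]
    P = C / C.sum(axis=1, keepdims=True)  # each journal spreads its prestige across the journals it cites
    damping = 0.85
    prestige = np.full(n, 1.0 / n)

    for _ in range(100):  # power iteration, as in PageRank
        prestige = (1 - damping) / n + damping * prestige @ P

    print(prestige)  # journal 0, which attracts the most citations, ends up with the largest share

Under this scheme, a citation from a journal that is itself heavily cited carries more weight than one from an obscure outlet, which is exactly the intuition behind SJR.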

Strengths:

  • More refined than raw citation counts.
  • Visual mapping tools help users explore journal networks and hierarchies.

Weaknesses:

  • Still subject to citation inflation tactics.
  • Field normalization remains a challenge.

4. Eigenfactor Score

Developed by researchers at the University of Washington, the Eigenfactor Score attempts to measure how often a journal’s articles are cited by other scholars, taking into account where those citations come from.

How it differs from JIF:

  • Removes journal self-citations.
  • Considers the network structure of citations rather than just the count.
  • Comes with a companion Article Influence Score, which measures the average impact per article.

Pros:

  • Non-commercial and academically driven.
  • Offers a richer, more nuanced picture of influence.

Drawbacks:

  • Less well-known.
  • Updating frequency and accessibility are limited compared to more mainstream metrics.

5. Google Scholar Metrics

Google Scholar’s contribution to journal rankings is simple but powerful. It provides the h5-index: the largest number h such that a journal has published h articles in the last five years that have each been cited at least h times.
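As a rough illustration, here is how an h5-index could be computed from a list of per-article citation counts (the numbers are hypothetical, not pulled from Google Scholar):

    def h5_index(citation_counts):
        # Largest h such that h articles from the last five years
        # each have at least h citations.
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Hypothetical citation counts for a journal's articles from the last five years.
    print(h5_index([25, 19, 12, 9, 8, 8, 5, 3, 1]))  # -> 6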

Advantages:

  • Free and publicly accessible.
  • Includes journals not covered by Scopus or Web of Science.

Disadvantages:

  • The methodology is not fully transparent.
  • Includes predatory and low-quality journals.
  • Vulnerable to manipulation through citation farms and bots.

Beyond Citations: Altmetrics and New Indicators

Not every form of impact is captured in traditional citation metrics. Enter altmetrics, or alternative metrics. These indicators track the frequency with which scholarly content is shared, discussed, or bookmarked across online platforms.

Altmetrics include:

  • Twitter and Facebook mentions
  • News media coverage
  • Mentions in public policy documents
  • Blog discussions
  • Wikipedia citations
  • Downloads and saves on platforms like Mendeley

Altmetrics offer a glimpse into how research resonates outside of academia. A paper on climate change policy that is cited in a UN report may have low academic citations but a significant real-world impact.

But a few caveats are in order:

  • Altmetrics can be gamed with social media bots.
  • Popularity does not equal quality.
  • Disciplines such as physics or mathematics receive less social media attention than fields like medicine or psychology.

Still, altmetrics are growing in importance, especially for funders and institutions that want to understand societal impact, not just academic reach.

Context Matters: Field-Specific and Niche Journals

Comparing journals across fields is like comparing apples and oranges. A JIF of 3 might be excellent in one field and mediocre in another.

For example:

  • A mathematics journal with an Impact Factor of 2 may be among the best in its field.
  • A medical journal with the same score might not even crack the top 100.

This is why many databases also rank journals within specific subject categories. Both JCR and Scopus allow users to filter by discipline, revealing how journals rank among their peers rather than across the entire academic landscape.

Moreover, niche journals that serve small but vital research communities may not score highly in citation-based rankings. Yet, they often play a crucial role in advancing knowledge within their respective areas.

Review journals are another outlier. Since reviews summarize existing research and are heavily cited, they often dominate ranking charts. That does not necessarily make them more prestigious than journals publishing original research, but the metrics tend to favor them.

Gaming the System: When Rankings Go Rogue

The academic community is not naive. Once it became clear that metrics matter, people found ways to manipulate them.

Common tactics include:

  • Self-citation loops: Encouraging or even mandating authors to cite previous articles from the same journal.
  • Citation cartels: Coordinated citation arrangements between journals to mutually inflate scores.
  • Editorial coercion: Editors or reviewers pressure authors to add unnecessary citations that boost the journal’s metrics.

In 2023, Clarivate removed over 50 journals from its Journal Citation Reports due to questionable citation practices, including excessive self-citation and citation stacking. Some were top performers that suddenly dropped out of the rankings. The message was clear: manipulate the system and you risk losing your standing.

Still, many suspect that other manipulative practices often go unnoticed, especially when they are subtle and dispersed across multiple actors. Journals that publish more frequently, accept a wide range of article types, or aggressively promote content can still engineer an artificial sense of prestige.

The Problem of Regional and Language Bias

English-language, Western-based journals dominate global rankings. This is not because research from Asia, Africa, or Latin America lacks quality. Often, it is simply not included in the citation databases that power these rankings.

Many prestigious regional journals are published in local languages and cater to national research priorities. Yet because they are not indexed in Scopus or Web of Science, they are invisible in global rankings.

To address this gap, platforms such as SciELO in Latin America and AmeliCA (Conocimiento Abierto para América Latina y el Sur Global) provide open-access visibility for local research. However, these platforms are largely ignored in global rankings and university assessments.

This creates a feedback loop. Researchers from the Global South are pressured to publish in English in Western journals to gain recognition, even if it means paying high fees or adapting their work for foreign audiences.

Are Journal Rankings Good for Science?

It depends on who you ask.

Supporters argue:

  • Rankings help identify reputable venues for publication.
  • Metrics offer useful, quantifiable indicators of visibility.
  • They reduce decision fatigue for hiring and funding panels.

Critics respond:

  • Metrics are reductive and prone to misuse.
  • They promote a narrow definition of “impact.”
  • They penalize interdisciplinary work and slower citation disciplines.
  • They incentivize quantity over quality.

Movements like the San Francisco Declaration on Research Assessment (DORA) have called for the abandonment of journal-based metrics in evaluating researchers. Instead, they encourage assessing individual work on its own merits, considering peer reviews, open data practices, and real-world influence.

What Might the Future Look Like?

The journal ranking ecosystem is in flux. Several trends could reshape how we evaluate scholarly publications in the coming years:

  • Responsible metrics: Efforts to balance citation-based measures with transparency, fairness, and field normalization.
  • Open peer review: Making the peer review process itself visible and part of a journal’s reputation.
  • Diamond open access: Journals that are free to read and free to publish in, challenging the pay-to-publish model.
  • Profile-based evaluation: Platforms like ORCID, ResearchGate, and Dimensions allow researchers to showcase all their outputs—articles, data, code—beyond the journal in which they appear.

Ultimately, the goal is to create a system where journals are ranked not only by citation counts but also by how well they serve their academic communities, contribute to public understanding, and uphold the values of transparency and rigor.

Conclusion

Journal rankings shape academic careers, institutional budgets, and the trajectory of entire disciplines. Metrics such as the Journal Impact Factor, CiteScore, and SCImago Journal Rank provide concise measures of quality, but they are far from perfect. They are susceptible to gaming, biased toward English-language publications, and often fail to capture the deeper impact of research.

While these metrics are unlikely to disappear anytime soon, there is a growing recognition that they must be used responsibly. No single number can capture the full value of a piece of scholarship. As the academic world moves toward open science, broader impact assessments, and new technologies, it may be time to rethink what journal quality truly means.

Until then, remember: a high ranking can be impressive, but it is not infallible. And in a world obsessed with numbers, it’s worth pausing to ask what we are really measuring—and why.
