Are Impact Factors an Illusion?

Introduction

Few numbers hold as much sway in academia as the impact factor. For decades, this little metric has determined careers, influenced funding decisions, and made or broken the reputations of journals. It looks neat and simple, giving the impression of objectivity in a system that is otherwise messy and political. But beneath its polished surface lies a swamp of distortions, manipulations, and misplaced trust.

Impact factors were originally introduced in the 1960s as a way for librarians to identify which journals were most frequently cited and therefore most worth subscribing to. They were never meant to become the currency of academic worth. Fast forward sixty years, and they are treated as a proxy for quality, prestige, and even personal brilliance. But can a single number, calculated from a narrow snapshot of citations, truly capture the value of a journal or a researcher’s contribution to knowledge?

Or are impact factors an illusion that has fooled many for decades?

The Origin Story of Impact Factors

The impact factor was created by Eugene Garfield, the founder of the Institute for Scientific Information (ISI). Garfield envisioned it as a tool to help librarians decide which journals to stock by calculating how often the average article in a journal was cited over a two-year period. It was not originally marketed as a measure of scientific excellence. Like many tools that overstay their welcome, its use drifted into realms it was never designed for.

The problem is that once universities, funders, and researchers began treating impact factors as shorthand for journal quality, the metric acquired power it was never supposed to have. The simplicity of a single number proved irresistible. Why bother reading a candidate’s papers when you can just glance at the journals they were published in and check their impact factors? The metric became entrenched in evaluation systems, hiring committees, and tenure boards worldwide. It is the academic version of judging a book by its cover.

The rise of impact factors also coincided with the rapid commercialization of academic publishing. Large publishers saw the advantage of marketing their journals by flaunting high numbers. It became a form of branding, a way to lure both authors and readers. As competition intensified, so did the fetishization of the metric, transforming what was once a librarian’s guide into the academic world’s de facto currency.

How the Number is Calculated

The calculation of an impact factor is deceptively straightforward. You take the total number of citations in a given year to articles published in a journal during the previous two years, and then divide that by the total number of “citable items” published in those two years. For example, if Journal X published 100 articles over two years and those articles received 500 citations in the following year, the impact factor would be 5.
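
To make the arithmetic concrete, here is a minimal sketch of that two-year calculation. The journal name and figures are the illustrative ones from the example above, not real data:

```python
def impact_factor(citations_this_year: int, citable_items_prev_two_years: int) -> float:
    """Two-year impact factor: citations received this year to articles
    published in the previous two years, divided by the number of
    'citable items' published in those two years."""
    return citations_this_year / citable_items_prev_two_years

# Hypothetical Journal X: 100 citable items over two years,
# 500 citations to them in the following year.
print(impact_factor(500, 100))  # 5.0
```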

On paper, this sounds fair enough. But the devil is in the details. What counts as a “citable item” is often flexible, and journals can lobby to exclude pieces such as editorials or letters from the denominator, even though citations to those pieces may still count in the numerator. This inflates the number, making some journals look more influential than they actually are. It is a game of statistical whack-a-mole, with publishers doing everything they can to make sure whatever pops up pushes their numbers higher.
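
A quick illustration of why the denominator matters, using invented figures: if citations to editorials and letters still count in the numerator but those pieces are negotiated out of the denominator, the same citation record produces a larger number.

```python
def impact_factor(citations: int, citable_items: int) -> float:
    return citations / citable_items

# Hypothetical journal: 80 research articles plus 20 editorials/letters
# published over two years, and 500 total citations in the following year.
all_items = 80 + 20      # every published item counted
research_only = 80       # editorials and letters excluded from the denominator

print(impact_factor(500, all_items))      # 5.0  -- full denominator
print(impact_factor(500, research_only))  # 6.25 -- same citations, smaller denominator
```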

Another subtle trick lies in timing. Journals may publish articles online early but delay assigning them to a formal issue until the moment that best suits the impact factor calculation, letting citations accumulate before the official two-year clock starts. Some journals also deliberately accept more review articles, which are typically cited more frequently than original research, to maximize their citation potential. These games are invisible to most readers, but they make the supposedly objective number far less neutral than it appears.

The Distortion Game

Impact factors create strange incentives for journals. Since the metric rewards citations, journals lean toward publishing articles that are more likely to be cited quickly, such as review papers or hot-topic studies, rather than slower-burning but equally important research. Niche fields or long-term projects rarely stand a chance in this system, even if their contributions to knowledge are invaluable.

Worse still, the pressure to inflate citations has fueled questionable practices. Some journals encourage authors to cite other articles within the same journal, a not-so-subtle tactic to boost the numbers. There have been cases where editorial boards demanded that authors add citations to their journal as a condition of acceptance. One analysis identified 18 journals engaging in excessive self-citation to inflate their impact factors. When the gatekeepers of knowledge are playing citation roulette, you know the system is deeply flawed.

The distortion game also extends to authors themselves. Scholars desperate to publish in high-impact journals may tailor their research to fit editorial tastes rather than pursue questions that genuinely matter. This contributes to a homogenization of science, where safe, flashy, or trendy topics get attention, while risky but potentially groundbreaking work struggles to find a home. The result is a feedback loop where citation-rich topics dominate, while less glamorous but important work is sidelined.

A Metric That Ignores Context

Another problem with impact factors is that they completely ignore differences across disciplines. A journal in medicine or biology may have an impact factor of 40, while a respected journal in mathematics may sit at 2. Does that mean the medical journal is twenty times more important? Of course not. Citation cultures vary wildly across fields. In the humanities, where articles may be cited for decades, the two-year citation window used in impact factor calculations is of limited relevance. A philosophy paper might take five years to gain traction, but by then, the metric has already moved on.

This disciplinary imbalance means that researchers in slower-moving fields are unfairly disadvantaged. A groundbreaking article in literature might never appear in a high-impact factor journal, simply because the metrics are stacked against that kind of research. The illusion of objectivity hides a deep bias against fields that do not churn out quick citations.

Moreover, regional differences in publishing practices compound the bias. Researchers from the Global South often work in areas of local significance that may not generate high citation counts internationally, even if the research is vital to their communities. The impact factor system marginalizes this kind of scholarship, reinforcing an already skewed hierarchy that privileges Western journals and topics of global appeal over local relevance.

The Career Consequences

The obsession with impact factors does not just distort publishing; it also shapes careers. Hiring committees often use impact factors as a shortcut to judge a candidate’s worth. Early-career researchers are told to “publish in high-impact journals” as though the actual quality of their work matters less than the venue. Funding agencies sometimes fall into the same trap, assuming that a researcher with papers in Nature or Science is inherently more deserving than one with papers in specialized but rigorous journals.

This obsession fuels a prestige economy where a handful of journals act as gatekeepers of academic legitimacy. The irony is that many articles in so-called high-impact journals are rarely cited, while some hidden gems in obscure journals go on to transform entire fields. But the system rarely rewards the latter. It is a bit like Hollywood giving Oscars only to blockbuster films while ignoring the independent productions that quietly reshape the industry.

The mental health toll on researchers is another under-discussed consequence. The constant pressure to aim for high-impact journals fosters anxiety, burnout, and a sense of futility. Young academics may feel they are failing if their work does not land in a journal with a high impact factor, regardless of its actual quality. This obsession warps the culture of science, reducing it to a numbers game rather than a search for truth.

The Criticism Mounts

Scholars have long criticized the misuse of impact factors. In 2012, the San Francisco Declaration on Research Assessment (DORA) was launched, calling for an end to the reliance on journal impact factors when evaluating research. To date, more than 23,000 individuals and 2,800 organizations have signed it. Yet despite such efforts, impact factors remain stubbornly entrenched. Universities continue to include them in promotion guidelines, and journals still flaunt the numbers on their websites like badges of honor.

Critics have also noted that the obsession with impact factors contributes to broader problems in science, such as the replication crisis. When flashy results are rewarded over careful, incremental work, the literature becomes littered with findings that cannot be reproduced. In other words, impact factors not only distort evaluation but may also undermine the very reliability of science.

Alternatives and the Road Ahead

If impact factors are so flawed, what can replace them? Some argue for article-level metrics, which measure the actual reach and citations of individual papers. Others promote altmetrics, which track online attention, downloads, and media mentions. These alternatives offer more nuanced pictures of how research circulates in the world.

But even these approaches are not perfect. Article-level metrics can still be gamed, and altmetrics sometimes favor flashy but shallow work. The deeper issue is academia’s obsession with quantification. The belief that scholarly worth can be distilled into a single number is the real illusion. Perhaps the real solution is cultural rather than technical: a willingness to read and evaluate research on its own merits, rather than outsourcing judgment to a statistical artifact.

Some institutions have begun experimenting with narrative-based evaluations, asking researchers to explain their most significant contributions rather than listing impact factors. Others are rethinking promotion criteria entirely, emphasizing open access, data sharing, or public engagement. These are promising moves, but they remain exceptions rather than the norm. It will take a cultural shift across academia to loosen the grip of the impact factor illusion.

Conclusion

Impact factors were supposed to be a practical tool for librarians. Instead, they have become the academic equivalent of a horoscope: vague, seductive, and ultimately misleading. They distort publishing priorities, disadvantage certain disciplines, and reduce researchers’ worth to a number that often hides more than it reveals.

The illusion of the impact factor persists because it is easy. It gives the appearance of objectivity in a system that is anything but objective. But if academia is serious about fairness, diversity, and true intellectual progress, it must confront the limitations of this cherished metric. The time has come to stop treating impact factors as gospel and start treating them as what they really are: a crude, outdated measure that should never have been given so much power in the first place.
