Table of Contents
- Introduction
- A System of Unpaid Labor
- The Reluctant Reviewer: A New Norm
- Ghosting, Burnout, and Editor Despair
- Matching Reviewers to Manuscripts: Still a Shot in the Dark
- Misaligned Incentives and the Thankless Grind
- Can Technology Fix Peer Review?
- The Domino Effect on Research Integrity
- Rethinking Peer Review: Is There a Way Out?
- Conclusion
Introduction
Peer review is often described as the backbone of academic publishing. It’s the sacred mechanism meant to safeguard scholarly integrity, validate findings, and ensure that science advances through a process of critical scrutiny. In principle, it’s brilliant. In practice? Increasingly broken. The culprit isn’t necessarily incompetence or corruption. It’s exhaustion. Plain, old-fashioned burnout. Peer review fatigue is not just a buzzword; it’s a real phenomenon, and a systemic threat.
What was once a professional courtesy has become an unsustainable burden. Academics receive a constant stream of review requests, often from journals they’ve never interacted with before. Editors are begging for reviewers. Authors are left in limbo. Journals are experiencing significant delays. And the quality of reviews is slipping. The very process designed to protect the academic record is starting to unravel under the weight of fatigue.
This isn’t a future problem. It’s happening right now. And it’s breaking the system, one exhausted reviewer at a time.
A System of Unpaid Labor
Let’s start with the painfully obvious truth that academia often likes to sweep under the rug: peer review is unpaid labor. It’s one of the few professional tasks in the research ecosystem that requires time, energy, and intellectual effort without any direct compensation. You could spend four hours poring over a complex manuscript, checking references, evaluating data, and providing line-by-line feedback. Yet, you get nothing but a vague thank-you email in return.
Meanwhile, the pressure to publish is enormous. Promotion and tenure committees, along with funding agencies, expect researchers to produce articles at a rapid pace, much like a factory line. Each submission contributes to the ever-growing mountain of papers requiring peer review. Over six million articles are expected to be published in 2026. Multiply that by two or three reviewers per manuscript, and you begin to understand the staggering volume of unpaid academic labor propping up the publishing system.
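To put that in rough numbers, here is a back-of-envelope calculation using the figures above, assuming the midpoint of two to three reviewers per paper and the roughly four hours per review mentioned earlier:

```python
# Back-of-envelope only, using the article's own figures and assumptions.
articles = 6_000_000          # projected annual article output cited above
reviewers_per_paper = 2.5     # midpoint of "two or three reviewers"
hours_per_review = 4          # the rough four hours per manuscript mentioned earlier

total_hours = articles * reviewers_per_paper * hours_per_review
print(f"{total_hours:,.0f} unpaid reviewer-hours per year")
# -> 60,000,000 unpaid reviewer-hours per year
```

Sixty million hours of skilled labor, donated in a single year, with no line item anywhere.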
This imbalance has become unsustainable. Reviewers are expected to keep donating time in the name of academic goodwill, but the system offers little to nothing in return. It’s a lopsided arrangement that has mutated from professional courtesy into exploitation.
The Reluctant Reviewer: A New Norm
Gone are the days when academics enthusiastically volunteered to review because it was considered part of the scholarly mission. Today, review requests are often met with dread. For many scholars, particularly mid-career and early-career researchers who juggle teaching, grant writing, administration, mentoring, and their own publication pipelines, peer review is just one more unpaid task in an already overflowing schedule.
It’s not uncommon to hear stories of researchers receiving three or four peer review requests in a single week. The volume is simply unmanageable. Some try to oblige and power through the stack. Others triage; they accept the ones tied to journals in their field or their professional networks, and ignore the rest. And a growing number are simply tuning out altogether, declining or ghosting invitations with increasing frequency.
The reluctance isn’t rooted in selfishness or laziness. It’s the natural byproduct of a system that overburdens its contributors while offering them nothing in return. There’s an unspoken but mounting sense that peer review has become more of a trap than a service.
Ghosting, Burnout, and Editor Despair
Ghosting is no longer just a term used in dating. In the publishing world, it’s what happens when reviewers accept a manuscript and then disappear into the academic ether. Editors follow up once. Twice. Maybe even three times. Eventually, they give up and restart the process, often weeks behind schedule.
For every ghosted manuscript, an editor has likely sent out five or six invitations just to get two “yes” responses. Even those who do accept are often late or offer rushed, superficial feedback. No one wants to be that person, but with the academic calendar packed from January to December, reviewing becomes the task most easily delayed, deprioritized, or forgotten.
It’s also worth noting the emotional toll on editors. Managing reviewers is like herding cats. It requires patience, persistence, and thick skin. Many editors are themselves active researchers with limited time. The delays, non-responses, and mounting workloads add up to burnout on their side as well. The entire system is creaking under the weight of its own inefficiencies.
Matching Reviewers to Manuscripts: Still a Shot in the Dark
Reviewer fatigue is worsened by the inefficient and often haphazard way reviewers are matched to papers. Editors typically rely on old-fashioned spreadsheets, email chains, editorial databases, or software that recommends reviewers based on keywords. These methods are far from precise. One common frustration is being invited to review manuscripts that are outside one’s area of expertise.
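To see why keyword matching alone misfires, here is a deliberately simplified sketch of the kind of overlap scoring such tools approximate. The reviewer names and keywords are invented, and no specific editorial system works exactly this way:

```python
# Hypothetical illustration only: a naive keyword-overlap matcher of the kind
# many editorial tools approximate. Names and keywords are invented.

def match_score(manuscript_keywords, reviewer_keywords):
    """Score a reviewer by the number of shared keywords."""
    return len(set(manuscript_keywords) & set(reviewer_keywords))

manuscript_keywords = {"deep learning", "proteins", "structure prediction"}

reviewers = {
    "Reviewer A": {"deep learning", "image classification"},   # right method, wrong field
    "Reviewer B": {"proteins", "crystallography"},              # right field, wrong method
    "Reviewer C": {"deep learning", "proteins", "structure prediction"},
}

ranked = sorted(reviewers.items(),
                key=lambda item: match_score(manuscript_keywords, item[1]),
                reverse=True)

for name, keywords in ranked:
    print(name, match_score(manuscript_keywords, keywords))

# Reviewer A scores as high as Reviewer B on a single shared buzzword,
# which is exactly how out-of-scope invitations get sent.
```

A shared keyword is not the same thing as shared expertise, yet that is often all the matcher can see.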
In niche or emerging fields, the problem is even worse. A handful of known experts are bombarded with requests, while many others are overlooked. The result? An uneven distribution of labor, where a few become bottlenecks, overworked and resentful, while the system overlooks potential fresh contributors who might have the capacity but lack visibility.
The lack of transparency and coordination across journals also doesn’t help. There’s no global system to track reviewer fatigue, workloads, or availability. Editors are essentially flying blind, casting wide nets in the hope of catching a reviewer who is available, qualified, and willing to review. It’s a system that hasn’t evolved to meet the demands of modern academia.
Misaligned Incentives and the Thankless Grind
The problem of peer review fatigue is deeply tied to misaligned incentives. Publishing papers brings tangible rewards: promotions, tenure, funding, and professional prestige. Reviewing papers brings none of that. For most institutions, reviewing doesn’t factor into performance evaluations or tenure decisions. It’s considered academic “service,” a vague, noble-sounding category that comes dead last in any metric that actually matters.
Some platforms, such as ORCID and Web of Science (which absorbed Publons), have attempted to address this issue by enabling scholars to track their review activity. These are helpful, but not transformative. Most universities still don’t care how many peer reviews you’ve done, and many journals don’t even issue certificates or formal acknowledgments.
As a result, peer reviewing often feels like yelling into the void. You write detailed, thoughtful comments. You engage deeply with someone’s research. And you get zero feedback. Not even a line of thanks from the author. Over time, that lack of recognition erodes the motivation to participate. Scholars start to ask: Why am I doing this?
Can Technology Fix Peer Review?
AI has become the go-to solution for nearly every inefficiency in academia, so it’s no surprise that the peer review process is now being “AI-optimized” as well. Some platforms promise to find the best reviewers in seconds, using algorithms that parse reviewer profiles, co-authorship networks, and subject matter keywords. Others use AI to scan submissions for citation irregularities, plagiarism, or even basic methodological flaws.
But here’s the uncomfortable truth: AI is not a replacement for critical thinking. It can flag issues, summarize content, or assist editors in managing workflows, but it can’t evaluate the novelty of an idea, the appropriateness of a research design, or the persuasiveness of an argument. Peer review, at its heart, is a human judgment process.
In fact, some AI tools have added complexity rather than reducing it. Editors now ask reviewers to address comments generated by AI. Authors are expected to correct issues flagged by tools that may not even understand context. It’s an added layer of noise in a system already suffering from labor overload.
What AI can do—helping with reviewer selection, checking conflicts of interest, flagging fake peer reviews—is useful. But unless it frees up human time in a meaningful way, it risks becoming just another burden masquerading as a solution.
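For a sense of what that narrow-but-useful tier looks like, here is a hypothetical sketch of a conflict-of-interest check based on recent co-authorship. The names and data are invented; real tools work against much larger co-authorship datasets, but the core check is roughly this simple:

```python
# Hypothetical sketch: flag a conflict of interest when a candidate reviewer
# has recently co-authored with any of the manuscript's authors.
# All names are invented; real tools query much larger co-authorship data.

recent_coauthors = {
    "Dr. Lee":   {"Dr. Ahmed", "Dr. Novak"},
    "Dr. Patel": {"Dr. Kim"},
}

def has_conflict(candidate, manuscript_authors):
    """True if the candidate shares a recent co-authorship with any manuscript author."""
    return bool(recent_coauthors.get(candidate, set()) & set(manuscript_authors))

authors = ["Dr. Ahmed", "Dr. Garcia"]
for candidate in ("Dr. Lee", "Dr. Patel"):
    status = "conflict" if has_conflict(candidate, authors) else "clear"
    print(candidate, status)
# Dr. Lee is flagged (shared co-author Dr. Ahmed); Dr. Patel is clear.
```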
The Domino Effect on Research Integrity
When the peer review system starts to fail, the entire ecosystem is at risk. The consequences go beyond a few missed deadlines. Poorly reviewed papers can be published in high-impact journals, where they are cited, relied upon, and sometimes incorporated into policy or clinical guidelines. That’s not just embarrassing. It’s dangerous.
We’ve already seen high-profile retractions of COVID-19 studies, vaccine data, and behavioral science research. In many of these cases, peer review failed to catch obvious errors or was manipulated entirely. In a fatigued system, with unpaid, overworked reviewers and overwhelmed editors, the chances of critical errors being overlooked increase dramatically.
The gap left by genuine peer review is being filled in some corners by predatory journals. These entities exploit the publication pressure by offering fast-track, fake reviews in exchange for fees. They prey on desperate authors and further dilute the credibility of scientific literature. The more dysfunctional our legitimate peer review becomes, the more appealing these shady outlets seem.
Rethinking Peer Review: Is There a Way Out?
There’s no silver bullet, but we need to start rethinking the foundations of peer review before it collapses entirely. One idea is to create more formal recognition systems. Universities could start counting peer review as part of performance evaluations. Funders could reward reviewing as an essential component of academic citizenship.
Another path is financial compensation. It doesn’t have to be enormous. Even modest honoraria could incentivize participation. If journals can charge thousands in article processing charges (APCs), they can afford to offer reviewers a token of appreciation.
Peer review models also need a refresh. Open peer review, post-publication review, and community-based feedback systems are worth exploring. These approaches bring transparency and potentially reduce bottlenecks; however, they also come with their own set of challenges, such as reviewer bias and reluctance to critique openly.
Finally, coordinated databases across publishers could help distribute the labor more evenly. Imagine a shared reviewer pool that tracks workloads and balances assignments based on availability and expertise. It’s not utopian; it’s just overdue.
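As a thought experiment, here is a minimal sketch, with invented reviewers and a naive rule, of what such a shared pool could do: prefer the qualified reviewer with the most spare capacity, and never assign anyone already at their limit.

```python
# A minimal sketch with invented data: a shared, cross-publisher reviewer pool
# that tracks workload and assigns the qualified reviewer with the most spare capacity.

from dataclasses import dataclass

@dataclass
class Reviewer:
    name: str
    expertise: set
    active_reviews: int
    capacity: int

pool = [
    Reviewer("Reviewer 1", {"oncology", "biostatistics"}, active_reviews=4, capacity=4),
    Reviewer("Reviewer 2", {"oncology"}, active_reviews=1, capacity=3),
    Reviewer("Reviewer 3", {"cardiology"}, active_reviews=0, capacity=2),
]

def assign(topic, pool):
    """Return the least-loaded qualified reviewer, or None if everyone is at capacity."""
    eligible = [r for r in pool if topic in r.expertise and r.active_reviews < r.capacity]
    if not eligible:
        return None
    choice = min(eligible, key=lambda r: r.active_reviews / r.capacity)
    choice.active_reviews += 1   # the shared pool updates workload across journals
    return choice

chosen = assign("oncology", pool)
print(chosen.name if chosen else "no available reviewer")
# -> Reviewer 2 (Reviewer 1 is at capacity and is skipped automatically)
```

Nothing here is technically hard. What’s missing is the coordination, and the will to build it across publishers.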
Conclusion
Peer review fatigue isn’t a temporary inconvenience. It’s a structural weakness that’s eroding the foundations of scholarly publishing. Reviewers are tired, editors are overwhelmed, and authors are stuck in limbo. If we continue to ignore the signals, we’ll end up with a system that’s fast, cheap, and meaningless—the academic equivalent of junk food.
To fix peer review, we need to stop romanticizing it as a noble sacrifice and start treating it as real labor that deserves real compensation, recognition, and structural support. If academia truly values the integrity of research, then the peer review process must evolve into something sustainable, fair, and future-proof.