One of our newspapers recently addressed the practice of peer review in science. Based on a flawed but sexy paper in Nature, where review obviously failed, the system was diagnosed as sick and in need of a cure. Here is my solution.
Peer review works mostly fine, and in my experience the average quality of review is linearly related to the impact factor of the journal. That is dangerous, as a high impact factor is at least partly the result of gaming the system (see). As a science participant, I receive several review requests per week, and if I accept, I am supposed to review in either my boss’s time or my own (for the journal, both are free). If I recommend rejection, there will always be another journal in which the “masterpiece” will be published. The number of journals and the number of papers are growing madly, and in the end everything gets published, after which the work acquires an aura of “truth” because it was “peer-reviewed”.
The pathophysiology of the disease:
- Peers cannot match the growing demand for review, which reduces the quality of reviews and, with it, the quality of published peer-reviewed science.
- The careers of scientists depend on their number of peer-reviewed publications and the impact factors of the journals they appear in.
- Journals accept “suboptimal” science because publishing it will still augment their impact factor.
How do we break this chain of madness, reinforced by perverse incentives?
Why don’t we restrict peer review to the journals in the current impact-factor top 25% (or top 50% as a compromise) per science category? The peer reviewers get paid for their work by the journal (which would also justify the annoying reminders to hurry up), and the authors evaluate the quality of the review (with scores stratified by whether the paper was accepted or not). These scores would be available to the reviewers as a metric of academic performance: the PR-index.
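Purely as an illustration (the exact scoring rule is mine, not an established metric), such a PR-index could be the average author score a reviewer receives, computed separately for accepted and rejected papers so that disgruntled rejected authors cannot drag the index down:

$$\mathrm{PR} = \frac{1}{2}\left( \frac{1}{|A|}\sum_{i \in A} s_i + \frac{1}{|R|}\sum_{j \in R} s_j \right)$$

where $A$ and $R$ are the sets of reviews the reviewer wrote for accepted and rejected manuscripts, and $s_i$ is the score (say, 1 to 10) that the authors gave review $i$.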
So, what about the studies that don’t make it into the “Champions League” journals? Instead of offering the manuscript successively to ten journals (from NEJM to PLoS One and everything in between), authors could post it directly on the website of their institute (UMC Utrecht, in my case). Through these websites the manuscripts would be accessible, indexed in PubMed and other search tools, and the raw data would be publicly available for those who suspect miscalculations. How to guarantee quality? That’s up to the institute. If they trust their people, they “publish” everything. If not, they devise their own system of quality checks.
Mark my words: in 10 years, manuscripts on the website of University X will be cited just as often as those published in the NEJM, and top researchers at University X will choose to publish their breakthrough findings on their own website within a week. No delays, no costs, no reviewer #3.