One of our newspapers recently addressed the practice of peer review in science. Based on a flawed but sexy paper in Nature, where review obviously failed, the system was diagnosed as sick, and a cure is needed. Here is my solution.
Peer review works mostly fine, and in my experience the average quality of review is linearly related to the impact factor of the journal. That is dangerous, because a high impact factor is, at least partly, the result of gaming the system (see). As a science participant, I receive several review requests per week, and if I accept I am supposed to review in either my boss's time or my own (for the journal, both are free). If I recommend rejection, there will always be another journal in which the "masterpiece" will be published. The number of journals and the number of papers is growing madly, and, in the end, everything gets published, whereupon the work acquires an aura of "truth" because it was "peer-reviewed".
The pathophysiology of the disease:
- Peers cannot match the increased demand for review, which reduces the quality of reviews, and thereby the quality of published peer-reviewed science.
- The career of scientists depends on the number of peer-reviewed publications and the impact factor of the journals.
- Journals accept "suboptimal" science because publication will augment the impact factor.
How to break this chain of madness, strengthened by perverse stimuli?
Why don't we restrict peer review to the journals in the current impact-factor top 25% (or top 50% as a compromise) per science category? The peer reviewers get paid for their work by the journal (which would also justify the annoying reminders to hurry up), and the authors evaluate the quality of the review (stratified by "paper accepted or not"). These scores will be available to the reviewers as a metric for academic performance (the PR-index).
So, what about the studies that don't make it into the "Champions League" journals? Instead of offering the manuscript successively to 10 journals (from NEJM to PLoS One and everything in between), authors can post it directly on the websites of their institutes (UMC Utrecht for me). Through these websites the manuscripts are accessible, indexed in PubMed and other search tools, and the raw data are publicly available for those who suspect miscalculations. How to guarantee quality? That's up to the institute. If they trust their people, they "publish" everything. If not, they find a system for quality checks.
Read my words: in 10 years, manuscripts on the website of University X will be cited as often as those published in the NEJM, and top researchers at University X will choose to publish their breakthrough findings within a week on their own website. No delays, no costs, no reviewer #3.
5 thoughts on “A disease called “peer review””
As an ex Editor-in-Chief of an international journal, I’m not sure I agree that “it works just fine”.
Peer reviews are a nightmare to get, never mind thorough reviews. It always amused me that the authors who bothered to contact me directly to ask if their masterpiece could be "fast-tracked" were always the ones who steadfastly refused to participate in the peer review process themselves. If your paper is apparently being held up by a journal, the delay is almost certainly because the editor is trying to get a review out of someone who might actually know what they are talking about.
I suggest that the self-publishing idea is potentially dangerous. Part of the role of a reviewer is to sort the wheat from the chaff. Bad research can do very real damage: look at the impact the now infamous Lancet paper on MMR and autism had on measles rates, although arguably that paper should, in theory at least, have been caught by review.
Active participation in PR should be made compulsory for all card-carrying academics and researchers.
If you won’t make the time to review other people’s papers, why on earth should anyone drop everything to look at yours?
It is also very good for you- there is nothing like seeing the holes in other people’s offerings to make you better at seeing the weaknesses in your own.
Thanks for this comment, which convinces me even more. "Peer reviews are a nightmare to get": I agree (I was once an associate editor) and, yes, reviewing is educational. That is why I do not propose to abandon peer review completely (or immediately), which means there would still be the potential to stop papers like the MMR one from being published in a major journal. And, yes, self-publishing is potentially dangerous, but the risk lies with the institute that posts it! So: more time for good reviewing (paid for by the journals) and pressure on academic institutes to deliver quality. With that, I think the scientific community will do an even better job of "self-cleaning". I'm still sharpening my thoughts on this.
This is an excellent post: the problem is well delineated and a practical solution provided. The hyperbolic nature of an Internet-driven Information Era carries the risk of quantity beating quality, and the solution offered here forms a solid foundation for ongoing refinement of processes to improve quality in a higher-quantity environment.
I like Marc's idea. Do not underestimate the power of self-cleaning. If a publication posted by the researcher or institute itself is not trusted by the community, it will not be cited. It also partly breaks down the power of a few large journals, which obviously also fail regularly. It has been proposed before to just publish everything and let natural selection do its work. This way, studies that journals typically do not want will also be published, such as replications of other studies (which are a fundamental aspect of science, but not attractive for journals). As a matter of fact, in physics this is already normal, where highly appreciated researchers publish new theories on their own websites rather than in journals, because the theories are sometimes so radical that no journal dares to publish them.
Thanks, Marc and all, for the interesting comments. Where does the idea of open peer review fit into all this? It would provide a modicum of kudos to academics who provide reviews, and would also temper some of the more outrageous comments from Reviewer #3!