Peer review does not mean we can trust a published paper

May 3, 2013

“The benefit of published work is that if they have passed the muster of peer review future researchers can have faith in the results”, writes a commenter at The Economist. Such statements are commonplace.

I couldn’t disagree more. Nothing is more fatal to the scientific endeavour than having “faith” in a previously published result — as the string of failed replications in oncology and in social psychology is showing. See also the trivial but crucial spreadsheet error in the economics paper that underlies many austerity policies.

Studies have shown that peer-reviewers spend on average about 2-3 hours evaluating a paper that’s been sent their way. There is simply no way for even an expert to judge in that time whether a paper is correct: the best they can do is say “this looks legitimate, the authors seem to have gone about things the right way”.

Now that is a useful thing to be able to say, for sure. Peer review is important as a stamp of serious intent. But it’s a long way from a mark of reliability, and enormous damage is done by the widespread assumption that it means more than it does.

Remember: “has passed peer review” only really means “two experts have looked at this for a couple of hours, and didn’t see anything obviously wrong in it”.

Note. I initially wrote this as a comment on a pretty good article about open access at The Economist. That article is not perfect, but it’s essentially correct, and it makes me happy that these issues are now mainstream enough that it’s no longer a surprise when they’re covered by as mainstream an outlet as The Economist.

33 Responses to “Peer review does not mean we can trust a published paper”


  1. You are right, but peer review is evidence that at least somebody remotely competent in the subject has spent 2-3 hours reading the article. So it’s better than no peer review.

  2. Mike Taylor Says:

    … which is why I said “Now that is a useful thing to be able to say, for sure”.


  3. Mike, I’ve just posted this in reply to your comment at The Economist. I couldn’t see the link there to the study that you’ve got above. Irene

    ‘Remember: “has passed peer review” only really means “two experts have looked at this for a couple of hours, and didn’t see anything obviously wrong in it”. ‘
    Mike,
    Just wondering where you get your figure of a couple of hours for time spent on a review. Two large international, cross-disciplinary surveys on peer review have both found times much longer than this.
    Ware and Monkman (2008), with responses from >3000 researchers – mean 8.5h, median 5h per review, with only 15% spending 2h or less and 22% spending 10h or more.

    (PDF: PeerReviewFullPRCReport-final.pdf)

    Data are also broken down by subject area and age.
    Sense About Science (2009), with responses from >4000 researchers – median time 6 hours (the median is more meaningful here than the mean, as some researchers reported spending up to 100 hours on a review)
    http://www.senseaboutscience.org/news.php/87/peer-review-survey-2009
    and
    http://onlinelibrary.wiley.com/doi/10.1002/asi.22798/abstract
    So it’s a bit unfair to make the effort researchers spend on reviewing seem much less than it is. I also know from my own experience over more than 20 years that most reviewers provide detailed, thorough and perceptive reviews. But how good and relevant the reviews on any manuscript are also depends on how well and appropriately the reviewers are chosen.
    Interestingly, the recent (March 2013) Taylor & Francis Open Access Survey (> 14000 respondents) found that ‘rigorous peer review’ was the service rated the most important when authors were asked to rate the importance of services they expect to receive when paying to have their papers published open access. This was rated as more important than both rapid publication and rapid peer review.

    (PDF: open-access-survey-march2013.pdf)


  4. Agreed, so the glass is neither empty nor full, to please both optimists and pessimists. If you’ll allow an analogy, one might say “a male peacock’s shiny feathers DON’T mean the bird will have better chicks”, but it makes more sense to grow some feathers than to fight to the death in order to prove one’s worth. Peer review is like a courtship ceremony.

  5. Mike Taylor Says:

    Hi, Irene, thanks for this comment. My “two to three hours” estimate comes primarily from Yankauer (1990) in JAMA, the abstract of which is accessible at http://www.ncbi.nlm.nih.gov/pubmed/2304210

    I also heard a similar estimate mentioned by a speaker at a recent conference in Oxford (Rigour and Openness in 21st Century Science) but I wasn’t taking notes and don’t remember the reference used to back that up. I am pretty certain it wasn’t the JAMA study.

    It’s very interesting (and encouraging!) that the studies you cite here both indicate that reviewers invest significantly more time. I don’t know why the studies’ results differ so much. (For what it’s worth, I’d estimate that my own reviews on average take 8 to 12 hours.)

    Anyway, irrespective of the actual numbers, I hope you’ll agree with my core point: that while peer-review is valuable, it is absolutely NOT an indication that a published result can be trusted.


  6. Hi Mike,

    The paper you cite reports a survey of a small number of reviewers (276) quite a long time ago (1988), so I don’t think it can be used as a reflection of the situation today. It also involved just medical/health-sciences reviewers, so I’d be cautious about generalisations. Interestingly, both the Ware & Monkman and Sense About Science surveys found that reviewers in those areas spent less time on a review than any other group.

    I was at the Oxford conference on day 2 giving a talk (sorry to have missed you!) and times spent reviewing were discussed and updated from what had been said earlier.

    Peer review is basically just scrutiny by, and opinions from, experts, so that has to be a good thing: much better than none, or than opinions from people who don’t know the area. I’ve seen first-hand the value it can bring, not only to the papers reporting the research, but also to the work behind them and going forwards in those labs. But it’s only of value if done properly. There are problems with peer review, a couple being that quality is very variable and that, quite astonishingly, few people get any training (in any of the roles). At COPE we’ve just produced some Ethical Guidelines for Peer Reviewers. We’re hoping that besides being used by reviewers, journals and editors, they’ll be a resource for universities and institutions: http://publicationethics.org/resources/guidelines


  7. My understanding is that “Growth in a Time of Debt” (Reinhart & Rogoff 2010) was not actually peer-reviewed.

  8. Mike Taylor Says:

    That’s correct. But part of the reason it was taken as seriously as it was, was that it appeared in a peer-reviewed journal — in a non-reviewed “special issue”. I think it was widely assumed to have been peer-reviewed, and treated with undue reverence for that reason.

  9. Mike Taylor Says:

    Sorry to have missed you in Oxford, Irene — as you may know, I was only able to be there for Day One, and you evidently for Day Two. But I did meet the Other Mike Taylor, so it wasn’t a dead loss :-)

    It’s encouraging to think that your better peer-review numbers are both more recent and based on a larger sample size than mine. But needless to say, I stand by my core point, which is that however long is spent in peer-review, no-one should ever mistake “peer-reviewed” for “reliable” or “trustworthy”. Such praise is earned only over years and decades, as subsequent work shows a paper to have been both correct and useful.


  10. case in point:

    How not to do science

    I have sunk papers by being one of those >8-hour reviewing arseholes who actually re-do statistics, re-measure specimens, and double-check sources.

    and you know what?
    I am darn proud of it!

    I once, in such a case, wrote a review as long as the original paper, with 8 figures, because the idea of the paper was great but the implementation was abysmal. So I showed the authors how to do it right. Nothing ever came of it, but if I ever have a spare two weeks I will re-do their work, write it up, put their names on the author list after mine, and publish that stuff.


  11. Mike,

    Of course peer review cannot ensure 100% reliability, but your post seems to suggest that you’d be in support of a more rigid peer-review system, something that the majority of the OA mega-journals you support practice the opposite of. Using one article which is 25 (?) years old, along with an anecdotal story about something “you heard someone say” one time, to support your argument here is pretty weak IMO. I don’t always agree with you, but I’ve come to expect better from you. Thanks to Irene for hitting you between the eyes with actual facts, extensive experience, and good old common sense.

    Adam

  12. Nathan Myers Says:

    The time spent must vary greatly depending on the rigor of the field of study, the seriousness of the journal, the character of the reviewer, and the ambition of the paper, with not necessarily positive correlation. Two hours is probably not especially unusual, but I’ll bet two weeks is extremely unusual. It’s not unusual that a significant paper needs two weeks’ review by many more than two colleagues, and many published responses, before its conclusion can be considered reliable-for-now. I think that’s the point. Quibbling over how large a fraction of a day each of two or three reviewers can devote to skimming a paper misses the point.

    The value of peer review is really in how much attention you need not pay to publications that are not (even) peer-reviewed.

    [That said, this publication has not even been peer-reviewed.]

  13. Trish Groves Says:

    There’s a wide evidence base on how to conduct peer review effectively, ethically, and fairly, though there’s still much to learn. Indeed, for 25 years researchers and editors in biomedical science have been studying and trying to improve peer review, meeting every four years or so to discuss new peer review research at the International Congresses on Peer Review and Biomedical Publication.

    The seventh congress takes place in Chicago on 8-10 September 2013:
    http://www.peerreviewcongress.org/index.html

    If you click here:
    http://www.peerreviewcongress.org/previous.html

    the links will take you to the programmes and – often – to the abstracts for the previous six meetings. There are some JAMA supplements too. All the studies presented at the congress are, of course, peer reviewed.

    (Competing interest: I’m an editor at the BMJ, which has co-organised these congresses with JAMA, and I’m the European coordinator for this year’s congress)

  14. Mike Taylor Says:

    Adam Etkin wrote:

    Your post seems to suggest that you’d be in support of a more rigid peer review system.

    I hope it doesn’t suggest that! The idea is a phantasm. No, what I’m in support of is abandoning the comforting but childish illusion that having been through peer-review means a study is trustworthy. It’s amazing how often people talk as though this is true; but in fact a huge part of the story of science is peer-reviewed papers being shown to be wrong. The sooner we all accept this, the better.

    Thanks to Irene for hitting you between the eyes with actual facts, extensive experience, and good old common sense.

    Don’t you see that Irene’s better documented, more recent and more encouraging numbers don’t change the underlying issue at all? Anyone who said “I wouldn’t trust a paper that had been through three-hour reviews, but I trust this one because its reviewers took five hours” should be laughed out of town.

    If we estimate that a typical published but unreviewed finding is 50% likely to be correct, then putting it through review might raise that likelihood to, say, 60% or 70%. But the idea that it makes it 100%, or even 90%, is flatly wrong.
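
    (To make that arithmetic concrete, with purely illustrative numbers rather than anything measured: suppose half of all submitted findings are correct, and suppose review passes 90% of the correct papers but also waves through 50% of the incorrect ones. Then, by Bayes’ theorem, the probability that a paper is correct given that it passed review is

    P(correct | passed) = (0.9 × 0.5) / (0.9 × 0.5 + 0.5 × 0.5) = 0.45 / 0.70 ≈ 0.64

    That is, review nudges a coin-flip up to roughly two chances in three: useful, but nothing like certainty.)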

  15. Mike Taylor Says:

    Nathan Myers wrote:

    The value of peer review is really in how much attention you need not pay to publications that are not (even) peer-reviewed.

    Again, no. Watson and Crick’s DNA paper was unreviewed — should that have been ignored? All the papers in the 2006 Mesozoic Terrestrial Ecosystems volume were unreviewed (and I assume those from other MTE volumes) — should they be ignored? Many PNAS papers are unreviewed — should they be ignored? All Einstein’s papers were unreviewed — should they be ignored? Our neck-anatomy paper appeared on arXiv unreviewed before the reviewed version appeared, substantially similar, in PeerJ — should it have been ignored?

    And conversely we can all point to dozens of articles that did make it through peer-review but are profoundly wrong.

    So while having undergone peer-review is correlated with correctness, that correlation is pretty weak. Peer-review is neither necessary nor sufficient for correctness. There’s no pixie-dust in peer-review.

    This is not a controversial view, by the way. If people hold a different opinion, it’s because they’ve been badly taught, and/or are inexperienced as publishing scientists themselves. Honestly, no-one who’s received half a dozen reviews themselves can retain the fantasy that the ability to collect two “positive” ones proves anything.

  16. Mike Taylor Says:

    And let me be clear that my point here is not to have a go at peer-review. The point is that there is no silver bullet. If you want to know whether a published paper is good or not, it’s no good asking “was it peer-reviewed?” or “did it appear in JVP?” or any other such convenient short-cut. There are only two ways to evaluate it. You can either invest serious time (as in hundreds of hours, not two or three) to fully investigate its claims for yourself, replicating observations, experiments and analyses; or you can wait ten or twenty years and see what the community as a whole makes of it. Those are the only options.


  17. I’d counter by saying that if one wants to know whether a published paper is good or not, the best way to evaluate it is to ask “Was it peer reviewed?”, followed by “Is the journal reputable?”, and then to invest time to investigate. You seem to suggest throwing the first two out the window. OF COURSE there is no “silver bullet.” OF COURSE there are times when even peer review at reputable journals gets things wrong. But this post strikes me as throwing the baby out with the bathwater. IMO, in the large majority of cases rigorous peer review by reputable journals is still a good indicator that you can “trust a published paper.” But as the saying goes… TRUST BUT VERIFY.


  18. Adam:
    a) I know of many people who do claim peer review is a silver bullet.
    b) I have not found significant differences in publication quality between journals, but I have between editors.
    c) 60% of the papers I background-check with respect to Plateosaurus are so sloppily done in some respect that I seriously wonder what the editor and reviewers were smoking.
    In the end, it all boils down to the individual people; that’s true for authors, editors and reviewers. Whether something was peer-reviewed is practically meaningless: despite the higher level of trust you can indeed place in it, the necessary level of distrust is overwhelming whether the paper was reviewed or not.


  19. Heinrich, I know many who claim it is not a silver bullet.

    I’d suggest that if 60% of papers which make it to reviewers are as poor as you indicate, then perhaps this is an indication that the journal is not very reputable? Perhaps “reputable” is not the best word?

  20. Mike Taylor Says:

    Adam, Heinrich is not talking about any one journal (in case you were looking for ammunition to criticise one that you don’t like). He’s talking about the totality of all published papers. In which regard it’s worth taking a look at this, admittedly controversial, classic.


  21. Adam, I see Mike likes the smell of strawman arguments in the morning as much as I do.

    Or do you wish to insinuate that ALL journals are disreputable?


  22. Heinrich, I’m unsure whether your comment is directed at me or at Mike. As for myself, I happen to feel that most journals that conduct peer review are in fact reputable, with some bad apples spoiling it for the rest. The original point I think Mike is trying to make, that peer review doesn’t guarantee 100% accuracy, is often true (and obvious to anyone familiar with the STM publishing industry); however, the title of this piece is misleading IMO and carries an entirely different meaning, one I do not agree with at all. “Trust” has a different meaning than “always correct.” Even the most trustworthy can make mistakes. With nearly 2 million “peer reviewed” papers published last year, obviously errors will occur. A step in the right direction is MORE rigorous peer review, not less.

  23. Mike Taylor Says:

    More rigorous review is fine. But if it seduces you further down the dark path of assuming you can trust findings that have been through that more rigorous review, then it’s a net negative. Just improving our mechanisms is not the answer here. Casting off our illusions is.


  24. […] Remember: “has passed peer review” only really means “two experts have looked at this for a couple of hours, and didn’t see anything obviously wrong in it”. (https://svpow.com/2013/05/03/peer-review-does-not-mean-we-can-trust-a-published-paper/) […]


  25. […] this post came back to mind while reading a discussion on the excellent blog of Mike Taylor (an amateur palaeontologist who publishes, and a great defender of […]


  26. […] “Peer review does not mean we can trust a published paper” https://svpow.com/… […]

  27. Schenck Says:

    I read recently (somewhere) that even the journal Nature didn’t implement modern-style peer review across the board until the 1970s.

    Does anyone think that the quality of Nature publications changed significantly then?

    Caveat: of course they used editorial review.

    Sometimes I think we really make too much of peer-review. Crackpots and fake journals do it sometimes too.


  28. […] and lots of blog posts about problems and bad experiences with peer review (Simply Statistics, SVPow, and this COPE report)  .  There is lots of evidence that peer review suffers from deficiencies […]


  29. […] As we’ve discussed here before, having been through peer review certainly does not mean we can trust a published paper. People do sometimes talk as though this is the case, and it’s an absolute fallacy that we should be quick to rebut whenever we encounter it. […]


  30. […] peer-review. We do still want that “we went through review” badge on our work (without believing it means more than it really does) and the archiving in PubMed Central and CLOCKSS, and the removal of any reason for anyone to be […]


  31. […] Pat Brown reminds us that all published results are subject to further analysis and correction, whether peer-reviewed or not. See Peer review does not mean we can trust a published paper. […]


  32. […] I think it’s very possible that, instead of the all-Gold future outlined above, we’ll land up with something like this. Not every detail will work out the way I suggested here, of course, but we may well get something along these lines, where the emphasis is on very rapid initial publication and continuously acquired reputation, and not on a mythical and misleading “this paper is peer-reviewed” stamp. […]

