How to read research papers

By Daniel W. Drezner, a professor of international politics at the Fletcher School of Law and Diplomacy at Tufts University.

Ezra Klein made an interesting observation a few days ago about how opinion journalists read papers by experts:

[T]his is one of the difficulties with analysis. Fairly few political commentators know enough to decide which research papers are methodologically convincing and which aren’t. So we often end up touting the papers that sound right, and the papers that sound right are, unsurprisingly, the ones that accord most closely with our view of the world.

To which Will Wilkinson said "Amen": 

This is one of the reasons I tend not to blog as much I’d like about a lot of debates in economic policy. I just don’t know who to trust, and I don’t trust myself enough to not just tout work that confirms my biases. This is also why I tend to worry a lot about methodology in my policy papers. How much can we trust happiness surveys? How exactly is inequality measured? How exactly is inflation measured? Does standard practice bias standard measurements in a particular direction? Of course, the motive to dig deeper is often suspicion of research you feel can’t really be right. But this is, I believe, an honorable motive, as long as one digs honestly. Indeed, I’m pretty sure motivated cognition, when constrained by sound epistemic norms, is one of the mainsprings of intellectual progress.

One way to weigh competing research papers is to consider the publishing outlet.  Presumably, peer-reviewed articles will carry greater weight.  Except that Megan McArdle doesn’t presume:

Especially for papers that rely on empirical work with painstakingly assembled datasets, the only way for peer reviewers to do the kind of thorough vetting that many commentators seem to imagine is implied by the words "peer review" would be to . . . well, go back and re-do the whole thing.  Obviously, this is not what happens.  Peer reviewers check for obvious anomalies, originality, and broad methodological weakness.  They don’t replicate the work themselves.  Which means that there is immense space for things to go wrong–intentionally or not….

This is not to say that the peer review system is worthless.  But it’s limited.  Peer review doesn’t prove that a paper is right; it doesn’t even prove that the paper is any good (and it may serve as a gatekeeper that shuts out good, correct papers that don’t sit well with the field’s current establishment for one reason or another).  All it proves is that the paper has passed the most basic hurdles required to get published–that it be potentially interesting, and not obviously false.  This may commend it to our attention–but not to our instant belief.

This jibes with a recent Chronicle of Higher Education essay that bemoaned the explosion of research articles:

While brilliant and progressive research continues apace here and there, the amount of redundant, inconsequential, and outright poor research has swelled in recent decades, filling countless pages in journals and monographs. Consider this tally from Science two decades ago: Only 45 percent of the articles published in the 4,500 top scientific journals were cited within the first five years after publication. In recent years, the figure seems to have dropped further. In a 2009 article in Online Information Review, Péter Jacsó found that 40.6 percent of the articles published in the top science and social-science journals (the figures do not include the humanities) were cited in the period 2002 to 2006.

None of this provides much comfort for the layman interested in navigating through the miasma of contradictory research papers.  How can the amateur policy wonk separate the wheat from the chaff? 

Below are seven useful rules of thumb to guide you.  These are not foolproof — in fact, that’s one of the rules — but they can provide some useful filtering when trying to discern good research from not-so-good research:

1)  If you can’t read the abstract, don’t bother with the paper.  Most smart people, including academics, don’t like to admit when they don’t understand something they read.  This provides an opening for those who purposefully write obscurantist or jargon-filled papers.  If you’re befuddled after reading the abstract, move on — a poorly worded abstract is the first sign of bad writing, and bad academic writing is commonly linked to bad analytic reasoning.

2)  It’s not the publication, it’s the citation count.  If you’re trying to determine the relative importance of a paper, enter it into Google Scholar and check out the citation count (a rough sketch of automating this check appears after this list).  The more a paper is cited, the greater its weight among those in the know.  Now, this doesn’t always hold — sometimes a paper is cited along the lines of, "My findings clearly demonstrate that Drezner’s (2007) argument was, like, total horses**t."  Still, for papers that are more than a few years old, the citation count is a useful metric.

3)  Yes, peer review is better.  Nothing Megan McArdle wrote is incorrect.  That said, peer review does perform some of the vetting so that the reader doesn’t have to.  If nothing else, it’s a useful signal that the author thought the paper could pass muster with critical colleagues.  Now, there are times when a researcher will bypass peer review to get something published sooner.  Even so, in international relations, scholars who publish in non-refereed journals usually have a version of the paper intended for peer review.

4)  Do you see a strawman?  It’s a causally complex world out there.  Any researcher who doesn’t test an argument against viable alternatives isn’t really interested in whether he’s right or not — he just wants to back up his gut instincts.  A "strawman" appears when an author treats the most extreme caricature of the opposing argument as the only viable alternative.  If the rival arguments sound absurd when you read about them in the paper, it’s probably because the author has no interest in presenting the sane version of them.  Which means you can ignore the paper.

5)  Are the author’s conclusions the only possible conclusions to draw?  Sometimes a paper can rest on solid theory and evidence, but then jump to policy conclusions that seem a bit of a stretch (click here for one example).  If you can reason out different policy conclusions from the theory and data, then don’t take the author’s conclusions at face value.  To use some jargon, sometimes a paper’s positivist conclusions are sound, even if the normative conclusions derived from the positive ones are a bit wobbly.  

6)  Can you falsify the author’s argument?  Conduct this exercise when you’re done reading a research paper — can you picture the findings that would force the author to say, "you know what, I can’t explain this away — it turns out my hypothesis was wrong"?  If you can’t picture that, then you can discard what you’re reading as a piece of agitprop rather than a piece of research.

7)  Fraudulent papers will still slip through the cracks.  Trust is a public good that permeates all scholarship and reportage.  Peer reviewers assume that the author is not making up the data or plagiarizing someone else’s idea.  We assume this because if we didn’t, peer review would be virtually impossible.  Every once in a while, an unethical author or reporter will exploit that trust and publish something that’s a load of crap.  The good news on this front is that the people who do this can’t stop themselves from doing it on a regular basis, and eventually they make a mistake.  So the previous rules of thumb don’t always work.  The publishing system is imperfect — but "imperfect" does not mean the same thing as "fatally flawed."
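A quick illustration of rule No. 2: the sketch below shows what automating a citation-count lookup might look like.  It assumes the third-party Python package scholarly, which scrapes Google Scholar and can be rate-limited or blocked; the package, the field name, and the example title are illustrative assumptions on my part, and simply typing the title into Google Scholar’s search box works just as well.

```python
# Rough sketch of rule No. 2: look up a paper's Google Scholar citation count.
# Assumes the third-party "scholarly" package (pip install scholarly), which
# scrapes Google Scholar and may be throttled or blocked; illustration only.
from scholarly import scholarly


def citation_count(title: str) -> int:
    """Return the citation count for the first Google Scholar hit on a title."""
    first_hit = next(scholarly.search_pubs(title))  # StopIteration if no results
    return first_hit.get("num_citations", 0)


if __name__ == "__main__":
    # Hypothetical query, used purely for illustration.
    print(citation_count("All Politics Is Global"))
```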

With those rules of thumb, go forth and read your research papers. 

Other useful rules of thumb are encouraged in the comments. 

Daniel W. Drezner is a professor of international politics at the Fletcher School of Law and Diplomacy at Tufts University. He is the author of the newsletter Drezner’s World. Twitter: @dandrezner
