On academic rigor
In academic research, "rigor" is an especially cherished quality. If you want to praise a scholar’s work, you talk about how "rigorous" it is. If you want to dis someone’s scholarship politely, you might sniff and say, "Well, it’s interesting, but it’s not very rigorous."
But what we mean by "rigor" isn’t always clear, and the way it is implemented in practice may even be counterproductive. Many academics tend to define "rigor" in narrow technical terms: 1) Did the researcher employ the most advanced methodological practices? 2) Did he or she consider and debunk alternative explanations convincingly? 3) Was the data-collection procedure especially careful? 4) Did he or she examine all the relevant archives or only a few? 5) Was the statistical model properly "identified"? Etc., etc. These criteria can be applied to both quantitative and qualitative research, by the way. In this sense, "rigor" is conceived as a measure of technical proficiency, designed to give us confidence that the claims being advanced are in fact valid.
The gold standard for "rigorous" research is publication in a "peer-reviewed" academic journal. By subjecting papers to anonymous peer review, academic fields supposedly weed out less "rigorous" works and publish only the best research. Different scholarly journals acquire reputations over time, and publishing in "top" journals is seen as the primary measure of a scholar’s worth. University presses follow similar procedures when deciding which monographs to publish, and they too develop reputations of various sorts. Notice, however, that this is all inherently subjective: A journal or a publisher is regarded as prestigious if scholars in the field believe it is.
There’s a lot to be said for this basic approach, which has generated real progress in some fields. I’ve spent much of my own career writing articles for refereed journals, reviewing manuscripts for them, and co-editing a book series for a university press, so I’m hardly hostile to this way of doing business. But if we’re really honest with ourselves, academics ought to acknowledge that the system is far from perfect and even encourages some counterproductive tendencies.
For starters, peer review doesn’t guarantee that false results don’t get published; academic journals are filled with articles that are subsequently shown to have contained significant errors. That’s inevitable in the research enterprise, of course, but it is a reminder that peer review alone is not a guarantee of quality. And it certainly doesn’t guarantee that a particular work of scholarship will be useful or important, because most published academic articles are read by very few people and essentially disappear without a trace.
Second, peer review isn’t a mechanical process that automatically winnows the good from the bad. In my experience, journal editors play key independent roles in the evaluation process, and their autonomy can have a huge impact on which works actually get published. Editors don’t have to blindly follow reviewers’ advice if they think a particular manuscript has potential that the reviewers didn’t see, and they can nurture a piece that they think makes a contribution. In this way, editors with a particular vision can guide journals in one direction or another. By the same token, lazy or narrow-minded editors can harm a journal (or a subfield) either by mindlessly following reviewers’ advice or by relying too much on an intellectually narrow set of reviewers.
Third, peer review is probably overvalued because reviewers’ comments are often less than helpful and rarely decisive. By the time most articles are submitted for publication, they’ve usually been presented at academic seminars and have gone through multiple drafts in response to suggestions from the authors’ friends and colleagues. I’ve occasionally gotten useful suggestions from an anonymous reviewer’s report, but I’d say that more than half the comments I’ve received over the years were of no value at all and I simply ignored them. Indeed, a dirty little secret is that a lot of "peer reviews" are no more than a couple of cursory paragraphs along with a recommendation to publish, reject, or revise and resubmit. If that’s the reality of the review process, then why do we fetishize publication in "peer-reviewed" journals as much as we do? In other words, knowing that something got published in the American Political Science Review, World Politics, International Organization, or International Security doesn’t tell you very much about its real value. You have to read it for yourself to make a firm judgment.
Fourth, fetishizing refereed journals (and their supposed rankings) encourages universities to make personnel decisions on the basis of supposedly "objective" indicators such as citation counts, number of "peer-reviewed" articles, and the like. These measures can be useful when used with caution, but they are at best an indirect measure of a scholar’s real contribution. A high citation count may simply indicate that one is working in a faddish subfield and doing "normal science" that other scholars find acceptable but not necessarily pathbreaking. It may also be a sign that one has written something that got a lot of attention even though (or because) it was dead wrong. Again, the danger is that departments and university administrators will judge research output not by actually reading the work and making an informed assessment, but by looking at these various indirect indicators.
Fifth, the notion of rigor embedded in these practices may actually make it easier for incorrect or trivial scholarship to survive. If the desire to be seen as "rigorous" leads scholars to produce works that are difficult to understand (because they rely on rarefied techniques, specialized data, obscure historical sources, or arcane and confusing language), then it is going to be harder for anyone reading the work to evaluate its claims.
By contrast, a scholarly argument that is simple, straightforward, and fairly easy to grasp is inherently easier to evaluate. Accordingly, scholarship that is accessible — i.e., that is easily read and understood — will face a larger audience of potential critics than a piece of scholarship that can be understood only by a small, rarefied group of readers, many of whom may share a lot of the presuppositions of the study’s author(s).
In short, publications whose clarity widens the circle of potential challengers can actually contribute to scholarly advancement, because the larger the audience that can understand and evaluate an argument, the likelier it is that errors will be exposed and corrected and the better the argument will have to be to win or retain approval. By contrast, a dubious argument that is presented in an opaque or impenetrable way may survive simply because potential critics cannot figure out what the argument is or because it is too time-consuming and difficult to try to replicate the published results. As mathematician Melvyn Nathanson observes, "The more elementary the proof, the easier it is to check and the more reliable is its verification."
Please note: I am not suggesting that academia discard peer review and discourage scholars from publishing in prestigious journals. Rather, I’m suggesting that the social sciences would be more useful and more rigorous if members of these disciplines adopted a less hidebound approach to the merits of different types of publication. "Should it really be the case," Bruce Jentleson correctly asks, "that a book with a major university press and an article or two in [a refereed journal] … can almost seal the deal on tenure, but books with even major commercial houses count so much less and articles in journals such as Foreign Affairs often count little if at all?"
Instead of privileging one sort of publication over others, based on a narrow notion of "rigor," we ought to recognize that different types of scholarly writing reach different audiences and are exposed to different forms of outside scrutiny. In most cases, an article published in a prominent economics, history, or political science journal will be read by relatively few people, one or two of whom may then take issue with the work and challenge its findings. By contrast, if that same author presented the results in an article or report intended for a broader audience, so that it was read by a much larger number of informed citizens and by well-informed practitioners in the real world, then this larger population of readers might be quick either to hail its contribution or to identify obvious mistakes. This corrective capacity may be even more pronounced in the Internet age, which allows readers on every continent to challenge an author’s claims — assuming, of course, that those claims are not published in obscure venues or written in ways that make them hard for all but a few people to understand.
Finally, fetishizing "peer review" is a good way to ensure that fewer and fewer people pay attention to what academics have to say about important world issues. This is especially true in fields like IR and public policy, whose main social value lies in what we (supposedly) can contribute to public and elite understanding of a complex world. But if universities reward only the things that scholars write for each other, we will be encouraging a narrow professionalism and contributing to the cult of irrelevance that rules many academic departments. And over time, we shouldn’t be surprised if the outside world places less and less value on what we have to say and eventually decides to invest society’s finite resources in other activities.
Stephen M. Walt is a columnist at Foreign Policy and the Robert and Renée Belfer professor of international relations at Harvard University. Twitter: @stephenwalt