- By Daniel W. Drezner
Daniel W. Drezner is professor of international politics at the Fletcher School of Law and Diplomacy at Tufts University and a senior editor at The National Interest. Prior to Fletcher, he taught at the University of Chicago and the University of Colorado at Boulder. Drezner has received fellowships from the German Marshall Fund of the United States, the Council on Foreign Relations, and Harvard University. He has previously held positions with Civic Education Project, the RAND Corporation, and the Treasury Department.
There’s an awful lot being written about the myriad ways in which political science and international relations scholarship skews this way and that way, but not the meritocratic way. Underscoring that point, Peter Campbell and Michael Desch have an essay titled "Rank Irrelevance" over at Foreign Affairs. Their target is the National Research Council (NRC) rankings of graduate programs in political science. It’s also part of their larger Carnegie-financed research project on policy relevance in the academy. As they explain at their website, a key component of their research program focuses on:
[H]ow traditional academic disciplinary rankings might skew the sort of work scholars undertake and highlight how different sets of criteria based upon sub-field criteria and broader relevance could produce very different rankings. To illustrate this, we have ranked the top fifty political science departments based on 37 different measures of scholarly excellence and broader policy relevance of their international relations faculty. We have also done the same thing for the 442 individual scholars in that group.
So, how are the NRC’s academic disciplinary rankings skewed? Campbell and Desch explain:
[T]he NRC measured academic excellence by looking at a variety of parochial measures, including publications per faculty member and citations per publication. But the NRC only counted work published in disciplinary journals, while excluding books and non-peer-reviewed magazines (like Foreign Affairs). The NRC also calculated faculty productivity and intellectual impact exclusively by tallying scholarly articles (and limited it to those covered by the Web of Science, the most well-known index of this type). In addition, the NRC considered percent faculty with grants, awards per faculty member, percent interdisciplinary faculty, measures of ethnic and gender diversity, average GRE scores for admitted graduate students, the level of financial support for them, the number of Ph.D.s awarded, the median time to degree, and the percentage of students with academic plans, among other factors.…
The NRC’s methodology biased its rankings against two kinds of scholarship: international relations scholarship, which is often book-oriented; and policy-relevant scholarship, which often appears in non-peer-reviewed journals. That leads to vast undervaluation of many leading scholars and, accordingly, their schools.… It also discourages ranked programs from promoting authorship of in-depth policy-relevant work.…
[W]e believe that broader criteria of scholarly excellence and relevance ought to be part of how all departments are ranked. We are not advocating junking traditional criteria for academic rankings; rather, we urge that such narrow and disciplinarily focused criteria simply be balanced with some consideration of the unique aspects of international relations and also take account of the broader impact of scholarly work.
My FP colleague Stephen Walt has already praised the value-added of Campbell and Desch’s approach:
Their point — and it is a good one — is that the standards and methods used to evaluate graduate programs are inherently arbitrary, and if you reward only those publications that are least likely to generate policy-relevant research, you are going to get an academic world that tends to be inward-looking and of less practical value.
I have a slightly different take. To be sure, Campbell and Desch raise one valid point: The NRC, by ignoring books, discriminates against fields that place more importance on them — namely, international relations, political theory, and comparative politics. Incorporating university press books would seem to be a relatively quick and easy fix to that problem. Even here, however, it’s not clear to me why international relations is particularly "unique" within political science. If anything, it’s the Americanists who are unique, with their overwhelming emphasis on journal articles.
The thing is, Campbell and Desch do not want to stop there. They also suggest that an appropriate ranking system should include factoring in policy relevance. This could be done through counting policy publications (in Foreign Policy and Foreign Affairs), serving in the government with a Council on Foreign Relations International Affairs Fellowship, or congressional testimony.
Now, I’m a big fan of policy relevance. I’ve published in both Foreign Policy and Foreign Affairs. I’ve had the CFR fellowship. Hell, I even testified before Congress a few times. Throwing false modesty aside, if I were included in Campbell and Desch’s individual scholar rankings, I’d kick ass and take names.
That said, incorporating all of these policy-relevant factors would be a pretty bad way to rank political science departments.
The obvious problem with these metrics is that they discriminate against the other political science fields way more than the status quo discriminates against international relations scholars. But let’s assume that Campbell and Desch would also fold, say, testifying before state legislatures or advising foreign governments into their metrics. Logically, this proposal still doesn’t hold together.
On the one hand, their definition of "policy relevance" is exceedingly narrow. It consists primarily of actions or publications that serve the U.S. government. It’s entirely conceivable that some international relations scholars, for ethical or normative reasons, might decide that they would rather not aid the state with their service, authorship, or testimony. Surely there are other ways scholars can become policy relevant: advising NGOs, jump-starting social movements or campaigns, or even, say, out-and-out partisan blogging. Unless one wants to create a bias that rewards scholars for cozying up to the state, Campbell and Desch would have to devise a much more inclusive formula to calculate "policy relevant activities."
On the other hand, if you go that far, you’ve probably gone too far. You’re ranking scholars and departments not for their scholarship, but for their ability to act in a political manner in the service of that scholarship — or simply for asserting policy positions from a position of authority. As Johannes Urpelainen observed:
Academic policy relevance should be defined as the ability to use the scientific method to contribute to policy formulation. Insightful commentary based on a gut feeling or authority is not academic policy relevance. It results from a fundamental misunderstanding of what the role of academic institutions is in the global society. International relations scholars who feel the need to comment on current events based on their personal views or experiences can do so, but their policy relevance must be evaluated based on their ability to use the scientific method to add value.…
[A]s an academic, I am more than happy to subject my work to peer review. If my arguments are logically flawed or my identification strategy weak, I should not be rewarded just because some policymaker out there wants to justify a policy by referring to an Ivy League academic who is of the same opinion. International scholars should work harder than ever before to do the kind of research that survives the difficult process of peer review.
See Steve Saideman on this point as well.
Supporters of Campbell and Desch’s argument might say that such an attitude "fetishizes" peer review at the expense of, say, writing for Foreign Policy. And there’s no denying that the peer review system is imperfect. But I have seen, up close, the gatekeeping system that operates in order to crack Foreign Affairs or the New York Times op-ed page — as, I’m sure, Campbell and Desch have. I’m therefore a bit gobsmacked that any academic would claim that this kind of non-peer-reviewed system is somehow fairer than what operates in the academy. In actuality, these other publication outlets stack the deck heavily in favor of name recognition and the prestige of one’s home institution. They might do that for valid or invalid reasons — but those reasons have very little to do with scholarly achievement.
Focusing primarily on peer-reviewed publications is a lousy, flawed, and inefficient way of doing rankings — until one considers the alternatives.
Campbell and Desch are correct that scholars should not be punished for trying to enter the public sphere. As biases go, however, I’d posit that the one against policy relevance has faded over time and is a far less disconcerting form of bias than, say, this one.
What do you think?