Breaking Ranks in Academia
Why does so much of the academic writing on international affairs seem to be of little practical value, mired in a "cult of irrelevance"? Is it because IR scholars are pursuing a misleading model of "science," patterned after physics, chemistry, or biology? Or is it because many prominent academics fear criticism and are deathly afraid of being controversial, and prefer to hide behind arcane vocabulary, abstruse mathematics, or incomprehensible postmodern jargon?

Both motivations are probably at work to some degree, but I would argue that academics are for the most part just responding to the prevailing incentive structures and metrics that are used to evaluate scholarly merit. This point is made abundantly clear in an important new article by Peter Campbell and Michael Desch of the University of Notre Dame, titled "Rank Irrelevance: How Academia Lost Its Way." Campbell and Desch examine the methodology behind the National Research Council rankings of graduate programs in political science, and argue that the methods used are both "systematically biased" and analytically flawed.

National Research Council (NRC) rankings carry a fair bit of weight in academia. As I know from my own experience, deans, provosts, and presidents pay attention to where departments are ranked. A department chair who presides over a significant improvement in his/her department’s ranking will be viewed favorably, while a decline sets off warning bells. Similarly, if a junior faculty member is up for tenure and gets an "outside offer" from a more highly ranked department, that will be taken as a strong signal of that faculty member’s perceived value. By contrast, if you’re up for tenure and get an offer from a department ranked further down the food chain, it will be a positive sign but not necessarily dispositive. For these and other reasons, these rankings matter.

The problem, as Campbell and Desch show, is that the rankings are seriously flawed. The current NRC methodology emphasizes scholarly publications in "peer-reviewed" journals, for example, because that is what the natural sciences do. That sounds like a sensible approach at first hearing, but this procedure biases the assessment in favor of subfields where scholars tend to publish journal articles (such as American politics) and undervalues subfields where books are more common (such as international relations). It also gives little or no weight to publications in journals such as Foreign Affairs or Foreign Policy (i.e., the sort of publication that a policymaker might actually read and that might actually have some impact in the real world). Given how the rankings are calculated, in short, it is inevitable that most political scientists concentrate on writing things that hardly anyone reads.

To drive this point home, Campbell and Desch show how different evaluation schemes would have a dramatic effect on the rankings of various graduate programs. (See here for a compelling chart and here for their full results.) Their point — and it is a good one — is that the standards and methods used to evaluate graduate programs are inherently arbitrary, and if you reward only those publications that are least likely to generate policy-relevant research, you are going to get an academic world that tends to be inward-looking and of less practical value.

In other words, most academic scholars — and especially the younger ones whose careers are still in flux — are just responding to the set of incentives and standards that currently prevail. But these standards are not cast in stone, and there is no a priori reason why scholars could not employ a broader set of criteria when judging candidates for hiring and promotion and when ranking departments. That is indeed what Campbell and Desch recommend. Money quotation:

Simply put, when you rank political science departments by disciplinary, subfield, and broader relevance criteria, you get very different results. Given that, we believe that broader criteria of scholarly excellence and relevance ought to be part of how all departments are ranked. We are not advocating junking traditional criteria for academic rankings; rather, we urge that such narrow and disciplinarily focused criteria simply be balanced with some consideration of the unique aspects of international relations and also take account of the broader impact of scholarly work.

Good advice. Assuming, of course, that you think academia ought to play a significant role in helping society address important social and political problems.