Daniel W. Drezner
The cartography of peevishness [UPDATED]
So last week there was some interesting data clean-up in the foreign policy blogosphere, and some less interesting commentary on it. Let’s dive in!
Max Fisher posted an item at the Washington Post relying on World Values Survey (WVS) data to generate a global map of racism. He found a Foreign Policy write-up of a Kyklos paper by two Swedish economists that relied on WVS data. Fisher’s map was based on a response to one question:
The survey asked respondents in more than 80 different countries to identify kinds of people they would not want as neighbors. Some respondents, picking from a list, chose “people of a different race.” The more frequently that people in a given country say they don’t want neighbors from other races, the economists reasoned, the less racially tolerant you could call that society.
Fisher constructed a global map based on the responses to that query, a map that contained some striking findings. Western countries seemed to be far more tolerant (or far savvier at answering this survey question). Countries such as Pakistan seemed to be way more tolerant than India and Bangladesh, for example.
Fisher’s post generated a lot of attention (full disclosure: I tweeted about it) — so much so that some social scientists started to look at the WVS data and found some serious issues with it. The Fletcher School’s Ashirul Amin, for example, dug into the data and found that the reason for the seemingly low tolerance of Bangladeshis was a data entry error on the World Values Survey site — the number of "tolerant" and "intolerant" respondents were reversed for one particular year.
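That kind of swapped-column error is exactly what a simple consistency check can catch: a country whose tallies flip in one survey wave will stand out against its own other waves. A minimal sketch of such a check, using invented tallies (not actual WVS figures) and hypothetical function names:

```python
# Hypothetical sanity check for survey tallies: flag country-years whose
# "intolerant" share diverges sharply from that country's other waves --
# the kind of anomaly a swapped data-entry column would produce.
# All numbers below are invented for illustration, not actual WVS data.

def intolerant_share(tolerant, intolerant):
    """Fraction of respondents naming 'people of a different race'."""
    return intolerant / (tolerant + intolerant)

def flag_outlier_waves(waves, threshold=0.3):
    """Return wave years whose intolerant share deviates from the
    country's median share by more than `threshold` (a proportion)."""
    shares = {year: intolerant_share(t, i) for year, (t, i) in waves.items()}
    ordered = sorted(shares.values())
    median = ordered[len(ordered) // 2]
    return [year for year, s in shares.items() if abs(s - median) > threshold]

# Invented tallies: (tolerant, intolerant) respondents per wave.
# The 2008 pair is deliberately reversed relative to the earlier waves.
bangladesh = {1996: (1340, 160), 2002: (1285, 215), 2008: (430, 1070)}
print(flag_outlier_waves(bangladesh))  # → [2008]
```

A reviewer would still have to decide whether a flagged wave reflects a real attitudinal shift or a clerical error, but the check at least surfaces the question before the map gets drawn.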
Other social scientists, including Steve Saideman, also weighed in with methodological criticisms.
Going further, Siddhartha Mitter pointed out ways in which different nationalities view "race" as a different kind of social construct, thereby making inter-country comparisons a problematic exercise.
The biggest problem, of course, is that “race” is impossible to operationalize in a cross-national comparison. Whereas a homosexual, or an Evangelical Christian, or a heavy drinker, or a person with a criminal record, means more or less the same thing country to country, a person being of “another race” depends on constructs that vary widely, in both nature and level of perceived importance, country to country, and indeed, person to person. In other words, out of all of the many traits of difference for which the WVS surveyed respondents’ tolerance, the Swedish economists – and Fisher, in their wake – managed to select for comparison the single most useless one.
The reason I’m blogging about this, however, is where Mitter went after lodging these criticisms. According to him, the fault lies not with the data entry, but with the foreign policy blogger:
The problem here isn’t the “finding” that the Anglo-Saxon West is more tolerant. The problem is the pseudo-analysis. The specialty of foreign-affairs blogging is explaining to a supposedly uninformed public the complexities of the outside world. Because blogging isn’t reporting, nor is it subject to much editing (let alone peer review), posts like Fisher’s are particularly vulnerable to their author’s blind spots and risk endogenizing, instead of detecting and flushing out, the bullshit in their source material. What is presented as education is very likely to turn out, in reality, obfuscation.
This is an endemic problem across the massive middlebrow “Ideas” industry that has overwhelmed the Internet, taking over from more expensive activities like research and reporting. In that respect, Fisher’s work is a symptom, not a cause. But in his position as a much-read commentator at the Washington Post, claiming to decipher world events through authoritative-looking tools like maps and explainers (his vacuous Central African Republic explainer was a classic of non-information verging on false information, but that’s a discussion for another time), he contributes more than his weight to the making of the conventional wisdom. As such, it would be welcome and useful if he held himself to a high standard of analysis – or at least, social-science basics. Failing that, he’s just another charlatan peddling gee-whiz insights to a readership that’s not as dumb as he thinks.
Cards on the table: earlier in the post, Mitter indicates he doesn’t think much of Foreign Policy bloggers either, so I’m pretty sure he won’t think much of my own musings here. And I understand Mitter’s anger about a misleading map coming from an outlet that generates a lot of eyeballs. That said, his critique is off-base for two reasons.
First, in this instance, the primary fault lies not with foreign policy bloggers, but with academics. It’s not like Fisher commissioned a bogus survey and then wrote up the findings in a misleading manner. Rather, he relied on a survey that goes back three decades and has been cited pretty widely in the academic literature. He got to that survey via an academic article that got through the peer-review process. Almost all journalists not in possession of a Ph.D., going through that route, would have taken the data as gospel. It’s not clear to me why Mitter thinks a full-blown foreign correspondent would be better versed in the "social science basics." Would Mitter have expected, say, Ryan Avent or Matthew Yglesias to have ferreted out Reinhart and Rogoff’s Excel error, for example? I’m all for better education in the ways of statistics and social science methodology in the foreign affairs community, but methinks Mitter is setting the bar extraordinarily high here.
Second, the blog ecosystem "worked" in this particular case. Fisher posted something, a bunch of social scientists looked at the post and found something problematic, and lo and behold, errors in the data were discovered and publicized. As I’ve opined before, one of the signal purposes of blogging is to critique those higher up in the intellectual food chain. I understand that Mitter would prefer that the original error never take place. By its very nature, however, the peer review process for blogging takes place after publication — not before. That’s a bit messier than the academic route to publication — and, because Fisher has a larger megaphone, one could posit that with great traffic comes great responsibility. Still, I suspect that anyone who titles a post "The Cartography of Bullshit" probably wouldn’t want too heavy of an editorial hand to be placed on his own work.
At the heart of Mitter’s lament is his untested hypothesis that foreign affairs blogging has caused the decline in research and international reporting. This strikes me as more correlation than causation, however. Furthermore, it implies that they are substitutes when in fact they are complements. The source material for a lot of foreign affairs blogging is academic research and in-depth international reportage. If Mitter wants to see a better informed public, then there needs to be as much focus on the quality of the primary source material as on the quality of the transmission mechanism.
Am I missing anything?
UPDATE: Mitter has responded in part here, and at more length in a constructive comment to this post. Both are well worth reading, and put some more context into his original post. He’s getting to some interesting tensions about the nature of expertise and "publicity" in a changing media landscape that are worth mulling over before responding.