Daniel W. Drezner
Just how good are foreign policy forecasters?
Philip Tetlock has a must-read review essay on political forecasting in the latest issue of The National Interest. Tetlock is the author of Expert Political Judgment, one of my all-time favorite books in political science.
Tetlock reviews books by three political prognosticators: Stratfor’s George Friedman (who has been mocked just a bit by your humble blogger), FP and Eurasia’s Ian Bremmer (who has been panned just a bit by your humble blogger) and Bruce Bueno de Mesquita (who was on your humble blogger’s dissertation committee and is therefore the source of much Good and Light in the world).
You’ll have to read Tetlock’s essay to get his assessment of all three books — but I do like this one-paragraph summary:
The authors are all entrepreneurial futurists, but each offers a strikingly distinctive approach to prediction. I organize these approaches under three headings: the superpundit model in which readers take it, more or less on faith, that the forecaster has a pipeline into the future not available to ordinary mortals (a category into which I place George Friedman’s The Next 100 Years); the technocratic-pluralism model in which the authors never get around to making falsifiable predictions of their own but do offer readers a pretty comprehensive survey of forecasting mistakes and an inventory of tools for avoiding them (a category into which I place Ian Bremmer and Preston Keat’s The Fat Tail); and the scientific-reductionist model in which the author embraces a particular theory from the social sciences and shows how, if you apply that theory thoughtfully to real-world contexts, you can derive surprisingly accurate forecasts (a category into which I place Bruce Bueno de Mesquita’s The Predictioneer’s Game).
What I found more intriguing was Tetlock’s formulation for how to use pundits:
The best thing I can say for the superpundit model is likely to annoy virtually all of that ilk: they look a lot better when we blend them into a superpundit composite. Aggregation helps. As financial journalist James Surowiecki stressed in his insightful book The Wisdom of Crowds, if you average the predictions of many pundits, that average will typically outperform the individual predictions of the pundits from whom the averages were derived. This might sound magical, but averaging works when two fairly easily satisfied conditions are met: (1) the experts are mostly wrong, but they are wrong in different ways that tend to cancel out when you average; (2) the experts are right about some things, but they are right in partly overlapping ways that are amplified by averaging. Averaging improves the signal-to-noise ratio in a very noisy world…. From this perspective, if you want to improve your odds, you are better-off betting not on George Friedman but rather on a basket of averaged-out predictions from a broad ideological portfolio of George Friedman–style pundits. Diversification helps.
I wonder if such an exercise would actually work. One of the accusations leveled against the foreign policy community is that because its members only talk to and read each other, they all generate the same blinkered analysis. I'm not sure that's true, but it would be worth conducting the experiment to see whether a Village of Pundits does a better job than a single pundit.
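The averaging claim Tetlock borrows from Surowiecki is easy to sanity-check with a toy simulation. The sketch below is purely illustrative, not a model of any real forecasting data: it assumes each pundit's forecast is the true value plus independent noise, which is exactly the error-cancellation condition the quoted passage describes.

```python
import random
import statistics

random.seed(0)

TRUE_VALUE = 3.0   # the (hypothetical) quantity being forecast
N_PUNDITS = 25     # size of the "basket" of pundits
N_ROUNDS = 1000    # independent forecasting rounds

individual_errors = []
crowd_errors = []

for _ in range(N_ROUNDS):
    # Each pundit is noisy in a different direction, so errors tend to
    # cancel when averaged -- condition (1) in the passage above.
    forecasts = [TRUE_VALUE + random.gauss(0, 1.0) for _ in range(N_PUNDITS)]
    crowd = statistics.mean(forecasts)
    individual_errors.append(
        statistics.mean(abs(f - TRUE_VALUE) for f in forecasts)
    )
    crowd_errors.append(abs(crowd - TRUE_VALUE))

print(f"avg individual error: {statistics.mean(individual_errors):.3f}")
print(f"avg crowd error:      {statistics.mean(crowd_errors):.3f}")
```

Under these assumptions the averaged "crowd" forecast comes out several times more accurate than the typical individual pundit; whether real pundits' errors are independent enough for this to hold, rather than correlated by the groupthink worry above, is exactly the open question.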