Who needs experts?
Louis Menand has a glowing review in the New Yorker of Philip Tetlock’s latest opus, Expert Political Judgment: How Good Is It? How Can We Know?. Some highlights:
It is the somewhat gratifying lesson of Philip Tetlock's new book… that people who make prediction their business—people who appear as experts on television, get quoted in newspaper articles, advise governments and businesses, and participate in punditry roundtables—are no better than the rest of us. When they're wrong, they're rarely held accountable, and they rarely admit it, either. They insist that they were just off on timing, or blindsided by an improbable event, or almost right, or wrong for the right reasons. They have the same repertoire of self-justifications that everyone has, and are no more inclined than anyone else to revise their beliefs about the way the world works, or ought to work, just because they made a mistake. No one is paying you for your gratuitous opinions about other people, but the experts are being paid, and Tetlock claims that the better known and more frequently quoted they are, the less reliable their guesses about the future are likely to be. The accuracy of an expert's predictions actually has an inverse relationship to his or her self-confidence, renown, and, beyond a certain point, depth of knowledge. People who follow current events by reading the papers and newsmagazines regularly can guess what is likely to happen about as accurately as the specialists whom the papers quote. Our system of expertise is completely inside out: it rewards bad judgments over good ones….

Tetlock also found that specialists are not significantly more reliable than non-specialists in guessing what is going to happen in the region they study. Knowing a little might make someone a more reliable forecaster, but Tetlock found that knowing a lot can actually make a person less reliable. "We reach the point of diminishing marginal predictive returns for knowledge disconcertingly quickly," he reports. "In this age of academic hyperspecialization, there is no reason for supposing that contributors to top journals—distinguished political scientists, area study specialists, economists, and so on—are any better than journalists or attentive readers of the New York Times in 'reading' emerging situations." And the more famous the forecaster the more overblown the forecasts. "Experts in demand," Tetlock says, "were more overconfident than their colleagues who eked out existences far from the limelight."…

The experts' trouble in Tetlock's study is exactly the trouble that all human beings have: we fall in love with our hunches, and we really, really hate to be wrong….

The expert-prediction game is not much different. When television pundits make predictions, the more ingenious their forecasts the greater their cachet. An arresting new prediction means that the expert has discovered a set of interlocking causes that no one else has spotted, and that could lead to an outcome that the conventional wisdom is ignoring. On shows like "The McLaughlin Group," these experts never lose their reputations, or their jobs, because long shots are their business. More serious commentators differ from the pundits only in the degree of showmanship. These serious experts—the think tankers and area-studies professors—are not entirely out to entertain, but they are a little out to entertain, and both their status as experts and their appeal as performers require them to predict futures that are not obvious to the viewer. The producer of the show does not want you and me to sit there listening to an expert and thinking, I could have said that.
The expert also suffers from knowing too much: the more facts an expert has, the more information is available to be enlisted in support of his or her pet theories, and the more chains of causation he or she can find beguiling. This helps explain why specialists fail to outguess non-specialists. The odds tend to be with the obvious.
There are intriguing implications here for understanding world politics that deserve a post of their own, but suffice it to say that Tetlock's findings will probably warm the cockles of every political blogger out there.
Daniel W. Drezner is a professor of international politics at the Fletcher School of Law and Diplomacy at Tufts University and co-host of the Space the Nation podcast. Twitter: @dandrezner