Common sense, mostly.
- By Charles Homans, a special correspondent for the New Republic and the former features editor of Foreign Policy.
On Sunday, the website WikiLeaks published more than 91,000 military documents from the war in Afghanistan, among them reports alleging that Pakistan’s top military intelligence service is aiding Taliban fighters. The Pakistani government has claimed that the documents are based on inaccurate field reports that neither it nor U.S. intelligence agencies take seriously. So how do intelligence analysts determine whether reports are credible?
With common sense, mostly. When they receive a piece of intelligence — whether it comes from a phone intercept or satellite imagery — analysts at the Central Intelligence Agency, Defense Intelligence Agency, and elsewhere subject it to more or less the same smell tests that police detectives and reporters use when trying to sort out a story: Is the information internally consistent? Is it consistent with what they’re hearing from other sources?
In the case of human intelligence, the source is put to the same scrutiny: Who provided the intelligence? What’s in it for them? What axe do they have to grind? (If they’re a mentally unstable con man, that’s a good thing to know.) And there’s what you might call the Bruce Willis Rule: If something sounds too much like the plot of an action movie — say, Osama bin Laden buying missiles from North Korea — you probably want to get a second opinion.
Of course, the fact that vetting intelligence is largely intuitive doesn’t mean that it’s easy. What’s simple in theory is immensely difficult in practice: Analysts are deluged with information, some of it good, much of it bad or simply irrelevant, and virtually all of it ambiguous. "It’s like trying to put together a jigsaw puzzle," says former CIA officer and presidential adviser Bruce Riedel. "You maybe have 200 pieces of the puzzle. The first thing you don’t know is, is this a 500-piece or 1,000-piece puzzle? And then with the 200 pieces you have, maybe half of them don’t belong to this puzzle at all. They’re in the wrong box. And then every hour or so, someone comes along and dumps 10 more pieces on your desk — and nine of them aren’t even part of it."
There are rules of thumb, official and otherwise. For instance, analysts err on the side of caution with any piece of threat-related intelligence; even questionable chatter about a potential terrorist attack is sent up the agency ladder, with the appropriate caveats. And in general, analysts deal in probabilities more than they do flat-out assertions; there are few absolutes in spycraft.
Still, there are plenty of ways even an expert can go wrong. For instance, when a fresh bit of information contradicts what’s known already, it can be hard to distinguish outliers from genuinely new information — analysts, like journalists, can be seduced by a novel storyline or trapped by a familiar one. Consider American intelligence agencies’ assessment of Iran’s nuclear program: In 2007, the intelligence community issued a National Intelligence Estimate judging that Iran had given up its nuclear weapons program several years earlier, an assessment that in retrospect may have been overstated. Did the analysts give too much credence to new information they received simply because it was new? Conversely, U.S. intelligence agencies notoriously missed the coming collapse of the Soviet Union and Mikhail Gorbachev’s emergence as a reformer in part because they were caught up in their own narrative of the Soviets’ power and the venality of their leaders. It turned out that Gorbachev actually meant what he said.
There’s also the hazard of information taken out of context. Nothing gets thrown away in intelligence-gathering — even data and reports that are considered false or irrelevant upon arrival are filed away in vast databases in case in hindsight they turn out not to be. But that long-shot information can easily be dusted off and used piecemeal — willfully or accidentally — to support wildly incorrect conclusions. In 2002, the Defense Department created the Office of Special Plans, which produced a report — based on raw data gathered by the DIA, but not the agency’s own analysis of it — alleging that Saddam Hussein had close ties to al Qaeda and had developed weapons of mass destruction. This was news to the DIA’s own analysts and those in other agencies, who had long since judged most of the intelligence that the office used in the report as spurious, and indeed no such ties or weapons were ever found.
A directive issued by the director of national intelligence in 2007 put in writing the best practices underlying good intelligence analysis, in order to avoid these types of failures in the future. Agencies are now required to ask common-sense questions about their data, and to identify and assess the credibility of the people who gave it to them. In reality, however, intelligence veterans worry less about the kind of intelligence failures that led to Iraq than the kind that led to September 11; the problem is less bad analysis than good analysis that doesn’t get the right attention. Intelligence agencies are vast bureaucracies, and plenty of information falls through the cracks within and between them. Umar Farouk Abdulmutallab, the would-be Christmas Day bomber, nearly managed to pull off his airliner attack because the State Department reportedly failed to act on its own intelligence suggesting that he had visited terrorist training camps in Yemen, and never sounded the alarm about what it knew. Making sense of information from the field wasn’t the problem; getting it to the right people was.
Thanks to Pat Lang, Paul R. Pillar of the Georgetown Center for Peace and Security Studies, and Bruce Riedel of the Brookings Institution.