Amy Zegart

Failing History

In September 1962, CIA Director John McCone was honeymooning in Europe but couldn’t take his eyes off Cuba. Convinced the Soviets were secretly deploying nuclear missiles 90 miles from Florida, McCone repeatedly cabled Washington with his concerns. Nobody believed him. McCone was operating on a hunch, without solid evidence. When the CIA issued a Special National Intelligence Estimate about the Soviet arms buildup in Cuba on September 19, it disregarded the director’s views entirely. That estimate, like the previous three, concluded the Soviets would not dare put nuclear missiles in Cuba. A month later, U-2 spy planes snapped photographs that confirmed McCone’s worst fears and ushered in the most dangerous 13 days in history.

As the Cuban missile crisis turns 50 this month, it stands alone as the most studied event of the nuclear age. Academics have written so much about that eyeball-to-eyeball moment that there are articles about why we should stop writing articles about it. But there is at least one key lesson that still has not been learned. Generations of scholars and practitioners have insisted on calling the crisis an intelligence success when there is much more to be learned by calling it a failure.

The success narrative says the CIA discovered Soviet missiles before they became operational, enabling President Kennedy to seize the initiative and save the day. But that’s the end of the story. The beginning is just as important and more often forgotten: The CIA failed to anticipate the presence of Soviet missiles despite widespread knowledge that Soviet arms shipments were escalating dramatically that summer. All four intelligence estimates on Cuba published in 1962 had a reassuring quality, highlighting evidence that the Soviets sought to defend the island with conventional arms, not deploy nuclear missiles there. Instead of inoculating the Kennedy administration against the horrors of a possible Soviet missile surprise, the intelligence estimates made the surprise even more sudden and shocking.

It is comforting to think that we avoided nuclear Armageddon through artful diplomacy, steely nerves, and timely intelligence. But the truth is we got lucky. During the height of the crisis, a previously scheduled test simulating a missile attack from Cuba was mistakenly identified as a real incoming strike, giving the North American Air Defense Command just minutes to determine what to do. At a 2002 missile crisis anniversary conference (yes, such things exist), scholars learned for the first time that one Soviet submarine captain actually did order preparations to launch a nuclear-tipped torpedo off the U.S. coast on October 27. Were it not for a man named Vasili Arkhipov, who convinced the captain to wait for further instructions from Moscow even as they were being bombarded by U.S. Navy depth charges and running out of air, events could easily have taken a tragic turn. Other terrifying examples abound, showing just how close to the edge of disaster we really came.

Calling something a success or a failure is not just an exercise in tweedy semantics. It shifts the focus from "what went right" to "what went so wrong." And what went wrong 50 years ago is still going wrong today; two lingering questions from 1962 suggest the silent but deadly effects of organizational pathologies in intelligence.

1. Why did analysts miss the signals of Khrushchev’s true intentions?

Sherman Kent, who led the CIA’s estimating shop during the crisis, argued that intelligence estimates missed the mark mostly because Khrushchev was nutty. "There is no blinking the fact that we came down on the wrong side," he admitted in 1964. But Kent added, "no estimating process can be expected to divine exactly when the enemy is about to make a dramatically wrong decision." In other words, let’s blame Khrushchev and hope for more predictable adversaries in the future.

The more important and overlooked lesson here is that the structure of the U.S. intelligence system made a tough job nearly impossible. Although the CIA was created in 1947 to prevent another Pearl Harbor, the agency has never really been central. Intelligence agencies in the State, War, Navy, and Justice departments hobbled the CIA from its earliest days to protect their own turf. As a result, in 1962 intelligence reporting and analysis about Cuba was handled by half a dozen agencies with different missions, specialties, incentives, security clearance levels, and access to information, and with no common boss short of the president who had the power to knock bureaucratic heads together. In this bureaucratic jungle, signals of Khrushchev’s true intentions — and there were several — got dispersed and isolated instead of consolidated and amplified to sound the alarm.

Sound familiar? Before 9/11, this same fragmentation kept U.S. intelligence agencies from seizing 23 different opportunities to disrupt the terrorist plot. In each instance, someone in an intelligence agency noticed something important — a string of jihadist flight school students in Arizona, a suspicious extremist at a Minnesota flight school, two suspected al Qaeda operatives with U.S. visas in their passports. These and other signals were not drowned out by all the noise. They were found, an incredible feat. And then, just as incredibly, each signal got lost in the bureaucratic sprawl.

2. Why, despite new evidence of a dramatically escalating Soviet buildup, did intelligence analysts continue to draw the same old conclusions?

In August and September 1962, intelligence showed a dramatic uptick in Soviet personnel and weapons deployments to Cuba. Nevertheless, the September 19 intelligence estimate concluded nothing had changed. The Soviets were ramping up all right, but to defend Cuba.

Sherman Kent took a lot of heat for that estimate. Nearly all of it centered on "mirror imaging," the tendency for analysts to assume an enemy will behave as they would. For psychologists, cognitive limits in the Cuban missile crisis have been the gift that keeps on giving. But I am convinced that organizational pressures were also at work and offer new, important lessons for today.

The thing to know about National Intelligence Estimates is that they are collective products. No single person or agency writes them. Instead, estimates require intense negotiation among many agencies to reach consensus, causing the entire process to tilt toward consistency. Once a judgment is made, changing it later becomes more difficult. Why? Because consistency is what policymakers expect. They don’t need to be convinced the world looks the same today as it did last month. They do need to be convinced the world looks different. Consistency is a given, but inconsistency needs to be explained, justified, and defended. Changing a judgment means convincing every agency in the process that what it said or assessed or agreed to the last time should be modified or discarded this time. Generating interagency consensus on a new estimate that says "We have changed our collective minds" is invariably harder than producing a report that says "Once again, we agree with what we wrote last time."

This tilt toward consistency helps explain not only the September 19, 1962, Cuba estimate, but the now infamous 2002 Iraq WMD estimate. Both estimates reinforced earlier judgments even though the available intelligence had changed significantly over time. In Cuba, intelligence was accumulating fast, while in Iraq intelligence had been drying up for years. Yet in both cases, the past had a firm grip on the present. The Cuba estimate did nothing with more information, and the Iraq estimate made more out of nothing, doubling down on earlier judgments and stale evidence that Saddam had a hidden WMD program. Both estimates also downplayed internal disagreements — in the Cuba case, by not taking the CIA director’s hypothesis seriously, and in the Iraq case, by relegating State and Energy Department dissents to footnotes. In the end, both estimates were dead wrong. Invisible pressures toward consistency and consensus help explain why.

The Cuban missile crisis may be over, but it is not past. Learning lessons from history starts with calling a failure a failure.