- By Peter Feaver. Peter D. Feaver is a professor of political science and public policy and Bass Fellow at Duke University, and director of the Triangle Institute for Security Studies and the Duke Program in American Grand Strategy. He is co-editor of Elephants in the Room.
I started out in this business looking at the command and control of nuclear weapons in the United States. This was a hot topic in the late Cold War because many experts believed that the likeliest way the superpower rivalry would end in catastrophe would be through some sort of accidental war arising out of a series of improbable but possible command-and-control failures, as distinct from a premeditated, bolt-from-the-blue attack. By the late 1980s and early 1990s, enough material about near accidents from the earlier days of the Cold War had emerged to give many people reason to look skeptically at the quality and reliability of the U.S. command-and-control system.
The collapse of the Soviet Union and the associated collapse of the Soviet military-industrial complex shifted the focus of concern from the United States onto the former Soviet Union. Whatever problems there might be in the U.S. system, it was obvious that the problems in the former Soviet nuclear command-and-control system were more serious and more urgent. One of the great bipartisan successes of the post-Cold War era, one that many experts would have bet would not succeed as well as it did, was the effort to establish a reasonable threshold of safety and security in the former Soviet arsenal. There is still work to be done there, but we are in a vastly better place than we were circa 1992.
Unfortunately, there is mounting evidence that the same cannot be said about the U.S. arsenal. There has been an alarming spate of stories about serious deficiencies in the command-and-control system set up to preserve the safety and security of the U.S. nuclear arsenal: an out-of-control Air Force general partying in Moscow, a senior nuclear commander with a gambling problem, launch control officers systematically violating safety rules, launch control officers systematically cheating on certification tests, and launch control officers with a pervasive drug problem. These stories could have been ripped from the headlines of the collapsing Soviet Union, but they are all from this past year and involve the U.S. nuclear command-and-control system.
The bad stories have accumulated to the point where Defense Secretary Chuck Hagel has ordered a top-to-bottom review. This is welcome and perhaps even overdue.
The nuclear command-and-control system consists of hardware, software, and wetware. Hardware involves the technical safety and security measures, such as coded locks and other devices that work to thwart an unauthorized or accidental detonation. Software involves the administrative procedures and rules, such as the two-man rule or code-management systems, that specify how the hardware will be used. Wetware involves the professionalism and reliability of the men and women who are administering the "software" and are custodians of the "hardware."
A rule of thumb is that hardware is trumped by software and software is trumped by wetware. The launching mechanisms can be protected behind an impregnable fortress, but if there are no administrative rules about keeping the doors locked, the expensive hardware won’t protect you. And you can have carefully drafted rules that provide maximum security, but if the human operators refuse to obey the rules and get away with flouting them, then the rules don’t provide real protection.
The string of horrifying stories may just be a coincidence, but it sure looks like it points to a wetware problem. The senior nuclear commanders have assured the defense secretary that there is no systemic wetware problem. Proving that is the case will be the vital mission of Hagel's review panel. And if it is not the case, figuring out how to make it so will be one of the highest national security priorities Hagel, and the country, will face this year.