From Stuxnet to biobombs, the future of war by other means.
- By David E. Hoffman
David E. Hoffman covered foreign affairs, national politics, and economics, and served as an editor at the Washington Post for 27 years.
He was a White House correspondent during the Reagan years and the presidency of George H. W. Bush, and covered the State Department when James A. Baker III was secretary. He was bureau chief in Jerusalem at the time of the 1993 Oslo peace accords, and served six years as Moscow bureau chief, covering the tumultuous Yeltsin era. On returning to Washington in 2001, he became foreign editor and then, in 2005, assistant managing editor for foreign news.
Largely unseen by the world, two dangerous germs homed in on their targets in the spring and early summer of 2009. One was made by man to infect computers. The other was made by nature, and could infect man.
The man-made virus could invade a computer running Windows, replicate itself, wreck an industrial process, hide from human operators, and evade anti-virus programs. The natural pathogen could invade human cells, hijack them to replicate billions of copies of itself, and evade the body’s immune system.
The man-made weapon was Stuxnet, a mysterious piece of computer malware that first appeared in 2009 and was identified more than a year later by Ralph Langner, a Hamburg-based computer security expert, as a worm designed to sabotage Iran’s nuclear-enrichment facilities. The natural pathogen was the swine flu virus, which first appeared in Mexico City in March 2009 and touched off a global pandemic.
In the physical world, they have nothing in common. Stuxnet is computer code, bits of binary electronic data. The swine flu virus is a biological organism, a unique remix of genes from older influenza viruses. But they share one fundamental characteristic: They spread themselves and attack before their targets know what is happening. And in that way, they offer a glimpse of a rapidly evolving class of dangerous threats that former U.S. Navy Secretary Richard Danzig once described as instruments of “nonexplosive warfare.”
When Danzig first raised the concept in 1998, the Internet bubble was mushrooming, the terrorist cult Aum Shinrikyo had attacked the Tokyo subway with sarin gas, and there were fresh disclosures about the vast, illicit biological-weapons program built by the Soviet Union. What has happened since then? Cyberattacks have grown in intensity and sophistication. The technology for manipulating biological organisms is advancing rapidly. But these potentially anonymous weapons continue to perplex and confound our thinking about the future of war and terrorism.
Both cyber and bio threats are embedded in great leaps of technological progress that we would not want to give up, enabling rapid communications, dramatic productivity gains, new drugs and vaccines, richer harvests, and more. But both can also be used to harm and destroy. And both pose a particularly difficult strategic quandary: A hallmark of cyber and bio attacks is their ability to defy deterrence and elude defenses.
Think of it this way: The most sophisticated cyberattacks, like Stuxnet, rarely leave clear fingerprints; bioweapons, too, are famously difficult to trace back to a perpetrator. But the concept of deterrence depends on the threat of certain retaliation that would cause a rational attacker to think twice. So if the attacker can’t be found, then the certainty of retaliation dissolves, and deterrence might not be possible.
What would a president of the United States say to the country if thousands of people were dying from a disease or trapped in a massive blackout and he did not know who caused it? A ballistic missile leaves a trajectory that can indicate its origins. An airline hijacker might be caught on video or leave behind a ticket or other telltale clue to his identity. When someone is shot with a weapon, the bullet and firearm can be traced. Not so for many cyber and bio threats.
Moreover, as Danzig pointed out, armies are of little use against such dangers, and neither the production nor delivery of such weapons requires large, expensive systems. They are accessible to small groups or individuals, and can hide under the radar.
So how to think about this? Recently, the Pentagon commissioned one of its most prestigious research advisory groups, JASON, to study the science of cybersecurity. One of the panel’s recommendations for dealing with threats: Draw lessons from biology and the functioning of the human body’s immune system. Part of that system is adaptive: when it encounters a dangerous pathogen, it can resist the invader even if it has never seen the agent before. What computers might need to counter this new warfare is something similar, a “learning algorithm” that would allow them to adapt and resist when a bug like Stuxnet comes sneaking around — as it surely will.
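To make the analogy concrete, here is a minimal, purely illustrative sketch of such a “learning algorithm”: a detector that learns a baseline of normal activity (say, message sizes on an industrial network) and flags sharp deviations without needing a signature of any specific attack. The class name, feature, and threshold are hypothetical, not drawn from the JASON study or any real product.

```python
from statistics import mean, stdev

class AdaptiveDetector:
    """Toy 'immune system' detector: learns what normal activity
    looks like, then flags anything that deviates sharply from it,
    even if it has never seen that specific attack before."""

    def __init__(self, threshold=3.0):
        self.threshold = threshold  # z-score cutoff (hypothetical)
        self.baseline = []

    def learn(self, observation):
        # Build a picture of "self" from known-good activity.
        self.baseline.append(observation)

    def is_anomalous(self, observation):
        # Flag observations far outside the learned baseline.
        mu, sigma = mean(self.baseline), stdev(self.baseline)
        if sigma == 0:
            return observation != mu
        return abs(observation - mu) / sigma > self.threshold

detector = AdaptiveDetector()
for size in [100, 102, 98, 101, 99, 103, 97]:  # normal message sizes
    detector.learn(size)

print(detector.is_anomalous(100))   # typical traffic -> False
print(detector.is_anomalous(5000))  # never-seen spike -> True
```

The point of the design, as with the immune system, is that nothing in the detector names Stuxnet or any other specific threat; it only knows what normal looks like.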
ON FEB. 8, 2000, Joshua Lederberg, one of the founders of American microbiology and a Nobel Prize laureate, spoke at a Rand Corp. conference on bioterrorism and homeland defense in Santa Monica, California. Lederberg, a geneticist who had been concerned for years about the United States’ vulnerability to the use of biological agents in war and terrorism, told the group there would be no warning of such an attack, no big boom to alert everyone.
“We perhaps put too much stress on an acute incident, an explosion, a compelling notice that something really awful has happened,” Lederberg said. “No shrewd user” of a biological weapon “is going to give you that opportunity,” he warned. “The ‘incident’ will be people accumulating illness, disease, death.”
Within two years, it happened. In the fall of 2001, at least five envelopes containing anthrax bacteria were mailed to two senators in Washington and media organizations in New York City and Boca Raton, Florida. At least 22 people contracted anthrax as a result; five died. Ten thousand people were given antibiotics as a precaution. With just five envelopes, 35 postal facilities and commercial mailrooms were contaminated. The bacteria were found in seven buildings on Capitol Hill. The U.S. Postal Service closed two heavily contaminated processing centers; one in Washington did not open for two years, and one in New Jersey did not open for four years. More than 1.8 million letters, packages, and magazines were stuck in quarantine at the two centers, which cost roughly $200 million to clean up.
After the attack, the FBI and the U.S. Postal Inspection Service set up a task force to investigate who had done it. In the seven years that followed, more than 10,000 witnesses were interviewed, 5,750 grand jury subpoenas issued, and 6,000 items of evidence collected. In 2007, the FBI determined that the anthrax originated from a batch created and maintained by Bruce E. Ivins, a researcher at the U.S. Army’s biodefense laboratory at Fort Detrick, Maryland. Aware that he was under investigation, Ivins committed suicide in July 2008, leaving open the issue of his possible role and motives. There is still some uncertainty about the FBI’s microbial forensics, now under review by a committee of the National Academy of Sciences. Regardless, the investigation showed how hard it is to crack such a case.
Amy E. Smithson, a senior fellow at the James Martin Center for Nonproliferation Studies of the Monterey Institute of International Studies, has attempted to investigate and analyze how decision-makers would react to a future biological attack. “The pressures to finger the bad guy are going to be tremendous,” Smithson told me. Last year, Smithson assembled three teams of people for simulations of how high-level decision-makers might react. The groups were told they were playing the National Security Council, sitting in the White House Situation Room during the opening of a hypothetical G-8 summit in San Francisco, when a detector signaled the presence of a pathogen, Burkholderia pseudomallei, a bacterium that causes the disease melioidosis, which can be lethal if inhaled. The teams had been given several briefings on microbial forensics and the available intelligence, but still found themselves unsure how to untangle the evidence and how to respond.
Was the pathogen intended to harm the world leaders, or was it just a dispersal into the air, intended to shock? “They were massively frustrated at what microbial forensics and intelligence didn’t tell them,” Smithson said. “The effort to pinpoint a perpetrator is bound to confound, and the detection systems are not likely to deliver as much data as fast or as clearly as the policymakers want.”
So, the conundrum is clear: As Danzig put it a decade ago, “With nonexplosive weapons it may be difficult to tell if an incident is an act of war, the deed of a small terrorist group, a simple crime, or a natural occurrence.”
COULD SUCH AN ATTACK REALLY HAPPEN? In the field of biology, much of the debate has centered on the capabilities and intentions of terrorists. While some diseases occur easily in nature and are highly contagious, others require sophisticated processing for use as a weapon, probably well beyond the capability of today’s terrorist groups, which in the last decade have preferred explosive weapons — truck bombs, duffel bags filled with dynamite, exploding airplanes, and old-fashioned guns. By contrast, if the FBI is correct, the anthrax letters were sent by a skilled worker in a sophisticated, well-funded American military laboratory, not someone working out of a safe house or a cave in the Hindu Kush.
New alarms about bioterrorism were sounded in December 2008 by a congressionally mandated commission on weapons of mass destruction, headed by former Senators Bob Graham and Jim Talent. Their report, “World at Risk,” concluded that “terrorists are more likely to be able to obtain and use a biological weapon than a nuclear weapon.” No terrorist group currently has the ability to carry out a mass-casualty attack using pathogens, the panel reported — weaponizing pathogens and disseminating them in the air is extremely difficult. But, they warned, “the United States should be less concerned that terrorists will become biologists and far more concerned that biologists will become terrorists.” A group of U.S. scientists, however, responded that the commission had exaggerated the threat and that fears of bioterrorism were diverting resources from urgent public health needs for naturally occurring diseases, which have caused far more deaths.
One thing is certain: The technology for probing and manipulating life at the genetic level is accelerating. Advances in sequencing — plotting the genetic blueprint of an organism — have been particularly rapid, leading to great benefits in public health, medicine, and other fields. When swine flu was discovered in two children in Southern California in April 2009, sequencing helped identify it rather quickly as a new type of influenza with genes from pigs as well as birds and humans. That was critical information for launching a response to a looming pandemic. The machines for sequencing, once the size of a mainframe computer, are becoming smaller and cheaper every year.
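The kind of comparison that revealed the swine flu's mixed ancestry can be caricatured in a few lines of code: break a sampled gene segment into short substrings ("k-mers") and ask which reference sequence it shares the most with. This is a deliberately crude sketch; the sequences and labels below are invented for illustration, and real pipelines use far more sophisticated alignment tools.

```python
def kmers(seq, k=3):
    """All overlapping substrings of length k in a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def closest_origin(segment, references, k=3):
    """Return the reference whose k-mer set overlaps the sampled
    segment the most -- a crude stand-in for real alignment tools."""
    target = kmers(segment, k)
    return max(references,
               key=lambda name: len(target & kmers(references[name], k)))

references = {  # invented toy sequences, not real influenza genes
    "swine": "ACGTACGGTCA",
    "avian": "TTGCAATTGGC",
    "human": "CGCGTATACGC",
}
sample_segment = "ACGTACGTTCA"
print(closest_origin(sample_segment, references))  # -> swine
```

Run segment by segment across a genome, this sort of comparison is what lets investigators say that one gene looks porcine while another looks avian or human.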
The obvious question is whether states or terrorists could exploit this technology for malevolent ends. The worry has come up most recently with the rise of synthetic biology, a relatively new field using engineering techniques to create new biological parts, devices, or systems, or redesign existing ones. A U.S. presidential commission concluded last year that synthetic biology should be watched for risks, but did not see the need for more controls. Yet. The panel said that no one had so far created synthetic life, only modified existing, natural hosts. But the inquiry itself highlighted the rapid pace of change in manipulating biology. Will rogue scientists eventually learn how to use the same techniques for evil?
HOSTILE CYBERATTACKS have long since left the realm of the theoretical. The Pentagon has said it is hit by “myriad” attacks every day on its 15,000 separate computer networks around the world. Dennis Blair, at the time the director of national intelligence, told Congress in February 2010 that the United States’ critical infrastructure — power grids, information networks, and the like — is “severely threatened.” He didn’t provide statistics but added, “Malicious cyberactivity is occurring on an unprecedented scale with extraordinary sophistication.”
Cyberattacks come in different flavors: exploitations to steal money or data; disruptions, such as distributed denial-of-service attacks aimed at overloading or paralyzing a website; and espionage. Some of the most sophisticated cyberattacks are multistage, in which a piece of malware penetrates a computer to use it as a platform for attacking yet another machine. These can be among the most difficult to trace, and Stuxnet is the most impressive example yet seen. The authors of a dossier produced by Symantec, the anti-virus company, called Stuxnet “an incredibly large and complex threat.” It knew where it was going and how to get there.
The creator of Stuxnet is still unknown, but with the sophistication of the code and the heavy amount of insider knowledge required, all signs point to a state or group of states. The New York Times reported in January that it might have been the result of a collaboration between the United States and Israel, the goal likely being to sabotage Iran’s industrial centrifuges, which are used to enrich uranium. It could slow the machines down or speed them up — enough to cause subtle, but crippling, mechanical failures.
What’s clear so far is that Stuxnet was written to attack an industrial control system, such as those used for gas pipelines and power plants. It was probably first brought into a network on a removable flash drive, but once inside, it could replicate itself over and over again, each time carrying the payload that would do the dirty work. The worm was designed for stealth. Injected into a computer or network, Stuxnet could sidestep anti-virus programs found on Windows computers, conceal itself on removable drives so that users would not know their drives were infected, and hide from the operators of the industrial equipment. Langner, the Hamburg computer security expert, said Stuxnet works like a sophisticated bank robbery. “During the heist, the observation camera is fed with unsuspicious footage, keeping the guards happy,” he wrote.
The Institute for Science and International Security (ISIS), which has closely monitored the Iranian nuclear effort, reported that in late 2009 or early 2010, Iran decommissioned and replaced about 1,000 centrifuges in its uranium-enrichment plant at Natanz. If the goal of Stuxnet was to “set back Iran’s progress” while making detection of the malware difficult, an ISIS report stated, “it may have succeeded, at least for a while.”
But there are risks of blowback. Langner warns that such malware can proliferate in unexpected ways: “Stuxnet’s attack code, available on the Internet, provides an excellent blueprint and jump-start for developing a new generation of cyber warfare weapons.” He added, “Unlike bombs, missiles, and guns, cyber weapons can be copied. The proliferation of cyber weapons cannot be controlled. Stuxnet-inspired weapons and weapon technology will soon be in the hands of rogue nation states, terrorists, organized crime, and legions of leisure hackers.”
Industrial control systems that were the target of Stuxnet are spread throughout the world and vulnerable to such attacks. In one 11-year-old Australian case, a disenchanted employee of the company that set up the control system at a sewage plant later decided to sabotage it. From his laptop, the worker ordered it to spill 211,337 gallons of raw sewage, and the control system obeyed — polluting parks, rivers, and the grounds of a hotel, killing marine life and turning a creek’s water black.
But that attack was rare in another way: It was easy enough to identify the attacker. Stuxnet has been spotted on computer networks for nearly two years, but the world’s top computer security experts have yet to pinpoint exactly who created it, or why.
WHAT CAN BE DONE to prevent catastrophe from nonexplosive warfare? The old arms-control remedies of the Atomic Age may not work. States like China and Russia now encourage groups of freelance hackers to do their dirty work, allowing plausible deniability. It is not clear that treaties could do much to stop them. According to Langner, “such treaties won’t be countersigned by rogue nation states, terrorists, organized crime, and hackers. Yet all of these will be able to possess and use such weapons soon.”
A major lesson can be drawn from the 1972 global treaty banning germ warfare, which lacked an effective enforcement mechanism from the start and failed to prevent the Soviet Union, South Africa, and Iraq from working on clandestine programs. There are suspicions that Iran, North Korea, and Syria might be harboring germ-warfare research today. The biological-weapons treaty has only 163 signatories, compared with 189 for the Nuclear Non-Proliferation Treaty and 188 for the Chemical Weapons Convention. Two successive U.S. administrations have concluded that biotechnology advances are occurring so swiftly that they probably cannot be policed through an updated legal enforcement protocol in the treaty.
Instead, President Barack Obama in 2009 called for taking alternative measures to avoid risk, such as helping other countries fight infectious disease while keeping an eye out for misuse of biology. Additional attempts are being made in the United States to improve monitoring of research by scientists, companies, and government, but much of it is still voluntary; this is not going to stop a dedicated rogue actor.
As for cyberattacks? Whatever the evidence of secret U.S. involvement in offensive cyberweapons, retaliation in kind seems probable. And other countries are almost certain to ask: If the U.S. program includes offensive operations, why should we refrain? A new arms race beckons.
And in this new arms race, there is no road map for disarmament or deterrence. In a shadowy, unaccountable world, we have not yet learned how to name the bad guys — never mind stop them.