The Case for Cyberwarfare

Why the electronic wars of the future will actually save lives.

Koichi Kamoshida/Getty Images

According to an intriguing story in this week’s New York Times, the Obama administration decided not to use cyberwarfare against Libya, opting instead for a conventional attack on Muammar al-Qaddafi’s defense installations. Officials feared that it would set a precedent and invite other countries (think: China, Russia) to use similar means of attack in the future. As James Lewis, a cybersecurity expert at the Center for Strategic and International Studies, succinctly put it, "We don’t want to be the ones who break the glass on this new kind of warfare."

Senior officials such as Secretary of Defense Leon Panetta warn that the "next Pearl Harbor we confront could very well be a cyber attack." But what if cyberwarfare is not such a bad thing after all? What if it saves lives? The evidence so far suggests that cyberwarfare actually costs fewer lives than traditional warfare.

The prevailing view, however, holds that cyberwar is a terrifying prospect. The influential 2010 book Cyber War, for instance, co-authored by Richard A. Clarke, who was responsible for cybersecurity at the White House until 2003, paints a gloomy picture of potential future cyberattacks: cutting millions of people off from the electrical grid or, worse, as in the case of an attack on aviation control systems or a nuclear power plant, costing thousands of lives.

Yet the evidence of cyberwarfare, so far, reveals a very different picture. The cyberattack on Estonia in 2007 was the first to make major international headlines. But its damage was limited: The Distributed Denial of Service (DDoS) attack overburdened servers in Estonia and brought down several websites. Something similar happened in Georgia during the war in 2008. Such attacks could theoretically cost lives if they shut down emergency hotlines, for example. But they’re not the sort of thing that should keep us up at night.

The Stuxnet virus, on the other hand, was a very different animal. It infected computer systems and altered their code in a way that made it too risky to run the centrifuges used in Iran's nuclear facilities. Some experts estimate that Stuxnet set back Iran's nuclear development by several months, possibly years. And what's wrong with that? This particular cyberattack may have actually saved lives rather than cost them.

Consider similar situations in the past. Former Vice President Dick Cheney, for example, writes about one such incident in his new memoir. He describes the decision-making process that occurred as the United States considered whether to bomb a Syrian nuclear facility in 2007. Despite Israeli requests to do so, President George W. Bush decided to pursue a diplomatic rather than military option. So Israel took matters into its own hands. Cheney writes, "Under cover of darkness on September 6, 2007, Israeli F-15s crossed into Syrian airspace and within minutes were over the target at al-Kibar. Satellite photos afterward showed that the Israeli pilots hit their target perfectly."

Clarke writes about the same incident in his book, speculating that "Many North Korean workers had left the construction site six hours earlier … to the few Syrians and Koreans still on the site, there was a blinding flash, then a concussive sound wave, and then falling pieces of debris." Clarke's imaginative account probably owes more to fiction than to intelligence reporting. Yet it highlights an important point: Even though the attack was a perfect hit, a few people were probably still killed.

So is cyberwarfare a better alternative to traditional war? Not necessarily. Three conditions will determine whether cyberwarfare will actually reduce the human costs of war:

First, security improvements. If critical civilian infrastructure such as hospitals, nuclear power plants, and transportation control systems can be better protected, for instance by identifying and fixing vulnerabilities, isolating attacks, and creating backup mechanisms to restore targeted systems, then the probability that cyberwarfare can cause direct bodily harm drops significantly.

Second, norms governing the use of cyberwarfare. Will states, for instance, retaliate against a cyberattack with kinetic warfare? If a country responds with conventional weapons to, say, an adversary taking down its electrical grid, then all bets are off. But if strategic planners are able to work out a model of deterrence for the digital age, then we may all be safer for it. An open question is whether the availability of a less violent cyberattack option would actually lower the inhibition against launching an attack in the first place.

Third, nonstate actors. My argument applies only to interstate war, excluding violent conflict with nonstate actors such as terrorists. At present, though, that threat is considered minimal because of the resources and expertise needed to mount a sophisticated cyberattack like Stuxnet.

Cyberwarfare may well be how we fight the battles of the future. The evidence so far suggests, however, that a digital Pearl Harbor would cost fewer lives than the attack 70 years ago. It might not be pretty, but from a humanitarian point of view, that's good news.

Tim Maurer is a research associate in the technology and public policy program at the Center for Strategic and International Studies and a non-resident fellow at the Global Public Policy Institute in Berlin. David Weinstein is a graduate student at Georgetown University's School of Foreign Service in its Security Studies Program.