A Liberal Case for Drones
Why human rights advocates should stop worrying about the phantom fear of autonomy.
In a press conference on Tuesday, Attorney General Eric Holder was asked what he planned to do to increase the Obama administration’s transparency with regard to the drones program. "We are in the process of speaking to that," Holder said. "We have a roll-out that will be happening relatively soon."
Due to the program’s excessive secrecy, few solid details are available to the public. Yet, as new technologies come online — on Tuesday, the Navy launched an unmanned stealth jet from an aircraft carrier — new concerns are emerging about how the U.S. government may use drones.
The X-47B, which can fly without human input, is a harbinger of what’s to come. A growing number of international human rights organizations are concerned about the development of lethal autonomy — that is, drones that can select and fire on people without human intervention. But as the outcry over this still-hypothetical technology grows, it’s worth asking: might the opposite be true? Could autonomous drones actually better safeguard human rights?
Last month, Christof Heyns, the U.N. special rapporteur on extrajudicial, summary, or arbitrary executions, released a major report calling for a pause in developing autonomous weapons and for the creation of a new international legal regime governing future development and use. Heyns asked whether this technology can comply with human rights law and whether it introduces unacceptable risk into combat.
The U.N. report is joined by a similar report, issued last year by Human Rights Watch. HRW argues that autonomous weapons take humanity out of conflict, creating a future of immoral killing and increased hardship to civilians. HRW calls for a categorical ban on all development of lethal autonomy in robotics. HRW is also spearheading a new global campaign to forbid the development of lethal autonomy.
That is not as simple a task as it sounds. "Completely banning autonomous weapons would be extremely difficult," Armin Krishnan, a political scientist at the University of Texas at El Paso who studies technology and warfare, told me. "Autonomy exists on a spectrum."
If it’s unclear where to draw the line on autonomy, then maybe intent is a better way to think about such systems. Lethally autonomous defensive weapons, such as the ship-mounted Phalanx gun that automatically fires at incoming missiles, already decide on their own when to shoot. Dodaam Systems, a South Korean company, even manufactures a machine gun that can automatically track and kill a person from two miles away. These stationary, defensive systems have not sparked the outcry autonomous drones have. "Offensive systems, which actively seek out targets to kill, are a different moral category," Krishnan explains.
Yet many experts are uncertain whether autonomous attack weapons are necessarily a bad thing, either. "Can we program drones well? I’m not sure if we can trust the software or not," Samuel Liles, a Purdue professor specializing in transnational cyberthreats and cyberforensics, wrote in an email. "We trust software with less rigor to fly airliners all the time."
The judgment and morality of individual humans certainly isn’t perfect. Human decision-making is responsible for some of the worst atrocities of recent conflicts. Just on the American side, massacres — like when Marines killed 24 unarmed civilians in Haditha or Marine special forces shot 19 unarmed civilians in the back in Jalalabad — speak to the fragility of human judgment about using force. Despite decades of effort to make soldiers less likely to commit atrocities, such killings still happen with alarming regularity.
Yet, machines are not given the same leeway: Rights groups want either perfect performance from machines or a total ban on them.
"If programmed with strict criteria, a drone could be more selective than a human," Krishnan explains. "But that could also introduce a vulnerability if an insurgent learns how to circumvent that criteria."
An accounting of how robots currently work is missing from much of the advocacy against drones and autonomy. In a recent article for the United Nations Association, Noel Sharkey, a high-profile critic of drones and a professor of artificial intelligence and robotics at the University of Sheffield, argued forcefully that machines cannot "distinguish between civilians and combatants," apply the Geneva Conventions, or determine proportionate use of force.
It is a curious complaint: A human being did not distinguish between civilians and combatants, apply the Geneva Conventions, or determine an appropriate use of force during the infamous 2007 "Collateral Murder" incident in Iraq, when American helicopter pilots mistook a Reuters camera crew for insurgents and fired on them and a civilian van that came to offer medical assistance.
Humans get tired, they miss important information, or they just have a bad day. Without machines making any decisions to fire weapons, humans are already shooting missiles into crowds of people they cannot identify in so-called signature strikes. When a drone is used in such a strike, it means an operator has identified some combination of traits — a "signature" — that makes a target acceptable to engage. These strikes are arguably the most problematic use of drones: the U.S. government keeps the criteria tightly classified and has announced that it will count all "military-aged males" killed as combatants unless proven otherwise. A machine could, conceivably, do it better.
"If a drones system is sophisticated enough, it could be less emotional, more selective, and able to provide force in a way that achieves a tactical objective with the least harm," Liles says. "A lethal autonomous robot can aim better, target better, select better, and in general be a better asset with the linked ISR [intelligence, surveillance, and reconnaissance] packages it can run."
In other words, a lethal autonomous drone could actually result in fewer casualties and less harm to civilians.
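To make that contrast concrete, here is a minimal, entirely hypothetical sketch of what explicit, machine-checkable engagement criteria might look like. Every field name and threshold below is invented for illustration and is not drawn from any real system; the point is only that rules written in code are explicit, loggable, and reviewable after the fact in a way an operator's informal "signature" is not.

```python
# A minimal, hypothetical sketch. Every field name and threshold is invented
# for illustration and does not describe any real targeting system.
from dataclasses import dataclass


@dataclass
class Observation:
    carrying_weapon: bool              # sensor-fused assessment (assumed input)
    positive_id_confidence: float      # 0.0 to 1.0, from linked ISR feeds
    civilians_within_blast_radius: int
    inside_declared_hostile_area: bool


def engagement_permitted(obs: Observation) -> bool:
    """Return True only if every hard criterion is met.

    Unlike an operator's informal "signature," each rule here is explicit
    and can be logged and reviewed after the fact.
    """
    if not obs.inside_declared_hostile_area:
        return False
    if not obs.carrying_weapon:
        return False
    if obs.positive_id_confidence < 0.95:  # arbitrary illustrative threshold
        return False
    if obs.civilians_within_blast_radius > 0:
        return False
    return True
```

Hard-coded checks like these are also exactly what Krishnan warns about: if an insurgent learns the criteria, they can be circumvented.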
That doesn’t mean machines will always be this way. Machine learning, a branch of artificial intelligence in which computers adapt to new data, poses a challenge if applied to drones. "I’m concerned with the development of self-programming," Krishnan says. "As a self-programming machine learns, it can become unpredictable."
Such a system doesn’t exist now and it won’t for the foreseeable future. Moreover, the U.S. government isn’t looking to develop complex behaviors in drones. A Pentagon directive published last year says, "Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force."
So, if problematic development of these types of weapons is already off the table, what is driving the outcry over lethal autonomy?
It’s difficult to escape the science fiction aspect to this debate. James Cameron’s Terminator franchise is a favorite image critics conjure up to illustrate their fears. Moreover, the concern seems rooted in a moral objection to the use of machines per se: that when a machine uses force, it is somehow more horrible, less legitimate, and less ethical than when a human uses force. It isn’t a complaint fully grounded in how machines, computers, and robots actually function.
The dangers posed by unpredictable, self-learning robots are very real. But that is only one way that drones would employ autonomy. In many cases, human rights would actually benefit from more autonomy — fewer mistakes, fewer misfires, and lower casualties overall.
And if something goes wrong, culpability can be more easily established. From a legal standpoint, countries cannot violate international human rights law or the laws of armed conflict, regardless of whether a drone has a human operator or not. But unlike the lengthy investigations, inquests, and trials required to unravel why a human made a bad decision, making that determination for a machine can be as simple as pulling the data from its black box. If an autonomous drone does something catastrophic or criminal, liability for those responsible should be firmly established.
The issue of blame is the trickiest one in the autonomy debate. Rather than throwing one’s hands in the air and demanding a ban, as rights groups have done, why not simply assign blame to those who employ such weapons? If an autonomous Reaper fires at a group of civilians, then the blame should start with the policymaker who ordered it deployed and end with the programmer who encoded the rules of engagement.
Making programmers, engineers, and policymakers legally liable for the autonomous weapons they deploy would break new ground in how accountability works in warfare. But it would also create incentives that make firing weapons less likely, rather than more — surely the end result so many rights groups want to achieve.
Joshua Foust is a Ph.D. student at the University of Colorado Boulder studying strategic communication.