Why we need to ban fully autonomous weapons systems, before it's too late.
- By Bonnie Docherty. Bonnie Docherty is a senior researcher in the Arms Division of Human Rights Watch and a lecturer on law at Harvard Law School's International Human Rights Clinic. She is co-author of a new report by these two organizations: Losing Humanity: The Case against Killer Robots.
Imagine a mother who sees her children playing with toy guns as a military force approaches their village. Terrified, she sprints toward the scene, yelling at them to hurry home. A human soldier would recognize her fear and realize that her actions are harmless. A robot, unable to understand human intentions, would observe only figures, guns, and rapid movement. While the human soldier would probably hold fire, the robot might shoot the woman and her children.
Despite such obvious risks to civilians, militaries are already planning for a day when sentry robots stand guard at borders, ready to identify and kill intruders without an order from a human soldier. Unmanned aircraft, controlled only by pre-programmed algorithms, might carry up to 4,500 pounds of bombs that they could drop without real-time authorization from commanders.
While fully autonomous robot weapons don't exist yet, precursors have been deployed or are in development. So far, these precursors still rely on human decision making, but experts expect them to be able to choose targets and fire without human intervention within 20 to 30 years. Crude models could be available much sooner. If the move toward increased weapons autonomy continues, images of war from science fiction could become more science than fiction.
Replacing human soldiers with "killer robots" might save military lives, but at the cost of making war even more deadly for civilians. To preempt this situation, governments should adopt an international prohibition on the development, production, and use of fully autonomous weapons. These weapons should be stopped before they appear in national arsenals and in combat.
Fully autonomous weapons would be unable to comply with the basic principles of international humanitarian law — distinction, proportionality, and military necessity — because they would lack human qualities and judgment.
Distinguishing between combatant and noncombatant, a cornerstone of international humanitarian law, has already become increasingly difficult in wars in which insurgents blend in with the civilian population. In the absence of uniforms or clear battle lines, the only way to determine a person’s intentions is to interpret his or her conduct, making human judgment all the more important.
Killer robots also promise to remove another safeguard for civilians: human emotion. While proponents contend that fully autonomous weapons would be less likely to commit atrocities because fear and anger wouldn’t drive their actions, emotions are actually a powerful check on the killing of civilians. Human soldiers can show compassion for other humans. Robots can’t. In fact, from the perspective of a dictator, fully autonomous weapons would be the perfect tool of repression, removing the possibility that human soldiers might rebel if ordered to fire on their own people. Rather than irrational influences and obstacles to reason, emotions can be central to restraint in war.
Fully autonomous weapons would also cloud accountability in war. Without a human pulling the trigger, it’s not clear who would be responsible when such a weapon kills or injures a civilian, as is bound to happen. A commander is legally responsible for subordinates’ actions only if he or she fails to prevent or punish a foreseeable war crime. Since fully autonomous weapons would be, by definition, out of the control of their operators, it’s hard to see how the deploying commander could be held responsible. Meanwhile, the programmer and manufacturer would escape liability unless they intentionally designed or produced a flawed robot. This accountability gap would undercut the ability to deter violence against civilians and would also impede civilians’ ability to seek recourse for wrongs suffered.
Despite these humanitarian concerns, military policy documents, especially in the United States, reflect the move toward increasing autonomy of weapons systems. U.S. Department of Defense roadmaps for development in ground, air, and underwater systems all discuss full autonomy. According to a 2011 DOD roadmap for ground systems, for example, "There is an ongoing push to increase [unmanned ground vehicle] autonomy, with a current goal of ‘supervised autonomy,’ but with an ultimate goal of full autonomy." Other countries, including China, Germany, Israel, Russia, South Korea, and the United Kingdom, have also devoted attention and money to autonomous weapons.
The fully autonomous sentry robot and aircraft alluded to above are in fact based on real weapons systems. South Korea has deployed the SGR-1 sentry robot along the Demilitarized Zone with North Korea, and the United States is testing the X-47B aircraft, which is designed for combat. Both currently require human oversight, but they are paving the way to full autonomy. Militaries want fully autonomous weapons because they would reduce the need for manpower, which is expensive and increasingly hard to come by. Such weapons would also keep soldiers out of the line of fire and expedite response times. These are understandable objectives, but the cost for civilians would be too great.
Taking action against killer robots is a matter of urgency and humanity. Technology is alluring, and the more countries invest in it, the harder it is to persuade them to surrender it. But technology can also be dangerous. Fully autonomous weapons would lack human judgment and compassion, two of the most important safeguards for civilians in war. To preserve these safeguards, governments should ban fully autonomous weapons nationally and internationally. And they should do so now.