In Defense of Killer Robots

Hold on there, technophobe hippies. When it comes to “doing no harm,” robots are a hell of a lot better than humans.

A traffic robot cop on Triomphal Boulevard in Kinshasa, at the crossing of Asosa, Huileries, and Patrice Lumumba streets, Jan. 22, 2014. The humanoid robots, built by Congolese engineers at the Kinshasa Higher Institute of Applied Technique (ISTA), are equipped with cameras that record traffic flow and transmit it to a center where infractions can be analyzed. (Junior D. Kannah/AFP/Getty Images)

Robots just can’t catch a break. If we’re not upbraiding them for taking our jobs, we’re lambasting their alleged tendency to seize control of spaceships, computer systems, the Earth, the galaxy or the universe beyond. The Bad Robot has long been a staple of film and fiction, from HAL (“I’m sorry, Dave, I’m afraid I can’t do that”) to the Terminator, but recently, bad robots have migrated from the screen to the world of military ethics and human rights campaigns.

Specifically, a growing number of ethicists and rights advocates are calling for a global ban on the development, production, and use of fully autonomous weapons systems, which are, according to Human Rights Watch, “also” — and rather conveniently — “known as killer robots.” (Not to their mothers, I’m sure!)

The term does tend to have a chilling effect even upon those harboring a soft spot for R2-D2 and Wall-E. But someone has to stand up for killer robots, and it might as well be me.

So: I’m here to tell you that killer robots are getting a bad rap — and ethicists and rights advocates are being far too generous in their assumptions about human beings.

Let’s review the case against the robots. The core concern relates to military research into weapons systems that are “fully autonomous,” meaning that they can “select and engage targets without human intervention.” Today, even our most advanced weapons technologies still require humans in the loop. Thus, Predator drones can’t decide for themselves whom to kill: it takes a human being — often dozens of human beings in a complex command chain — to decide that it’s both legal and wise to launch missiles at a given target. In the not-too-distant future, though, this could change. Imagine robots programmed not only to detect and disarm roadside bombs but to track and fire upon individuals concealing or emplacing IEDs. Or imagine an unmanned aerial vehicle that can fire missiles when a computer determines that a given individual is behaving like a combatant, based on a pre-programmed set of criteria.

According to the Campaign to Stop Killer Robots, this would be bad, because a) killer robots might not have the ability to abide by the legal obligation to distinguish between combatants and civilians; and b) “Allowing life or death decisions to be made by machines crosses a fundamental moral line” and jeopardizes fundamental principles of “human dignity.”

Neither of these arguments makes much sense to me. Granted, the thought of an evil robot firing indiscriminately into a crowd is dismaying, as is the thought of a rogue robot, sparks flying from every rusting joint, going berserk and turning its futuristic super-weapons upon those it’s supposed to serve. But setting science fiction aside, real-life computers have a pretty good track record. When was the last time Siri staged a rebellion and began to systematically delete all your favorite videos, just to mess with you? When was the last time a passenger plane’s autopilot system got depressed and decided to plow into a mountain, notwithstanding human entreaties to remain airborne?

Arguably, computers will be far better than human beings at complying with international humanitarian law. Face it: we humans are fragile and panicky creatures, easily flustered by the fog of war. Our eyes face only one direction; our ears register only certain frequencies; our brains can process only so much information at a time. Loud noises make us jump, and fear floods our bodies with powerful chemicals that can temporarily distort our perceptions and judgment.

As a result, we make stupid mistakes in war, and we make them all the time. We misjudge distances; we forget instructions; we misconstrue gestures. We mistake cameras for weapons, shepherds for soldiers, friends for enemies, schools for barracks, and wedding parties for terrorist convoys.

In fact, we humans are fantastically bad at distinguishing between combatants and civilians — and even when we can tell the difference, we often make risk-averse calculations about necessity and proportionality, preferring dead civilians 8,000 miles away to dead comrades or compatriots. If the U.S. conflicts in Iraq and Afghanistan produced a surfeit of dead and mangled civilians, it’s not because of killer robots — it’s because of fallible human decision-making.

Computers, in contrast, are excellent in crisis and combat situations. They don’t get mad, they don’t get scared, and they don’t act out of sentimentality. They’re exceptionally good at processing vast amounts of information in a short time and rapidly applying appropriate decision rules. They’re not perfect, but they’re a good deal less flawed than those of us cursed with organic circuitry.

We assure ourselves that we humans have special qualities no machine can replicate: we have “judgment” and “intuition,” for instance. Maybe, but computers often seem to have better judgment. This has already been demonstrated in dozens of different domains, from aviation to anesthesiology. Computers are better than humans at distinguishing between genuine and faked expressions of pain; Google’s driverless cars are better at avoiding accidents than cars controlled by humans. Given a choice between relying on a human to comply with international humanitarian law and relying on a well-designed, well-programmed robot, I’ll take my chances with the killer robot any day.

Opponents of autonomous weapons ask whether there’s a legal and ethical obligation to refrain from letting machines make decisions about who should live and who should die. If it turns out, as it may, that machines are better than people at applying the principles of international humanitarian law, we should be asking an entirely different question: Might there be a legal and ethical obligation to use “killer robots” in lieu of — well, “killer humans”?

Confronted with arguments about the technological superiority of computers over human brains, those opposed to the development of autonomous weapons systems argue that such consequentialist reasoning is insufficient. Ultimately, as a 2014 joint report by Human Rights Watch and Harvard’s International Human Rights Clinic argues, it would simply be “morally wrong” to give machines the power to “decide” who lives and who dies: “As inanimate machines, fully autonomous weapons could truly comprehend neither the value of individual life nor the significance of its loss. Allowing them to make determinations to take life away would thus conflict with the principle of [human] dignity.”

I suppose the idea here is that any self-respecting person would naturally prefer death at the hands of a fellow member of the human species — someone capable of feeling “compassion” and “mercy” — to death inflicted by a cold, unfeeling machine.

I’m not buying it. Death is death, and I don’t imagine it gives the dying any consolation to know their human killer feels kind of bad about the whole affair.

Let’s not romanticize humans. As a species, we’re capable of mercy and compassion, but we also have a remarkable propensity for violence and cruelty. We’re a species that kills for pleasure: every year, more than half a million people around the globe die as a result of intentional violence, and many more are injured, starved, or intentionally deprived of shelter, medicine, or other essentials. In the United States alone, more than 16,000 people are murdered each year, and another million-plus are the victims of other violent crimes. Humans, not robots, came up with such ingenious ideas as torture and death by crucifixion. Humans, not robots, came up with the bright idea of firebombing Dresden and Tokyo; humans, not robots, planned the Holocaust and the Rwandan genocide.

Plug in the right lines of code, and robots will dutifully abide by the laws of armed conflict to the best of their technological ability. In this sense, “killer robots” may be capable of behaving far more “humanely” than we might assume. But the flip-side is also true: humans can behave far more like machines than we generally assume.

In the 1960s, experiments by Yale psychologist Stanley Milgram demonstrated the terrible ease with which ordinary humans could be persuaded to inflict pain on complete strangers; since then, other psychologists have refined and extended his work. Want to program an ordinary human being to participate in genocide? Both history and social psychology suggest that it’s not much more difficult than creating a new iPhone app.

“But wait!” you say. “That’s all very well, but aren’t you assuming obedient robots? What if the killer robots are overcome by bloodlust or a thirst for power? What if intelligent, autonomous robots decide to override the code that created them, and turn upon us all?”

Well: if that happens, killer robots will finally be able to pass the Turing Test. When the robots go rogue — when they start killing for sport or out of hatred, when they start accruing power and wealth for fun – they’ll have ceased to be robots in any meaningful sense. For all intents and purposes, they will have become humans — and it’s humans we’ve had reason to fear, all along.

Rosa Brooks is a law professor at Georgetown University and a senior fellow with the New America/Arizona State University Future of War Project. She served as a counselor to the U.S. defense undersecretary for policy from 2009 to 2011 and previously served as a senior advisor at the U.S. State Department. Her most recent book is How Everything Became War and the Military Became Everything. Twitter: @brooks_rosa
