Could Killer Robots Bring World Peace?
We're breaking Isaac Asimov's First Law — and it could be good for humanity.
A few weeks ago, the United Nations affirmed Isaac Asimov’s First Law of Robotics: "A robot may not injure a human being." Christof Heyns, the U.N. special rapporteur on extrajudicial, summary or arbitrary executions, said as much in a May 29 speech to the Human Rights Council in Geneva calling for a moratorium on the development of lethal robots. His argument followed two thoughtful paths: concern that such machines cannot be as discriminating in their judgments as humans, and concern that their very existence might make war too easy to contemplate. As he summed up the grim prospect of robot soldiers, "War without reflection is mechanical slaughter."
Asimov’s First Law — there are a couple more that evolved later to reinforce the principle of doing no harm to individuals, or to humanity overall — was introduced in his 1942 short story "Runaround" and later collected in his book I, Robot. In these tales, it is clear that robots are not only hard-wired to refrain from violence against humans; they will even lie to avoid causing harm. But just two years after the introduction of the fictional First Law, actual robots led missions in history’s most destructive strategic bombardment campaign: the massive air war against Nazi Germany. In this instance, the action was controlled by the Norden bombsight, a robot in the strictest sense — not humanoid, but rather "a machine able to carry out complex actions without human control." As bombers neared their targets, the Norden’s computer took control of the plane through its autopilot, adjusting course for wind and other factors. Then it calculated the optimum moment for dropping bomb loads. It was not a very accurate system, being off-target by about a quarter-mile, on average — but robots did a lot of killing in World War II.
Even before the robot-caused carnage inflicted by the Norden, and putting Asimov’s First Law aside momentarily, the general literary line of thinking about robots was that they were going to prove very dangerous to humankind. Ambrose Bierce, the great curmudgeon, may have started it all with his short story "Moxon’s Master," first published in 1899. In it, Moxon, an inventor, creates a mechanical man who can do any number of things, including play chess — long before IBM’s Deep Blue came along and defeated world chess champion Garry Kasparov. But when Moxon beats his robot in a game, it flies into a rage and kills him.
Two decades later, in 1921 — but still 20 years before Asimov’s First Law — Karel Čapek’s play R.U.R. premiered (the acronym refers to "Rossum’s Universal Robots"). The robots of the play are not machines but biologically engineered entities, as in Philip K. Dick’s Do Androids Dream of Electric Sheep? (better known under the film title, Blade Runner). As in Dick’s novel, they are exploited and rebel, but in Čapek’s play they do so on a large scale, finally supplanting humanity. It is a trope that has taken hold ever since in movie franchises like The Terminator and The Matrix, and in the brilliant reboot of the Battlestar Galactica television series. In more recent sci-fi literature, John Ringo’s Von Neumann’s War digs quite deeply into the way that alien robots would think about strategy and tactics in a war of conquest against humanity. So it seems that, in literary terms, Isaac Asimov stood against a tide of thinking about the coming lethality of robots.
Lethal robots have been making progress in the real world as well. One of the principal weapons of modern warfare, the Tomahawk missile, is a robot. To be sure, its target is chosen by humans, but the missile guides itself to its destination — totally unlike human-controlled Predators, Reapers, and other so-called drones — working around terrain features and dealing with all other factors on its own as well. Tomahawks have done much killing in our two wars with Iraq — and in a few other spots as well. Israel’s Harpy is another fully autonomous robot attack system; while it aims to take out radar emitters rather than people, if enemy soldiers are on site…. The British Taranis is a robot aircraft capable of engaging enemy fighter jets. On the Korean Peninsula, Samsung Techwin’s SGR-A1 is a sentry robot, usually remote-controlled but capable of autonomously guarding the demilitarized zone between the North and South — that narrow patch of green foliage surrounded by the most militarized turf on the planet.
Clearly, 21st century military affairs are already being driven by the quest to blend human soldiers with intelligent machines in the most artful fashion. For example, in urban battles, where casualties have always been high, it will be better to send a robot into the rubble first to scout out a building before the human troops advance. In future naval engagements, where the risk of killing civilians will be close to nil out at sea, robot attack craft might be the smartest weapon to use, particularly in an emerging era of supersonic anti-ship missiles that will imperil aircraft carriers and other large vessels. In the air, robots will pilot advanced jets built to perform at extreme G-forces that the human body could never tolerate. And as Peter Singer has observed in his book Wired for War, the U.S. military, at least, is now implementing the swarming concept that my partner David Ronfeldt and I developed over a decade ago — the notion of attacking from several directions at the same time.
All this means that the moratorium Christof Heyns called for is likely to be dead on arrival if it ever gets to the U.N. Security Council — some veto-wielding members have no intention of backing away from intelligent-machine warfare. And those who keep the high watch in many other countries will no doubt seek the diffusion, rather than the banning, of armed robots. Still, the concerns Heyns expressed are important ones. Yes, we should take care to protect noncombatants, but I think the case can be made that robots will do no worse than humans — and perhaps will do better — when it comes to collateral damage. They don’t tire, seek revenge, or strive to humiliate their enemies. They will make occasional mistakes — just as humans always have and always will.
As to Heyns’s worry that war will become too attractive if it can be waged by robots, I can only reaffirm Gen. William Tecumseh Sherman’s assessment: "War is hell." He was right during the Civil War, and the carnage of the nearly 150 years since — perhaps the very bloodiest century-and-a-half in human history — has done nothing at all to disprove his point. So the coming of lethal robots, as with other technological advances, will likely make war ever deadlier. The only glimmer of hope is that on balance, and contrary to Heyns’s concern, the cool, lethal effectiveness of robots properly used might, just might, give potential aggressors pause, keeping them from going to war in the first place. For if invading human armies, navies, and air forces can be decimated by defending robots, the cost of aggression will be seen as too high. Indeed, the country, or group of countries, that can gain and sustain an edge in military robots might have the ultimate peacekeeping capability.
Think of Gort and his fellow alien robots from the original 1951 film The Day the Earth Stood Still. As Klaatu, his humanoid partner, makes clear to the people of Earth, his alliance of planets has placed its security in the hands of robots programmed to annihilate any among them who would break the peace. A good use of lethal robots for a greater humane purpose.