The killer robot has been a science-fiction staple for decades, but rapid advances in artificial intelligence may soon usher in the era of lethal autonomous machines for real. If one counts certain ship-borne air-defense systems, that day has already arrived. But a growing chorus of critics argues that machines shouldn’t be licensed to kill. With the United Nations set to take up the issue in 2014, here’s a look back at the surprisingly long history of lethal autonomy.
1898
Nikola Tesla unveils the first wireless remote-controlled vehicle, a small iron-hulled boat, before a skeptical crowd in New York’s Madison Square Garden. He later tries to sell the device, dubbed a “telautomaton” — as well as plans for radio-guided torpedoes — to the U.S. military, but officials in Washington won’t take him seriously.
World War I brings a series of advances in robotic warfare, including the U.S.-made Kettering “Bug” (a gyroscope-guided winged bomb) and the German FL-7 wire-guided motorboat, loaded with hundreds of pounds of explosives. In 1916, the range of the coastal-patrolling German boats is doubled when they are outfitted with radio-control systems.
September 9, 1943
Two German-made FX-1400, or “Fritz X,” bombs slam into the Italian battleship Roma as it sails toward the Strait of Bonifacio, splitting the vessel in two and sending more than 1,200 sailors to their deaths. The Fritz X is arguably the first precision-guided munition used in combat.
1950
British mathematician Alan Turing, arguably the godfather of artificial intelligence, writes, “I propose to consider the question, ‘Can machines think?’” In Turing’s mind, it’s less a matter of whether machines can reason like humans than how well they can imitate them.
1953
The USS Mississippi test-fires one of the earliest computer-guided missiles, launching a 1,180-pound RIM-2 Terrier off the coast of Cape Cod. A few years later, the Talos missile system comes online, using a homing device that automatically corrects for variations in altitude and speed.
1963
Concerned that the Soviet Union might technologically outdo the United States, the Pentagon’s Defense Advanced Research Projects Agency gives the Massachusetts Institute of Technology $2 million to explore “machine-aided cognition.” The cash infusion jump-starts research in artificial intelligence and computer science.
1972
The U.S. Air Force uses laser-guided weapons to destroy the strategic Thanh Hoa Bridge in North Vietnam, marking the first time a so-called “smart bomb” successfully destroys a major enemy target. During the Vietnam War, the Air Force also deploys autonomous unmanned surveillance aircraft that fly in circular patterns and shoot film until their fuel runs out.
1978
The U.S. Defense Department launches the first Navstar satellite, a major development in modern global positioning technology. The system reaches full operational capacity in 1995 — the same year that GPS is used to guide an unmanned aerial vehicle for the first time, marking a leap forward for drones.
1994
The U.S. government awards General Atomics a contract to build the RQ-1 Predator drone, which will transmit video footage in real time over satellite link, guided by ground-based controllers who can be thousands of miles away. A little more than a year later, the unmanned aerial surveillance vehicle is operating over Bosnia. By 2001, it has been upgraded to carry Hellfire missiles. The era of killer drones is born.
2006
South Korea announces plans to install Samsung Techwin SGR-A1 sentry robots along the Demilitarized Zone with North Korea. Armed with machine guns, they are capable of fully autonomous tracking and targeting, though human approval is reportedly required before they fire.
May 18, 2009
The U.S. Air Force releases a planning document that charts a long-term path to “fully autonomous capability” for aircraft — including the use of force. “The end result would be a revolution in the roles of humans in air warfare,” the report states.
August 5, 2012
Researchers with Cambridge University’s Centre for the Study of Existential Risk publish an article on the potential hazards of artificial intelligence gone awry: “We risk yielding control over the planet to intelligences that are simply indifferent to us, and to things that we consider valuable — things such as life and a sustainable environment.”
November 21, 2012
The U.S. Defense Department issues a directive designed to “minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems.” Although the directive allows for the development of fully autonomous nonlethal systems, it requires, for the time being at least, that “appropriate levels of human judgment” be exercised over robots that use deadly force.
April 23, 2013
Seizing on the public’s distaste for drones, a coalition of NGOs, including Human Rights Watch, launches the Campaign to Stop Killer Robots. “[A] number of countries, most notably the United States, are coming close to producing the technology to make complete autonomy for robots a reality,” Human Rights Watch had warned in 2012.
May 30, 2013
Christof Heyns, the U.N. special rapporteur on extrajudicial, summary, or arbitrary executions, calls for a moratorium on the development and deployment of autonomous robots and urges states to consider whether existing international law is sufficient to govern their use, noting that “war without reflection is mechanical slaughter.”
July 10, 2013
The Northrop Grumman X-47B unmanned combat air vehicle lands successfully on the deck of the USS George H.W. Bush, becoming the first unmanned autonomous vehicle to land on an aircraft carrier. The feat, Bloomberg raves, brings humankind into a “new era of flight.”
October 8, 2013
Documents submitted to the British Parliament reveal that BAE Systems’ supersonic, stealthy Taranis drone — with the ability to autonomously identify targets — has begun secret tests in the Australian Outback. But BAE reassures legislators that there is a “human operator in the loop.”
November 15, 2013
The 117 governments party to the U.N. Convention on Certain Conventional Weapons agree to take up the issue of lethal autonomy in 2014 — with activists hopeful that a ban could be in place as early as 2016. But which would you put your money on: the U.N. or Skynet?
*CORRECTION, Jan. 21, 2014: The print version of this article in the January/February 2014 issue incorrectly stated the year in which the U.S. Air Force used laser-guided weapons to destroy the Thanh Hoa Bridge in North Vietnam. The bridge was destroyed in 1972, not 1973.
Special thanks to P.W. Singer, author of Wired for War: The Robotics Revolution and Conflict in the 21st Century.
Illustrations by Jameson Simpson