The Algorithms of August
The AI arms race won’t be like previous competitions, and both the United States and China could be left in the dust.
An artificial intelligence arms race is coming. It is unlikely to play out in the way that the mainstream media suggest, however: as a faceoff between the United States and China. That’s because AI differs from the technologies, such as nuclear weapons and battleships, that have been the subject of arms races in the past. After all, AI is software—not hardware.
Because AI is a general-purpose technology—more like the combustion engine or electricity than a weapon—the competition to develop it will be broad, and the line between its civilian and military uses will be blurry. There will not be one exclusively military AI arms race. There will instead be many AI arms races, as countries (and, sometimes, violent nonstate actors) develop new algorithms or apply private sector algorithms to help them accomplish particular tasks.
In North America, the private sector invested some $15 billion to $23 billion in AI in 2016, according to a McKinsey Global Institute report. That’s more than 10 times what the U.S. government spent on unclassified AI programs that same year. The largest share came from companies such as Google and Microsoft, as well as a number of smaller private firms, not from government-funded defense research. This reverses the dynamic from the Cold War, when government investments led to private sector innovation and produced technologies such as GPS and the internet.
China says it already holds more than 20 percent of patents in the field and aims to build a $150 billion AI sector by 2030. But while Beijing and Washington are the current leaders in this race, they are not the only competitors. Countries around the world with advanced technology sectors, from Canada to France to Singapore, also have the potential to make great strides in AI (or build on lower-level advances made by others). While this diffusion means that many more countries will have a stake in the regulation of AI, it also means that many more governments will have incentives to go it alone.
Unlike the development of a stealth bomber, which has only military applications, basic AI research has both military and civilian uses, which makes it much harder to keep research secret and thereby sustain a large first-mover advantage. The dual-use character of many developments in AI creates an incentive to promote their release and spread to the general public. That means companies can co-opt advances made by market leaders—especially lower-level advances that do not require significant computing hardware.
Most military applications of AI will be a far cry from the killer robots depicted in Hollywood films. For example, computer-run algorithms could aid militaries in better designing recruiting campaigns, more effectively training personnel, cutting labor costs through better logistical planning and operations, and improving surveillance. Or consider image recognition algorithms, which have a range of applications—from tailoring ads in the commercial sector to monitoring disputed territory. The race to develop such applications will be crowded for several reasons. Developed countries that have plenty of capital to invest but face challenges in recruiting and retaining talented personnel for their armed forces, as well as autocracies that do not trust their populations, will have an especially strong incentive to leverage AI for their militaries. Doing so will allow them to replace personnel with automation whenever possible.
This new competitive landscape will benefit middle powers such as Australia, France, Japan, and Sweden. These countries will have greater capacity to compete in the development of AI than they did in the creation of the complex military platforms used today, such as precision-guided missiles and nuclear-powered submarines. Advanced economies around the world are already working hard to help their private companies develop AI capabilities. In 2017, Canada committed to investing $94 million to attract and cultivate AI talent over the next five years—an annual investment equivalent to some 10 percent of Canada’s entire defense research and development budget in 2015, according to the Organization for Economic Cooperation and Development. Meanwhile, the European Commission approved an investment of $1.8 billion by 2020—a 70 percent increase.
The reason these countries are investing far more in both commercial and military AI than they generally do in military research and development is that AI has such great economic potential. Seemingly commercial AI capabilities can, in some cases, be parlayed into economic investments in the defense sector.
As long as the standard for air warfare is a fifth-generation fighter jet, and as long as aircraft carriers remain critical to projecting naval power, there will be a relatively small number of countries able to manufacture cutting-edge weapons platforms. But with AI, the barriers to entry are lower, meaning that middle powers could leverage algorithms to enhance their training, planning, and, eventually, their weapons systems. That means AI could offer more countries the ability to compete in more arenas alongside the heavy hitters.
Given AI’s many potential military uses, policymakers need to rethink the idea of an AI arms race and what it will mean for international politics.
The fundamental dilemma facing most attempts at arms control is that the more useful a technology is at providing armies with an edge, the harder it is to effectively regulate. There is, after all, no arms control agreement that meaningfully restricts countries from developing tanks, submarines, or fighter jets. Effective agreements tend to restrict the use of less important weapons that don’t decide wars—such as landmines and blinding lasers—or ones that have rarely been used, such as nuclear weapons.
Military history suggests that those applications of AI with the greatest relevance for fighting and winning wars will also be the hardest to regulate, since states will have an interest in investing in them.
Countries with advanced AI companies will be able to leverage those businesses to provide them with some military capabilities, either through adapting commercial technology or by offering financial incentives for talented researchers to focus on defense applications of AI. In these areas, the competition will be fierce because many actors could develop similar algorithms.
Some AI applications, such as operational planning for a complex battlefield or algorithms designed to coordinate swarms of planes or boats trying to attack an enemy target, may appeal exclusively to militaries (though even swarms have nonmilitary applications, including firefighting).
But even though there are clear military applications, AI cannot be compared to nuclear or biological weapons or even military mainstays such as tanks. AI is not itself a weapon. Just as there was not an arms control regime for combustion engines or electricity, it’s hard to imagine an effective regime for containing the coming AI arms race.
Efforts to mitigate the military risks of AI should therefore focus on specific potential uses rather than on the broad technology category. For example, the Convention on Certain Conventional Weapons, which focuses on weapons systems with the potential to cause excessive or indiscriminate injury, is convening discussions in Geneva with countries around the world about lethal autonomous weapons systems. It is critical, however, not to let the specter of killer robots obscure the broader ways AI could reshape militaries, just as the general-purpose technologies of previous centuries did.
Rapid technological advances that outpace the ability of governments to administer them, fear of falling behind other countries, and uncertainty about the range of what is possible magnify the challenge of effectively regulating AI. Moreover, given the potential for economic investments in AI to spill over into potential military applications, many countries beyond the United States, China, and Russia may balk at regulatory approaches that limit their ability to develop more effective defense forces.
Still, governments can create norms and practices surrounding the use of AI both inside and outside of militaries. Setting standards for AI reliability is one possibility, akin to international standard-setting in other arenas, such as Wi-Fi. Focusing on AI safety is a promising avenue to help ensure that, whatever forms of AI a defense establishment chooses to adopt, those applications work as intended.
Predicting how AI will affect the future of warfare is difficult; it means assessing technologies that are still mostly immature. Much could change in just a few years. If a decade from now current predictions about the military uses of AI turn out to have been more fiction than fact, one reason may be that militaries failed to design algorithms robust and secure enough to withstand efforts by sophisticated adversaries to deceive and distort them.
It’s also possible, though unlikely, that AI will propel emerging powers and smaller countries to the forefront of defense innovation while leaving old superpowers behind. Washington’s current focus on U.S.-Chinese competition in AI misses an even more important trend. There is a risk that the United States, like many leading powers in the past, could take an excessively cautious approach to the adoption of AI capabilities because it currently feels secure in its conventional military superiority.
That could prove to be a dangerous form of complacency, especially if relations between the United States and many of its current allies and partners continue to fray over time. Faced with a less reliable United States, shunned NATO partners would, for example, have even more incentives to invest in alternatives, such as experimenting more with how AI can bolster their capabilities in a world without clear superpower leadership.
If these countries decide to strike out on their own, while China and Russia continue to invest in capabilities with the explicit goal of disrupting U.S. military superiority, and parts of the U.S. tech industry remain reluctant to work with the Defense Department, the U.S. military could even find itself in a position it has not faced for more than 75 years: playing catch-up when it comes to deploying cutting-edge technology on the battlefield.
This article originally appeared in the Fall 2018 issue of Foreign Policy magazine.
Michael C. Horowitz is a professor of political science and the author of The Diffusion of Military Power: Causes and Consequences for International Politics. Twitter: @mchorowitz