Congress Can Help the United States Lead in Artificial Intelligence
The United States is falling behind when it comes to AI. Here’s how a new congressional commission can ensure that Washington catches up.
This week the U.S. Congress will hold hearings on the Defense Department’s progress on artificial intelligence, a critical technology area. On the agenda is the new National Security Commission on Artificial Intelligence, which Congress mandated in the most recent National Defense Authorization Act. Its members will be appointed by senior congressional leaders and agency heads and will develop recommendations for advancing AI to bolster U.S. national security. The commission could spur much-needed government attention, and it should sketch out an ambitious agenda.
The United States is behind many other nations when it comes to crafting a national plan for AI. Last year, China released its “Next Generation Artificial Intelligence Development Plan,” with the explicit goal of becoming the world leader in artificial intelligence by 2030. Over a dozen other countries have also published national AI strategies. The AI revolution is global, and while the United States has a vibrant AI ecosystem, other nations do, too. Half of the top 10 AI start-ups in the world are American; the other half are Chinese. China is investing in research and development and STEM education, and recruiting top talent from Silicon Valley. Chinese companies such as Alibaba, Baidu, Tencent, and SenseTime are in the top tier of tech companies using AI around the globe. Without action, the United States risks losing its decades-long technology edge in the civilian and military sectors. The White House is working on updating the national AI research and development plan from 2016, but Washington needs a comprehensive AI strategy for both the government and private sector.
For the United States to maintain its global leadership role in artificial intelligence, the public and private sectors will need to work together, and the commission’s members should reflect the diverse experience needed to grapple with this issue. Early signs suggest the commission can win private-sector buy-in: Eric Schmidt, former chairman of Alphabet, and Eric Horvitz, a director at Microsoft Research, have been named as two of the fifteen commission members. As the commission gets underway, it should focus on advancing U.S. leadership in the key drivers of AI and machine learning: human capital, data, and computing power.
Increasing the pipeline of AI talent is critical. AI researchers in Silicon Valley currently command enormous salaries, a sign of the shortfall in top-tier talent. The United States, including both the public and private sectors, must educate, recruit, and retain the best researchers in the world. Increasing education funding and opportunities for science, technology, engineering, and mathematics should be a top priority. The federal government plan for STEM education recently released by President Donald Trump’s administration takes some important steps, and the commission should work with university administrators to better understand what more the government and industry can do to expand the pipeline of students acquiring advanced STEM degrees.
Additionally, for decades, the United States’ ability to draw top talent from around the world to U.S. universities has been a core advantage. Many of these students stay, found companies, and feed the engine of American innovation. Immigrants launch one-quarter of start-ups in the United States. Unfortunately, for the past two years, international student enrollment in U.S. universities has declined. The Trump administration’s anti-immigration policies and rhetoric are contributing to this decline; international enrollment in other English-speaking countries, meanwhile, is on the rise. The commission should tackle how to align immigration policy with the country’s STEM needs and provide recommendations for Congress to pass legislation that preserves the United States’ advantage in recruiting top-tier global talent.
Data is another key driver of AI and machine learning, and the United States desperately needs federal data-privacy legislation that both protects citizens and allows companies to build secure data sets for creating machine-learning algorithms. A raft of data breaches and scandals illustrates the urgent need for data-privacy regulation. The United States has lagged behind the European Union in establishing regulations, and individual states have begun to step in to fill the gap, with California and Vermont recently passing major legislation. The commission should work with lawmakers to help craft federal data-privacy regulations that balance the need for data to train algorithms against the need to protect privacy.
Computing hardware is another critical component of AI technology, and the commission should examine the United States’ posture in the geopolitics of computing hardware. While U.S. companies dominate the market for chip design, much of the actual production occurs overseas. Security researchers have demonstrated the feasibility of inserting hidden hardware-level “back doors” into chips during manufacturing. The commission should consider the national security implications of supply-chain vulnerabilities in the current hardware ecosystem, and whether reshoring semiconductor manufacturing is in U.S. national security interests.
Ensuring the United States remains a leader in AI and machine learning is a necessary first step, but the U.S. national security apparatus must also be able to use AI technology effectively. If history is any guide, the adoption challenge will be the difference between leading and lagging over the medium term. Being a leader in technological innovation is not enough if the national-security sector cannot effectively use those innovations. Three major challenges stand in the way.
First, the pace of government bureaucracy is woefully out of step with the tech sector. Many start-ups would go big or fail before they made it through the government’s traditional procurement system. The Defense Department has made important strides in addressing this mismatch by creating the Defense Innovation Unit (DIU), a Silicon Valley-based office with more flexibility to fund small defense contracts quickly and facilitate interaction between the Pentagon and Silicon Valley. The Pentagon has also created a new Joint Artificial Intelligence Center to coordinate and advance defense-related AI activities. These efforts should be expanded and continuously improved to make it easier for the government to work with start-ups and other companies unaccustomed to federal contracting. Expanding the Pentagon’s efforts should include training existing defense personnel to use AI applications safely and reliably.
Second, current AI technology has limitations that could pose problems. These include the potential for bias, the risk that the real world is more complex than the training environment in ways that make algorithms brittle or prone to malfunction, and other issues that can undermine reliability. The transparency challenges associated with some AI methods, such as deep learning (a type of machine learning that uses multilayered neural networks), could be a problem for some national security applications where understanding why an AI system took an action may be important. For example, a predictive AI system that reported an increase in the probability that another state will undertake an action—a cyberattack, military intervention, or diplomatic move—would be less useful if it couldn’t explain why it was making that prediction.
Other countries or even nonstate actors could also exploit vulnerabilities in current AI systems. Learning systems are vulnerable to data-poisoning attacks, in which adversaries manipulate training data to ensure an algorithm learns the wrong thing. Classifiers built on neural networks are vulnerable to spoofing attacks, in which adversaries feed specially crafted inputs into the network to produce the wrong output, including in ways that are undetectable to human observers. These problems are particularly important in high-risk national security applications such as intelligence or defense, where accidents or successful adversarial attacks could have major consequences. Adversaries could manipulate weapons or cyberdefenses to focus on false threats and ignore real ones. The commission should work with federal agencies to better understand how they are accounting for these vulnerabilities in their use of AI technology.
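The mechanics of such a spoofing attack can be illustrated with a toy sketch. Everything below is hypothetical and invented for illustration (the “classifier” is a single linear model, not any deployed system): an attacker with a small per-feature perturbation budget uses the gradient of the model’s score to flip a “threat” classification to “benign.”

```python
# Toy linear "classifier": score = sum(w_i * x_i); positive score => "threat".
# Weights and inputs are made up purely for illustration.
w = [0.9, -0.4, 0.7]

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x))

def classify(x):
    return "threat" if score(x) > 0 else "benign"

def sign(v):
    return 1.0 if v > 0 else -1.0 if v < 0 else 0.0

x = [0.5, 0.1, 0.2]   # original input: classified as a threat
eps = 0.8             # attacker's per-feature perturbation budget

# Gradient-sign-style spoof: for a linear model the gradient of the score
# with respect to the input is just w, so shifting each feature by
# -eps * sign(w_i) lowers the score as much as the budget allows.
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

print(classify(x), "->", classify(x_adv))  # threat -> benign
```

Real attacks on deep networks work the same way in principle, computing or estimating gradients through the network rather than reading them off a weight vector, and often with perturbations small enough that humans cannot see the difference.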
The commission should also support increased government funding for AI safety research. While the federal government should not attempt to compete with the vast sums the private sector is investing in research and development, AI safety and progress measurement may be areas with insufficient private-sector incentives for funding. No one knows how to make cutting-edge AI systems safe, and the government can play a useful role in funding basic research into the engineering of safe systems.
Third, the commission should tackle questions about appropriate and responsible use of AI. Federal government applications of AI to defense and border security are already paying dividends. For example, facial recognition systems helped catch three impostors attempting to enter the United States at Washington’s Dulles Airport in the past few months. Yet some have raised alarm about government use of this powerful technology; many Google researchers objected to the company’s involvement in the Pentagon’s Project Maven, which uses AI to process drone video feeds, leading Google to end the relationship. Researchers at other companies have voiced similar concerns. While these specific applications do not directly involve weapons and lethal force, Pentagon rhetoric sometimes creates a disconnect with parts of Silicon Valley, such as when senior leaders proclaimed the “lethality” of cloud computing.
Defense Secretary James Mattis recently tasked the Defense Innovation Board to develop principles for AI in defense, but there remain critical unanswered questions about the norms and standards for responsible AI use across the full range of national security applications. Lethal autonomous weapons, which have been the subject of United Nations discussions for the past five years, are only one example. The commission should actively embrace the debate around national security use of AI technology and publicly articulate principles for responsible use, coordinating with the Defense Innovation Board’s ongoing efforts. Just as with digital surveillance tools, the commission should consider issues such as the proper role of oversight and transparency to the American public.
The AI revolution is underway, and it won’t stop to wait for the U.S. government to keep up. There are important AI initiatives in the White House, the Pentagon, and Congress, and the commission should work with them while looking to fill gaps that others are not addressing. Immigration policy, strategies for improving cooperation between the Pentagon and the private sector, and a public set of AI principles to guide the technology’s use in defense and intelligence settings should be priorities.
If the commission can do that while also raising awareness of AI safety and reliability concerns and outlining recommendations for how the government can use AI technology safely and responsibly, it will help ensure continued U.S. global economic and military leadership.
Paul Scharre is Director for Technology and National Security at the Center for a New American Security (CNAS) and the author of Army of None: Autonomous Weapons and the Future of War. Michael C. Horowitz is a Professor of Political Science at the University of Pennsylvania and an Adjunct Senior Fellow at CNAS. Twitter: @paul_scharre