How to Regulate AI
Biden’s former top tech policymaker explains how guardrails around technology should work.
A strange thing is happening in the world of artificial intelligence. The very people who are leading its development are warning of the immense risks of their work. A recent statement released by the nonprofit Center for AI Safety, signed by hundreds of important AI executives and researchers, said: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Ravi Agrawal is the editor in chief of Foreign Policy. Twitter: @RaviReports