OpenAI’s CEO Goes on a Diplomatic Charm Offensive
Sam Altman’s global travels may be more opportunistic than altruistic.
Sam Altman has been on a world tour that would put Taylor Swift to shame.
The chief executive of OpenAI, best known for its artificial intelligence model ChatGPT, visited Israel, Jordan, Qatar, the United Arab Emirates, India, South Korea, Japan, Singapore, Indonesia, and Australia in the past two weeks. He met students, venture capitalists, and leaders including Indian Prime Minister Narendra Modi, South Korean President Yoon Suk-yeol, and Israeli President Isaac Herzog. He even carved out some time to dial into an AI conference in Beijing and engage in a bit of diplomacy himself, calling for “global cooperation” to make AI technology safer and for more exchanges between Chinese and American researchers.
It’s a good time to be talking to world leaders about AI. The technology has been front and center in the global conversation this year—spurred on in large part by OpenAI’s advances—and governments are racing to gain an upper hand in innovation and regulation in equal measure. Altman is trying to be the face of both and has balanced his bullishness about the technology’s benefits with warnings about its downsides and calls to institute some guardrails—as long as the guardrails don’t restrain his company too much.
Altman has been looking to parlay his face time with governments around the world into more favorable regulation for his own company. Most countries are still feeling their way around AI legislation. The European Union’s AI Act—which passed a key vote last week—is only expected to be fully approved by the end of this year and come into force by late 2025. The United States and China, the two countries leading advances in artificial intelligence, are further behind when it comes to regulation.
Altman has previously criticized the EU legislation, warning last month that OpenAI might have to cease operating within the bloc’s borders before walking back those comments and saying the company had “no plans to leave.” That was during another multi-country trip within Europe that preceded his Asia sojourn, and he also met with European Commission President Ursula von der Leyen days after making his controversial statements. And according to a report from Time magazine, OpenAI spent months lobbying EU officials behind the scenes to avoid having ChatGPT and other models classified as “high risk.” While that effort appears to have been somewhat successful, it highlights the risk for tech executives who publicly court policymakers while privately lobbying against their rules.
“One of the biggest challenges, I think, is companies going in saying they’re for regulation, but then they get slammed because they don’t really mean any regulation,” said Katie Harbath, founder and chief executive of the tech policy firm Anchor Change and a former director of public policy at Facebook who worked on global elections. “People are like: ‘Oh, wait, your public comments said you’re for regulation, and now you’re lobbying against it.’ That feels like a turn.”
In some ways, meeting with world leaders is a rite of passage for any Big Tech CEO—the likes of Meta’s Mark Zuckerberg, Google’s Sundar Pichai, Microsoft’s Satya Nadella, and Amazon’s Jeff Bezos have all done it at various stages of their tenures. But Altman so far seems to have had a more positive engagement with global policymakers and regulators than the damage-control exercises that his tech industry peers carried out during the multiple crises of the last decade.
“It seems a little bit that he’s learning the lesson a lot sooner than necessarily other companies did in the social media age, about the CEO himself really getting involved in setting up these relationships with policymakers around the world as they’re thinking about regulation,” Harbath said, pointing out that Zuckerberg didn’t really start engaging with policymakers himself until after the 2018 Cambridge Analytica scandal over Facebook’s data-collection practices.
OpenAI representatives did not respond to multiple requests for comment for this story, but Altman has made several public statements urging governments to work with companies like his to put constraints around artificial intelligence and mitigate its potential risks.
In written testimony before a Senate Judiciary subcommittee last month, he called for the United States to adopt a licensing and registration regime for AI models “above a crucial threshold of capabilities,” arguing that requiring government approval would help mitigate potential safety concerns. His vision is wider than the Beltway: He wrote that regulations have to consider the global scale of AI and ensure international cooperation.
The trick, critics say, is where Altman draws that “crucial threshold.” They say Altman’s plans would only entrench larger companies like OpenAI that have the financial heft to comply with regulation or deal with its impacts.
“Licensing just empowers the big companies that can afford the cost of being licensed,” said Susan Ariel Aaronson, a research professor at George Washington University and co-principal investigator of the National Science Foundation’s Institute for Trustworthy AI in Law and Society. “If you have the money, anyone can challenge ChatGPT,” she said.
The sheer amount of computing and data processing power required to build AI models already favors larger players, and regulation that adds further hoops for smaller companies to jump through will make it harder to level the playing field, according to Aaronson. She said that entrenched firms with access to data can become virtually “data cartels.”
Another issue is the time frame for regulation, as reflected in the EU’s slow-walk approach to new rules for AI. Most regulatory efforts focus on fears about what AI could do in the future but ignore the here and now, said Sarah Myers West, managing director of the AI Now Institute and a former senior advisor on AI to the Federal Trade Commission. She cited concerns about the “very real harms that are unfolding before us right now,” from the spread of misinformation to the exacerbation of inequality.
The broad willingness among policymakers to start looking at guardrails, even if their gaze remains distant, is driven in many ways by precisely those recent, rapid developments in the technology. Policymakers have been burned by new technology before, in controversies over content moderation, data collection, privacy, and monopoly power. That might explain the interest—but could also lead to some regulatory blind spots.
“I think they, like any other big company, want to shape regulation,” Aaronson said, referring to OpenAI and Altman. “What is atypical is the response of policymakers who seem way too responsive to these guys after getting burned by them.”
Rishi Iyengar is a reporter at Foreign Policy. Twitter: @Iyengarish