Analysis

AI Has Entered the Situation Room

Data lets us see with unprecedented clarity—but reaping its benefits requires changing how foreign policy is made.

By Stanley McChrystal, a retired four-star U.S. Army general and an advisor to Rhombus Power, and Anshu Roy, the founder and CEO of Rhombus Power.
Brian Stauffer illustration for Foreign Policy

At the start of 2022, seasoned Russia experts and national security hands in Washington watched in disbelief as Russian President Vladimir Putin massed his armies on the borders of Ukraine. Was it all a bluff to extract more concessions from Kyiv and the West, or was he about to unleash a full-scale land war to redraw Europe’s borders for the first time since World War II? The experts shook the snow globe of their vast professional expertise, yet the debate over Putin’s intentions never settled on a conclusion.

But in Silicon Valley, we had already concluded that Putin would invade—four months before the Russian attack. By the end of January, we had predicted the start of the war almost to the day.

How? Our team at Rhombus Power, made up largely of scientists, engineers, national security experts, and former practitioners, was looking at a completely different picture than the one available to the traditional foreign-policy community. Relying on artificial intelligence to sift through almost inconceivable amounts of online and satellite data, our machines were aggregating actions on the ground, counting inputs that included movements at missile sites and local business transactions, and building heat maps of Russian activity virtually in real time.

We got it right because we weren’t bound by the limitations of traditional foreign-policy analysis. We weren’t trying to divine Putin’s motivations, nor did we have to wrestle with our own biases and assumptions in interpreting his words. Instead, we were watching what the Russians were actually doing by tracking often small but highly important pieces of data that, when aggregated effectively, became powerful predictors. All kinds of details caught our attention: Weapons systems that had been moved to the border regions in 2021 for what the Kremlin claimed were military drills were still there, as if pre-positioned for future forward advances. Russian officers’ spending patterns at local businesses made it obvious they weren’t planning on returning to barracks, let alone home, anytime soon. By late October 2021, our machines were telling us that war was coming.

Did the machines tell us with 100 percent certainty that Russia would invade? No, but they told us that the pattern of Russian activities leading up to the war made it extraordinarily likely that Putin would order the attack. In fact, that is how AI works: Large language models learn by sifting through past data—in our case, about 10 years’ worth, going back to just before Russia’s 2014 invasion of Crimea. They look for patterns: Whenever X has happened in the past, Y has often been the outcome. Sometimes the correlation is weak, but other times the pattern is strong. Add up enough of these signals, and our system can predict aggression in future hot spots around the globe with specific levels of confidence.
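To make the aggregation step concrete, here is a minimal sketch of how many weak indicator signals can be combined into a single likelihood. It illustrates the general technique, not Rhombus’s actual model; the indicator names, weights, and logistic form are all assumptions made for the example.

```python
import math

# Hypothetical indicator weights, as if learned from historical episodes
# (e.g., 2014 onward). All names and values are illustrative only.
WEIGHTS = {
    "equipment_left_after_drills": 1.8,
    "officer_spending_shift": 1.2,
    "missile_site_activity": 1.5,
    "logistics_buildup": 2.1,
}
BIAS = -4.0  # baseline log-odds of aggression when no signal is firing

def aggression_likelihood(signals: dict[str, float]) -> float:
    """Combine weak indicator signals (each scaled 0-1) into one probability
    via a simple logistic model."""
    log_odds = BIAS + sum(WEIGHTS[name] * value for name, value in signals.items())
    return 1.0 / (1.0 + math.exp(-log_odds))

# A late-October-2021-style picture: most indicators firing strongly.
observed = {
    "equipment_left_after_drills": 0.9,
    "officer_spending_shift": 0.8,
    "missile_site_activity": 0.7,
    "logistics_buildup": 0.9,
}
print(f"Estimated likelihood of attack: {aggression_likelihood(observed):.0%}")
```

The point of this design is that no single indicator is decisive: the prediction emerges from the weighted sum, and in a real system the weights themselves would be learned from the roughly 10 years of historical data described above.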

Some of what AI does is not very different from traditional sleuthing. Twitter users, after all, posted open-source satellite images showing Russian equipment collecting near the border before the war. But it would take thousands of open-source investigators or intelligence analysts to replicate just one small part of the machine model. What AI can do—and humans cannot—is look at everything everywhere at once and very fast. Think of The Big Short, the movie about curious bankers wading through masses of mortgage data, finding suspicious quirks, and sleuthing house-to-house to uncover the shenanigans that led to the 2007 subprime crisis. AI is The Big Short a million times over—looking not only at mortgages but at everything that could conceivably be interesting and doing it simultaneously, automatically, and virtually in real time.

Just as importantly, the machines are dispassionate, making it easier to circumvent human biases and wishful thinking. Some experienced Russia policy hands didn’t want to believe that Putin would start a war with so few troops, such poorly prepared units, and such a high risk of economic disaster for Russia. They were right about the state of Putin’s preparations but projected their own definition of rationality onto the Russian leader. When the machines sift through historical patterns, they do not care for human notions of what a “rational” Putin might do—only for the likelihood that an observed pattern has led to a certain outcome in the past. With the model using countless data points from 2014 to the present moment, including Russia’s first invasion of Ukraine, there were plenty of patterns and outcomes to observe.

It is less important that the large language models got it right when many lifelong experts did not. As the early experience with AI chatbots has shown, machines are just as capable of hallucinating as human beings. More important is that we recognize that this tool has vast consequences for national security and foreign policy going forward—and acknowledge how little we have wrestled with those implications so far. Ask yourself: What can technology predict today about the likely course of the Russia-Ukraine war? What, for that matter, can it tell us about the future of warfare, geopolitics, and national security planning? As our team saw in the run-up to last year’s invasion, technology can already tell us more than we could have imagined only a decade ago. And it will be able to tell us far more a decade from now—if we are prepared to make the most of it.

In a world where data can help us see and anticipate with unprecedented clarity, we must leverage our new capabilities and empower decision-makers by reorganizing processes designed around the inputs of human beings. The U.S. government’s systems for handling information and making national security decisions were perfected for 20th-century situation rooms, where the best brains deliberated face-to-face around a table, not for 21st-century data and network technologies. Now, we need not just a bigger table and situation room—but their digital versions. Some of the participants of future deliberations won’t be human at all but thinking machines that will empower the experts with inferences, intervention points, and what-if scenarios at faster and faster speeds of relevance. Instead of decision-makers gathering to debate how to react to an unfolding crisis, as the old system was constructed to do, they will need to routinely handle predictions of events before they happen. That alone will require a rethinking of how national security decisions are made.


Today, a confluence of developments—including the ubiquity of sensors, ever faster computers, the power of algorithms, and the open-source revolution—has brought us to a moment when more information than ever before can be collected, stored, and accessed. Now, it is also interoperable and manipulable by AI. At our headquarters, we aggregate and, with the help of AI, make sense of all types of data, including early indicators, suspicious financial fingerprints, logistics activities, weapons flows, and subtle changes in infrastructure construction, as well as the tone and content of media reports. The result is a digital nervous system that warns decision-makers about gathering threats, often much earlier than in the past.
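As a rough illustration of what such a digital nervous system does at its core—fusing heterogeneous feeds into a per-region warning—consider the sketch below. It is a toy model under assumed inputs, not the actual Rhombus pipeline; the source names, anomaly scores, and alert threshold are all hypothetical.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Signal:
    source: str     # e.g., "satellite", "financial", "media_tone" (illustrative)
    region: str     # geographic area the observation concerns
    anomaly: float  # 0.0 = normal vs. historical baseline, 1.0 = highly anomalous

def fused_anomaly(signals: list[Signal], region: str) -> float:
    """Fuse one region's anomaly scores across all sources into a single value."""
    scores = [s.anomaly for s in signals if s.region == region]
    return mean(scores) if scores else 0.0

ALERT_THRESHOLD = 0.6  # hypothetical tripwire for escalating to human analysts

feed = [
    Signal("satellite", "border_district", 0.8),   # equipment massing
    Signal("financial", "border_district", 0.7),   # unusual local spending
    Signal("media_tone", "border_district", 0.5),  # harsher official rhetoric
    Signal("satellite", "quiet_region", 0.1),      # nothing out of the ordinary
]

for region in ("border_district", "quiet_region"):
    score = fused_anomaly(feed, region)
    status = "WARNING" if score >= ALERT_THRESHOLD else "nominal"
    print(f"{region}: fused anomaly {score:.2f} ({status})")
```

Even this toy version shows why the warning can come early: each individual feed is ambiguous on its own, but the fused score crosses the tripwire as soon as several weak signals point the same way.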

For example, our system flashed a warning sign well ahead of China’s massive missile and aircraft overflights of Taiwanese waters following then-U.S. House Speaker Nancy Pelosi’s visit to the island in August 2022. Similarly, the system predicted heightened risk around Japan ahead of U.S. President Joe Biden’s visit in May 2022; the media later reported unusual flights in the area by Chinese and Russian strategic bombers. The system anticipated political instability in Sri Lanka months in advance, and it also flagged Chinese activities in Kiribati and the Solomon Islands.

In all of these instances, the U.S. government could conceivably have received similar alerts from traditional intelligence sources. But importantly, the AI-generated warning often comes earlier, thanks to the power and speed of data aggregation and sense-making models. What’s more, human resources are limited. Conventional intelligence might be focused on a handful of known hot spots, while seemingly placid regions of the globe may be barely monitored. AI can be your eyes when your human eyes are looking elsewhere.

Earlier warnings of impending events increase freedom of action across the entire spectrum of national security—including the diplomatic, informational, military, and economic spheres. Faster insights might also help prevent, change, or mitigate an adverse outcome before it happens. Right now, even as soldiers, politicians, and diplomats shape the final outcome in Ukraine, we’re sifting data in California to predict where Russia might strike next. (FP Insiders can get a better sense of our predictions for Ukraine in this companion article.)

Ready or not, AI already makes it possible to look at a multitude of possible futures and to know, with surprisingly quantifiable likelihood, which of them may or may not happen. Even more importantly, it gives policymakers the capacity to war-game and pressure-test possible responses during a real crisis—in minutes or hours, not the days or weeks of traditional tabletop exercises. The quantity of data we analyze helps predict the next card in an opponent’s deck with previously unimaginable confidence. It is increasingly difficult to catch a technologically equipped nation by surprise.

But in many ways, while AI can make the picture clearer, it only makes decision-makers’ choices more complex. In the run-up to the Russian invasion, the Biden administration took the innovative step of publicizing its judgments on classified intelligence about Russia’s war preparations and broadcasting it to every capital. If AI makes it possible to consistently deduce your opponent’s next step, how will that affect diplomatic and negotiating strategy? The possibilities are dizzying. 

A grid of four photos shows the types of military surveillance that AI technology can parse, from left: a satellite image of the Russian troop build-up before the war in Ukraine, a Russian war plane, a Chinese-backed airstrip in Kiribati as seen from the air, and a China-flagged submarine.

Maxar Technologies/Getty Images/Planet Labs PBC

By now, it should be clear that increasingly powerful AI cannot be a substitute for human judgment. As valuable as the insights generated by the models can be, policymakers must still decide what to do with the information. Just like human-generated intelligence assessments, AI-enabled understanding always comes with a likelihood of an event happening—never 100 percent certainty. It always takes confidence and courage to act in time to change events; in some ways, making decisions based on AI takes even more confidence because it means making a bet on a prediction from an unconventional source. 

Policymakers will therefore need to inject a dose of humility into the ways they incorporate such tools into their work, just as they do with fallible human-derived inputs. The entire system needs to lean into the fact that predictions will, at times, be wrong. AI has not created a crystal ball: Rather than expecting infallible accuracy to drive decisions on, say, the battlefield, the important thing is to have transparency about the inputs that led to a conclusion. This process should be even more rigorous for AI-enabled judgments than for those that rely on traditional means. 

Many of the experts who have spent a lifetime studying countries, geopolitics, statecraft, and war will remain indispensable in a world of AI-informed foreign policy. These professionals must be the checks and guardrails on technology-driven decisions, just as the technology is a check on human judgment with its known downsides of groupthink and blind spots. Ultimately, we can augment human intelligence, diplomacy, and military planning with a technological edge—and project forward instead of mainly looking back. In the military sphere, AI won’t determine the course of every battle, but it will expand options and increase freedom of action to make decisions at the speed of relevance. 

As we make further advances in predictive technology, leaders and policymakers will face a whole new set of challenges. Having so much information will force policymakers to decide which of the many situations anticipated by the machines are most critical to prepare for. And the growing power of prediction will sharply lessen the sense of uncertainty that often slows down the policy process, forcing governments to make faster decisions instead of covering all bets and preparing for all eventualities. There will be fewer excuses for delaying until the course of events removes all doubt—or for what CIA Director William Burns has called “admiring the problem.”

Enhanced foresight is an earth-shattering capability that governments have yet to prepare for—in their personnel and processes, national security doctrines, and much else. When adversaries use the same technologies, it will create the game-changing reality of mutually assured transparency: a new situation in which they know that we know what they are planning several steps ahead, and vice versa. Just as surprise and uncertainty defined the big security events of the past—from Pearl Harbor to the Cuban missile crisis to 9/11—anticipation and the ubiquity of actionable information will define the rest of the 21st century.

And we have barely scratched the surface of the questions AI’s role in national security will pose. In a world where information dominance is the great advantage, how does the United States know it is maintaining an edge over its rivals and competitors? Are Washington and its allies investing in the right technologies and concepts? Are they adopting them at the necessary speed and scale to be able to deter and, if needed, defeat future aggression?

Then there are the really tough questions for policymakers. If we can now anticipate moves by adversaries or bad actors years in advance, what is our responsibility to act? What is the role of diplomacy? How do policymakers ensure that AI’s predictive powers don’t simply become an easy justification for preemptively using military force, when doing so would not align with U.S. interests and values? What is the international legitimacy or legality for lethal action based on predictions made by AI? Is there an obligation to share warnings publicly? Is there any role for strategic ambiguity in a world made transparent by AI—or will Washington want adversaries to know that it knows what they’re planning? When is inaction justified—or even essential?

AI is not science fiction. It is here now, and it has entered the situation room. The technology is miles ahead; we are only now beginning to develop the human capacity to use it—and to make the organizational, procedural, and doctrinal changes that will be indispensable if we are to reap AI’s national security benefits in time.

This article appears in the Summer 2023 issue of Foreign Policy.

Stanley McChrystal is a retired four-star U.S. Army general and an advisor to Rhombus Power. He led Joint Special Operations Command from 2003 to 2008 and U.S. and coalition forces in Afghanistan from 2009 to 2010. He is the author of My Share of the Task and a co-author of Team of Teams, Leaders, and Risk: A User’s Guide. Twitter: @StanMcChrystal

Anshu Roy is the founder and CEO of Rhombus Power.
