Argument

An expert's point of view on a current event.

Social Media Is an Intel Gold Mine. Why Aren’t Governments Using It?

“To platform or to deplatform” is the wrong debate.

By Emanuele Ottolenghi, a senior fellow at the Foundation for Defense of Democracies.
A cardboard cutout of Mark Zuckerberg, CEO of Facebook, dressed up as the QAnon Shaman, along with cutouts of other people involved in the U.S. Capitol insurrection on the National Mall in Washington on March 25. Caroline Brehman/CQ-Roll Call, Inc via Getty Images

On March 25, the CEOs of tech giants Facebook, Twitter, and Google appeared before Congress for the first time since the Jan. 6 U.S. Capitol insurrection—an event that fundamentally changed the relationship between Washington and Silicon Valley.

On Jan. 8, as if on cue, Twitter suspended then-U.S. President Donald Trump indefinitely, following in a slew of deplatforming efforts on sites ranging from Facebook to Pinterest. But eliminating Trump’s platform didn’t eliminate the ideas he espouses. Nor did Trump’s removal adhere to an established precedent—or create a new one. If Trump is a problem, why does Twitter continue to give voice to Iran’s supreme leader, Ali Khamenei, and his Holocaust denial? Ultimately, Trump’s suspension is a classic case of Silicon Valley’s performative self-regulation. Honor systems don’t protect us.

Beyond speech, the dilemma of self-regulation—to platform or deplatform—becomes more complicated when social media becomes a gallery for illicit activities. Though one could argue Silicon Valley bears some responsibility for enabling the U.S. Capitol insurrection, Facebook—and platforms like it—have also proved integral in pursuing charges against the extremists who invaded and ransacked the building. Individual users have identified rioters online, and the FBI is combing through their Facebook and Twitter feeds—just as those same insurrectionists engage in a mad rush to scrub the internet of evidence of their own culpability. In cases like these, it's unclear whether the best thing for our collective security is to remove dangerous content or to take advantage of the intelligence it provides law enforcement.

As Facebook’s Mark Zuckerberg, Twitter’s Jack Dorsey, and Google’s Sundar Pichai defend their self-regulatory strategies before Congress, they will likely concede that the formation of established rules and guidelines for policing online rhetoric is a work in progress. That is true. What they won’t say—but what Congress must realize—is that we have done ourselves a disservice by reducing it to a black-and-white issue about free speech and privacy versus national security.

Ultimately, bickering about the philosophical intricacies of total internet freedom versus deplatforming does nothing for public safety. Instead, social media giants should take a more pragmatic approach and focus their efforts on real-time information sharing with law enforcement agencies. Facebook, Twitter, and their competitors have the benefit of hosting incredibly transparent intelligence: Sharing it with those who can act on it does more good than simply deleting it.


There is one thing we know for sure: Terrorists and criminals use social media—including many U.S.-sanctioned entities. Facebook appears to host profiles of multiple Hezbollah financiers whom the U.S. Department of Treasury has sanctioned for fundraising on behalf of the group in Latin America; there are also community pages devoted to memorializing Hezbollah fighters who died in combat. Ghazi Atef Nasr al-Din, a former Venezuelan diplomat in the Middle East under U.S. sanctions and wanted for questioning by the FBI for his support of Hezbollah, is on Twitter. Members of the Ayman Joumaa money laundering network—which the U.S. Treasury targeted in 2011—are on Facebook.

U.S.-sanctioned entities and terrorist groups aren’t the only communities to populate the likes of Facebook and Twitter. Relatives of Mexican drug lords boast of their lavish lifestyles on social media, which has become a platform for illicit trade. Most gruesomely, the internet is awash with criminal activity, from illegal sales of plots of land in the Amazon rainforest to child pornography. And we are not talking about the dark web: It is all happening on the same platforms that people of all ages rely on to keep in touch with friends, especially during the coronavirus pandemic.

But shutting down profiles of dangerous people and organizations doesn’t eradicate the dangers they pose—it simply shoves them further out of view, creating a false sense of security. Nor do social media companies have a particularly good track record of enforcing their own guidelines. Firms are supposed to close accounts when the user violates a platform’s terms of use, which usually include clauses about risks of physical harm or threats to public safety. Yet the sheer amount of drugs and child porn trafficked on Facebook, Instagram, and Snapchat suggests social media companies fail to excise even blatantly illegal content.

Shutting down profiles of dangerous people and organizations doesn’t eradicate the dangers they pose.

In June 2017, Facebook published an essay titled “Hard Questions: How We Counter Terrorism.” In the piece, Facebook’s Director of Global Policy Management Monika Bickert and Counterterrorism Policy Manager Brian Fishman wrote that the platform intended to be “a hostile place for terrorists.” They said Facebook “remove[s] terrorists and posts that support terrorism whenever [the company] becomes aware of them.” But those guidelines are not acted upon consistently. Two years later, the livestreamed video of the 2019 Christchurch terrorist attack went viral as it unfolded.

Even when enforced, closures are regularly outpaced by the creation of new accounts. Not only is it easy to reopen accounts once shut, but it is also easy to create fake accounts, obfuscate the identities associated with them, and use settings to filter out public scrutiny. And we are still faced with the question of whether that is even productive: Is there a benefit to letting terrorists and criminals use social media, as they frequently—and very openly—share actionable intelligence?

All evidence would indicate that there is. After a failed Iranian-backed offensive by the beleaguered Syrian regime to regain control of the city of Aleppo in 2015, members of Islamic Revolutionary Guard Corps-trained brigades began posting photos of the experience on Facebook, Twitter, and Instagram. These photos revealed the geolocation of fighters, the aircraft they boarded, the weapons they carried, the identities of their comrades and commanders, and the airports the Iranian regime frequented in its logistical efforts.

As fighters died, militias took to social media to create commemorative pages that posted photos of funerals, graves, and shrines, as well as martyrdom videos that included names of fighters and the battles where they met their death. Their friends responded with likes and comments lamenting the losses—revealing their identities in the process. This wealth of open-source information allowed real-time intelligence gathering. Selfies taken on board Iran Air aircraft, for example, proved that the Iranian commercial carrier was being employed to support the fighting—a sanctionable activity. The United States eventually sanctioned Iran Air.

The usefulness of social media to criminals and terrorists is even more obvious when one goes beyond what its users post publicly. In April 2017, Paraguayan authorities determined that drug dealers potentially linked to Hezbollah used social media platforms’ instant messaging tools to communicate—in addition to email and WhatsApp. But they only discovered the use of encrypted communication through social media instant messaging apps upon seizing the traffickers’ electronic devices during a raid. The contents of these devices were provided to the Foundation for Defense of Democracies by a Paraguayan intelligence source.

Social media firms are reluctant to let intelligence and law enforcement agencies peep behind the privacy walls of active accounts because it would prompt a public outcry; after all, user privacy is at stake. At the same time, making those walls impenetrable prevents those same agencies from doing their job—namely, keeping us safe.

Giving intelligence agencies a backdoor to private social media activity no doubt raises privacy concerns and fears of a Big Brother surveillance state. But what the public fails to realize is that Big Brother is no longer the state. It is the tech giants, which police user content according to their own politically driven algorithms rather than submitting themselves to the government's due process.

At present, material reaches law enforcement only through a lengthy subpoena process that requires demonstrating probable cause. That process must become swifter: Recent data show that government requests for access to user data are only growing. If the government presents solid evidence that a community page is engaged in illicit trafficking or that certain users are plotting a terrorist attack in their instant messages, it needs timely access to those communications; otherwise, its only recourse is seizing devices after the fact. Here, the benefits of surveillance trump privacy.


Accounts linked to terrorist groups and sanctioned entities like al Qaeda and Hezbollah—such as the recently sanctioned Al-Mustafa International University—deserve to be removed from social media platforms. And U.S.-based firms such as Facebook, Instagram, Apple, and Pinterest should all work harder to block U.S.-sanctioned entities from using their platforms for “brand” promotion, if one can call it that. But these groups’ supporters fall in a gray area—where monitoring their accounts does more for national security than shutting them down.

These individuals may elect to use U.S. social media platforms because the privacy protections outweigh the risks of storing sensitive data on U.S. servers. Though some may set up alternate profiles on Chinese- or Russian-based platforms, fully abandoning U.S.-based platforms would cost illicit actors visibility and influence—which they don't want. Making these accounts more accessible to investigators, then, won't function as a deterrent. It is a win-win situation: By keeping quiet—without making a public fuss about deplatforming—U.S. authorities can spy on illicit actors without betraying their trust.

The challenge, then—for both social media platforms and government agencies—is to implement regulations and devise mechanisms that make accounts linked to terrorist groups accessible to law enforcement and intelligence agencies in real time. That is an arrangement social media platforms likely do not want to contemplate, but they should. Oftentimes, the evidence needed to thwart the next terrorist attack or piece together the puzzle of a criminal network is readily available; it is just hidden behind social media filters or buried in the maze of a social media account.

Social media firms may be less reluctant to take this step if duty-of-care parameters are expanded. Duty of care renders companies responsible for the virtual environment they create, obligating them to ensure a safe space for users. Far from a carte blanche to censor heterodox ideas, this principle should be used to overcome social media firms' reluctance to disclose problematic content to law enforcement. Protecting communications between traffickers and terrorists, after all, should not be construed as privacy; firms know who these people are and can see what they are posting—even if it lurks behind privacy settings. The government should impose fines on companies that fail to provide swift access to these accounts. Similarly, the legal process by which law enforcement gains access to user content should be simplified.

Silicon Valley must learn that privacy and freedom are not all-or-nothing. There is a balance to be struck—and it’s one our safety depends on.

Emanuele Ottolenghi is a senior fellow at the Foundation for Defense of Democracies. Twitter: @eottolenghi
