The Real Threat to Social Media Is Europe

The EU is passing legislation that will strain free speech protections beyond the breaking point.

By Jacob Mchangama, CEO of The Future of Free Speech.
A picture taken on Sept. 4, 2019, shows the logos of the social networking websites Instagram, Twitter, and Facebook displayed on a smartphone screen in Lille, northern France. DENIS CHARLET/AFP via Getty Images

Elon Musk’s acquisition of Twitter has caused fear and loathing among pundits and politicians wary of his vow to revert the platform to its position as “the free speech wing of the free speech party.” These fearful elites might seek solace in Europe, where Musk’s techno-utopian dreams of online free speech absolutism now face unprecedented obstacles. Whether they will recognize the dystopian aspects of Europe’s own technological culture is another question.

The European Union is in the midst of finalizing the Digital Services Act (DSA), an ambitious legislative attempt to create a “global gold standard” on platform regulation. After five trilogues, on April 23, the European Parliament and European Council reached a provisional political agreement on the DSA. Given the EU’s economic and political clout, the DSA may have a substantial impact beyond Europe through the so-called “Brussels Effect.” As such, the DSA is likely to affect the practical exercise of free speech on social media platforms, whether located in Silicon Valley or owned by American tech billionaires.

While free speech is protected by both the EU Charter of Fundamental Rights and the European Convention on Human Rights, these legal instruments give governments much greater leeway than the First Amendment of the U.S. Constitution when it comes to defining categories, such as hate speech, that can be regulated. Nor does European law provide as robust a shield against intermediary liability as Section 230 of the Communications Decency Act, which protects U.S. online platforms from liability for most user-generated content.

This balancing approach to free speech among European democracies is apparent from the content of the DSA. The text does include some elements worthy of praise. These include greater transparency obligations on large social media platforms, requiring them to lift the veil on their removal decisions and publish annual reports on content moderation and enforcement. This will allow users to better understand how content is recommended to them and how moderation decisions are made; users will also enjoy a right to reinstatement if platforms make mistakes. It is also a positive step that the DSA does not impose general monitoring obligations on social media platforms, which would have further increased the use of content-filtering algorithms to scan, flag, and remove supposedly illegal content.

But despite these positive elements, the DSA does not strike the right balance between countering genuine online harms and safeguarding free speech. It will most likely result in a shrinking space for online expression, as social media companies are incentivized to delete massive amounts of perfectly legal content.

In fact, the DSA includes a number of features that are likely to cause serious collateral damage to online free speech in Europe. The paradigm of short mandatory takedown limits, which requires platforms to quickly delete reported illegal content or face steep fines, was first adopted by Germany in 2017 as part of the Network Enforcement Act, or NetzDG. By 2020, it had been copy-pasted by more than 20 countries, including authoritarian regimes such as Russia and Turkey. This approach is now partly replicated in the DSA, which establishes a “notice and action” mechanism: Intermediaries must act on receipt of such a notice “without undue delay,” taking into account the type of content and the urgency of removal. Very large online platforms face fines of up to 6 percent of their annual worldwide turnover in cases of noncompliance. While there are no explicit time limits, the mandatory takedown approach is clearly inspired by the NetzDG. Legislation that obligates platforms to remove illegal content in a short time period is problematic for at least two reasons.

First, while many politicians frame a plethora of issues, including online terrorist propaganda, hate speech, and disinformation, in terms of the illegality of the content, available data suggests that most of the problematic content online is legal.

A recent legal analysis (which I co-authored) of 63 million Danish Facebook comments showed that while an algorithm based on Facebook’s community standards flagged 1.4 percent of the comments as “hateful attacks,” only about 0.0066 percent actually violated the relevant provisions of the Danish criminal code on hate speech, threats, glorification of terrorism, incitement to unlawful acts, and threats against or insults of public officials. Another survey I co-authored, of the Facebook accounts of five Danish media outlets, found that only 1.1 percent of deleted comments violated the criminal code, while almost half of the deleted comments were not hateful, offensive, or threatening at all. Likewise, a 2021 study found that, contrary to popular perceptions in the media, Donald Trump’s incendiary 2016 presidential campaign did not result in a spike of hateful rhetoric on Twitter. Accordingly, the most popular social media platforms hardly constitute the “Wild West” they are often perceived to be by prominent European politicians such as Emmanuel Macron.
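To put those percentages in perspective, here is a back-of-the-envelope calculation of the gap between algorithmic flagging and actual illegality in absolute numbers. It is my illustration of the figures cited above, not a calculation from the studies themselves:

```python
# Rough arithmetic from the Danish Facebook study cited above:
# 63 million comments, 1.4 percent flagged as "hateful attacks" by the
# algorithm, roughly 0.0066 percent actually illegal under Danish law.
total_comments = 63_000_000
flagged_rate = 0.014
illegal_rate = 0.000066

flagged = total_comments * flagged_rate   # ~882,000 comments
illegal = total_comments * illegal_rate   # ~4,200 comments

print(f"Flagged by the algorithm: {flagged:,.0f}")
print(f"Actually illegal:        {illegal:,.0f}")
print(f"Flagged per illegal comment: {flagged / illegal:.0f}")  # ~212
```

On these numbers, for every comment that actually violated Danish law, the algorithm flagged roughly 200 others that did not.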

Second, resolving the legal question of whether an utterance is illegal is complex. A 2021 report I co-authored found that national courts in five European democracies take, on average, 778.47 days to decide hate speech cases. By imposing short mandatory takedown limits, governments are demanding that tech platforms make, in just a few hours or days, legal determinations that take trained judges months or years. This will almost inevitably result in platforms erring on the side of removal: The steep fines for noncompliance create a strong economic incentive to purge rather than protect user speech.

All in all, these aspects of the DSA are likely to result in substantial collateral damage to freedom of speech, affecting the ability of ordinary people to debate issues such as immigration, gender, religion, and identity and sometimes affecting the very minority groups that hate speech bans are supposed to benefit.

The objections set out above were also why the French Constitutional Council struck down parts of the so-called Avia Law (a French NetzDG clone) as unconstitutional violations of freedom of expression and why the U.N. Human Rights Committee expressed concerns about the NetzDG’s compliance with free speech norms under international human rights law (IHRL).

Proponents of tougher regulation of content moderation often point to studies showing that the NetzDG has not resulted in mass takedown requests or “overblocking.” However, such studies miss the most profound effect of mandatory notice-and-takedown regimes. Rather than reacting to user complaints, platforms proactively expand their own definitions of hate speech and other prohibited categories in their terms of service and ramp up automated content moderation to detect and delete offending content before any user has a chance to complain.

In the first quarter of 2018, Facebook removed 2.5 million pieces of content for violating its community standards on hate speech. By the third quarter of 2021, that number had increased almost tenfold, to 22.3 million. This was mainly the result of increased reliance on AI-based content-filtering algorithms: In 2018, AI caught 4 out of 10 violations before any user complaint; by the third quarter of 2021, that share had risen to 96.5 percent.
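The scale of the shift is easy to verify from the figures above (a simple restatement of the cited numbers, not new data):

```python
# Facebook hate speech removals, per the figures cited above.
q1_2018 = 2_500_000    # pieces removed in Q1 2018
q3_2021 = 22_300_000   # pieces removed in Q3 2021
print(f"Growth in removals: {q3_2021 / q1_2018:.1f}x")  # ~8.9x, "almost tenfold"

# Share of removals detected by AI before any user complaint.
proactive_2018 = 4 / 10
proactive_2021 = 0.965
print(f"Proactive detection: {proactive_2018:.0%} -> {proactive_2021:.1%}")
```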

Ironically, the result of this development is that laws aimed at creating democratic accountability on private megaplatforms end up handing over even more power to these platforms when it comes to regulating the speech of the billions of people who rely on social media to share and access information and opinions.

The uncomfortable truth is that when it comes to the regulation of speech on global centralized social media platforms, there is no “perfect middle ground”—only a set of imperfect policies that inevitably include dilemmas and trade-offs.

However, there are alternative models to the DSA that would provide stronger free speech protections and empower users at the expense of platform power. One model would be to encourage the implementation of human rights standards as a framework of first reference in the moderation practices of large social media platforms. This would result in a social media environment that is both more transparent and more protective of users’ free speech in categories such as hate speech and disinformation. Using human rights law as the standard of content moderation would also give platforms the norms and legitimacy to resist demands to censor dissent made by authoritarian states keen to exploit the well-intentioned but misguided attempts by democracies to rein in harmful online speech.

However, IHRL is not a panacea and does not address categories such as spam and porn. Even if human-rights norms are more transparent and consistent than the current terms of service on issues such as hate speech and disinformation, applying these norms at scale on platforms facilitating hundreds of millions—or even billions—of users will at best decrease, not eliminate, the number of questionable and wrong decisions by both human and automated content moderators.

Moreover, many users are likely to resist human rights standards, since human rights law protects much speech that is “awful but lawful.” That could drive away users—and thus ultimately revenue—a prospect that should make an impression even on Musk, who just spent $44 billion to buy Twitter. These concerns might be partly offset by giving users more control over speech through various methods of distributed content moderation. For instance, many women report being confronted with misogynistic abuse, which leads some to self-censor or even leave platforms. But if third parties, such as women’s rights organizations, were permitted to develop voluntary filters that individual users could apply at will, women could avoid much of the abuse that might be legal but is nevertheless hurtful, frightening, and off-putting. Such “bring-your-own” filters could also be used for anti-LGBT content, antisemitism, Islamophobia, and so on.
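To make the idea concrete, here is a minimal sketch of what distributed, opt-in moderation could look like. Everything in it is hypothetical: the `Filter` type, the example term list, and the subscription mechanism are illustrative assumptions, not a description of any existing platform’s API.

```python
from dataclasses import dataclass, field
from typing import Callable

# A hypothetical third-party filter: a named predicate that decides
# whether a given post should be hidden for users who opt in.
@dataclass
class Filter:
    name: str
    should_hide: Callable[[str], bool]

# Illustrative filter a women's rights organization might publish.
# A real filter would use a far more sophisticated classifier.
MISOGYNY_TERMS = {"example_slur_1", "example_slur_2"}

misogyny_filter = Filter(
    name="NGO misogyny filter (illustrative)",
    should_hide=lambda text: any(t in text.lower() for t in MISOGYNY_TERMS),
)

# The key design point: filters are applied per user, at the user's
# discretion, rather than through centralized platform-wide removal.
@dataclass
class UserFeed:
    subscribed_filters: list[Filter] = field(default_factory=list)

    def visible_posts(self, posts: list[str]) -> list[str]:
        return [
            p for p in posts
            if not any(f.should_hide(p) for f in self.subscribed_filters)
        ]

feed = UserFeed(subscribed_filters=[misogyny_filter])
posts = ["a normal post", "a post containing example_slur_1"]
print(feed.visible_posts(posts))  # only the normal post remains visible
```

The design point is that filtered content remains available to users who have not opted in: The baseline of lawful speech is preserved, while targeted groups gain a tool to avoid abuse.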

The obvious advantage of combining IHRL with distributed content moderation is that it reserves centralized content moderation for the worst and most heinous content while giving users agency over the content they wish to see and engage with. This could serve as a brake on the censorship race to the bottom, in which various governments and interest groups insist that speech they find particularly concerning be prohibited, and platforms find it difficult to resist, since they are ultimately more concerned with stakeholder management and public relations issues that affect their bottom line than with upholding principled free speech norms.

Jacob Mchangama is CEO of The Future of Free Speech, the author of Free Speech: A History From Socrates to Social Media, and Senior Fellow at the Foundation for Individual Rights and Expression.
