Argument

Jihadis Go to Jail, White Supremacists Go Free

Western governments are guilty of a double standard when it comes to policing digital hate culture. If they want to prevent the next attack, they need to recognize the threat of online white supremacists and act to stop them.

An armed police officer is seen in front of Al Noor Mosque in Christchurch, New Zealand, on May 11. Kai Schwoerer/Getty Images

After 51 Muslims were killed in the March mosque shootings in Christchurch, New Zealand, there have been renewed calls to confront the online spaces in which right-wing terrorists are radicalized. Brenton Tarrant, the right-wing extremist who carried out the attack, was deeply influenced by digital hate culture: the loose coalitions that researchers have identified among white power activists, anti-Muslim activists, neo-Nazis, alt-right trolls, men’s rights activists, and other groups, which organize through websites, forums, social media platforms, and encrypted messaging apps such as WhatsApp, Signal, and Telegram. Digital hate culture is not a new phenomenon; white supremacists in the United States have been using digital communications technology for three decades. Nor is this culture unique to social media; blogs and websites have long been part of its networks.

Digital hate culture—at least in North America and Western Europe—has one core premise: that the “white race” is being replaced as a consequence of mass immigration, Islam, and liberals who have embraced cosmopolitanism and multiculturalism. This is better known as the “white genocide” myth, which took center stage in Tarrant’s manifesto. Demographic shifts in Europe are indeed occurring, but the myth rests on the entrenched belief, expressed by Tarrant and white supremacists across the web, that these new arrivals are colluding with liberal multiculturalists and globalists (an anti-Semitic code word for Jews) to destroy European culture itself.

In contrast to the rigorous online policing of jihadi groups and their potential recruits, social media companies have been reluctant to challenge right-wing extremism. Jihad has no mainstream political support in any liberal democracy, and the views and online networks of jihadis are rightly countered, disrupted, and even shut down through government and private sector cooperation.

There is a widespread consensus that the free speech implications of such shutdowns are dwarfed by the need to keep jihadi ideology out of the public sphere. When it comes to right-wing extremism, white supremacy, and white nationalism, however, there is no such consensus. Instead, right-wing views are defended on free speech grounds, giving extremists space to spread their ideologies. The latest example of this double standard comes from the White House, which refused on Tuesday to join an international effort to clamp down on online hate speech.

And it is not only the Trump administration; tech companies have thus far refused to treat violent white supremacist rhetoric the same way as violent jihadist rhetoric. Indeed, in a discussion of white supremacists on Twitter, an employee of the company admitted that using the tools Twitter deploys against the Islamic State on white supremacists would mean banning Republican politicians from the platform.

With the ascendance of white identity politics in Western conservative parties and the rise of the populist radical right, right-wing extremists have been shielded from the full force of the private sector and government responses that would disrupt their online communications. It is not simply U.S. President Donald Trump’s dog whistles and retweets of exponents of digital hate culture such as Lauren Southern and Jayda Fransen. His administration has also cut the resources devoted to countering right-wing extremism.

Southern is a Canadian activist who joined the “Defend Europe” campaign, organized by Generation Identity (a pan-European, extreme right-wing street movement), which sought to obstruct rescue vessels saving migrants from drowning in the Mediterranean Sea in 2017. In 2018, Southern was banned from entering Britain on the grounds that she might incite racial hatred. Fransen, the deputy leader of Britain First, an anti-Muslim political party that amassed over a million likes on Facebook, has antagonized Muslim communities in the U.K. and was jailed in 2018 for racially aggravated harassment.

These are just two examples of influential figures in digital hate culture, which now exists in a gray area between legitimacy and extremism—an ambiguity afforded by popular conservative outlets such as Breitbart and Fox News and by tacit and overt support from Republican politicians in the United States, including Reps. Paul Gosar and Steve King and Trump himself.

In Europe, Geert Wilders, a Dutch member of parliament and leader of the far-right Party for Freedom, and Gerard Batten, a member of the European Parliament and leader of the UK Independence Party, have expressed support for Tommy Robinson, the founder of the English Defence League. Robinson is currently campaigning as an independent to represent the North West England constituency in the European Parliament.

The notion that Twitter and other platforms cannot shut down white supremacists without affecting conservative politicians illustrates that fear of a backlash from politicians and right-wing media outlets that weaponize hate outweighs the principled application of the community guidelines to which all users of these platforms are bound. The platforms, of course, choose how to enforce their guidelines, and it is unconscionable that they have failed to do so when politicians, pundits, columnists, and media outlets violate these rules and exploit the platforms to stoke hatred, win votes, and attract audiences.

Social media companies have dithered when it comes to fighting right-wing extremists because they are more concerned with their public image and their revenue streams. Facebook, for example, long exempted Robinson and Britain First on the grounds that their anti-Muslim hate was legitimate political speech, before its recent about-face. The company’s hesitation in banning the conspiracy theorist Alex Jones of Infowars is another example. All of these figures, of course, cried “censorship” when their accounts were finally taken down.

Jihadis do the same thing when their accounts are banned. The difference is that politicians in the United States and Western Europe are grilling executives of these platforms about their purported censorship of conservative voices, and companies are concerned about the political risks this may present to their continued operation and their user base.

No jihadi leader has complained to the U.S. Congress and had these executives hauled before a committee to discuss charges of censorship. This has led to an evident double standard: Corporate executives and political elites are more troubled by the free speech implications of taking down a far-right activist’s account than by the need to protect the dignity, freedom, and security of minorities.

To their credit, social media companies have significantly changed their approaches and have become more decisive in countering white supremacist hate culture on their platforms. This has led to a backlash by activists on the far-right, whose rallies increasingly focus on the notion that they are being unfairly censored.

Tarrant’s livestream of his massacre at a Christchurch mosque may be one of the most brutal examples of how these platforms are used. But there are more complex ways in which they have supported digital hate culture. YouTube’s recommendation algorithms, for example, have been shown to push viewers toward extremist content, and the company has been reluctant at best to take down white supremacist material. This appears to be changing: Facebook, Twitter, Reddit, and YouTube are increasing their efforts to disrupt white supremacist users on their platforms, but they must do more.

Social media platforms have been crucial in pulling larger audiences into the mythologies expressed by digital hate culture.

Before these companies began taking decisive measures after the Christchurch attack, the guiding framework relied on countermessaging, in which content designed to contradict extremist narratives is pushed to those embedded in these networks in the hope of changing their minds. This approach has had limited effect. Stronger measures such as account bans would do far more to limit extremist recruitment.

The reliance on countermessaging is also expedient for social media companies, whose microtargeted advertising tools play a key role in it. Facebook, for example, offers advertising credit to nongovernmental organizations to target supporters of jihadism or the far-right on its platforms and expose them to counternarratives. It now directs users who search for white supremacist content to the NGO Life After Hate. Moonshot CVE’s Redirect Method, incubated by Google’s Jigsaw division, targets viewers of extremist videos with content that contradicts extremist narratives.

While such efforts are valuable and should not be abandoned, countermessaging alone will not protect those threatened by digital hate culture, and it fails even to reach the darker corners of the internet. The approach also reinforces the double standard: Instead of disrupting key influencers in digital hate culture, which might raise concerns about freedom of speech, the goal is merely to dissuade their audiences.

Indeed, at the same time that mainstream social media platforms were being exploited by white supremacists, an army of anonymous keyboard warriors was also gaining prominence on sites such as 4chan, 8chan, Voat, and many others. Alt-tech platforms like Gab.ai cater specifically to those who have been kicked off the mainstream platforms. Just like jihadis, far-right activists have also moved to Telegram, a messaging app that offers users encrypted communication. These are important sites for creating content that is then disseminated on mainstream platforms—often under the radar of their content moderation teams.

Many recent right-wing terrorists have been influenced by these communities. Anders Behring Breivik, the perpetrator of the July 2011 attacks in Norway, was an avid reader of counterjihad blogs and a contributor to the neo-Nazi, U.S.-based web forum Stormfront. Darren Osborne, who drove a van into worshippers near Finsbury Park Mosque in London, killing one person, was found to have been rapidly radicalized by online anti-Muslim discourse about Muslim grooming gangs in Britain. Robert Bowers, who was charged with the murder of 11 Jews in Pittsburgh, was an active user of Gab.ai and worked with alt-right extremists to dox a left-wing blogger.

The Christchurch killer, Tarrant, used 8chan to post his manifesto and covered his rifle in slogans referencing historical massacres of Muslims and digital hate culture. John T. Earnest, the prime suspect in a shooting at a synagogue in southern California who also claimed to have set fire to a mosque in the region, may have followed suit: A user identifying himself as Earnest posted an open letter on 8chan a few hours before the attack.

The online experiences of white supremacist terrorists suggest a decentralized process by which individuals find so-called truth in the white genocide conspiracy theory and other mythologies and act on them on their own. This is not lone-wolf terrorism; such acts are fostered and approved by an online culture of hate.

Surveillance of these spaces, akin to the monitoring of spaces used by jihadis, is necessary. But the political will to establish such surveillance, with the resources and tools needed to make it work, is unlikely to materialize as long as politicians and CEOs continue to prioritize the free speech rights of white supremacists over the security of their potential victims. And in an online culture that reveres nihilistic irony and ambivalence, it is sometimes difficult to discern the actual threats. Surveillance might help prevent a few cases of violence, but it will not eliminate the problem. The real challenge is to police digital hate culture as a whole and to develop the political consensus needed to disrupt it.

On Wednesday, New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron hosted a meeting of ministers from G-7 nations to discuss a plan named the Christchurch Call. While the plan under consideration focuses primarily on the responsibilities of social media platforms, these leaders need to have a broader moral and ethical discussion. The dignity of minorities—whether Jews in the United States, Muslims in New Zealand, or the Rohingya in Myanmar—and the harm that digital hate culture inflicts on their communities must become the center of this debate. The Ardern-Macron meeting presents an opportunity to develop real consensus, and it must start with the following question: Does the entitlement to free speech outweigh the harms that hateful speech and extreme ideologies inflict on their targets?

Starting with this question is not a radical proposition. In fact, it echoes the logic of Section 130 of the German Criminal Code, which covers incitement to hatred and Holocaust denial. The law criminalizes those who “[assault] the human dignity of others by insulting, maliciously maligning, or defaming segments of the population” or publicly “[violate] the dignity” of victims of Nazism. The European Union has also passed legislation outlawing Holocaust denial, though it allows national laws to take precedence.

The passage of such laws recognizes that incitement to hate and violations of dignity can have violent consequences. These threats are not abstract. The recent spate of terrorist attacks with a clear online trail makes abundantly clear the deadly risk of permitting these online communities to operate without law enforcement monitoring. Platforms that serve as hubs for digital hate culture frequently host death threats and incitement to violence, and there is no excuse for law enforcement not to act on such expressions just as it would if a jihadi behaved similarly on an extremist site.

However, there are many less visible ways that digital hate culture harms citizens. Indeed, allowing such dehumanization of whole communities forces innocent people to face insecurity and fear. Hate crimes against minorities—particularly Jews and Muslims—have increased significantly in recent years. Veiled women have been spat on as they board buses, and there have been assaults on busy streets.

The political philosopher Jeremy Waldron has argued that hate speech laws do not protect groups from offense but rather protect their dignity: “a person’s basic entitlement to be regarded as a member of society in good standing, as someone whose membership of a minority group does not disqualify him or her from ordinary social interaction. That is what hate speech attacks,” he wrote.

Attacking the dignity of minorities is what digital hate culture is designed to do. Taking dignity as a starting point can drive regulations that do not infringe on the freedom of speech while sharpening governments’ tools to punish those who denigrate and threaten minorities through digital platforms.

Recognizing attacks on dignity can also help drive how social media platforms regulate content and guide investigatory and monitoring efforts aimed at websites like 8chan. It provides a moral and ethical framework for disrupting and limiting access to such places and affirms the right of people, online and offline, to live as equals in dignity. While such recognition may not stop the next attack planned online, it is a useful complement to aggressive and decisive intelligence gathering and policing aimed at intercepting and punishing those planning premeditated violence.

Attacks on dignity online have been tacitly endorsed by a political debate that affirms free speech without balancing this entitlement against the harms that these attacks may cause. Challenging the claim that digital hate culture ought to be protected on the grounds of free speech would drive much-needed changes to the current anemic approach to right-wing extremism online.

Bharath Ganesh is a political geographer focusing on new media, political communication, and cultures of hate and intolerance online using computational and qualitative methods. He is currently a postdoctoral researcher at the Oxford Internet Institute and the Computational Propaganda Project. Twitter: @bganesh11