When Hate Goes Viral
Publicizing attacks and exploiting social media is the new normal for terrorism.
Three years ago, on March 15, Brenton Tarrant inaugurated a new era of mass terrorism when he livestreamed his mass shootings at two mosques in Christchurch, New Zealand, where he killed 51 worshippers. It was the bloodiest white supremacist attack since 2011, when Anders Breivik killed 77 people, bombing a government complex in Oslo (killing eight) and then proceeding to an iconic summer camp for young progressives at Utoya, where he murdered 69 people.
Tarrant proved a harbinger of things to come and, as with Breivik, entered the white supremacist pantheon. A month after his attack, a 19-year-old man yelling antisemitic slurs shot worshippers at the Chabad of Poway synagogue in California, killing one person and wounding several others. Inspired by Tarrant, he also left behind a manifesto and tried to livestream his attack, but his camera malfunctioned.
This essay is adapted from Spreading Hate: The Global Rise of White Supremacist Terrorism by Daniel Byman (Oxford University Press, 288 pp., $29.95, March 2022).
In August 2019, a Norwegian man attacked a mosque in Baerum, taking inspiration from Tarrant. In October of the same year, in Halle, Germany, a man attacked a synagogue during Yom Kippur services, livestreaming footage from his helmet-mounted camera on Twitch, a platform best known for its video game content. On April 11, 2020, a Texan right-wing terrorist livestreamed on Facebook as he drove around looking for police officers to ambush.
Publicizing attacks and otherwise exploiting social media is the new normal for terrorism. The good news is that companies are responding to the threat, and future Tarrants face more barriers if they want to star in their own lethal live video games. In addition, governments are more aggressively monitoring social media, and terrorists of all stripes are more likely to be detected and stopped as a result.
The bad news is that white supremacists and other radicals can still use social media to whip up hatred and advance their cause. Indeed, while social media companies can and should do better, much of the problem today is politics, where extreme positions like white supremacy are tolerated, making it harder for companies to crack down on those who promote hate.
Tarrant exploited both the most popular social media platform in the world, Facebook, and one of the more obscure ones, 8chan. A controversial website, 8chan portrayed itself as a free speech haven and was popular with conspiracy theorists, white nationalists, and other extreme voices before being taken offline when the content delivery network Cloudflare and other service providers stopped providing support for it in August 2019. (It has since come back as 8kun.) Tarrant also uploaded his manifesto to MediaFire, ZippyShare, and other small file-sharing sites.
With the advance notice Tarrant provided, 8chan users were ready to download his Facebook Live video and save it for later rebroadcasting. In total, the original video was viewed fewer than 200 times during the live broadcast and approximately 4,000 times before it was removed.
Copies, however, spread: In the first 24 hours after the shooting, Facebook blocked 1.2 million copies of the video from being uploaded and removed another 300,000 that had managed to slip through its outer defenses. Tarrant’s account was removed, as were imposter accounts created by individuals claiming to represent him. Facebook also designated the shooting as a terrorist attack, so praising or supporting it was not permitted on the platform.
Facebook coordinated with its partners in the Global Internet Forum to Counter Terrorism, a partnership among the leading internet companies to prevent terrorism-related video and content from moving between major platforms. As part of this cooperation, Facebook “hashed” the original video of the livestream, essentially giving it a digital fingerprint that allowed Facebook’s artificial intelligence systems and those of other platforms to better identify it.
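In rough outline, hash-matching works like the sketch below: a platform computes a fingerprint of a video it has already identified, contributes that fingerprint to a shared database, and checks every new upload against the database before the content goes live. This is only an illustrative Python sketch under simplifying assumptions of my own, not any company's actual system; the function names are invented, and the plain SHA-256 digest used here catches only exact copies, whereas production systems rely on perceptual hashes designed to survive re-encoding and minor edits.

import hashlib

# Illustrative sketch only (not any platform's real code): a toy
# "hash and block" pipeline. Industry hash-sharing systems use
# perceptual hashes that tolerate re-encoding and edits; the
# exact-match SHA-256 digest below is a simplifying assumption
# and would miss any modified copy of a flagged video.

shared_hash_database: set[str] = set()  # fingerprints of known terrorist content


def fingerprint(file_bytes: bytes) -> str:
    """Compute a digital fingerprint of an uploaded file (here, a SHA-256 digest)."""
    return hashlib.sha256(file_bytes).hexdigest()


def flag_known_content(file_bytes: bytes) -> None:
    """Add an already identified video's fingerprint to the shared database."""
    shared_hash_database.add(fingerprint(file_bytes))


def should_block_upload(file_bytes: bytes) -> bool:
    """Check a new upload against the shared database before it goes live."""
    return fingerprint(file_bytes) in shared_hash_database

The gap between exact matching and perceptual matching is one reason even hashed content can resurface once users start editing it, as happened within hours of the Christchurch attack.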
Despite these efforts, some users quickly downloaded and cloned, edited, and reposted the video. As Kate Klonick, a legal expert who studies technology companies, told me, “Facebook did all you could possibly do after Christchurch, and it was not enough.”
YouTube later reported that in the hours after the shooting, videos of it were being posted at a rate of about one per second, with users editing the size of the clips, adding additional footage, or otherwise trying to fool the automatic censors. Links to the video and Tarrant’s manifesto showed up on numerous platforms, ranging from household names like YouTube and Facebook to lesser-known corners of the ecosystem such as 8chan.
Tarrant claimed that YouTube videos were an important source of his white supremacist beliefs, so it is not surprising that he sought to make his mark on social media. Tarrant was not alone: Around 90 percent of Americans ages 18-24 use YouTube, more than any other online platform, and a New York Times investigative report found that radicals cite YouTube as the most frequent source of their inspiration to turn to extremism, a problem found throughout the world, according to experts I interviewed.
Adulation of Tarrant continues to this day. On sites like Telegram, favored by many radicals because of its emphasis on encryption and privacy, fans churn out memes and videos in praise of Tarrant and call for others to “do a Tarrant.” Video games have been modified to let players take the role of Tarrant and move through the Christchurch mosques shooting worshippers. Tarrant’s manifesto, despite efforts to take it down, was eventually translated into French, German, Ukrainian, and at least a dozen other languages.
Social media allows both groups and individuals to quickly and cheaply spread their messages. The technologies involved are simple to use, and members can quickly trade best practices, such as how to set up a virtual private network and how to use more anonymous email accounts. On social media platforms, individuals can upload videos and manifestos, communicate with friends and admirers, and otherwise build an organization and spread their cause—all for free.
A favored white supremacist tactic—one their left-wing foes also use against them—is to “troll and dox” their enemies. In one incident, National Action, a British neo-Nazi group, targeted Luciana Berger, a member of the U.K. Parliament, in what it called “Operation Jew Bitch,” spamming her with endless antisemitic tweets. Doxxing involves finding private information about an individual, such as a home address, and publishing it online, implicitly (and at times explicitly) calling for real-world violence or harassment against them. One white supremacist created an online “registry” (since taken down) that listed the names and other personal information of “race traitors”: white women who dated Black men.
Terrorists, like extremists of all sorts, also enjoy social media because they can create their own narratives. For years, national newspapers, network news, and other traditional media acted as gatekeepers: Even though terrorists depended on the media for publicity, coverage of their actions was usually negative. Today, they can turn that around: White supremacists can take a world event, such as the 2020 Black Lives Matter protests following the murder of George Floyd, and frame it as left-wing and Black violence run amok.
Although the white supremacist movement has always had global aspects, social media puts all of this on steroids. White supremacist watcher Heidi Beirich observed to me that when she began watching this community in the late 1990s, concerns were local, even within the same country: Mexican immigrants in the U.S. Southwest, Jews in the Northeast, and so on. Now, there is more of a shared narrative focusing on supposed white genocide and the so-called Great Replacement, with believers all around the globe adhering to and building up the message.
The virtual and real worlds now combine. On Facebook, groups might urge new recruits to meet with local talent spotters or organize real-world rallies designed to intimidate Muslims or immigrants. In these in-person settings, recruits are then further radicalized.
The violent fringe of white supremacy regularly interacts with the broader right-wing and white supremacist online ecosystem, including alt-right figures who support former U.S. President Donald Trump and claim they are merely true conservatives, as well as “alt-lite” figures who revel in mocking liberals. They use jokes and sarcasm to lampoon the social justice community, feminists, and other supposed villains—and they have flourished. As one extremist noted several years ago, “I’m not sure the left understands the monumental ass-whupping being dished out to them on YouTube.”
White supremacists today employ an ironic style to attract recruits and protect themselves from being blocked by social media companies, often claiming that their hateful remarks are just edgy jokes. The neo-Nazi Daily Stormer website, for instance, published a style guide that calls for a “humorous, snarky style” while noting that the “Prime Directive” is to “Always Blame the Jews for Everything” and that “Women should be attacked” as well. 8chan users have been known for “shitposting,” deliberately inserting provocative material into posts in order to draw attention and spark a reaction, regardless of whether the user believes the material.
Part of the problem is how the companies themselves attract and retain users. An internal Facebook study from 2016 found that “64% of all extremist group joins are due to our recommendation tools.” Dylann Roof, who murdered nine people at a Black church in 2015, said he began his journey into white supremacy by Googling “black on white crime.” One of the top search results led him to the hate group Council of Conservative Citizens’ website, where he found white power propaganda.
Social media companies have made progress in trying to stop violent and hateful users like Roof and Tarrant. Leading companies have changed their policies, prohibiting content that “dehumanizes” other groups and broadening their focus and rules to stop a wider swath of violent groups, not just those formally designated as terrorists. They have also increased the number of human content moderators and, more important given the scale of the challenge, improved their AI tools (though non-English-language content moderation remains dramatically under-resourced).
(Full disclosure: I have served as an occasional paid consultant for Google.)
The simple step of kicking groups and individuals off social media—“deplatforming” them—can be surprisingly effective. This approach grew after the 2017 white supremacist rally in Charlottesville, Virginia, and spread even further after the Jan. 6, 2021, insurrection at the U.S. Capitol, when white supremacists, along with a range of conspiracy theorists and anti-government extremists, were thrown off major platforms—and an entire platform they used, Parler, was taken off Amazon Web Services.
After being deplatformed, users cannot troll their enemies as effectively, depriving them of one of their most important tactics. As the white nationalist Richard Spencer lamented in 2018, “at one point, say two years ago, Silicon Valley really was our friend … what has happened in terms of the Silicon Valley attacks on us are, just, really bad.”
However, many companies’ business models depend on keeping people glued to their screens so they can sell advertisements, and provocative content helps do this. The company officials who are genuinely trying to fight extremism have far less power than the same company’s marketers, who are focused on selling data and growing the user base.
Some extremists have turned to small alternative platforms. Often, these platforms have no rules, do not require logins, and otherwise emphasize the freedom to act without consequences. Matthew Prince, CEO of the web security firm Cloudflare, described them as “lawless by design.”
Some of these platforms, like 8chan, were created in part because mainstream platforms kicked off extreme users. Gab, a sort of hybrid of Twitter and Reddit, declared its goal as seeking “to make speech free again and say FUCK YOU Silicon Valley elitist trash.” In addition to having limited reach, obscure platforms like Gab and 8chan are often technically and aesthetically cumbersome and unfamiliar to new users—there is a reason people prefer Facebook and Twitter.
White supremacists are also less group-oriented than other extremists, complicating the challenge: You remove al Qaeda content by focusing on the group, but white supremacists are often an amorphous set of individuals connected by complicated networks. Making this problem harder, the U.S. government does not designate domestic white supremacist groups, even violent ones, as terrorist organizations, leaving the companies without government guidance.
For all their promise, new technologies can be costly for extremists. Older organizations like the Ku Klux Klan, already in decline, have proved far less social media savvy than other white supremacist organizations and alt-right figures. This reordering has furthered the decentralization of an already fragmented movement: There are very few organized violent white supremacist groups of any consequence today.
As a result, many would-be recruits have no place to train, operations are usually amateurish, and many of the movement’s strategies for victory are fanciful to the point of delusional. At times, absurdity results. One neo-Nazi Feuerkrieg Division “commander” from Estonia, for instance, turned out to be a 13-year-old boy whose anonymity online enabled him to play a major role in the organization.
Perhaps most important, it’s easy for the FBI and civil society organizations like the Southern Poverty Law Center to infiltrate the movement—after all, as Peter Steiner’s now-famous 1993 New Yorker cartoon presciently noted, “On the internet, nobody knows you’re a dog.” Even encrypted platforms are vulnerable, as informants may pass information to the government. German intelligence, for example, was able to monitor WhatsApp despite white supremacist groups believing its encryption protected them.
Despite legitimate fears about online radicalization, the real world—and its interaction with the online sphere—still plays the most important role in radicalization. Acquiring some hard skills, such as handling explosives, is facilitated by in-person instruction. Even more important, without discounting the reality of online relationships, in-person ones tend to be stronger and more sustained.
The biggest challenge in countering white supremacist terrorism is not technical but political, especially in the United States. Because of the interaction between white supremacists and the more mainstream right-wing world, companies fear that reducing white supremacist content on their platforms will indirectly reduce content from conservative publishers and users, leading to charges of bias.
False positives that mistakenly remove the accounts of obscure Muslim users are one thing; removing the accounts of powerful white men with thousands or even millions of followers is another, at least as far as many U.S. companies are concerned. Twitter, for example, has proved reluctant to ban many white supremacists from its platform, and, in internal discussions, its leadership has explicitly cited the political risk of doing so.
Given the polarization in the United States, it seems unlikely that political leaders will speak with one voice and reject white supremacists, even though this would give cover for social media companies to act more aggressively. As a result, we will see incremental improvements, but companies will still hesitate to act decisively.
Daniel Byman is a senior fellow at the Center for Strategic and International Studies and professor in the School of Foreign Service at Georgetown University. His latest book is Spreading Hate: The Global Rise of White Supremacist Terrorism. Twitter: @dbyman