How to Judge Facebook’s New Judges
The social media company’s search for consistent rules has been long, winding, and entirely self-defeating.
“We Must Save Democracy From Conspiracies,” insisted British comedian Sacha Baron Cohen in a Time magazine article from Oct. 8, 2020. More specifically, Baron Cohen accused Facebook of being “the greatest propaganda machine in history,” due to the platform’s failure to remove disinformation.
Five days later, Baron Cohen complained on Twitter that Facebook had deleted a post sharing his Time article, as it was accompanied by a picture of a man wearing a face mask with the caption “COVID-19 is a hoax.” Baron Cohen’s double condemnation of Facebook, for removing too little content and also for removing too much, is emblematic of the controversy surrounding the burning question of how, by whom, and according to which principles free speech should be limited on social media. In the words of Techdirt editor Mike Masnick, content moderation at scale is “impossible to do well” given clashing global attitudes on where to draw the line, the vagueness of platform standards, and the fallibility of humans and technology.
Still, Facebook has contributed to the confusion by failing to articulate any coherent free-speech philosophy, even when it attempts to do just that. Long gone are the early days of utopian techno-optimism infused with American First Amendment ideals that turbocharged the spread of hate and hoaxes and clashed with the values of the rest of the world, including European democracies. But what has come instead is an incoherent and messy patchwork. In November 2018, amid growing calls for transparency and accountability, Facebook CEO Mark Zuckerberg announced the creation of an independent Oversight Board (OSB). This group of outside experts was granted the power to make binding rulings in cases where users appeal decisions made by Facebook to remove content from the social network.
On December 1, the OSB announced the first six cases chosen from 20,000 user appeals. Three of the cases the OSB has agreed to hear involve “hate speech,” whereas the others concern “nudity,” “dangerous individuals,” and “violence and incitement.” They include a screenshot of tweets by former Malaysian Prime Minister Mahathir Mohamad saying Muslims had a right to perpetrate violence against French people “for the massacres of the past,” Instagram photos showing female nipples supposedly aimed at raising awareness of breast cancer, and a post with a photo of a deceased child in which the user asks in Burmese why there was no retaliation for China’s treatment of Uighur Muslims.
Yet it’s not at all clear how the OSB will reconcile Facebook’s commitment to free expression with the competing values of safety, dignity, and authenticity. Facebook has adopted constantly changing community standards as states, interest groups, NGOs, the media, and users demand renewed focus on whatever issue creates controversy, whether COVID-19 misinformation, hate speech, or Holocaust denial. The publicly available community standards are only the tip of the iceberg. The confidential implementation standards developed to guide Facebook’s human content moderators are a constantly growing index of prohibited content running to around 12,000 words, which few content moderators truly understand.
Facebook says it respects international human-rights standards on freedom of expression (despite not being legally bound by international human-rights law). But its community standards are significantly less speech-protective than what follows from Articles 19 and 20 of the U.N.’s International Covenant on Civil and Political Rights (ICCPR). Recent years have also seen a dramatic increase in the amount of purged content on Facebook. In the third quarter of 2020, Facebook deleted 22.1 million pieces of content for violating its ban against hate speech, a steep increase from the 2.5 million pieces of content deleted in the first quarter of 2018. Almost 95 percent of the purged hate speech in 2020 was proactively identified by AI before any human user notified Facebook—up from 38 percent in 2018. Whether the millions of posts and comments deleted for hate speech and other prohibited categories each month live up to Facebook’s own standards—or indeed human-rights standards—is an open question, since purged content is not available to the public.
Regardless of whether one thinks that Facebook removes too much or too little content, the lack of transparency about the moderation and algorithmic distribution of content is deeply problematic given the impact that Facebook’s decisions have on what ideas and information can be shared, and by whom, among its 2.7 billion monthly active users. Despite Zuckerberg’s aversion to Facebook acting as a gatekeeper, the platform’s users are subjected to moderation without any representation.
The OSB is an attempt to address some of these systemic shortcomings and inconsistencies, though it does not correct the fundamental lack of transparency. The OSB’s decisions are final and its Charter mandates Facebook to implement the decisions “promptly.” But it’s already clear that no matter how the OSB rules in these cases, many people will disagree vocally.
It’s not too late, however, for the OSB to strengthen the legitimacy of Facebook’s content moderation by explicitly adopting a number of principles to guide its decision-making.
Issue clear and reasoned decisions

A frequent frustration of Facebook users whose accounts or content gets blocked is the lack of any meaningful or reasoned decision, apart from a generic reference to a specific category such as “hate speech” or “nudity.” Accordingly, the OSB should strive to issue clear, understandable, and reasoned decisions that allow users—and Facebook moderators—to understand why and how the OSB ruled the way it did.
The user should enjoy the benefit of the doubt
Context is crucial for the proper understanding of speech. This is even more so on a global platform whose users, spread across almost every country in the world, speak hundreds of different languages and bring very different cultural norms and senses of humor, so the same content may be interpreted very differently by people across the world. Moreover, the so-called memefication of social media often makes it difficult to discern whether content is sincere or sarcastic. Given that free speech is a fundamental value, the OSB should establish that users enjoy the benefit of the doubt when Facebook determines whether content violates its standards. In the words of the Norwegian Supreme Court: “In the interest of freedom of expression, no one should risk criminal liability through attributing to a statement a viewpoint which has not been expressly made and which cannot with a reasonably high degree of certainty be inferred from the context.”
Beware of hate speech “scope creep”
The thorniest category of content to be navigated by the OSB is undoubtedly that of hate speech, which is notoriously difficult to define—not least on a global platform where users, and their governments, have very different ideas about what types of speech are considered “hateful.” Article 20 of the ICCPR provides a good starting point, and several expert reports and soft-law documents give useful guidance on how to interpret Article 20. However, the lack of legally binding case law applying these principles in concrete cases leaves much uncertainty. Moreover, Facebook’s definition of hate speech is much broader than Article 20. Given how hate-speech bans are frequently abused to protect powerful institutions and leaders rather than vulnerable groups and minorities, the OSB should provide as robust and objective a definition as possible, limiting hate-speech bans to the most egregious content.
The OSB should look to South Africa, which has grappled with how to protect both free speech and dignity in a country with a recent past of oppressive white supremacy that relied on both systematic discrimination and censorship to maintain apartheid. In one case, the South African Supreme Court of Appeal held that:
“[A] court should not be hasty to conclude that because language is angry in tone or conveys hostility, it is therefore to be characterized as hate speech. Even if it has overtones of race and ethnicity.”
In another case, the Supreme Court of Appeal noted:
“The fact that a particular expression may be hurtful of people’s feelings, or wounding, distasteful, politically inflammatory or downright offensive, does not exclude it from protection.”

Crucially, the South African courts’ narrow interpretation of hate speech has been justified with reference to the apartheid past, when the government used hate-speech bans to prohibit criticism of apartheid.
Beware of censorship envy
Shortly after Baron Cohen celebrated Facebook and Twitter’s decision to purge Holocaust denial in October 2020, Pakistani Prime Minister Imran Khan wrote a letter to Zuckerberg demanding “a similar ban on Islamophobia and hate against Islam for Facebook.” This was a textbook example of “censorship envy,” whereby banning speech seen as particularly odious by one group encourages other groups to demand similar restrictions on content they view as beyond the pale. Unless the OSB establishes clear and convincing principles—such as an operational definition of harm—for content removal, censorship envy might degenerate into a race to the bottom as various groups clamor to have “problematic” content banned.
These principles should not be seen as exhaustive, but they would all contribute to ensuring that the OSB plays a positive role in establishing new models of content moderation capable of providing transparency and legitimacy and of ensuring the survival of free speech as a fundamental value in the digital age.
Jacob Mchangama is the executive director of Justitia, a Copenhagen-based think tank focusing on human rights and the rule of law, and the author of the forthcoming book Free Speech: A History From Socrates to Social Media.