Asia’s Authoritarians Are Big Fans of Regulating Facebook
Not everyone fighting “fake news” is doing it for the right reasons.
A few days after the Cambridge Analytica scandal broke, a Facebook representative was already being grilled on the company’s trustworthiness before a Singaporean select committee. The company’s vice president of public policy for the Asia-Pacific, Simon Milner, confirmed that, if the courts ordered it, the company would remove “falsehoods” that had been “defined as illegal.”
But just who gets to decide what constitutes an “illegal falsehood” is a worrying question. Social media, with its power and reach, has reshaped Southeast Asian politics — but it has also handed a new tool to would-be authoritarians as they look to define truth and falsehood online.
Singapore’s select committee was convened earlier this year to examine “deliberate online falsehoods” — a variant of the “fake news” phrase favored by U.S. President Donald Trump. The People’s Action Party-dominated government has said it has been looking into the issue since the country’s highest court ruled in January of last year that the government could not invoke an anti-harassment law to deal with alleged falsehoods — in that case, an inventor’s allegations that the Ministry of Defense had infringed on his patent — because the law offered legal recourse only to individual people, not “non-natural persons” such as corporations and government institutions. Singapore’s law minister said in June 2017 that laws tackling “fake news” would be introduced this year.
Southeast Asian governments have been struggling with independent online media and blogs for years, but social media has proved even more of a challenge for governments seeking to exert control over public discourse and media content. For governments used to wielding influence over the mainstream media, social media platforms like Facebook and Twitter can be a thorn in their side. Yet as platforms like Facebook get slammed for allowing the spread of fake news, governments are getting more confident in going after online critics — whether they’re rumor-mongers or truth-tellers.
Facebook is far from blameless in this situation. In a recent piece, journalist and academic James Crabtree pointed to several credible allegations of negligence against Facebook, including accusations of fueling sectarian violence in Sri Lanka.
Facebook’s ubiquity in the region makes it a convenient target. A 2016 BuzzFeed report documented how an individual in Myanmar can buy a cheap smartphone, then pay someone to get them set up on the social network. For such users, Facebook is the internet.
As of January 2017, Statista reports, there were 14 million Facebook users in Myanmar — in a country of about 52.9 million people, where the majority remain offline and the internet only arrived, for most, over the past decade. The platform has allowed much higher levels of free speech in the country but has also become a conduit for hate speech, disinformation, and misinformation. Earlier this month, United Nations investigators pointed to social media’s “determining role” in the country, describing it as having “substantively contributed to the level of acrimony and dissension and conflict … within the public.”
Indonesia, too, has had its share of issues with disinformation spreading on social media. In August 2017, the authorities launched an investigation into Saracen, a fake news generator that exploited long-standing racial and religious divides in the country. In March 2018, police arrested 14 individuals who were allegedly part of the Muslim Cyber Army, a clandestine fake news network using WhatsApp, Twitter, and Facebook. The group employed bots and semi-automated accounts, as well as a practice of “bounty hunting,” in which the personal details of people deemed to have criticized Islam were circulated to mark them for attack.
Such problems, exacerbated by tech companies’ struggle to interpret content and enforce their own community standards and policies, have given governments grounds to argue for more control.
When I appeared before Singapore’s Select Committee on Deliberate Online Falsehoods on March 27, members of the committee pointed to example after example of offensive, inflammatory content designed to incite social tension in countries such as Myanmar, Germany, the United Kingdom, and the United States. In some cases, social networks like Facebook, Twitter, and YouTube had decided not to remove the posts; in others, removal had been delayed by lapses in the review process.
In such cases of platforms being either unwilling or unable (at least for some time) to remove abhorrent content, the committee members asked, shouldn’t societies have the tools to deal with such posts according to their own values and norms? When such disinformation is deliberately being spread to provoke and disrupt social harmony, shouldn’t something be done about it?
Such questions aren’t unreasonable in and of themselves; the worry comes in when governments with an authoritarian streak decide that the answer is legislation or state regulation. The Chinese option — banning social media services not in the government’s control altogether — is not open to smaller countries that depend on Western-oriented services instead of China’s billion-person internal market and separate online sphere. Instead, new laws are set to make life difficult for social media users and companies alike, by giving governments the option to intervene in what’s published online or dangle a sword of Damocles over critics’ heads.
Such powers fail to address another significant issue: when the disinformation or misinformation might come from the top. Amid the morass of rumors and false images swirling in Myanmar, for example, are efforts to skew the narrative that appear to have government backing. On one government-organized press trip to Rakhine State, journalists were handed photos meant to depict Muslims — as represented by men in white prayer caps and women with tablecloths on their heads — burning down houses. It was later discovered that the individuals in the photos were not Muslims, but displaced Hindus; the photos had been faked.
But it doesn’t even need to be so overt or deliberate. Following my own participation in the five-hour session before the select committee, a summary published on the Singapore Parliament’s official website misrepresented statements that I’d made. For example, the summary stated that, in arguing for a Freedom of Information Act in Singapore, I was “of the view that transparency should be valued, even if such legislation could compromise national security and waste resources.” I’d in fact said that a Freedom of Information Act “doesn’t impede the government’s ability to keep things confidential for legitimate national security reasons.” Other points within the summary were also inaccurate.
I filed a complaint with Parliament and the select committee on the evening of March 29. On April 9, I received an email from the committee saying that although they “believe the summary to be accurate,” amendments would be made to it. A press release issued the same day revealed that four other individuals who had given evidence before the committee had requested amendments to their summaries — an example of how even the accuracy of official records can be contested from time to time.
Singapore’s select committee has not yet arrived at any official conclusion, but there are hints that some form of legislation is likely on its way. If this turns out to be the case, it will join a series of statutes that affect freedom of expression. Bloggers have been charged with contempt of court, sued for defamation, and jailed for “wounding religious feelings.” In the last case, teen blogger Amos Yee was eventually granted asylum in the United States after an immigration court found that his prosecution had amounted to political persecution.
Vietnam’s government, on the other hand, is pressuring tech companies such as Google to set up shop in the country, so the government can have a more direct line. It has also announced a 10,000-strong cyber unit to combat “wrong” views online as part of an effort to exert more control over Facebook, which has allowed the Vietnamese access to information outside of the state’s influence.
An anti-fake news bill was proposed in Malaysia on March 26, with harsh penalties of up to six years’ imprisonment and up to 500,000 ringgit (about $129,000) in fines. The bill, which would apply to people of any nationality even if the offense took place outside of Malaysia, defines fake news as “news, information, data and reports which is or are wholly or partly false” — a definition slammed by human rights group Suara Rakyat Malaysia for being “unduly broad.”
The bill was passed by the Malaysian Parliament on April 2, just a week after its first reading, despite the outcry from civil society groups and opposition legislators. “This anti-fake news law is particularly obnoxious as it coerces Malaysians into self-censoring themselves. Having worked in the mainstream media, I know how insidious self-censorship can be. You can’t see it. It is invisible. It happens within the confines of our own minds. That’s why it’s dangerous. And very effective,” wrote Steven Gan, co-founder and editor-in-chief of Malaysiakini, an independent news portal that has already been a target of oppression.
Parliament was dissolved shortly after the bill’s passage; Malaysians will go to the polls on May 9. The incumbent prime minister, Najib Razak, is still under fire for the 1Malaysia Development Berhad scandal, which has prompted money-laundering investigations in at least six countries. His government has already clamped down on media reportage of the scandal; the anti-fake news bill is seen as one more move to shut down criticism.
Tech companies like Google, Facebook, and Twitter have limited leverage over governments determined to pass laws or introduce new regulations — it wouldn’t necessarily make sense to threaten to pull out of every country seeking to exert more control over social media. But this doesn’t mean they’re helpless. The first thing they can do is simply to clean house themselves: be clearer about their own community standards and policies, and more transparent about their processes. Efforts can also be made to remove financial incentives for publishing and spreading misinformation or disinformation.
Disinformation campaigns and data manipulation are genuine threats to democracies young and old, but the way in which certain governments have decided to approach the issue poses potentially dire consequences for civil society and civil liberties. If these threats are not adequately addressed, social media services like Facebook, which have until now given so many people a freer platform to access information and express themselves, might end up becoming scapegoats for more authoritarian control.
Kirsten Han is a Singaporean freelance journalist and activist, covering politics, human rights, and social justice. Her work has been published in the Guardian, Asia Times, Southeast Asia Globe, and the Diplomat, among others. Twitter: @kixes