To Protect Democracy, Protect the Internet
The voluntary efforts of tech companies aren’t enough. The U.S. government needs to regulate social media platforms and make election interference illegal.
Social media is now part of the foundational infrastructure of democracy. It has become the main way politicians connect with voters and a major source of news. By connecting people worldwide, social media has also made it easier than ever for authoritarian regimes to reach voters beyond their borders and manipulate democracies. Social media companies have made great strides in blocking authoritarian propaganda, but ultimately their voluntary efforts are not enough.
In order to ensure the future of democratic governance worldwide, the U.S. government must take proactive steps to regulate social media and prevent foreign election interference efforts, which currently take advantage of weak U.S. laws. As long as the world’s elections are increasingly mediated by U.S. social media companies, Washington must recognize that it has a unique responsibility to digitally protect them.
Social media has made it easier than ever for foreign powers to influence elections. Even ineffective foreign interference can undermine the legitimacy of an electoral outcome. In the 2016 U.S. presidential election, Russia ordered an influence campaign including both targeted leaks of political material and propaganda delivered via social media.
Russian actors hacked into U.S. political leaders’ email servers and leaked their content via the websites WikiLeaks and DCLeaks. Russia also launched a massive propaganda campaign, in which 80 or so contractors at the St. Petersburg-based Internet Research Agency (IRA) created fake social media accounts purporting to be Americans in order to spread divisive content. This campaign, which continued after Election Day, reached at least 126 million Americans on Facebook, 20 million on Instagram, and 1.4 million on Twitter.
Despite these numbers, there is no compelling evidence that Russian efforts to influence the 2016 election determined the outcome of the race. That is because the campaign had an even more subversive aim. As the U.S. intelligence community concluded in its 2017 report “Assessing Russian Activities and Intentions in Recent US Elections,” Moscow’s objective was “to undermine public faith in the US democratic process.” Rather than focusing on the immensely challenging tasks of changing votes in swing states or suppressing turnout, Russian operatives sought to stoke existing racial resentment and partisan polarization. Their ongoing efforts are likely one of many factors contributing to Americans’ increasing dissatisfaction with their democracy.
Russian leaders seem to think that they benefit from press coverage that promulgates the idea they can change the outcome of foreign elections. The IRA resorted to meta-trolling during the 2018 midterms, claiming it was running thousands of troll accounts that probably didn’t exist. Shortly thereafter, one of Vladimir Putin’s key aides declared that Russia had succeeded in altering the “consciousness” of the West by proving democracy was a sham.
Authoritarian regimes continue to meddle in elections around the world. Russia sought to influence key votes including the 2016 Brexit referendum and the 2017 French presidential election. While Russian efforts were far from decisive in either case, the fact that Russia intervened still had a negative effect on their legitimacy, especially in the U.K., where Russian interference is one of many factors that sowed doubt about the legitimacy of Brexit. This pattern—interference failing to change electoral outcomes but succeeding in undermining the legitimacy of those outcomes—is almost certain to continue. Russian propagandists continue to devise new ways to reach U.S. audiences, and researchers at Graphika have found that they are active across more than 300 platforms in at least seven languages.
Social media companies have made great strides in preventing foreign political interference since 2016. Facebook’s reports indicate that each month it removes hundreds of millions of pieces of content, almost all of it proactively, along with numerous “coordinated inauthentic behavior” networks. Twitter’s artificial intelligence-enabled censors ban millions of bots per week and have deleted billions of tweets. Reddit’s censorship AI has achieved a “proactive detection” rate above 99 percent.
Companies are also preparing for the 2020 election cycle and have begun making policy changes. Twitter banned all paid political ads and demonstrated its willingness to censor heads of state. YouTube changed its algorithm to promote authoritative news sources and removed thousands of Chinese propaganda accounts. Reddit revamped its political ad policy and created an ad tracker that allows anyone to monitor advertisements on the platform. Facebook launched a similar ad library, allowed users to opt out of political ads, and is reportedly considering an election ad ban. Google, which saw only about $4,700 of IRA ad spending in 2016, banned targeted political ads.
Their efforts should be applauded. Most people agree that the private sector must take the lead in responding to the threat posed by information manipulation. However, protecting the legitimacy of elections cannot simply be an act of corporate charity. Foreign manipulation, which exploits loopholes in U.S. law, must also be made illegal.
Currently, foreign propagandists can easily register for U.S.-based social media services by purporting to be Americans, exploiting the constitutionally protected anonymity and privacy afforded to users of U.S. social networks. Meanwhile, the social media companies that unwittingly distribute their propaganda face no penalty worse than a bad hearing on Capitol Hill.
There is precedent for penalizing distributors that specifically invite users to engage in illegal behavior, which could apply in cases such as microtargeting political advertisements at minorities for the purposes of voter suppression. But broadly speaking, while a Russian agent who covertly spreads political content on social media is violating U.S. law, the social media company that connects him to a voter is shielded from liability. Likewise, a Chinese agent who spreads content to manipulate a Taiwanese election on Facebook—as was the case in Taiwan’s recent presidential election and a major mayoral race—is immune from U.S. legal sanction, despite operating on a U.S. platform. There is currently no legal framework that could be used to prosecute such agents.
Because the issue of international political interference mixes foreign and domestic public policy—and given that major tech companies have even invited more regulation—public-private sector cooperation is the only solution. The United States needs a new legal framework that allows the government to play a more central role in protecting elections from foreign interference.
First, social media companies must become more transparent. Social media is unique because it allows targeted advertisements that cannot be easily seen by the general public. In order to prevent targeted advertisements from being abused, Congress should create a statutory requirement compelling social media companies to publish data listing the identities of individuals and organizations that purchase advertisements regarding political or divisive social issues, the content of those advertisements, and the characteristics used to target those advertisements to specific audiences.
Armed with this data, journalists and researchers will be able to verify whether social media companies’ efforts to prevent foreign influence are succeeding. This would redirect the “spotlight of pitiless publicity” that the Foreign Agents Registration Act (FARA) of 1938 already strives to shine on foreign propagandists to their social media activities. (FARA alone is likely inadequate to address these issues.) As the companies that have launched ad libraries have recognized, increased transparency will benefit them in the long run because it allows other actors to help identify and assess ongoing propaganda campaigns.
Second, social media companies should be required by law to remove accounts that are covertly controlled by foreign states and used to distribute political content. These accounts strive to cover their digital footprints, which is why government intelligence resources are sometimes needed to identify them. There is precedent for restrictions on foreign political speech: U.S. law already restricts the right of foreign actors to speak about U.S. elections through donations. The new law should be narrowly tailored to avoid turning the government or social media companies into “speech police.” The target here is not the content, nor who may have retweeted it, but simply the covertly controlled foreign accounts used to spread it.
The government often cannot share all of the information it has on covert activities by foreign states. It is unreasonable to expect private companies to assume liability for voluntarily deleting accounts that the government merely asserts are controlled by foreign agents. In such cases, the government must be able to compel companies to act; the government should also assume legal liability if its information proves inaccurate.
Third, social media companies should be required by law to label accounts overtly controlled by authoritarian states, including Russia, China, and Iran. YouTube and Facebook already label state-controlled channels and pages. Similarly, China’s “Wolf Warrior” diplomats deserve a “red checkmark” on all platforms, warning users that they are foreign propagandists. Red checkmarks would also make it easier for companies to decide which content to restrict by tweaking their content distribution algorithms. Those algorithms do not enjoy First Amendment protections, although given their complexity and secrecy, modifying them is best left to private companies.
Companies are already taking the lead on preventing abuse of their platforms. The government must now do more to identify foreign manipulation, pass that information on to private companies for action, and in some cases order, rather than invite, the removal of foreign content. Empowering the government to order such actions will allow it to address this threat more aggressively.
U.S. social media firms have become important global communications tools thanks to exceptional U.S. constitutional protections and laws that enable free speech, privacy, and anonymity online. Those rights must be protected by prudent measures to ensure they are not exploited. It is possible to disarm most international election interference without overturning cherished civil rights or mandating widespread censorship, and there has rarely been a more important time to establish a new legal framework for social media.
This year, which began with a pandemic and will end with a U.S. presidential election, is providing unprecedented opportunities for disinformation campaigns designed to undermine democracy and citizens’ faith in its institutions. This is not only a domestic problem: A Chinese agent spreading propaganda intended to undermine Taiwanese democracy on Facebook is an American problem both because the activity is occurring on a U.S. platform and because protecting democracy abroad—and at home—remains a vital American interest.
To protect democracy, Washington needs a new approach: a Digital Responsibility to Protect that treats social media as part of election security and provides a legal framework for technology companies to constructively engage the U.S. government. Failing to act will continue to enable adversarial exploitation of voters and hamstring government officials in their quest to bolster faith in democratic institutions.
T.S. Allen is an intelligence officer in the U.S. Army with experience in cyber- and information operations. The views presented here are those of the writer and do not necessarily represent the views of the U.S. Defense Department or its components. Twitter: @TS_Allen