Suppressing Extremist Speech: There’s an Algorithm for That!
A GOP operative and an academic are teaming up to go after the Islamic State online, but Silicon Valley isn’t buying it.
As the Islamic State has made extraordinary use of social media to recruit followers and inspire attacks, U.S. government officials have pleaded with Silicon Valley to do something — anything — to crack down on the group’s online presence. With halting steps, Facebook and Twitter have shut down extremists’ profiles and removed offensive content.
But the Islamic State still has a huge online reach. After a gunman attacked a gay club in Orlando last weekend and killed 49, U.S. officials said he had been radicalized at least in part through the internet.
Now, a Dartmouth computer science professor who pioneered a technique for detecting child pornography online and a Republican political operative have teamed up to create an algorithm they claim could help reduce the presence of violent extremist propaganda online.
In short, an algorithm created by Hany Farid of Dartmouth analyzes an image, video, or audio file and creates what is known as a unique “hash” for that file. By creating a database of known extremist content, social media and tech companies can run content through Farid’s algorithm, and if that content matches a hash identified as “extremist,” the company could automatically flag it or remove it altogether.
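The hash-and-match workflow described above can be sketched in a few lines. To be clear, Farid’s actual algorithm is proprietary and far more robust; the toy “average hash” below, along with the `KNOWN_HASHES` database and the `flag_if_known` helper, are illustrative assumptions meant only to show the matching logic a platform would run.

```python
def average_hash(pixels):
    """Toy perceptual hash of a grayscale image (list of rows of
    pixel values): each bit is 1 if the pixel is above the mean.
    Unlike a cryptographic hash, small changes to the image move
    only a few bits, so near-duplicates still match."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical shared database of hashes already labeled
# "extremist" by a clearinghouse such as the proposed
# National Office for Reporting Extremism.
KNOWN_HASHES = set()

def flag_if_known(pixels, threshold=4):
    """Flag content whose hash is within a few bits of a known one."""
    h = average_hash(pixels)
    return any(hamming(h, k) <= threshold for k in KNOWN_HASHES)
```

In this scheme a platform never needs to see the original banned file, only its hash, and minor edits (recompression, small crops) that defeat exact-match filters still land within the Hamming-distance threshold.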
“This is the technology solution that everyone has been looking for,” Mark Wallace, the CEO of the Counter Extremism Project, which describes itself as “a not-for-profit, non-partisan, international policy organization formed to combat the growing threat from extremist ideology,” told reporters on a conference call Friday. The algorithm, he argues, will help prevent radicalization and block calls to violence. Farid and Wallace’s initiative envisions the creation of the National Office for Reporting Extremism, which would identify extremist content, label it as such, and make its hash available to companies.
On Friday, Lisa Monaco, President Obama’s top counterterrorism adviser, welcomed that idea as a way to “enable companies to address terrorist activity on their platforms and better respond to the threat posed by terrorists’ activities online.”
“ISIL has spread its brutal and hateful message using one of America’s greatest innovations to call on people here in the United States and around the world to attack innocent civilians. The propaganda, videos, and postings of terrorists are pervasive and too easily accessible,” Monaco said, using an alternate name for the Islamic State.
The algorithm launched by Wallace and Farid, who is a senior adviser at the Counter Extremism Project, may indeed offer companies a way to crack down on violent, hateful speech. But if the project is ever to get off the ground, it will have to overcome serious concerns that using algorithms to police speech could end up every bit as Orwellian as it sounds.
According to an executive at a Silicon Valley social media company who spoke on condition of anonymity to describe industry discussions, several major tech companies convened for a conference call on April 29 that was organized by Facebook to discuss Wallace and Farid’s proposal. During that call, the companies questioned the effectiveness of the concept and whether its organizers could come up with a sufficiently neutral definition of what constitutes “extremist” content.
When Wallace and Farid unveiled their technology Friday, they said they had held extensive discussions with Silicon Valley, but their announcement notably did not contain any commitments by companies to use the algorithm.
Farid argues that what he sees as foot-dragging by Silicon Valley is to be expected. His algorithm to detect extremist content builds on his work detecting child pornography, and Farid says Silicon Valley was similarly reluctant to integrate that algorithm into their services. “We’ve been here before, and it’s a hundred percent predictable,” he said.
To crack child porn, he focused on the worst of the worst — flagging the most flagrant content. He suggests doing the same with terrorist invective.
For example, in terror plots in Europe and the United States, the perpetrators are nearly always consumers of the sermons of American-born preacher Anwar al-Awlaki, who was killed in Yemen in 2011 in a U.S. drone strike. Islamic State execution videos of captured American journalists and the Jordanian pilot who was burned alive in a cage are other easy examples of extremist content that Wallace calls “the worst of the worst.”
But Silicon Valley views Wallace with intense suspicion. Wallace has criticized Silicon Valley for its unwillingness or inability to go after the Islamic State online, and appears to have made some enemies among the companies he is now trying to court. “They can’t accuse the tech companies of treason and then expect to get invited over for dinner the next day,” the social media executive said.
Wallace is not known as a figure in the technology world, but he is a player in Washington, in no small part through his work with United Against Nuclear Iran, an advocacy group of which he is the CEO. UANI emerged during the debate over the nuclear agreement with Iran as a strident, prominent voice against rapprochement with Tehran. A longtime GOP operative, Wallace worked as an adviser for the George W. Bush and John McCain presidential campaigns. He led the team charged with preparing Sarah Palin for her 2008 vice-presidential debate with Joe Biden. Wallace also served the younger Bush as a senior official at the United States mission to the United Nations.
Even if Silicon Valley views Wallace and Farid with skepticism, companies there are trying to address a problem whose neglect, terror experts argue, borders on corporate negligence. Michael Smith, the COO of the security consultancy Kronos Advisory, who has worked with vigilante groups in trying to reduce the Islamic State’s social media presence, describes Twitter as a “cyber-sanctuary” for the group.
Since the middle of 2015, Twitter wrote in a February blog post, the company has suspended more than 125,000 accounts “for threatening or promoting terrorist acts, primarily related to ISIS.”
But this isn’t enough, Smith argues. Islamic State leaders get kicked off the platform and then re-emerge bragging about how easy it is for them to return to Twitter, he says. Smith says that Silicon Valley companies have to be more proactive about “disrupting” the social media presence of terror groups. While carrying out the attack on the Pulse nightclub, alleged gunman Omar Mateen reportedly searched for himself on Facebook to gauge reaction.
But embracing an automated policing system like the one proposed by Wallace and Farid also raises serious free-speech concerns. Wallace and Farid say their algorithm allows social media and tech companies to automatically enforce their terms of service agreements in order to remove content that either depicts or incites violence.
Free-speech activists view such takedowns, carried out under the purview of terms of service agreements, with deep suspicion. Social media companies, they argue, exert intense control over what content individuals view and consume, yet face few requirements to disclose what they remove.
If defining what constitutes a terrorist is a famously tricky problem, nailing down what counts as terrorist rhetoric is doubly hard. Farid himself acknowledges that his algorithm could be turned toward nefarious ends. “You could also envision repressive regimes using this to stifle speech,” he said.
Wallace views the definition problem with contempt. During the conference call, he described the old cliché that “one man’s terrorist is another man’s freedom fighter” as “insipid.”
That attitude apparently hasn’t won over Silicon Valley.