Twitter’s Week of Reckoning

The Westgate mall siege by al-Shabab has once again raised red flags about terrorist use of social media, thanks to the al Qaeda affiliate’s brazen use of Twitter to promote its attack and threaten Kenyan civilians during and after the bloody massacre.

Twitter stepped up its game in response to al-Shabab’s live-tweeting in support of the attack, showing more resolve against terrorist abuse of its service than ever before.

But its efforts still fell well short of an ideal response. After a brief crackdown during the crisis itself, the social media service has returned to studied indifference about the content its users post, once again allowing al-Shabab to threaten further violence against Kenyan civilians. That’s a mistake.

Twitter has suspended at least four accounts used by al-Shabab since Sept. 1, but none of the suspensions was initiated by the company. Each followed at least one abuse report, and in some cases, Twitter did not act until multiple reports had been filed — despite headlines blaring about al-Shabab’s use of its service.

Twitter made no visible effort to stop the single user behind these accounts, despite the existence of tools that should allow it to do so. While complete interdiction is likely impossible, Twitter has a suite of tools that it uses very aggressively to target spammers and that could be applied, perhaps with modification, to a designated terrorist organization in the midst of a lethal attack. Twitter has devoted substantial resources to stemming the flow of sex spam on its network. It should devote a fraction of that energy to its growing terrorism problem.

Since the attack ended, Twitter has scaled back even its minimal efforts, failing to respond to multiple abuse reports and once again allowing the Somali terrorist group to issue threats of violence that it clearly intends to fulfill. All of this suggests that Twitter is more concerned about bad publicity than creating a safe, coherent online environment. Once the headlines stop, the attention stops.

I sent the company a list of questions about its broad policy on terrorist content. A spokesman named Nu Wexler responded, "We don’t comment on individual Twitter accounts, for security and privacy reasons." I wrote back and pointed out that none of my questions pertained to an individual Twitter account. Wexler never got back to me.

This lazy, tone-deaf approach might fly for a private company, but Twitter recently announced that it will hold a public offering of stock. Future shareholders may not be as indifferent to the problems created by the company’s lax policing of violent content.

Twitter’s policies on violent and terrorist content are light-years behind those of industry peers Facebook and YouTube. While those social media services are far from perfect, both offer reporting tools that make it easy to flag terrorist content, and both take a proactive approach to the problem, aggressively seeking out offenders and crafting terms of service that provide significant latitude for dealing with terrorist abuses.

In contrast, Twitter’s terms of service contain no language discouraging violent terrorist users. Designated terrorist groups, including self-professed and legally designated al Qaeda affiliates in Somalia, Iraq, the Maghreb, and Syria, are free to establish "official" accounts (often announced to reporters in email blasts from known spokesmen) that accrue substantial followings over long periods, all while chronicling their terrorist activities.

Designated terrorist groups still use Facebook and YouTube, but they don’t maintain official accounts over long periods of time because Facebook and YouTube don’t allow it. Twitter makes it easy, so that’s where you find the official terrorists.

Twitter may imagine itself a global public square and a champion of free speech, but it’s a business operating under American laws, which prohibit knowingly providing services to designated terrorist organizations.

The gap between Twitter’s practices and industry standards is large enough to raise the specter of negligence — a word that should strike fear into the hearts of future shareholders worried about liability.

If Westgate survivors and victims’ families aren’t already calling lawyers, the victims of the next attack just might. And the harm isn’t limited to the pain and suffering caused by al-Shabab’s gloating threats. If would-be extremists follow al-Shabab, Twitter’s recommendation system will actively direct them to online terrorist forums, where they can learn to make bombs and meet al Qaeda recruiters.

Rather than wait for an ugly lawsuit, Twitter needs to get its act together and catch up to the minimum standards implemented by other major social media services:  

  • Implement a simplified reporting system.
  • Respond faster to reports of abuse.
  • Add terrorism to the list of prohibited activities on its terms of service.
  • Take action against account holders who use Twitter to facilitate criminal and terrorist activities.
  • Implement methods to monitor and restrict repeat offenders who have demonstrated clear links to designated terrorist organizations (an illustrative sketch of one such method follows this list).
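
To make that last recommendation concrete, here is a minimal sketch, purely illustrative and not based on any actual Twitter system, of how a service might score a new account against previously suspended ones to catch ban evasion. Every name, weight, and threshold below is a hypothetical assumption.

```python
# Illustrative sketch only. This is one way a service could flag
# likely ban evasion by comparing a new account's signals against
# accounts it has already suspended. All names, weights, and
# thresholds are hypothetical; nothing here describes Twitter's
# actual systems.
from dataclasses import dataclass, field


@dataclass
class Account:
    handle: str
    bio_links: set[str] = field(default_factory=set)   # URLs in the profile bio
    followers: set[str] = field(default_factory=set)   # follower user IDs


def overlap(a: set, b: set) -> float:
    """Jaccard similarity between two signal sets (0.0 to 1.0)."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)


def repeat_offender_score(new: Account, suspended: list[Account]) -> float:
    """Return the strongest match between a new account and any
    previously suspended account. Shared bio links are weighted
    more heavily than follower overlap; both weights are made up."""
    best = 0.0
    for old in suspended:
        score = 0.6 * overlap(new.bio_links, old.bio_links) \
              + 0.4 * overlap(new.followers, old.followers)
        best = max(best, score)
    return best


# Hypothetical threshold: accounts scoring above it would be queued
# for human review rather than suspended automatically.
FLAG_THRESHOLD = 0.5

if __name__ == "__main__":
    banned = Account("press_office_1",
                     bio_links={"forum.example.net"},
                     followers={"u1", "u2", "u3"})
    newcomer = Account("press_office_2",
                       bio_links={"forum.example.net"},
                       followers={"u2", "u3", "u4"})
    score = repeat_offender_score(newcomer, [banned])
    print(score, score > FLAG_THRESHOLD)  # 0.8 True
```

The point of the sketch is the design choice, not the arithmetic: the same profile-similarity machinery Twitter already aims at spammers could, fed different inputs, flag a suspended terrorist group’s next account for review.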

Twitter may not care about doing the right thing by victims of terrorism, but I’m pretty sure it cares about the feelings of its future shareholders.