Tweet With Caution
The government is watching.
Michele Grasso is a Sicilian drug dealer — a fugitive who had evaded arrest since 2010. He had seemingly mastered the art of flying under police radar, until he made a simple mistake: He posted pictures of himself on his Facebook page, grinning in front of a wax model of U.S. President Barack Obama at London’s famous Madame Tussauds museum. He also helpfully included the name and a photograph of the pizzeria where he was working, leading him to be snagged by British police this year. Grasso is now back in Italy, in prison.
Social media is profoundly affecting the work of security and law enforcement, even more than the invention of the telephone over a century ago. As more of us transfer details of our lives — our whereabouts, interests, political views, friends, and so on — online, it inevitably involves and interests the agencies tasked with keeping us safe. Facebook has been used to try to hire hit men, by pedophiles to groom victims, to violate protective orders, and to steal identities. Al Qaeda’s Somali affiliate, al-Shabab, runs a Twitter account, while pirates operating in the Gulf of Aden use blogs, Twitter, and Facebook to plan and coordinate their attacks. In late 2010, British police reportedly received more than 7,000 calls from the public concerned with crimes linked to Facebook.
British law enforcement agencies are also developing more powerful methods to patrol this area of cyberspace. Some police forces are believed to be testing various types of automated social media collection and analysis tools to help criminal investigations and gauge the "temperature" — the background levels of resentment and grievance — of communities they work with. London’s Metropolitan Police now has a social media hub to spot early signs of riots or demonstrations during this summer’s Olympics. The United States is getting in on the game as well: This year, the FBI was seeking companies to help it build up social media monitoring apps for much the same purpose. Dozens of crimes — most more complex than Grasso’s indiscretions — have already been solved by accessing social networks.
But law enforcement’s involvement in the communications revolution carries risks as well as rewards. A number of ethical, legal, and operational challenges have yet to be resolved, and they threaten to derail the whole affair. In "#Intelligence," a recent paper for the British think tank Demos, we describe two conditions that must be met before law enforcement extends its work into the world of social media. Unfortunately, the laws and norms to satisfy these conditions remain embryonic even as monitoring technologies grow more pervasive.
Take condition one: There should be a broadly proportionate relationship between the degree of intrusion into someone’s private life, and the necessity and authorization for that intrusion. But both Britain’s Regulation of Investigatory Powers Act and the U.S. Patriot Act are more than a decade old — signed into law before the existence of Facebook, YouTube, or Twitter. These pieces of legislation recognize citizens’ fundamental but qualified right to privacy, and they define the occasions when and process by which this right can be transgressed.
The problem is that the meaning of "private," in the world of social media, is less obvious than it was a decade ago — more a series of shifting shades of gray. This leads to obvious operational problems. Is entering a Facebook group covertly the same sort of intrusion as infiltrating an offline group? Is collecting tweets similar to listening to and recording a person shouting in public? What are the mechanisms by which these steps can be accomplished, and when do they represent a violation of citizens’ privacy? In truth, no one really knows.
A recent U.S. Supreme Court decision (United States v. Jones) carries significant ramifications on these questions. The court decided that the use of GPS tracking devices without a warrant breached the Fourth Amendment, which bars unreasonable searches and seizures. A car on a public highway is not necessarily private — anyone can see it and potentially follow it — but the court determined that Jones would reasonably have expected his movements to be private and not subject to government monitoring. This "expectation of privacy" test was a complex enough question in Jones’s case, and it is only more contentious when it comes to social media, where public expectations of privacy are varied, confused, and constantly in flux.
The Jones decision is also important because of the potential of mass surveillance that technology now allows. As Justice Sonia Sotomayor noted in her concurring opinion on the Jones ruling, "[B]ecause GPS monitoring is cheap in comparison to conventional surveillance techniques and, by design, proceeds surreptitiously, it evades the ordinary checks that constrain abusive law enforcement practices: ‘limited police resources and community hostility.’" But GPS monitoring is nothing compared with what law enforcement can learn from social media — in fact, it is labor intensive next to what an individual police officer can learn about someone by spending a few minutes online. In Britain, a fairly senior police officer is required to authorize directed surveillance. None is required for Googling suspects.
Then there’s condition two: Intelligence collected must make a decisive difference to public safety. This condition is also under strain. The standards of evidence required when making decisions relating to people’s liberty are, rightly, very high. Turning social media data into something a police or intelligence agency would feel comfortable acting upon is not easy. Data collected from profile pages, chat rooms, or blogs — especially when using automated programs — is prone to numerous and serious weaknesses. Social media data collection is barely past its infancy and lacks the rigor of other statistical disciplines.
If you were reading Twitter during last August’s riots in England, for instance, you might be forgiven for thinking a tiger was on the loose, that the London Eye (a famous landmark) was set on fire, and that the British Army had been deployed on the streets. Each of these (untrue) rumors spread like wildfire, and without a rigorous way to establish authenticity, it was hard to tell the factual from the fantastical.
More sinister traps await too. Rumors, lies, distortions, and intentional misinformation are ubiquitous on social media, and sifting the wheat from the chaff is extremely difficult. A leaked cache of emails allegedly belonging to Syrian President Bashar al-Assad indicated that an Assad supporter posted pro-regime commentary under an assumed Facebook identity that was mistaken as genuine and given international coverage by CNN. True, misinformation has always been a problem — think German Funkspiel in World War II — but the spread of technical know-how and free software makes it far more likely, and again, very difficult to spot.
The United States and Britain must start addressing these issues now, before their capabilities to monitor social media get too far out ahead of the norms that limit their use. A useful starting point is to draw a distinction between open-source and closed-source intelligence. Open-source, non-intrusive work accesses information already in the public domain; it is not used to identify individuals guilty of a crime or as a means of criminal investigation, and it should not puncture the privacy wishes of any user. This would include activity such as openly crowdsourcing information through Twitter or Facebook to gain situational awareness in the event of public disorder, or gauging general levels of community tension by analyzing online conversations. Such work ought to be conducted on a basis similar to that of nonstate actors, such as universities and commercial companies.
Closed-source work is quite different: It uses data in ways that are not covered by the reasonable expectation or implied consent of the user. In other words, when someone is investigated in the course of a criminal investigation and her privacy settings are broken — reading the person’s private Facebook messages, say — she has lost "data control." This breach of her privacy is termed intrusive surveillance.
The use of such techniques must be publicly argued and understood, taking into account other public goods, especially privacy, and informed by the principles of reasonable cause, necessity, authorization, and oversight. It is likely that this debate will initially be driven by a series of case-law examples, but ultimately the political authorities will have to respond by creating a legislative framework, just as they did with wiretap devices.
This will also require new categories of authorization because on social media it is very easy to identify individuals unintentionally. For example, creating profiles of individuals based on publicly available information about them on social media, even if easy, should probably not proceed unless at least some reasonable cause is demonstrated. True, it would probably require an exceptionally low-level authorization, resulting in a bit of extra paperwork, but that would be a price worth paying for public confidence.
At the same time, social media intelligence needs to become a new class of intelligence, something we call "SOCMINT." Just like human and signals intelligence, it needs its own experts, methods, and evolving techniques. This means new partnerships among academia, government, and technical experts; new training for analysts and law enforcement agencies on the norms of online behavior; and the development of techniques to sift through large data sets and spot misinformation.
The result will not be perfect. As technology continues to evolve, so will the ethical and operational challenges. It has ever been thus, but the speed of change is quickening. This all matters because, in the future, intelligence and law enforcement agencies will have to use social media to discharge their duty of public safety. But unless citizens stand up now and establish checks and balances to ensure what those agencies do is proportionate, necessary, effective, and limited with due oversight, it won’t wash. The way ahead is to remain firm to our fundamental principles: Governments will sometimes need to invade citizens’ space to ensure their safety, but when, why, and how that is done ultimately rests on our consent. Social media may be new, but the old rules are built to last.